Schwarz and multilevel methods for quadratic spline collocation
Energy Technology Data Exchange (ETDEWEB)
Christara, C.C. [Univ. of Toronto, Ontario (Canada); Smith, B. [Univ. of California, Los Angeles, CA (United States)
1994-12-31
Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic partial differential equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for splines of certain degrees. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.
B-spline Collocation with Domain Decomposition Method
International Nuclear Information System (INIS)
Hidayat, M I P; Parman, S; Ariwahjoedi, B
2013-01-01
A global B-spline collocation method has previously been developed and successfully implemented by the present authors for solving elliptic partial differential equations in arbitrarily complex domains. However, the global B-spline approximation, which reduces to a Bézier approximation of degree p with C⁰ continuity, requires B-spline bases of high order to achieve high accuracy. The need for high-order bases in the global method becomes more pronounced in domains of large dimension, and the increased number of collocation points may also lead to ill-conditioning. In this study, overlapping domain decomposition with the multiplicative Schwarz algorithm is combined with the global method. Our objective is two-fold: to improve the accuracy through the combination technique, and to investigate its influence on the B-spline basis orders needed for a given accuracy. It is shown that the combined method produces higher accuracy with B-spline bases of much lower order than required by the original method. Hence, the approximation stability of the B-spline collocation method is also increased.
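The multiplicative Schwarz iteration used above can be sketched on a model problem. The following is a minimal illustration with a plain finite-difference discretization of a one-dimensional Poisson problem rather than the authors' B-spline collocation; the subdomain split, overlap, grid size, and sweep count are arbitrary choices for the demonstration.

```python
import numpy as np

def solve_poisson_dirichlet(f, a, b, ua, ub, n):
    """Solve -u'' = f on (a, b), u(a) = ua, u(b) = ub, by central differences on n nodes."""
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    A = (np.diag(2.0 * np.ones(n - 2))
         + np.diag(-np.ones(n - 3), 1)
         + np.diag(-np.ones(n - 3), -1)) / h**2
    rhs = f(x[1:-1])
    rhs[0] += ua / h**2          # fold the boundary values into the right-hand side
    rhs[-1] += ub / h**2
    u = np.empty(n)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return u

# Model problem: -u'' = pi^2 sin(pi x) on (0, 1), exact solution u = sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)
xg = np.linspace(0.0, 1.0, 101)
u = np.zeros(101)                 # initial guess
i1 = np.arange(0, 61)             # subdomain 1: [0, 0.6]
i2 = np.arange(40, 101)           # subdomain 2: [0.4, 1]; overlap is [0.4, 0.6]

for sweep in range(20):
    # Solve on subdomain 1 using the current iterate as its right boundary value ...
    u[i1] = solve_poisson_dirichlet(f, xg[i1[0]], xg[i1[-1]], u[i1[0]], u[i1[-1]], i1.size)
    # ... then subdomain 2 immediately uses the updated values (the multiplicative variant).
    u[i2] = solve_poisson_dirichlet(f, xg[i2[0]], xg[i2[-1]], u[i2[0]], u[i2[-1]], i2.size)

err = np.max(np.abs(u - np.sin(np.pi * xg)))
print(err)  # iteration error is negligible after 20 sweeps; what remains is the O(h^2) discretization error
```

In one dimension the converged Schwarz iterate coincides with the single-domain finite-difference solution, which is why the final error is purely the discretization error.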
Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations
Energy Technology Data Exchange (ETDEWEB)
Kim, Sang Dong [KyungPook National Univ., Taegu (Korea, Republic of)
1996-12-31
In this talk we discuss finite element and finite difference techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a₁u_x + a₂u_y + a₀u in Ω (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).
An efficient approach to numerical study of the coupled-BBM system with B-spline collocation method
Directory of Open Access Journals (Sweden)
khalid ali
2016-11-01
In the present paper, a numerical method is proposed for the solution of a coupled-BBM system with appropriate initial and boundary conditions, using a collocation method with cubic trigonometric B-splines on uniform mesh points. The method is shown to be unconditionally stable using the von Neumann technique. To test accuracy, the error norms L₂ and L∞ are computed. Furthermore, the interactions of two and three solitary waves are used to discuss the behavior of the solitary waves after the interaction. The nonlinear term is handled by linearization. These results show that the technique introduced here is easy to apply.
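The von Neumann technique cited above can be illustrated on a model equation. The sketch below checks the amplification factor of the Crank-Nicolson scheme for the linear heat equation, not the coupled-BBM system itself; the mesh ratios tested are arbitrary.

```python
import numpy as np

# Von Neumann (Fourier) stability check for Crank-Nicolson applied to u_t = u_xx.
# Substituting u_j^n = g^n * exp(i*j*theta) into the scheme gives the amplification
# factor g(theta); the scheme is stable iff |g(theta)| <= 1 for all theta.
def amplification_factor(theta, r):
    """r = dt / dx^2 is the mesh ratio."""
    a = 2.0 * r * np.sin(theta / 2.0) ** 2
    return (1.0 - a) / (1.0 + a)

theta = np.linspace(-np.pi, np.pi, 1001)
for r in (0.1, 1.0, 10.0, 1000.0):          # arbitrarily large mesh ratios
    g = amplification_factor(theta, r)
    assert np.all(np.abs(g) <= 1.0)
print("|g| <= 1 for all tested mesh ratios: unconditionally stable")
```

Since a >= 0 for every theta and r, |(1 - a)/(1 + a)| <= 1 always holds, which is exactly what "unconditionally stable" means in this analysis.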
A fractional spline collocation-Galerkin method for the time-fractional diffusion equation
Directory of Open Access Journals (Sweden)
Pezza L.
2018-03-01
The aim of this paper is to numerically solve a diffusion problem having a time derivative of fractional order. To this end we propose a collocation-Galerkin method that uses fractional splines as approximating functions. The main advantage is that the derivatives of integer and fractional order of the fractional splines can be expressed in closed form, involving just the generalized finite difference operator. This allows us to construct an accurate and efficient numerical method. Several numerical tests showing the effectiveness of the proposed method are presented.
A fourth order spline collocation approach for a business cycle model
Sayfy, A.; Khoury, S.; Ibdah, H.
2013-10-01
A collocation approach based on fourth-order cubic B-splines is presented for the numerical solution of a Kaleckian business cycle model formulated as a nonlinear delay differential equation. The equation is approximated and the nonlinearity is handled by an iterative scheme arising from Newton's method. It is shown that the model exhibits a conditionally stable dynamical cycle. The fourth-order rate of convergence of the scheme is verified numerically for different special cases.
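As a sketch of what spline collocation looks like in practice, the following solves a simple linear two-point boundary value problem by cubic B-spline collocation at Greville points using SciPy's `BSpline`. It does not reproduce the paper's delay model or its Newton iteration; the test equation and knot count are arbitrary assumptions for the illustration.

```python
import numpy as np
from scipy.interpolate import BSpline

# Solve u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0 (exact solution: u = sin(pi x)).
k = 3                                   # cubic splines
n_interior = 14
t = np.concatenate(([0.0] * (k + 1),
                    np.linspace(0, 1, n_interior + 2)[1:-1],
                    [1.0] * (k + 1)))   # clamped knot vector
n_basis = len(t) - k - 1

def basis(i, deriv=0):
    c = np.zeros(n_basis); c[i] = 1.0
    b = BSpline(t, c, k)
    return b.derivative(deriv) if deriv else b

# Greville abscissae; the interior ones serve as collocation points,
# and the two boundary conditions supply the remaining equations.
greville = np.array([t[i + 1:i + k + 1].mean() for i in range(n_basis)])
xc = greville[1:-1]

A = np.zeros((n_basis, n_basis))
rhs = np.zeros(n_basis)
A[0, :] = [basis(j)(0.0) for j in range(n_basis)]         # u(0) = 0
A[-1, :] = [basis(j)(1.0) for j in range(n_basis)]        # u(1) = 0
for row, x in enumerate(xc, start=1):
    A[row, :] = [basis(j, 2)(x) for j in range(n_basis)]  # u''(x) = f(x)
    rhs[row] = -np.pi**2 * np.sin(np.pi * x)

coef = np.linalg.solve(A, rhs)
u = BSpline(t, coef, k)
xs = np.linspace(0, 1, 201)
err = np.max(np.abs(u(xs) - np.sin(np.pi * xs)))
print(err)  # small; collocation at Greville points converges under mesh refinement
```

A nonlinear problem, as in the paper, would replace the single linear solve with a Newton iteration on the same collocation residuals.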
Spline Collocation Method for Nonlinear Multi-Term Fractional Differential Equation
Choe, Hui-Chol; Kang, Yong-Suk
2013-01-01
We study an approximation method for solving nonlinear multi-term fractional differential equations with initial or boundary conditions. First, we transform the nonlinear multi-term fractional differential equations with initial and boundary conditions into nonlinear fractional integral equations and consider the relations between them. We present a spline collocation method and prove the existence, uniqueness, and convergence of the approximate solution, as well as an error estimate.
Spline smoothing of histograms by linear programming
Bennett, J. O.
1972-01-01
An algorithm is presented for obtaining a function approximating the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is first made from the data. Euclidean-space approximations to the graph of the histogram, using central B-splines as basis elements, are then obtained by linear programming. The approximating function has unit area and is nonnegative.
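A minimal version of the idea, fitting a nonnegative, unit-area B-spline to histogram heights by linear programming, can be sketched with `scipy.optimize.linprog`. This is an illustrative reconstruction, not the original central B-spline algorithm; the sample, bin count, and knot placement are assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import linprog

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=1000)
heights, edges = np.histogram(sample, bins=20, range=(-4, 4), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])

# Quadratic B-spline basis on [-4, 4]
k = 2
t = np.concatenate(([-4.0] * (k + 1), np.linspace(-4, 4, 9)[1:-1], [4.0] * (k + 1)))
n = len(t) - k - 1

def basis_at(x):
    return np.column_stack([BSpline(t, np.eye(n)[i], k)(x) for i in range(n)])

B = basis_at(mids)                           # basis evaluated at bin centres
w = (t[k + 1:] - t[:n]) / (k + 1)            # integral of each basis function

# LP variables: the n spline coefficients, then the maximum deviation tau.
cost = np.zeros(n + 1); cost[-1] = 1.0       # minimize tau
A_ub = np.block([[B, -np.ones((len(mids), 1))],
                 [-B, -np.ones((len(mids), 1))]])   # |B c - heights| <= tau
b_ub = np.concatenate([heights, -heights])
A_eq = np.concatenate([w, [0.0]])[None, :]          # unit area: w . c = 1
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + 1))         # c >= 0 keeps the spline nonnegative
coef = res.x[:n]
density = BSpline(t, coef, k)
print(res.status, res.x[-1])   # status 0 means success; tau is the fitted sup-norm error
```

Nonnegative coefficients guarantee a nonnegative spline because B-spline basis functions are themselves nonnegative; the equality row pins the total area to one.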
Piecewise linear regression splines with hyperbolic covariates
International Nuclear Information System (INIS)
Cologne, John B.; Sposto, Richard
1992-09-01
Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response models of Griffiths and Miller and of Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
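A small sketch of the hyperbolic-covariate idea: replacing the broken-stick term |x - c| with sqrt((x - c)^2 + gamma^2) yields a smoothly joined two-phase model that can be fit by nonlinear least squares. The data, parameter values, and starting values below are synthetic assumptions, not the menarche data from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-phase linear response smoothed by a hyperbola. gamma controls the curvature
# at the join (gamma -> 0 recovers the sharp bend); the slopes of the two limiting
# linear segments are b1 - b2 and b1 + b2, and c is the join point.
def bent_hyperbola(x, b0, b1, b2, c, gamma):
    return b0 + b1 * (x - c) + b2 * np.sqrt((x - c) ** 2 + gamma ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
# Synthetic data: slope 1 before x = 4, slope 3 after, smooth transition
y_true = bent_hyperbola(x, 5.0, 2.0, 1.0, 4.0, 0.5)
y = y_true + rng.normal(0, 0.1, x.size)

popt, _ = curve_fit(bent_hyperbola, x, y, p0=[4.0, 1.5, 0.5, 5.0, 1.0])
print(popt)   # estimates near (5, 2, 1, 4, +/-0.5); gamma's sign is not identified
```

Because gamma enters only through gamma^2, its fitted sign is arbitrary; only its magnitude (the degree of curvature at the join) is estimated.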
Directory of Open Access Journals (Sweden)
Imtiaz Wasim
2018-01-01
In this study, we introduce a new numerical technique for solving the nonlinear generalized Burgers-Fisher and Burgers-Huxley equations using a hybrid B-spline collocation method. This technique is based on a usual finite difference scheme and the Crank-Nicolson method, which are used to discretize the time derivative and spatial derivatives, respectively. Furthermore, a hybrid B-spline function is utilized as the interpolating function in the spatial dimension. The scheme is shown to be unconditionally stable using the von Neumann (Fourier) method. Several test problems are considered to check the accuracy of the proposed scheme. The numerical results are in good agreement with known exact solutions and with existing schemes in the literature.
Ophaug, Vegard; Gerlach, Christian
2017-11-01
This work is an investigation of three methods for regional geoid computation: Stokes's formula, least-squares collocation (LSC), and spherical radial base functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223-232, 1995. doi: 10.1007/BF00806734) that the equivalence of Stokes's formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has been introduced also in the SK. Ultimately, we present numerical examples comparing Stokes's formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All three methods agree at the millimeter level.
Spline linear regression used for evaluating financial assets
Directory of Open Access Journals (Sweden)
Liviu GEAMBAŞU
2010-12-01
One of the most important preoccupations of financial market participants was, and still is, determining more precisely the trend of financial asset prices. Many scientific papers have been written and many mathematical and statistical models developed to better determine this trend. While simple linear models were until recently widely used because they are easy to apply, the financial crisis that hit the world economy starting in 2008 highlighted the necessity of adapting mathematical models to the variation of the economy. A model that is simple to use yet adapted to the realities of economic life is spline linear regression. This type of regression preserves the continuity of the regression function but splits the studied data into intervals with homogeneous characteristics. The characteristics of each interval are highlighted, as is the evolution of the market over all the intervals, resulting in reduced standard errors. The first objective of the article is the theoretical presentation of spline linear regression, with reference to national and international scientific papers on the subject. The second objective is to apply the theoretical model to data from the Bucharest Stock Exchange.
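The continuity-preserving piecewise fit described above can be expressed with truncated linear basis functions and ordinary least squares. The following sketch uses synthetic data with assumed knot locations, not actual stock-exchange data.

```python
import numpy as np

def linear_spline_design(x, knots):
    """Design matrix for a continuous piecewise-linear regression:
    intercept, global slope, and one slope change (x - k)_+ per knot."""
    cols = [np.ones_like(x), x]
    cols += [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 12, 300))
# Synthetic "price" series whose trend changes at x = 4 and x = 8
y = 100 + 2 * x - 5 * np.maximum(x - 4, 0) + 4 * np.maximum(x - 8, 0)
y += rng.normal(0, 1.0, x.size)

X = linear_spline_design(x, knots=[4.0, 8.0])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # approximately [100, 2, -5, 4]
```

Because each (x - k)_+ term is continuous, the fitted curve is continuous at the knots while its slope changes there, which is exactly the homogeneous-interval behavior the abstract describes.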
Solutions of First-Order Volterra Type Linear Integrodifferential Equations by Collocation Method
Directory of Open Access Journals (Sweden)
Olumuyiwa A. Agbolade
2017-01-01
The numerical solutions of linear integrodifferential equations of Volterra type are considered. A power series is used as the basis polynomial to approximate the solution of the problem. Furthermore, standard and Chebyshev-Gauss-Lobatto collocation points were, respectively, chosen to collocate the approximate solution. Numerical experiments are performed on some sample problems already solved by the homotopy analysis method and finite difference methods. The absolute errors from the present method are compared with those from the aforementioned methods. The absolute errors obtained are very low, establishing convergence and computational efficiency.
A modified linear algebraic approach to electron scattering using cubic splines
International Nuclear Information System (INIS)
Kinney, R.A.
1986-01-01
A modified linear algebraic approach to the solution of the Schrödinger equation for low-energy electron scattering is presented. The method uses a piecewise cubic-spline approximation of the wavefunction. Results in the static-potential and static-exchange approximations for e⁻ + H s-wave scattering are compared with unmodified linear algebraic and variational linear algebraic methods. (author)
Directory of Open Access Journals (Sweden)
Salih Yalcinbas
2016-01-01
In this paper, a new collocation method based on the Fibonacci polynomials is introduced to solve high-order linear Volterra integro-differential equations under the given conditions. Numerical examples are included to demonstrate the applicability and validity of the proposed method, and comparisons are made with existing results. In addition, an error estimation based on the residual functions is presented for this method. The approximate solutions are improved by using this error estimation.
B-spline solution of a singularly perturbed boundary value problem arising in biology
International Nuclear Information System (INIS)
Lin Bin; Li Kaitai; Cheng Zhengxing
2009-01-01
We use B-spline functions to develop a numerical method for solving a singularly perturbed boundary value problem arising in biology. The B-spline collocation method leads to a tridiagonal linear system. The accuracy of the proposed method is demonstrated on test problems. The numerical results are found to be in good agreement with the exact solution.
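The tridiagonal structure mentioned above makes such systems cheap to solve. As a hedged illustration, the following solves a model singularly perturbed problem with a standard finite-difference discretization (not the paper's B-spline collocation) and `scipy.linalg.solve_banded`; the problem and mesh are arbitrary choices.

```python
import numpy as np
from scipy.linalg import solve_banded

# Singularly perturbed BVP: -eps*u'' + u = 0, u(0) = 0, u(1) = 1.
# Exact solution: u(x) = sinh(x/sqrt(eps)) / sinh(1/sqrt(eps)), boundary layer at x = 1.
eps = 1e-2
N = 400
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]

n = N - 1                               # interior unknowns
ab = np.zeros((3, n))                   # banded storage expected by solve_banded
ab[0, 1:] = -eps / h**2                 # superdiagonal
ab[1, :] = 2 * eps / h**2 + 1.0         # main diagonal
ab[2, :-1] = -eps / h**2                # subdiagonal
rhs = np.zeros(n)
rhs[-1] += eps / h**2 * 1.0             # boundary value u(1) = 1 enters the last equation

u = np.empty(N + 1)
u[0], u[-1] = 0.0, 1.0
u[1:-1] = solve_banded((1, 1), ab, rhs)

exact = np.sinh(x / np.sqrt(eps)) / np.sinh(1.0 / np.sqrt(eps))
print(np.max(np.abs(u - exact)))        # small once the layer of width sqrt(eps) is resolved
```

The banded solve runs in O(N) time and storage, which is the practical payoff of the tridiagonal structure the abstract highlights.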
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
Anderson, Emma L; Tilling, Kate; Fraser, Abigail; Macdonald-Wallis, Corrie; Emmett, Pauline; Cribb, Victoria; Northstone, Kate; Lawlor, Debbie A; Howe, Laura D
2013-07-01
Methods for the assessment of changes in dietary intake across the life course are underdeveloped. We demonstrate the use of linear-spline multilevel models to summarize energy-intake trajectories through childhood and adolescence and their application as exposures, outcomes, or mediators. The Avon Longitudinal Study of Parents and Children assessed children's dietary intake several times between ages 3 and 13 years, using both food frequency questionnaires (FFQs) and 3-day food diaries. We estimated energy-intake trajectories for 12,032 children using linear-spline multilevel models. We then assessed the associations of these trajectories with maternal body mass index (BMI), and later offspring BMI, and also their role in mediating the relation between maternal and offspring BMIs. Models estimated average and individual energy intake at 3 years, and linear changes in energy intake from age 3 to 7 years and from age 7 to 13 years. By including the exposure (in this example, maternal BMI) in the multilevel model, we were able to estimate the average energy-intake trajectories across levels of the exposure. When energy-intake trajectories are the exposure for a later outcome (in this case offspring BMI) or a mediator (between maternal and offspring BMI), results were similar, whether using a two-step process (exporting individual-level intercepts and slopes from multilevel models and using these in linear regression/path analysis), or a single-step process (multivariate multilevel models). Trajectories were similar when FFQs and food diaries were assessed either separately, or when combined into one model. Linear-spline multilevel models provide useful summaries of trajectories of dietary intake that can be used as an exposure, outcome, or mediator.
About the Modeling of Radio Source Time Series as Linear Splines
Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald
2016-12-01
Many of the time series of radio sources observed in geodetic VLBI show variations, caused mainly by changes in source structure. However, until now it has been common practice to consider source positions as invariant, or to exclude known misbehaving sources from the datum conditions. This may lead to a degradation of the estimated parameters, as unmodeled apparent source position variations can propagate to the other parameters through the least squares adjustment. In this paper we will introduce an automated algorithm capable of parameterizing the radio source coordinates as linear splines.
Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio Jd; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A
2016-10-01
Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. © The Author(s) 2013.
Spline methods for conservation equations
International Nuclear Information System (INIS)
Bottcher, C.; Strayer, M.R.
1991-01-01
We consider the numerical solution of physical theories, in particular hydrodynamics, which can be formulated as systems of conservation laws. To this end we briefly describe the Basis Spline and collocation methods, paying particular attention to representation theory, which provides discrete analogues of the continuum conservation and dispersion relations, and hence a rigorous understanding of errors and instabilities. On this foundation we propose an algorithm for hydrodynamic problems in which most linear and nonlinear instabilities are brought under control. Numerical examples are presented from one-dimensional relativistic hydrodynamics. 9 refs., 10 figs
A spline-based non-linear diffeomorphism for multimodal prostate registration.
Mitra, Jhimli; Kato, Zoltan; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Sidibé, Désiré; Ghose, Soumya; Vilanova, Joan C; Comet, Josep; Meriaudeau, Fabrice
2012-08-01
This paper presents a novel method for non-rigid registration of transrectal ultrasound and magnetic resonance prostate images based on a non-linear regularized framework of point correspondences obtained from a statistical measure of shape-contexts. The segmented prostate shapes are represented by shape-contexts and the Bhattacharyya distance between the shape representations is used to find the point correspondences between the 2D fixed and moving images. The registration method involves parametric estimation of the non-linear diffeomorphism between the multimodal images and has its basis in solving a set of non-linear equations of thin-plate splines. The solution is obtained as the least-squares solution of an over-determined system of non-linear equations constructed by integrating a set of non-linear functions over the fixed and moving images. However, this may not result in clinically acceptable transformations of the anatomical targets. Therefore, the regularized bending energy of the thin-plate splines along with the localization error of established correspondences should be included in the system of equations. The registration accuracies of the proposed method are evaluated in 20 pairs of prostate mid-gland ultrasound and magnetic resonance images. The results obtained in terms of Dice similarity coefficient show an average of 0.980±0.004, average 95% Hausdorff distance of 1.63±0.48 mm and mean target registration and target localization errors of 1.60±1.17 mm and 0.15±0.12 mm respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
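The thin-plate spline machinery underlying the registration can be sketched in a few lines: a radial kernel, an affine part, and a linear system built from landmark correspondences. This follows Bookstein's classical TPS formulation, not the paper's regularized over-determined variant, and the test correspondences are synthetic.

```python
import numpy as np

def tps_kernel(r2):
    """U(r) = r^2 * log(r^2) (equivalent to r^2 log r up to a constant factor), U(0) = 0."""
    out = np.zeros_like(r2)
    mask = r2 > 0
    out[mask] = r2[mask] * np.log(r2[mask])
    return out

def fit_tps(src, dst):
    """Fit a 2D thin-plate spline mapping src landmarks onto dst landmarks.
    Returns (W, A): kernel weights and the affine part."""
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))        # [[K, P], [P^T, 0]] enforces the side conditions
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]

def apply_tps(W, A, src, pts):
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = tps_kernel(d2)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ W + P @ A

# Correspondences: a unit square (plus centre) warped by a known affine map.
# A TPS reproduces affine maps exactly, so the kernel weights should vanish.
src = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.5, 0.5]])
dst = src @ np.array([[1.2, 0.1], [-0.1, 0.9]]) + np.array([0.3, -0.2])
W, A = fit_tps(src, dst)
mapped = apply_tps(W, A, src, src)
print(np.max(np.abs(mapped - dst)))   # essentially zero: exact interpolation at the landmarks
```

The bottom block of the system (P^T W = 0) removes affine components from the kernel weights, so the bending energy the paper regularizes is carried entirely by W.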
Farrell, Patricio
2015-04-30
© 2015 John Wiley & Sons, Ltd. Symmetric collocation methods with RBFs allow approximation of the solution of a partial differential equation, even if the right-hand side is only known at scattered data points, without needing to generate a grid. However, the benefit of a guaranteed symmetric positive definite block system comes at a high computational cost. This cost can be alleviated somewhat by considering compactly supported RBFs and a multiscale technique. But the condition number and sparsity will still deteriorate with the number of data points. Therefore, we study certain block diagonal and triangular preconditioners. We investigate ideal preconditioners and determine the spectra of the preconditioned matrices before proposing more practical preconditioners based on a restricted additive Schwarz method with coarse grid correction. Numerical results verify the effectiveness of the preconditioners.
Directory of Open Access Journals (Sweden)
E. D. Resende
2007-09-01
The freezing process is considered a propagation problem and mathematically classified as an "initial value problem." The mathematical formulation involves a complex situation of heat transfer with simultaneous phase change and abrupt variation in thermal properties. The objective of the present work is to solve the non-linear heat transfer equation for food freezing processes using orthogonal collocation on finite elements. This technique had not yet been applied to freezing processes and represents an alternative numerical approach in this area. The results obtained confirmed the good capability of the numerical method, which allows simulation of the freezing process in approximately one minute of computer time, qualifying it for use in a mathematical optimisation procedure. The influence of the latent heat released during crystallisation was identified by the significant increase in heat load in the early stages of the freezing process.
SPLINE, Spline Interpolation Function
International Nuclear Information System (INIS)
Allouard, Y.
1977-01-01
1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to the (2Q-1) order. The program consists of the following two subprograms: ASPLERQ. Transport of relations method for the spline functions of interpolation. SPLQ. Spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10
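For comparison, modern libraries expose the same functionality directly. A minimal cubic spline interpolation with SciPy is shown below; cubic corresponds to Q = 2 in the program's notation, assuming that reading, and the sample function and grid are arbitrary.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Interpolate given points by a smooth cubic spline; the result has two
# continuous derivatives (C^2) across the interior nodes.
x = np.linspace(0, 2 * np.pi, 9)
y = np.sin(x)
spl = CubicSpline(x, y)   # 'not-a-knot' end conditions by default; others via bc_type

xs = np.linspace(0, 2 * np.pi, 200)
err = np.max(np.abs(spl(xs) - np.sin(xs)))
print(err)   # small; decreases as O(h^4) under refinement for smooth data
```

The spline reproduces the data exactly at the nodes and stays smooth between them, which is the behavior the catalog entry describes.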
Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's
Cai, Wei; Wang, Jian-Zhong
1993-01-01
We have designed a cubic spline wavelet decomposition for the Sobolev space H²₀(I), where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This DWT maps discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for initial value boundary problems of nonlinear PDE's. We then test the efficiency of the DWT and apply the collocation method to solve linear and nonlinear PDE's.
Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F
2016-08-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Revier, Robert Lee; Henriksen, Birgit
2006-01-01
Very little pedagogy has been made available to teachers interested in teaching collocations in the foreign and/or second language classroom. This paper aims to contribute to and promote efforts in developing L2-based pedagogy for the teaching of phraseology. To this end, it presents pedagogical...
Comments on the comparison of global methods for linear two-point boundary value problems
International Nuclear Information System (INIS)
de Boor, C.; Swartz, B.
1977-01-01
A more careful count of the operations involved in solving the linear system associated with collocation of a two-point boundary value problem using rough splines reverses results recently reported by others in this journal. In addition, it is observed that the use of the technique of "condensation of parameters" can decrease the computer storage required. Furthermore, the use of a particular highly localized basis can also reduce the setup time when the mesh is irregular. Finally, operation counts are roughly estimated for the solution of certain linear systems associated with two competing collocation methods: namely, collocation with smooth splines, and collocation of the equivalent first-order system with continuous piecewise polynomials.
The basis spline method and associated techniques
International Nuclear Information System (INIS)
Bottcher, C.; Strayer, M.R.
1989-01-01
We outline the Basis Spline and Collocation methods for the solution of Partial Differential Equations. Particular attention is paid to the theory of errors, and the handling of non-self-adjoint problems which are generated by the collocation method. We discuss applications to Poisson's equation, the Dirac equation, and the calculation of bound and continuum states of atomic and nuclear systems. 12 refs., 6 figs
International Nuclear Information System (INIS)
Gao Wen-Wu; Wang Zhi-Gang
2014-01-01
Based on the multiquadric trigonometric B-spline quasi-interpolant, this paper proposes a meshless scheme for some partial differential equations whose solutions are periodic with respect to the spatial variable. The scheme takes into account the periodicity of the analytic solution by using derivatives of a periodic quasi-interpolant (the multiquadric trigonometric B-spline quasi-interpolant) to approximate the spatial derivatives of the equations. Thus, it overcomes the difficulties of previous schemes based on quasi-interpolation (requiring additional boundary conditions and yielding unwanted high-order discontinuous points at the boundaries of the spatial domain). Moreover, the scheme also overcomes a difficulty of meshless collocation methods (the notoriously ill-conditioned linear systems of equations that arise for large numbers of collocation points). The numerical examples presented at the end of the paper show that the scheme provides excellent approximations to the analytic solutions. (general)
Hilbertian kernels and spline functions
Atteia, M
1992-01-01
In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines regarded as piecewise functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.
Measuring receptive collocational competence across proficiency levels
Directory of Open Access Journals (Sweden)
Déogratias Nizonkiza
2015-12-01
Full Text Available The present study investigates (i) English as a Foreign Language (EFL) learners' receptive collocational knowledge growth in relation to their linguistic proficiency level; (ii) how much receptive collocational knowledge is acquired as proficiency develops; and (iii) the extent to which receptive knowledge of collocations of EFL learners varies across word frequency bands. A proficiency measure and a collocation test were administered to English majors at the University of Burundi. Results of the study suggest that receptive collocational competence develops alongside EFL learners' linguistic proficiency, which lends empirical support to Gyllstad (2007, 2009) and Author (2011), among others, who reported similar findings. Furthermore, EFL learners' collocation growth seems to be quantifiable, with both linguistic proficiency level and word frequency playing a crucial role. While more gains, in terms of collocations that EFL learners could potentially add as a result of a change in proficiency, are found at lower levels of proficiency, collocations of words from more frequent word bands seem to be mastered first, and more gains are found at more frequent word bands. These results confirm earlier findings on the non-linear nature of vocabulary growth (cf. Meara 1996) and the fundamental role played by frequency in word knowledge for vocabulary in general (Nation 1983, 1990, Nation and Beglar 2007), which are extended here to collocation knowledge.
Directory of Open Access Journals (Sweden)
Scott W. Keith
2014-09-01
Full Text Available This paper details the design, evaluation, and implementation of a framework for detecting and modeling nonlinearity between a binary outcome and a continuous predictor variable adjusted for covariates in complex samples. The framework provides familiar-looking parameterizations of output in terms of linear slope coefficients and odds ratios. Estimation methods focus on maximum likelihood optimization of piecewise linear free-knot splines formulated as B-splines. Correctly specifying the optimal number and positions of the knots improves the model, but is marked by computational intensity and numerical instability. Our inference methods utilize both parametric and nonparametric bootstrapping. Unlike other nonlinear modeling packages, this framework is designed to incorporate multistage survey sample designs common to nationally representative datasets. We illustrate the approach and evaluate its performance in specifying the correct number of knots under various conditions with an example using body mass index (BMI; kg/m2) and the complex multi-stage sampling design from the Third National Health and Nutrition Examination Survey to simulate binary mortality outcome data having realistic nonlinear sample-weighted risk associations with BMI. BMI and mortality data provide a particularly apt example and area of application since BMI is commonly recorded in large health surveys with complex designs, often categorized for modeling, and nonlinearly related to mortality. When complex sample design considerations were ignored, our method was generally similar to or more accurate than two common model selection procedures, Schwarz's Bayesian Information Criterion (BIC) and Akaike's Information Criterion (AIC), in terms of selecting the correct number of knots. Our approach provided accurate knot selections when complex sampling weights were incorporated, while AIC and BIC were not effective under these conditions.
International Nuclear Information System (INIS)
Schmidt, R.
1976-12-01
This report contains a short introduction to spline functions as well as a complete description of the spline procedures presently available in the HMI-library. These include polynomial splines (using either B-spline or one-sided basis representations) and natural splines, as well as their application to interpolation, quasi-interpolation, L2-, and Tchebycheff approximation. Special procedures are included for the case of cubic splines. Complete test examples with input and output are provided for each of the procedures. (orig.) [de]
Proxemic Mobile Collocated Interactions
DEFF Research Database (Denmark)
Porcheron, Martin; Lucero, Andrés; Quigley, Aaron
2016-01-01
Recent research on mobile collocated interactions has been looking at situations in which collocated users engage in collaborative activities using their mobile devices. However, existing practices fail to fully account for the culturally-dependent spatial relationships between people... and their digital devices (i.e. the proxemic relationships). Building on the ideas of proxemic interactions, this workshop is motivated by the concept of 'proxemic mobile collocated interactions', to harness new or existing technologies to create engaging and interactionally relevant experiences. Such approaches... in exploring proxemics and mobile collocated interactions.
Splines and variational methods
Prenter, P M
2008-01-01
One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions.
International Nuclear Information System (INIS)
Aleksandrov, L.; Drenska, M.; Karadzhov, D.
1986-01-01
A generalization of the core spline method is given for the solution of the general bound state problem for a system of M linear differential equations with coefficients depending on the spectral parameter. The recursion scheme for construction of basic splines is described. The wave functions are expressed as linear combinations of basic splines, which are approximate partial solutions of the system. The spectral parameter (the eigenvalue) is determined from the condition for existence of a nontrivial solution of an (MxM) linear algebraic system at the last collocation point. The nontrivial solutions of this system determine (M - 1) coefficients of the linear spans expressing the wave functions. The last unknown coefficient is determined from a boundary (or normalization) condition for the system. The computational aspects of the method are discussed, in particular its concrete algorithmic realization used in the RODSOL program. The numerical solution of the Dirac system for the bound states of a hydrogen atom is given as an example.
I. Kuba; J. Zavacky; J. Mihalik
1995-01-01
This paper presents the use of B-spline functions in various digital signal processing applications. The theory of one-dimensional B-spline interpolation is briefly reviewed, followed by its extension to two dimensions. After presenting one- and two-dimensional spline interpolation, algorithms for image interpolation and resolution increase are proposed. Finally, experimental results of computer simulations are presented.
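The resolution-increase step described above can be sketched with scipy's spline-based resampling; this uses cubic B-spline interpolation as a stand-in for the paper's own algorithms, and the test image is an assumption made for the example.

```python
import numpy as np
from scipy import ndimage

# Upsample a small smooth "image" by a factor of 4 using cubic
# B-spline interpolation (ndimage.zoom fits the B-spline coefficients
# internally before resampling).
y, x = np.mgrid[0:32, 0:32]
img = np.sin(x / 5.0) * np.cos(y / 7.0)

big = ndimage.zoom(img, 4, order=3)       # order=3: cubic B-spline
```

For smooth data the cubic spline keeps the upsampled values close to the underlying function, with only mild overshoot near steep features.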
Rectangular spectral collocation
Driscoll, Tobin A.; Hale, Nicholas
2015-01-01
Boundary conditions in spectral collocation methods are typically imposed by removing some rows of the discretized differential operator and replacing them with others that enforce the required conditions at the boundary. A new approach based upon
Mobile Collocated Interactions
DEFF Research Database (Denmark)
Lucero, Andrés; Clawson, James; Lyons, Kent
2015-01-01
Mobile devices such as smartphones and tablets were originally conceived and have traditionally been utilized for individual use. Research on mobile collocated interactions has been looking at situations in which collocated users engage in collaborative activities using their mobile devices, thus going from personal/individual toward shared/multiuser experiences and interactions. However, computers are getting smaller, more powerful, and closer to our bodies. Therefore, mobile collocated interactions research, which originally looked at smartphones and tablets, will inevitably include ever-smaller computers, ones that can be worn on our wrists or other parts of the body. The focus of this workshop is to bring together a community of researchers, designers and practitioners to explore the potential of extending mobile collocated interactions to the use of wearable devices.
Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y
2008-02-18
The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structural and pharmacological diverse drugs, originating from a structured knowledge-based database. Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that the use of combinations of a linear with a non-linear modeling technique results in some improved properties compared to the individual linear and non-linear models and that, when the use of such a combination is appropriate, combinations using MARS as non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches.
Directory of Open Access Journals (Sweden)
Hannu Olkkonen
2013-01-01
Full Text Available In this work we introduce a new family of splines, termed gamma splines, for continuous signal approximation and multiresolution analysis. The gamma splines are generated by repeated convolution of the exponential with itself. We study the properties of the discrete gamma splines in signal interpolation and approximation. We prove that the gamma splines obey the two-scale equation based on the polyphase decomposition, and introduce the shift-invariant gamma spline wavelet transform for tree-structured subscale analysis of asymmetric signal waveforms and for systems with asymmetric impulse response. In particular, we consider applications in biomedical signal analysis (EEG, ECG, and EMG). Finally, we discuss the suitability of gamma spline signal processing in an embedded VLSI environment.
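The convolution construction mentioned above can be checked numerically: the n-fold convolution of the one-sided exponential with itself is the gamma (Erlang) kernel t^(n-1) e^(-t) / (n-1)!. This is a generic sketch of that identity, not the authors' discrete gamma-spline machinery.

```python
import numpy as np

# Build the 3-fold convolution of exp(-t) on a grid and compare it with
# the analytic Erlang kernel t^2 * exp(-t) / 2.
dt = 0.005
t = np.arange(0.0, 10.0, dt)
g1 = np.exp(-t)                            # exponential "atom"

g2 = np.convolve(g1, g1)[:t.size] * dt     # 2-fold: ~ t * exp(-t)
g3 = np.convolve(g2, g1)[:t.size] * dt     # 3-fold: ~ t**2 * exp(-t) / 2

err = np.max(np.abs(g3 - 0.5 * t**2 * np.exp(-t)))
```

The residual is the rectangle-rule quadrature error, which shrinks linearly as the step `dt` is reduced.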
Geostationary satellites collocation
Li, Hengnian
2014-01-01
Geostationary Satellites Collocation aims to find solutions for deploying a safe and reliable collocation control. Focusing on the orbital perturbation analysis, the mathematical foundations for orbit and control of the geostationary satellite are summarized. The mathematical and physical principle of orbital maneuver and collocation strategies for multi geostationary satellites sharing with the same dead band is also stressed. Moreover, the book presents some applications using the above algorithms and mathematical models to help readers master the corrective method for planning station keeping maneuvers. Engineers and scientists in the fields of aerospace technology and space science can benefit from this book. Hengnian Li is the Deputy Director of State Key Laboratory of Astronautic Dynamics, China.
Peluso, Marco E M; Munnia, Armelle; Ceppi, Marcello
2014-11-05
Exposures to bisphenol-A, a weak estrogenic chemical largely used for the production of plastic containers, can affect rodent behaviour. Thus, we examined the relationships between bisphenol-A and anxiety-like behaviour, spatial skills, and aggressiveness in 12 toxicity studies of rodent offspring from females orally exposed to bisphenol-A while pregnant and/or lactating, by median and linear spline analyses. Subsequently, meta-regression analysis was applied to quantify the behavioural changes. U-shaped, inverted U-shaped and J-shaped dose-response curves were found to describe the relationships between bisphenol-A and the behavioural outcomes. The occurrence of anxiogenic-like effects and spatial skill changes displayed U-shaped and inverted U-shaped curves, respectively, providing examples of effects that are observed at low doses. Conversely, a J-shaped dose-response relationship was observed for aggressiveness. When the proportion of rodents expressing certain traits or the time they took to manifest an attitude was analysed, the meta-regression indicated a borderline significant increment of anxiogenic-like effects at low doses regardless of sex (β = -0.8%, 95% C.I. -1.7/0.1, P = 0.076, at ≤120 μg bisphenol-A), whereas only bisphenol-A males exhibited a significant inhibition of spatial skills (β = 0.7%, 95% C.I. 0.2/1.2, P = 0.004, at ≤100 μg/day). A significant increment of aggressiveness was observed in both sexes (β = 67.9, 95% C.I. 3.4/172.5, P = 0.038, at >4.0 μg). Bisphenol-A treatments also significantly abrogated spatial learning and ability in males. Low doses of bisphenol-A, e.g. ≤120 μg/day, were thus associated with behavioural aberrations in offspring. Copyright © 2014. Published by Elsevier Ireland Ltd.
Knott, Gary D
2000-01-01
A spline is a thin flexible strip composed of a material such as bamboo or steel that can be bent to pass through or near given points in the plane, or in 3-space, in a smooth manner. Mechanical engineers and drafting specialists find such (physical) splines useful in designing and in drawing plans for a wide variety of objects, such as for hulls of boats or for the bodies of automobiles where smooth curves need to be specified. These days, physical splines are largely replaced by computer software that can compute the desired curves (with appropriate encouragement). The same mathematical ideas used for computing "spline" curves can be extended to allow us to compute "spline" surfaces. The application of these mathematical ideas is rather widespread. Spline functions are central to computer graphics disciplines. Spline curves and surfaces are used in computer graphics renderings for both real and imaginary objects. Computer-aided-design (CAD) systems depend on algorithms for computing spline functions.
APLIKASI SPLINE ESTIMATOR TERBOBOT
Directory of Open Access Journals (Sweden)
I Nyoman Budiantara
2001-01-01
Full Text Available We considered the nonparametric regression model: Zj = X(tj) + ej, j = 1,2,...,n, where X(tj) is the regression curve. The random errors ej are independently normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of X is obtained by minimizing a Weighted Least Square. The solution of this optimization is a Weighted Polynomial Spline. Further, we give an application of the weighted spline estimator in nonparametric regression. Abstract in Bahasa Indonesia (translated): Given the nonparametric regression model Zj = X(tj) + ej, j = 1,2,...,n, with X(tj) the regression curve and ej random errors assumed normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of the regression curve X that minimizes a Weighted Penalized Least Square is a Weighted Natural Polynomial Spline estimator. An application of the weighted spline estimator in nonparametric regression is then given. Keywords: weighted spline, nonparametric regression, penalized least squares.
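The heteroscedastic model above, with variance s2/bj, suggests weighting each observation by the reciprocal of its standard deviation. This can be sketched with scipy's weighted smoothing spline; the data-generating choices below are assumptions for the example, not the paper's data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Weighted spline estimate for Z_j = X(t_j) + e_j, Var(e_j) = sigma^2 / b_j.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
b = np.where(t < 0.5, 4.0, 1.0)          # first half measured more precisely
sigma = 0.1
z = np.sin(2 * np.pi * t) + rng.normal(0.0, sigma / np.sqrt(b))

w = np.sqrt(b) / sigma                   # scipy expects weights ~ 1 / std
spl = UnivariateSpline(t, z, w=w, s=len(t))   # smoothing level ~ sample size

err = np.max(np.abs(spl(t) - np.sin(2 * np.pi * t)))
```

With these weights the smoothing condition sum(w*(z - f))^2 <= s matches the expected chi-square scale of the noise, so neither half of the data dominates the fit.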
Collocations in Marine Engineering English
Directory of Open Access Journals (Sweden)
Mirjana Borucinsky
2016-05-01
Full Text Available Collocations are very frequent in the English language (Hill, 2000), and they are probably the most common and most representative of English multi-word expressions (Lewis, 2000). Furthermore, as a subset of formulaic sequences, collocations are considered to be a central aspect of communicative competence (Nation, 2001). Hence, the importance of teaching collocations in General English (GE) as well as in English for Specific Purposes (ESP) is undeniable. Understanding and determining the relevant collocations and their mastery are of "utmost importance to a ME instructor" (Cole et al., 2007, p. 137), and collocations are one of the most productive ways of enriching vocabulary and terminology in modern ME. Vişan & Georgescu (2011) have undertaken a relevant study on collocations and "collocational competence" on board ships, including mostly nautical terminology. However, no substantial work on collocations in Marine Engineering English as a sub-register of ME has been carried out. Hence, this paper tries to determine the most important collocations in Marine Engineering English, based on a small corpus of collected e-mails. After determining the most relevant collocations, we suggest how to implement these in the language classroom and how to improve the collocational competence of marine engineering students.
Designing interactively with elastic splines
DEFF Research Database (Denmark)
Brander, David; Bærentzen, Jakob Andreas; Fisker, Ann-Sofie
2018-01-01
We present an algorithm for designing interactively with C1 elastic splines. The idea is to design the elastic spline using a C1 cubic polynomial spline where each polynomial segment is so close to satisfying the Euler-Lagrange equation for elastic curves that the visual difference becomes negligible. Using a database of cubic Bézier curves we are able to interactively modify the cubic spline such that it remains visually close to an elastic spline.
Collocations and collocation types in ESP textbooks: Quantitative pedagogical analysis
Directory of Open Access Journals (Sweden)
Bogdanović Vesna Ž.
2016-01-01
Full Text Available The term collocation, even though rather common in English grammar, is not a well-known or commonly used term in textbooks and scientific papers written in the Serbian language. Collocating is usually defined as the natural co-occurrence of two (or more) words, usually next to one another even though they can be separated in the text, while collocations are defined as words with natural semantic and/or syntactic relations joined together in a sentence. Collocations are naturally used in all English written texts, including scientific texts and papers. Using two English for Specific Purposes (ESP) textbooks for intermediate students' courses, this paper presents the frequency of collocations and their typology. The paper investigates the relationship between lexical and grammatical collocations in ESP texts and the reasons for their presence. An overview of the most used subtypes of lexical collocations is given as well. Furthermore, applying basic corpus analysis based on quantitative methods, the paper presents the number of open, restricted and bound collocations in ESP texts, drawing conclusions on their frequency and hence the modes for their learning. There is also a section on the number and usage of scientific collocations, both common scientific and narrow-professional ones. The conclusion is that the number of collocations present in the two selected textbooks calls for further analysis of these lexical connections, as well as new modes for teaching and presenting them to students of English.
Mobile Collocated Interactions With Wearables
DEFF Research Database (Denmark)
Lucero, Andrés; Wilde, Danielle; Robinson, Simon
2015-01-01
Research on mobile collocated interactions has been looking at situations in which collocated users engage in collaborative activities using their mobile devices, thus going from personal/individual toward shared/multiuser experiences and interactions. However, computers are getting smaller, more...
Collocation Impact on Team Effectiveness
Directory of Open Access Journals (Sweden)
M Eccles
2010-11-01
Full Text Available The collocation of software development teams is common, especially in agile software development environments. However, little is known about the impact of collocation on a team's effectiveness. This paper explores the impact of collocating agile software development teams on a number of team effectiveness factors. The study focused on South African software development teams and gathered data through the use of questionnaires and interviews. The key finding was that collocation has a positive impact on a number of team effectiveness factors, which can be categorised under team composition, team support, team management and structure, and team communication. Some of the negative impacts collocation had on team effectiveness relate to the fact that team members perceived that less emphasis was placed on roles, that the morale of the group was influenced by individuals, and that collocation was invasive, reduced the level of privacy and increased the frequency of interruptions. Overall, though, it is proposed that companies should consider collocating their agile software development teams, as collocation might leverage overall team effectiveness.
Rectangular spectral collocation
Driscoll, Tobin A.
2015-02-06
Boundary conditions in spectral collocation methods are typically imposed by removing some rows of the discretized differential operator and replacing them with others that enforce the required conditions at the boundary. A new approach based upon resampling differentiated polynomials into a lower-degree subspace makes differentiation matrices, and operators built from them, rectangular without any row deletions. Then, boundary and interface conditions can be adjoined to yield a square system. The resulting method is both flexible and robust, and avoids ambiguities that arise when applying the classical row deletion method outside of two-point scalar boundary-value problems. The new method is the basis for ordinary differential equation solutions in Chebfun software, and is demonstrated for a variety of boundary-value, eigenvalue and time-dependent problems.
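The classical row-deletion approach that the rectangular method generalizes can be shown in a few lines: build a Chebyshev differentiation matrix, square it, and overwrite the first and last rows with the boundary conditions. This is a standard sketch (Trefethen-style differentiation matrix), not Chebfun's implementation; the test problem is an assumption for the example.

```python
import numpy as np

# Square (row-deletion) Chebyshev collocation for the BVP
# u'' = -pi^2 sin(pi x), u(-1) = u(1) = 0, exact solution sin(pi x).
def cheb(N):
    """Chebyshev differentiation matrix D and points x (degree N)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))              # negative sum trick
    return D, x

N = 24
D, x = cheb(N)
A = D @ D                                    # discrete d^2/dx^2
f = -np.pi**2 * np.sin(np.pi * x)

# Row deletion: replace first/last rows with the boundary conditions.
A[0, :] = 0.0;  A[0, 0] = 1.0;   f[0] = 0.0     # u(1)  = 0  (x[0] = 1)
A[-1, :] = 0.0; A[-1, -1] = 1.0; f[-1] = 0.0    # u(-1) = 0  (x[N] = -1)

u = np.linalg.solve(A, f)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

For this scalar two-point problem the row-deletion choice is unambiguous; the ambiguities the abstract mentions arise for systems, higher-order operators, and interface conditions, which is where the rectangular formulation pays off.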
B-splines and Faddeev equations
International Nuclear Information System (INIS)
Huizing, A.J.
1990-01-01
Two numerical methods for solving the three-body equations describing relativistic pion-deuteron scattering have been investigated. For separable two-body interactions these equations form a set of coupled one-dimensional integral equations. They are plagued by singularities which occur in the kernel of the integral equations as well as in the solution. The methods to solve these equations differ in the way they treat the singularities. First the Fuda-Stuivenberg method is discussed. The basic idea of this method is a one-time iteration of the set of integral equations to treat the logarithmic singularities. In the second method, the spline method, the unknown solution is approximated by splines. Cubic splines have been used, with cubic B-splines as basis. If the solution is approximated by a linear combination of basis functions, an integral equation can be transformed into a set of linear equations for the expansion coefficients. This set of linear equations is solved by standard means. Splines are determined by points called knots. A proper choice of splines to approximate the solution amounts to a proper choice of the knots. The solution of the three-body scattering equations has a square-root behaviour at a certain point. Hence it was investigated how the knots should be chosen to approximate the square-root function by cubic B-splines in an optimal way. Before applying this method to solve numerically the three-body equations describing pion-deuteron scattering, an analytically solvable example was constructed with a singularity structure of both kernel and solution comparable to those of the three-body equations. The accuracy of the numerical solution was determined to a large extent by the accuracy of the approximation of the square-root part. The results for a pion laboratory energy of 47.4 MeV agree very well with those from literature. In a complete calculation for 47.7 MeV the spline method turned out to be a factor of a thousand faster than the Fuda-Stuivenberg method.
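The knot-placement question raised above, how to approximate a square-root singularity with cubic B-splines, can be explored numerically: knots graded toward the singular point beat uniform knots for the same basis size. This is a generic scipy sketch, not the thesis' code; the grading exponent is an assumption.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Least-squares cubic B-spline fits of sqrt(x) with uniform versus
# graded interior knots (graded = clustered at the singularity x = 0).
x = np.linspace(0.0, 1.0, 401)
y = np.sqrt(x)

uniform = np.linspace(0.0, 1.0, 12)[1:-1]         # interior knots
graded = (np.linspace(0.0, 1.0, 12) ** 2)[1:-1]   # quadratic grading

def max_err(knots):
    tck = splrep(x, y, t=knots, k=3)              # fixed-knot LSQ spline
    return float(np.max(np.abs(splev(x, tck) - y)))

e_uni, e_gra = max_err(uniform), max_err(graded)
```

With the same number of basis functions, the graded knots concentrate resolution where sqrt has unbounded derivatives, which is exactly the trade-off the thesis optimizes.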
Interpolation of natural cubic spline
Directory of Open Access Journals (Sweden)
Arun Kumar
1992-01-01
Full Text Available From the result in [1] it follows that there is a unique quadratic spline which bounds the same area as that of the function. The matching of the area for the cubic spline does not follow from the corresponding result proved in [2]. We obtain cubic splines which preserve the area of the function.
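The object discussed above, the natural cubic spline (second derivative zero at both ends), can be constructed directly with scipy; the sample data are an assumption for the example, and the area-preservation construction of the note itself is not reproduced.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Natural cubic spline interpolant: zero second derivative at both ends.
x = np.linspace(0.0, 1.0, 11)
y = np.exp(-x) * np.sin(3.0 * x)
cs = CubicSpline(x, y, bc_type='natural')

# The spline interpolates the data and satisfies the natural end conditions.
end_curvature = max(abs(cs(x[0], 2)), abs(cs(x[-1], 2)))
area = cs.integrate(x[0], x[-1])       # the quantity the note compares
```

`cs.integrate` gives the exact area under the piecewise cubic, which is the quantity the area-matching results in [1] and [2] are about.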
Translating English Idioms and Collocations
Directory of Open Access Journals (Sweden)
Rochayah Machali
2004-01-01
Full Text Available Learners of English should be made aware of the nature, types, and use of English idioms. This paper disensses the nature of idioms and collocations and translation issues related to them
Color management with a hammer: the B-spline fitter
Bell, Ian E.; Liu, Bonny H. P.
2003-01-01
To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
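The fit-then-invert workflow described above can be sketched in one dimension: smooth noisy device measurements with a B-spline fit, then invert the smooth model numerically. The paper's fitter is three-dimensional and customized; the device curve, noise level, and smoothing factor below are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import splrep, splev
from scipy.optimize import brentq

# Fit a smoothing B-spline to noisy "device" measurements, then invert it.
rng = np.random.default_rng(1)
drive = np.linspace(0.0, 1.0, 101)                 # device input level
response = drive ** 0.45                           # idealized device curve
measured = response + rng.normal(0.0, 0.01, drive.size)

# s > 0 makes splrep a smoothing (noise-filtering) fit, not an interpolant.
tck = splrep(drive, measured, s=drive.size * 0.01**2)

# Inversion: find the drive level that produces a target response.
target = 0.5
inv = brentq(lambda d: splev(d, tck) - target, 1e-6, 1.0)
```

Because the smoothed spline is well behaved and monotone here, the root-find converges robustly, which is the practical point the abstract makes about inverting spline device models.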
International Nuclear Information System (INIS)
Fletcher, S.K.
2002-01-01
1 - Description of program or function: The three programs SPLPKG, WFCMPR, and WFAPPX provide the capability for interactively generating, comparing and approximating Wilson-Fowler splines. The Wilson-Fowler spline is widely used in Computer Aided Design and Manufacturing (CAD/CAM) systems. It is favored for many applications because it produces a smooth, low-curvature fit to planar data points. Program SPLPKG generates a Wilson-Fowler spline passing through given nodes (with given end conditions) and also generates a piecewise linear approximation to that spline within a user-defined tolerance. The program may be used to generate a 'desired' spline against which to compare other splines generated by CAD/CAM systems. It may also be used to generate an acceptable approximation to a desired spline in the event that an acceptable spline cannot be generated by the receiving CAD/CAM system. SPLPKG writes an IGES file of points evaluated on the spline and/or a file containing the spline description. Program WFCMPR computes the maximum difference between two Wilson-Fowler splines and may be used to verify the spline recomputed by a receiving system. It compares two Wilson-Fowler splines with common nodes and reports the maximum distance between curves (measured perpendicular to segments) and the maximum difference of their tangents (or normals), both computed along the entire length of the splines. Program WFAPPX computes the maximum difference between a Wilson-Fowler spline and a piecewise linear curve. It may be used to accept or reject a proposed approximation to a desired Wilson-Fowler spline, even if the origin of the approximation is unknown. The maximum deviation between these two curves, and the parameter value on the spline where it occurs, are reported. 2 - Restrictions on the complexity of the problem - Maxima of: 1600 evaluation points (SPLPKG), 1000 evaluation points (WFAPPX), 1000 linear curve breakpoints (WFAPPX), 100 spline nodes
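The linearization step SPLPKG performs, building a piecewise linear approximation of a curve within a user tolerance, can be sketched generically with recursive bisection; this is an illustrative stand-in, not the SPLPKG algorithm, and the midpoint deviation test is an assumption.

```python
import numpy as np

# Refine a piecewise-linear approximation of a smooth curve until each
# segment's midpoint stays within `tol` of the chord.
def flatten(f, a, b, tol):
    """Return breakpoints of a piecewise-linear approximation of f."""
    m = 0.5 * (a + b)
    chord_mid = 0.5 * (f(a) + f(b))
    if abs(f(m) - chord_mid) <= tol:       # midpoint deviation test
        return [a, b]
    left = flatten(f, a, m, tol)
    return left[:-1] + flatten(f, m, b, tol)

pts = flatten(np.sin, 0.0, np.pi, 1e-3)

# Check the broken line against the curve on a dense grid.
xs = np.linspace(0.0, np.pi, 1001)
dev = float(np.max(np.abs(np.interp(xs, pts, np.sin(pts)) - np.sin(xs))))
```

Note the recursion naturally places more breakpoints where the curvature is high, which is why tolerance-driven linearization beats uniform sampling for the same point budget.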
Application of multivariate splines to discrete mathematics
Xu, Zhiqiang
2005-01-01
Using methods developed in multivariate splines, we present an explicit formula for discrete truncated powers, which are defined as the number of non-negative integer solutions of linear Diophantine equations. We further use the formula to study some classical problems in discrete mathematics as follows. First, we extend the partition function of integers in number theory. Second, we exploit the relation between the relative volume of convex polytopes and multivariate truncated powers and giv...
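The quantity defined above, the number of non-negative integer solutions of a linear Diophantine equation, can be computed directly by dynamic programming; this brute-force counter is a check on the concept, not the paper's closed formula.

```python
# Count non-negative integer solutions of a_1*x_1 + ... + a_n*x_n = b,
# i.e. the discrete truncated power evaluated at b.
def truncated_power(coeffs, b):
    ways = [1] + [0] * b                 # ways[v] = #solutions summing to v
    for a in coeffs:
        for v in range(a, b + 1):
            ways[v] += ways[v - a]       # allow any multiple of a
    return ways[b]

# Example: x + 2y + 5z = 10 has exactly 10 non-negative solutions.
count = truncated_power([1, 2, 5], 10)
```

With `coeffs = [1] * n` this reduces to counting compositions-with-repetition, i.e. the restricted partition functions the paper generalizes.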
Spline techniques for magnetic fields
International Nuclear Information System (INIS)
Aspinall, J.G.
1984-01-01
This report is an overview of B-spline techniques, oriented toward magnetic field computation. These techniques form a powerful mathematical approximating method for many physics and engineering calculations. In section 1, the concept of a polynomial spline is introduced. Section 2 shows how a particular spline with well-chosen properties, the B-spline, can be used to build any spline. In section 3, the description of how to solve a simple spline approximation problem is completed, and some practical examples of using splines are shown. All these sections deal exclusively in scalar functions of one variable for simplicity. Section 4 is partly a digression. Techniques that are not B-spline techniques, but are closely related, are covered. These methods are not needed for what follows, until the last section on errors. Sections 5, 6, and 7 form a second group which works toward the final goal of using B-splines to approximate a magnetic field. Section 5 demonstrates how to approximate a scalar function of many variables. The necessary mathematics is completed in section 6, where the problems of approximating a vector function in general, and a magnetic field in particular, are examined. Finally, some algorithms and data organization are shown in section 7. Section 8 deals with error analysis.
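The multivariate step described above (sections 5-6) amounts to tensor-product B-spline approximation, from which field components can be read off as spline derivatives of a scalar potential. This is a hedged sketch using scipy rather than the report's own routines; the potential and evaluation point are assumptions.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Tensor-product cubic B-spline fit of a scalar function of two
# variables, with a field component obtained as a spline derivative.
x = np.linspace(0.0, 1.0, 31)
y = np.linspace(0.0, 1.0, 31)
X, Y = np.meshgrid(x, y, indexing='ij')
phi = np.sin(np.pi * X) * np.cos(np.pi * Y)       # "potential" samples

spl = RectBivariateSpline(x, y, phi, kx=3, ky=3)

# Evaluate B_x = -d(phi)/dx at an off-grid point and compare analytically.
px, py = 0.37, 0.58
bx = -spl.ev(px, py, dx=1)
exact = -np.pi * np.cos(np.pi * px) * np.cos(np.pi * py)
```

Differentiating the spline rather than finite-differencing the samples gives a smooth, globally consistent field, which is the point of using B-splines for field representation.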
A Line-Tau Collocation Method for Partial Differential Equations ...
African Journals Online (AJOL)
This paper deals with the numerical solution of second order linear partial differential equations with the use of the method of lines coupled with the tau collocation method. The method of lines is used to convert the partial differential equation (PDE) to a sequence of ordinary differential equations (ODEs) which is then ...
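The method-of-lines conversion described above, reducing a PDE to a system of ODEs in time, can be sketched for the heat equation; finite differences stand in here for the paper's tau-collocation treatment of the spatial operator, which is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of lines for u_t = u_xx on [0, 1], u(0, t) = u(1, t) = 0,
# initial data sin(pi x); exact solution exp(-pi^2 t) sin(pi x).
nx = 51
x = np.linspace(0.0, 1.0, nx)
h = x[1] - x[0]
u0 = np.sin(np.pi * x)

def rhs(t, u):
    du = np.zeros_like(u)
    du[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2   # discrete u_xx
    return du                             # end points stay pinned at zero

sol = solve_ivp(rhs, (0.0, 0.1), u0, rtol=1e-8, atol=1e-10)
err = np.max(np.abs(sol.y[:, -1] - np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)))
```

The spatial discretization fixes the accuracy here; in the paper, that role is played by the tau-collocation step along each line.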
Efficient computation of smoothing splines via adaptive basis sampling
Ma, Ping
2015-06-24
© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n^{3}). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
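The computational idea above, fitting a spline with far fewer basis functions than data points, can be sketched with a fixed-knot least-squares spline; the paper's adaptive, response-driven basis sampling is not reproduced, and the knot count and data model below are assumptions.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Fit 20,000 noisy observations with only ~20 interior knots: the
# least-squares problem scales with the basis size, not O(n^3).
rng = np.random.default_rng(2)
n = 20000
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(4.0 * np.pi * x) + rng.normal(0.0, 0.2, n)

knots = np.linspace(0.0, 1.0, 22)[1:-1]   # small reduced basis
fit = LSQUnivariateSpline(x, y, knots)    # fixed-knot LSQ cubic spline

err = float(np.max(np.abs(fit(x) - np.sin(4.0 * np.pi * x))))
```

The paper's contribution is choosing this reduced basis adaptively using the response values, and proving the resulting estimator keeps the full smoothing spline's convergence rate.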
Efficient computation of smoothing splines via adaptive basis sampling
Ma, Ping; Huang, Jianhua Z.; Zhang, Nan
2015-01-01
© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n^{3}). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
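The O(n^3) bottleneck and the basis-sampling remedy described in the abstract can be sketched in a few lines. The following is not the authors' adaptive algorithm (which selects basis functions using the response values); it is a generic ridge-penalized fit with a cubic radial basis on m randomly sampled centers, illustrating why m ≪ n basis functions cut the cost to roughly O(n m^2):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2000, 40                      # n samples, m sampled basis functions (m << n)
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

# Basis sampling: pick m centers from the data (here uniformly at random;
# the paper's adaptive scheme instead uses the response values to choose them).
centers = np.sort(rng.choice(x, m, replace=False))

# Cubic radial basis |x - c|^3 (the kernel underlying 1-D cubic smoothing
# splines), plus an affine part, fitted by ridge-penalized least squares.
B = np.column_stack([np.ones(n), x, np.abs(x[:, None] - centers) ** 3])
lam = 1e-4
coef = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
fit = B @ coef
rmse = np.sqrt(np.mean((fit - np.sin(2 * np.pi * x)) ** 2))
print("RMSE vs true function:", rmse)
```

The full-basis version would use all n = 2000 centers, giving the O(n^3) solve the abstract refers to; the reduced basis keeps the estimator flexible while the dominant cost becomes forming the m-by-m normal equations.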
Straight-sided Spline Optimization
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2011-01-01
and the subject of improving the design. The present paper concentrates on the optimization of splines and the predictions of stress concentrations, which are determined by finite element analysis (FEA). Using design modifications that do not change the spline load carrying capacity, it is shown that large...
Crosstalk statistics via collocation method
Diouf, F.; Canavero, Flavio
2009-01-01
A probabilistic model for the evaluation of transmission-line crosstalk is proposed. The geometrical parameters are assumed to be unknown, and the exact solution is decomposed into two functions, one depending solely on the random parameters and the other on the frequency. The stochastic collocation
On Characterization of Quadratic Splines
DEFF Research Database (Denmark)
Chen, B. T.; Madsen, Kaj; Zhang, Shuzhong
2005-01-01
that the representation can be refined in a neighborhood of a non-degenerate point and a set of non-degenerate minimizers. Based on these characterizations, many existing algorithms for specific convex quadratic splines are also finitely convergent for a general convex quadratic spline. Finally, we study the relationship between the convexity of a quadratic spline function and the monotonicity of the corresponding LCP problem. It is shown that, although both conditions lead to easy solvability of the problem, they are different in general.
Tomographic reconstruction with B-splines surfaces
International Nuclear Information System (INIS)
Oliveira, Eric F.; Dantas, Carlos C.; Melo, Silvio B.; Mota, Icaro V.; Lira, Mailson
2011-01-01
Algebraic reconstruction techniques, when applied to a limited number of data, usually suffer from noise caused by the correction process or by inconsistencies in the data coming from the stochastic process of radioactive emission and equipment oscillation. Post-processing of the reconstructed image with the application of filters can be done to mitigate the presence of noise. In general, these processes also attenuate the discontinuities present in edges that distinguish objects or artifacts, causing excessive blurring in the reconstructed image. This paper proposes a built-in noise reduction that at the same time ensures an adequate smoothness level in the reconstructed surface, representing the unknowns as linear combinations of elements of a piecewise polynomial basis, i.e. a B-spline basis. For that, the algebraic technique ART is modified to accommodate first, second and third degree bases, ensuring C^0, C^1 and C^2 smoothness levels, respectively. For comparison, three methodologies are applied: ART, ART post-processed with regular B-spline filters (ART*) and the proposed method with the built-in B-spline filter (BsART). Simulations with input data produced from common mathematical phantoms were conducted. For the phantoms used, the BsART method consistently presented the smallest errors among the three methods. This study has shown the superiority of the change made to embed the filter in ART when compared to the post-filtered ART. (author)
Efficient GPU-based texture interpolation using uniform B-splines
Ruijters, D.; Haar Romenij, ter B.M.; Suetens, P.
2008-01-01
This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and
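The decomposition the abstract alludes to (the truncated citation is to Sigg and Hadwiger's GPU filtering technique) rewrites a 4-tap cubic B-spline reconstruction as two weighted linear interpolations, which a GPU performs in hardware. A minimal 1-D CPU sketch, assuming uniform samples and the standard uniform cubic B-spline weights, verifies that the two forms agree:

```python
import numpy as np

def cubic_bspline_weights(t):
    # Uniform cubic B-spline weights for fractional position t in [0, 1).
    w0 = (1 - t) ** 3 / 6
    w1 = (3 * t**3 - 6 * t**2 + 4) / 6
    w2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    w3 = t ** 3 / 6
    return w0, w1, w2, w3

def lerp(f, x):
    # Linear interpolation into sample array f at real coordinate x.
    i, a = int(np.floor(x)), x - np.floor(x)
    return (1 - a) * f[i] + a * f[i + 1]

f = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])   # arbitrary sample values
i, t = 2, 0.3                                     # interpolate between f[2] and f[3]
w0, w1, w2, w3 = cubic_bspline_weights(t)

# Direct 4-tap cubic B-spline reconstruction.
direct = w0 * f[i - 1] + w1 * f[i] + w2 * f[i + 1] + w3 * f[i + 2]

# Two linear fetches at shifted coordinates reproduce it exactly.
g0, g1 = w0 + w1, w2 + w3
x0 = (i - 1) + w1 / g0           # position of the first linear fetch
x1 = (i + 1) + w3 / g1           # position of the second linear fetch
two_taps = g0 * lerp(f, x0) + g1 * lerp(f, x1)
print(direct, two_taps)
```

On a GPU the two `lerp` calls become hardware-filtered texture fetches, which is the source of the speedup: no lookup tables or basis-function evaluation on the CPU, as the abstract states.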
Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.
2013-01-01
Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.
Shen, Xiang; Liu, Bin; Li, Qing-Quan
2017-03-01
The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates
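Independent of the satellite-imagery pipeline above, the core thin-plate spline fit is compact. A hedged sketch (synthetic control points and a synthetic "bias" field, not the Ziyuan-3 data or the authors' RPC correction): solve for kernel weights plus an affine part so that the surface interpolates the control values:

```python
import numpy as np

def tps_kernel(r):
    # Thin-plate spline radial kernel U(r) = r^2 log r, with U(0) = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r**2 * np.log(r), 0.0)

def tps_fit(pts, vals):
    # Solve the standard TPS system: kernel weights w plus affine part a,
    # with orthogonality side conditions enforced by the lower-right block.
    n = len(pts)
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    K = tps_kernel(r)
    P = np.column_stack([np.ones(n), pts])            # affine basis [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([vals, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]

def tps_eval(pts, w, a, q):
    r = np.linalg.norm(q[None, :] - pts, axis=-1)
    return tps_kernel(r) @ w + a[0] + a[1:] @ q

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (21, 2))       # e.g. 21 control points, as in the test scenario
vals = pts[:, 0]**2 - pts[:, 1]        # smooth synthetic distortion field
w, a = tps_fit(pts, vals)
print(tps_eval(pts, w, a, pts[0]))     # interpolates the control value at pts[0]
```

The affine part alone reproduces the classical affine-transformation correction mentioned in the abstract; the kernel terms supply the non-rigid flexibility that, per the reported results, simple polynomial models lack.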
Improving academic literacy by teaching collocations | Nizonkiza ...
African Journals Online (AJOL)
Stellenbosch Papers in Linguistics ... Abstract. This study explores the effect of teaching collocations on building academic vocabulary and hence improving academic writing abilities. ... They were presented with a completion task and an essay-writing task before and after being exposed to a collocation-based syllabus.
Supporting Collocation Learning with a Digital Library
Wu, Shaoqun; Franken, Margaret; Witten, Ian H.
2010-01-01
Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on…
Measuring receptive collocational competence across proficiency ...
African Journals Online (AJOL)
The present study investigates (i) English as Foreign Language (EFL) learners' receptive collocational knowledge growth in relation to their linguistic proficiency level; (ii) how much receptive collocational knowledge is acquired as linguistic proficiency develops; and (iii) the extent to which receptive knowledge of ...
"Minimum input, maximum output, indeed!" Teaching Collocations ...
African Journals Online (AJOL)
Fifty-nine EFL college students participated in the study, and they received two 75-minute instructions between pre- and post-tests: one on the definition of colloca-tion and its importance, and the other on the skill of looking up collocational information in the Naver Dictionary — an English–Korean online dictionary. During ...
Shape Preserving Interpolation Using C2 Rational Cubic Spline
Directory of Open Access Journals (Sweden)
Samsul Ariffin Abdul Karim
2016-01-01
This paper discusses the construction of a new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea has been extended to shape preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. The sufficient conditions for positivity are derived on one parameter γi, while the other two parameters αi and βi are free parameters that can be used to change the final shape of the resulting interpolating curves. This enables the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably, our scheme is easy to use and does not require knot insertion, and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i = 1, ..., n-1. Comparisons with existing schemes have also been done in detail. From all presented numerical results, the new C2 rational cubic spline gives very smooth interpolating curves compared to some established rational cubic schemes. An error analysis when the function to be interpolated is f(t) ∈ C^3[t_0, t_n] is also investigated in detail.
Quasi interpolation with Voronoi splines.
Mirzargar, Mahsa; Entezari, Alireza
2011-12-01
We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE
Symmetric, discrete fractional splines and Gabor systems
DEFF Research Database (Denmark)
Søndergaard, Peter Lempel
2006-01-01
In this paper we consider fractional splines as windows for Gabor frames. We introduce two new types of symmetric, fractional splines in addition to one found by Unser and Blu. For the finite, discrete case we present two families of splines: one is created by sampling and periodizing the continuous splines, and one is a truly finite, discrete construction. We discuss the properties of these splines and their usefulness as windows for Gabor frames and Wilson bases.
RBF Multiscale Collocation for Second Order Elliptic Boundary Value Problems
Farrell, Patricio
2013-01-01
In this paper, we discuss multiscale radial basis function collocation methods for solving elliptic partial differential equations on bounded domains. The approximate solution is constructed in a multilevel fashion, each level using compactly supported radial basis functions of smaller scale on an increasingly fine mesh. On each level, standard symmetric collocation is employed. A convergence theory is given, which builds on recent theoretical advances for multiscale approximation using compactly supported radial basis functions. We are able to show that the convergence is linear in the number of levels. We also discuss the condition numbers of the arising systems and the effect of simple, diagonal preconditioners, now proving rigorously previous numerical observations. © 2013 Society for Industrial and Applied Mathematics.
Isogeometric analysis using T-splines
Bazilevs, Yuri
2010-01-01
We explore T-splines, a generalization of NURBS enabling local refinement, as a basis for isogeometric analysis. We review T-splines as a surface design methodology and then develop it for engineering analysis applications. We test T-splines on some elementary two-dimensional and three-dimensional fluid and structural analysis problems and attain good results in all cases. We summarize the current status of T-splines, their limitations, and future possibilities. © 2009 Elsevier B.V.
Spline approximation, Part 1: Basic methodology
Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar
2018-04-01
In engineering geodesy, point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis, a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles, spline approximation is presented from a geodetic point of view. In this paper (Part 1), the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of the B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
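A spline constructed from truncated polynomials, as discussed in Part 1, can be fitted to scattered data by ordinary least squares. A minimal sketch with assumed knot positions and synthetic data (not the geodetic examples of the paper):

```python
import numpy as np

def truncated_power_basis(x, knots, degree=3):
    # Columns: 1, x, ..., x^p, then one truncated power (x - k)_+^p per knot.
    cols = [x**j for j in range(degree + 1)]
    cols += [np.clip(x - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + 0.1 * rng.standard_normal(200)   # noisy samples of a smooth curve

knots = np.linspace(1, 9, 7)                     # assumed interior knots
B = truncated_power_basis(x, knots)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
rmse = np.sqrt(np.mean((y - B @ coef) ** 2))
print("fit RMSE:", rmse)
```

The truncated powers guarantee C^2 continuity across the knots automatically, which is why this basis is a convenient starting point before moving to the numerically better-behaved B-splines of Part 2.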
Pseudospectral collocation methods for fourth order differential equations
Malek, Alaeddin; Phillips, Timothy N.
1994-01-01
Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.
Spline and spline wavelet methods with applications to signal and image processing
Averbuch, Amir Z; Zheludev, Valery A
This volume provides universal methodologies accompanied by Matlab software to manipulate numerous signal and image processing applications. It is done with discrete and polynomial periodic splines. Various contributions of splines to signal and image processing are presented from a unified perspective. This presentation is based on the Zak transform and on the Spline Harmonic Analysis (SHA) methodology. SHA combines the approximation capabilities of splines with the computational efficiency of the Fast Fourier Transform. SHA reduces the design of different spline types such as splines, spline wavelets (SW), wavelet frames (SWF) and wavelet packets (SWP), and their manipulations, to simple operations. Digital filters produced by the wavelet design process give birth to subdivision schemes. Subdivision schemes enable fast explicit computation of splines' values at dyadic and triadic rational points. This is used for upsampling signals and images. In addition to the design of a diverse library of splines, SW, SWP a...
GUESSING VERB-ADVERB COLLOCATIONS: ARAB EFL ...
African Journals Online (AJOL)
In the sections to follow, the concept and meaning of collocation is defined ... expressions (Alexander 1984); formulaic language or speech (Weinert 1995); multi- ... Two further studies reported Arab EFL learners' overall ignorance of col-.
Slovene-English Contrastive Phraseology: Lexical Collocations
Directory of Open Access Journals (Sweden)
Primož Jurko
2010-05-01
Phraseology is seen as one of the key elements and arguably the most productive part of any language. The paper is focused on collocations and separates them from other phraseological units, such as idioms or compounds. Highlighting the difference between a monolingual and a bilingual (i.e. contrastive) approach to collocation, the article presents two distinct classes of collocations: grammatical and lexical. The latter, treated contrastively, represent the focal point of the paper, since they are an unending source of translation errors to both students of translation and professional translators. The author introduces a methodology of systematic classification of lexical collocations applied to the Slovene-English language pair and based on structural (lexical congruence) and semantic (translational predictability) criteria.
Goudarzi, Zahra; Moini, M. Raouf
2012-01-01
Collocation is one of the most problematic areas in second language learning, and it seems that learners who want to improve their communication in another language should improve their collocational competence. This study attempts to determine the effect of applying three different kinds of collocation on collocation learning and retention of…
He, Shanshan; Ou, Daojiang; Yan, Changya; Lee, Chen-Han
2015-01-01
Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical...
Thin-plate spline quadrature of geodetic integrals
Vangysen, Herman
1989-01-01
Thin-plate spline functions (known for their flexibility and fidelity in representing experimental data) are especially well-suited for the numerical integration of geodetic integrals in the area where the integration is most sensitive to the data, i.e., in the immediate vicinity of the evaluation point. Spline quadrature rules are derived for the contribution of a circular innermost zone to Stokes's formula, to the formulae of Vening Meinesz, and to the recursively evaluated operator L(n) in the analytical continuation solution of Molodensky's problem. These rules are exact for interpolating thin-plate splines. In cases where the integration data are distributed irregularly, a system of linear equations needs to be solved for the quadrature coefficients. Formulae are given for the terms appearing in these equations. In case the data are regularly distributed, the coefficients may be determined once and for all. Examples are given of some fixed-point rules. With such rules, successive evaluation, within a circular disk, of the terms in Molodensky's series becomes relatively easy. The spline quadrature technique presented complements other techniques such as ring integration for intermediate integration zones.
Numerical solution of system of boundary value problems using B-spline with free parameter
Gupta, Yogesh
2017-01-01
This paper deals with a method of B-spline solution for a system of boundary value problems. Differential equations are useful in various fields of science and engineering. Some interesting real-life problems involve more than one unknown function, resulting in systems of simultaneous differential equations. Such systems have been applied to many problems in mathematics, physics, engineering, etc. In the present paper, B-spline and B-spline-with-free-parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points, together with the equations of the given system and the boundary conditions, resulting in a linear matrix equation.
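For a single equation rather than the system treated in the paper, cubic B-spline collocation reduces to one small linear solve, because at a uniform node both the spline value and its second derivative are fixed three-term combinations of the coefficients. A sketch for the model problem u'' = f with homogeneous boundary conditions (the standard method, not the paper's free-parameter variant):

```python
import numpy as np

# Collocation for u''(x) = f(x), u(0) = u(1) = 0, with uniform cubic B-splines.
# At a node x_i the spline and its second derivative are:
#   u(x_i)   = (c[i-1] + 4 c[i] + c[i+1]) / 6
#   u''(x_i) = (c[i-1] - 2 c[i] + c[i+1]) / h**2
n = 50
h = 1.0 / n
x = np.linspace(0, 1, n + 1)
f = -np.pi**2 * np.sin(np.pi * x)        # exact solution: u(x) = sin(pi x)

m = n + 3                                 # coefficients c[-1], ..., c[n+1]
A = np.zeros((m, m))
b = np.zeros(m)

# Boundary rows: u(0) = 0 and u(1) = 0.
A[0, 0:3] = [1/6, 4/6, 1/6]
A[-1, -3:] = [1/6, 4/6, 1/6]

# Collocation rows: u''(x_i) = f(x_i) at every node.
for i in range(n + 1):
    A[i + 1, i:i + 3] = [1/h**2, -2/h**2, 1/h**2]
    b[i + 1] = f[i]

c = np.linalg.solve(A, b)
u = (c[:-2] + 4*c[1:-1] + c[2:]) / 6      # spline values at the nodes
print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```

The matrix is tridiagonal apart from the two boundary rows, which is what makes these methods cheap; for the systems of BVPs in the paper the same blocks simply repeat per unknown function.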
Construction of local integro quintic splines
Directory of Open Access Journals (Sweden)
T. Zhanlav
2016-06-01
In this paper, we show that integro quintic splines can be constructed locally without solving any systems of equations. The new construction does not require any additional end conditions. By virtue of these advantages the proposed algorithm is easy to implement and effective. At the same time, the local integro quintic splines possess approximation properties as good as those of the integro quintic splines. In this paper, we have proved that our local integro quintic spline has superconvergence properties at the knots for the first and third derivatives. The orders of convergence at the knots are six (not five) for the first derivative and four (not three) for the third derivative.
Optimization of straight-sided spline design
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2011-01-01
and the subject of improving the design. The present paper concentrates on the optimization of splines and the predictions of stress concentrations, which are determined by finite element analysis (FEA). Using different design modifications that do not change the spline load carrying capacity, it is shown...
Energy Technology Data Exchange (ETDEWEB)
Saha Ray, S., E-mail: santanusaharay@yahoo.com; Patra, A.
2014-10-15
Highlights: • A stationary transport equation has been solved using the Haar wavelet collocation method. • This paper intends to demonstrate the great utility of Haar wavelets for nuclear science problems. • In the present paper, two-dimensional Haar wavelets are applied. • The proposed method is mathematically very simple, easy and fast. - Abstract: In this paper the numerical solution of the fractional order stationary neutron transport equation is presented using the Haar wavelet collocation method (HWCM). The Haar wavelet collocation method is efficient and powerful in solving a wide class of linear and nonlinear differential equations. This paper intends to provide an application of Haar wavelets to nuclear science problems. It describes the application of Haar wavelets to the numerical solution of the fractional order stationary neutron transport equation in a homogeneous medium with isotropic scattering. The proposed method is mathematically very simple, easy and fast. To demonstrate the efficiency and applicability of the method, two test problems are discussed.
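The Haar basis that HWCM expands the unknown in is easy to construct on the usual midpoint collocation grid. A sketch showing only the basis construction and its orthogonality (not the transport solver itself):

```python
import numpy as np

def haar_matrix(m):
    # Rows: the scaling function plus Haar wavelets h_{j,k}, all sampled at
    # the m midpoint collocation points x_l = (l + 0.5)/m (m a power of two).
    x = (np.arange(m) + 0.5) / m
    H = np.zeros((m, m))
    H[0] = 1.0
    row, j = 1, 0
    while 2**j < m:
        for k in range(2**j):
            lo, mid, hi = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
            H[row] = np.where((x >= lo) & (x < mid), 1.0,
                     np.where((x >= mid) & (x < hi), -1.0, 0.0))
            row += 1
        j += 1
    return H

m = 8
H = haar_matrix(m)
G = H @ H.T
# Rows are mutually orthogonal, so G is diagonal and expansion coefficients
# follow from scaled inner products instead of a linear solve.
print(np.count_nonzero(G - np.diag(np.diagonal(G))))
```

In a collocation scheme the unknown (or its highest derivative) is expanded in these rows; the piecewise-constant structure is what makes the resulting matrices simple to assemble, which is presumably the "simple, easy and fast" claim in the highlights.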
MHD stability analysis using higher order spline functions
Energy Technology Data Exchange (ETDEWEB)
Ida, Akihiro [Department of Energy Engineering and Science, Graduate School of Engineering, Nagoya University, Nagoya, Aichi (Japan); Todoroki, Jiro; Sanuki, Heiji
1999-04-01
The eigenvalue problem of the linearized magnetohydrodynamic (MHD) equation is formulated by using higher order spline functions as the base functions of a Ritz-Galerkin approximation. When the displacement vector normal to the magnetic surface (in the magnetic surface) is interpolated by B-spline functions of degree p1 (degree p2), which are continuously c1-times (c2-times) differentiable on neighboring finite elements, the sufficient conditions for a good approximation are given by p1 ≥ p2 + 1, c1 ≤ c2 + 1 (c1 ≥ 1, p2 ≥ c2 ≥ 0). The influence of the numerical integration upon the convergence of calculated eigenvalues is discussed. (author)
The relationship between productive knowledge of collocations and ...
African Journals Online (AJOL)
This research explores tertiary level L2 students' mastery of the collocations pertaining to the Academic Word List (AWL) and the extent to which their knowledge of collocations grows alongside their academic literacy. A collocation test modelled on Laufer and Nation (1999), with target words selected from Coxhead's (2000) ...
Testing controlled productive knowledge of adverb-verb collocations ...
African Journals Online (AJOL)
A controlled productive test of adverb-verb collocations ..... The third approach to studying collocations, corpus analysis, ..... The collocation web model is thought to match Nation's (2001) psychological .... Theory, analysis, and applications. .... Canadian Modern ... Focus on vocabulary: Mastering the Academic Word List.
Testing controlled productive knowledge of adverb-verb collocations ...
African Journals Online (AJOL)
The study also reveals that controlled productive knowledge of adverbverb collocations is less problematic. Based on these results, teaching strategies aimed at improving the use of adverb-verb collocations among EFL users are proposed. Keywords: academic writing, adverb-verb collocations, productive knowledge of ...
The structure of an Afrikaans collocation and phrase dictionary | Otto ...
African Journals Online (AJOL)
As one of the target groups is unsophisticated learners with a limited grammatical background, the ideal would be to enter lexical collocations both at their bases and at the collocators. To save space however, more information such as examples could then be provided at the bases only. Grammatical collocations should be ...
The Presentation and Treatment of Collocations as Secondary ...
African Journals Online (AJOL)
Although the discussion primarily focuses on printed dictionaries proposals are also made for the presentation of collocations in online dictionaries. Keywords: Article structure, collocation, complex collocation, cotext, example sentences, integrated microstructure, non-grouped ordering, search zone, semi-integrated ...
Measuring receptive collocational competence across proficiency ...
African Journals Online (AJOL)
frequency bands. A proficiency measure and a collocation test were administered to English ... battery may negatively impact the test-takers' performance. ..... examples. The major finding is that raising learners' awareness constitutes the best way forward ..... Amsterdam: John Benjamins Publishing Company. Green, R.
Improving academic literacy by teaching collocations
African Journals Online (AJOL)
version of McCarthy and O'Dell's (2005) collocation web model were the techniques adopted ... both cued recall and essay writing, supporting earlier findings (cf. ..... from a 'holistic' representation of formulaic sequences in memory” (Boers et al. ... their study indicate that non-native speakers also retain words as they appear ...
Acoustic scattering by multiple elliptical cylinders using collocation multipole method
International Nuclear Information System (INIS)
Lee, Wei-Ming
2012-01-01
This paper presents the collocation multipole method for the acoustic scattering induced by multiple elliptical cylinders subjected to an incident plane sound wave. To satisfy the Helmholtz equation in the elliptical coordinate system, the scattered acoustic field is formulated in terms of angular and radial Mathieu functions which also satisfy the radiation condition at infinity. The sound-soft or sound-hard boundary condition is satisfied by uniformly collocating points on the boundaries. For the sound-hard or Neumann conditions, the normal derivative of the acoustic pressure is determined by using the appropriate directional derivative without requiring the addition theorem of Mathieu functions. By truncating the multipole expansion, a finite linear algebraic system is derived and the scattered field can then be determined according to the given incident acoustic wave. Once the total field is calculated as the sum of the incident field and the scattered field, the near field acoustic pressure along the scatterers and the far field scattering pattern can be determined. For the acoustic scattering of one elliptical cylinder, the proposed results match well with the analytical solutions. The proposed scattered fields induced by two and three elliptical–cylindrical scatterers are critically compared with those provided by the boundary element method to validate the present method. Finally, the effects of the convexity of an elliptical scatterer, the separation between scatterers and the incident wave number and angle on the acoustic scattering are investigated.
Evaluating a new test of whole English collocations
DEFF Research Database (Denmark)
Revier, Robert Lee
2009-01-01
in their own right and, as such, feature formal, semantic, and usage properties similar to those borne by single words. Third, the semantic properties of the constituent words that combine to form collocations are likely to play a role in EFL learners' ability to 'produce' English collocations. Fourth, testing of L2 collocation knowledge needs to focus on the recognition and production of whole collocations. It is this set of assumptions that the new collocation test presented in this chapter is designed to probe. More specifically, the test is designed to assess L2 learners' productive knowledge of whole...
Positivity Preserving Interpolation Using Rational Bicubic Spline
Directory of Open Access Journals (Sweden)
Samsul Ariffin Abdul Karim
2015-01-01
This paper discusses positivity preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, 8 of which are free parameters. The sufficient conditions for positivity are derived on every four-boundary-curve network on the rectangular patch. Numerical comparison with existing schemes has also been done in detail. Based on the Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.
Lexical and Grammatical Collocations in Writing Production of EFL Learners
Directory of Open Access Journals (Sweden)
Maryam Bahardoust
2012-05-01
Full Text Available Lewis (1993) recognized the significance of word combinations, including collocations, by presenting the lexical approach. Because of the crucial role of collocation in vocabulary acquisition, this research set out to evaluate the rate of collocations in Iranian EFL learners' writing production across L1 and L2. In addition, L1 interference with L2 collocational use in the learners' writing samples was studied. To achieve this goal, 200 Persian EFL learners at BA level were selected. These participants were taking paragraph writing and essay writing courses in two successive semesters. For the data analysis, mid-term and final exams, as well as the assignments of the L2 learners, were evaluated. Because of the nominal nature of the data, the chi-square test was utilized for data analysis. Then the rate of lexical and grammatical collocations was calculated. Results showed that lexical collocations outnumbered grammatical collocations. Different categories of lexical collocations were also compared with regard to their frequencies in EFL writing production. The rate of verb-noun and adjective-noun collocations appeared to be the highest and that of noun-verb collocations the lowest. The results also showed that L1 had both positive and negative effects on the occurrence of both grammatical and lexical collocations.
Efectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation
Directory of Open Access Journals (Sweden)
Ahmad Bilfarsah
2005-04-01
Full Text Available Additive Spline Partial Least Squares (ASPLS) is a generalization of the Partial Least Squares (PLS) method. The ASPLS method can accommodate nonlinearity and multicollinearity in the predictor variables. In principle, the ASPLS approach is characterized by two ideas. The first is to use parametric transformations of the predictors by spline functions; the second is to make the ASPLS components mutually uncorrelated, to preserve the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fisheries economics application, specifically tuna fish production.
P-Splines Using Derivative Information
Calderon, Christopher P.; Martinez, Josue G.; Carroll, Raymond J.; Sorensen, Danny C.
2010-01-01
Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information, and P-splines can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale.
Multidimensional splines for modeling FET nonlinearities
Energy Technology Data Exchange (ETDEWEB)
Barby, J A
1986-01-01
Circuit simulators like SPICE and timing simulators like MOTIS are used extensively for critical-path verification of integrated circuits. MOSFET model evaluation dominates the run time of these simulators. Changes in technology result in costly updates, since modifications require reprogramming of the functions and their derivatives. The computational cost of MOSFET models can be reduced by using multidimensional polynomial splines. Since simulators based on the Newton-Raphson algorithm require the function and its first derivative, quadratic splines are sufficient for this purpose. The cost of updating the MOSFET model due to technology changes is greatly reduced, since splines are derived from a set of points. Crucial for the convergence speed of simulators is the fact that MOSFET characteristic equations are monotonic; this must be maintained by any simulation model. The splines the author designed do maintain monotonicity.
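The abstract's central point — that a Newton-Raphson-based simulator needs only the model function and its first derivative, so C1 quadratic splines suffice — can be sketched in one dimension. This is a hedged illustration, not Barby's multidimensional MOSFET model; the `tanh` characteristic is a stand-in for a measured device table.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical monotonic device characteristic (a stand-in for a
# tabulated MOSFET I-V curve; not the model from the abstract).
v = np.linspace(0.0, 5.0, 21)
i = np.tanh(v)

# Quadratic (k=2) spline: C^1 continuous, so both the value and the
# first derivative needed by a Newton-Raphson solver are available.
spl = make_interp_spline(v, i, k=2)
dspl = spl.derivative()

value = float(spl(1.7))   # model evaluation
slope = float(dspl(1.7))  # first derivative for the Jacobian
```

Because the spline is built once from sample points, a technology change only requires re-tabulating the data, not reprogramming derivative code — which is the cost saving the abstract describes.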
On convexity and Schoenberg's variation diminishing splines
International Nuclear Information System (INIS)
Feng, Yuyu; Kozak, J.
1992-11-01
In the paper we characterize a convex function by the monotonicity of a particular variation diminishing spline sequence. The result extends the property known for the Bernstein polynomial sequence. (author). 4 refs
Relation work in collocated and distributed collaboration
DEFF Research Database (Denmark)
Christensen, Lars Rune; Jensen, Rasmus Eskild; Bjørn, Pernille
2014-01-01
Creating social ties is important for collaborative work; however, in geographically distributed organizations, e.g. global software development, making social ties requires extra work: relation work. We find that characteristics of relation work are based upon shared history and experiences......, emergent in personal and often humorous situations. Relation work is intertwined with other activities such as articulation work, and it is rhythmic, following the work patterns of the participants. By comparing how relation work is conducted in collocated and geographically distributed settings, we...... in this paper identify basic differences in relation work. Whereas collocated relation work is spontaneous, place-centric, and yet mobile, relation work in a distributed setting is semi-spontaneous, technology-mediated, and requires extra effort....
P-Splines Using Derivative Information
Calderon, Christopher P.
2010-01-01
Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information about complex biomolecular systems. We demonstrate how a collection of Penalized-splines (P-splines) can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale. P-splines assist in "semiparametrically" estimating nonlinear SDEs in situations where a time-dependent external force is applied to a single-molecule system. The P-splines introduced simultaneously use function and derivative scatterplot information to refine curve estimates. We refer to the approach as the PuDI (P-splines using Derivative Information) method. It is shown how generalized least squares ideas fit seamlessly into the PuDI method. Applications demonstrating how utilizing uncertainty information/approximations along with generalized least squares techniques improve PuDI fits are presented. Although the primary application here is in estimating nonlinear SDEs, the PuDI method is applicable to situations where both unbiased function and derivative estimates are available.
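The core PuDI idea — letting function and derivative observations jointly constrain a single curve estimate — can be sketched with ordinary least squares on a polynomial basis. This is a simplified stand-in: the paper uses P-splines and generalized least squares, whereas the data and basis below are synthetic illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
f_obs = np.sin(2 * np.pi * t) + 0.01 * rng.normal(size=t.size)              # function data
df_obs = 2 * np.pi * np.cos(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)  # derivative data

# One polynomial basis, two design matrices: rows of V evaluate the
# basis at the sample points, rows of D evaluate its derivative there.
deg = 7
powers = np.arange(deg, -1, -1)            # t^deg, ..., t^0
V = t[:, None] ** powers
D = np.zeros_like(V)
D[:, :-1] = powers[:-1] * t[:, None] ** (powers[:-1] - 1)

# Stack both kinds of observations so they constrain the same fit
A = np.vstack([V, D])
b = np.concatenate([f_obs, df_obs])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
fit = V @ coef
```

In the paper's setting the derivative rows would additionally be weighted by their (larger) noise level via generalized least squares; here both blocks enter with unit weight for brevity.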
Linear Methods for Image Interpolation
Pascal Getreuer
2011-01-01
We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
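The "separable" point — that N-dimensional interpolation reduces to repeated one-dimensional interpolation — can be sketched for the bilinear case. A minimal sketch assuming a uniform unit-spaced grid; the function names are ours, not from the paper.

```python
import numpy as np

def interp1_linear(samples, x):
    """Linear interpolation of uniformly spaced 1-D samples at position x."""
    x = np.clip(x, 0, len(samples) - 1)
    i0 = int(np.floor(x))
    i1 = min(i0 + 1, len(samples) - 1)
    t = x - i0
    return (1 - t) * samples[i0] + t * samples[i1]

def interp2_bilinear(img, x, y):
    """Separable bilinear: interpolate each row at x, then across rows at y."""
    col = np.array([interp1_linear(row, x) for row in img])
    return interp1_linear(col, y)

img = np.array([[0.0, 1.0], [2.0, 3.0]])
center = interp2_bilinear(img, 0.5, 0.5)   # average of the four pixels
```

Nearest neighbor, bicubic, spline, and sinc interpolation follow the same pattern: swap the 1-D kernel and reuse the row-then-column composition for any number of dimensions.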
Semantic Analysis of Verbal Collocations with Lexical Functions
Gelbukh, Alexander
2013-01-01
This book is written both for linguists and computer scientists working in the field of artificial intelligence, as well as for anyone interested in intelligent text processing. Lexical function is a concept that formalizes semantic and syntactic relations between lexical units. Collocational relation is a type of institutionalized lexical relation which holds between the base and its partner in a collocation. Knowledge of collocation is important for natural language processing because collocation comprises the restrictions on how words can be used together. The book shows how collocations can be annotated with lexical functions in a computer-readable dictionary, allowing their precise semantic analysis in texts and their effective use in natural language applications including parsers, high-quality machine translation, periphrasis systems, and computer-aided learning of lexica. The book also shows how to extract collocations from corpora and annotate them with lexical functions automatically. To train algorithms,...
On Collocations and Their Interaction with Parsing and Translation
Directory of Open Access Journals (Sweden)
Violeta Seretan
2013-10-01
Full Text Available We address the problem of automatically processing collocations—a subclass of multi-word expressions characterized by a high degree of morphosyntactic flexibility—in the context of two major applications, namely, syntactic parsing and machine translation. We show that parsing and collocation identification are processes that are interrelated and that benefit from each other, inasmuch as syntactic information is crucial for acquiring collocations from corpora and, vice versa, collocational information can be used to improve parsing performance. Similarly, we focus on the interrelation between collocations and machine translation, highlighting the use of translation information for multilingual collocation identification, as well as the use of collocational knowledge for improving translation. We give a panorama of the existing relevant work, and we parallel the literature surveys with our own experiments involving a symbolic parser and a rule-based translation system. The results show a significant improvement over approaches in which the corresponding tasks are decoupled.
A Bayesian-optimized spline representation of the electrocardiogram
International Nuclear Information System (INIS)
Guilak, F G; McNames, J
2013-01-01
We introduce an implementation of a novel spline framework for parametrically representing electrocardiogram (ECG) waveforms. This implementation enables a flexible means to study ECG structure in large databases. Our algorithm allows researchers to identify key points in the waveform and optimally locate them in long-term recordings with minimal manual effort, thereby permitting analysis of trends in the points themselves or in metrics derived from their locations. In the work described here we estimate the location of a number of commonly-used characteristic points of the ECG signal, defined as the onsets, peaks, and offsets of the P, QRS, T, and R′ waves. The algorithm applies Bayesian optimization to a linear spline representation of the ECG waveform. The location of the knots—which are the endpoints of the piecewise linear segments used in the spline representation of the signal—serve as the estimate of the waveform’s characteristic points. We obtained prior information of knot times, amplitudes, and curvature from a large manually-annotated training dataset and used the priors to optimize a Bayesian figure of merit based on estimated knot locations. In cases where morphologies vary or are subject to noise, the algorithm relies more heavily on the estimated priors for its estimate of knot locations. We compared optimized knot locations from our algorithm to two sets of manual annotations on a prospective test data set comprising 200 beats from 20 subjects not in the training set. Mean errors of characteristic point locations were less than four milliseconds, and standard deviations of errors compared favorably against reference values. This framework can easily be adapted to include additional points of interest in the ECG signal or for other biomedical detection problems on quasi-periodic signals. (paper)
Directory of Open Access Journals (Sweden)
Van Than Dung
Full Text Available B-spline functions are widely used in many industrial applications such as computer graphic representations, computer-aided design, computer-aided manufacturing, computer numerical control, etc. Recently, there have been demands, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps, or turning points from sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is therefore obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and by deterministic parametric functions. This paper also discusses benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
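The final step the abstract describes — solving an ordinary least squares problem for the B-spline once knots are fixed — maps directly onto SciPy's `LSQUnivariateSpline`. A sketch with uniform interior knots on synthetic data; the paper's bisection and knot-optimization steps are not reproduced here.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(6.0 * x) + 0.01 * rng.normal(size=x.size)   # noisy sampled data

# Given interior knots (here uniform for simplicity), the B-spline
# coefficients come from an ordinary least squares fit.
interior_knots = np.linspace(0.1, 0.9, 9)
spl = LSQUnivariateSpline(x, y, interior_knots, k=3)
max_err = float(np.max(np.abs(spl(x) - np.sin(6.0 * x))))
```

In the paper's scheme the knot vector itself would first be coarsened by bisection and then refined by nonlinear least squares; only the terminal linear solve is shown above.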
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
2010-10-01
... standards or any other performance standards. An incumbent LEC that denies collocation of a competitor's equipment, citing safety standards, must provide to the competitive LEC within five business days of the... incumbent LEC contends the competitor's equipment fails to meet. This affidavit must set forth in detail...
SPLINE REGRESSION MODELING (Case Study: Herpindo Jaya, Ngaliyan Branch)
Directory of Open Access Journals (Sweden)
I MADE BUDIANTARA PUTRA
2015-06-01
Full Text Available Regression analysis is a method of data analysis for describing the relationship between response variables and predictor variables. There are two approaches to estimating the regression function: parametric and nonparametric. The parametric approach is used when the relationship between the predictor variables and the response variables is known, i.e. the shape of the regression curve is known. The nonparametric approach is used when the form of the relationship between the response and predictor variables is unknown, or when there is no information about the form of the regression function. The aims of this study are to determine the best spline nonparametric regression model for data on product quality, price, and advertising in relation to purchasing decisions for Yamaha motorcycles, with optimal knot points, and to compare it with multiple linear regression based on the coefficient of determination (R2) and mean square error (MSE). The optimal knot points are defined by two knot points. The result of this analysis is that, for these data, multiple linear regression is better than spline regression.
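The comparison criterion in the abstract — a two-knot spline versus linear regression judged by MSE and R2 — can be sketched on synthetic data. The study's actual sales data are not available, so the numbers below are illustrative; note that for the study's data the linear model won, whereas the deliberately kinked data here favor the spline.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 10.0, 100))
# Hypothetical response with a kink at x = 4
y = np.where(x < 4.0, x, 8.0 - x) + 0.2 * rng.normal(size=x.size)

def fit_stats(A, y):
    """Ordinary least squares fit; return (MSE, R^2)."""
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(np.mean(resid ** 2)), float(1.0 - resid.var() / y.var())

A_linear = np.column_stack([np.ones_like(x), x])
# Linear spline with two knot points (at 3 and 5), truncated-power basis
A_spline = np.column_stack([np.ones_like(x), x,
                            np.maximum(x - 3.0, 0.0),
                            np.maximum(x - 5.0, 0.0)])
mse_lin, r2_lin = fit_stats(A_linear, y)
mse_spl, r2_spl = fit_stats(A_spline, y)
```

Which model "wins" by these criteria depends entirely on whether the underlying relationship bends; that is exactly the comparison the study performs on its purchasing-decision data.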
Verb-Noun Collocation Proficiency and Academic Years
Directory of Open Access Journals (Sweden)
Fatemeh Ebrahimi-Bazzaz
2014-01-01
Full Text Available Generally, vocabulary, and collocations in particular, play significant roles in language proficiency. A collocation comprises two words that are frequently joined together in the memory of native speakers. There have been many linguistic studies trying to define, describe, and categorise English collocations, covering grammatical collocations and lexical collocations, which include nouns, adjectives, verbs, and adverbs. In the context of a foreign language environment such as Iran, collocational proficiency can be useful because it helps students improve their language proficiency. This paper investigates the possible relationship between verb-noun collocation proficiency among students from one academic year to the next. To reach this goal, a test of verb-noun collocations was administered to Iranian learners. The participants in the study were 212 students at an Iranian university, selected from the second term of the freshman, sophomore, junior, and senior years. The students' ages ranged from 18 to 35. The results of ANOVA showed that there was variability in verb-noun collocation proficiency within each academic year and between the four academic years. The results of post hoc multiple comparison tests demonstrated that the means are significantly different between the first year and the third and fourth years, and between the third and fourth academic years; however, students require at least two years to show significant development in verb-noun collocation proficiency. These findings provide a vital implication: lexical collocations are learnt and developed throughout four academic years of university, but at least two years are required to show significant development in language proficiency.
Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year
Kamaruddin, Halim Shukri; Ismail, Noriszura
2014-06-01
Nonparametric regression uses the data to derive the best coefficients of a model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are used for one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance company calculations, in which the accuracy of models and forecasts is the main concern of the industry. Here the original idea of P-splines is extended to two-dimensional mortality data, indexed by age of death and year of death, with the large data set supplied by the Department of Statistics Malaysia. This extension constructs the best fitted surface and provides sensible predictions of the underlying mortality rate in Malaysia.
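The Eilers-Marx construction the abstract builds on — a rich B-spline basis combined with a difference penalty on the coefficients — can be sketched in one dimension. This is a hedged illustration on synthetic data: the basis size and penalty weight are arbitrary choices, and the Gaussian identity-link case stands in for the GLM setting (and for the paper's two-dimensional age-year extension).

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_bases=20, lam=0.1, k=3):
    """1-D P-spline: uniform B-spline basis, 2nd-order difference penalty."""
    xl, xr = x.min(), x.max()
    dx = (xr - xl) / (n_bases - k)
    knots = xl + dx * np.arange(-k, n_bases + 1)        # uniform knot vector
    # Design matrix: evaluate each B-spline basis function at the data
    B = np.column_stack([
        BSpline(knots, (np.arange(n_bases) == j).astype(float), k)(x)
        for j in range(n_bases)
    ])
    D = np.diff(np.eye(n_bases), n=2, axis=0)           # 2nd differences
    # Penalized normal equations: (B'B + lam D'D) a = B'y
    coef = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    return B @ coef

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 150)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)
smooth = pspline_fit(x, y)
```

In the two-dimensional mortality setting the design matrix becomes a Kronecker product of age and year bases, with a difference penalty applied along each direction.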
Deconvolution using thin-plate splines
International Nuclear Information System (INIS)
Toussaint, Udo v.; Gori, Silvio
2007-01-01
The ubiquitous problem of estimating 2-dimensional profile information from a set of line-integrated measurements is tackled with Bayesian probability theory by exploiting prior information about local smoothness. For this purpose thin-plate splines (the 2-D minimal curvature analogue of cubic splines in 1-D) are employed. The optimal number of support points required for inversion of 2-D tomographic problems is determined using model comparison. Properties of this approach are discussed and the question of suitable priors is addressed. Finally, we illustrate the properties of this approach with 2-D inversion results using data from line-integrated measurements from fusion experiments
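The thin-plate spline itself — the 2-D minimal-curvature analogue of the cubic spline mentioned in the abstract — is available directly in SciPy. The sketch below only shows scattered-data reconstruction of a smooth synthetic 2-D profile, not the Bayesian line-integral inversion of the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)
pts = rng.uniform(-1.0, 1.0, (200, 2))           # scattered support points
vals = np.exp(-4.0 * (pts ** 2).sum(axis=1))     # smooth synthetic "profile"

# Thin-plate spline interpolant through the scattered samples
tps = RBFInterpolator(pts, vals, kernel='thin_plate_spline')
recon = tps(np.array([[0.0, 0.0], [0.5, 0.5]]))
```

In the tomographic setting the support-point values are not observed directly; they are inferred from line integrals, with the number of support points selected by Bayesian model comparison as the abstract describes.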
COLLOCATION PHRASES IN RELATION TO OTHER LEXICAL PHRASES IN CROATIAN
Directory of Open Access Journals (Sweden)
Goranka Blagus Bartolec
2012-01-01
Full Text Available The paper analyzes the semantic and lexicological aspects of collocation phrases in Croatian, with the aim of distinguishing them from other lexical phrases in Croatian (terms, idioms, names). The collocation phrase is defined as a special lexical phrase at the syntagmatic level, based on the semantic correlation of two individual lexical components in which their meanings are specified.
Learning and Teaching L2 Collocations: Insights from Research
Szudarski, Pawel
2017-01-01
The aim of this article is to present and summarize the main research findings in the area of learning and teaching second language (L2) collocations. Being a large part of naturally occurring language, collocations and other types of multiword units (e.g., idioms, phrasal verbs, lexical bundles) have been identified as important aspects of L2…
Teachability of collocations: The role of word frequency counts ...
African Journals Online (AJOL)
... beginner/low-intermediate students and only exceed the 2 000-word band from the upper-intermediate learning stage onwards, a suggestion in line with Nation's (2006) discussion on how to teach vocabulary. Keywords: collocation size, controlled productive knowledge, teachability of collocations, word frequency counts, ...
First-year University Students' Productive Knowledge of Collocations ...
African Journals Online (AJOL)
The present study examines productive knowledge of collocations of tertiary-level second language (L2) learners of English in an attempt to make estimates of the size of their knowledge. Participants involved first-year students at North-West University who sat a collocation test modelled on that developed by Laufer and ...
Collocations and Grammatical Patterns in a Multilingual Online Term ...
African Journals Online (AJOL)
This article considers the importance of including various types of collocations in a terminological database, with the aim of making this information available to the user via the user interface. We refer specifically to the inclusion of empirical and phraseological collocations, and information on grammatical patterning.
Collocations of High Frequency Noun Keywords in Prescribed Science Textbooks
Menon, Sujatha; Mukundan, Jayakaran
2012-01-01
This paper analyses the discourse of science through the study of collocational patterns of high frequency noun keywords in science textbooks used by upper secondary students in Malaysia. Research has shown that one of the areas of difficulty in science discourse concerns lexis, especially that of collocations. This paper describes a corpus-based…
Limit Stress Spline Models for GRP Composites | Ihueze | Nigerian ...
African Journals Online (AJOL)
Spline functions were established on the assumption of three intervals and the fitting of quadratic and cubic splines to critical stress-strain response data. Quadratic ... of data points. A spline model is therefore recommended, as it evaluates the function over subintervals, eliminating the error associated with wide-range interpolation.
Scripted Bodies and Spline Driven Animation
DEFF Research Database (Denmark)
Erleben, Kenny; Henriksen, Knud
2002-01-01
In this paper we will take a close look at the details and technicalities in applying spline driven animation to scripted bodies in the context of dynamic simulation. The main contributions presented in this paper are methods for computing velocities and accelerations in the time domain...
Varlamova, Elena V.; Naciscione, Anita; Tulusina, Elena A.
2016-01-01
Relevance of the issue stated in the article is determined by the fact that there is a lack of research devoted to the methods of teaching English and German collocations. The aim of our work is to determine methods of teaching English and German collocations to Russian university students studying foreign languages through experimental testing.…
Stochastic Collocation Applications in Computational Electromagnetics
Directory of Open Access Journals (Sweden)
Dragan Poljak
2018-01-01
Full Text Available The paper reviews the application of deterministic-stochastic models in some areas of computational electromagnetics. Namely, in certain problems there is uncertainty in the input data set, as some properties of a system are partly or entirely unknown. Thus, a simple stochastic collocation (SC) method is used to determine relevant statistics about given responses. The SC approach also provides an assessment of the related confidence intervals in the set of calculated numerical results. The expansion of the statistical output in terms of mean and variance over a polynomial basis, via the SC method, is shown to be a robust and efficient approach providing a satisfactory convergence rate. This review paper provides computational examples from previous work by the authors illustrating the successful application of the SC technique in the areas of ground penetrating radar (GPR), human exposure to electromagnetic fields, and buried lines and grounding systems.
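A minimal instance of the SC recipe in the abstract — run the deterministic model at quadrature (collocation) points of the random input, then recombine the runs into a mean and variance — using a made-up closed-form "solver" in place of a full electromagnetic code:

```python
import numpy as np

# Gaussian uncertain input parameter: xi ~ N(mu, sigma^2)
mu, sigma = 1.0, 0.2
nodes, weights = np.polynomial.hermite_e.hermegauss(7)  # probabilists' Hermite

def deterministic_model(p):
    """Stand-in for a deterministic EM solver (hypothetical closed form)."""
    return p ** 2 + 1.0

# Deterministic runs at the collocation points, then weighted recombination
runs = deterministic_model(mu + sigma * nodes)
norm = np.sqrt(2.0 * np.pi)            # hermegauss weights sum to sqrt(2*pi)
mean = np.sum(weights * runs) / norm
var = np.sum(weights * runs ** 2) / norm - mean ** 2
```

Because the response here is polynomial in the input, the 7-point rule recovers the exact moments; for a real solver the node count controls the convergence rate the abstract refers to, and confidence intervals follow from the computed mean and variance.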
Applications of the spline filter for areal filtration
International Nuclear Information System (INIS)
Tong, Mingsi; Zhang, Hao; Ott, Daniel; Chu, Wei; Song, John
2015-01-01
This paper proposes a general-use isotropic areal spline filter. This new areal spline filter can achieve isotropy by approximating the transmission characteristic of the Gaussian filter. It can also eliminate the effect of void areas using a weighting factor, and resolve end-effect issues by applying new boundary conditions, which replace the first-order finite difference in the traditional spline formulation. These improvements make the spline filter widely applicable to 3D surfaces and extend its applications in areal filtration. (technical note)
Energy Technology Data Exchange (ETDEWEB)
Viswanathan, K.K.; Kim, Kyung Su; Lee, Jang Hyun [Inha Univ., Incheon (Korea). Dept. of Naval Architecture and Ocean Engineering
2009-12-15
Asymmetric free vibrations of annular cross-ply circular plates are studied using spline function approximation. The governing equations are formulated including the effects of shear deformation and rotary inertia. Assumptions are made to study the cross-ply layered plates. A system of coupled differential equations is obtained in terms of displacement and rotational functions. These functions are approximated using Bickley-type spline functions of suitable order. The system is then converted into an eigenvalue problem by applying the point collocation technique and suitable boundary conditions. Parametric studies have been made to investigate the effect of transverse shear deformation and rotary inertia on the frequency parameter with respect to the circumferential node number, radii ratio, and thickness-to-radius ratio for both symmetric and anti-symmetric cross-ply plates using various types of material properties. (orig.)
Productive knowledge of collocations may predict academic literacy
Directory of Open Access Journals (Sweden)
Van Dyk, Tobie
2016-12-01
Full Text Available The present study examines the relationship between productive knowledge of collocations and academic literacy among first-year students at North-West University. Participants were administered a collocation test, the items of which were selected from Nation's (2006) word frequency bands, i.e. the 2000-word, 3000-word, and 5000-word bands, and the Academic Word List (Coxhead, 2000). The scores from the collocation test were compared to those from the Test of Academic Literacy Levels (version administered in 2012). The results of this study indicate that, overall, knowledge of collocations is significantly correlated with academic literacy, which is also observed at each of the frequency bands from which the items were selected. These results support Nizonkiza's (2014) finding that a significant correlation exists between mastery of collocations of words from the Academic Word List and academic literacy; this is extended here to words from other frequency bands. They also confirm previous findings that productive knowledge of collocations increases alongside overall proficiency (cf. Gitsaki, 1999; Bonk, 2001; Eyckmans et al., 2004; Boers et al., 2006; Nizonkiza, 2011; among others). This study therefore concludes that growth in productive knowledge of collocations may entail growth in academic literacy, suggesting that productive use of collocations is linked to academic literacy to a considerable extent. In light of these findings, teaching strategies aimed at assisting first-year students to meet the academic demands posed by higher education, and avenues to explore for further research, are discussed. In particular, we suggest adopting a production-oriented approach to teaching collocations, which we believe may prove useful.
Bessel collocation approach for approximate solutions of Hantavirus infection model
Directory of Open Access Journals (Sweden)
Suayip Yuzbasi
2017-11-01
Full Text Available In this study, a collocation method is introduced to find approximate solutions of the Hantavirus infection model, which is a system of nonlinear ordinary differential equations. The method is based on the Bessel functions of the first kind, matrix operations, and collocation points. It converts the Hantavirus infection model into a matrix equation in terms of the Bessel functions of the first kind, matrix operations, and collocation points. The matrix equation corresponds to a system of nonlinear equations with unknown Bessel coefficients. The reliability and efficiency of the suggested scheme are demonstrated by numerical applications, and all numerical calculations have been done using a program written in Maple.
Modified Chebyshev Collocation Method for Solving Differential Equations
Directory of Open Access Journals (Sweden)
M Ziaul Arif
2015-05-01
Full Text Available This paper presents the derivation of an alternative numerical scheme for solving differential equations: modified Chebyshev (Vieta-Lucas polynomial) collocation differentiation matrices. The modified Chebyshev (Vieta-Lucas polynomial) collocation method is applied to both Ordinary Differential Equation (ODE) and Partial Differential Equation (PDE) cases. Finally, the performance of the proposed method is compared with the finite difference method and with the exact solution of the example. It is shown that the modified Chebyshev collocation method is more effective and accurate than the FDM for the examples given.
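The differentiation-matrix mechanics behind such schemes can be sketched with the standard Chebyshev points; the paper's modified Vieta-Lucas basis is not reproduced here. The sketch solves u'' = e^x on [-1, 1] with homogeneous Dirichlet conditions and compares against the exact solution u = e^x - sinh(1) x - cosh(1).

```python
import numpy as np

def cheb(n):
    """Chebyshev collocation differentiation matrix on n+1 points
    (Trefethen's classic construction, translated to NumPy)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    return D - np.diag(D.sum(axis=1)), x        # rows of D sum to zero

# Collocation solve of u'' = exp(x), u(-1) = u(1) = 0
D, x = cheb(16)
D2 = (D @ D)[1:-1, 1:-1]          # strip boundary rows/columns (Dirichlet)
u = np.zeros(x.size)
u[1:-1] = np.linalg.solve(D2, np.exp(x[1:-1]))

exact = np.exp(x) - np.sinh(1.0) * x - np.cosh(1.0)
```

With only 17 points the error is near machine precision — the spectral accuracy that makes collocation schemes competitive with finite differences, which is the comparison the paper draws.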
Developing and Evaluating a Web-Based Collocation Retrieval Tool for EFL Students and Teachers
Chen, Hao-Jan Howard
2011-01-01
The development of adequate collocational knowledge is important for foreign language learners; nonetheless, learners often have difficulties in producing proper collocations in the target language. Among the various ways of learning collocations, the DDL (data-driven learning) approach encourages independent learning of collocations and allows…
Corpus-Aided Business English Collocation Pedagogy: An Empirical Study in Chinese EFL Learners
Chen, Lidan
2017-01-01
This paper reports an empirical study of explicit instruction in corpus-aided Business English collocations and verifies its effectiveness in improving learners' collocation awareness and learner autonomy, resulting in significant improvement of learners' collocation competence. An eight-week instruction in keywords' collocations,…
Some splines produced by smooth interpolation
Czech Academy of Sciences Publication Activity Database
Segeth, Karel
2018-01-01
Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub
Marginal longitudinal semiparametric regression via penalized splines
Al Kadiri, M.
2010-08-01
We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate approaches to efficient estimation have been proposed, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.
Marginal longitudinal semiparametric regression via penalized splines
Al Kadiri, M.; Carroll, R.J.; Wand, M.P.
2010-01-01
We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate approaches to efficient estimation have been proposed, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.
A collocation--Galerkin finite element model of cardiac action potential propagation.
Rogers, J M; McCulloch, A D
1994-08-01
A new computational method was developed for modeling the effects of the geometric complexity, nonuniform muscle fiber orientation, and material inhomogeneity of the ventricular wall on cardiac impulse propagation. The method was used to solve a modification to the FitzHugh-Nagumo system of equations. The geometry, local muscle fiber orientation, and material parameters of the domain were defined using linear Lagrange or cubic Hermite finite element interpolation. Spatial variations of time-dependent excitation and recovery variables were approximated using cubic Hermite finite element interpolation, and the governing finite element equations were assembled using the collocation method. To overcome the deficiencies of conventional collocation methods on irregular domains, Galerkin equations for the no-flux boundary conditions were used instead of collocation equations for the boundary degrees-of-freedom. The resulting system was evolved using an adaptive Runge-Kutta method. Converged two-dimensional simulations of normal propagation showed that this method requires less CPU time than a traditional finite difference discretization. The model also reproduced several other physiologic phenomena known to be important in arrhythmogenesis, including Wenckebach periodicity, slowed propagation and unidirectional block due to wavefront curvature, reentry around a fixed obstacle, and spiral wave reentry. In a new result, we observed wavespeed variations and block due to nonuniform muscle fiber orientation. The findings suggest that the finite element method is suitable for studying normal and pathological cardiac activation and has significant advantages over existing techniques.
Energy Technology Data Exchange (ETDEWEB)
Yankov, A.; Downar, T. [University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States)
2013-07-01
Recent efforts in the application of uncertainty quantification to nuclear systems have utilized methods based on generalized perturbation theory and stochastic sampling. While these methods have proven to be effective, they both have major drawbacks that may impede further progress. A relatively new approach based on spectral elements for uncertainty quantification is applied in this paper to several problems in reactor simulation. Spectral methods based on collocation attempt to couple the approximation-free nature of stochastic sampling methods with the determinism of generalized perturbation theory. The specific spectral method used in this paper employs both the Smolyak algorithm and adaptivity by using Newton-Cotes collocation points along with linear hat basis functions. Using this approach, a surrogate model for the outputs of a computer code is constructed hierarchically by adaptively refining the collocation grid until the interpolant is converged to a user-defined threshold. The method inherently fits into the framework of parallel computing and allows for the extraction of meaningful statistics and data that are not within reach of stochastic sampling and generalized perturbation theory. This paper aims to demonstrate the advantages of spectral methods, especially when compared to the methods currently used in reactor physics for uncertainty quantification, and to illustrate their full potential. (authors)
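The adaptive collocation idea described above (refine a hat-function interpolant until the hierarchical surplus at candidate nodes drops below a threshold) can be sketched in one dimension; the refinement rule and the test function in the usage below are illustrative assumptions, not the authors' Smolyak implementation:

```python
def interp(points, x):
    """Piecewise-linear ("hat basis") interpolation through sorted (x, y) pairs."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return (1 - t) * y0 + t * y1
    raise ValueError("x outside grid")

def adaptive_grid(f, a, b, tol):
    """Refine until the surplus |f(m) - interpolant(m)| at every interval
    midpoint is below tol; the surviving nodes define the surrogate."""
    nodes = [(a, f(a)), (b, f(b))]
    while True:
        mids = [(x0 + x1) / 2 for (x0, _), (x1, _) in zip(nodes, nodes[1:])]
        new = [m for m in mids if abs(f(m) - interp(nodes, m)) > tol]
        if not new:
            return nodes
        nodes = sorted(nodes + [(m, f(m)) for m in new])
```

For a smooth model output such as f(x) = x², the grid stops refining once the local interpolation error falls below the tolerance, yielding a cheap surrogate that can be evaluated in place of the code.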
Geometric and computer-aided spline hob modeling
Brailov, I. G.; Myasoedova, T. M.; Panchuk, K. L.; Krysova, I. V.; Rogoza, Yu. A.
2018-03-01
The paper considers the acquisition of a geometric model of a spline hob. The objective of the research is the development of a mathematical model of a spline hob for spline shaft machining. The structure of the spline hob is described taking into consideration the motion parameters of the machine tool system for cutting edge positioning and orientation. The computer-aided study is performed with the use of CAD on the basis of 3D modeling methods. Vector representation of cutting edge geometry is adopted as the principal method for developing the spline hob mathematical model. The paper defines the correlations, described by parametric vector functions, representing helical cutting edges designed for spline shaft machining, with consideration for helical movement in two dimensions. An application for acquiring the 3D model of the spline hob is developed on the basis of AutoLISP for the AutoCAD environment. The application makes it possible to use the acquired model for milling process simulation. An example of evaluation, analytical representation and computer modeling of the proposed geometric model is reviewed. In this example, a calculation of key spline hob parameters is performed, ensuring the capability of hobbing a spline shaft of standard design. The polygonal and solid spline hob 3D models are acquired through simulation-based computer modeling.
RBF Multiscale Collocation for Second Order Elliptic Boundary Value Problems
Farrell, Patricio; Wendland, Holger
2013-01-01
In this paper, we discuss multiscale radial basis function collocation methods for solving elliptic partial differential equations on bounded domains. The approximate solution is constructed in a multilevel fashion, each level using compactly
Collocations and grammatical patterns in a Multilingual Online Term ...
African Journals Online (AJOL)
equivalents for key concepts in the African languages, but also additional con- ... for, inter alia, computational identification and extraction of collocations exist; .... sult' is to be followed by a prepositional phrase in which the preposition is.
Recent advances in radial basis function collocation methods
Chen, Wen; Chen, C S
2014-01-01
This book surveys the latest advances in radial basis function (RBF) meshless collocation methods, with emphasis on recent novel kernel RBFs and new numerical schemes for solving partial differential equations. The RBF collocation methods are inherently free of integration and mesh, and avoid the tedious mesh generation involved in standard finite element and boundary element methods. This book focuses primarily on the numerical algorithms and engineering applications, and highlights a large class of novel boundary-type RBF meshless collocation methods. These methods have shown a clear edge over traditional numerical techniques, especially for problems involving infinite domains, moving boundaries, thin-walled structures, and inverse problems. Due to the rapid development in RBF meshless collocation methods, there is a need to summarize all these new materials so that they are available to scientists, engineers, and graduate students who are interested in applying these newly developed methods for solving real world’s ...
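As a minimal sketch of the kernel-based idea the book surveys, plain Gaussian-RBF interpolation places one basis function at each data point and solves a small dense linear system for the coefficients; the kernel width eps and the hand-rolled solver are assumptions for illustration, not the book's boundary-type collocation schemes:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            r = M[i][k] / M[k][k]
            M[i] = [a - r * c for a, c in zip(M[i], M[k])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_fit(xs, fs, eps=1.0):
    """Coefficients c solving A c = f, A_ij = exp(-(eps*(x_i - x_j))^2)."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(xi - xj) for xj in xs] for xi in xs]
    return solve(A, fs)

def rbf_eval(xs, coef, x, eps=1.0):
    """Evaluate the RBF expansion at x."""
    return sum(c * math.exp(-(eps * (x - xc)) ** 2) for c, xc in zip(coef, xs))
```

By construction the expansion reproduces the data exactly at the centers, which is the mesh-free analogue of enforcing a PDE at collocation points.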
The Use of English Collocations in Reader's Digest
Sinaga, Yudita Putri Nurani; Sinaga, Lidiman Sahat Martua
2014-01-01
This descriptive qualitative study is aimed at identifying and describing the types of free collocations found in the articles of Reader's Digest. Taking a sample of ten articles from different months for each year from 2003 to 2012, it was found that all four productive free collocation types were present in the data. Type 4 (Determiner + Adjective + Noun) was the dominant type (53.92 %). This was possible because the adjective in the pattern included the present participle and past participle of v...
A splitting algorithm for the wavelet transform of cubic splines on a nonuniform grid
Sulaimanov, Z. M.; Shumilov, B. M.
2017-10-01
For cubic splines with nonuniform nodes, splitting with respect to the even and odd nodes is used to obtain a wavelet expansion algorithm in the form of the solution to a tridiagonal system of linear algebraic equations for the coefficients. Hand computations are used to investigate the application of this algorithm to numerical differentiation. The results are illustrated by solving a prediction problem.
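Tridiagonal systems of the kind mentioned above are typically solved in linear time with the Thomas algorithm; the sketch below shows that generic solver (the paper's particular wavelet-splitting matrices are not reproduced here):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal,
    d = right-hand side. a[0] and c[-1] are ignored."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in reversed(range(n - 1)):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The O(n) cost of this solver is what makes spline-coefficient systems cheap even for long data series.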
Image edges detection through B-Spline filters
International Nuclear Information System (INIS)
Mastropiero, D.G.
1997-01-01
B-Spline signal processing was used to detect the edges of a digital image. This technique is based on processing the image in the Spline transform domain instead of the space domain (classical processing). Transformation to the Spline transform domain means finding the real coefficients that make it possible to interpolate the grey levels of the original image with a B-Spline polynomial. There are basically two methods of carrying out this interpolation, which gives rise to two different Spline transforms: an exact interpolation of the grey values (direct Spline transform), and an approximate interpolation (smoothing Spline transform). The latter results in a smoother grey distribution function defined by the Spline transform coefficients, and is carried out with the aim of obtaining an edge detection algorithm with higher immunity to noise. Finally, the transformed image was processed in order to detect the edges of the original image (the gradient method was used), and the results of the three methods (classical, direct Spline transform and smoothing Spline transform) were compared. As expected, the smoothing Spline transform technique produced a detection algorithm more immune to external noise. On the other hand, the direct Spline transform technique emphasizes the edges even more than the classical method. As far as computing time is concerned, the classical method is clearly the fastest one, and may be applied whenever the presence of noise is not important and edges with high detail are not required in the final image. (author). 9 refs., 17 figs., 1 tab
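A minimal one-dimensional sketch of the gradient step described above, with a box filter standing in for the smoothing-Spline transform (an assumed simplification, not the B-Spline transform itself):

```python
def moving_average(signal, w=3):
    """Crude stand-in for the smoothing step: a centred box filter."""
    half = w // 2
    return [sum(signal[max(0, i - half):i + half + 1]) /
            len(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def gradient_edges(signal, threshold):
    """Central-difference gradient; return indices where |gradient| > threshold."""
    grad = [0.0] + [(signal[i + 1] - signal[i - 1]) / 2.0
                    for i in range(1, len(signal) - 1)] + [0.0]
    return [i for i, g in enumerate(grad) if abs(g) > threshold]
```

Running the gradient on the smoothed signal trades edge sharpness for noise immunity, which is the trade-off between the direct and smoothing transforms reported in the abstract.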
Recursive B-spline approximation using the Kalman filter
Directory of Open Access Journals (Sweden)
Jens Jauch
2017-02-01
Full Text Available This paper proposes a novel recursive B-spline approximation (RBA) algorithm which approximates an unbounded number of data points with a B-spline function and achieves lower computational effort compared with previous algorithms. Conventional recursive algorithms based on the Kalman filter (KF) restrict the approximation to a bounded and predefined interval. Conversely, RBA includes a novel shift operation that makes it possible to shift estimated B-spline coefficients in the state vector of a KF. This makes it possible to adapt, at run-time, the interval in which the B-spline function approximates data points.
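The recursive Kalman-filter machinery that RBA builds on can be illustrated in its simplest scalar form, estimating a constant from a stream of measurements; the noise variance r and prior variance p0 are assumed values, and the paper's shift operation is not shown:

```python
def kalman_constant(measurements, r=1.0, p0=100.0):
    """Scalar Kalman filter for a constant state: each measurement z
    updates the estimate x and its variance p recursively."""
    x, p = 0.0, p0
    for z in measurements:
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # measurement update
        p = (1 - k) * p        # variance update
    return x
```

In RBA the scalar state is replaced by a vector of B-spline coefficients, and the shift operation moves coefficients out of (and into) that state vector as the approximation interval slides.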
Improvement of neutron kinetics module in TRAC-BF1code: one-dimensional nodal collocation method
Energy Technology Data Exchange (ETDEWEB)
Jambrina, Ana; Barrachina, Teresa; Miro, Rafael; Verdu, Gumersindo, E-mail: ajambrina@iqn.upv.es, E-mail: tbarrachina@iqn.upv.es, E-mail: rmiro@iqn.upv.es, E-mail: gverdu@iqn.upv.es [Universidade Politecnica de Valencia (UPV), Valencia (Spain); Soler, Amparo, E-mail: asoler@iberdrola.es [SEA Propulsion S.L., Madrid (Spain); Concejal, Alberto, E-mail: acbe@iberdrola.es [Iberdrola Ingenieria y Construcion S.A.U., Madrid (Spain)
2013-07-01
The TRAC-BF1 one-dimensional kinetic model is a formulation of the neutron diffusion equation in the two-energy-group approximation, based on the analytical nodal method (ANM). The advantage compared with a zero-dimensional kinetic model is that the axial power profile may vary with time due to thermal-hydraulic parameter changes and/or actions of the control systems, but it has the disadvantage that in unusual situations it fails to converge. The nodal collocation method developed for the neutron diffusion equation and applied to the kinetics resolution of TRAC-BF1 thermal-hydraulics is an adaptation of the traditional collocation methods for the discretization of partial differential equations, based on the expansion of the solution as a linear combination of analytical functions. A nodal collocation method based on an expansion of the neutron fluxes in Legendre polynomials in each cell was chosen. The qualification is carried out by the analysis of the turbine trip transient from the NEA benchmark at Peach Bottom NPP, using both the original 1D kinetics implemented in TRAC-BF1 and the 1D nodal collocation method. (author)
Continuous Groundwater Monitoring Collocated at USGS Streamgages
Constantz, J. E.; Eddy-Miller, C.; Caldwell, R.; Wheeer, J.; Barlow, J.
2012-12-01
USGS Office of Groundwater funded a 2-year pilot study collocating groundwater wells for monitoring water level and temperature at several existing continuous streamgages in Montana and Wyoming, while the U.S. Army Corps of Engineers funded enhancements to streamgages in Mississippi. To increase spatial relevance within a given watershed, study sites were selected where near-stream groundwater was in connection with an appreciable aquifer, and where the logistics and cost of well installations were considered representative. After each well installation and surveying, groundwater level and temperature were easily either radio-transmitted or hardwired to the existing data acquisition system located in the streamgaging shelter. Since USGS field personnel regularly visit streamgages during routine streamflow measurements and streamgage maintenance, the close proximity of the observation wells resulted in minimal extra time to verify electronically transmitted measurements. After the field protocol was tuned, stream and nearby groundwater information were concurrently acquired at streamgages and transmitted to satellite from seven pilot-study sites extending over nearly 2,000 miles (3,200 km) of the central US from October 2009 until October 2011, for evaluating the scientific and engineering add-on value of the enhanced streamgage design. Examination of the four-parameter transmission from the seven pilot-study groundwater gaging stations reveals an internally consistent, dynamic data suite of continuous groundwater elevation and temperature in tandem with ongoing stream stage and temperature data. Qualitatively, the graphical information provides an appreciation of seasonal trends in stream exchanges with shallow groundwater, as well as thermal issues of concern for topics ranging from ice hazards to the suitability of fish refugia, while quantitatively this information provides a means for estimating flux exchanges through the streambed via heat-based inverse-type groundwater modeling. In June
Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines
Barton, Michael
2015-10-24
We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights, considered as a higher-dimensional point, form a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces, where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as of the source rule chosen.
Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines
Barton, Michael; Calo, Victor M.
2015-01-01
We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights, considered as a higher-dimensional point, form a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces, where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as of the source rule chosen.
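The optimality notion used above can be checked concretely on the classic 2-point Gauss-Legendre rule, which is exact for polynomials up to degree 3 on [-1, 1] (a textbook example, not the C2 cubic-spline rules derived in the paper):

```python
import math

def gauss2(f):
    """2-point Gauss-Legendre rule on [-1, 1]: nodes +/- 1/sqrt(3), weights 1."""
    node = 1.0 / math.sqrt(3.0)
    return f(-node) + f(node)

def monomial_error(k):
    """Quadrature error for x^k on [-1, 1]."""
    exact = (1.0 - (-1.0) ** (k + 1)) / (k + 1)
    return abs(gauss2(lambda x: x ** k) - exact)
```

Two nodes integrating all cubics exactly is the "number of points versus exactness" budget that the homotopy continuation preserves while deforming the spline space.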
International Nuclear Information System (INIS)
Lubuma, M.S.
1991-05-01
The non-uniquely solvable Radon boundary integral equation for the two-dimensional Stokes-Dirichlet problem on a nonsmooth domain is transformed into a well-posed one by a suitable compact perturbation of the velocity double layer potential operator. The solution to the modified equation is decomposed into a regular part and a finite linear combination of intrinsic singular functions whose coefficients are computed from explicit formulae. Using these formulae, the classical collocation method, defined by continuous piecewise linear vector-valued basis functions, which converges slowly because of the lack of regularity of the solution, is improved into a collocation dual singular function method with optimal rates of convergence for the solution and for the coefficients of the singularities. (author). 34 refs
International Nuclear Information System (INIS)
Athanasakis, I E; Papadopoulou, E P; Saridakis, Y G
2014-01-01
Fisher's equation has been widely used to model the biological invasion of single-species communities in homogeneous one-dimensional habitats. In this study we develop high order numerical methods to accurately capture the spatiotemporal dynamics of the generalized Fisher equation, a nonlinear reaction-diffusion equation characterized by density-dependent nonlinear diffusion. Working in this direction, we consider strong stability preserving Runge-Kutta (RK) temporal discretization schemes coupled with the Hermite cubic Collocation (HC) spatial discretization method. We investigate their convergence and stability properties to reveal efficient HC-RK pairs for the numerical treatment of the generalized Fisher equation. The Hadamard product is used to characterize the collocation-discretized nonlinear equation terms as a first step towards the treatment of generalized systems of relevant equations. Numerical experimentation is included to demonstrate the performance of the methods.
Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael
2017-09-01
Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals (the subtraction of the spline from the original time series) are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
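A toy version of the background/residual separation described above, using a linear spline through every k-th sample in place of the authors' cubic (repeating) splines; the sampling distance k plays the role of the spline sampling-point distance:

```python
def background(data, k):
    """Piecewise-linear interpolant through every k-th point of the series."""
    knots = sorted(set(list(range(0, len(data), k)) + [len(data) - 1]))
    out = []
    for i in range(len(data)):
        lo = max(t for t in knots if t <= i)
        hi = min(t for t in knots if t >= i)
        if lo == hi:
            out.append(float(data[i]))
        else:
            w = (i - lo) / (hi - lo)
            out.append((1 - w) * data[lo] + w * data[hi])
    return out

def residuals(data, k):
    """Fluctuations left after subtracting the background approximation."""
    return [d - b for d, b in zip(data, background(data, k))]
```

A fast oscillation sampled only at its zeros survives entirely in the residuals, while a smooth trend is absorbed by the background; which of the two a given wave ends up in depends on k, mirroring the sensitivity argument in the abstract.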
Directory of Open Access Journals (Sweden)
Qing He
2018-01-01
Full Text Available In this paper, the particle size distribution is reconstructed from finite moments using a converted spline-based method, in which the size of the linear system of equations to be solved is reduced from 4m × 4m to (m + 3) × (m + 3) for (m + 1) nodes when cubic splines are used, compared with the original method. The results are first verified by comparison with a reference. Then, coupling with the Taylor-series expansion moment method, the evolution of the particle size distribution undergoing Brownian coagulation and its asymptotic behavior are investigated.
Multi-Index Stochastic Collocation (MISC) for random elliptic PDEs
Haji Ali, Abdul Lateef; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul
2016-01-01
In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in the literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is dictated only by the convergence of the deterministic solver applied to a one-dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.
Multi-Index Stochastic Collocation (MISC) for random elliptic PDEs
Haji Ali, Abdul Lateef
2016-01-06
In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in the literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is dictated only by the convergence of the deterministic solver applied to a one-dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.
Multi-Index Stochastic Collocation for random PDEs
Haji Ali, Abdul Lateef
2016-03-28
In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in the literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is dictated only by the convergence of the deterministic solver applied to a one-dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.
Multi-Index Stochastic Collocation for random PDEs
Haji Ali, Abdul Lateef; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul
2016-01-01
In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in the literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is dictated only by the convergence of the deterministic solver applied to a one-dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R^2 = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and the Bayesian Information Criteria, this method doesn't use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
Statistical analysis of sediment toxicity by additive monotone regression splines
Boer, de W.J.; Besten, den P.J.; Braak, ter C.J.F.
2002-01-01
Modeling nonlinearity and thresholds in dose-effect relations is a major challenge, particularly in noisy data sets. Here we show the utility of nonlinear regression with additive monotone regression splines. These splines lead almost automatically to the estimation of thresholds. We applied this
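The simplest monotone least-squares fit underlying such approaches is isotonic regression via the pool-adjacent-violators algorithm (PAVA); this sketch shows plain PAVA, not the additive monotone regression splines of the paper:

```python
def pava(y):
    """Nondecreasing least-squares fit to y (unit weights): adjacent
    blocks violating monotonicity are pooled into their mean."""
    blocks = [[v, 1] for v in y]                 # [mean, size] per block
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:      # violator: pool the pair
            m1, n1 = blocks[i]
            m2, n2 = blocks[i + 1]
            blocks[i:i + 2] = [[(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2]]
            i = max(i - 1, 0)                    # pooled block may now violate
        else:
            i += 1
    out = []
    for m, n in blocks:
        out.extend([m] * n)
    return out
```

The flat stretches the pooling produces are exactly the kind of below-threshold plateaus that make monotone fits attractive for noisy dose-effect data.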
Exponential B-splines and the partition of unity property
DEFF Research Database (Denmark)
Christensen, Ole; Massopust, Peter
2012-01-01
We provide an explicit formula for a large class of exponential B-splines. Also, we characterize the cases where the integer-translates of an exponential B-spline form a partition of unity up to a multiplicative constant. As an application of this result we construct explicitly given pairs of dual...
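For the classical polynomial case (the exponential case requires the explicit formula derived in the paper), the partition-of-unity property can be verified numerically with the Cox-de Boor recursion for cardinal B-splines on integer knots:

```python
def bspline(d, x):
    """Cardinal B-spline of degree d on integer knots, supported on [0, d + 1],
    evaluated by the Cox-de Boor recursion."""
    if d == 0:
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    return (x / d) * bspline(d - 1, x) + ((d + 1 - x) / d) * bspline(d - 1, x - 1)

def partition_sum(d, x):
    """Sum of the integer translates B_d(x - k) whose support contains x."""
    return sum(bspline(d, x - k) for k in range(int(x) - d - 1, int(x) + 2))
```

For polynomial B-splines this sum is exactly 1 at every x; the paper's result characterizes when the analogous sum for exponential B-splines is constant (up to a multiplicative factor).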
About some properties of bivariate splines with shape parameters
Caliò, F.; Marchetti, E.
2017-07-01
The paper presents and proves geometric properties of a particular bivariate spline function, constructed and algorithmically implemented in previous papers. The properties typical of this family of splines have an impact on the field of computer graphics, in particular on reverse engineering.
Directory of Open Access Journals (Sweden)
Makram J. Geha
2011-01-01
Full Text Available Milk yield records (305d, 2X, actual milk yield) of 123,639 registered first lactation Holstein cows were used to compare linear regression (y = β0 + β1X + e), quadratic regression (y = β0 + β1X + β2X^2 + e), cubic regression (y = β0 + β1X + β2X^2 + β3X^3 + e) and fixed factor models with cubic-spline interpolation models for estimating the effects of inbreeding on milk yield. Ten animal models, all with herd-year-season of calving as a fixed effect, were compared using the corrected Akaike Information Criterion (AICc). The cubic-spline interpolation model with seven knots had the lowest AICc, whereas for all those labeled as "traditional", AICc was higher than for the best model. Results from fitting inbreeding using a cubic spline with seven knots were compared to results from fitting inbreeding as a linear covariate or as a fixed factor with seven levels. Estimates of inbreeding effects were not significantly different between the cubic-spline model and the fixed factor model, but were significantly different from the linear regression model. Milk yield decreased significantly at inbreeding levels greater than 9%. Variance component estimates were similar for the three models. Ranking of the top 100 sires with daughter records remained unaffected by the model used.
International Nuclear Information System (INIS)
Mittal, R.C.; Rohila, Rajni
2016-01-01
In this paper, we have applied a modified cubic B-spline based differential quadrature method to obtain numerical solutions of one-dimensional reaction-diffusion systems such as the linear reaction-diffusion system, the Brusselator system, the isothermal system and the Gray-Scott system. The models represented by these systems have important applications in different areas of science and engineering. The most striking and interesting part of the work is the set of solution patterns obtained for the Gray-Scott model, reminiscent of patterns often seen in nature. We have used cubic B-spline functions for space discretization to obtain a system of ordinary differential equations. This system of ODEs is solved by the highly stable SSP-RK43 method to obtain the solution at the knots. The computed results are very accurate and shown to be better than those available in the literature. The method is simple to apply and gives solutions with less computational effort.
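As a minimal illustration of the cubic B-spline basis underlying such space discretizations, the sketch below builds a clamped cubic basis with SciPy and verifies its partition-of-unity property; the knot layout is an illustrative assumption, not the authors' discretization.

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                 # cubic
n_interior = 8
# Clamped knot vector on [0, 1]: endpoint knots repeated k+1 times
t = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, n_interior), [1.0] * k))
n_basis = len(t) - k - 1

x = np.linspace(0.0, 1.0, 101)
B = np.empty((n_basis, x.size))
for i in range(n_basis):
    c = np.zeros(n_basis)
    c[i] = 1.0                      # isolate the i-th basis function
    B[i] = BSpline(t, c, k)(x)

# On a clamped knot vector the cubic B-splines sum to one everywhere
print(np.allclose(B.sum(axis=0), 1.0))
```

The partition-of-unity property is what makes B-spline collocation coefficients behave like local function values, which is also why the knot values themselves are natural collocation points.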
Investigation of confined hydrogen atom in spherical cavity, using B-splines basis set
Directory of Open Access Journals (Sweden)
M Barezi
2011-03-01
Studying confined quantum systems (CQS) is very important in nanotechnology. One of the basic CQS is a hydrogen atom confined in a spherical cavity. In this article, eigenenergies and eigenfunctions of the hydrogen atom in a spherical cavity are calculated using the linear variational method. B-splines are used as basis functions, which can easily construct trial wave functions with appropriate boundary conditions. The main characteristics of B-splines are their high localization and flexibility. Besides, these functions are numerically stable and can handle a high volume of calculation with good accuracy. The energy levels as a function of cavity radius are analyzed. To check the validity and efficiency of the proposed method, extensive convergence tests of the eigenenergies in different cavity sizes have been carried out.
LOCALLY REFINED SPLINES REPRESENTATION FOR GEOSPATIAL BIG DATA
Directory of Open Access Journals (Sweden)
T. Dokken
2015-08-01
When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation of topography and bathymetry. The recent introduction of Locally Refined B-splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing the bathymetry of the smooth sea and ocean floor. A drastic reduction is demonstrated in the bulk of the data representation compared to the size of the input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection, as new sensor data can efficiently be compared to the compact LR B-spline representation.
Testing knowledge of whole English collocations available for use in written production
DEFF Research Database (Denmark)
Revier, Robert Lee
2014-01-01
Testing knowledge of whole English collocations available for use in written production: Developing tests for use with intermediate and advanced Danish learners (dansk resume nedenfor) The present foreign language acquisition research derives its impetus from four assumptions regarding knowledge...... of English collocations. These are: (a) collocation knowledge can be conceptualized as an independent knowledge construct, (b) collocations are lexical items in their own right, (c) testing of collocation knowledge should also target knowledge of whole collocations, and (d) the learning burden of a whole...... the development of Danish EFL learners’ productive knowledge of whole English collocations. Five empirical studies were designed to generate information that would shed light on the reliability and validity of the CONTRIX as a measure of collocation knowledge available for use in written production. Study 1...
Parand, Kourosh; Latifi, Sobhan; Delkhosh, Mehdi; Moayeri, Mohammad M.
2018-01-01
In the present paper, a new method based on the Generalized Lagrangian Jacobi Gauss (GLJG) collocation method is proposed. The nonlinear Kidder equation, which describes unsteady isothermal gas flow through a micro-nano porous medium, is a second-order two-point boundary value ordinary differential equation on the unbounded interval [0, ∞). Firstly, using the quasilinearization method, the equation is converted to a sequence of linear ordinary differential equations. Then, by using the GLJG collocation method, the problem is reduced to solving a system of algebraic equations. It must be mentioned that the equation is solved without domain truncation or variable transformation. A comparison with some numerical solutions is made, and the obtained results indicate that the presented solution is highly accurate. The important value of the initial slope, y'(0), is obtained as -1.191790649719421734122828603800159364 for η = 0.5. Compared to the best result obtained so far, it is accurate up to 36 decimal places.
Presenting collocates in a dictionary of computing and the Internet according to user needs
DEFF Research Database (Denmark)
Leroyer, Patrick; L'Homme, Marie-Claude; Jousse, Anne-Laure
2011-01-01
This paper presents a novel method for organizing and presenting collocations in a specialized dictionary of computing and the Internet. This work is undertaken in order to meet a specific user need, i.e. that of searching for a collocate (or a short list of collocates) that expresses a specific...
Examining Second Language Receptive Knowledge of Collocation and Factors That Affect Learning
Nguyen, Thi My Hang; Webb, Stuart
2017-01-01
This study investigated Vietnamese EFL learners' knowledge of verb-noun and adjective-noun collocations at the first three 1,000 word frequency levels, and the extent to which five factors (node word frequency, collocation frequency, mutual information score, congruency, and part of speech) predicted receptive knowledge of collocation. Knowledge…
Meshfree Local Radial Basis Function Collocation Method with Image Nodes
Energy Technology Data Exchange (ETDEWEB)
Baek, Seung Ki; Kim, Minjae [Pukyong National University, Busan (Korea, Republic of)
2017-07-15
We numerically solve two-dimensional heat diffusion problems by using a simple variant of the meshfree local radial-basis-function (RBF) collocation method. The main idea is to include an additional set of sample nodes outside the problem domain, similarly to the method of images in electrostatics, to perform collocation on the domain boundaries. We can thereby take into account the temperature profile as well as its gradients specified by boundary conditions at the same time, which holds true even for a node where two or more boundaries meet with different boundary conditions. We argue that the image method is computationally efficient when combined with the local RBF collocation method, whereas the addition of image nodes becomes very costly in the case of global collocation. We apply our modified method to a benchmark test of a boundary value problem, and find that this simple modification reduces the maximum error from the analytic solution significantly. The reduction is small for an initial value problem with simpler boundary conditions. We observe increased numerical instability, which has to be compensated for by a sufficient number of sample nodes and/or more careful parameter choices for time integration.
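A much simplified sketch of RBF collocation for a boundary value problem (1D Poisson, with boundary conditions enforced by dedicated collocation rows rather than the paper's image nodes; the Gaussian shape parameter and node count are illustrative assumptions):

```python
import numpy as np

# Model problem: u'' = f on (0, 1) with u(0) = u(1) = 0,
# exact solution u(x) = sin(pi x).
f = lambda x: -np.pi**2 * np.sin(np.pi * x)

eps = 15.0                              # shape parameter (illustrative)
nodes = np.linspace(0.0, 1.0, 25)       # collocation nodes = RBF centers

phi = lambda x, c: np.exp(-(eps * (x - c)) ** 2)                    # Gaussian RBF
phi_xx = lambda x, c: (4 * eps**4 * (x - c) ** 2 - 2 * eps**2) * phi(x, c)

X, C = np.meshgrid(nodes, nodes, indexing="ij")
A = phi_xx(X, C)                        # interior rows enforce the PDE
P = phi(X, C)
A[0], A[-1] = P[0], P[-1]               # boundary rows enforce u = 0
b = f(nodes)
b[0] = b[-1] = 0.0

coef = np.linalg.solve(A, b)
u = lambda x: phi(x[:, None], nodes[None, :]) @ coef

xs = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(u(xs) - np.sin(np.pi * xs)))
print(err)
```

Replacing the boundary rows with collocation at image nodes outside the domain, as the paper proposes, lets a node where boundaries meet satisfy several conditions at once.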
Multimodal interaction design in collocated mobile phone use
El-Ali, A.; Lucero, A.; Aaltonen, V.
2011-01-01
In the context of the Social and Spatial Interactions (SSI) platform, we explore how multimodal interaction design (input and output) can augment and improve the experience of collocated, collaborative activities using mobile phones. Based largely on our prototype evaluations, we reflect on and
Sinc-collocation method for solving the Blasius equation
International Nuclear Information System (INIS)
Parand, K.; Dehghan, Mehdi; Pirkhedri, A.
2009-01-01
The sinc-collocation method is applied for solving the Blasius equation, which arises from the boundary layer equations. It is well known that the sinc procedure converges to the solution at an exponential rate. Comparison with Howarth's and Asaithambi's numerical solutions reveals that the proposed method is of high accuracy and reduces the solution of the Blasius equation to the solution of a system of algebraic equations.
Lexical richness and collocational competence in second-language writing
Vedder, I.; Benigno, V.
2016-01-01
In this article we report on an experiment set up to investigate lexical richness and collocational competence in the written production of 39 low-intermediate and intermediate learners of Italian L2. Lexical richness was assessed by means of a lexical profiling method inspired by Laufer and Nation
Dabiri, Arman; Butcher, Eric A.; Nazari, Morad
2017-02-01
Compliant impacts can be modeled using linear viscoelastic constitutive models. While impact models for realistic viscoelastic materials using integer-order derivatives of force and displacement usually require a large number of parameters, compliant impact models obtained using fractional calculus can be advantageous, since such models use fewer parameters and successfully capture the hereditary property. In this paper, we introduce the fractional Chebyshev collocation (FCC) method as an approximation tool for numerical simulation of several linear fractional viscoelastic compliant impact models, in which the overall coefficient of restitution for the impact is studied as a function of the fractional model parameters for the first time. Other relevant impact characteristics such as hysteresis curves, impact force gradient, and penetration and separation depths are also studied.
Linear Methods for Image Interpolation
Directory of Open Access Journals (Sweden)
Pascal Getreuer
2011-09-01
We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
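Bilinear interpolation, one of the separable methods discussed, reduces to two nested 1D linear interpolations; a minimal sketch (the sample image is illustrative):

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation: two 1D linear interpolations applied
    separably, first along x within each row, then along y."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x0 + 1]
    bot = (1 - dx) * img[y0 + 1, x0] + dx * img[y0 + 1, x0 + 1]
    return (1 - dy) * top + dy * bot

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
print(bilinear(img, 0.5, 0.5))  # 1.5, the average of the four pixels
```

Because the kernel is separable, the same pattern extends to bicubic and spline interpolation by swapping the 1D weights, and to N dimensions by nesting one axis at a time.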
Higher order multipoles and splines in plasma simulations
International Nuclear Information System (INIS)
Okuda, H.; Cheng, C.Z.
1978-01-01
The reduction of spatial grid effects in plasma simulations has been studied numerically using higher order multipole expansions and the spline method in one dimension. It is found that, while keeping the higher order moments such as quadrupole and octopole moments substantially reduces the grid effects, quadratic and cubic splines in general have better stability properties for numerical plasma simulations when the Debye length is much smaller than the grid size. In particular the spline method may be useful in three-dimensional simulations for plasma confinement where the grid size in the axial direction is much greater than the Debye length. (Auth.)
Higher-order multipoles and splines in plasma simulations
International Nuclear Information System (INIS)
Okuda, H.; Cheng, C.Z.
1977-12-01
Reduction of spatial grid effects in plasma simulations has been studied numerically using higher order multipole expansions and the spline method in one dimension. It is found that, while keeping the higher order moments such as quadrupole and octopole moments substantially reduces the grid effects, quadratic and cubic splines in general have better stability properties for numerical plasma simulations when the Debye length is much smaller than the grid size. In particular, the spline method may be useful in three-dimensional simulations for plasma confinement where the grid size in the axial direction is much greater than the Debye length.
Detrending of non-stationary noise data by spline techniques
International Nuclear Information System (INIS)
Behringer, K.
1989-11-01
An off-line method for detrending non-stationary noise data has been investigated. It uses a least squares spline approximation of the noise data with equally spaced breakpoints. Subtraction of the spline approximation from the noise signal at each data point gives a residual noise signal. The method acts as a high-pass filter with very sharp frequency cutoff. The cutoff frequency is determined by the breakpoint distance. The steepness of the cutoff is controlled by the spline order. (author) 12 figs., 1 tab., 5 refs
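The detrending procedure described (least-squares spline with equally spaced breakpoints, then subtraction at each data point) can be sketched with SciPy; the signal, knot count, and noise level below are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
trend = 0.3 * t + np.sin(0.2 * t)            # slow drift (to be removed)
noise = rng.normal(0.0, 0.1, t.size)         # the fluctuation of interest
data = trend + noise

knots = np.linspace(t[0], t[-1], 12)[1:-1]   # equally spaced interior breakpoints
spline = LSQUnivariateSpline(t, data, knots, k=3)  # cubic least-squares spline
residual = data - spline(t)                  # detrended noise signal

print(np.std(residual), np.std(data))        # residual variance << raw variance
```

The breakpoint spacing sets the cutoff frequency of this implicit high-pass filter, and the spline order k controls the steepness of the cutoff, as the abstract notes.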
Micropolar Fluids Using B-spline Divergence Conforming Spaces
Sarmiento, Adel
2014-06-06
We discretized the two-dimensional linear momentum, microrotation, energy and mass conservation equations from micropolar fluids theory with the finite element method, creating divergence-conforming spaces based on B-spline basis functions to obtain pointwise divergence-free solutions [8]. Weak boundary conditions were imposed using Nitsche's method for tangential conditions, while normal conditions were imposed strongly. Once exact mass conservation was provided by the divergence-free formulation, we focused on evaluating the differences between micropolar fluids and conventional fluids, to show the advantages of using the micropolar fluid model to capture the features of complex fluids. Square and arc heat-driven cavities were solved as test cases. A variation of the parameters of the model, along with a variation of the Rayleigh number, was performed for a better understanding of the system. The divergence-free formulation was used to guarantee an accurate solution of the flow. This formulation was implemented using the framework PetIGA as a basis, using its parallel structures to achieve high scalability. The results of the square heat-driven cavity test case are in good agreement with those reported earlier.
Bastani, Ali Foroush; Dastgerdi, Maryam Vahid; Mighani, Abolfazl
2018-06-01
The main aim of this paper is the analytical and numerical study of a time-dependent second-order nonlinear partial differential equation (PDE) arising from the endogenous stochastic volatility model, introduced in [Bensoussan, A., Crouhy, M. and Galai, D., Stochastic equity volatility related to the leverage effect (I): equity volatility behavior. Applied Mathematical Finance, 1, 63-85, 1994]. As the first step, we derive a consistent set of initial and boundary conditions to complement the PDE, when the firm is financed by equity and debt. In the sequel, we propose a Newton-based iteration scheme for nonlinear parabolic PDEs which is an extension of a method for solving elliptic partial differential equations introduced in [Fasshauer, G. E., Newton iteration with multiquadrics for the solution of nonlinear PDEs. Computers and Mathematics with Applications, 43, 423-438, 2002]. The scheme is based on multilevel collocation using radial basis functions (RBFs) to solve the resulting locally linearized elliptic PDEs obtained at each level of the Newton iteration. We show the effectiveness of the resulting framework by solving a prototypical example from the field and compare the results with those obtained from three different techniques: (1) a finite difference discretization; (2) a naive RBF collocation and (3) a benchmark approximation, introduced for the first time in this paper. The numerical results confirm the robustness, higher convergence rate and good stability properties of the proposed scheme compared to other alternatives. We also comment on some possible research directions in this field.
Modeling terminal ballistics using blending-type spline surfaces
Pedersen, Aleksander; Bratlie, Jostein; Dalmo, Rune
2014-12-01
We explore using GERBS, a blending-type spline construction, to represent deformable thin plates and model terminal ballistics. Strategies to construct geometry for different scenarios of terminal ballistics are proposed.
Fast compact algorithms and software for spline smoothing
Weinert, Howard L
2012-01-01
Fast Compact Algorithms and Software for Spline Smoothing investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.
International Nuclear Information System (INIS)
Patra, A.; Saha Ray, S.
2014-01-01
Highlights: • A stationary transport equation has been solved using the technique of the Haar wavelet collocation method. • This paper intends to demonstrate the great utility of Haar wavelets for a nuclear science problem. • In the present paper, two-dimensional Haar wavelets are applied. • The proposed method is mathematically very simple, easy and fast. - Abstract: This paper emphasizes finding the solution of a stationary transport equation using the technique of the Haar wavelet collocation method (HWCM). The Haar wavelet collocation method is efficient and powerful for solving a wide class of linear and nonlinear differential equations. Recently the Haar wavelet transform has gained the reputation of being a very effective tool for many practical applications. This paper intends to demonstrate the great utility of Haar wavelets for a nuclear science problem. In the present paper, two-dimensional Haar wavelets are applied to the solution of the stationary neutron transport equation in a homogeneous isotropic medium. The proposed method is mathematically very simple, easy and fast. To demonstrate the efficiency of the method, one test problem is discussed. It can be observed from the computational simulation that the numerical approximate solution is much closer to the exact solution.
Numerical solution of the controlled Duffing oscillator by semi-orthogonal spline wavelets
International Nuclear Information System (INIS)
Lakestani, M; Razzaghi, M; Dehghan, M
2006-01-01
This paper presents a numerical method for solving the controlled Duffing oscillator. The method can be extended to nonlinear calculus of variations and optimal control problems. The method is based upon compactly supported linear semi-orthogonal B-spline wavelets. The differential and integral expressions which arise in the system dynamics, the performance index and the boundary conditions are converted into some algebraic equations which can be solved for the unknown coefficients. Illustrative examples are included to demonstrate the validity and applicability of the technique
A cubic B-spline Galerkin approach for the numerical simulation of the GEW equation
Directory of Open Access Journals (Sweden)
S. Battal Gazi Karakoç
2016-02-01
The generalized equal width (GEW) wave equation is solved numerically by using a lumped Galerkin approach with cubic B-spline functions. The proposed numerical scheme is tested by applying it to two test problems involving a single solitary wave and the interaction of two solitary waves. In order to determine the performance of the algorithm, the error norms L2 and L∞ and the invariants I1, I2 and I3 are calculated. For the linear stability analysis of the numerical algorithm, the von Neumann approach is used. As a result, the obtained findings show that the presented numerical scheme is preferable to some recent numerical methods.
A collocation finite element method with prior matrix condensation
International Nuclear Information System (INIS)
Sutcliffe, W.J.
1977-01-01
For thin shells with general loading, sixteen degrees of freedom have been used in a previous finite element solution procedure based on a collocation method instead of the usual variational procedures. Although the number of elements required was relatively small, the final matrix for the simultaneous solution of all unknowns could nevertheless become large for a complex compound structure. The purpose of the present paper is to demonstrate a method of reducing the final matrix size, thus allowing solution for large structures with comparatively small computer storage requirements while retaining the accuracy given by high order displacement functions. At the collocation points, a number of equilibrium conditions must be satisfied independently of the overall compatibility of forces and deflections for the complete structure. (Auth.)
Part 6. Internationalization and collocation of FBR fuel cycle facilities
International Nuclear Information System (INIS)
Stevenson, M.G.; Abramson, P.B.; LeSage, L.G.
1980-01-01
This report examines some of the non-proliferation, technical, and institutional aspects of internationalization and/or collocation of major facilities of the Fast Breeder Reactor (FBR) fuel cycle. The national incentives and disincentives for establishment of FBR Fuel Cycle Centers are enumerated. The technical, legal, and administrative considerations in determining the feasibility of FBR Fuel Cycle Centers are addressed by making comparisons with Light Water Reactor (LWR) centers which have been studied in detail by the IAEA and UNSRC
Numerical simulation of GEW equation using RBF collocation method
Directory of Open Access Journals (Sweden)
Hamid Panahipour
2012-08-01
The generalized equal width (GEW) equation is solved numerically by a meshless method based on global collocation with standard types of radial basis functions (RBFs). Test problems including propagation of single solitons, interaction of two and three solitons, development of Maxwellian initial condition pulses, wave undulation and wave generation are used to indicate the efficiency and accuracy of the method. Comparisons are made between the results of the proposed method and some other published numerical methods.
Application of collocation meshless method to eigenvalue problem
International Nuclear Information System (INIS)
Saitoh, Ayumu; Matsui, Nobuyuki; Itoh, Taku; Kamitani, Atsushi; Nakamura, Hiroaki
2012-01-01
A numerical method for solving the nonlinear eigenvalue problem has been developed using the collocation Element-Free Galerkin Method (EFGM), and its performance has been numerically investigated. The results of computations show that an approximate solution of the nonlinear eigenvalue problem can be obtained stably by using the developed method. Therefore, it can be concluded that the developed method is useful for solving the nonlinear eigenvalue problem. (author)
Teaching vocabulary using collocations versus using definitions in EFL classes
Altınok, Şerife İper
2000-01-01
Ankara : Institute of Economics and Social Sciences of Bilkent Univ., 2000. Thesis (Master's) -- Bilkent University, 2000. Includes bibliographical references (leaves 40-43). Teaching words in collocations is a comparatively new technique, and it is accepted as an effective one in vocabulary teaching. The purpose of this study was to find out whether teaching vocabulary in collocations would result in better learning and remembering of vocabulary items. This study investigated the differences betw...
Let's collocate: student generated worksheets as a motivational tool
Simpson, Adam John
2006-01-01
This article discusses the process of producing collocation worksheets and the values of these worksheets as a motivational tool within a tertiary level preparatory English program. Firstly, the method by which these worksheets were produced is described, followed by an analysis of their effectiveness as a resource in terms of student motivation, personalisation, involvement in the development of the curriculum and in raising awareness of corpus linguistics and its applications.
Analytic regularity and collocation approximation for elliptic PDEs with random domain deformations
Castrillon, Julio
2016-03-02
In this work we consider the problem of approximating the statistics of a given Quantity of Interest (QoI) that depends on the solution of a linear elliptic PDE defined over a random domain parameterized by N random variables. The elliptic problem is remapped onto a corresponding PDE with a fixed deterministic domain. We show that the solution can be analytically extended to a well-defined region in C^N with respect to the random variables. A sparse grid stochastic collocation method is then used to compute the mean and variance of the QoI. Finally, convergence rates for the mean and variance of the QoI are derived and compared to those obtained in numerical experiments.
Bäck, Joakim
2010-09-17
Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods. By introducing a suitable generalization of the classical sparse grid SC method, we are able to compare SG and SC on the same underlying multivariate polynomial space in terms of accuracy vs. computational work. The approximation spaces considered here include isotropic and anisotropic versions of Tensor Product (TP), Total Degree (TD), Hyperbolic Cross (HC) and Smolyak (SM) polynomials. Numerical results for linear elliptic SPDEs indicate a slight computational work advantage of isotropic SC over SG, with SC-SM and SG-TD being the best choices of approximation spaces for each method. Finally, numerical results corroborate the optimality of the theoretical estimate of anisotropy ratios introduced by the authors in a previous work for the construction of anisotropic approximation spaces. © 2011 Springer.
DEFF Research Database (Denmark)
Simurda, Matej; Lassen, Benny; Duggen, Lars
2017-01-01
A numerical model for a clamp-on transit-time ultrasonic flowmeter (TTUF) under multi-phase flow conditions is presented. The method solves the equations of linear elasticity for isotropic heterogeneous materials with background flow, where acoustic media are modeled by setting the shear modulus to zero. ... Spatial derivatives are calculated by a Fourier collocation method allowing the use of the fast Fourier transform (FFT), and time derivatives are approximated by a finite difference (FD) scheme. This approach is sometimes referred to as a pseudospectral time-domain method. Perfectly matched layers (PML) are used to avoid wave-wrapping, and staggered grids are implemented to improve stability and efficiency. The method is verified against exact analytical solutions and the effect of the time-staggering and the associated lowest number of points per minimum wavelength is discussed. The method...
2007-08-01
In the approach, photon trajectories in a multi-layer biological tissue model are computed using a solution of the Eikonal equation (ray-tracing methods) rather than linear trajectories, coupling the radiative transport solution into heat transfer and damage models. Subject terms: B-splines, ray-tracing, Eikonal equation.
Directory of Open Access Journals (Sweden)
T Nikbakht
2012-12-01
Effects of quantum size and potential shape on the spectra of an electron and a hydrogenic donor at the center of a permeable spherical cavity have been calculated using the linear variational method. B-splines have been used as basis functions. By extensive convergence tests and comparison with other results given in the literature, the validity and efficiency of the method were confirmed.
An empirical understanding of triple collocation evaluation measure
Scipal, Klaus; Doubkova, Marcela; Hegyova, Alena; Dorigo, Wouter; Wagner, Wolfgang
2013-04-01
The triple collocation method is an advanced evaluation method that has been used in the soil moisture field for only about half a decade. The method requires three datasets with independent error structures that represent an identical phenomenon. The main advantages of the method are that it a) doesn't require a reference dataset that has to be considered to represent the truth, b) limits the effect of random and systematic errors of the other two datasets, and c) simultaneously assesses the errors of all three datasets. The objective of this presentation is to assess the triple collocation error (Tc) of the ASAR Global Mode Surface Soil Moisture (GM SSM) 1 km dataset and highlight problems of the method related to its ability to cancel the effect of errors in ancillary datasets. In particular, the goals are a) to investigate trends in Tc related to the change in spatial resolution from 5 to 25 km, b) to investigate trends in Tc related to the choice of a hydrological model, and c) to study the relationship between Tc and other absolute evaluation methods (namely RMSE and error propagation, EP). The triple collocation method is implemented using ASAR GM, AMSR-E, and a model (either AWRA-L, GLDAS-NOAH, or ERA-Interim). First, the significance of the relationship between the three soil moisture datasets was tested, which is a prerequisite for the triple collocation method. Second, the trends in Tc related to the choice of the third reference dataset and scale were assessed. For this purpose the triple collocation is repeated replacing AWRA-L with two different globally available model reanalysis datasets operating at different spatial resolutions (ERA-Interim and GLDAS-NOAH). Finally, the retrieved results were compared to the results of the RMSE and EP evaluation measures. Our results demonstrate that the Tc method does not eliminate the random and time-variant systematic errors of the second and the third dataset used in the Tc. The possible reasons include the fact a) that the TC
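The covariance-based form of the triple collocation error estimate can be sketched on synthetic data; the dataset roles and error levels below are illustrative assumptions, not the ASAR/AMSR-E values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
truth = rng.normal(0.0, 1.0, n)              # unobserved "true" soil moisture
x = truth + rng.normal(0.0, 0.10, n)         # e.g. a satellite retrieval
y = truth + rng.normal(0.0, 0.20, n)         # e.g. a second satellite product
z = truth + rng.normal(0.0, 0.30, n)         # e.g. a model reanalysis

def tc_error_var(a, b, c):
    """Triple collocation estimate of the error variance of dataset `a`,
    assuming mutually independent, zero-mean errors in all three datasets."""
    C = np.cov(np.vstack([a, b, c]))
    return C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]

print(np.sqrt(tc_error_var(x, y, z)))        # recovers ~0.10 without a reference
```

No dataset plays the role of the truth: permuting the arguments yields the error estimate for each of the three datasets in turn, which is the property the abstract highlights.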
A Spline-Based Lack-Of-Fit Test for Independent Variable Effect in Poisson Regression.
Li, Chin-Shang; Tu, Wanzhu
2007-05-01
In regression analysis of count data, independent variables are often modeled by their linear effects under the assumption of log-linearity. In reality, the validity of such an assumption is rarely tested, and its use is at times unjustifiable. A lack-of-fit test is proposed for the adequacy of a postulated functional form of an independent variable within the framework of semiparametric Poisson regression models based on penalized splines. It offers added flexibility in accommodating the potentially non-loglinear effect of the independent variable. A likelihood ratio test is constructed for the adequacy of the postulated parametric form, for example log-linearity, of the independent variable effect. Simulations indicate that the proposed model performs well, and that a misspecified parametric model has much reduced power. An example is given.
Directory of Open Access Journals (Sweden)
Shanshan He
2015-10-01
Piecewise linear (G01) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical instability, lack of a chord error constraint, and lack of assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA) is an efficient method for data fitting that solves the numerical instability problem. However, it does not consider chord errors and needs more work to ensure ironclad results for commercial applications. In this paper, we use the LSPIA method incorporating an energy term (ELSPIA) to avoid the numerical instability, and lower chord errors by using a stretching energy term. We implement several algorithmic improvements, including (1) an improved technique for initial control point determination over the Dominant Point Method, (2) an algorithm that updates foot point parameters as needed, (3) analysis of the degrees of freedom of control points to insert new control points only when needed, and (4) chord error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented for verification of quality and efficiency. Comparisons with other known solutions are included to evaluate the worthiness of the proposed solution.
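B-spline least-squares fitting of a dense G01-like polyline can be sketched with SciPy's generic least-squares spline routine (not the authors' ELSPIA algorithm); the path, parameterization, and knot vector are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Dense piecewise-linear (G01-like) sample of a smooth 2D path
u = np.linspace(0.0, 1.0, 200)           # chord-length-like parameter (assumed)
pts = np.column_stack([np.cos(2 * np.pi * u),
                       np.sin(2 * np.pi * u)])  # hypothetical tool path

k = 3
t = np.concatenate(([0.0] * (k + 1),
                    np.linspace(0.0, 1.0, 12)[1:-1],
                    [1.0] * (k + 1)))    # clamped cubic knot vector
spl = make_lsq_spline(u, pts, t, k=k)    # least-squares cubic B-spline fit

chord_err = np.max(np.linalg.norm(spl(u) - pts, axis=1))
print(chord_err)                          # maximum deviation at the samples
```

The paper's contribution sits on top of this basic step: iterating the fit (LSPIA), penalizing stretching energy, and inserting control points only where the chord error measured here exceeds tolerance.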
Non-stationary hydrologic frequency analysis using B-spline quantile regression
Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.
2017-11-01
Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic and water resources systems, under the assumption of stationarity. However, with increasing evidence of climate change, the assumption of stationarity, which is a prerequisite for traditional frequency analysis, may no longer hold, and hence the results of conventional analysis would become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows modelling of data in the presence of non-stationarity and/or dependence on covariates with linear and non-linear dependence. A Markov chain Monte Carlo (MCMC) algorithm was used to estimate quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used to select the best model, i.e. for each quantile we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharges with high annual non-exceedance probabilities.
Friedline, Terri; Masa, Rainier D; Chowa, Gina A N
2015-01-01
The natural log and categorical transformations commonly applied to wealth for meeting the statistical assumptions of research may not always be appropriate for adjusting for skewness, given wealth's unique properties. Finding and applying appropriate transformations is becoming increasingly important as researchers consider wealth as a predictor of well-being. We present an alternative transformation, the inverse hyperbolic sine (IHS), for simultaneously dealing with skewness and accounting for wealth's unique properties. Using the relationship between household wealth and youth's math achievement as an example, we apply the IHS transformation to wealth data from US and Ghanaian households. We also explore non-linearity and accumulation thresholds by combining IHS-transformed wealth with splines. IHS-transformed wealth relates to youth's math achievement similarly to the categorical and natural log transformations, indicating that it is a viable alternative to other transformations commonly used in research. Non-linear relationships and accumulation thresholds that predict youth's math achievement emerge when splines are incorporated. In US households, accumulating debt relates to decreases in math achievement, whereas accumulating assets relates to increases in math achievement. In Ghanaian households, accumulating assets between the 25th and 50th percentiles relates to increases in youth's math achievement. Copyright © 2014 Elsevier Inc. All rights reserved.
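The IHS transformation discussed above has a simple closed form, arcsinh(x) = log(x + sqrt(x^2 + 1)); unlike the natural log it is defined at zero and for negative wealth (debt), and it behaves like log(2x) for large positive values. A small sketch, where the scale parameter theta is a common convention we add for illustration (theta = 1 reduces to plain arcsinh):

```python
import numpy as np

def ihs(x, theta=1.0):
    """Inverse hyperbolic sine transform: arcsinh(theta*x) / theta."""
    return np.arcsinh(theta * np.asarray(x, dtype=float)) / theta

# Works on debt (negative), zero, and asset (positive) values alike:
wealth = np.array([-50_000.0, -1_000.0, 0.0, 1_000.0, 50_000.0])
transformed = ihs(wealth)
```

The transform is odd-symmetric, so debt and assets of equal magnitude map to values of equal magnitude and opposite sign, which is what lets a single spline basis cover both sides of zero.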
Bergström, Kerstin
2008-01-01
The aim of this study is to examine the vocabulary and receptive collocation knowledge in English among Swedish upper secondary school students. The primary material consists of two vocabulary tests, one collocation test, and a background questionnaire. The first research question concerns whether the students who receive a major part of their education in English have a higher level of vocabulary and receptive collocation knowledge in English than those who are taught primarily in Swedish. T...
[Multimodal medical image registration using cubic spline interpolation method].
He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan
2007-12-01
Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed. Cubic spline interpolation is applied to interpolate the PET-CT image series, registration is then carried out using a mutual information algorithm, and finally an improved principal component analysis method is used for the fusion of the PET-CT multimodal images to enhance the visual effect of the PET image; satisfactory registration and fusion results are thus obtained. The cubic spline interpolation is used in reconstruction to restore the information missing between image slices, which compensates for the shortage of previous registration methods, improves the accuracy of the registration, and makes the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.
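The inter-slice interpolation step can be sketched directly with scipy: a cubic spline is fitted along the slice axis of the stack, one spline per pixel, and evaluated between acquired slices. The synthetic Gaussian stack below stands in for a PET-CT series; it is our illustration, not the paper's data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

z = np.linspace(0.0, 2.0, 9)                      # acquired slice positions
yy, xx = np.mgrid[-1:1:32j, -1:1:32j]
# synthetic "image" slices: a Gaussian blob drifting with slice position
stack = np.stack([np.exp(-((xx - 0.2 * s) ** 2 + yy ** 2)) for s in z])

spl = CubicSpline(z, stack, axis=0)               # one cubic spline per pixel
z_mid = 1.125                                     # position between two slices
estimated = spl(z_mid)                            # reconstructed missing slice
truth = np.exp(-((xx - 0.2 * z_mid) ** 2 + yy ** 2))
interp_err = np.abs(estimated - truth).max()
```

For smoothly varying anatomy the cubic spline's fourth-order accuracy is what "restores the missed information between image slices" far better than nearest-slice or linear interpolation would.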
Illumination estimation via thin-plate spline interpolation.
Shi, Lilong; Xiong, Weihua; Funt, Brian
2011-05-01
Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k medians are applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
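The interpolation machinery the abstract relies on is readily available in scipy. In this hedged sketch, low-dimensional feature vectors stand in for the paper's image thumbnails and the targets stand in for illumination chromaticities; with zero smoothing, the thin-plate spline reproduces the training chromaticities exactly at the training features.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
features = rng.uniform(0.0, 1.0, size=(40, 2))   # stand-in image features
chroma = np.column_stack([                       # stand-in (r, g) chromaticities
    0.3 + 0.2 * features[:, 0],
    0.3 + 0.1 * features[:, 1],
])

# Thin-plate spline interpolant over the nonuniformly sampled training set
tps = RBFInterpolator(features, chroma, kernel="thin_plate_spline")
reproduced = tps(features)                       # exact at the training points
query = tps(np.array([[0.5, 0.5]]))              # estimate for an unseen image
```

Because the targets above are linear in the features and the thin-plate spline includes a linear polynomial term, the query is recovered exactly; on real thumbnail data the interpolant instead smoothly blends the chromaticities of nearby training images.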
Bayesian Analysis for Penalized Spline Regression Using WinBUGS
Directory of Open Access Journals (Sweden)
Ciprian M. Crainiceanu
2005-09-01
Penalized splines can be viewed as BLUPs in a mixed model framework, which allows the use of mixed model software for smoothing. Thus, software originally developed for Bayesian analysis of mixed models can be used for penalized spline regression. Bayesian inference for nonparametric models enjoys the flexibility of nonparametric models and the exact inference provided by the Bayesian inferential machinery. This paper provides a simple, yet comprehensive, set of programs for the implementation of nonparametric Bayesian analysis in WinBUGS. Good mixing properties of the MCMC chains are obtained by using low-rank thin-plate splines, while simulation times per iteration are reduced by employing WinBUGS-specific computational tricks.
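The penalized-spline view behind the mixed-model formulation can be shown without WinBUGS: a truncated-line basis whose knot coefficients (the "random effects") carry a quadratic penalty is just a ridge regression. A minimal numpy sketch, with the basis, knot count, and smoothing parameter chosen by us for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y_true = np.sin(2 * np.pi * x)
y = y_true + 0.2 * rng.standard_normal(n)

knots = np.linspace(0.05, 0.95, 20)
# design: fixed part [1, x] plus truncated lines (x - kappa)_+ at each knot
C = np.column_stack([np.ones(n), x] +
                    [np.clip(x - kap, 0.0, None) for kap in knots])
D = np.diag([0.0, 0.0] + [1.0] * len(knots))  # penalise knot coefficients only
lam = 1e-4                                     # fixed smoothing parameter
beta = np.linalg.solve(C.T @ C + lam * n * D, C.T @ y)
fit = C @ beta

rmse_fit = np.sqrt(np.mean((fit - y_true) ** 2))
rmse_raw = np.sqrt(np.mean((y - y_true) ** 2))
```

In the mixed-model reading, lam is the ratio of error variance to random-effect variance, which is exactly the quantity the Bayesian machinery in the paper samples rather than fixes.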
Point based interactive image segmentation using multiquadrics splines
Meena, Sachin; Duraisamy, Prakash; Palniappan, Kannappan; Seetharaman, Guna
2017-05-01
Multiquadrics (MQ) are radial basis spline functions that can provide an efficient interpolation of data points located in a high-dimensional space. MQ were developed by Hardy to approximate geographical surfaces and terrain modelling. In this paper we frame the task of interactive image segmentation as semi-supervised interpolation, where an interpolating function learned from user-provided seed points is used to predict the labels of unlabeled pixels, and the spline function used in the semi-supervised interpolation is MQ. This semi-supervised interpolation framework has a closed-form solution which, along with the fact that MQ is a radial basis spline function, leads to a very fast interactive image segmentation process. Quantitative and qualitative results on standard datasets show that MQ outperforms other regression-based methods (GEBS, ridge regression and logistic regression) and popular methods such as Graph Cut, Random Walk and Random Forest.
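A toy sketch of the MQ-based semi-supervised idea, under our own simplifications: interpolate +/-1 seed labels with multiquadric radial basis functions, then threshold the interpolant at zero to label the remaining points. The two Gaussian blobs stand in for foreground/background seed pixels; the shape parameter c is a hypothetical choice.

```python
import numpy as np

def multiquadric(r, c=0.5):
    """Hardy's multiquadric kernel phi(r) = sqrt(r^2 + c^2)."""
    return np.sqrt(r ** 2 + c ** 2)

rng = np.random.default_rng(3)
fg = rng.normal([0.0, 0.0], 0.3, size=(15, 2))   # foreground seed points
bg = rng.normal([3.0, 3.0], 0.3, size=(15, 2))   # background seed points
seeds = np.vstack([fg, bg])
labels = np.concatenate([np.ones(15), -np.ones(15)])

# Closed-form solution: solve the MQ interpolation system A w = labels
A = multiquadric(np.linalg.norm(seeds[:, None] - seeds[None, :], axis=-1))
w = np.linalg.solve(A, labels)

def classify(points):
    K = multiquadric(np.linalg.norm(points[:, None] - seeds[None, :], axis=-1))
    return np.sign(K @ w)

queries = np.vstack([rng.normal([0.0, 0.0], 0.3, size=(20, 2)),
                     rng.normal([3.0, 3.0], 0.3, size=(20, 2))])
pred = classify(queries)
true = np.concatenate([np.ones(20), -np.ones(20)])
accuracy = np.mean(pred == true)
```

The single linear solve is the "nice closed form solution" the abstract credits for the method's interactive speed; in the actual application the points are pixel feature vectors rather than 2-D coordinates.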
The treatment of lexical collocations in EFL coursebooks in the Estonian secondary school context
Directory of Open Access Journals (Sweden)
Liina Vassiljev
2015-04-01
The article investigates lexical collocations encountered in English as a Foreign Language (EFL) instruction in Estonian upper secondary schools. This is achieved through a statistical analysis of the collocations featuring in three coursebooks, where the collocations found are analysed in terms of their type, frequency and usefulness index by studying them through an online language corpus (Collins Wordbanks Online). The coursebooks are systematically compared and contrasted relying upon the data gathered. The results of the study reveal that the frequency and range of lexical collocations in a language corpus have not been regarded as an essential criterion for their selection and practice by any of the coursebook authors under discussion.
Simulation of electrically driven jet using Chebyshev collocation method
Institute of Scientific and Technical Information of China (English)
(no author listed)
2011-01-01
The model of an electrically driven jet is governed by a series of quasi-1D dimensionless partial differential equations (PDEs). Following the method of lines, the Chebyshev collocation method is employed to discretize the PDEs and obtain a system of differential-algebraic equations (DAEs). By differentiating the constraints in the DAEs twice, the system is transformed into a set of ordinary differential equations (ODEs) with invariants. Then the implicit differential equations solver "ddaskr" is used to solve the ODEs and ...
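The spatial discretisation step named in the abstract can be sketched with the standard Chebyshev differentiation matrix (Trefethen's "cheb" construction), which is what a Chebyshev collocation method applies to reduce PDEs in space to DAEs/ODEs in time. This is a generic building block, not the paper's full jet model:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and collocation points x on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.concatenate([[2.0], np.ones(N - 1), [2.0]]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative row sums on diagonal
    return D, x

D, x = cheb(8)
# Spectral differentiation is exact for polynomials up to degree N:
deriv_err = np.abs(D @ x ** 3 - 3.0 * x ** 2).max()
```

Replacing each spatial derivative in the jet PDEs by multiplication with D at the collocation points yields the DAE system that "ddaskr" then integrates in time.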
Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)
Hughes, Steven; Knittel, Jeremy; Shoan, Wendy; Kim, Youngkwang; Conway, Claire; Conway, Darrel J.
2017-01-01
This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.
Fourier analysis of finite element preconditioned collocation schemes
Deville, Michel O.; Mund, Ernest H.
1990-01-01
The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions for the eigenvalues are obtained with the use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.
Dauguet, Julien; Bock, Davi; Reid, R Clay; Warfield, Simon K
2007-01-01
3D reconstruction from serial 2D microscopy images depends on non-linear alignment of serial sections. For some structures, such as the neuronal circuitry of the brain, very large images at very high resolution are necessary to permit reconstruction. These very large images prevent the direct use of classical registration methods. We propose in this work a method to deal with the non-linear alignment of arbitrarily large 2D images using the finite support properties of cubic B-splines. After initial affine alignment, each large image is split into a grid of smaller overlapping sub-images, which are individually registered using cubic B-splines transformations. Inside the overlapping regions between neighboring sub-images, the coefficients of the knots controlling the B-splines deformations are blended, to create a virtual large grid of knots for the whole image. The sub-images are resampled individually, using the new coefficients, and assembled together into a final large aligned image. We evaluated the method on a series of large transmission electron microscopy images and our results indicate significant improvements compared to both manual and affine alignment.
Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.
Kainu, Annette; Timonen, Kirsi
2016-07-01
Background: Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; modern modelling approaches using smoothing spline functions to better depict the transition during growth and ageing have therefore been introduced recently. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in lookup tables. The aim of this study was to produce polynomial estimates for these spline functions to use in lieu of lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods: Linear regression modelling was used to approximate the estimated values for Mspline and Sspline using polynomial functions similar to those in the international GLI2012 reference values. Estimated values were compared to the original calculations in absolute values, the derived predicted mean, and individually calculated z-scores using both values. Results: Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly at larger predicted volumes, but with a range of -0.018 to +0.022 litres for FEV1, representing at most ±0.4% difference in the predicted mean. Conclusions: The polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values.
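The approximation step the study describes, replacing a tabulated spline by one fitted polynomial, can be sketched in a few lines. The numbers below are synthetic stand-ins, not the Finnish Mspline/Sspline tables, and the degree-7 fit on a standardized age axis is our assumption:

```python
import numpy as np

age = np.linspace(18.0, 80.0, 200)                  # lookup-table grid
# synthetic stand-in for tabulated Mspline values (litres-like scale)
m_spline = (4.5 - 0.025 * age + 5e-5 * age ** 2
            + 0.05 * np.sin(age / 10.0))

# standardize age before fitting to keep the Vandermonde system well conditioned
u = (age - age.mean()) / age.std()
coeffs = np.polyfit(u, m_spline, deg=7)             # polynomial stand-in for the table
m_poly = np.polyval(coeffs, u)
max_diff = np.abs(m_poly - m_spline).max()          # table vs. polynomial agreement
```

Once the polynomial coefficients are stored, a predicted mean is one `polyval` call instead of a table lookup with interpolation, which is the clinical-practice convenience the conclusions point to.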
Counterexamples to the B-spline Conjecture for Gabor Frames
DEFF Research Database (Denmark)
Lemvig, Jakob; Nielsen, Kamilla Haahr
2016-01-01
The frame set conjecture for B-splines Bn, n≥2, states that the frame set is the maximal set that avoids the known obstructions. We show that any hyperbola of the form ab=r, where r is a rational number smaller than one and a and b denote the sampling and modulation rates, respectively, has infin...
C2-rational cubic spline involving tension parameters
Indian Academy of Sciences (India)
preferred which preserves some of the characteristics of the function to be interpolated. In order to tackle such ... Shape preserving properties of the rational (cubic/quadratic) spline interpolant have been studied ... tension parameters which is used to interpolate the given monotonic data is described in. [6]. Shape preserving ...
Spline function fit for multi-sets of correlative data
International Nuclear Information System (INIS)
Liu Tingjin; Zhou Hongmo
1992-01-01
A spline fit method for multiple sets of correlative data is developed, and the properties of correlative data fitting are investigated. The data of the ²³Na(n,2n) cross section are fitted for the cases with and without correlation.
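Fitting correlated data differs from an ordinary least-squares fit in that the data covariance matrix C enters the normal equations: beta = (B' C^-1 B)^-1 B' C^-1 y. A hedged generalized-least-squares sketch with a simple polynomial basis standing in for the report's spline basis; the covariance model is our illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
x = np.linspace(0.0, 1.0, n)
B = np.column_stack([x ** p for p in range(4)])     # basis matrix
y = B @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.05 * rng.standard_normal(n)

def gls_fit(B, y, C):
    """Generalized least squares: minimise (y - B b)' C^-1 (y - B b)."""
    Ci = np.linalg.inv(C)
    return np.linalg.solve(B.T @ Ci @ B, B.T @ Ci @ y)

beta_gls = gls_fit(B, y, np.eye(n))                 # uncorrelated case = OLS
beta_ols = np.linalg.lstsq(B, y, rcond=None)[0]

# A correlated-error covariance reweights the data points:
C_corr = 0.05 ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)
beta_corr = gls_fit(B, y, C_corr)
```

With C equal to the identity the GLS solution collapses to the ordinary fit; with off-diagonal correlations, as between evaluated cross-section points, the estimate and its uncertainty both change, which is the effect the report studies.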
Differential constraints for bounded recursive identification with multivariate splines
De Visser, C.C.; Chu, Q.P.; Mulder, J.A.
2011-01-01
The ability to perform online model identification for nonlinear systems with unknown dynamics is essential to any adaptive model-based control system. In this paper, a new differential equality constrained recursive least squares estimator for multivariate simplex splines is presented that is able
Comparative Analysis for Robust Penalized Spline Smoothing Methods
Directory of Open Access Journals (Sweden)
Bin Wang
2014-01-01
Smoothing noisy data is commonly encountered in the engineering domain, and robust penalized regression spline models are currently perceived to be the most promising methods for coping with this issue, due to their flexibility in capturing nonlinear trends in the data and effectively alleviating disturbance from outliers. Against such a background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are re-elaborated starting from their origins, with their derivation processes reformulated and the corresponding algorithms reorganized under a unified framework. The performance of these two estimators is thoroughly evaluated in terms of fitting accuracy, robustness, and execution time on the MATLAB platform. Elaborate comparative experiments demonstrate that robust penalized spline smoothing methods possess the capability of resisting noise effects compared with the non-robust penalized LS spline regression method. Furthermore, the M-estimator performs stably only for observations with moderate perturbation error, whereas the S-estimator behaves fairly well even for heavily contaminated observations, but consumes more execution time. These findings can serve as guidance for selecting the appropriate approach for smoothing noisy data.
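The M-type estimator compared in this paper can be sketched as iteratively reweighted penalized least squares with Huber weights. The basis, penalty, tuning constants, and contamination model below are our assumptions for a minimal numpy illustration, not the paper's MATLAB implementation:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y_true = np.sin(2 * np.pi * x)
y = y_true + 0.1 * rng.standard_normal(n)
y[rng.choice(n, 20, replace=False)] += 5.0       # heavy outlier contamination

knots = np.linspace(0.05, 0.95, 15)
C = np.column_stack([np.ones(n), x] +
                    [np.clip(x - kap, 0.0, None) for kap in knots])
D = np.diag([0.0, 0.0] + [1.0] * len(knots))     # penalise knot coefficients
lam = 1e-3 * n

def pen_wls(C, y, w):
    """Weighted penalized least squares solve."""
    W = np.diag(w)
    return np.linalg.solve(C.T @ W @ C + lam * D, C.T @ W @ y)

beta_ls = pen_wls(C, y, np.ones(n))              # non-robust penalized LS fit
beta = beta_ls.copy()
for _ in range(30):                              # Huber IRLS (M-type estimator)
    r = y - C @ beta
    s = 1.4826 * np.median(np.abs(r - np.median(r)))       # robust MAD scale
    w = np.minimum(1.0, 1.345 * s / np.maximum(np.abs(r), 1e-12))
    beta = pen_wls(C, y, w)

rmse_robust = np.sqrt(np.mean((C @ beta - y_true) ** 2))
rmse_plain = np.sqrt(np.mean((C @ beta_ls - y_true) ** 2))
```

Downweighting large residuals keeps the outliers from dragging the smooth upward, which is exactly the "resistance to the noise effect" the comparative experiments measure; an S-estimator would instead re-estimate the scale s by minimising a robust dispersion.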
Multivariate Epi-splines and Evolving Function Identification Problems
2015-04-15
such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the ... previous study [30] dealt with compact intervals of IR. Splines are intimately tied to optimization problems through their variational theory pioneered ... approximation. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, financial curve construction
Splines under tension for gridding three-dimensional data
International Nuclear Information System (INIS)
Brand, H.R.; Frazer, J.W.
1982-01-01
By use of the splines-under-tension concept, a simple algorithm has been developed for the three-dimensional representation of nonuniformly spaced data. The representations provide useful information to the experimentalist when attempting to understand the results obtained in a self-adaptive experiment. The shortcomings of the algorithm are discussed, as well as its advantages.
Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem
Man, J.; Li, W.; Zeng, L.; Wu, L.
2015-12-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical cases of unsaturated flow. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
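For reference, the baseline the paper compares against, a stochastic EnKF analysis step, fits in a few lines; the RAPCKF replaces this Monte Carlo machinery with adaptively selected polynomial chaos terms. A minimal sketch under our own toy linear-Gaussian setup, where the ensemble mean should approach the exact Kalman posterior mean:

```python
import numpy as np

def enkf_analysis(ens, y_obs, H, R, rng):
    """Stochastic EnKF update. ens: (n_state, N) forecast ensemble."""
    N = ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)        # state anomalies
    HE = H @ ens
    HA = HE - HE.mean(axis=1, keepdims=True)         # observation anomalies
    K = A @ HA.T @ np.linalg.inv(HA @ HA.T + (N - 1) * R)
    # perturbed observations: one noisy copy of y_obs per ensemble member
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, N).T
    return ens + K @ (Y - HE)

rng = np.random.default_rng(6)
N = 5000
ens = rng.standard_normal((1, N))                    # prior ensemble ~ N(0, 1)
H = np.eye(1)
R = np.eye(1)                                        # observation error variance 1
y_obs = np.array([1.0])

ens_a = enkf_analysis(ens, y_obs, H, R, rng)
post_mean = ens_a.mean()                             # exact Kalman answer: 0.5
```

The residual gap between `post_mean` and 0.5 is pure sampling error, shrinking like 1/sqrt(N); that cost of large ensembles is precisely what collocation-based filters such as PCKF/RAPCKF aim to avoid.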
Gimme Context – towards New Domain-Specific Collocational Dictionaries
Directory of Open Access Journals (Sweden)
Sylvana Krausse
2011-04-01
The days of traditional drudgery-filled lexicography are long gone. Fortunately, today computers help with the enormous task of storing and analysing language in order to condense the information found and store it in the form of dictionaries. In this paper, the way from a corpus to a small domain-specific collocational dictionary will be described, exemplified by the domain-specific language of mining reclamation; the approach can be duplicated for other specific languages too. So far, domain-specific dictionaries are mostly rare, as their creation is very labour-intensive and thus costly, and all too often they are just a collection of terms plus translations without any information on how to use them in speech. Particularly small domains that do not involve many users have been disregarded by lexicographers, as there is also always the question of how well a dictionary sells afterwards. Following this, I will describe the creation of a small collocational dictionary of mining reclamation language based on the consistent use of corpus information. It is relatively quick to realize in the design phase and is intended to provide the sort of linguistic information engineering experts need when they communicate in English or read specialist texts in the specific domain.
English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information
Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji
2012-01-01
We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…
Corpora and Collocations in Chinese-English Dictionaries for Chinese Users
Xia, Lixin
2015-01-01
The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…
Not Just "Small Potatoes": Knowledge of the Idiomatic Meanings of Collocations
Macis, Marijana; Schmitt, Norbert
2017-01-01
This study investigated learner knowledge of the figurative meanings of 30 collocations that can be both literal and figurative. One hundred and seven Chilean Spanish-speaking university students of English were asked to complete a meaning-recall collocation test in which the target items were embedded in non-defining sentences. Results showed…
DEFF Research Database (Denmark)
Henriksen, Birgit; Westbrook, Pete
2017-01-01
and classifying collocations used by L2 speakers in advanced, domain-specific oral academic discourse. The main findings seem to suggest that to map an informant’s complete collocational use and to get an understanding of disciplinary differences, we need to not only take account of general, academic and domain...
ESTIMATION OF GENETIC PARAMETERS IN TROPICARNE CATTLE WITH RANDOM REGRESSION MODELS USING B-SPLINES
Directory of Open Access Journals (Sweden)
Joel DomÃnguez Viveros
2015-04-01
The objectives were to estimate variance components and direct (h²) and maternal (m²) heritability of growth in Tropicarne cattle based on a random regression model using B-splines for random effects modeling. Information from 12 890 monthly weighings of 1787 calves, from birth to 24 months of age, was analyzed. The pedigree included 2504 animals. The random effects model included genetic and permanent environmental effects (direct and maternal) of cubic order, and residuals. The fixed effects included contemporary groups (year-season of weighing), sex, and the covariate age of the cow (linear and quadratic). The B-splines were defined on four knots across the growth period analyzed. Analyses were performed with the software Wombat. The phenotypic and residual variances showed similar behavior: from 7 to 12 months of age they had a negative trend, from birth to 6 months and from 13 to 18 months a positive trend, and after 19 months they remained constant. The m² estimates were low and near zero, with an average of 0.06 in an interval of 0.04 to 0.11; the h² estimates were also close to zero, with an average of 0.10 in an interval of 0.03 to 0.23.
Backfitting in Smoothing Spline ANOVA, with Application to Historical Global Temperature Data
Luo, Zhen
In the attempt to estimate the temperature history of the earth using surface observations, various biases can exist. An important source of bias is the incompleteness of sampling over both time and space. A few methods have been proposed to deal with this problem. Although they can correct some biases resulting from incomplete sampling, they have ignored other significant biases. In this dissertation, a smoothing spline ANOVA approach, a multivariate function estimation method, is proposed to deal simultaneously with various biases resulting from incomplete sampling. An additional advantage of this method is that the various components of the estimated temperature history can be obtained with a limited amount of information stored. The method can also be used for detecting erroneous observations in the database. The method is illustrated through an example of modeling winter surface air temperature as a function of year and location. Extensions to more complicated models are discussed. The linear system associated with the smoothing spline ANOVA estimates is too large to be solved by full matrix decomposition methods. A computational procedure combining the backfitting (Gauss-Seidel) algorithm and the iterative imputation algorithm is proposed. This procedure takes advantage of the tensor product structure in the data to make the computation feasible in an environment of limited memory. Various related issues are discussed, e.g., the computation of confidence intervals and techniques to speed up the convergence of the backfitting algorithm, such as collapsing and successive over-relaxation.
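The backfitting (Gauss-Seidel) idea at the heart of the computational procedure can be sketched on a small additive model y = f1(x1) + f2(x2) + noise: each component is refitted in turn to the residual left by the others. Cheap centered cubic-polynomial fits stand in here for the dissertation's smoothing spline components; the data and smoother choice are ours:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
x1 = rng.uniform(-1.0, 1.0, n)
x2 = rng.uniform(-1.0, 1.0, n)
f1_true = x1 ** 2 - 1.0 / 3.0                  # centered additive components
f2_true = x2 ** 3 - 0.6 * x2
y = f1_true + f2_true + 0.1 * rng.standard_normal(n)

def smooth(x, r):
    """Stand-in smoother: centered cubic polynomial fit of residuals r on x."""
    fit = np.polyval(np.polyfit(x, r, 3), x)
    return fit - fit.mean()

f1 = np.zeros(n)
f2 = np.zeros(n)
for _ in range(20):                            # backfitting sweeps
    f1 = smooth(x1, y - y.mean() - f2)         # refit f1 holding f2 fixed
    f2 = smooth(x2, y - y.mean() - f1)         # refit f2 holding f1 fixed

fitted = y.mean() + f1 + f2
rmse = np.sqrt(np.mean((fitted - (f1_true + f2_true)) ** 2))
```

Each sweep only ever smooths a single coordinate, so no full design matrix is formed; that is the memory economy that makes the approach viable for the large tensor-product systems in the dissertation.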
Splines and polynomial tools for flatness-based constrained motion planning
Suryawan, Fajar; De Doná, José; Seron, María
2012-08-01
This article addresses the problem of trajectory planning for flat systems with constraints. Flat systems have the useful property that the input and the state can be completely characterised by the so-called flat output. We propose a spline parametrisation for the flat output, the performance output, the states and the inputs. Using this parametrisation, the problem of constrained trajectory planning can be cast into a simple quadratic programming problem. An important result is that the B-spline parametrisation used gives exact results for constrained linear continuous-time systems. The result is exact in the sense that the constrained signal can be made arbitrarily close to the boundary without intersampling issues (as one would have in sampled-data systems). Simulation examples are presented, involving the generation of rest-to-rest trajectories. In addition, an experimental result of the method is presented, where two methods to generate trajectories for a magnetic-levitation (maglev) system in the presence of constraints are compared and each method's performance is discussed. The first method uses the nonlinear model of the plant, which turns out to belong to the class of flat systems. The second method uses a linearised version of the plant model around an operating point. In each case, a continuous-time description is used. The experimental results on a real maglev system reported here show that, in most scenarios, the nonlinear and linearised models produce almost indistinguishable trajectories.
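One convenient feature of the B-spline parametrisation for rest-to-rest planning can be shown directly: with a clamped knot vector, repeating the first and last control points forces zero velocity at both ends, so boundary conditions become simple equality constraints on the coefficients that a QP (as in the article) can optimise over. The knots and control values below are our illustration, not the article's maglev data:

```python
import numpy as np
from scipy.interpolate import BSpline

T = 2.0                                            # trajectory duration
k = 3                                              # cubic flat-output spline
knots = np.concatenate([[0.0] * 4, [0.5, 1.0, 1.5], [T] * 4])   # clamped
# duplicated end control points -> zero first derivative at t = 0 and t = T
ctrl = np.array([0.0, 0.0, 0.2, 0.5, 0.8, 1.0, 1.0])

pos = BSpline(knots, ctrl, k)                      # flat-output trajectory
vel = pos.derivative()                             # exact derivative spline

start, end = pos(0.0), pos(T)                      # rest-to-rest endpoints
v0, vT = vel(0.0), vel(T)                          # both should vanish
```

Because derivatives of a B-spline are again B-splines in the same coefficients, state and input constraints along the whole continuous-time trajectory translate into linear constraints on `ctrl`, which is what makes the planning problem a plain quadratic program.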
Verb-Noun Collocations in Written Discourse of Iranian EFL Learners
Directory of Open Access Journals (Sweden)
Fatemeh Ebrahimi-Bazzaz
2015-07-01
When native speakers of English write, they employ both grammatical rules and collocations. Collocations are words that are present in the memory of native speakers as ready-made prefabricated chunks. Non-native speakers who wish to acquire native-like fluency should give appropriate attention to collocations in writing in order not to produce sentences that native speakers may consider odd. The present study explores the use of verb-noun collocations in the written discourse of Iranian learners of English as a foreign language (EFL) from one academic year to the next. To measure the use of verb-noun collocations in written discourse, there was a 60-minute task of writing a story based on a series of six pictures, whereby for each picture three verb-noun collocations were measured, and nouns were provided to limit the choice of collocations. The results of the ANOVA for the research question indicated that there was a significant difference in the use of lexical verb-noun collocations in written discourse both between and within the four academic years. The results of post hoc multiple comparison tests confirmed that the means are significantly different between the first year and the third and fourth years, between the second and the fourth, and between the third and the fourth academic years, indicating substantial development in verb-noun collocation proficiency. The vital implication is that the learners could use verb-noun collocations in the productive skill of writing.
Directory of Open Access Journals (Sweden)
Yan Li
2018-01-01
A bidirectional B-spline QR method (BB-sQRM) for the study of crack control in reinforced concrete (RC) beams embedded with shape memory alloy (SMA) wires is presented. In the proposed method, the discretization is performed with a set of spline nodes in two directions of the plane model, and structural displacement fields are constructed by the linear combination of the products of cubic B-spline interpolation functions. To derive the elastoplastic stiffness equation of the RC beam, an explicit form is utilized to express the elastoplastic constitutive law of concrete materials. The proposed model is compared with an ANSYS model in several numerical examples. The results not only show that the solutions given by the BB-sQRM are very close to those given by the finite element method (FEM) but also demonstrate the high efficiency and low computational cost of the BB-sQRM. Meanwhile, five parameters, namely depth-span ratio, thickness of concrete cover, reinforcement ratio, and prestrain and eccentricity of SMA wires, are investigated to examine their effects on crack control. The results show that the depth-span ratio of RC beams and the prestrain and eccentricity of SMA wires have a significant influence on the control performance of beam cracks.
Directory of Open Access Journals (Sweden)
Marko Wilke
2018-02-01
This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php. Keywords: MRI template creation, Multivariate adaptive regression splines, DARTEL, Structural MRI
Optimization of Low-Thrust Spiral Trajectories by Collocation
Falck, Robert D.; Dankanich, John W.
2012-01-01
As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.
FC LSEI WNNLS, Least-Square Fitting Algorithms Using B Splines
International Nuclear Information System (INIS)
Hanson, R.J.; Haskell, K.H.
1989-01-01
1 - Description of problem or function: FC allows a user to fit discrete data, in a weighted least-squares sense, using piecewise polynomial functions represented by B-splines on a given set of knots. In addition to the least-squares fitting of the data, equality, inequality, and periodic constraints at a discrete, user-specified set of points can be imposed on the fitted curve or its derivatives. The subprograms LSEI and WNNLS solve the linearly-constrained least-squares problem. LSEI solves the class of problems with general inequality constraints and, if requested, obtains a covariance matrix of the solution parameters. WNNLS solves the class of problems with non-negativity constraints. It is anticipated that most users will find LSEI suitable for their needs; however, users with inequalities that are single bounds on variables may wish to use WNNLS. 2 - Method of solution: The discrete data are fit by a linear combination of piecewise polynomial curves, which leads to a linear least-squares system of algebraic equations. Additional information is expressed as a discrete set of linear inequality and equality constraints on the fitted curve, which leads to a linearly-constrained least-squares system of algebraic equations. The solution of this system is the main computational problem solved.
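The fitting problem described above, least-squares B-spline coefficients under sign constraints in the style of WNNLS, can be sketched with standard SciPy tools; the knot vector, test function, and bounds below are illustrative assumptions, not the original Fortran interface:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import lsq_linear

# Clamped cubic B-spline basis on [0, 1]
k = 3
t = np.r_[[0.0] * k, np.linspace(0.0, 1.0, 7), [1.0] * k]
n = len(t) - k - 1  # number of basis functions

def design(x):
    """Column j is the j-th B-spline basis function evaluated at x."""
    eye = np.eye(n)
    return np.column_stack([BSpline(t, eye[j], k)(x) for j in range(n)])

# Noisy samples of a non-negative target curve
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = np.sin(np.pi * x) + 0.01 * rng.standard_normal(x.size)

# WNNLS-style fit: weighted least squares with non-negative coefficients
A = design(x)
res = lsq_linear(A, y, bounds=(0.0, np.inf))
fit = A @ res.x
```

Equality and inequality constraints at specified points (the LSEI case) would enter as additional rows on the coefficient vector rather than bounds.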
Directory of Open Access Journals (Sweden)
I Nyoman Budiantara
2006-01-01
Full Text Available Regression analysis is constructed for capturing the influence of independent variables on dependent ones. It can be done by looking at the relationship between those variables. This task of approximating the mean function can be done in essentially two ways. The quite often used parametric approach is to assume that the mean curve has some prespecified functional form. Alternatively, the nonparametric approach, i.e., one without reference to a specific form, is used when there is no information about the form of the regression function (Haerdle, 1990). The nonparametric approach therefore has more flexibility than the parametric one. The aim of this research is to find the best-fit model that captures the relationship between the admission test score and the GPA. This particular data set was taken from the Department of Design Communication and Visual, Petra Christian University, Surabaya, for the year 1999. Both approaches were used here. In the parametric approach, we use simple linear, quadratic, and cubic regression, and in the nonparametric one, we use B-splines and Multivariate Adaptive Regression Splines (MARS). Overall, the best model was chosen based on the maximum coefficient of determination. However, for MARS, the best model was chosen based on the GCV, minimum MSE, and maximum coefficient of determination. Abstract in Bahasa Indonesia (translated): Regression analysis is used to see the effect of independent variables on the dependent variable, by first examining the pattern of the relationship between those variables. This can be done through two approaches. The most common and frequently used approach is the parametric one, which assumes that the form of the model is predetermined. If there is no information at all about the form of the regression function, the nonparametric approach is used (Haerdle, 1990). Since this approach does not depend on the assumption of a particular curve form, it provides greater flexibility. The aim of this research
Analytic regularization of uniform cubic B-spline deformation fields.
Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C
2012-01-01
Image registration is inherently ill-posed, and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is the penalty of thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61 and 1371 times faster than a numerical central-differencing solution.
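The central observation above, that bending energy is a quadratic form c^T Q c in the spline coefficients, can be illustrated in one dimension, where the analogous energy is the integral of the squared second derivative; the knot vector and quadrature grid below are arbitrary choices for the sketch:

```python
import numpy as np
from scipy.interpolate import BSpline

# Clamped cubic B-spline basis on [0, 1]
k = 3
t = np.r_[[0.0] * k, np.linspace(0.0, 1.0, 6), [1.0] * k]
n = len(t) - k - 1

# Trapezoid quadrature grid and weights for the energy integral
xq = np.linspace(0.0, 1.0, 2001)
w = np.full(xq.size, xq[1] - xq[0])
w[0] *= 0.5
w[-1] *= 0.5

# Second derivative of every basis function on the quadrature grid
D2 = np.column_stack([BSpline(t, np.eye(n)[j], k)(xq, nu=2) for j in range(n)])

# Q_ij = \int B_i''(x) B_j''(x) dx, assembled once, independent of c
Q = D2.T @ (w[:, None] * D2)

# Energy via the quadratic form equals the directly integrated energy
c = np.random.default_rng(1).standard_normal(n)
energy_quadratic = float(c @ Q @ c)
energy_direct = float(w @ (BSpline(t, c, k)(xq, nu=2) ** 2))
```

The speedup reported in the paper comes from precomputing Q analytically (in 3-D, for the thin-plate penalty) so that each energy evaluation is a matrix-vector operation.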
Data approximation using a blending type spline construction
International Nuclear Information System (INIS)
Dalmo, Rune; Bratlie, Jostein
2014-01-01
Generalized expo-rational B-splines (GERBS) form a blending-type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is to partition the data set into subsets and fit a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on the detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and are used to construct local surface patches, are approximated from the discrete data using finite differences.
Sequential bayes estimation algorithm with cubic splines on uniform meshes
International Nuclear Information System (INIS)
Hossfeld, F.; Mika, K.; Plesser-Walk, E.
1975-11-01
After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de
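A sequential estimator of the kind outlined above can be sketched as a recursive least-squares update, which refines the parameter estimate one observation at a time without refitting the whole series; the line-fitting demo below is an invented toy problem, not the paper's cubic-spline formulation:

```python
import numpy as np

def rls_update(theta, P, phi, y):
    """One recursive least-squares step for a new observation (phi, y).

    theta: current parameter estimate; P: current (scaled) covariance.
    """
    Pphi = P @ phi
    gain = Pphi / (1.0 + phi @ Pphi)          # Kalman-style gain vector
    theta = theta + gain * (y - phi @ theta)  # correct by prediction error
    P = P - np.outer(gain, Pphi)              # shrink covariance
    return theta, P

# Online fit of y = 1 + 2x from noisy streaming samples
rng = np.random.default_rng(2)
theta = np.zeros(2)
P = 1e6 * np.eye(2)  # large prior covariance = weak prior
for _ in range(500):
    x = rng.uniform(-1.0, 1.0)
    phi = np.array([1.0, x])                  # regressor (here: line basis)
    y = 1.0 + 2.0 * x + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
```

For spline curve fitting, `phi` would hold the basis-function values at the new abscissa; the recursion itself is unchanged.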
Analysis of an upstream weighted collocation approximation to the transport equation
International Nuclear Information System (INIS)
Shapiro, A.; Pinder, G.F.
1981-01-01
The numerical behavior of a modified orthogonal collocation method, as applied to the transport equations, can be examined through the use of a Fourier series analysis. The necessity of such a study becomes apparent in the analysis of several techniques which emulate classical upstream weighting schemes. These techniques are employed in orthogonal collocation and other numerical methods as a means of handling parabolic partial differential equations with significant first-order terms. Divergent behavior can be shown to exist in one upstream weighting method applied to orthogonal collocation.
USING SPLINE FUNCTIONS FOR THE SUBSTANTIATION OF TAX POLICIES BY LOCAL AUTHORITIES
Directory of Open Access Journals (Sweden)
Otgon Cristian
2011-07-01
Full Text Available The paper aims to approach innovative financial instruments for the management of public resources. In the category of these innovative tools, polynomial spline functions have been included, used for budgetary sizing in the substantiation of fiscal and budgetary policies. In order to use polynomial spline functions, a number of steps were carried out: the establishment of nodes, the calculation of the specific coefficients corresponding to the spline functions, and the development and determination of the errors of approximation. This paper also extrapolates series of property tax data using polynomial spline functions of order I. For the spline implementation, two series of data were taken, one referring to property tax as a resultative variable and the second one referring to building tax, resulting in a correlation indicator R = 0.95. Moreover, the calculation of spline functions is easy to carry out and, due to small errors of approximation, they have great predictive power, much better than the ordinary least squares method. The research was conducted in several steps, namely observation, construction of the data series, and processing of the data with spline functions. The data form a daily series gathered from the budget account, referring to building tax and property tax. The added value of this paper is given by the possibility of avoiding deficits by using spline functions as innovative instruments in public finance; the original contribution is the average of splines resulting from the series of data. The research results lead to the conclusion that polynomial spline functions are recommended for the elaboration of fiscal and budgetary policies, due to the relatively small errors obtained in the extrapolation of economic processes and phenomena. Future research directions include the study of polynomial spline functions of second-order, third
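Extrapolation with an order-I (piecewise linear) spline, as used above for the tax series, can be sketched as follows; the daily receipts are invented illustrative numbers, not the paper's budget-account data:

```python
import numpy as np
from scipy.interpolate import interp1d

# Hypothetical daily building-tax receipts (illustrative, perfectly linear)
day = np.arange(1.0, 11.0)
tax = 100.0 + 5.0 * day

# Order-I spline: piecewise linear interpolant, extended past the last node
s1 = interp1d(day, tax, kind="linear", fill_value="extrapolate")

forecast = float(s1(12.0))  # extrapolate two days beyond the observed series
```

On real receipts the fit is not exact, and the approximation error of the spline (as the paper notes) is what determines the quality of the extrapolated budget figures.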
Farrell, Patricio; Pestana, Jennifer
2015-01-01
. However, the benefit of a guaranteed symmetric positive definite block system comes at a high computational cost. This cost can be alleviated somewhat by considering compactly supported RBFs and a multiscale technique. But the condition number and sparsity
On the optimal polynomial approximation of stochastic PDEs by galerkin and collocation methods
Beck, Joakim; Tempone, Raul; Nobile, Fabio; Tamellini, Lorenzo
2012-01-01
In this work we focus on the numerical approximation of the solution u of a linear elliptic PDE with stochastic coefficients. The problem is rewritten as a parametric PDE and the functional dependence of the solution on the parameters is approximated by multivariate polynomials. We first consider the stochastic Galerkin method, and rely on sharp estimates for the decay of the Fourier coefficients of the spectral expansion of u on an orthogonal polynomial basis to build a sequence of polynomial subspaces that features better convergence properties, in terms of error versus number of degrees of freedom, than standard choices such as Total Degree or Tensor Product subspaces. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids. Numerical results show the effectiveness of the newly introduced polynomial spaces and sparse grids. © 2012 World Scientific Publishing Company.
Parallel iterative solution of the Hermite Collocation equations on GPUs II
International Nuclear Information System (INIS)
Vilanakis, N; Mathioudakis, E
2014-01-01
Hermite Collocation is a high order finite element method for Boundary Value Problems modelling applications in several fields of science and engineering. Application of this integration-free numerical solver to linear BVPs results in a large and sparse general system of algebraic equations, suggesting the usage of an efficient iterative solver, especially for realistic simulations. In part I of this work an efficient parallel algorithm of the Schur complement method coupled with the Bi-Conjugate Gradient Stabilized (BiCGSTAB) iterative solver was designed for multicore computing architectures with a Graphics Processing Unit (GPU). In the present work the proposed algorithm has been extended to high performance computing environments consisting of multiprocessor machines with multiple GPUs. Since this is a distributed GPU and shared CPU memory parallel architecture, a hybrid memory treatment is needed for the development of the parallel algorithm. The realization of the algorithm took place on a multiprocessor machine HP SL390 with Tesla M2070 GPUs using the OpenMP and OpenACC standards. Execution time measurements reveal the efficiency of the parallel implementation.
On the optimal polynomial approximation of stochastic PDEs by galerkin and collocation methods
Beck, Joakim
2012-09-01
In this work we focus on the numerical approximation of the solution u of a linear elliptic PDE with stochastic coefficients. The problem is rewritten as a parametric PDE and the functional dependence of the solution on the parameters is approximated by multivariate polynomials. We first consider the stochastic Galerkin method, and rely on sharp estimates for the decay of the Fourier coefficients of the spectral expansion of u on an orthogonal polynomial basis to build a sequence of polynomial subspaces that features better convergence properties, in terms of error versus number of degrees of freedom, than standard choices such as Total Degree or Tensor Product subspaces. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids. Numerical results show the effectiveness of the newly introduced polynomial spaces and sparse grids. © 2012 World Scientific Publishing Company.
Collocation mismatch uncertainties in satellite aerosol retrieval validation
Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Rodríguez, Edith; Saponaro, Giulia; de Leeuw, Gerrit
2018-02-01
Satellite-based aerosol products are routinely validated against ground-based reference data, usually obtained from sun photometer networks such as AERONET (AEROsol RObotic NETwork). In a typical validation exercise, a spatial sample of the instantaneous satellite data is compared against a temporal sample of the point-like ground-based data. The observations do not correspond to exactly the same column of the atmosphere at the same time, and the representativeness of the reference data depends on the spatiotemporal variability of the aerosol properties in the samples. The associated uncertainty is known as the collocation mismatch uncertainty (CMU). The validation results depend on the sampling parameters. While small samples involve less variability, they are more sensitive to the inevitable noise in the measurement data. In this paper we systematically study the effect of the sampling parameters on the validation of the AATSR (Advanced Along-Track Scanning Radiometer) aerosol optical depth (AOD) product against AERONET data and on the associated collocation mismatch uncertainty. To this end, we study the spatial AOD variability in the satellite data, compare it against the corresponding values obtained from densely located AERONET sites, and assess the possible reasons for the observed differences. We find that the spatial AOD variability in the satellite data is approximately 2 times larger than in the ground-based data, and that the spatial variability correlates only weakly with that of AERONET for short distances. We interpret that only half of the variability in the satellite data is due to the natural variability in the AOD, and the rest is noise due to retrieval errors. However, for larger distances (~ 0.5°) the correlation improves as the noise is averaged out, and the day-to-day changes in regional AOD variability are well captured. Furthermore, we assess the usefulness of the spatial variability of the satellite AOD data as an estimate of CMU by comparing the
International Nuclear Information System (INIS)
Maschek, W.
1976-07-01
A modified collocation method is used for solving the one group criticality problem for a uniform multiplying slab. The critical parameters and the angular fluxes for a number of slabs are displayed and compared with previously published values. (orig.) [de
An adaptive multi-element probabilistic collocation method for statistical EMC/EMI characterization
Yücel, Abdulkadir C.; Bagci, Hakan; Michielssen, Eric
2013-01-01
polynomial chaos expansion of the observables. While constructing local polynomial expansions on each subdomain, a fast integral-equation-based deterministic field-cable-circuit simulator is used to compute the observable values at the collocation
Bäck, Joakim; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul
2010-01-01
Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods
A stochastic collocation method for the second order wave equation with a discontinuous random speed
Motamed, Mohammad; Nobile, Fabio; Tempone, Raul
2012-01-01
In this paper we propose and analyze a stochastic collocation method for solving the second order wave equation with a random wave speed and subjected to deterministic boundary and initial conditions. The speed is piecewise smooth in the physical
High-frequency collocations of nouns in research articles across eight disciplines
Directory of Open Access Journals (Sweden)
Matthew Peacock
2012-04-01
Full Text Available This paper describes a corpus-based analysis of the distribution of the high-frequency collocates of abstract nouns in 320 research articles across eight disciplines: Chemistry, Computer Science, Materials Science, Neuroscience, Economics, Language and Linguistics, Management, and Psychology. Disciplinary variation was also examined – very little previous research seems to have investigated this. The corpus was analysed using WordSmith Tools. The 16 highest-frequency nouns across all eight disciplines were identified, followed by the highest-frequency collocates for each noun. Five disciplines showed over 50% variance from the overall results. Conclusions are that the differing patterns revealed are disciplinary norms and represent standard terminology within the disciplines arising from the topics discussed, research methods, and content of discussions. It is also concluded that the collocations are an important part of the meanings and functions of the nouns, and that this evidence of sharp discipline differences underlines the importance of discipline-specific collocation research.
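Window-based collocate counting of the kind used in such corpus studies can be sketched in a few lines; the node word, window size, and toy text are illustrative, and WordSmith Tools performs the same counting at corpus scale:

```python
import re
from collections import Counter

# Toy "corpus" and parameters (illustrative only)
text = "the results show that the results of the analysis support the results"
node = "results"   # the noun under study
window = 2         # collocates counted within +/- 2 tokens of the node

tokens = re.findall(r"[a-z]+", text.lower())
collocates = Counter()
for i, w in enumerate(tokens):
    if w == node:
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        # count every token in the window except the node occurrence itself
        collocates.update(tok for j, tok in enumerate(tokens[lo:hi], lo) if j != i)

top = collocates.most_common(3)
```

A real study would additionally rank candidates by an association measure (e.g. MI or log-likelihood) rather than raw frequency, and filter by part of speech.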
Directory of Open Access Journals (Sweden)
Irena SRDANOVIĆ
2011-05-01
Full Text Available In this paper, we explore the presence of collocational relations in computer-assisted language learning systems and other language resources for the Japanese language, on the one hand, and in Japanese language learning textbooks and wordlists, on the other. After introducing the importance of learning collocational relations in a foreign language, we examine their coverage in the various learners' resources for the Japanese language. We concentrate in particular on a few collocations at the beginner's level, where we demonstrate their treatment across various resources. Special attention is paid to what are referred to as unpredictable collocations, which carry a greater foreign-language learning burden than predictable ones.
A new class of interpolatory $L$-splines with adjoint end conditions
Bejancu, Aurelian; Al-Sahli, Reyouf S.
2014-01-01
A thin plate spline surface for interpolation of smooth transfinite data prescribed along concentric circles was recently proposed by Bejancu, using Kounchev's polyspline method. The construction of the new `Beppo Levi polyspline' surface reduces, via separation of variables, to that of a countable family of univariate $L$-splines, indexed by the frequency integer $k$. This paper establishes the existence, uniqueness and variational properties of the `Beppo Levi $L$-spline' schemes correspond...
Communication Collocations of the Lexeme Geld in General and Business German
Directory of Open Access Journals (Sweden)
Mirna Hocenski-Dreiseidl
2010-07-01
Full Text Available The authors aim to analyse and compare the lexeme Geld and its collocations on the grammatical and semantic levels in general and in business German. A special emphasis will be put on the importance of the communicative function that this lexeme and its collocations have in the language of banking. The paper also has a practical purpose. Its applicability in teaching is envisaged to improve the communicative competence of students of economics.
Block Hybrid Collocation Method with Application to Fourth Order Differential Equations
Directory of Open Access Journals (Sweden)
Lee Ken Yap
2015-01-01
Full Text Available The block hybrid collocation method with three off-step points is proposed for the direct solution of fourth order ordinary differential equations. The interpolation and collocation techniques are applied on basic polynomial to generate the main and additional methods. These methods are implemented in block form to obtain the approximation at seven points simultaneously. Numerical experiments are conducted to illustrate the efficiency of the method. The method is also applied to solve the fourth order problem from ship dynamics.
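Direct collocation solution of a fourth-order boundary value problem can be illustrated with SciPy's collocation-based BVP solver by reducing the equation to a first-order system; the clamped-beam test problem y'''' = 1 below is a standard textbook example, not the ship-dynamics problem from the paper:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Reduce y'''' = 1 to the first-order system y0' = y1, y1' = y2, y2' = y3, y3' = 1
def rhs(x, y):
    return np.vstack([y[1], y[2], y[3], np.ones_like(x)])

# Clamped ends: y(0) = y'(0) = y(1) = y'(1) = 0
def bc(ya, yb):
    return np.array([ya[0], ya[1], yb[0], yb[1]])

x = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(rhs, bc, x, np.zeros((4, x.size)))

# Exact solution of the clamped uniformly loaded beam: y = x^2 (1-x)^2 / 24
exact = x**2 * (1.0 - x)**2 / 24.0
err = float(np.max(np.abs(sol.sol(x)[0] - exact)))
```

`solve_bvp` uses a 4th-order collocation scheme on a residual-controlled mesh; the block hybrid method of the paper instead integrates the fourth-order equation directly, without reduction to a first-order system.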
Translating Legal Collocations in Contract Agreements by Iraqi EFL Students-Translators
Directory of Open Access Journals (Sweden)
Muntaha A. Abdulwahid
2017-01-01
Full Text Available Legal translation of contract agreements is a challenge to translators as it involves combining literary translation with technical terminological precision. In translating legal contract agreements, a legal translator must utilize lexical or syntactic precision and, more importantly, pragmatic awareness of the context. This will guarantee an overall communicative process and avoid inconsistency in legal translation. However, the inability of the translator to meet these two functions in translating a contract item not only affects the contractors' comprehension of the contract item but also affects the parties' contractual obligations. In light of this, the purpose of this study was to find out how legal collocations used in contract agreements are translated from Arabic into English by student-translators in terms of (1) purely technical, (2) semi-technical, and (3) everyday vocabulary collocations. For the data collection, a multiple-choice collocation test was administered to 35 EFL Iraqi undergraduate translator-students in order to identify the weaknesses and strengths of their translations and thus decide on the aspects requiring correction. The findings showed that these students had serious problems in translating legal collocations, as they lack the linguistic knowledge and pragmatic awareness needed to achieve the legal meaning and effect. They were also unable to differentiate among the three categories of legal collocations: purely technical, semi-technical, and everyday vocabulary collocations. These students should be exposed to more legal translation practice to obtain the experience needed for their future career.
Age of Acquisition Effects in Chinese EFL learners’ Delexicalized Verb and Collocation Acquisition
Directory of Open Access Journals (Sweden)
Miao Haiyan
2015-05-01
Full Text Available This paper investigates age of acquisition (AoA) effects in the acquisition of delexicalized verbs and collocations by Chinese EFL learners, and explores from the connectionist model the underlying reasons for these learners' acquisition characteristics. The data were collected through a translation test consisting of a delexicalized verb information section and English-Chinese and Chinese-English collocation parts, aiming to assess Chinese EFL learners' receptive and productive abilities respectively. As Chinese EFL is a nationally classroom-based practice beginning in early primary school, the pedagogical value and different phases of acquisition were taken into consideration in designing the translation test. The results show that the effects of AoA are significant not only in the learners' acquisition of individual delexicalized verbs but also in delexicalized collocations. Although learners have long begun to learn delexicalized verbs, their production indicates that early learning does not guarantee full acquisition, because their grasp of delexicalized verbs still stays at the senior middle school level. AoA effects significantly affect the recognition but not the production of collocations. Furthermore, a plateau effect occurs in learners' acquisition of college-level delexicalized collocations, as their recognition and production have no processing advantages over earlier-learned collocations.
Evaluating the performance of collocated optical disdrometers: LPM and PARSIVEL
Angulo-Martinez, Marta; Begueria, Santiago; Latorre, Borja
2017-04-01
Optical disdrometers are present-weather sensors with the ability to provide integrated information on precipitation, such as intensity and reflectivity, together with discrete information on the drop size and velocity distribution (DSVD) of the hydrometeors crossing the laser beam sampling area. These sensors constitute a step forward compared with pluviometers towards a more complete characterisation of precipitation, and their use is spreading in many research fields and applications. Understanding the differences between instruments helps in the selection of a sensor and points out limitations to be fixed in future versions. Using four collocated optical disdrometers, two Laser Precipitation Monitors (LPM, Thies Clima) and two PARSIVEL units, 1-minute measurements of 800 natural rainfall events were compared. Results showed a general agreement in integrated variables, such as intensity or liquid water content. Nevertheless, when comparing raw data, such as the number of particles and the DSVD, great differences were found. The LPM generally measures more and smaller drops than the PARSIVEL, and this difference increases with rainfall intensity. These results may especially affect the reflectivity value each disdrometer provides. A complete description of the measurements obtained, quantifying the differences, is provided, indicating their possible sources.
Multi-element probabilistic collocation method in high dimensions
International Nuclear Information System (INIS)
Foo, Jasmine; Karniadakis, George Em
2010-01-01
We combine multi-element polynomial chaos with analysis of variance (ANOVA) functional decomposition to enhance the convergence rate of polynomial chaos in high dimensions and in problems with low stochastic regularity. Specifically, we employ the multi-element probabilistic collocation method MEPCM, and so we refer to the new method as MEPCM-A. We investigate the dependence of the convergence of MEPCM-A on two decomposition parameters, the polynomial order μ and the effective dimension ν, with ν << N, where N is the nominal dimension. Numerical tests for multi-dimensional integration and for stochastic elliptic problems suggest that ν ≥ μ is required for monotonic convergence of the method. We also employ MEPCM-A to obtain error bars for the piezometric head at the Hanford nuclear waste site under stochastic hydraulic conductivity conditions. Finally, we compare the cost of MEPCM-A against Monte Carlo in several hundred dimensions, and we find MEPCM-A to be more efficient for up to 600 dimensions for a specific multi-dimensional integration problem involving a discontinuous function.
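The basic building block of probabilistic collocation, evaluating an observable at quadrature nodes of the input density and forming weighted sums, can be sketched in one dimension (a single-element, single-variable toy, far simpler than MEPCM-A):

```python
import numpy as np

# Gauss-Hermite nodes/weights for the probabilists' weight exp(-z^2/2);
# normalizing by sqrt(2*pi) turns the weights into probabilities for Z ~ N(0,1)
nodes, weights = np.polynomial.hermite_e.hermegauss(10)
weights = weights / np.sqrt(2.0 * np.pi)

# Observable of the random input (illustrative choice)
def f(z):
    return z**2 + np.sin(z)

# Collocation estimate of E[f(Z)]: exact here, since E[Z^2] = 1, E[sin Z] = 0
mean = float(weights @ f(nodes))
```

Multi-element methods such as MEPCM partition the random space and apply this construction per element, which is what restores convergence for low-regularity problems.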
Transport survey calculations using the spectral collocation method
International Nuclear Information System (INIS)
Painter, S.L.; Lyon, J.F.
1989-01-01
A novel transport survey code has been developed and is being used to study the sensitivity of stellarator reactor performance to various transport assumptions. Instead of following one of the usual approaches, the steady-state transport equations are solved in integral form using the spectral collocation method. This approach effectively combines the computational efficiency of global models with the general nature of 1-D solutions. A compact torsatron reactor test case was used to study the convergence properties and flexibility of the new method. The heat transport model combined Shaing's model for ripple-induced neoclassical transport, the Chang-Hinton model for axisymmetric neoclassical transport, and neo-Alcator scaling for the anomalous electron heat flux. Alpha particle heating, radiation losses, classical electron-ion heat flow, and external heating were included. For the test problem, the method exhibited some remarkable convergence properties. As the number of basis functions was increased, the maximum pointwise error in the integrated power balance decayed exponentially until the numerical noise level was reached. Better than 10% accuracy in the globally-averaged quantities was achieved with only 5 basis functions; better than 1% accuracy was achieved with 10 basis functions. The numerical method was also found to be very general. Extreme temperature gradients at the plasma edge, which sometimes arise from the neoclassical models and are difficult to resolve with finite-difference methods, were easily resolved. 8 refs., 6 figs
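Spectral collocation rests on dense differentiation matrices acting on function values at collocation nodes. A standard Chebyshev construction (following Trefethen's well-known recipe, unrelated to the transport code's specific basis) illustrates the exponential convergence mentioned above:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and nodes x (Trefethen's construction)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)            # Chebyshev points on [-1, 1]
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # negative-sum trick for diagonal
    return D, x

# Differentiating exp(x) at 17 nodes reproduces exp(x) to near machine precision
D, x = cheb(16)
u = np.exp(x)
err = float(np.max(np.abs(D @ u - u)))
```

Applying D (or its powers) at the collocation nodes converts a differential or integro-differential equation into a small dense algebraic system, which is the efficiency the abstract refers to.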
Collocated Dataglyphs for large-message storage and retrieval
Motwani, Rakhi C.; Breidenbach, Jeff A.; Black, John R.
2004-06-01
In contrast to the security and integrity of electronic files, printed documents are vulnerable to damage and forgery due to their physical nature. Researchers at Palo Alto Research Center utilize DataGlyph technology to render digital characteristics to printed documents, which provides them with the facility of tamper-proof authentication and damage resistance. This DataGlyph document is known as GlyphSeal. Limited DataGlyph carrying capacity per printed page restricted the application of this technology to a domain of graphically simple and small-sized single-paged documents. In this paper the authors design a protocol, motivated by techniques from the networking domain and back-up strategies, which extends the GlyphSeal technology to larger-sized, graphically complex, multi-page documents. This protocol provides fragmentation, sequencing and data loss recovery. The Collocated DataGlyph Protocol renders large glyph messages onto multiple printed pages and recovers the glyph data from rescanned versions of the multi-page documents, even when pages are missing, reordered or damaged. The novelty of this protocol is the application of ideas from RAID to the domain of DataGlyphs. The current revision of this protocol is capable of generating at most 255 pages when page recovery is desired, and does not provide enough data density to store highly detailed images in a reasonable amount of page space.
Automatic Shape Control of Triangular B-Splines of Arbitrary Topology
Institute of Scientific and Technical Information of China (English)
Ying He; Xian-Feng Gu; Hong Qin
2006-01-01
Triangular B-splines are powerful and flexible in modeling a broader class of geometric objects defined over arbitrary, non-rectangular domains. Despite their great potential and advantages in theory, practical techniques and computational tools with triangular B-splines are less-developed. This is mainly because users have to handle a large number of irregularly distributed control points over arbitrary triangulation. In this paper, an automatic and efficient method is proposed to generate visually pleasing, high-quality triangular B-splines of arbitrary topology. The experimental results on several real datasets show that triangular B-splines are powerful and effective in both theory and practice.
Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint
Energy Technology Data Exchange (ETDEWEB)
Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.
2015-02-01
Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.
Thin-plate spline analysis of mandibular growth.
Franchi, L; Baccetti, T; McNamara, J A
2001-04-01
The analysis of mandibular growth changes around the pubertal spurt in humans has several important implications for the diagnosis and orthopedic correction of skeletal disharmonies. The purpose of this study was to evaluate mandibular shape and size growth changes around the pubertal spurt in a longitudinal sample of subjects with normal occlusion by means of an appropriate morphometric technique (thin-plate spline analysis). Ten mandibular landmarks were identified on lateral cephalograms of 29 subjects at 6 different developmental phases. The 6 phases corresponded to 6 different maturational stages in cervical vertebrae during accelerative and decelerative phases of the pubertal growth curve of the mandible. Differences in shape between average mandibular configurations at the 6 developmental stages were visualized by means of thin-plate spline analysis and subjected to permutation test. Centroid size was used as the measure of the geometric size of each mandibular specimen. Differences in size at the 6 developmental phases were tested statistically. The results of graphical analysis indicated a statistically significant change in mandibular shape only for the growth interval from stage 3 to stage 4 in cervical vertebral maturation. Significant increases in centroid size were found at all developmental phases, with evidence of a prepubertal minimum and of a pubertal maximum. The existence of a pubertal peak in human mandibular growth, therefore, is confirmed by thin-plate spline analysis. Significant morphological changes in the mandible during the growth interval from stage 3 to stage 4 in cervical vertebral maturation may be described as an upward-forward direction of condylar growth determining an overall "shrinkage" of the mandibular configuration along the measurement of total mandibular length. This biological mechanism is particularly efficient in compensating for major increments in mandibular size at the adolescent spurt.
Preference learning with evolutionary Multivariate Adaptive Regression Spline model
DEFF Research Database (Denmark)
Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll
2015-01-01
This paper introduces a novel approach for pairwise preference learning through combining an evolutionary method with Multivariate Adaptive Regression Spline (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing...... for function approximation as well as being relatively easy to interpret. MARS models are evolved based on their efficiency in learning pairwise data. The method is tested on two datasets that collectively provide pairwise preference data of five cognitive states expressed by users. The method is analysed...
C1 Rational Quadratic Trigonometric Interpolation Spline for Data Visualization
Directory of Open Access Journals (Sweden)
Shengjun Liu
2015-01-01
Full Text Available A new C1 piecewise rational quadratic trigonometric spline with four local positive shape parameters in each subinterval is constructed to visualize given planar data. Constraints are derived on these free shape parameters to generate shape-preserving interpolation curves for positive and/or monotonic data sets. Two of these shape parameters are constrained, while the other two can be set free to interactively control the shape of the curves. Moreover, the order of approximation of the developed interpolant is shown to be O(h³). Numerical experiments demonstrate that the method constructs pleasing shape-preserving interpolation curves efficiently.
Multi-index Stochastic Collocation Convergence Rates for Random PDEs with Parametric Regularity
Haji Ali, Abdul Lateef; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul
2016-01-01
We analyze the recent Multi-index Stochastic Collocation (MISC) method for computing statistics of the solution of a partial differential equation (PDE) with random data, where the random coefficient is parametrized by means of a countable sequence of terms in a suitable expansion. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data, and naturally, the error analysis uses the joint regularity of the solution with respect to both the variables in the physical domain and parametric variables. In MISC, the number of problem solutions performed at each discretization level is not determined by balancing the spatial and stochastic components of the error, but rather by suitably extending the knapsack-problem approach employed in the construction of the quasi-optimal sparse-grids and Multi-index Monte Carlo methods, i.e., we use a greedy optimization procedure to select the most effective mixed differences to include in the MISC estimator. We apply our theoretical estimates to a linear elliptic PDE in which the log-diffusion coefficient is modeled as a random field, with a covariance similar to a Matérn model, whose realizations have spatial regularity determined by a scalar parameter. We conduct a complexity analysis based on a summability argument showing algebraic rates of convergence with respect to the overall computational work. The rate of convergence depends on the smoothness parameter, the physical dimensionality and the efficiency of the linear solver. Numerical experiments show the effectiveness of MISC in this infinite dimensional setting compared with the Multi-index Monte Carlo method and compare the convergence rate against the rates predicted in our theoretical analysis. © 2016 SFoCM
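The greedy knapsack-style selection described above can be illustrated with a toy example. Each candidate mixed difference carries an estimated error contribution dE and a work cost dW (the numbers below are invented for illustration; real MISC estimates these from regularity assumptions), and differences are added in order of decreasing profit dE/dW until a work budget is exhausted.

```python
# Toy version of the greedy "knapsack" selection behind the MISC estimator:
# candidates are mixed differences indexed by (spatial level, stochastic
# level); dE and dW values below are made up for illustration.

def select_mixed_differences(candidates, work_budget):
    """candidates: list of (name, dE, dW); greedily pick by profit dE/dW."""
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    chosen, spent = [], 0.0
    for name, dE, dW in ranked:
        if spent + dW <= work_budget:
            chosen.append(name)
            spent += dW
    return chosen, spent

cands = [("(0,0)", 1.00, 1.0),   # coarsest difference: cheap, big payoff
         ("(1,0)", 0.30, 2.0),   # refine the spatial discretization
         ("(0,1)", 0.25, 2.0),   # refine the stochastic quadrature
         ("(1,1)", 0.05, 4.0)]   # mixed refinement: expensive, small payoff
picked, work = select_mixed_differences(cands, work_budget=5.0)
```

The expensive mixed difference (1,1) is dropped, reflecting the key MISC observation that joint refinements contribute little error reduction per unit work when the solution has joint regularity.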
Directory of Open Access Journals (Sweden)
Saudin Saudin
2017-05-01
Full Text Available The important role of collocation in learners’ language proficiency has been widely acknowledged. In Systemic Functional Linguistics (SFL), collocation is known as one prominent member of the superordinate lexical cohesion, which contributes significantly to textual coherence, together with grammatical cohesion and structural cohesion (Halliday & Hasan, 1985). Collocation is also viewed as the hallmark of truly advanced English learners, since the higher the learners’ proficiency, the more they tend to use collocation (Bazzaz & Samad, 2011; Hsu, 2007; Zhang, 1993). Further, knowledge of collocation is regarded as part of native speakers’ communicative competence (Bazzaz & Samad, 2011), and lack of this knowledge is the most important sign of foreignness among foreign language learners (McArthur, 1992; McCarthy, 1990). Taking the importance of collocation into account, this study aims to shed light on Indonesian EFL learners’ levels of collocational competence. In the study, collocational competence is restricted to v+n and adj+n collocations but broken down into productive and receptive competence, about which little work has been done (Henriksen, 2013). For this purpose, 49 second-year students of an English department in a state polytechnic were chosen as the subjects. Two sets of tests (filling in the blanks and multiple-choice) were administered to obtain data on the subjects’ levels of productive and receptive competence and to find out which type was more problematic for the learners. The test instruments were designed with reference to Brashi’s (2006) and Koya’s (2003) test models. In the analysis of the data, an interpretive-qualitative method was used primarily to obtain broad explanatory information. The data analysis showed that the scores of productive competence were lower than those of receptive competence for both v+n and adj+n collocations. The analysis also revealed that the scores of productive
Directory of Open Access Journals (Sweden)
Corrado Dimauro
2010-11-01
Full Text Available Test-day records for milk yield of 57,390 first-lactation Canadian Holsteins were analyzed with a linear model that included the fixed effects of herd-test date and days in milk (DIM) interval nested within age and calving season. Residuals from this model were analyzed as a new variable and fitted with a five-parameter model, fourth-order Legendre polynomials, and linear, quadratic and cubic spline models with three knots. The fit of the models was rather poor, with about 30-40% of the curves showing an adjusted R-square lower than 0.20 across all models. Results underline a great difficulty in modelling individual deviations around the mean curve for milk yield. However, the Ali and Schaeffer (five-parameter) model and the fourth-order Legendre polynomials were able to detect two basic shapes of individual deviations around the mean curve. Quadratic and, especially, cubic spline functions had better fitting performance but poor predictive ability, because their great flexibility produces abrupt changes in the estimated curve when data are missing. Parametric and orthogonal polynomials seem to be robust and reliable from this standpoint.
Cortes, Adriano Mauricio
2016-10-01
The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity-pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to the discretized Stokes problem, these spaces generate a symmetric, indefinite saddle-point linear system. The iterative method of choice for such systems is the Generalized Minimal Residual (GMRES) method. This method lacks robustness, and one remedy is to use preconditioners. For linear systems of saddle-point type, a large family of preconditioners can be obtained from a block factorization of the system. In this paper, we show how the nesting of “black-box” solvers and preconditioners can be put together in a block triangular strategy to build a scalable block preconditioner for the Stokes system discretized by divergence-conforming B-splines. Besides the well-known cavity flow problem, we used as benchmarks flows defined on complex geometries: an eccentric annulus and a hollow torus with an eccentric annular cross-section.
Luo, G. Y.; Osypiw, D.; Irle, M.
2003-05-01
The dynamic behaviour of wood machining processes affects the surface-finish quality of machined workpieces. To meet the requirements of increased production efficiency and improved product quality, surface-quality information is needed for enhanced process control. However, current methods, which rely on expensive devices or sophisticated designs, may not be suitable for industrial real-time application. This paper presents a novel approach to surface-quality evaluation by on-line vibration analysis using an adaptive spline wavelet algorithm, which is based on the excellent time-frequency localization of B-spline wavelets. A series of experiments was performed to extract the feature of interest: the correlation between amplitude changes in the relevant vibration frequency band(s) and surface quality. The experimental results demonstrate that the change of amplitude in selected frequency bands with variable (linear and non-linear) resolution reflects the quality of the surface finish, and that the root sum square of the wavelet power spectrum is a good indicator of surface quality. Thus, surface quality can be estimated and quantified at an average level in real time. The results can be used to regulate and optimize the machine's feed speed while maintaining a constant spindle motor speed during cutting. This will lead to higher-level control and machining rates while keeping dimensional integrity and surface finish within specification.
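The band-power feature can be sketched as follows. This is a simplified stand-in: a Haar wavelet replaces the paper's B-spline wavelets, the signals are synthetic, and "root sum square of the wavelet power spectrum" is interpreted as the root sum square of the per-band detail powers.

```python
import numpy as np

# Sketch of the feature-extraction idea: decompose a vibration signal into
# octave frequency bands with a discrete wavelet transform, take the power
# in each band, and use the root sum square of the band powers as a
# surface-quality indicator.  Haar stands in for B-spline wavelets here;
# signal length should be divisible by 2**levels.

def haar_band_powers(signal, levels):
    """Per-band detail powers from a multilevel Haar DWT."""
    x = np.asarray(signal, dtype=float)
    powers = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass half-band
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass half-band
        powers.append(float(np.sum(detail ** 2)))
        x = approx
    return powers

def surface_quality_indicator(signal, levels=3):
    """Root sum square of the wavelet band powers."""
    return float(np.sqrt(sum(p ** 2 for p in haar_band_powers(signal, levels))))
```

A smooth (constant) signal yields an indicator of zero, while a rapidly alternating signal concentrates power in the finest band and yields a large value, mirroring the rough-vs-smooth contrast the paper exploits.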
B-spline tight frame based force matching method
Yang, Jianbin; Zhu, Guanhua; Tong, Dudu; Lu, Lanyuan; Shen, Zuowei
2018-06-01
In molecular dynamics simulations, compared with popular all-atom force field approaches, coarse-grained (CG) methods are frequently used for the rapid investigations of long time- and length-scale processes in many important biological and soft matter studies. The typical task in coarse-graining is to derive interaction force functions between different CG site types in terms of their distance, bond angle or dihedral angle. In this paper, an ℓ1-regularized least squares model is applied to form the force functions, which makes additional use of the B-spline wavelet frame transform in order to preserve the important features of force functions. The B-spline tight frames system has a simple explicit expression which is useful for representing our force functions. Moreover, the redundancy of the system offers more resilience to the effects of noise and is useful in the case of lossy data. Numerical results for molecular systems involving pairwise non-bonded, three and four-body bonded interactions are obtained to demonstrate the effectiveness of our approach.
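The ℓ1-regularized least-squares model above can be illustrated with a minimal iterative shrinkage-thresholding (ISTA) solver. This sketch omits the B-spline tight-frame transform inside the ℓ1 term and uses random synthetic data in place of force-matching observations.

```python
import numpy as np

# Minimal ISTA sketch for an l1-regularized least-squares problem,
#   min_x  0.5*||A x - b||^2 + lam*||x||_1,
# the problem class used for force matching above (there, with a B-spline
# tight-frame transform inside the l1 term, omitted here for brevity).

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]               # sparse "force" coefficients
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, b, lam=0.1)
```

The soft-thresholding step is what produces sparse coefficient vectors, which is how such models preserve sharp features of the force functions while suppressing noise.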
B-Spline Approximations of the Gaussian, their Gabor Frame Properties, and Approximately Dual Frames
DEFF Research Database (Denmark)
Christensen, Ole; Kim, Hong Oh; Kim, Rae Young
2017-01-01
We prove that Gabor systems generated by certain scaled B-splines can be considered as perturbations of the Gabor systems generated by the Gaussian, with a deviation within an arbitrary small tolerance whenever the order N of the B-spline is sufficiently large. As a consequence we show that for a...
Jonge, de R.; Zanten, van J.H.
2012-01-01
We investigate posterior contraction rates for priors on multivariate functions that are constructed using tensor-product B-spline expansions. We prove that using a hierarchical prior with an appropriate prior distribution on the partition size and Gaussian prior weights on the B-spline
Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei
2014-01-01
Highlights: • A partial thin-plate smoothing spline model is used to construct the trend surface. • Correction of the spline-estimated trend surface is often necessary in practice. • The Cressman weight is modified and applied in residual correction. • The modified Cressman weight performs better than the Cressman weight. • A method for estimating the error covariance matrix of the gridded field is provided.
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the most important topics of interest to investors is stock price changes. Investors with long-term goals are sensitive to the stock price and its changes and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices. The MARS model, a nonparametric method, is an adaptive regression method suited to problems with high dimensions and several variables. The semi-parametric technique used here is smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) to predict stock prices with both approaches. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price with the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
International Nuclear Information System (INIS)
Hardy, David J.; Schulten, Klaus; Wolff, Matthew A.; Skeel, Robert D.; Xia, Jianlin
2016-01-01
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short.
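The kernel splitting at the heart of multilevel summation can be made concrete with a two-level sketch: 1/r = (1/r - g_a(r)) + g_a(r), where g_a is a smooth "softened" kernel that agrees with 1/r beyond a cutoff a. The quartic softening below is one standard choice; the paper's B-spline interpolation of the smooth parts is not reproduced.

```python
import numpy as np

# Two-level sketch of multilevel summation's kernel splitting: the first
# term (1/r - g_a) is identically zero beyond the cutoff a and is summed
# directly over nearby pairs; the smooth remainder g_a is what gets
# interpolated from grids of increasing coarseness.

def softened(r, a):
    """Smooth kernel equal to 1/r for r >= a; finite and C^2 at r = a."""
    r = np.asarray(r, dtype=float)
    s = r / a
    inside = (1.875 - 1.25 * s**2 + 0.375 * s**4) / a   # quartic softening
    return np.where(r >= a, 1.0 / np.maximum(r, 1e-300), inside)

def short_range(r, a):
    """Short-range remainder 1/r - g_a(r); exactly zero for r >= a."""
    return 1.0 / np.asarray(r, dtype=float) - softened(r, a)
```

Because the two parts sum exactly to 1/r, accuracy is controlled entirely by how well the smooth part is interpolated, which is where the B-spline improvement of the paper enters.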
B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy
Directory of Open Access Journals (Sweden)
Shilpa Dilipkumar
2015-03-01
Full Text Available An iterative image-reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) techniques with a quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several imaging modalities of fluorescence microscopy, such as selective plane illumination microscopy, localization microscopy and STED.
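The properties the abstract highlights, piecewise-polynomial form, smooth (C²) transitions, and compact support, are exactly those of the centered cubic B-spline, evaluated below. How the paper shapes it into a prior potential is not reproduced here.

```python
import numpy as np

# Centered cubic B-spline B_3: a piecewise cubic with compact support
# [-2, 2] and C^2 joins -- the building block behind the B-spline
# potential function discussed above.

def cubic_bspline(x):
    """Centered cubic B-spline B_3, supported on [-2, 2] (vectorized)."""
    x = np.abs(np.atleast_1d(np.asarray(x, dtype=float)))
    out = np.zeros_like(x)
    near = x < 1.0
    far = (x >= 1.0) & (x < 2.0)
    out[near] = (4.0 - 6.0 * x[near] ** 2 + 3.0 * x[near] ** 3) / 6.0
    out[far] = (2.0 - x[far]) ** 3 / 6.0
    return out
```

Integer shifts of B_3 form a partition of unity (the shifted copies sum to 1 everywhere), which is why B-spline expansions reproduce constant backgrounds without bias.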
Impact of WhatsApp on Learning and Retention of Collocation Knowledge among Iranian EFL Learners
Directory of Open Access Journals (Sweden)
Zahra Ashiyan
2016-10-01
Full Text Available In recent years, language learning has increasingly shifted from conventional methods to instrumental applications. Mobile phones let people access and exchange information through chat applications such as WhatsApp, whose distinctive features include exchanging information and enhancing communication, and which support downloading, uploading and storing learning materials and information files. The purpose of the current study was to investigate the use and effect of mobile applications such as WhatsApp on schoolwork and out-of-school work. The Oxford Placement Test (OPT) was administered to 80 learners in order to identify intermediate EFL learners. In total, 60 participants whose scores were 70 or higher were selected as the intermediate level and were divided into experimental and control groups. To check the reliability of the collocation pretest, the test was piloted with 15 learners. The pretest was then administered to measure the learners’ collocation knowledge in both groups. The experimental group used the WhatsApp application to learn and practice new collocations, while the control group did not use any tool for learning them. An immediate posttest was administered after the treatment. The results for each group were statistically evaluated, and the findings showed that the experimental group, which used the WhatsApp application in learning collocations, significantly outperformed the control group on the posttest. Thus, using the WhatsApp application to acquire collocations can reinforce and enhance the process of collocation acquisition and support the retention of collocations. The study also offers pedagogical implications for utilizing mobile applications as an influential instrument
English collocations: A novel approach to teaching the language's last bastion
Directory of Open Access Journals (Sweden)
Rafe S. Zaabalawi
2017-01-01
Full Text Available Collocations are a class of idiomatic expressions composed of a sequence of words which, for mostly arbitrary reasons, occur together in a prescribed order. Collocations are not necessarily grammatical and cannot, in general, be generated through knowledge of rules or formulae. Therefore, they are often not easily mastered by EFL learners and are typically only dealt with during the latter phase of second-language apprenticeship. The literature has mostly examined the phenomenon of collocations from one of two perspectives. First, there are studies focusing on error analysis and contingent pedagogical advice. Second, there is research concerned with theory development, a genre associated with specific methodological limitations. This study reports on data pertaining to a novel approach to learning collocations, one based on a learner's incidental discovery of such structures in written texts. Our research question is: will students who have been introduced to and have practiced specific collocations in reading texts be inclined to use such exemplars appropriately in novel/unfamiliar subsequent contexts? The findings have implications for EFL teachers and those concerned with curriculum development.
Liu, L. H.; Tan, J. Y.
2007-02-01
A least-squares collocation meshless method is employed for solving the radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. Except for the collocation points which are used to construct the trial functions, a number of auxiliary points are also adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution of the problem by minimizing the summation of residuals of all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with the other benchmark approximate solutions. By comparison, the results show that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving the radiative heat transfer in absorbing, emitting and scattering media.
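A minimal concrete instance of least-squares collocation is sketched below, with two simplifications: a polynomial trial space replaces the paper's moving least-squares functions, and a diffusion-type model problem replaces the radiative transfer equation. The structure is the same: residuals are formed at more points (collocation plus auxiliary) than there are unknowns and minimized in the least-squares sense.

```python
import numpy as np

# Least-squares collocation sketch for u'' = -pi^2 sin(pi x) on [0, 1] with
# u(0) = u(1) = 0; the exact solution is u(x) = sin(pi x).  The trial space
# is polynomials of degree `deg`; 40 points supply 42 residual equations
# for 11 unknowns, solved in the least-squares sense.

deg = 10                                    # degree of the polynomial trial space
pts = np.linspace(0.0, 1.0, 40)             # collocation + auxiliary points
f = -np.pi ** 2 * np.sin(np.pi * pts)

A = np.zeros((len(pts) + 2, deg + 1))
rhs = np.concatenate([f, [0.0, 0.0]])
for j in range(2, deg + 1):                 # rows: sum_j c_j j(j-1) x^(j-2) = f(x)
    A[: len(pts), j] = j * (j - 1) * pts ** (j - 2)
A[len(pts), 0] = 1.0                        # boundary condition u(0) = 0
A[len(pts) + 1, :] = 1.0                    # boundary condition u(1) = 0

c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
u = np.polynomial.polynomial.polyval(pts, c)
err = float(np.max(np.abs(u - np.sin(np.pi * pts))))
```

Over-determining the system with auxiliary points is what stabilizes the method, the same role they play in the radiative transfer setting of the paper.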
International Nuclear Information System (INIS)
Islam, Tanvir; Srivastava, Prashant K.
2015-01-01
The cloud ice water path (IWP) is one of the major parameters that strongly influence the earth's radiation budget. Satellite sensors are recognized as valuable tools for measuring the IWP on a global scale. Active sensors, such as the Cloud Profiling Radar (CPR) onboard the CloudSat satellite, have a better capability to measure the ice water content profile, and thus its vertical integral, the IWP, than any passive microwave (MW) or infrared (IR) sensor. In this study, we investigate the retrieval of IWP from MW and IR sensors, including the AMSU-A, MHS, and HIRS instruments onboard the N19 satellite, such that the retrieval is consistent with the CloudSat IWP estimates. This is achieved through collocations between the passive satellite measurements and CloudSat scenes. The potential benefit of synergistic multi-sensor, multi-frequency retrieval is investigated. Two modeling approaches are explored for the IWP retrieval – the generalized linear model (GLM) and the neural network (NN). The investigation has been carried out over both ocean and land surface types. The MW/IR synergy is found to retrieve more accurate IWP than the individual AMSU-A, MHS, or HIRS measurements. Both the GLM and NN approaches were able to exploit the synergistic retrievals. - Highlights: • MW/IR synergy is investigated for IWP retrieval. • The IWP retrieval is modeled using CloudSat collocations. • Two modeling approaches are explored – GLM and NN. • MW/IR synergy performs better than MW-only or IR-only retrieval
Spline-based automatic path generation of welding robot
Institute of Scientific and Technical Information of China (English)
Niu Xuejuan; Li Liangyu
2007-01-01
This paper presents a flexible method for the representation of weld seams based on spline interpolation. With this method, the tool path of a welding robot can be generated automatically from a 3D CAD model. The technique has been implemented and demonstrated in the FANUC Arc Welding Robot Workstation. Following the method, a software system was developed using the VBA of SolidWorks 2006. It offers an interface between SolidWorks and ROBOGUIDE, the off-line programming software of the FANUC robot, combining the strong modeling function of the former with the simulation function of the latter. It also has the capability of communicating with the on-line robot. Experimental results show high accuracy and strong reliability. This method will improve the intelligence and flexibility of the welding robot workstation.
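The spline-interpolation step at the core of such path generation can be sketched with a natural cubic spline through waypoints (applied per coordinate for a 3D path). The waypoint values are invented; the paper's SolidWorks/ROBOGUIDE integration is not reproduced.

```python
import numpy as np

# Natural cubic spline through seam waypoints: solve a tridiagonal system
# for the second derivatives at the knots, then evaluate piecewise cubics.
# For a 3D tool path, apply this once per coordinate against a common
# parameter (e.g. arc length or time).

def natural_cubic_spline(t, y):
    """Return a function evaluating the natural cubic spline through (t, y)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(t) - 1
    h = np.diff(t)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    M[0, 0] = M[n, n] = 1.0                # natural boundary: zero curvature
    for i in range(1, n):
        M[i, i - 1] = h[i - 1]
        M[i, i] = 2.0 * (h[i - 1] + h[i])
        M[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    m = np.linalg.solve(M, rhs)            # second derivatives at the knots

    def ev(x):
        x = np.asarray(x, float)
        i = np.clip(np.searchsorted(t, x) - 1, 0, n - 1)
        d = x - t[i]
        c1 = (y[i + 1] - y[i]) / h[i] - h[i] * (2 * m[i] + m[i + 1]) / 6.0
        return y[i] + c1 * d + m[i] * d**2 / 2.0 + (m[i + 1] - m[i]) * d**3 / (6.0 * h[i])
    return ev

# Example: vertical profile through hypothetical seam waypoints (mm)
s = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 1.5, 0.0])
```

The resulting path is C² continuous, which keeps tool accelerations bounded between waypoints.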
From cardinal spline wavelet bases to highly coherent dictionaries
International Nuclear Information System (INIS)
Andrle, Miroslav; Rebollo-Neira, Laura
2008-01-01
Wavelet families arise by scaling and translations of a prototype function, called the mother wavelet. The construction of wavelet bases for cardinal spline spaces is generally carried out within the multi-resolution analysis scheme. Thus, the usual way of increasing the dimension of the multi-resolution subspaces is by augmenting the scaling factor. We show here that, when working on a compact interval, the identical effect can be achieved without changing the wavelet scale but reducing the translation parameter. By such a procedure we generate a redundant frame, called a dictionary, spanning the same spaces as a wavelet basis but with wavelets of broader support. We characterize the correlation of the dictionary elements by measuring their 'coherence' and produce examples illustrating the relevance of highly coherent dictionaries to problems of sparse signal representation. (fast track communication)
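The coherence measure used above to characterize the dictionary is simply the largest absolute inner product between distinct normalized atoms, as a toy example with generic vectors (not actual spline wavelets) shows: an orthonormal basis has coherence 0, while adding a redundant overlapping atom raises it.

```python
import numpy as np

# Coherence of a dictionary D (atoms as columns): the maximum absolute
# inner product between distinct normalized atoms.  Dictionaries built by
# shrinking the translation step, as above, have broadly overlapping
# atoms and hence high coherence.

def coherence(D):
    """Maximum |<d_i, d_j>| over distinct normalized columns of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)                 # Gram matrix of normalized atoms
    np.fill_diagonal(G, 0.0)              # ignore self-products
    return float(G.max())
```

High coherence is usually a drawback for sparse recovery guarantees, which is why the paper's demonstration that highly coherent spline-wavelet dictionaries remain useful for sparse representation is notable.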
Examination of influential observations in penalized spline regression
Türkan, Semra
2013-10-01
In parametric or nonparametric regression models, the results of regression analysis are affected by anomalous observations in the data set. Thus, detection of these observations is one of the major steps in regression analysis. Such observations can be detected precisely by well-known influence measures, one of which is Pena's statistic. In this study, Pena's approach is formulated for penalized spline regression in terms of ordinary residuals and leverages. Real and artificial data are used to illustrate the effectiveness of Pena's statistic relative to Cook's distance in detecting influential observations. The results of the study clearly reveal that the proposed measure is superior to Cook's distance at detecting these observations in large data sets.
Splines employment for inverse problem of nonstationary thermal conduction
International Nuclear Information System (INIS)
Nikonov, S.P.; Spolitak, S.I.
1985-01-01
An analytical solution has been obtained for an inverse problem of nonstationary thermal conduction which is faced in nonstationary heat transfer data processing when the rewetting in channels with uniform annular fuel element imitators is investigated. In solving the problem both boundary conditions and power density within the imitator are regularized via cubic splines constructed with the use of Reinsch algorithm. The solution can be applied for calculation of temperature distribution in the imitator and the heat flux in two-dimensional approximation (r-z geometry) under the condition that the rewetting front velocity is known, and in one-dimensional r-approximation in cases with negligible axial transport or when there is a lack of data about the temperature disturbance source velocity along the channel
TPSLVM: a dimensionality reduction algorithm based on thin plate splines.
Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming
2014-10-01
Dimensionality reduction (DR) has been considered one of the most significant tools for data analysis. One class of DR algorithms is based on latent variable models (LVMs). LVM-based models can handle the preimage problem easily. In this paper, we propose a new LVM-based DR model, named the thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), the proposed TPSLVM is more powerful, especially when the dimensionality of the latent space is low. TPSLVM is also robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM), as well as their combination, BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.
Improving the River Discharge Calculation Method Using Cubic Spline Interpolation
Directory of Open Access Journals (Sweden)
Budi I. Setiawan
2007-09-01
Full Text Available This paper presents an improved method for calculating river discharge using cubic spline interpolation. The spline is used to describe the river profile as a continuous curve built from measurements of distance across the river and river depth. With this new method, the cross-sectional area and wetted perimeter of the river are computed more easily, quickly, and accurately. The inverse function is also available via the Newton-Raphson method, simplifying the computation of area and perimeter when the river water level is known. The new method can directly calculate river discharge using the Manning formula and produce a rating curve. The paper gives an example of discharge measurement for the Rudeng River in Aceh. This river is about 120 m wide and 7 m deep; at the time of measurement it had a discharge of 41.3 m3/s, and its rating curve follows the formula Q = 0.1649 H^2.884, where Q is the discharge (m3/s) and H is the water height above the river bed (m).
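As a sketch of the rating-curve step, the power law Q = a·H^b quoted in the abstract can be recovered by linear least squares in log space. The stage/discharge pairs below are synthetic, standing in for the spline-derived area and Manning-formula discharges:

```python
import numpy as np

# Hypothetical stage/discharge pairs (H in m, Q in m^3/s); in practice these
# would come from spline-based area/perimeter computations plus Manning's formula.
H = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
Q = 0.1649 * H**2.884          # synthetic data following the abstract's curve

# Fit Q = a * H^b by linear least squares in log space:
#   log Q = log a + b log H
b, log_a = np.polyfit(np.log(H), np.log(Q), 1)
a = np.exp(log_a)
print(round(a, 4), round(b, 3))   # recovers a ≈ 0.1649, b ≈ 2.884
```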
Extending Binary Collocations: Lexicographical Implications of Going beyond the Prototypical a – b
Directory of Open Access Journals (Sweden)
Dušan Gabrovšek
2014-05-01
Full Text Available The paper focuses primarily on the Sinclairian concept of extended units of meaning in general and on extended collocations in particular, investigating their nature and types. Such extended units are extremely varied and diverse; they are regarded as instances of the functioning of the co-selection principle. Some extended forms are used far more commonly than the corresponding prototypical (binary) sequences. The final section delves into the ABCs of extended collocations in the context of lexicography, suggesting that dictionaries should make an effort to include a selection of such strings, especially as examples of use for encoding tasks. Most dictionaries incorporate very few such “loose” units, probably because of a powerful tradition of including chiefly binary collocations and full sentences as examples of use.
THE CASE FOR VERB-ADJECTIVE COLLOCATIONS: CORPUS-BASED ANALYSIS AND LEXICOGRAPHICAL TREATMENT
Directory of Open Access Journals (Sweden)
Moisés Almela
2011-10-01
Full Text Available This article explores a type of co-occurrence pattern which cannot be adequately described by existing models of collocation, and for which combinatory dictionaries have so far failed to provide sufficient information. The phenomenon of “oblique inter-collocation”, as I propose to call it, is characterised by a concatenation of syntagmatic preferences which partially contravenes the habitual grammatical order of semantic selection. In particular, I will examine some of the effects which the verb cause exerts on the distribution of attributive adjectives in the context of specific noun classes. The procedure for detecting and describing patterns of oblique inter-collocation is illustrated by means of the SketchEngine corpus query tools. Based on the data extracted from a large-scale corpus, this paper carries out a critical analysis of the microstructure of the Oxford Collocations Dictionary.
Intensity-based hierarchical elastic registration using approximating splines.
Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C
2014-01-01
We introduce a new hierarchical approach for elastic medical image registration using approximating splines. In order to obtain the dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in the form of analytical solutions of the Navier equation, it can cope very well with both the local and global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed GEBS approximating model is integrated into the elastic hierarchical image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The approximating GEBS registration scheme incorporates anisotropic landmark errors as well as rotation information. The anisotropic landmark localization uncertainties can be estimated directly from the image data, in which case they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is transposed into an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model is applied to register 161 image pairs from a digital mammogram database. The obtained results are very encouraging: the proposed approach significantly improved all registrations, as measured by the mean-square error, relative to an approximating TPS with rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999). The average error per breast tissue pixel was less than 2.23 pixels, compared to 2.46 pixels for Rueckert's method. The proposed hierarchical elastic image registration approach incorporates the GEBS
Joint surface modeling with thin-plate splines.
Boyd, S K; Ronsky, J L; Lichti, D D; Salkauskas, K; Chapman, M A; Salkauskas, D
1999-10-01
Mathematical joint surface models based on experimentally determined data points can be used to investigate joint characteristics such as curvature, congruency, cartilage thickness, and joint contact areas, as well as to provide geometric information well suited for finite element analysis. Commonly, surface modeling methods are based on B-splines, which involve tensor products. These methods have had success; however, they are limited by the complex organizational aspect of working with surface patches and by the difficulty of modeling unordered, scattered experimental data points. An alternative method for mathematical joint surface modeling is presented based on the thin-plate spline (TPS). It has the advantage that it does not involve surface patches and can model scattered data points without experimental data preparation. An analytical surface was developed and modeled with the TPS to quantify its interpolating and smoothing characteristics. Limitations of the TPS include discontinuity of curvature exactly at the experimental surface data points, and numerical problems with data sets in excess of 2000 points. However, suggestions for overcoming these limitations are presented. To test the TPS with real experimental data, the patellofemoral joint of a cat was measured with multistation digital photogrammetry and modeled using the TPS to determine cartilage thicknesses and surface curvature. The cartilage thickness distribution ranged from 100 to 550 microns on the patella and from 100 to 300 microns on the femur. The TPS was found to be an effective tool for modeling joint surfaces because no preparation of the experimental data points was necessary, and the resulting unique function representing the entire surface does not involve surface patches. A detailed algorithm is presented for implementation of the TPS.
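A minimal thin-plate spline interpolator can be sketched in a few lines of NumPy (the scattered sample points and test surface here are hypothetical, not the photogrammetric joint data of the paper):

```python
import numpy as np

def thin_plate_spline(points, values):
    """Fit a 2-D TPS f(x) = sum_i w_i U(|x - p_i|) + a0 + a1 x + a2 y,
    with kernel U(r) = r^2 log r, interpolating scattered values."""
    p = np.asarray(points, float)
    v = np.asarray(values, float)
    n = len(p)

    def U(r):
        with np.errstate(divide="ignore", invalid="ignore"):
            out = r * r * np.log(r)
        return np.where(r > 0, out, 0.0)

    K = U(np.linalg.norm(p[:, None] - p[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), p])          # affine part [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([v, np.zeros(3)])
    coef = np.linalg.solve(A, b)
    w, a = coef[:n], coef[n:]

    def f(q):
        q = np.atleast_2d(np.asarray(q, float))
        k = U(np.linalg.norm(q[:, None] - p[None, :], axis=-1))
        return k @ w + a[0] + q @ a[1:]
    return f

# Hypothetical scattered "surface" samples; the TPS reproduces them exactly,
# with a single function for the whole surface (no surface patches).
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5], [0.2, 0.8]])
z = pts[:, 0] ** 2 + pts[:, 1]
f = thin_plate_spline(pts, z)
print(np.max(np.abs(f(pts) - z)))   # ~0: exact interpolation at the data points
```

The dense (n+3)×(n+3) solve also hints at why the paper reports numerical problems beyond ~2000 points.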
Directory of Open Access Journals (Sweden)
Verónica S. Martínez
2015-12-01
Full Text Available Metabolic flux analysis (MFA) is widely used to estimate intracellular fluxes. Conventional MFA, however, is limited to continuous cultures and the mid-exponential growth phase of batch cultures. Dynamic MFA (DMFA) has emerged to characterize time-resolved metabolic fluxes for the entire culture period. Here, the linear DMFA approach was extended using B-spline fitting (B-DMFA) to estimate mass-balanced fluxes. Smoother fits were achieved using a reduced number of knots and parameters. Additionally, computation time was greatly reduced using a new heuristic algorithm for knot placement. B-DMFA revealed that Chinese hamster ovary cells shifted from 37 °C to 32 °C maintained a constant IgG volume-specific productivity, whereas the productivity for the controls peaked during the mid-exponential growth phase and declined afterward. The observed 42% increase in product titer at 32 °C was explained by prolonged cell growth with high cell viability, a larger cell volume, and a more stable volume-specific productivity. Keywords: Dynamic, Metabolism, Flux analysis, CHO cells, Temperature shift, B-spline curve fitting
Wilke, Marko
2018-02-01
This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php.
Value of the New Spline QTc Formula in Adjusting for Pacing-Induced Changes in Heart Rate
Directory of Open Access Journals (Sweden)
Hirmand Nouraei
2018-01-01
Full Text Available Aims. To determine whether a new QTc calculation based on a spline fit model, derived and validated from a large population, remains stable in the same individual across a range of heart rates (HRs); and second, to determine whether this formula, which incorporates QRS duration, can be of value in QT measurement during ventricular pacing, compared to direct measurement of the JT interval. Methods. Individuals (N=30; 14 males) aged 51.9 ± 14.3 years were paced with decremental atrial followed by decremental ventricular pacing. Results. The new QTc changed minimally with shorter RR intervals, poorly fit even a linear relationship, and did not fit a second-order polynomial. In contrast, the Bazett formula (QTcBZT) showed a steep and marked increase in QTc with shorter RR intervals. For atrial pacing data, QTcBZT was best fit by a second-order polynomial and demonstrated a dramatic increase in QTc with progressively shorter RR intervals. For ventricular pacing, the new QTc minus QRS duration did not meaningfully change with HR, in contrast to the HR dependency of QTcBZT and the JT interval. Conclusion. The new QT correction formula is minimally impacted by HR acceleration induced by atrial or ventricular pacing. The Spline QTc minus QRS duration is an excellent method to estimate QTc in ventricular paced complexes.
Agarwal, P.; El-Sayed, A. A.
2018-06-01
In this paper, a new numerical technique for solving the fractional order diffusion equation is introduced. This technique combines the non-standard finite difference method (NSFD) with the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method together with the NSFD method is used to convert the problem into a system of algebraic equations, which is then solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through several numerical examples.
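The Chebyshev collocation ingredient can be illustrated on a simple non-fractional model problem. This sketch uses a standard Chebyshev differentiation matrix on a two-point boundary value problem, not the NSFD fractional scheme of the paper:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x (Trefethen-style)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative sum trick for the diagonal
    return D, x

# Solve u'' = exp(x) on [-1, 1], u(-1) = u(1) = 0, by collocation at
# Chebyshev points; exact solution u = e^x - x sinh(1) - cosh(1).
N = 16
D, x = cheb(N)
D2 = D @ D
A = D2[1:-1, 1:-1]                       # enforce u = 0 at the boundary nodes
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(A, np.exp(x[1:-1]))
exact = np.exp(x) - x * np.sinh(1) - np.cosh(1)
print(np.max(np.abs(u - exact)))         # spectrally small error
```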
Directory of Open Access Journals (Sweden)
Winfried Auzinger
2006-01-01
Full Text Available We demonstrate that eigenvalue problems for ordinary differential equations can be recast in a formulation suitable for the solution by polynomial collocation. It is shown that the well-posedness of the two formulations is equivalent in the regular as well as in the singular case. Thus, a collocation code equipped with asymptotically correct error estimation and adaptive mesh selection can be successfully applied to compute the eigenvalues and eigenfunctions efficiently and with reliable control of the accuracy. Numerical examples illustrate this claim.
Meshing Force of Misaligned Spline Coupling and the Influence on Rotor System
Directory of Open Access Journals (Sweden)
Guang Zhao
2008-01-01
Full Text Available The meshing force of a misaligned spline coupling is derived, the dynamic equation of a rotor-spline coupling system is established based on finite element analysis, and the influence of the meshing force on the rotor-spline coupling system is simulated by a numerical integration method. According to the theoretical analysis, the meshing force of a spline coupling is related to the coupling parameters, transmitting torque, static misalignment, dynamic vibration displacement, and so on. The meshing force increases nonlinearly with increasing spline thickness and static misalignment, or with decreasing alignment meshing distance (AMD). The stiffness of the coupling relates to the dynamic vibration displacement and static misalignment, and is not constant. Dynamic behaviors of the rotor-spline coupling system reveal the following: the 1X rotating speed is the main response frequency of the system when there is no misalignment, while the 2X rotating speed appears when misalignment is present. Moreover, when misalignment increases, the vibration of the system becomes intricate: the shaft orbit departs from the origin, and the magnitudes of all frequencies increase. The research results can provide important criteria for both the optimization design of spline couplings and the troubleshooting of rotor systems.
Adaptive B-spline volume representation of measured BRDF data for photorealistic rendering
Directory of Open Access Journals (Sweden)
Hyungjun Park
2015-01-01
Full Text Available Measured bidirectional reflectance distribution function (BRDF) data have been used to represent the complex interaction between lights and surface materials for photorealistic rendering. However, their massive size makes them hard to adopt in practical rendering applications. In this paper, we propose an adaptive method for B-spline volume representation of measured BRDF data. It performs approximate B-spline volume lofting, which decomposes the problem into three sub-problems of multiple B-spline curve fitting along the u-, v-, and w-parametric directions. In particular, it makes efficient use of knots in the multiple B-spline curve fitting and thereby accomplishes adaptive knot placement along each parametric direction of the resulting B-spline volume. The proposed method is quite useful for efficient data reduction while smoothing out noise and preserving the overall features of the BRDF data. By applying the B-spline volume models of real materials for rendering, we show that the B-spline volume models are effective in preserving the features of material appearance and are suitable for representing BRDF data.
Ying, Yang
2015-01-01
This study aimed to seek an in-depth understanding about English collocation learning and the development of learner autonomy through investigating a group of English as a Second Language (ESL) learners' perspectives and practices in their learning of English collocations using an AWARE approach. A group of 20 PRC students learning English in…
Development of quadrilateral spline thin plate elements using the B-net method
Chen, Juan; Li, Chong-Jun
2013-08-01
The quadrilateral discrete Kirchhoff thin plate bending element DKQ is based on the isoparametric element Q8; however, the accuracy of isoparametric quadrilateral elements drops significantly under mesh distortions. In a previous work, we constructed an 8-node quadrilateral spline element L8 using the triangular area coordinates and the B-net method, which is insensitive to mesh distortions and possesses second-order completeness in Cartesian coordinates. In this paper, a thin plate spline element is developed based on the spline element L8 and the refined technique. Numerical examples show that the present element indeed possesses higher accuracy than the DKQ element for distorted meshes.
Islamiyati, A.; Fatmawati; Chamidah, N.
2018-03-01
In longitudinal data with bi-responses, correlation occurs among measurements on the same subject and between the responses. This causes auto-correlation of the errors, which can be handled using a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. The penalized spline involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error value than the model without a covariance matrix.
Curve fitting and modeling with splines using statistical variable selection techniques
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
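A least-squares spline fit of the kind such knot-selection programs evaluate repeatedly can be sketched with the Cox-de Boor recursion (the knot vector and target function below are illustrative, not the paper's FORTRAN implementation):

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: B-spline basis function B_{i,k} of degree k
    on knot vector t, evaluated at points x."""
    x = np.asarray(x, float)
    if k == 0:
        return np.where((t[i] <= x) & (x < t[i + 1]), 1.0, 0.0)
    left = right = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k + 1] > t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * \
                bspline_basis(i + 1, k - 1, t, x)
    return left + right

# Build the design matrix of cubic B-splines on a clamped knot vector and
# solve the least-squares fit, as a backward-elimination knot search would
# do at each candidate knot set.
t = np.concatenate([[0, 0, 0, 0], [0.25, 0.5, 0.75], [1, 1, 1, 1]])
x = np.linspace(0, 0.999, 200)
B = np.column_stack([bspline_basis(i, 3, t, x) for i in range(len(t) - 4)])
y = np.sin(2 * np.pi * x)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print(np.max(np.abs(B @ coef - y)))   # small residual for this smooth target
```

Dropping a column of `B` (i.e., a knot's basis function) and refitting gives the residual change that drives the backward elimination.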
Modeling and analysis of linear hyperbolic systems of balance laws
Bartecki, Krzysztof
2016-01-01
This monograph focuses on the mathematical modeling of distributed parameter systems in which mass/energy transport or wave propagation phenomena occur and which are described by partial differential equations of hyperbolic type. The case of linear (or linearized) 2 x 2 hyperbolic systems of balance laws is considered, i.e., systems described by two coupled linear partial differential equations with two variables representing physical quantities, depending on both time and one-dimensional spatial variable. Based on practical examples of a double-pipe heat exchanger and a transportation pipeline, two typical configurations of boundary input signals are analyzed: collocated, wherein both signals affect the system at the same spatial point, and anti-collocated, in which the input signals are applied to the two different end points of the system. The results of this book emerge from the practical experience of the author gained during his studies conducted in the experimental installation of a heat exchange cente...
A Bézier-Spline-based Model for the Simulation of Hysteresis in Variably Saturated Soil
Cremer, Clemens; Peche, Aaron; Thiele, Luisa-Bianca; Graf, Thomas; Neuweiler, Insa
2017-04-01
Most transient variably saturated flow models neglect hysteresis in the p_c-S-relationship (Beven, 2012). Such models tend to inadequately represent matrix potential and saturation distribution; thereby, when simulating flow and transport processes, fluid and solute fluxes might be overestimated (Russo et al., 1989). In this study, we present a simple, computationally efficient, and easily applicable model that adequately describes hysteresis in the p_c-S-relationship for variably saturated flow. This model can be seen as an extension of the existing play-type model (Beliaev and Hassanizadeh, 2001), in which scanning curves are simplified as vertical lines between the main imbibition and main drainage curves. In our model, we use continuous linear and Bézier-spline-based functions. We show the successful validation of the model by numerically reproducing a physical experiment by Gillham, Klute and Heermann (1976) describing primary drainage and imbibition in a vertical soil column. With a deviation of 3%, the simple Bézier-spline-based model performs significantly better than the play-type approach, which deviates by 30% from the experimental results. Finally, we discuss the realization of physical experiments to extend the model to secondary scanning curves and to determine scanning curve steepness. Literature: Beven, K.J. (2012). Rainfall-Runoff-Modelling: The Primer. John Wiley and Sons. Russo, D., Jury, W. A., & Butters, G. L. (1989). Numerical analysis of solute transport during transient irrigation: 1. The effect of hysteresis and profile heterogeneity. Water Resources Research, 25(10), 2109-2118. https://doi.org/10.1029/WR025i010p02109. Beliaev, A.Y. & Hassanizadeh, S.M. (2001). A Theoretical Model of Hysteresis and Dynamic Effects in the Capillary Relation for Two-phase Flow in Porous Media. Transport in Porous Media 43: 487. doi:10.1023/A:1010736108256. Gillham, R., Klute, A., & Heermann, D. (1976). Hydraulic properties of a porous
The Application of the Probabilistic Collocation Method to a Transonic Axial Flow Compressor
Loeven, G.J.A.; Bijl, H.
2010-01-01
In this paper the Probabilistic Collocation method is used for uncertainty quantification of operational uncertainties in a transonic axial flow compressor (i.e. NASA Rotor 37). Compressor rotors are components of a gas turbine that are highly sensitive to operational and geometrical uncertainties.
If not properly accounted for, auto-correlated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...
Strategies in Translating Collocations in Religious Texts from Arabic into English
Dweik, Bader S.; Shakra, Mariam M. Abu
2010-01-01
The present study investigated the strategies adopted by students in translating specific lexical and semantic collocations in three religious texts namely, the Holy Quran, the Hadith and the Bible. For this purpose, the researchers selected a purposive sample of 35 MA translation students enrolled in three different public and private Jordanian…
Implementation of optimal Galerkin and Collocation approximations of PDEs with Random Coefficients
Beck, Joakim; Nobile, F.; Tamellini, L.; Tempone, Raul
2011-01-01
We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new effective class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids.
Parallel algorithm of trigonometric collocation method in nonlinear dynamics of rotors
Czech Academy of Sciences Publication Activity Database
Musil, Tomáš; Jakl, Ondřej
2007-01-01
Vol. 1, No. 2 (2007), pp. 555-564. ISSN 1802-680X. [Výpočtová mechanika 2007. Hrad Nečtiny, 05.11.2007-07.11.2007] Institutional research plan: CEZ:AV0Z20760514; CEZ:AV0Z30860518 Keywords: rotor system * trigonometric collocation * parallel computation Subject RIV: JR - Other Machinery
A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data
Babuška, Ivo; Nobile, Fabio; Tempone, Raul
2010-01-01
This work proposes and analyzes a stochastic collocation method for solving elliptic partial differential equations with random coefficients and forcing terms. These input data are assumed to depend on a finite number of random variables. The method consists of a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space, and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It treats easily a wide range of situations, such as input data that depend nonlinearly on the random variables, diffusivity coefficients with unbounded second moments, and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate exponential convergence of the “probability error” with respect to the number of Gauss points in each direction of the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method. Finally, we include a section with developments posterior to the original publication of this work. There we review sparse grid stochastic collocation methods, which are effective collocation strategies for problems that depend on a moderately large number of random variables.
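The collocation idea above — run the deterministic solver at Gauss points of the probability space and combine the results with quadrature weights, as in Monte Carlo but with far fewer samples — can be sketched on a one-random-variable toy problem with a closed-form answer (an illustration, not the paper's setting):

```python
import numpy as np

# Model problem: -(a(Y) u')' = 1 on (0,1), u(0) = u(1) = 0, with a constant
# random coefficient a(Y) = 1 + 0.5*Y, Y ~ Uniform(-1, 1). Then
# u(x; Y) = x(1-x) / (2 a(Y)), so each "deterministic solve" is explicit.
def u_mid(y):
    return 0.125 / (1.0 + 0.5 * y)       # u(0.5; y)

# Stochastic collocation: evaluate the deterministic solver at the
# Gauss-Legendre points and weight by the quadrature rule (scaled by the
# Uniform(-1,1) density 1/2) to approximate the mean solution.
nodes, weights = np.polynomial.legendre.leggauss(8)
Eu = 0.5 * np.sum(weights * u_mid(nodes))

exact = 0.125 * np.log(3.0)              # (1/2) * integral of 0.125/(1+0.5y)
print(Eu, exact)
```

Because u(0.5; y) is analytic in y on [-1, 1], the eight-point rule already matches the exact mean to high accuracy, illustrating the exponential convergence in the number of Gauss points discussed in the abstract.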
Babakhani, B.; de Vries, Theodorus J.A.; van Amerongen, J.
2012-01-01
In this paper, both collocated and noncollocated active vibration control (AVC) of the vibrations in a motion system are considered. Pole-zero plots of both the AVC loop and the motion-control (MC) loop are used to analyze the effect of the applied active damping on the system dynamics. Using
A stochastic collocation method for the second order wave equation with a discontinuous random speed
Motamed, Mohammad
2012-08-31
In this paper we propose and analyze a stochastic collocation method for solving the second order wave equation with a random wave speed and subjected to deterministic boundary and initial conditions. The speed is piecewise smooth in the physical space and depends on a finite number of random variables. The numerical scheme consists of a finite difference or finite element method in the physical space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space. This approach leads to the solution of uncoupled deterministic problems as in the Monte Carlo method. We consider both full and sparse tensor product spaces of orthogonal polynomials. We provide a rigorous convergence analysis and demonstrate different types of convergence of the probability error with respect to the number of collocation points for full and sparse tensor product spaces and under some regularity assumptions on the data. In particular, we show that, unlike in elliptic and parabolic problems, the solution to hyperbolic problems is not in general analytic with respect to the random variables. Therefore, the rate of convergence may only be algebraic. An exponential/fast rate of convergence is still possible for some quantities of interest and for the wave solution with particular types of data. We present numerical examples, which confirm the analysis and show that the collocation method is a valid alternative to the more traditional Monte Carlo method for this class of problems. © 2012 Springer-Verlag.
Explicit Gaussian quadrature rules for C^1 cubic splines with symmetrically stretched knot sequence
Ait-Haddou, Rachid; Barton, Michael; Calo, Victor M.
2015-01-01
We provide explicit expressions for quadrature rules on the space of C^1 cubic splines with non-uniform, symmetrically stretched knot sequences. The quadrature nodes and weights are derived via an explicit recursion that avoids an intervention
SPLINE-FUNCTIONS IN THE TASK OF THE FLOW AIRFOIL PROFILE
Directory of Open Access Journals (Sweden)
Mikhail Lopatjuk
2013-12-01
Full Text Available The method and algorithm for solving the flow problem are presented. The Neumann boundary problem is reduced to the solution of integral equations with the given boundary conditions using cubic spline functions.
Modeling the dispersion of atmospheric pollution using cubic splines and chapeau functions
Energy Technology Data Exchange (ETDEWEB)
Pepper, D W; Kern, C D; Long, P E
1979-01-01
Two methods that can be used to solve complex, three-dimensional, advection-diffusion transport equations are investigated. A quasi-Lagrangian cubic spline method and a chapeau function method are compared in advecting a passive scalar. The methods are simple to use, computationally fast, and reasonably accurate. Little numerical dissipation is manifested by the schemes. In simple advection tests with equal mesh spacing, the chapeau function method maintains slightly more accurate peak values than the cubic spline method. In tests with unequal mesh spacing, the cubic spline method has less noise, but slightly more damping than the standard chapeau method. Both cubic splines and chapeau functions can be used to solve the three-dimensional problem of gaseous emissions dispersion without excessive programming complexity or storage requirements. (10 diagrams, 39 references, 2 tables)
Quiet Clean Short-haul Experimental Engine (QCSEE). Ball spline pitch change mechanism design report
1978-01-01
Detailed design parameters are presented for a variable-pitch change mechanism. The mechanism is a mechanical system containing a ball screw/spline driving two counteracting master bevel gears meshing pinion gears attached to each of 18 fan blades.
International Nuclear Information System (INIS)
McCurdy, C William; MartIn, Fernando
2004-01-01
B-spline methods are now well established as widely applicable tools for the evaluation of atomic and molecular continuum states. The mathematical technique of exterior complex scaling has been shown, in a variety of other implementations, to be a powerful method with which to solve atomic and molecular scattering problems, because it allows the correct imposition of continuum boundary conditions without their explicit analytic application. In this paper, an implementation of exterior complex scaling in B-splines is described that can bring the well-developed technology of B-splines to bear on new problems, including multiple ionization and breakup problems, in a straightforward way. The approach is demonstrated for examples involving the continuum motion of nuclei in diatomic molecules as well as electronic continua. For problems involving electrons, a method based on Poisson's equation is presented for computing two-electron integrals over B-splines under exterior complex scaling.
Acoustic Emission Signatures of Fatigue Damage in Idealized Bevel Gear Spline for Localized Sensing
Directory of Open Access Journals (Sweden)
Lu Zhang
2017-06-01
Full Text Available In many rotating machinery applications, such as helicopters, the splines of an externally splined steel shaft that emerges from the gearbox engage with the reverse geometry of an internally splined driven shaft for the delivery of power. The splined section of the shaft is a critical and non-redundant element which is prone to cracking due to complex loading conditions. Thus, early detection of flaws is required to prevent catastrophic failures. The acoustic emission (AE) method is a direct way of detecting such active flaws, but its application to detecting flaws in a splined shaft in a gearbox is difficult due to the interference of background noise and uncertainty about the effects of the wave propagation path on the received AE signature. Here, to model how AE may detect fault propagation in a hollow cylindrical splined shaft, the splined section is essentially unrolled into a metal plate of the same thickness as the cylinder wall. Spline ridges are cut into this plate, a through-notch is cut perpendicular to the spline to model fatigue crack initiation, and tensile cyclic loading is applied parallel to the spline to propagate the crack. In this paper, a new piezoelectric sensor array is introduced, intended to be placed within the gearbox to minimize the wave propagation path. The fatigue crack growth of a notched and flattened gearbox spline component is monitored using this sensor array and conventional sensors in a laboratory environment, with the purpose of developing source models and testing the new sensor performance. AE data are collected continuously, together with readings from strain gauges strategically positioned on the structure. A significant amount of continuous emission due to the plastic deformation accompanying the crack growth is observed. The frequency spectra of continuous emissions and burst emissions are compared to understand the differences between plastic deformation and sudden crack jumps. The
International Nuclear Information System (INIS)
D’Amore, L; Campagna, R; Murli, A; Galletti, A; Marcellino, L
2012-01-01
The scientific and application-oriented interest in the Laplace transform and its inversion is attested by more than 1000 publications in the last century. Most of the inversion algorithms available in the literature assume that the Laplace transform function is available everywhere. Unfortunately, this assumption is not fulfilled in many applications of the Laplace transform. Very often, one only has a finite set of data from which one wants to recover an estimate of the inverse Laplace function. We propose a fitting model for such data. More precisely, given a finite set of measurements on the real axis, arising from an unknown Laplace transform function, we construct a dth-degree generalized polynomial smoothing spline, where d = 2m − 1, such that inside the data interval it is a dth-degree polynomial complete smoothing spline minimizing a regularization functional, while outside the data interval it mimics the asymptotic behavior of the Laplace transform, i.e. it is a rational or an exponential function (the end-behavior model), and at the boundaries of the data set it joins the end-behavior model with regularity up to order m − 1. We analyze in detail the generalized polynomial smoothing spline of degree d = 3. This choice was motivated by the (ill-)conditioning of the numerical computation, which strongly depends on the degree of the complete spline. We prove existence and uniqueness of this spline. We derive the approximation error and give a priori and computable bounds for it on the whole real axis. In this way, the generalized polynomial smoothing spline may be used in any real inversion algorithm to compute an approximation of the inverse Laplace function. Experimental results concerning Laplace transform approximation, numerical inversion via the generalized polynomial smoothing spline, and comparisons with the exponential smoothing spline conclude the work. (paper)
On the accurate fast evaluation of finite Fourier integrals using cubic splines
International Nuclear Information System (INIS)
Morishima, N.
1993-01-01
Finite Fourier integrals based on a cubic-spline fit to equidistant data are shown to be evaluated quickly and accurately. Good performance, especially in computational speed, is achieved by optimizing the spline fit and internally using the fast Fourier transform (FFT) algorithm for complex data. The present procedure provides high accuracy with much shorter CPU time than a trapezoidal FFT. (author)
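A hedged sketch of the underlying idea, not the paper's optimized FFT-based procedure: fit a cubic spline to the equidistant samples and integrate the oscillatory product numerically. The function name and test integrand are illustrative choices, not from the cited work.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import quad

def fourier_integral_spline(x, f, omega):
    """Evaluate I(omega) = integral of f(x) cos(omega x) over [x[0], x[-1]]
    from equidistant samples, via a cubic-spline fit to the data.
    A naive reference version; the cited method gains its speed by
    integrating the spline pieces analytically and using an FFT."""
    s = CubicSpline(x, f)
    val, _ = quad(lambda t: s(t) * np.cos(omega * t), x[0], x[-1], limit=200)
    return val

# Illustrative check: for f(x) = exp(-x) on [0, 2*pi] and omega = 1,
# the exact integral is (1 - exp(-2*pi)) / 2.
x = np.linspace(0.0, 2.0 * np.pi, 101)
approx = fourier_integral_spline(x, np.exp(-x), omega=1.0)
```

With 101 sample points the spline interpolation error is O(h⁴), so the computed integral agrees with the closed-form value to several digits.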
Numerical Solutions for Convection-Diffusion Equation through Non-Polynomial Spline
Directory of Open Access Journals (Sweden)
Ravi Kanth A.S.V.
2016-01-01
In this paper, numerical solutions for the convection-diffusion equation via non-polynomial splines are studied. We propose an implicit method based on non-polynomial spline functions for solving the convection-diffusion equation. The method is proven to be unconditionally stable using the von Neumann technique. Numerical results are presented to demonstrate the efficiency and stability of the proposed method.
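As a stand-in illustration of an unconditionally stable implicit treatment of the convection-diffusion equation (a standard backward-Euler upwind finite-difference scheme, not the paper's non-polynomial spline discretization), each time step reduces to one tridiagonal solve:

```python
import numpy as np
from scipy.linalg import solve_banded

def implicit_convection_diffusion(u0, a, nu, dx, dt, steps):
    """Advance u_t + a u_x = nu u_xx with a backward-Euler upwind scheme
    (a > 0 assumed) and homogeneous Dirichlet ends. Stand-in sketch,
    not the cited non-polynomial spline method."""
    n = len(u0)
    r = nu * dt / dx**2
    c = a * dt / dx  # upwind coefficient for a > 0
    # Tridiagonal system over interior nodes: banded storage for solve_banded.
    ab = np.zeros((3, n - 2))
    ab[0, 1:] = -r                # superdiagonal
    ab[1, :] = 1 + 2 * r + c      # diagonal
    ab[2, :-1] = -(r + c)         # subdiagonal
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] = solve_banded((1, 1), ab, u[1:-1])
        u[0] = u[-1] = 0.0
    return u

# Advect and diffuse a sine hump; the implicit scheme stays stable
# even though r = 1 violates the explicit-scheme stability limit.
x = np.linspace(0.0, 1.0, 101)
u0 = np.sin(np.pi * x)
u = implicit_convection_diffusion(u0, a=1.0, nu=0.01, dx=x[1] - x[0], dt=0.01, steps=50)
```

The system matrix is strictly diagonally dominant with negative off-diagonals, so the solution remains bounded for any time step, mirroring the unconditional stability established by the von Neumann analysis.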
Bhadra, Anindya; Carroll, Raymond J
2016-07-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem, and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step, which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate increases of up to 62% and 54% in mean integrated squared error efficiency compared to existing alternatives when using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating that the proposed method is a particularly valuable tool for challenging applications with high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases
Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre
2011-12-01
Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and against the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we used Maple 12.
Final report on Production Test No. 105-245-P -- Effectiveness of cadmium coated splines
Energy Technology Data Exchange (ETDEWEB)
Carson, A.B.
1949-05-19
This report discusses cadmium-coated splines which were developed to supplement the regular control rod systems under emergency shutdown conditions from higher power levels. The objective of this test was to determine the effectiveness of one such spline placed in a tube in the central zone of a pile, and of two splines in the same tube. In addition, the process control group of the P Division asked that probable spline requirements for safe operation at various power levels be estimated, and the details are included in this report. The results of the test indicated a reactivity value of 10.5 ± 1.0 ih for a single spline, and 19.0 ± 1.0 ih for two splines in tube 1674-B under the loading conditions of 4-27-49, the date of the test. The temperature rise of the cooling water for this tube under these conditions was found to be 37.2°C for 275 MW operation.
Directory of Open Access Journals (Sweden)
Mustapha Hajebi
2017-12-01
The present study investigates the correlation between language proficiency and collocations, and the role of L1 transfer in collocation use. This is a quantitative study, placing emphasis on collecting data in the form of numbers. It is also experimental, in the sense that it tests participants to measure the variables. The participants were 57 Persian B.A. students, both male and female, from Islamic Azad University of Bandar Abbas, Iran. The results showed a significant relationship between the Iranian subjects' language proficiency, as measured by the Michigan proficiency test, and their knowledge of collocations, as measured by their performance on a collocation test designed for the current study. The results also indicate that Iranian EFL learners are more likely to use the right collocation in cases of L1 transfer. This suggests that positive transfer plays a major role in EFL learners' ability to produce the right collocations in their L2. The findings have some implications for language teaching: teachers can emphasize the inclusion of selected grammatical and lexical collocations in reading comprehension passages.
Systems of Inhomogeneous Linear Equations
Scherer, Philipp O. J.
Many problems in physics and especially computational physics involve systems of linear equations which arise e.g. from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
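The specialized Gaussian elimination mentioned above for tridiagonal systems is commonly known as the Thomas algorithm; a minimal sketch (illustrative, not taken from the cited text) that solves such a system in O(n):

```python
import numpy as np

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system in O(n) by the Thomas algorithm
    (specialized Gaussian elimination without pivoting).
    sub[i] multiplies x[i-1] in row i; sup[i] multiplies x[i+1];
    sub[0] and sup[-1] are ignored."""
    n = len(diag)
    c = np.zeros(n)   # modified superdiagonal
    d = np.zeros(n)   # modified right-hand side
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):          # forward elimination
        m = diag[i] - sub[i] * c[i - 1]
        c[i] = sup[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# The system arising from natural cubic spline second derivatives has
# diagonal 4 and off-diagonals 1, hence is strictly diagonally dominant,
# so the pivot-free elimination is safe:
n = 6
x = thomas(np.ones(n), 4 * np.ones(n), np.ones(n), np.arange(1.0, n + 1))
```

Diagonal dominance is what makes pivoting unnecessary here; for general matrices the algorithm can be unstable.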
International Nuclear Information System (INIS)
Li, Yanting; He, Yong; Su, Yan; Shu, Lianjie
2016-01-01
Highlights: • Suggests a nonparametric model based on MARS for output power prediction. • Compares the MARS model with a wide variety of prediction models. • Shows that the MARS model provides overall good performance in both the training and testing stages. - Abstract: Both linear and nonlinear models have been proposed for forecasting the power output of photovoltaic systems. Linear models are simple to implement but less flexible. Due to the stochastic nature of the power output of PV systems, nonlinear models tend to provide better forecasts than linear models. Motivated by this, this paper suggests a fairly simple nonlinear regression model known as multivariate adaptive regression splines (MARS) as an alternative for forecasting solar power output. The MARS model is a data-driven modeling approach without any assumption about the relationship between the power output and predictors. It maintains the simplicity of the classical multiple linear regression (MLR) model while possessing the capability of handling nonlinearity. It is simpler in format than other nonlinear models such as artificial neural networks (ANN), k-nearest neighbors (KNN), classification and regression trees (CART), and support vector machines (SVM). The MARS model was applied to the daily output of a grid-connected 2.1 kW PV system to provide the 1-day-ahead mean daily forecast of the power output. Comparisons with a wide variety of forecast models show that the MARS model is able to provide reliable forecast performance.
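The building block of MARS is the hinge (truncated linear) function max(0, x − t). A minimal illustration with fixed knots and ordinary least squares follows; real MARS selects knots adaptively with forward/backward passes, and the data here are synthetic:

```python
import numpy as np

def hinge_basis(x, knots):
    """Design matrix of MARS-style hinge functions max(0, x - t) and
    max(0, t - x) at fixed knots, plus an intercept column. Real MARS
    chooses knots adaptively; here they are fixed for illustration."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))
        cols.append(np.maximum(0.0, t - x))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 200))
# Piecewise-linear "power curve" with a breakpoint at x = 5:
y = np.where(x < 5, 2.0 * x, 10.0 - 1.5 * (x - 5))
B = hinge_basis(x, knots=[2.5, 5.0, 7.5])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fit = B @ coef
```

Because the basis contains a hinge exactly at the breakpoint, the fit reproduces this piecewise-linear signal to machine precision; on noisy data the same least-squares step yields a smooth additive approximation.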
Evaluation of the spline reconstruction technique for PET
Energy Technology Data Exchange (ETDEWEB)
Kastis, George A., E-mail: gkastis@academyofathens.gr; Kyriakopoulou, Dimitra [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Fernández, Yolanda [Centre d’Imatge Molecular Experimental (CIME), CETIR-ERESA, Barcelona 08950 (Spain); Hutton, Brian F. [Institute of Nuclear Medicine, University College London, London NW1 2BU (United Kingdom); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA (United Kingdom)
2014-04-15
Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an ¹⁸F-FDG-injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an ¹⁸F-FDG-injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real
2015-12-01
ARL-SR-0347 ● DEC 2015. US Army Research Laboratory. An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry.
Directory of Open Access Journals (Sweden)
Vasily A. Belyaev
2017-01-01
New versions of the collocations and least residuals (CLR) method of high-order accuracy are proposed and implemented for the numerical solution of boundary value problems for PDEs in convex quadrangular domains. Their implementation and numerical experiments are carried out on the examples of the biharmonic and Poisson equations. The solution of the biharmonic equation is used to simulate the stress-strain state of an isotropic plate under the action of a transverse load. Differential problems are projected into the space of fourth-degree polynomials by the CLR method. The boundary conditions for the approximate solution are imposed exactly on the boundary of the computational domain. The versions of the CLR method are implemented on grids constructed in two different ways. In the first version, a “quasiregular” grid is constructed in the domain, with the extreme lines of the grid coinciding with the boundaries of the domain. In the second version, the domain is initially covered by a regular grid with rectangular cells. Here, the collocation and matching points situated outside the domain are used for approximation of the differential equations in the boundary cells crossed by the boundary. In addition, the “small” irregular triangular cells cut off from rectangular cells of the initial regular grid by the domain boundary are joined to adjacent quadrangular cells. This technique made it possible to substantially reduce the condition number of the system of linear algebraic equations of the approximate problem, in comparison with the case when small irregular cells were used alongside the other cells as independent cells for constructing an approximate solution. It is shown that the approximate solution converges with high order and matches with high accuracy the analytical solution of the test problems in the case of a known solution.
A Sparse Stochastic Collocation Technique for High-Frequency Wave Propagation with Uncertainty
Malenova, G.; Motamed, M.; Runborg, O.; Tempone, Raul
2016-01-01
We consider the wave equation with highly oscillatory initial data, where there is uncertainty in the wave speed, initial phase, and/or initial amplitude. To estimate quantities of interest related to the solution and their statistics, we combine a high-frequency method based on Gaussian beams with sparse stochastic collocation. Although the wave solution, uϵ, is highly oscillatory in both physical and stochastic spaces, we provide theoretical arguments for simplified problems and numerical evidence that quantities of interest based on local averages of |uϵ|2 are smooth, with derivatives in the stochastic space uniformly bounded in ϵ, where ϵ denotes the short wavelength. This observable-related regularity makes the sparse stochastic collocation approach more efficient than Monte Carlo methods. We present numerical tests that demonstrate this advantage.
Robust Topology Optimization Based on Stochastic Collocation Methods under Loading Uncertainties
Directory of Open Access Journals (Sweden)
Qinghai Zhao
2015-01-01
A robust topology optimization (RTO) approach with consideration of loading uncertainties is developed in this paper. The stochastic collocation method, combined with a full tensor product grid and the Smolyak sparse grid, transforms the robust formulation into a weighted multiple-loading deterministic problem at the collocation points. The proposed approach is amenable to implementation in existing commercial topology optimization software packages and is thus feasible for practical engineering problems. Numerical examples of two- and three-dimensional topology optimization problems are provided to demonstrate the proposed RTO approach and its applications. The optimal topologies obtained from deterministic and robust topology optimization designs under the tensor product grid and the sparse grid with different levels are compared with one another to investigate the pros and cons of the optimization algorithm on the final topologies, and an extensive Monte Carlo simulation is also performed to verify the proposed approach.
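The mechanism that turns a random load into a weighted multiple-loading deterministic problem can be sketched in one dimension with Gauss-Hermite collocation for a Gaussian parameter. The response function below is a hypothetical compliance-like stand-in, not from the cited paper:

```python
import numpy as np

def collocation_mean(f, n_points):
    """Approximate E[f(X)] for X ~ N(0, 1) by probabilists' Gauss-Hermite
    collocation: deterministic evaluations of f at the quadrature nodes,
    combined with the quadrature weights. The weights sum to sqrt(2*pi),
    which normalizes the Gaussian density."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_points)
    return np.dot(weights, f(nodes)) / np.sqrt(2.0 * np.pi)

# Hypothetical response to a load perturbation theta:
response = lambda theta: (1.0 + 0.3 * theta) ** 2
mean = collocation_mean(response, n_points=5)
# Analytically, E[(1 + 0.3 X)^2] = 1 + 0.09 for X ~ N(0, 1).
```

Five collocation points integrate polynomials up to degree 9 exactly, so for this quadratic response the collocation mean matches the analytical expectation; tensor products and Smolyak sparse grids extend the same idea to several random variables.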
International Nuclear Information System (INIS)
Avila, Gustavo; Carrington, Tucker
2015-01-01
In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates
Approximate solutions of the hyperchaotic Rössler system by using the Bessel collocation scheme
Directory of Open Access Journals (Sweden)
Şuayip Yüzbaşı
2015-02-01
The purpose of this study is to give a Bessel polynomial approximation for the solutions of the hyperchaotic Rössler system. For this purpose, the Bessel collocation method applied to different problems is developed for the mentioned system. This method is based on taking the truncated Bessel expansions of the functions in the hyperchaotic Rössler system. The suggested scheme converts the problem into a system of nonlinear algebraic equations by means of matrix operations and collocation points. The accuracy and efficiency of the proposed approach are demonstrated by numerical applications performed with the help of a computer code written in Maple. Also, a comparison between our method and the differential transformation method is made with respect to the accuracy of solutions.
Khan, Sami Ullah; Ali, Ishtiaq
2018-03-01
Explicit solutions to delay differential equations (DDEs) and stochastic delay differential equations (SDDEs) can rarely be obtained, so numerical methods are adopted to solve them. On the other hand, due to the unstable nature of both DDEs and SDDEs, numerical solution is also not straightforward and requires more attention. In this study, we derive an efficient numerical scheme for DDEs and SDDEs based on the Legendre spectral-collocation method, which can significantly speed up the computation. The method transforms the given differential equation into a matrix equation by means of the Legendre collocation points, which corresponds to a system of algebraic equations with unknown Legendre coefficients. The efficiency of the proposed method is confirmed by some numerical examples. We found that our numerical technique agrees very well with other methods while requiring less computational effort.
International Nuclear Information System (INIS)
Tenderholt, Adam; Hedman, Britt; Hodgson, Keith O.
2007-01-01
PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k³-weighted EXAFS data.
On the hybrid stability of the collocated virtual holonomic constraints-based walking design
Czech Academy of Sciences Publication Activity Database
Anderle, Milan; Čelikovský, Sergej
2017-01-01
Roč. 6, č. 2 (2017), s. 47-56 ISSN 2223-7038 R&D Projects: GA ČR(CZ) GA17-04682S Institutional support: RVO:67985556 Keywords : Underactuated walking * Virtual holonomic constraints * Poincaré section method * collocated constraints Subject RIV: BC - Control Systems Theory OBOR OECD: Automation and control systems http://lib.physcon.ru/doc?id=60655c1961ed
International Nuclear Information System (INIS)
Verdu, G.; Capilla, M.; Talavera, C. F.; Ginestar, D.
2012-01-01
PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)
Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics
Mercier, Michael; Glesser, David; Georgiou, Yiannis; Richard, Olivier
2017-01-01
Executing Big Data workloads on High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of differences in their core concepts. This paper focuses on the challenges related to scheduling both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...
A Legendre Wavelet Spectral Collocation Method for Solving Oscillatory Initial Value Problems
Directory of Open Access Journals (Sweden)
A. Karimi Dizicheh
2013-01-01
wavelet suitable for large intervals, and then the Legendre-Gauss collocation points of the Legendre wavelet are derived. Using this strategy, the iterative spectral method converts the differential equation to a set of algebraic equations. Solving these algebraic equations yields an approximate solution for the differential equation. The proposed method is illustrated by some numerical examples, and the result is compared with the exponentially fitted Runge-Kutta method. Our proposed method is simple and highly accurate.
A Numerical Method for Lane-Emden Equations Using Hybrid Functions and the Collocation Method
Directory of Open Access Journals (Sweden)
Changqing Yang
2012-01-01
A numerical method to solve Lane-Emden equations as singular initial value problems is presented in this work. The method is based on replacing the unknown function by a truncated series of hybrid block-pulse functions and Chebyshev polynomials. The collocation method transforms the differential equation into a system of algebraic equations. It also applies to a wide class of differential equations. Numerical examples are presented to demonstrate the accuracy of the proposed method.
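The core step, turning a differential equation into algebraic equations by enforcing it at collocation points, can be demonstrated on a simple boundary value problem with a plain monomial basis (not the hybrid block-pulse/Chebyshev basis of the paper):

```python
import numpy as np

# Solve u'' = -1 with u(0) = u(1) = 0 by polynomial collocation:
# seek u(x) = sum_k c_k x^k and enforce the ODE at interior collocation
# points plus the two boundary conditions, yielding a linear system.
deg = 4
xs = np.linspace(0.1, 0.9, deg - 1)          # interior collocation points
A = np.zeros((deg + 1, deg + 1))
b = np.zeros(deg + 1)
for i, xc in enumerate(xs):                   # rows enforcing u''(xc) = -1
    for k in range(2, deg + 1):
        A[i, k] = k * (k - 1) * xc ** (k - 2)
    b[i] = -1.0
A[-2, :] = [0.0 ** k for k in range(deg + 1)]  # u(0) = 0 (row [1,0,0,0,0])
A[-1, :] = np.ones(deg + 1)                    # u(1) = 0
c = np.linalg.solve(A, b)
u = lambda x: sum(ck * x ** k for k, ck in enumerate(c))
# The exact solution u(x) = x (1 - x) / 2 lies in the basis, so the
# collocation solution reproduces it.
```

The same recipe applies to singular problems like Lane-Emden once the basis and collocation points are chosen to handle the singularity at the origin.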
Directory of Open Access Journals (Sweden)
Salih Yalcinbas
2016-01-01
In this study, a numerical approach is proposed to obtain approximate solutions of a nonlinear system of second-order boundary value problems. The technique is essentially based on the truncated Fermat series and its matrix representations with collocation points. Using the matrix method, we reduce the problem to a system of nonlinear algebraic equations. Numerical examples are given to demonstrate the validity and applicability of the presented technique. The method is easy to implement and produces accurate results.
Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings
Energy Technology Data Exchange (ETDEWEB)
Guo, Yi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keller, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Errichello, Robert [GEARTECH, Houston, TX (United States); Halse, Chris [Romax Technology, Nottingham (United Kingdom)
2013-12-01
Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.
Simpson, R. N.; Liu, Z.; Vázquez, R.; Evans, J. A.
2018-06-01
We outline the construction of compatible B-splines on 3D surfaces that satisfy the continuity requirements for electromagnetic scattering analysis with the boundary element method (method of moments). Our approach makes use of Non-Uniform Rational B-splines to represent model geometry and compatible B-splines to approximate the surface current, and adopts the isogeometric concept in which the basis for analysis is taken directly from CAD (geometry) data. The approach allows for high-order approximations and crucially provides a direct link with CAD data structures that allows for efficient design workflows. After outlining the construction of div- and curl-conforming B-splines defined over 3D surfaces we describe their use with the electric and magnetic field integral equations using a Galerkin formulation. We use Bézier extraction to accelerate the computation of NURBS and B-spline terms and employ H-matrices to provide accelerated computations and memory reduction for the dense matrices that result from the boundary integral discretization. The method is verified using the well known Mie scattering problem posed over a perfectly electrically conducting sphere and the classic NASA almond problem. Finally, we demonstrate the ability of the approach to handle models with complex geometry directly from CAD without mesh generation.
Space cutter compensation method for five-axis nonuniform rational basis spline machining
Directory of Open Access Journals (Sweden)
Yanyu Ding
2015-07-01
In view of the good machining performance of traditional three-axis nonuniform rational basis spline interpolation and the space cutter compensation issue in multi-axis machining, this article presents a triple nonuniform rational basis spline five-axis interpolation method, which uses three nonuniform rational basis spline curves to describe the cutter center location, cutter axis vector, and cutter contact point trajectory, respectively. The relative position of the cutter and workpiece is calculated in the workpiece coordinate system, and the cutter machining trajectory can be described precisely and smoothly using this method. The three nonuniform rational basis spline curves are transformed into a 12-dimensional Bézier curve for discretization. With the cutter contact point trajectory as the precision control condition, the discretization is fast. For different cutters and corners, a complete description method of the space cutter compensation vector is presented. Finally, the five-axis nonuniform rational basis spline machining method is verified on a two-turntable five-axis machine.
Directory of Open Access Journals (Sweden)
Neng Wan
2014-01-01
Full Text Available To address the poor geometric adaptability of the spline element method, a geometrically precise spline method, which uses rational Bézier patches to represent the solution domain, is proposed for the two-dimensional viscous incompressible Navier-Stokes equations. Besides fewer unknowns, higher accuracy, and greater computational efficiency, it offers such advantages as the exact isogeometric representation of object boundaries and the unification of geometric and analysis modeling. Meanwhile, the selection of B-spline basis functions and the grid definition are studied, and a stable discretization scheme satisfying the inf-sup condition is proposed. The degree of the spline functions approximating the velocity field is one order higher than that approximating the pressure field, and these functions are defined on a once-refined grid. The Dirichlet boundary conditions are imposed weakly through the Nitsche variational principle, owing to the lack of interpolation properties of the B-spline basis functions. Finally, the validity of the proposed method is verified with some examples.
An adaptive multi-element probabilistic collocation method for statistical EMC/EMI characterization
Yücel, Abdulkadir C.
2013-12-01
An adaptive multi-element probabilistic collocation (ME-PC) method for quantifying uncertainties in electromagnetic compatibility and interference phenomena involving electrically large, multi-scale, and complex platforms is presented. The method permits the efficient and accurate statistical characterization of observables (i.e., quantities of interest such as coupled voltages) that potentially vary rapidly and/or are discontinuous in the random variables (i.e., parameters that characterize uncertainty in a system's geometry, configuration, or excitation). The method achieves its efficiency and accuracy by recursively and adaptively dividing the domain of the random variables into subdomains using as a guide the decay rate of relative error in a polynomial chaos expansion of the observables. While constructing local polynomial expansions on each subdomain, a fast integral-equation-based deterministic field-cable-circuit simulator is used to compute the observable values at the collocation/integration points determined by the adaptive ME-PC scheme. The adaptive ME-PC scheme requires far fewer (computationally costly) deterministic simulations than traditional polynomial chaos collocation and Monte Carlo methods for computing averages, standard deviations, and probability density functions of rapidly varying observables. The efficiency and accuracy of the method are demonstrated via its applications to the statistical characterization of voltages in shielded/unshielded microwave amplifiers and magnetic fields induced on car tire pressure sensors. © 2013 IEEE.
Directory of Open Access Journals (Sweden)
Postolea Sorina
2016-12-01
Full Text Available The research devoted to special languages as well as the activities carried out in specialized translation classes tend to focus primarily on one-word or multi-word terminological units. However, a very important part in the making of specialist registers and texts is played by specialised collocations, i.e. relatively stable word combinations that do not designate concepts but are nevertheless of frequent use in a given field of activity. This is why helping students acquire competences relative to the identification and processing of collocations should become an important objective in specialised translation classes. An easily accessible and dependable resource that may be successfully used to this purpose is represented by corpora and corpus analysis tools, whose usefulness in translator training has been highlighted by numerous studies. This article proposes a series of practical, task-based activities, developed with the help of a small-size parallel corpus of specialised texts, that aim to raise the translation trainees' awareness of the collocations present in specialised texts and to provide suggestions about their processing in translation.
Directory of Open Access Journals (Sweden)
Elaheh Hamed Mahvelati
2012-11-01
Full Text Available Many researchers stress the importance of lexical coherence and emphasize the need for teaching collocations at all levels of language proficiency. Thus, this study was conducted to measure the relative effectiveness of explicit (consciousness-raising approach versus implicit (input flood collocation instruction with regard to learners' knowledge of both lexical and grammatical collocations. Ninety-five upper-intermediate learners, who were randomly assigned to the control and experimental groups, served as the participants of this study. While one of the experimental groups was provided with the input flood treatment, the other group received explicit collocation instruction. In contrast, the participants in the control group did not receive any instruction on learning collocations. The results of the study, which were collected through a pre-test, an immediate post-test and a delayed post-test, revealed that although both methods of teaching collocations proved effective, the explicit method of the consciousness-raising approach was significantly superior to the implicit method of the input flood treatment.
Energy Technology Data Exchange (ETDEWEB)
Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael
2000-04-11
A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data arising, e.g., from noise can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work on the development of edge-preserving regularizers. These techniques lead to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-splines are used to approximate the object. If a 'proper' collection of B-splines is chosen, so that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimal distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, the intuition being that a relatively dense set of fine-scale basis elements should cluster near regions of high curvature, while a sparse collection of basis vectors is adequate to represent the object over spatially smooth areas. The pruning part is greedy.
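The dimensionality-reduction idea can be sketched as a Tikhonov-regularized least-squares fit over a small cubic B-spline basis. This is a toy 1-D denoising stand-in assuming NumPy/SciPy; the thesis's adaptive refine/prune step is not shown, and the data are synthetic:

```python
import numpy as np
from scipy.interpolate import BSpline

# Forward model: noisy samples of a smooth object.  Represent the object
# with a few cubic B-spline coefficients c and solve the Tikhonov-regularized
# linear least-squares problem  min ||A c - y||^2 + lam ||c||^2.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * x)
y = truth + 0.05 * rng.standard_normal(x.size)

k = 3                      # cubic
n_coef = 12                # few basis functions => low-order model
# open-uniform knot vector gives n_coef basis functions of degree k
interior = np.linspace(0, 1, n_coef - k + 1)
knots = np.concatenate(([0.0] * k, interior, [1.0] * k))
A = BSpline.design_matrix(x, knots, k).toarray()

lam = 1e-3
c = np.linalg.solve(A.T @ A + lam * np.eye(n_coef), A.T @ y)
recon = A @ c
rms = np.sqrt(np.mean((recon - truth) ** 2))  # reconstruction error vs truth
```

Only 12 unknowns replace the 200 raw samples, which is the point of the low-order parameterization; where to place the knots is exactly what the adaptive refine/prune algorithm of the thesis decides.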
Pauchard, Y; Smith, M; Mintchev, M
2004-01-01
Magnetic resonance imaging (MRI) suffers from geometric distortions arising from various sources. One such source is the non-linearities associated with the presence of metallic implants, which can profoundly distort the obtained images. These non-linearities result in pixel shifts and intensity changes in the vicinity of the implant, often precluding any meaningful assessment of the entire image. This paper presents a method for correcting these distortions based on non-rigid image registration techniques. Two images from a modelled three-dimensional (3D) grid phantom were subjected to point-based thin-plate spline registration. The reference image (without distortions) was obtained from a grid model including a spherical implant, and the corresponding test image containing the distortions was obtained using a previously reported technique for spatial modelling of magnetic susceptibility artifacts. After identifying the non-recoverable area in the distorted image, the calculated spline model was able to quantitatively account for the distortions, thus facilitating their compensation. Upon completion of the compensation procedure, the non-recoverable area was removed from the reference image and the latter was compared to the compensated image. A quantitative assessment of the goodness of the proposed compensation technique is presented.
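Point-based thin-plate spline registration of the kind described can be sketched with SciPy's `RBFInterpolator` and its `thin_plate_spline` kernel; the landmark pairs below are hypothetical, not from the paper's phantom:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical landmark pairs: grid positions in the distorted image and
# their true positions in the reference image.  A thin-plate spline fitted
# to these pairs then maps (corrects) any point of the distorted image.
distorted = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
                      [0.5, 0.5], [0.2, 0.8], [0.8, 0.2]])
reference = distorted + np.array([[0.00, 0.00], [0.02, 0.00], [0.00, 0.01],
                                  [0.02, 0.01], [0.05, 0.03], [0.01, 0.02],
                                  [0.03, 0.00]])
tps = RBFInterpolator(distorted, reference, kernel='thin_plate_spline')

# with zero smoothing the spline interpolates the landmarks exactly
corrected = tps(distorted)
assert np.allclose(corrected, reference, atol=1e-8)
```

Between landmarks the thin-plate spline minimizes a bending-energy functional, which is why it is a standard choice for smooth anatomical or phantom-grid deformations.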
T-Spline Based Unifying Registration Procedure for Free-Form Surface Workpieces in Intelligent CMM
Directory of Open Access Journals (Sweden)
Zhenhua Han
2017-10-01
Full Text Available With the development of the modern manufacturing industry, free-form surfaces are widely used in various fields, and the automatic detection of free-form surfaces is an important function of future intelligent coordinate measuring machines (CMMs). To improve the intelligence of CMMs, a new visual system is designed based on the characteristics of CMMs. A unified model of the free-form surface is proposed based on T-splines. A discretization method for the T-spline surface model is proposed. Under this discretization, the position and orientation of the workpiece can be recognized by point cloud registration. A high-accuracy method is proposed for evaluating the deviation between the measured point cloud and the T-spline surface model. The experimental results demonstrate that the proposed method has the potential to realize the automatic detection of different free-form surfaces and improve the intelligence of CMMs.
Splines and their reciprocal-bases in volume-integral equations
International Nuclear Information System (INIS)
Sabbagh, H.A.
1993-01-01
The authors briefly outline the use of higher-order splines and their reciprocal-bases in discretizing the volume-integral equations of electromagnetics. The discretization is carried out by means of the method of moments, in which the expansion functions are the higher-order splines and the testing functions are the corresponding reciprocal-basis functions. These functions satisfy an orthogonality condition with respect to the spline expansion functions. Thus, the method is not a Galerkin method; nevertheless, the structure of the resulting equations is quite regular. The theory is applied to the volume-integral equations for the unknown current density, or unknown electric field, within a scattering body, and to the equations for eddy-current nondestructive evaluation. Numerical techniques for computing the matrix elements are also given.
Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping
Leberl, F.
1975-01-01
Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.
[Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].
Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing
2003-12-01
Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which will influence the registration results. Localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, used an automatic method to extract the landmarks. Combining these two steps, we propose an automatic, accurate, and robust registration method and have obtained satisfactory registration results.
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications
International Nuclear Information System (INIS)
Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em
2008-01-01
Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L2 error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
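The core of a single-element, full-tensor-grid probabilistic collocation step is quadrature over the parametric space. A sketch for the mean of an observable of two uniform random variables, using Gauss-Legendre rules (illustrative only; the multi-element splitting and adaptivity of ME-PCM are not shown):

```python
import numpy as np

# Tensor-product Gauss-Legendre collocation for the mean of an observable
# g(xi1, xi2) with xi_i ~ Uniform(-1, 1): evaluate g at the grid of
# quadrature nodes and combine with the product weights.
def collocation_mean(g, n):
    nodes, weights = np.polynomial.legendre.leggauss(n)
    X1, X2 = np.meshgrid(nodes, nodes)
    W = np.outer(weights, weights) / 4.0   # uniform density is 1/2 per variable
    return np.sum(W * g(X1, X2))

g = lambda x1, x2: np.exp(x1) * np.cos(x2)
# exact mean factorizes: (1/2 int exp(x) dx) * (1/2 int cos(x) dx)
exact = np.sinh(1.0) * np.sin(1.0)
approx = collocation_mean(g, 6)
```

For a smooth observable, a handful of nodes per dimension already gives near machine accuracy; the multi-element subdivision in the paper is what rescues this convergence when the observable is discontinuous in the random variables.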
Approximation and geometric modeling with simplex B-splines associated with irregular triangles
Auerbach, S.; Gmelig Meyling, R.H.J.; Neamtu, M.; Neamtu, M.; Schaeben, H.
1991-01-01
Bivariate quadratic simplicial B-splines, defined by their corresponding set of knots derived from a (suboptimal) constrained Delaunay triangulation of the domain, are employed to obtain a C1-smooth surface. The generation of triangle vertices is adjusted to the areal distribution of the data in the
Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo
Krogel, Jaron T.; Reboredo, Fernando A.
2018-01-01
Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% are possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
Isogeometric finite element data structures based on Bézier extraction of T-splines
Scott, M.A.; Borden, M.J.; Verhoosel, C.V.; Sederberg, T.W.; Hughes, T.J.R.
2011-01-01
We develop finite element data structures for T-splines based on Bézier extraction, generalizing our previous work for NURBS. As in traditional finite element analysis, the extracted Bézier elements are defined in terms of a fixed set of polynomial basis functions, the so-called Bernstein basis.
Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs
Howell, Lauren R.; Allen, B. Danette
2016-01-01
A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of three-dimensional Euclidean space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
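Bezier curves of the kind generated from control points can be evaluated with de Casteljau's algorithm, i.e. repeated linear interpolation of the control polygon. A generic sketch; the control points are made up, not taken from the paper:

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeatedly
    interpolating adjacent control points until one point remains."""
    pts = np.asarray(ctrl, float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# cubic Bezier segment through hypothetical waypoint-derived control points
ctrl = [[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]]
p0 = de_casteljau(ctrl, 0.0)   # curve starts at the first control point
p1 = de_casteljau(ctrl, 1.0)   # and ends at the last one
mid = de_casteljau(ctrl, 0.5)
```

The endpoints of the curve coincide with the first and last control points, while the interior points only pull the curve toward them; shaping those interior points to respect vehicle-dynamics limits is the control-point-generation problem the paper addresses.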
A thin-plate spline analysis of the face and tongue in obstructive sleep apnea patients.
Pae, E K; Lowe, A A; Fleetham, J A
1997-12-01
The shape characteristics of the face and tongue in obstructive sleep apnea (OSA) patients were investigated using thin-plate (TP) splines. A relatively new analytic tool, the TP spline method provides a means of size normalization and image analysis. When shape is one's main concern, the various sizes of a biologic structure may be a source of statistical noise. More seriously, the strong size effect could mask underlying, actual attributes of the disease. A set of size-normalized data in the form of coordinates was generated from cephalograms of 80 male subjects. The TP spline method visualized the differences in the shape of the face and tongue between OSA patients and nonapneic subjects, and those between the upright and supine body positions. In accordance with OSA severity, the hyoid bone and the submental region were positioned inferiorly and the fourth vertebra was relocated posteriorly with respect to the mandible. This caused a fanlike configuration of the lower part of the face and neck in the sagittal plane in both upright and supine body positions. TP splines revealed tongue deformations caused by a body position change. Overall, the new morphometric tool adopted here was found to be viable in the analysis of morphologic changes.
Zhang, X.; Liang, S.; Wang, G.
2015-12-01
Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: surface direct measurements are accurate but provide sparse spatial coverage, whereas the other global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally calibration, but transforming their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation was also considered in the proposed method. The point-based reconstructed surface ISR was used as the response variable, and the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables, to train the thin-plate spline model. We evaluated the performance of the approach using the cross-validation method at both daily and monthly time scales over China. We also evaluated the estimated ISR based on the thin-plate spline method using independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicated that the thin-plate smoothing spline method can be effectively used to calibrate satellite-derived ISR products using ground measurements to achieve better accuracy.
Fingerprint Matching by Thin-plate Spline Modelling of Elastic Deformations
Bazen, A.M.; Gerez, Sabih H.
2003-01-01
This paper presents a novel minutiae matching method that describes elastic distortions in fingerprints by means of a thin-plate spline model, which is estimated using a local and a global matching stage. After registration of the fingerprints according to the estimated model, the number of matching minutiae can be counted.
Least square fitting of low resolution gamma ray spectra with cubic B-spline basis functions
International Nuclear Information System (INIS)
Zhu Menghua; Liu Lianggang; Qi Dongxu; You Zhong; Xu Aoao
2009-01-01
In this paper, the least square fitting method with cubic B-spline basis functions is derived to reduce the influence of statistical fluctuations in gamma ray spectra. The derived procedure is simple and automatic. The results show that this method outperforms the convolution method, providing a sufficient reduction of statistical fluctuations. (authors)
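A least-squares cubic B-spline fit of this kind can be sketched with SciPy's `LSQUnivariateSpline`; the synthetic peak-plus-noise data below stand in for a measured gamma-ray spectrum and are not from the paper:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Illustrative "spectrum": a Gaussian peak on a flat background with
# Poisson-like statistical fluctuations (noise std ~ sqrt(counts)).
rng = np.random.default_rng(1)
ch = np.linspace(0, 100, 501)                         # channel axis
clean = 200 * np.exp(-0.5 * ((ch - 50) / 6) ** 2) + 20
noisy = clean + rng.normal(0, np.sqrt(clean))

# Least-squares cubic B-spline fit with fixed interior knots: far fewer
# coefficients than data points, so fluctuations are averaged out.
interior_knots = np.linspace(5, 95, 19)
fit = LSQUnivariateSpline(ch, noisy, interior_knots, k=3)

resid_noisy = np.sqrt(np.mean((noisy - clean) ** 2))   # error before fitting
resid_fit = np.sqrt(np.mean((fit(ch) - clean) ** 2))   # error after fitting
```

The knot spacing sets the smoothing strength: knots spaced wider than the statistical noise scale but narrower than the peak width suppress fluctuations while preserving the peak.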
Application of Cubic Box Spline Wavelets in the Analysis of Signal Singularities
Directory of Open Access Journals (Sweden)
Rakowski Waldemar
2015-12-01
Full Text Available In the subject literature, wavelets such as the Mexican hat (the second derivative of a Gaussian or the quadratic box spline are commonly used for the task of singularity detection. The disadvantage of the Mexican hat, however, is its unlimited support; the disadvantage of the quadratic box spline is a phase shift introduced by the wavelet, making it difficult to locate singular points. The paper deals with the construction and properties of wavelets in the form of cubic box splines which have compact and short support and which do not introduce a phase shift. The digital filters associated with cubic box wavelets that are applied in implementing the discrete dyadic wavelet transform are defined. The filters and the algorithme à trous of the discrete dyadic wavelet transform are used in detecting signal singularities and in calculating the measures of signal singularities in the form of a Lipschitz exponent. The article presents examples illustrating the use of cubic box spline wavelets in the analysis of signal singularities.
Numerical Solution of the Blasius Viscous Flow Problem by Quartic B-Spline Method
Directory of Open Access Journals (Sweden)
Hossein Aminikhah
2016-01-01
Full Text Available A numerical method is proposed to study the laminar boundary layer about a flat plate in a uniform stream of fluid. The presented method is based on quartic B-spline approximations with minimization of the L2-norm of the error. Theoretical considerations are discussed. The computed results are compared with some numerical results to show the efficiency of the proposed approach.
Integration by cell algorithm for Slater integrals in a spline basis
International Nuclear Information System (INIS)
Qiu, Y.; Fischer, C.F.
1999-01-01
An algorithm for evaluating Slater integrals in a B-spline basis is introduced. Based on the piecewise property of the B-splines, the algorithm divides the two-dimensional (r1, r2) region into a number of rectangular cells according to the chosen grid and implements the two-dimensional integration over each individual cell using Gaussian quadrature. Over the off-diagonal cells, the integrands are separable, so that each two-dimensional cell integral is reduced to a product of two one-dimensional integrals. Furthermore, the scaling invariance of the B-splines in the logarithmic region of the chosen grid is fully exploited, such that only some of the cell integrations need to be implemented. The values of given Slater integrals are obtained by assembling the cell integrals. This algorithm significantly improves the efficiency and accuracy of the traditional method that relies on the solution of differential equations, and renders the B-spline method more effective when applied to multi-electron atomic systems.
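The key saving for off-diagonal cells, a separable integrand reducing a 2-D cell integral to a product of two 1-D Gaussian quadratures, can be sketched as follows (generic NumPy, not the authors' B-spline code; the exponential integrand is illustrative):

```python
import numpy as np

# Cell-by-cell Gaussian quadrature over a rectangular (r1, r2) cell.
# For a separable integrand f(r1) * g(r2), the 2-D cell integral is the
# product of two 1-D Gauss-Legendre integrals.
def cell_integral_separable(f, g, r1_cell, r2_cell, n=8):
    x, w = np.polynomial.legendre.leggauss(n)
    def gauss1d(h, a, b):
        # map the reference nodes on [-1, 1] to [a, b]
        return 0.5 * (b - a) * np.sum(w * h(0.5 * (b - a) * x + 0.5 * (a + b)))
    return gauss1d(f, *r1_cell) * gauss1d(g, *r2_cell)

# integral of exp(-r1) * exp(-2 r2) over the cell [0,1] x [0,2]
val = cell_integral_separable(lambda r: np.exp(-r), lambda r: np.exp(-2 * r),
                              (0.0, 1.0), (0.0, 2.0))
exact = (1 - np.exp(-1)) * (1 - np.exp(-4)) / 2
```

Since B-splines are piecewise polynomials, a fixed-order Gauss rule per cell integrates the spline factors essentially exactly, which is what makes the cell algorithm both fast and accurate.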
Klein, S.; Staring, M.; Pluim, J.P.W.
2007-01-01
A popular technique for nonrigid registration of medical images is based on the maximization of their mutual information, in combination with a deformation field parameterized by cubic B-splines. The coordinate mapping that relates the two images is found using an iterative optimization procedure.
Directory of Open Access Journals (Sweden)
Mohsen Shahrokhi
2014-05-01
Full Text Available The present study investigates the extent to which lexical and grammatical collocations are used in Iranian high school English textbooks, compared with the American English File books. To achieve the purposes of this study, it had to be carried out in two phases. In the first phase, the content of the instructional textbooks, that is, American English File Book 2 and Iranian high school English Book 3, was analyzed to find the frequencies and proportions of the collocations used in the textbooks. Since the instructional textbooks used in the two teaching environments (i.e., Iranian high schools and language institutes) were not equal with regard to the density of texts, from each textbook just the first 6000 words, content words as well as function words, were considered. Then, the frequencies of the collocations among the first 6000 words in high school English Book 3 and American English File Book 2 were determined. The results of the statistical analyses revealed that the two textbook series differ marginally in terms of the frequency and type of collocations. A major difference existed between them with respect to the lexical collocations in American English File Book 2.
Implementation of optimal Galerkin and Collocation approximations of PDEs with Random Coefficients
Beck, Joakim
2011-12-22
In this work we first focus on the Stochastic Galerkin approximation of the solution u of an elliptic stochastic PDE. We rely on sharp estimates for the decay of the coefficients of the spectral expansion of u on orthogonal polynomials to build a sequence of polynomial subspaces that features better convergence properties compared to standard polynomial subspaces such as Total Degree or Tensor Product. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new effective class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids.
Metaphors in terminological collocations in English language and their equivalents in Serbian
Directory of Open Access Journals (Sweden)
Orčić Lidija S.
2017-01-01
Full Text Available The framework of this paper is the theory of conceptual metaphor, where metaphor is the transfer of a more concrete source domain into a more abstract target domain. Metaphor is a fundamental human ability to speak about abstract concepts using specific terms, where the meaning of a term is transferred to another, thus achieving semantic extensions. Although it was thought that in terminology polysemantic expressions are not desirable, in recent decades this traditional view has been abandoned. Metaphor is used not only as a linguistic decoration in language, but as a means of argumentation. It may be noted that metaphor, as a universal phenomenon, is also common in business English discourse. The subject of our interest is to investigate collocations made up of those nouns and adjectives which, according to the Oxford Business English Dictionary for Learners of English, are most frequently used in this field. The main objective of this work is to identify and analyze the source and target domains in metaphors in English collocations that contain these nouns and adjectives, and to detect the mechanisms applied in translating them into Serbian. We categorised metaphors in collocations into four groups. The first group consists of metaphors in which the source domain is expressed with living beings: inanimate entities are described as if they were alive. In these examples, personification is used to explain abstract concepts, forces and processes in order to present them in a more understandable way. The second group consists of metaphors in which animals are the source domain and their behavior and characteristics serve as a starting point. In business discourse, people and institutions are described with such metaphors. In the third group we included the metaphors based on objects that users are familiar with in everyday life. The fourth group consists of metaphors in which the source domain is natural phenomena.
Energy Technology Data Exchange (ETDEWEB)
Webster, Clayton G [ORNL; Zhang, Guannan [ORNL; Gunzburger, Max D [ORNL
2012-10-01
Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and, thus, optimal adaptation is achieved. Error estimates and numerical examples will be used to compare the efficiency of the method with several other techniques.
Directory of Open Access Journals (Sweden)
Pentti Järvi
2004-10-01
Full Text Available This study addresses analysing quarterly reports from a brand-theoretical viewpoint. The study approaches the issue through a method which combines a quantitative tool based on linguistic theory with the qualitative decisions of the researchers. The research objects of this study are two quarterly reports from each of three telecommunications companies: Ericsson, Motorola and Nokia. The method used is a collocational network. The analyses show that there are differences in communication and message strategies among the investigated companies, as well as changes over a quite short period within each company.
Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method
Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad
2018-03-01
An efficient method is proposed to approximate sixth-order boundary value problems. The method is based on Legendre wavelets, which are built from Legendre polynomials. Its mechanism is to use collocation points to convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate, close both to the exact solution and to the results of other methods. The proposed method is computationally more efficient and more accurate than other methods from the literature.
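The core mechanism described above, using collocation points to convert a differential equation into a system of algebraic equations, can be sketched in a few lines. The sketch below is illustrative only: it uses a plain polynomial basis satisfying the boundary conditions (not Legendre wavelets, and only a second-order model problem) to solve u'' = f on [0, 1] with u(0) = u(1) = 0.

```python
import numpy as np

def collocation_bvp(f, n=8):
    """Solve u'' = f(x), u(0) = u(1) = 0 by polynomial collocation.

    Illustrative basis (an assumption, not the Legendre-wavelet basis of the
    paper): phi_j(x) = x**(j+1) - x**(j+2), which vanishes at both endpoints.
    """
    x = np.linspace(0.0, 1.0, n + 2)[1:-1]      # interior collocation points
    A = np.empty((n, n))
    for j in range(n):
        # phi_j''(x) = (j+1)*j*x**(j-1) - (j+2)*(j+1)*x**j (j=0 first term is 0)
        A[:, j] = (j + 1) * j * x**max(j - 1, 0) - (j + 2) * (j + 1) * x**j
    c = np.linalg.solve(A, f(x))                # force zero residual at each x_i

    def u(xx):
        xx = np.asarray(xx, dtype=float)
        return sum(c[j] * (xx**(j + 1) - xx**(j + 2)) for j in range(n))
    return u
```

With f(x) = -pi^2 sin(pi x) the exact solution is sin(pi x), and eight basis functions already reproduce u(0.5) = 1 to roughly three digits or better.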
A nodal collocation method for the calculation of the lambda modes of the P_L equations
International Nuclear Information System (INIS)
Capilla, M.; Talavera, C.F.; Ginestar, D.; Verdu, G.
2005-01-01
The P_L equations are classical approximations to the neutron transport equation that admit a diffusive form. Using this property, a nodal collocation method is developed for the P_L approximations, based on the expansion of the flux in terms of orthonormal Legendre polynomials. This method approximates the differential lambda-modes problem by an algebraic eigenvalue problem from which the fundamental and the subcritical modes of the system can be calculated. To test the performance of this method, two problems have been considered: a homogeneous slab, which admits an analytical solution, and a seven-region slab corresponding to a more realistic problem.
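The reduction described above, from a differential lambda-modes problem to an algebraic eigenvalue problem, can be illustrated on the simplest possible case. The sketch below uses a plain finite-difference discretization (not the authors' nodal Legendre expansion) of a one-group homogeneous slab, where the fundamental eigenvalue of -u'' = lambda*u on (0, 1) with u = 0 at the boundaries is known to be pi^2.

```python
import numpy as np

# Finite-difference stand-in for the nodal collocation discretization of the
# slab eigenproblem -u'' = lambda * u on (0, 1), u(0) = u(1) = 0.
n = 200
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / h**2

# Smallest eigenvalue of the algebraic problem; the exact value is pi**2.
fundamental = np.linalg.eigvalsh(A)[0]
```

The subcritical modes correspond to the next eigenvalues of the same algebraic problem, which is exactly the structure the nodal collocation method exploits.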
Modelling and Simulation of a Packed Bed of Pulp Fibers Using Mixed Collocation Method
Directory of Open Access Journals (Sweden)
Ishfaq Ahmad Ganaie
2013-01-01
Full Text Available A convenient computational approach for solving a mathematical model of diffusion and dispersion during flow through a packed bed is presented. The algorithm is based on the mixed collocation method, which is particularly useful for solving stiff systems arising in chemical and process engineering. The convergence of the method is found to be of order 2 using the roots of the shifted Chebyshev polynomial. The model is verified against literature data. The method provides a convenient check on the accuracy of the results over a wide range of Peclet numbers. Breakthrough curves are plotted to check the effect of the Peclet number on average and exit solute concentrations.
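The roots of the shifted Chebyshev polynomial, used above as collocation points, are obtained by mapping the roots of T_n on [-1, 1] to [0, 1]. A minimal sketch:

```python
import numpy as np

def shifted_chebyshev_roots(n):
    """Roots of the shifted Chebyshev polynomial T*_n(x) = T_n(2x - 1) on [0, 1]."""
    k = np.arange(1, n + 1)
    t = np.cos((2 * k - 1) * np.pi / (2 * n))   # roots of T_n on [-1, 1]
    return np.sort((t + 1.0) / 2.0)             # affine map to [0, 1]
```

These points cluster toward the ends of the interval, which is what gives Chebyshev-based collocation its favourable accuracy compared with equispaced points.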
A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation
Jones, Brandon A.; Anderson, Rodney L.
2012-01-01
Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.
Dehghan, Mehdi; Mohammadi, Vahid
2017-03-01
As noted in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modelling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations, with surface effects treated through diffuse-interface models [27]. Numerical simulation of this practical model can be used to evaluate it. The present paper investigates the solution of the tumor-growth model with meshless techniques. The meshless methods are based on the collocation technique and employ multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantages of these choices stem from the natural behavior of meshless approaches: a meshless method can easily be applied to partial differential equations in high dimensions using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor-growth model. To discretize the time variable, two procedures are used: a semi-implicit finite difference method based on the Crank-Nicolson scheme, and an explicit Runge-Kutta time integration. The first gives a linear system of algebraic equations to be solved at each time step; the second is efficient but conditionally stable. The obtained numerical results confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.
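To show the MQ-RBF collocation ingredient in isolation, the sketch below applies Kansa's unsymmetric collocation to a one-dimensional Poisson model problem, far simpler than the tumor-growth system; the node count and shape parameter c are illustrative choices, not values from the paper.

```python
import numpy as np

def mq_collocation(f, g0, g1, n=15, c=0.15):
    """Kansa MQ-RBF collocation for u'' = f on [0, 1], u(0) = g0, u(1) = g1.

    phi_j(x) = sqrt((x - x_j)**2 + c**2) is the multiquadric; c is an
    illustrative shape parameter.
    """
    x = np.linspace(0.0, 1.0, n)                     # collocation centres
    r = x[:, None] - x[None, :]
    phi = np.sqrt(r**2 + c**2)
    phi_xx = c**2 / (r**2 + c**2)**1.5               # second derivative of phi
    A, b = phi_xx.copy(), f(x)
    A[0], b[0] = phi[0], g0                          # Dirichlet row at x = 0
    A[-1], b[-1] = phi[-1], g1                       # Dirichlet row at x = 1
    a = np.linalg.solve(A, b)

    def u(xx):
        rr = np.atleast_1d(np.asarray(xx, float))[:, None] - x[None, :]
        return np.sqrt(rr**2 + c**2) @ a
    return u
```

For f = -2 with homogeneous boundary values the exact solution is u(x) = x(1 - x), so the recovered midpoint value should be close to 0.25.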
Trajectory control of an articulated robot with a parallel drive arm based on splines under tension
Yi, Seung-Jong
Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration used to place the end-effector at the desired position and orientation within the workspace, and on following the specified path, which requires a trajectory planner. In addition, consideration of joint velocity, acceleration, and jerk trajectories is essential in trajectory planning for industrial robots to obtain smooth operation. The newly designed 6-DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed on the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness, is selected. First, the forward and inverse kinematic problems are examined. The forward kinematic equations are derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve-fitting methods used in trajectory planning, i.e., polynomial functions of certain degree, cubic spline functions, and cubic spline functions under tension, are compared to select the method best able to satisfy both smooth joint trajectories and positioning accuracy. Cubic splines under tension are selected for the new trajectory planner. This method is implemented for the 6-DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. The approach is also compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular-arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented into the microprocessor-based robot controller and
Prostate multimodality image registration based on B-splines and quadrature local energy.
Mitra, Jhimli; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C; Meriaudeau, Fabrice
2012-05-01
Needle biopsy of the prostate is guided by Transrectal Ultrasound (TRUS) imaging. TRUS images do not provide proper spatial localization of malignant tissues, due to the poor sensitivity of TRUS for visualizing early malignancy. Magnetic Resonance Imaging (MRI) has been shown to be sensitive for the detection of early-stage malignancy, and therefore a novel 2D deformable registration method that overlays pre-biopsy MRI onto TRUS images has been proposed. The registration method involves B-spline deformations with Normalized Mutual Information (NMI) as the similarity measure, computed from texture images obtained from the amplitude responses of directional quadrature filter pairs. Registration accuracy is evaluated by computing the Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD) values for the prostate mid-gland slices of 20 patients, and the Target Registration Error (TRE) for the 18 patients in whom homologous structures are visible in both the TRUS and transformed MR images. The proposed method and B-splines using NMI computed from intensities provide average TRE values of 2.64 ± 1.37 mm and 4.43 ± 2.77 mm, respectively. Our method shows a statistically significant improvement in TRE compared with B-splines using NMI computed from intensities (Student's t test, p = 0.02), and a 1.18-fold improvement over thin-plate spline registration, whose average TRE is 3.11 ± 2.18 mm. The mean DSC and mean 95% HD values obtained with the proposed method are 0.943 ± 0.039 and 4.75 ± 2.40 mm, respectively. The texture energy computed from the quadrature filter pairs provides better registration accuracy for multimodal images than raw intensities. The low TRE values of the proposed registration method add to the feasibility of its use during TRUS-guided biopsy.
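The Dice Similarity Coefficient reported above has the direct definition DSC = 2|A∩B| / (|A| + |B|) on binary segmentation masks; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

A DSC of 1 means perfect overlap; values such as the 0.943 reported above indicate near-complete agreement between the registered contours.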
DEFF Research Database (Denmark)
E. Fischer, Joel; Porcheron, Martin; Lucero, Andrés
2016-01-01
In the 25 years since Ellis, Gibbs, and Rein proposed the time-space taxonomy, research in the 'same time, same place' quadrant has diversified, perhaps even fragmented. This one-day workshop will bring together researchers with diverse, yet convergent interests in tabletop, surface, and mobile … interactions. Yet, new challenges abound as people wear and carry more devices than ever, creating fragmented device ecologies at work, and changing the ways we socialise with each other. In this workshop we seek to start a dialogue to look back as well as forward, review best practices, discuss and design …
Directory of Open Access Journals (Sweden)
Zhao-Qing Wang
2014-01-01
Full Text Available By embedding the irregular doubly connected domain into an annular regular region, the unknown functions can be approximated by barycentric Lagrange interpolation on the regular region. A highly accurate regular-domain collocation method is proposed for solving potential problems on the irregular doubly connected domain in the polar coordinate system. The formulations of the method are constructed using barycentric Lagrange interpolation collocation on the regular domain in polar coordinates. The boundary conditions are discretized by barycentric Lagrange interpolation within the regular domain and imposed by an additional method. The least-squares method can be used to solve the resulting overconstrained equations. Function values at points of the irregular doubly connected domain are then calculated by barycentric Lagrange interpolation within the regular domain. Numerical examples demonstrate the effectiveness and accuracy of the presented method.
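The barycentric Lagrange interpolation used throughout the method evaluates the interpolant stably from precomputed weights w_j = 1 / prod_{k != j}(x_j - x_k). A minimal one-dimensional sketch (the paper works in two dimensions on an annulus):

```python
import numpy as np

def bary_weights(x):
    """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, 1.0)
    return 1.0 / d.prod(axis=1)

def bary_interp(x, fx, w, xx):
    """Evaluate the barycentric Lagrange interpolant of (x, fx) at points xx."""
    xx = np.atleast_1d(np.asarray(xx, dtype=float))
    out = np.empty_like(xx)
    for i, xi in enumerate(xx):
        diff = xi - x
        hit = diff == 0.0
        if hit.any():
            out[i] = fx[hit][0]            # query point coincides with a node
        else:
            t = w / diff
            out[i] = (t @ fx) / t.sum()    # second barycentric formula
    return out
```

On Chebyshev-type nodes this evaluation is numerically stable even for high polynomial degrees, which is what makes the "highly accurate" claim above practical.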
An implicit meshless scheme for the solution of transient non-linear Poisson-type equations
Bourantas, Georgios
2013-07-01
A meshfree point collocation method is used for the numerical simulation of both transient and steady state non-linear Poisson-type partial differential equations. Particular emphasis is placed on the application of the linearization method with special attention to the lagging of coefficients method and the Newton linearization method. The localized form of the Moving Least Squares (MLS) approximation is employed for the construction of the shape functions, in conjunction with the general framework of the point collocation method. Computations are performed for regular nodal distributions, stressing the positivity conditions that make the resulting system stable and convergent. The accuracy and the stability of the proposed scheme are demonstrated through representative and well-established benchmark problems. © 2013 Elsevier Ltd.
An implicit meshless scheme for the solution of transient non-linear Poisson-type equations
Bourantas, Georgios; Burganos, Vasilis N.
2013-01-01
A meshfree point collocation method is used for the numerical simulation of both transient and steady state non-linear Poisson-type partial differential equations. Particular emphasis is placed on the application of the linearization method with special attention to the lagging of coefficients method and the Newton linearization method. The localized form of the Moving Least Squares (MLS) approximation is employed for the construction of the shape functions, in conjunction with the general framework of the point collocation method. Computations are performed for regular nodal distributions, stressing the positivity conditions that make the resulting system stable and convergent. The accuracy and the stability of the proposed scheme are demonstrated through representative and well-established benchmark problems. © 2013 Elsevier Ltd.
An h-adaptive stochastic collocation method for stochastic EMC/EMI analysis
Yücel, Abdulkadir C.
2010-07-01
The analysis of electromagnetic compatibility and interference (EMC/EMI) phenomena is often fraught by randomness in a system's excitation (e.g., the amplitude, phase, and location of internal noise sources) or configuration (e.g., the routing of cables, the placement of electronic systems, component specifications, etc.). To bound the probability of system malfunction, fast and accurate techniques to quantify the uncertainty in system observables (e.g., voltages across mission-critical circuit elements) are called for. Recently proposed stochastic frameworks [1-2] combine deterministic electromagnetic (EM) simulators with stochastic collocation (SC) methods that approximate system observables using generalized polynomial chaos expansion (gPC) [3] (viz. orthogonal polynomials spanning the entire random domain) to estimate their statistical moments and probability density functions (pdfs). When constructing gPC expansions, the EM simulator is used solely to evaluate system observables at collocation points prescribed by the SC-gPC scheme. The frameworks in [1-2] therefore are non-intrusive and straightforward to implement. That said, they become inefficient and inaccurate for system observables that vary rapidly or are discontinuous in the random variables (as their representations may require very high-order polynomials). © 2010 IEEE.
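The non-intrusive character of stochastic collocation, evaluating the observable only at prescribed points, is easy to see in one dimension. The sketch below estimates the mean of an observable g(xi) with xi uniform on [-1, 1] from Gauss-Legendre collocation points; the scalar function g stands in for a full EM simulation.

```python
import numpy as np

def collocation_mean(g, n=8):
    """Mean of g(xi), xi ~ Uniform(-1, 1), by Gauss-Legendre stochastic collocation."""
    x, w = np.polynomial.legendre.leggauss(n)   # collocation points and weights
    return 0.5 * (w @ g(x))                     # density of xi is 1/2 on [-1, 1]
```

Higher moments and pdf estimates follow the same pattern; the limitation noted above is visible here too, since a discontinuous g would need very many points for this polynomial-based rule to converge.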
Comparison of multiphase mixing simulations performed on a staggered and a collocated grid
International Nuclear Information System (INIS)
Leskovar, M.
2000-01-01
During a severe reactor accident following core meltdown, when the molten fuel comes into contact with the coolant water, a steam explosion may occur. The premixing phase of a steam explosion covers the interaction of the melt jet or droplets with the water prior to any steam explosion occurrence. To get a better insight into the hydrodynamic processes during the premixing phase, cold isothermal premixing experiments are performed in addition to hot premixing experiments, in which water evaporation is significant. To analyze the cold premixing experiments, the computer code ESE has been developed. The specialty of ESE is that it uses a combined single-/multiphase flow model. Because of problems with the convergence of the momentum equation written in conservative form on a staggered grid, the development of a collocated-grid version of ESE was planned. However, since we obtained the commercial code CFX-4.3, which uses a collocated variable arrangement, we decided first to test the capabilities of CFX-4.3. With ESE and CFX-4.3 the cold premixing experiment Q08 has been simulated. In the paper, the simulation results of both codes are presented and compared with experimental data. (author)
ANALYSIS OF SPECIALISED COLLOCATIONS IN THE AREA OF REMOTE SENSING IN THE PERSPECTIVE OF PHRASEOLOGY
Directory of Open Access Journals (Sweden)
Diva Cardoso de CAMARGO
2013-12-01
Full Text Available The aim of this research is to build and analyze a parallel corpus in the field of remote sensing in order to identify, according to its frequency, specialized collocations in English and then search for their equivalents in Portuguese. The research is based on the interdisciplinary approach of Corpus-Based Translation Studies (BAKER, 1995; CAMARGO, 2007, Corpus Linguistics (BERBER SARDINHA, 2004; TOGNINI-BONELLI, 2001, Phraseology (ORENHA-OTTAIANO, 2009; PAVEL, 1993, and some principles of Terminology (BARROS, 2004. For manipulating the corpora, the program WordSmith Tools (SCOTT, 2012 version 6.0 is used. To support this study, two comparable corpora in English and Portuguese were also built from articles published in both national and international journals in remote sensing. The results show that the collocations in Portuguese seem to be still in the process of conventionalization, as the translators made use of greater variation in their translational options, which can be a way to make the text clearer for the reader.
Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.
2015-01-01
Staggered grid, entropy stable discontinuous spectral collocation operators of any order are developed for Burgers' and the compressible Navier-Stokes equations on unstructured hexahedral elements. This generalization of previous entropy stable spectral collocation work [1, 2], extends the applicable set of points from tensor product, Legendre-Gauss-Lobatto (LGL) to a combination of tensor product Legendre-Gauss (LG) and LGL points. The new semi-discrete operators discretely conserve mass, momentum, energy and satisfy a mathematical entropy inequality for both Burgers' and the compressible Navier-Stokes equations in three spatial dimensions. They are valid for smooth as well as discontinuous flows. The staggered LG and conventional LGL point formulations are compared on several challenging test problems. The staggered LG operators are significantly more accurate, although more costly to implement. The LG and LGL operators exhibit similar robustness, as is demonstrated using test problems known to be problematic for operators that lack a nonlinearly stability proof for the compressible Navier-Stokes equations (e.g., discontinuous Galerkin, spectral difference, or flux reconstruction operators).
Corpus-Based Websites to Promote Learner Autonomy in Correcting Writing Collocation Errors
Directory of Open Access Journals (Sweden)
Pham Thuy Dung
2016-12-01
Full Text Available The recent yet powerful emergence of e-learning and of online resources in learning EFL (English as a Foreign Language) has helped promote learner autonomy in language acquisition, including self-correction of mistakes. This pilot study, though conducted on a modest sample of 25 second-year students majoring in Business English at Hanoi Foreign Trade University, is an initial attempt to investigate the feasibility of using corpus-based websites to promote learner autonomy in correcting collocation errors in EFL writing. The data were collected using a pre-questionnaire and a post-interview aiming to find out the participants' change in belief and attitude toward learner autonomy regarding collocation errors in writing, the extent of their success in using the corpus-based websites to self-correct the errors, and the change in their confidence in self-correcting the errors using the websites. The findings show that a significant majority of students shifted their belief and attitude toward a more autonomous mode of learning, enjoyed fair success in using the websites to self-correct the errors, and became more confident. The study also yields the implication that face-to-face training in how to use these online tools is vital to the later confidence and success of the learners.
Directory of Open Access Journals (Sweden)
Ali H. Bhrawy
2014-01-01
Full Text Available The modified generalized Laguerre-Gauss collocation (MGLC method is applied to obtain an approximate solution of fractional neutral functional-differential equations with proportional delays on the half-line. The proposed technique is based on modified generalized Laguerre polynomials and Gauss quadrature integration of such polynomials. The main advantage of the present method is to reduce the solution of fractional neutral functional-differential equations into a system of algebraic equations. Reasonable numerical results are achieved by choosing few modified generalized Laguerre-Gauss collocation points. Numerical results demonstrate the accuracy, efficiency, and versatility of the proposed method on the half-line.
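The Gauss quadrature step above integrates, against the weight e^{-x} on the half-line, polynomials of degree up to 2n - 1 exactly. The sketch below uses standard Laguerre polynomials via NumPy (the paper uses modified generalized Laguerre polynomials, which NumPy does not provide directly):

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

def laguerre_integral(f, n=10):
    """Approximate the integral of exp(-x) * f(x) over [0, inf)
    with n-point Gauss-Laguerre quadrature."""
    x, w = laggauss(n)      # nodes are the roots of the Laguerre polynomial L_n
    return w @ f(x)
```

For f(x) = x^2 the exact value is Gamma(3) = 2, which a 10-point rule reproduces to machine precision; this exactness on the half-line is what lets a few collocation points capture the delay terms accurately.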
Directory of Open Access Journals (Sweden)
Navnit Jha
2014-04-01
Full Text Available An efficient numerical method based on a quintic nonpolynomial spline basis and high-order finite difference approximations is presented. The scheme works in a space containing hyperbolic and polynomial functions as the spline basis. With the help of the spline functions we derive consistency conditions and high-order discretizations of the differential equation with a significant first-order derivative term. The error analysis of the new method is discussed briefly, and its efficiency is analyzed on physical problems. The order and accuracy of the proposed method are assessed in terms of maximum errors and root mean square errors.
Using cubic splines in mathematical modelling of the population evolution of Pirapora/MG
Directory of Open Access Journals (Sweden)
José Sérgio Domingues
2014-08-01
Full Text Available The objective of this work is to obtain a mathematical model for the population evolution of the city of Pirapora/MG, based only on census data and population counts from the Brazilian Institute of Geography and Statistics (IBGE). Cubic spline interpolation is used because linear and polynomial interpolation, and also the logistic model, do not fit this population well. The analyzed data are not equidistant, so the sample uses years separated by a step h of 10 years. The values initially discarded, together with the population estimates for this municipality published by the João Pinheiro Foundation, served to validate the constructed model and to estimate the percentage prediction differences, which did not exceed 2.21%. Assuming that the pattern of population evolution from 2000 to 2010 holds until 2020, the city's population is estimated for 2011 to 2020, with a mean percentage difference of only 0.49%. It is concluded that the model fits the data very well and that population estimates for any year between 1970 and 2020 are reliable. Moreover, the model provides a practical illustration of this technique applied to population modelling and can therefore also be used for didactic purposes. Keywords: cubic splines, interpolation, mathematical modelling, population evolution, Pirapora.
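The interpolation step can be sketched with SciPy's `CubicSpline`. The decennial figures below are illustrative placeholders, not the actual IBGE census values for Pirapora/MG:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder decennial populations at a 10-year step h (NOT the real IBGE data).
years = np.array([1970.0, 1980.0, 1990.0, 2000.0, 2010.0])
pop = np.array([30000.0, 38000.0, 45000.0, 50000.0, 53000.0])

spline = CubicSpline(years, pop)        # C2 piecewise-cubic interpolant
estimate_1995 = float(spline(1995.0))   # intercensal estimate
```

The spline reproduces every census point exactly and gives smooth intercensal estimates, which is why it outperforms a single global polynomial or logistic fit on this kind of data.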
Côrtes, A.M.A.
2015-02-20
The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity–pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to discretized Stokes equations, these spaces generate a symmetric and indefinite saddle-point linear system. Krylov subspace methods are usually the most efficient procedures to solve such systems. One such method for symmetric systems is the Minimum Residual Method (MINRES). However, the efficiency and robustness of Krylov subspace methods are closely tied to appropriate preconditioning strategies. For the discrete Stokes system, in particular, block-diagonal strategies provide efficient preconditioners. In this article, we compare the performance of block-diagonal preconditioners for several block choices. We verify how the eigenvalue clustering promoted by the preconditioning strategies affects MINRES convergence. We also compare the number of iterations and wall-clock timings. We conclude that among the building blocks we tested, the strategy with relaxed inner conjugate gradients preconditioned with incomplete Cholesky provided the best results.
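A block-diagonally preconditioned MINRES solve of a small saddle-point system can be sketched with SciPy. Purely for illustration, the blocks below use exact inverses of K and of the Schur complement S = B K^{-1} B^T; the building blocks compared in the article (e.g. relaxed inner CG with incomplete Cholesky) are inexact versions of these.

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(0)
n, m = 20, 5
K = (np.diag(2.0 * np.ones(n))                   # SPD "velocity" block
     + np.diag(0.5 * np.ones(n - 1), 1)
     + np.diag(0.5 * np.ones(n - 1), -1))
B = rng.standard_normal((m, n))                  # divergence-like block

# Symmetric indefinite saddle-point matrix [[K, B^T], [B, 0]]
A = np.block([[K, B.T], [B, np.zeros((m, m))]])
rhs = rng.standard_normal(n + m)

# Block-diagonal preconditioner diag(K, S)^{-1} with S = B K^{-1} B^T
Kinv = np.linalg.inv(K)
Sinv = np.linalg.inv(B @ Kinv @ B.T)
P = LinearOperator((n + m, n + m),
                   matvec=lambda v: np.concatenate([Kinv @ v[:n], Sinv @ v[n:]]))

x, info = minres(A, rhs, M=P)
```

With the exact Schur complement, the preconditioned operator has only three eigenvalue clusters, so MINRES converges in a handful of iterations; practical preconditioners trade some of that clustering for cheaper block applications.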
Cô rtes, A.M.A.; Coutinho, A.L.G.A.; Dalcin, L.; Calo, Victor M.
2015-01-01
The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity–pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to discretized Stokes equations, these spaces generate a symmetric and indefinite saddle-point linear system. Krylov subspace methods are usually the most efficient procedures to solve such systems. One such method for symmetric systems is the Minimum Residual Method (MINRES). However, the efficiency and robustness of Krylov subspace methods are closely tied to appropriate preconditioning strategies. For the discrete Stokes system, in particular, block-diagonal strategies provide efficient preconditioners. In this article, we compare the performance of block-diagonal preconditioners for several block choices. We verify how the eigenvalue clustering promoted by the preconditioning strategies affects MINRES convergence. We also compare the number of iterations and wall-clock timings. We conclude that among the building blocks we tested, the strategy with relaxed inner conjugate gradients preconditioned with incomplete Cholesky provided the best results.
Nonlinear bias compensation of ZiYuan-3 satellite imagery with cubic splines
Cao, Jinshan; Fu, Jianhong; Yuan, Xiuxiao; Gong, Jianya
2017-11-01
Like many high-resolution satellites such as the ALOS, MOMS-2P, QuickBird, and ZiYuan1-02C satellites, the ZiYuan-3 satellite suffers from different levels of attitude oscillations. As a result of such oscillations, the rational polynomial coefficients (RPCs) obtained using a terrain-independent scenario often have nonlinear biases. In the sensor orientation of ZiYuan-3 imagery based on a rational function model (RFM), these nonlinear biases cannot be effectively compensated by an affine transformation. The sensor orientation accuracy is thereby worse than expected. In order to eliminate the influence of attitude oscillations on the RFM-based sensor orientation, a feasible nonlinear bias compensation approach for ZiYuan-3 imagery with cubic splines is proposed. In this approach, no actual ground control points (GCPs) are required to determine the cubic splines. First, the RPCs are calculated using a three-dimensional virtual control grid generated based on a physical sensor model. Second, one cubic spline is used to model the residual errors of the virtual control points in the row direction and another cubic spline is used to model the residual errors in the column direction. Then, the estimated cubic splines are used to compensate the nonlinear biases in the RPCs. Finally, the affine transformation parameters are used to compensate the residual biases in the RPCs. Three ZiYuan-3 images were tested. The experimental results showed that before the nonlinear bias compensation, the residual errors of the independent check points were nonlinearly biased. Even if the number of GCPs used to determine the affine transformation parameters was increased from 4 to 16, these nonlinear biases could not be effectively compensated. After the nonlinear bias compensation with the estimated cubic splines, the influence of the attitude oscillations could be eliminated. The RFM-based sensor orientation accuracies of the three ZiYuan-3 images reached 0.981 pixels, 0.890 pixels, and 1
International Nuclear Information System (INIS)
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar; if it is bipolar, the negative portion is filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
International Nuclear Information System (INIS)
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics.
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparison with the logic circuit method. We compare the relevant conditions and the necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from the pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, by contrast, needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods work from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity, so the linear complexity is generally given as an estimate. The linearization method, because it works from the algorithm of the PRNG itself, can determine the lower bound of the linear complexity.
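For contrast with the linearization method, the Berlekamp-Massey algorithm computes the linear complexity from an observed output sequence. A compact GF(2) version (a textbook sketch, not the paper's algorithm):

```python
def berlekamp_massey(bits):
    """Linear complexity of a binary sequence via Berlekamp-Massey over GF(2)."""
    n = len(bits)
    c = [0] * (n + 1); b = [0] * (n + 1)   # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1                           # L = current complexity, m = last update index
    for i in range(n):
        d = bits[i]                        # discrepancy of the current recurrence
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(n + 1 - (i - m)):
                c[i - m + j] ^= b[j]       # c(x) += x^(i-m) * b(x) over GF(2)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

Because it consumes the output sequence, its answer depends on how many bits (and which initial state) were observed, which is exactly the estimate-versus-lower-bound distinction drawn above.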
Barton, Michael; Calo, Victor M.
2016-01-01
We introduce Gaussian quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. By definition, these spaces are of even degrees. The optimal quadrature rules we recently derived
Cortes, Adriano Mauricio; Dalcin, Lisandro; Sarmiento, Adel; Collier, N.; Calo, Victor M.
2016-01-01
The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity-pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence
Topology optimization based on spline-based meshfree method using topological derivatives
International Nuclear Information System (INIS)
Hur, Junyoung; Youn, Sung-Kie; Kang, Pilseong
2017-01-01
The spline-based meshfree method (SBMFM) originates from isogeometric analysis (IGA), which integrates design and analysis through Non-uniform rational B-spline (NURBS) basis functions. SBMFM utilizes the trimming technique of CAD systems by representing the domain with NURBS curves. In this work, an explicit boundary topology optimization using SBMFM is presented with an effective boundary update scheme. There have been similar works on this subject; however, unlike the previous works, where a semi-analytic method is employed for calculating design sensitivities, the design update here is driven by topological derivatives. The topological derivative is used to derive the sensitivity of the boundary curves and to create new holes. Based on the values of the topological derivatives, the shape of the boundary curves is updated, and topological change is achieved by insertion and removal of inner holes. The presented approach is validated through several compliance minimization problems.
Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.
Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco
2015-04-20
Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase, and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formulate phase retrieval as a regularized inverse problem and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm (CRWN) algorithm which exploits the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced and higher soft-tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information for mammographic investigations.
Finite nucleus Dirac mean field theory and random phase approximation using finite B splines
International Nuclear Information System (INIS)
McNeil, J.A.; Furnstahl, R.J.; Rost, E.; Shepard, J.R. (Department of Physics, University of Maryland, College Park, Maryland 20742; Department of Physics, University of Colorado, Boulder, Colorado 80309)
1989-01-01
We calculate the finite nucleus Dirac mean field spectrum in a Galerkin approach using finite basis splines. We review the method and present results for the relativistic σ-ω model for the closed-shell nuclei 16O and 40Ca. We study the convergence of the method as a function of the size of the basis and the closure properties of the spectrum using an energy-weighted dipole sum rule. We apply the method to the Dirac random-phase-approximation response and present results for the isoscalar 1⁻ and 3⁻ longitudinal form factors of 16O and 40Ca. We also use a B-spline spectral representation of the positive-energy projector to evaluate partial energy-weighted sum rules and compare with nonrelativistic sum rule results.
Selected Aspects of Wear Affecting Keyed Joints and Spline Connections During Operation of Aircrafts
Directory of Open Access Journals (Sweden)
Gębura Andrzej
2014-12-01
The paper deals with selected deficiencies of spline connections, such as angular or parallel misalignment (eccentricity) and excessive play. It is emphasized how important these deficiencies are for the smooth operation of entire driving units. The aim of the study is to provide a reference list of such deficiencies together with visual symptoms of wear, specification of mechanical measurements for mating surfaces, a mathematical description of waveforms for the dynamic variability of motion in such connections, and visualizations of connection behaviour acquired with the FAM-C and FDM-A methods. Attention is paid to hazards to flight safety when excessively worn spline connections are operated for long periods of time.
[Non-rigid medical image registration based on mutual information and thin-plate spline].
Cao, Guo-gang; Luo, Li-min
2009-01-01
To obtain precise and complete details, the comparison of different images is needed in medical diagnosis and computer-assisted treatment. Image registration is the basis of such comparison, but regular rigid registration does not satisfy clinical requirements. A non-rigid medical image registration method based on mutual information and thin-plate splines is presented. Firstly, the two images are registered globally based on mutual information; secondly, the reference image and the globally registered image are divided into blocks, which are then registered; next, the thin-plate spline transformation is computed according to the shifts of the blocks' centers; finally, the transformation is applied to the globally registered image. The results show that the method is more precise than global rigid registration based on mutual information, and that it reduces the complexity of obtaining control points, satisfying clinical requirements better by deriving the control points of the thin-plate transformation automatically.
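The thin-plate spline transformation used in this registration pipeline can be fitted by solving the standard TPS linear system built from the landmark correspondences; a minimal 2-D numpy sketch (function names and test points are illustrative):

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2-D thin-plate spline mapping src landmarks (n, 2) onto dst (n, 2)."""
    n = len(src)
    r2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    # radial kernel U(r) = r^2 log r = 0.5 * r^2 * log r^2, with U(0) = 0
    K = np.where(r2 > 0, 0.5 * r2 * np.log(r2 + (r2 == 0)), 0.0)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(A, rhs)            # (n+3, 2) spline coefficients

def tps_apply(params, src, pts):
    """Warp arbitrary points with a fitted thin-plate spline."""
    r2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = np.where(r2 > 0, 0.5 * r2 * np.log(r2 + (r2 == 0)), 0.0)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[:len(src)] + P @ params[len(src):]
```

The fitted spline interpolates the landmarks exactly; in the method above the landmark shifts come from the block-wise registration step.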
DEFF Research Database (Denmark)
Czekaj, Tomasz Gerard; Henningsen, Arne
of specifying an unsuitable functional form and thus, model misspecification and biased parameter estimates. Given these problems of the DEA and the SFA, Fan, Li and Weersink (1996) proposed a semi-parametric stochastic frontier model that estimates the production function (frontier) by non......), Kumbhakar et al. (2007), and Henningsen and Kumbhakar (2009). The aim of this paper and its main contribution to the existing literature is the estimation of semi-parametric stochastic frontier models using a different non-parametric estimation technique: spline regression (Ma et al. 2011). We apply...... efficiency of Polish dairy farms contributes to the insight into this dynamic process. Furthermore, we compare and evaluate the results of this spline-based semi-parametric stochastic frontier model with results of other semi-parametric stochastic frontier models and of traditional parametric stochastic...
Directory of Open Access Journals (Sweden)
Qin Guo-jie
2014-08-01
Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite impulse response (FIR) filter structure, and the method for correcting the interpolation compensation filter coefficients is deduced. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results show that the correction technique is effective in attenuating spurious spurs and improving the dynamic performance of the system.
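The paper's filter design is not reproduced here, but the underlying idea of resampling a skewed channel at a fractional offset can be sketched with a generic 4-tap cubic Lagrange fractional-delay interpolator (a common stand-in for cubic-spline-based FIR compensation; all names and values are illustrative):

```python
import numpy as np

def cubic_lagrange_taps(d):
    """4-tap Lagrange FIR coefficients estimating x[n+d] (0 <= d < 1)
    from the samples x[n-1], x[n], x[n+1], x[n+2]."""
    k = [-1.0, 0.0, 1.0, 2.0]
    taps = np.ones(4)
    for i in range(4):
        for j in range(4):
            if i != j:
                taps[i] *= (d - k[j]) / (k[i] - k[j])
    return taps

# Estimate a slowly varying sine at a fractional sample offset d = 0.3.
f, d = 0.02, 0.3
x = np.sin(2 * np.pi * f * np.arange(64))
n = np.arange(1, 60)
taps = cubic_lagrange_taps(d)
est = sum(t * x[n + k] for t, k in zip(taps, [-1, 0, 1, 2]))
err = np.max(np.abs(est - np.sin(2 * np.pi * f * (n + d))))
```

For low input frequencies the 4-tap interpolator reconstructs the fractionally delayed samples to well below the skew-induced error, which is what makes such short FIR structures attractive for time-interleaved ADC correction.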
Institute of Scientific and Technical Information of China (English)
Chen Xi; Liao Mingfu; Li Quankun
2017-01-01
A rotor dynamic model is built up for investigating the effects of tightening torque on the dynamic characteristics of low pressure rotors connected by a spline coupling. The experimental rotor system is established using a fluted disk and a speed sensor which is applied in an actual aero engine for speed measurement. Through simulating calculations and experiments, the effects of tightening torque on the dynamic characteristics of the rotor system connected by a spline coupling, including critical speeds, vibration modes and unbalance responses, are analyzed. The results show that when increasing the tightening torque, the first two critical speeds and the amplitudes of unbalance response gradually increase in varying degrees, while the vibration modes are essentially unchanged. In addition, changing the axial and circumferential positions of the mass unbalance can lead to various amplitudes of unbalance response and even various rates of change.
Topology optimization based on spline-based meshfree method using topological derivatives
Energy Technology Data Exchange (ETDEWEB)
Hur, Junyoung; Youn, Sung-Kie [KAIST, Daejeon (Korea, Republic of); Kang, Pilseong [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)
2017-05-15
The spline-based meshfree method (SBMFM) originates from isogeometric analysis (IGA), which integrates design and analysis through non-uniform rational B-spline (NURBS) basis functions. SBMFM utilizes the trimming technique of CAD systems by representing the domain using NURBS curves. In this work, an explicit boundary topology optimization using SBMFM is presented with an effective boundary update scheme. There have been similar works on this subject; however, unlike previous works in which a semi-analytic method for calculating design sensitivities is employed, the design update here is done using topological derivatives. In this research, the topological derivative is used to derive the sensitivity of boundary curves and to create new holes. Based on the values of topological derivatives, the shape of the boundary curves is updated. Also, topological change is achieved by insertion and removal of inner holes. The presented approach is validated through several compliance minimization problems.
B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation
Directory of Open Access Journals (Sweden)
Frederic Precioso
2002-06-01
This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contour is a powerful technique for segmentation. However, most of these methods are implemented using level sets. Although level-set methods provide accurate segmentation, they suffer from a large computational cost. We propose to use a regular B-spline parametric method to provide fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points, 2^j, depending on the level of desired detail. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We also introduce a length penalty, which improves both smoothness and accuracy. Finally, we show some experiments on real video sequences.
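A parametric B-spline contour of the kind used above is just a closed curve evaluated from a small set of control points; a minimal sketch of sampling a closed uniform cubic B-spline (function name and control polygon are illustrative):

```python
import numpy as np

def bspline_contour(ctrl, samples_per_span=20):
    """Sample a closed uniform cubic B-spline defined by control points ctrl (m, 2)."""
    m = len(ctrl)
    t = np.linspace(0.0, 1.0, samples_per_span, endpoint=False)
    # uniform cubic B-spline basis functions on one knot span
    B = np.stack([(1 - t) ** 3,
                  3 * t ** 3 - 6 * t ** 2 + 4,
                  -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                  t ** 3], axis=1) / 6.0
    # each span blends four consecutive control points (indices wrap for a closed curve)
    pts = [B @ ctrl[[(i - 1) % m, i % m, (i + 1) % m, (i + 2) % m]]
           for i in range(m)]
    return np.vstack(pts)

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
contour = bspline_contour(square)
```

Because the basis weights are nonnegative and sum to one, the curve stays inside the convex hull of the control polygon, and moving a handful of control points deforms the whole contour smoothly; that is what keeps the per-iteration cost low compared with level sets.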
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
Cubic spline numerical solution of an ablation problem with convective backface cooling
Lin, S.; Wang, P.; Kahawita, R.
1984-08-01
An implicit numerical technique using cubic splines is presented for solving an ablation problem on a thin wall with convective cooling. A non-uniform computational mesh with 6 grid points has been used for the numerical integration. The method has been found to be computationally efficient, providing, for the case under consideration, an overall error of about 1 percent. The results obtained indicate that convective cooling is an important factor in reducing the ablation thickness.
Discrete quintic spline for boundary value problem in plate deflection theory
Wong, Patricia J. Y.
2017-07-01
We propose a numerical scheme for a fourth-order boundary value problem arising from plate deflection theory. The scheme involves a discrete quintic spline; it is of order 4 if a parameter takes a specific value, and of order 2 otherwise. We also present a well-known numerical example to illustrate the efficiency of our method as well as to compare it with other numerical methods proposed in the literature.
Nonlinear Multivariate Spline-Based Control Allocation for High-Performance Aircraft
Tol, H.J.; De Visser, C.C.; Van Kampen, E.; Chu, Q.P.
2014-01-01
High-performance flight control systems based on the nonlinear dynamic inversion (NDI) principle require highly accurate models of aircraft aerodynamics. In general, the accuracy of the internal model determines to what degree the system nonlinearities can be canceled; the more accurate the model, the better the cancellation, and with that, the higher the performance of the controller. In this paper a new control system is presented that combines NDI with multivariate simplex spline based con...
Hay, A D; Singh, G D
2000-01-01
To analyze correction of mandibular deformity using an inverted-L osteotomy and autogenous bone graft in patients exhibiting unilateral craniofacial microsomia (CFM), thin-plate spline analysis was undertaken. Preoperative, early postoperative, and approximately 3.5-year postoperative posteroanterior cephalographs of 15 children (age 10±3 years) with CFM were scanned, and eight homologous mandibular landmarks were digitized. Average mandibular geometries, scaled to an equivalent size, were generated using Procrustes superimposition. Results indicated that the mean pre- and postoperative mandibular configurations differed statistically. Thin-plate spline analysis indicated that the total spline (Cartesian transformation grid) of the pre- to early postoperative configuration showed mandibular body elongation on the treated side and inferior symphyseal displacement. The affine component of the total spline revealed a clockwise rotation of the preoperative configuration, whereas the nonaffine component was responsible for ramus, body, and symphyseal displacements. The transformation grid for the early and late postoperative comparison showed bilateral ramus elongation. A superior symphyseal displacement contrasted with its earlier inferior displacement; the affine component had translocated the symphyseal landmarks towards the midline. The nonaffine component demonstrated bilateral ramus lengthening, and partial warps suggested that these elongations were slightly greater on the nontreated side. The affine component of the pre- and late postoperative comparison also demonstrated a clockwise rotation. The nonaffine component produced the bilateral ramus elongations, with the nontreated-side ramus lengthening slightly more than the treated side. It is concluded that an inverted-L osteotomy improves mandibular morphology significantly in CFM patients and permits continued bilateral ramus growth. Copyright 2000 Wiley-Liss, Inc.
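The Procrustes superimposition used above to average scaled landmark configurations removes translation, size, and rotation before shapes are compared; a minimal ordinary Procrustes alignment sketch (names and test shapes are illustrative):

```python
import numpy as np

def procrustes_align(ref, shape):
    """Superimpose `shape` onto `ref`: center, scale to unit centroid size, rotate."""
    A = ref - ref.mean(axis=0)          # remove translation
    B = shape - shape.mean(axis=0)
    A = A / np.linalg.norm(A)           # remove scale (unit centroid size)
    B = B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(B.T @ A)   # optimal rotation via SVD
    R = U @ Vt                          # minimizes ||A - B @ R|| over rotations
    return A, B @ R

rng = np.random.default_rng(0)
ref = rng.normal(size=(8, 2))                               # 8 landmarks in 2-D
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
shape = 2.5 * ref @ Q + np.array([3.0, -1.0])               # rotated, scaled, shifted copy
A, B_aligned = procrustes_align(ref, shape)
```

After superimposition of all specimens, landmark-wise averaging gives the consensus configuration on which the thin-plate spline grids are computed.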
Thin-plate spline (TPS) graphical analysis of the mandible on cephalometric radiographs.
Chang, H P; Liu, P H; Chang, H F; Chang, C H
2002-03-01
We describe two cases of Class III malocclusion with and without orthodontic treatment. A thin-plate spline (TPS) analysis of lateral cephalometric radiographs was used to visualize transformations of the mandible. The actual sites of mandibular skeletal change are not detectable with conventional cephalometric analysis. These case analyses indicate that specific patterns of mandibular transformation are associated with Class III malocclusion with or without orthopaedic therapy, and visualization of these deformations is feasible using TPS graphical analysis.
Explicit Gaussian quadrature rules for C^1 cubic splines with symmetrically stretched knot sequence
Ait-Haddou, Rachid
2015-06-19
We provide explicit expressions for quadrature rules on the space of C^1 cubic splines with non-uniform, symmetrically stretched knot sequences. The quadrature nodes and weights are derived via an explicit recursion that avoids the intervention of any numerical solver, and the rule is optimal, that is, it requires the minimal number of nodes. Numerical experiments validating the theoretical results and the error estimates of the quadrature rules are also presented.
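The optimal rules of the paper use fewer nodes than the obvious baseline: a standard 2-point Gauss-Legendre rule applied per knot interval, which is already exact for any piecewise cubic. A sketch of that baseline (the spline pieces here are arbitrary illustrative cubics, not taken from the paper):

```python
import numpy as np

# A piecewise-cubic function on knots [0, 1, 2]: one cubic per interval
# (coefficients highest power first, chosen arbitrarily for the demo).
pieces = {(0.0, 1.0): np.array([1.0, -2.0, 0.5, 3.0]),
          (1.0, 2.0): np.array([0.5, -0.5, -1.0, 3.5])}

nodes, weights = np.polynomial.legendre.leggauss(2)   # 2-point rule: exact to degree 3

total, exact = 0.0, 0.0
for (a, b), c in pieces.items():
    x = 0.5 * (b - a) * nodes + 0.5 * (a + b)         # map [-1, 1] -> [a, b]
    total += 0.5 * (b - a) * np.dot(weights, np.polyval(c, x))
    F = np.polyint(c)                                 # exact antiderivative
    exact += np.polyval(F, b) - np.polyval(F, a)
# total and exact agree up to rounding, since the rule is exact per cubic piece
```

This composite rule needs 2 nodes per interval; the point of the explicit rules above is to exploit the C^1 continuity across knots to get away with fewer.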
Energy Technology Data Exchange (ETDEWEB)
Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y. [Universiti Teknologi Malaysia, Johor Bahru (Malaysia); Pullepu, Babuji [S R M University, Chennai (India)
2015-05-15
Free vibration of symmetric angle-ply laminated truncated conical shells is analyzed to determine the effects of the frequency parameter and angular frequencies under different boundary conditions, ply angles, material properties and other parameters. The governing equations of motion for the truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines, resulting in a generalized eigenvalue problem. Parametric studies have been carried out and discussed.
International Nuclear Information System (INIS)
Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y.; Pullepu, Babuji
2015-01-01
Free vibration of symmetric angle-ply laminated truncated conical shells is analyzed to determine the effects of the frequency parameter and angular frequencies under different boundary conditions, ply angles, material properties and other parameters. The governing equations of motion for the truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines, resulting in a generalized eigenvalue problem. Parametric studies have been carried out and discussed.
Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.
Oliveira, Francisco P M; Tavares, João Manuel R S
2013-03-01
This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) that was acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, an outstanding accuracy was achieved with the cubic B-splines, significantly better than with the other models tested. When aligning real image sequences with unknown transformations involved, the alignment based on cubic B-splines also achieved superior results compared to our previous methodology. The effect of the alignment on the dynamic center of pressure (COP) displacement was also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences. These findings also show that the temporal alignment can increase the consistency of the COP displacement on related acquired plantar pressure image sequences.
Mathuriya, Amrita; Luo, Ye; Benali, Anouar; Shulenburger, Luke; Kim, Jeongnim
2016-01-01
B-spline based orbital representations are widely used in Quantum Monte Carlo (QMC) simulations of solids, historically taking as much as 50% of the total run time. Random accesses to a large four-dimensional array make it challenging to efficiently utilize caches and wide vector units of modern CPUs. We present node-level optimizations of B-spline evaluations on multi/many-core shared memory processors. To increase SIMD efficiency and bandwidth utilization, we first apply data layout transfo...
Ovsiannikov, Mikhail; Ovsiannikov, Sergei
2017-01-01
The paper presents a combined approach to noise mapping and visualization of industrial facilities' sound pollution using a forward ray tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for range computation of sanitary zones based on the ray tracing algorithm. Computation of sound pressure levels within the clustered zones is based on two-dimensional spline interpolation of data measured on the perimeter and inside each zone.
Wahba, G.
1982-01-01
Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.
DEFF Research Database (Denmark)
Troldborg, Niels; Sørensen, Niels N.; Réthoré, Pierre-Elouan
2015-01-01
This paper describes a consistent algorithm for eliminating the numerical wiggles appearing when solving the finite volume discretized Navier-Stokes equations with discrete body forces in a collocated grid arrangement. The proposed method is a modification of the Rhie-Chow algorithm where the for...
International Nuclear Information System (INIS)
Capilla, M.; Talavera, C.F.; Ginestar, D.; Verdú, G.
2012-01-01
Highlights: ► The multidimensional P_L approximation to the nuclear transport equation is reviewed. ► A nodal collocation method is developed for the spatial discretization of the P_L equations. ► Advantages of the method are its lower dimension and the good characteristics of the associated algebraic eigenvalue problem. ► The P_L nodal collocation method is implemented in the computer code SHNC. ► The SHNC code is verified with 2D and 3D benchmark eigenvalue problems from Takeda and Ikeda, giving satisfactory results. - Abstract: The P_L equations are classical approximations to the neutron transport equations, obtained by expanding the angular neutron flux in terms of spherical harmonics. These approximations are useful to study the behavior of reactor cores with complex fuel assemblies, for the homogenization of nuclear cross-sections, etc., and most of these applications are in three-dimensional (3D) geometries. In this work, we review the multi-dimensional P_L equations and describe a nodal collocation method for the spatial discretization of these equations for arbitrary odd order L, which is based on the expansion of the spatial dependence of the fields in terms of orthonormal Legendre polynomials. The performance of the nodal collocation method is studied by obtaining the k_eff and the stationary power distribution of several 3D benchmark problems. The solutions obtained are compared with a finite element method and a Monte Carlo method.
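The core operation of such a nodal expansion, projecting the spatial dependence of a field onto Legendre polynomials, can be sketched with a Gauss-Legendre projection on the reference interval (the function and order below are illustrative, not taken from the SHNC code):

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_coeffs(f, order):
    """Project f on [-1, 1] onto the Legendre polynomials P_0..P_order,
    using c_l = (2l + 1)/2 * integral of f * P_l (Gauss-Legendre quadrature)."""
    x, w = L.leggauss(order + 1)      # exact for polynomials up to degree 2*order + 1
    fx = f(x)
    return np.array([(2 * l + 1) / 2 * np.dot(w, fx * L.legval(x, [0] * l + [1]))
                     for l in range(order + 1)])

# A quadratic field: 3x^2 - x + 1 = 2*P_0 - P_1 + 2*P_2, so the projection is exact.
coeffs = legendre_coeffs(lambda x: 3 * x ** 2 - x + 1, 3)
```

In the nodal method each node carries a short vector of such coefficients per field, which is what keeps the dimension of the resulting algebraic eigenvalue problem low.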
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
Curvelet-domain multiple matching method combined with cubic B-spline function
Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming
2018-05-01
Since the large amount of surface-related multiples in marine data can seriously influence the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, a small number of unknowns are selected as the basis points of the matching coefficient; second, the cubic B-spline function is applied to these basis points to reconstruct the matching array; third, a constrained solving equation is built based on the relationships of the predicted multiples, the matching coefficients, and the actual data; finally, the BFGS algorithm is used to iterate and realize the fast solution of the sparsity-constrained multiple matching algorithm. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1 norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
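The basis-point idea above, representing a smooth matching coefficient by a few unknowns and reconstructing it with a spline before fitting by BFGS, can be sketched on a toy 1-D trace. This sketch uses scipy's `CubicSpline` (an interpolating cubic spline standing in for the paper's cubic B-spline reconstruction), and all signals and knot counts are illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 200)
true_gain = 1.0 + 0.5 * np.sin(2 * np.pi * t)     # unknown smooth amplitude error
multiple = np.cos(8 * np.pi * t)                  # predicted multiple model
data = true_gain * multiple                       # "recorded" multiples to match

knots = np.linspace(0.0, 1.0, 6)                  # few basis points -> few unknowns

def misfit(coeffs):
    gain = CubicSpline(knots, coeffs)(t)          # reconstruct matching curve from knots
    return np.sum((data - gain * multiple) ** 2)

res = minimize(misfit, np.ones(len(knots)), method="BFGS")
```

Optimizing 6 knot values instead of 200 samplewise coefficients is what shrinks the search space and speeds up the iterative solve; the paper additionally applies soft thresholding under an L1 constraint, which is omitted here.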
Motion characteristic between die and workpiece in spline rolling process with round dies
Directory of Open Access Journals (Sweden)
Da-Wei Zhang
2016-06-01
In the spline rolling process with round dies, additional kinematic compensation is an essential mechanism for improving the division of teeth and pitch accuracy as well as surface quality. The motion characteristics between the die and the workpiece under varied center distance in the spline rolling process were investigated. Mathematical models of the instantaneous center of rotation, the transmission ratio, and the centrodes in the rolling process were established. The models were used to analyze the rolling process of an involute spline with circular dedendum, and the results indicated that (1) with the reduction in the center distance, the instantaneous center moves toward the workpiece, and the transmission ratio increases at first and then decreases; (2) the variations in the instantaneous center and transmission ratio are discontinuous, presenting an interruption when the involute flank begins to be formed; (3) the change in transmission ratio at the forming stage of the workpiece with the involute flank is negligible; and (4) the centrode of the workpiece is an Archimedes line whose polar radius reduces, and the centrode of the rolling die is similar to an Archimedes line when the workpiece has the involute flank.
Alkhamrah, B; Terada, K; Yamaki, M; Ali, I M; Hanada, K
2001-01-01
A longitudinal retrospective study using thin-plate spline analysis was used to investigate skeletal Class III etiology in Japanese female adolescents. Headfilms of 40 subjects were chosen from the archives of the orthodontic department at Niigata University Dental Hospital, and were traced at IIIB and IVA Hellman dental ages. Twenty-eight homologous landmarks, representing hard and soft tissue, were digitized. These were used to reproduce a consensus for the profilogram, craniomaxillary complex, mandible, and soft tissue for each age and skeletal group. Generalized least-squares analysis revealed a significant shape difference between age-matched groups. The total spline and partial warps (PW) 3 and 2 showed a maxillary retrusion at stage IIIB opposite an acute cranial base at stage IVA. The mandibular total spline and PW 4 and 5 showed changes affecting most landmarks and their spatial interrelationship, especially a stretch along the articulare-pogonion axis. In the soft tissue analysis, PW 8 showed large and local changes which paralleled the underlying hard tissue components. Allometry of the mandible and anisotropy of the cranial base, the maxilla, and the mandible asserted the complexity of craniofacial growth and the difficulty of predicting its outcome.
Landmark-based elastic registration using approximating thin-plate splines.
Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H
2001-06-01
We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows landmark localization errors to be taken into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
Study on signal processing in Eddy current testing for defects in spline gear
International Nuclear Information System (INIS)
Lee, Jae Ho; Park, Tae Sug; Park, Ik Keun
2016-01-01
Eddy current testing (ECT) is commonly applied for the inspection of automated production lines of metallic products, because it has a high inspection speed and a reasonable price. When ECT is applied to the inspection of a metallic object having an uneven target surface, such as the spline gear of a spline shaft, it is difficult to distinguish between the original signal obtained from the sensor and the signal generated by a defect, because of the relatively large surface signals having similar frequency distributions. To facilitate the detection of defect signals from the spline gear, the implementation of high-order filters is essential, so that the fault signals can be distinguished from the surrounding noise signals and, simultaneously, the pass-band of the filter can be adjusted according to the status of each production line and the object to be inspected. We examine the infinite impulse response (IIR) filters available for implementing an advanced filter for ECT, and attempt to detect the flaw signals through optimization of system design parameters for detecting the signals at the system level.
On developing B-spline registration algorithms for multi-core processors
International Nuclear Information System (INIS)
Shackleford, J A; Kandasamy, N; Sharp, G C
2010-01-01
Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.
Stable isogeometric analysis of trimmed geometries
Marussig, Benjamin; Zechner, Jürgen; Beer, Gernot; Fries, Thomas-Peter
2017-04-01
We explore extended B-splines as a stable basis for isogeometric analysis with trimmed parameter spaces. The stabilization is accomplished by an appropriate substitution of B-splines that may lead to ill-conditioned system matrices. The construction for non-uniform knot vectors is presented. The properties of extended B-splines are examined in the context of interpolation, potential, and linear elasticity problems and excellent results are attained. The analysis is performed by an isogeometric boundary element formulation using collocation. It is argued that extended B-splines provide a flexible and simple stabilization scheme which ideally suits the isogeometric paradigm.
Lux, C J; Rübel, J; Starke, J; Conradt, C; Stellzig, P A; Komposch, P G
2001-04-01
The aim of the present longitudinal cephalometric study was to evaluate the dentofacial shape changes induced by activator treatment between 9.5 and 11.5 years in male Class II patients. For a rigorous morphometric analysis, a thin-plate spline analysis was performed to assess and visualize dental and skeletal craniofacial changes. Twenty male patients with a skeletal Class II malrelationship and increased overjet who had been treated at the University of Heidelberg with a modified Andresen-Häupl-type activator were compared with a control group of 15 untreated male subjects of the Belfast Growth Study. The shape changes for each group were visualized on thin-plate splines, with one spline comprising all 13 landmarks to show all the craniofacial shape changes, including skeletal and dento-alveolar reactions, and a second spline based on 7 landmarks to visualize only the skeletal changes. In the activator group, the grid deformation of the total spline pointed to a strong activator-induced reduction of the overjet that was caused both by a tipping of the incisors and by a moderation of sagittal discrepancies, particularly a slight advancement of the mandible. In contrast, in the control group, only slight localized shape changes could be detected. Both in the 7- and 13-landmark configurations, the shape changes between the groups differed significantly. Thin-plate spline analysis turned out to be a useful morphometric supplement to conventional cephalometrics because the complex patterns of shape change could be suggestively visualized.
International Nuclear Information System (INIS)
Sankaran, Sethuraman; Audet, Charles; Marsden, Alison L.
2010-01-01
Recent advances in coupling novel optimization methods to large-scale computing problems have opened the door to tackling a diverse set of physically realistic engineering design problems. A large computational overhead is associated with computing the cost function for most practical problems involving complex physical phenomena. Such problems are also plagued with uncertainties in a diverse set of parameters. We present a novel stochastic derivative-free optimization approach for tackling such problems. Our method extends the previously developed surrogate management framework (SMF) to allow for uncertainties in both simulation parameters and design variables. The stochastic collocation scheme is employed for stochastic variables whereas Kriging based surrogate functions are employed for the cost function. This approach is tested on four numerical optimization problems and is shown to have significant improvement in efficiency over traditional Monte-Carlo schemes. Problems with multiple probabilistic constraints are also discussed.
Directory of Open Access Journals (Sweden)
Timothy J. Dickey
2008-03-01
Full Text Available The hierarchical system of the Functional Requirements for Bibliographic Records (FRBR) defines families of bibliographic relationships between records and collocates them better than most extant bibliographic systems. Certain library materials (especially audio-visual formats) pose notable challenges to search and retrieval; the first benefits of a FRBRized system would be felt in music libraries, but research already has proven its advantages for fine arts, theology, and literature—the bulk of the non-science, technology, and mathematics collections. This report will summarize the benefits of FRBR to next-generation library catalogs and OPACs, and will review the handful of ILS and catalog systems currently operating with its theoretical structure.
DEFF Research Database (Denmark)
Kolmogorov, Dmitry
turbine computations, collocated grid-based SIMPLE-like algorithms are developed for computations on block-structured grids with nonconformal interfaces. A technique to enhance both the convergence speed and the solution accuracy of the SIMPLE-like algorithms is presented. The erroneous behavior, which … versions of the SIMPLE algorithm. The new technique is implemented in an existing conservative 2nd-order finite-volume flow solver (EllipSys), which is extended to cope with grids with nonconformal interfaces. The behavior of the discrete Navier-Stokes equations is discussed in detail … Block LU relaxation scheme is shown to possess several optimal conditions, which enables the high efficiency of the multigrid solver to be preserved on both conformal and nonconformal grids. The developments are done using a parallel MPI algorithm, which can handle multiple numbers of interfaces with multiple …
An embedded formula of the Chebyshev collocation method for stiff problems
Piao, Xiangfan; Bu, Sunyoung; Kim, Dojin; Kim, Philsu
2017-12-01
In this study, we have developed an embedded formula of the Chebyshev collocation method for stiff problems, based on the zeros of the generalized Chebyshev polynomials. A new strategy for the embedded formula, using a pair of methods to estimate the local truncation error, as performed in traditional embedded Runge-Kutta schemes, is proposed. The method is performed in such a way that not only the stability region of the embedded formula can be widened, but by allowing the usage of larger time step sizes, the total computational costs can also be reduced. In terms of concrete convergence and stability analysis, the constructed algorithm turns out to have an 8th order convergence and it exhibits A-stability. Through several numerical experimental results, we have demonstrated that the proposed method is numerically more efficient, compared to several existing implicit methods.
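The core ingredient of any Chebyshev collocation scheme is the differentiation matrix on the Chebyshev-Gauss-Lobatto points. Below is the standard construction (following Trefethen's well-known recipe, not the embedded pair of the paper, which uses zeros of generalized Chebyshev polynomials):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # negative-sum trick for diagonal
    return D, x

D, x = cheb(16)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
```

Spectral accuracy is already visible here: differentiating sin(x) with N = 16 gives an error near machine precision, and implicit time-stepping of stiff systems builds on the same matrix.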
Directory of Open Access Journals (Sweden)
S. S. Motsa
2014-01-01
Full Text Available This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs. The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature.
Collocated electrodynamic FDTD schemes using overlapping Yee grids and higher-order Hodge duals
Deimert, C.; Potter, M. E.; Okoniewski, M.
2016-12-01
The collocated Lebedev grid has previously been proposed as an alternative to the Yee grid for electromagnetic finite-difference time-domain (FDTD) simulations. While it performs better in anisotropic media, it performs poorly in isotropic media because it is equivalent to four overlapping, uncoupled Yee grids. We propose to couple the four Yee grids and fix the Lebedev method using discrete exterior calculus (DEC) with higher-order Hodge duals. We find that higher-order Hodge duals do improve the performance of the Lebedev grid, but they also improve the Yee grid by a similar amount. The effectiveness of coupling overlapping Yee grids with a higher-order Hodge dual is thus questionable. However, the theoretical foundations developed to derive these methods may be of interest in other problems.
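For reference, the Yee scheme of which the Lebedev grid overlaps four copies reduces, in one dimension, to a few lines of staggered leapfrog updates. This normalized sketch uses a unit Courant number and hypothetical grid sizes:

```python
import numpy as np

nz, nt, src = 200, 100, 100
ez = np.zeros(nz)        # E field at integer grid points
hy = np.zeros(nz - 1)    # H field staggered half a cell between them
S = 1.0                  # Courant number c*dt/dz (normalized units)
for n in range(nt):
    hy += S * (ez[1:] - ez[:-1])               # H leapfrog half-step
    ez[1:-1] += S * (hy[1:] - hy[:-1])         # E leapfrog half-step
    ez[src] += np.exp(-((n - 30) / 8.0) ** 2)  # soft Gaussian source
```

In 1D at S = 1 the scheme is dispersion-free; the Lebedev construction stacks four such Yee lattices, shifted by half-cells, so that all field components become collocated.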
Between initial familiarity and future use – a case of Collocated Collaborative Writing
DEFF Research Database (Denmark)
Bødker, Susanne; Polli, Anna Maria
2014-01-01
This paper reports on a design experiment in an art gallery, where we explored visitor practices of commenting on art, and how they were shaped in interaction with a newly designed collocated, collaborative writing technology. In particular we investigate what potentials previous practices carry with them that may affect early use and further development of use. We base our analyses on interviews in the art gallery and on socio-cultural theories of artefact-mediated learning and collaboration. The analyses help identify three forms of collaborative writing, which are placed in the space between … these with the three above forms of practice. The initial familiarity leads to two different early practices that get in the way of each other, and the collaborative writing idea. They point instead towards a discursive sharing of individual feelings, a different kind of past experiences than anticipated in design.
A nodal collocation approximation for the multi-dimensional PL equations - 2D applications
International Nuclear Information System (INIS)
Capilla, M.; Talavera, C.F.; Ginestar, D.; Verdu, G.
2008-01-01
A classical approach to solve the neutron transport equation is to apply the spherical harmonics method, obtaining a finite approximation known as the PL equations. In this work, the derivation of the PL equations for multi-dimensional geometries is reviewed and a nodal collocation method is developed to discretize these equations on a rectangular mesh, based on the expansion of the neutronic fluxes in terms of orthogonal Legendre polynomials. The performance of the method and the dominant transport Lambda Modes are obtained for a homogeneous 2D problem, a heterogeneous 2D anisotropic scattering problem, a heterogeneous 2D problem, and a benchmark problem corresponding to a MOX fuel reactor core.
Adaptive collocation method for simultaneous heat and mass diffusion with phase change
International Nuclear Information System (INIS)
Chawla, T.C.; Leaf, G.; Minkowycz, W.J.; Pedersen, D.R.; Shouman, A.R.
1983-01-01
The present study is carried out to determine the melting rates of lead slabs of various thicknesses in contact with sodium coolant, and to evaluate the extent of penetration and the mixing rates of molten lead into liquid sodium by molecular diffusion alone. The study shows that these two calculations cannot be performed simultaneously without the use of adaptive coordinates, which cause considerable stretching of the physical coordinates for mass diffusion. Because of the large difference in densities of these two liquid metals, the traditional constant-density approximation for the calculation of mass diffusion cannot be used for studying their interdiffusion. The use of the orthogonal collocation method along with adaptive coordinates produces extremely accurate results, which are ascertained by comparison with the existing analytical solutions for the concentration distribution in the case of the constant-density approximation and for melting rates in the case of an infinite lead slab.
International Nuclear Information System (INIS)
Fortini, Maria A.; Stamoulis, Michel N.; Ferreira, Angela F.M.; Pereira, Claubia; Costa, Antonella L.; Silva, Clarysson A.M.
2008-01-01
In this work, an analytical model for the determination of the temperature distribution in cylindrical heater components with characteristics of nuclear fuel rods is presented. The heat conductor is characterized by an arbitrary number of solid walls and different types of materials, whose thermal properties are taken as functions of temperature. The fundamental heat conduction equation is solved numerically with the method of weighted residuals (MWR) using an orthogonal collocation technique. The results obtained with the proposed method are compared with experimental data from tests performed in the TRIGA IPR-R1 research reactor located at CDTN/CNEN (Centro de Desenvolvimento da Tecnologia Nuclear/Comissao Nacional de Energia Nuclear) in Belo Horizonte, Brazil.
International Nuclear Information System (INIS)
Kim, Sang-Myeong; Kim, Heungseob; Boo, Kwangsuck; Brennan, Michael J
2013-01-01
This paper describes an experimental study into the vibration control of a servo system comprising a servo motor and a flexible manipulator. Two modes of the system are controlled by using the servo motor and an accelerometer attached to the tip of the flexible manipulator. The control system is thus non-collocated. It consists of two electrical dynamic absorbers, each of which consists of a modal filter and, in case of an out-of-phase mode, a phase inverter. The experimental results show that each absorber acts as a mechanical dynamic vibration absorber attached to each mode and significantly reduces the settling time for the system response to a step input. (technical note)
A fast collocation method for a variable-coefficient nonlocal diffusion model
Wang, Che; Wang, Hong
2017-02-01
We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N^3) required by a commonly used direct solver to O(N log N) per iteration, and the memory requirement from O(N^2) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N^2) to O(N). Numerical results are presented to show the utility of the fast method.
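The O(N log N) count rests on matrix structure: for a constant coefficient, the nonlocal operator discretizes to a Toeplitz matrix, which can be applied by embedding it in a circulant of twice the size and using the FFT. A generic sketch of that fast matrix-vector product (not the paper's variable-coefficient decomposition):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix T (first column c, first row r, c[0] == r[0])
    by a vector x in O(N log N) via circulant embedding and the FFT."""
    n = len(x)
    # first column of the 2N circulant: [c, 0, r[n-1], ..., r[1]]
    col = np.concatenate([c, [0.0], r[:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
    return y[:n].real
```

In the variable-coefficient setting the abstract describes, the coefficients are factored out of the singular integral so that each matvec still reduces to a handful of FFTs of this kind.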
Energy Technology Data Exchange (ETDEWEB)
Alwan, Aravind; Aluru, N.R.
2013-12-15
This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.
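Stochastic collocation itself, the propagation step in such a framework, amounts to evaluating the response function at quadrature nodes of the input density and combining the results with the quadrature weights. A minimal sketch for a single standard-Gaussian input using Gauss-Hermite nodes (the paper's KMM density estimation is not reproduced here):

```python
import numpy as np

def collocation_moments(f, m):
    """Mean and variance of f(X), X ~ N(0, 1), via m-point Gauss-Hermite collocation."""
    t, w = np.polynomial.hermite.hermgauss(m)   # nodes/weights for weight exp(-x^2)
    vals = f(np.sqrt(2.0) * t)                  # change of variable x = sqrt(2) t
    mean = np.sum(w * vals) / np.sqrt(np.pi)
    second = np.sum(w * vals ** 2) / np.sqrt(np.pi)
    return mean, second - mean ** 2
```

These output moments (mean and variance of the response) are exactly the statistics the estimated input PDF must reproduce accurately in the framework described above.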
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be useful both to students of mathematics and to those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understanding.
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real-world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exercises.
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Chen, Sheng; Hong, Xia; Khalaf, Emad F; Alsaadi, Fuad E; Harris, Chris J
2017-12-01
The complex-valued (CV) B-spline neural network approach offers a highly effective means for identifying and inverting practical Hammerstein systems. Compared with its conventional CV polynomial-based counterpart, a CV B-spline neural network has superior performance in identifying and inverting CV Hammerstein systems, while imposing a similar complexity. This paper reviews the optimality of the CV B-spline neural network approach. Advantages of the B-spline neural network approach over polynomial-based modeling are extensively discussed, and the effectiveness of the CV neural-network-based approach is demonstrated in a real-world application. More specifically, we evaluate the comparative performance of the CV B-spline and polynomial-based approaches for the nonlinear iterative frequency-domain decision feedback equalization (NIFDDFE) of single-carrier Hammerstein channels. Our results confirm the superior performance of the CV B-spline-based NIFDDFE over its CV polynomial-based counterpart.
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response transformations.
International Nuclear Information System (INIS)
Alcaraz, J.
2001-01-01
After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
An Analysis of Peak Wind Speed Data from Collocated Mechanical and Ultrasonic Anemometers
Short, David A.; Wells, Leonard; Merceret, Francis J.; Roeder, William P.
2007-01-01
This study compared peak wind speeds reported by mechanical and ultrasonic anemometers at Cape Canaveral Air Force Station and Kennedy Space Center (CCAFS/KSC) on the east central coast of Florida and Vandenberg Air Force Base (VAFB) on the central coast of California. Launch Weather Officers, forecasters, and Range Safety analysts need to understand the performance of wind sensors at CCAFS/KSC and VAFB for weather warnings, watches, advisories, special ground processing operations, launch pad exposure forecasts, user Launch Commit Criteria (LCC) forecasts and evaluations, and toxic dispersion support. The legacy CCAFS/KSC and VAFB weather tower wind instruments are being changed from propeller-and-vane (CCAFS/KSC) and cup-and-vane (VAFB) sensors to ultrasonic sensors under the Range Standardization and Automation (RSA) program. Mechanical and ultrasonic wind measuring techniques are known to cause differences in the statistics of peak wind speed as shown in previous studies. The 45th Weather Squadron (45 WS) and the 30th Weather Squadron (30 WS) requested the Applied Meteorology Unit (AMU) to compare data between the RSA ultrasonic and legacy mechanical sensors to determine if there are significant differences. Note that the instruments were sited outdoors under naturally varying conditions and that this comparison was not designed to verify either technology. Approximately 3 weeks of mechanical and ultrasonic wind data from each range from May and June 2005 were used in this study. The CCAFS/KSC data spanned the full diurnal cycle, while the VAFB data were confined to 1000-1600 local time. The sample of 1-minute data from numerous levels on five different towers on each range totaled more than 500,000 minutes of data (482,979 minutes of data after quality control). The ten towers were instrumented at several levels, ranging from 12 ft to 492 ft above ground level. The ultrasonic sensors were collocated at the same vertical levels as the mechanical sensors and
Interpolating Spline Curve-Based Perceptual Encryption for 3D Printing Models
Directory of Open Access Journals (Sweden)
Giao N. Pham
2018-02-01
Full Text Available With the development of 3D printing technology, 3D printing has recently been applied to many areas of life, including healthcare and the automotive industry. Due to the benefit of 3D printing, 3D printing models are often attacked by hackers and distributed without agreement from the original providers. Furthermore, certain special models and anti-weapon models in 3D printing must be protected against unauthorized users. Therefore, in order to prevent attacks and illegal copying and to ensure that all access is authorized, 3D printing models should be encrypted before being transmitted and stored. A novel perceptual encryption algorithm for 3D printing models for secure storage and transmission is presented in this paper. A facet of the 3D printing model is extracted to interpolate a spline curve of degree 2 in three-dimensional space that is determined by three control points, the curvature coefficients of degree 2, and an interpolating vector. The three control points, the curvature coefficients, and the interpolating vector of the degree-2 spline curve are encrypted by a secret key. The encrypted features of the spline curve are then used to obtain the encrypted 3D printing model by inverse interpolation and geometric distortion. The results of experiments and evaluations prove that the entire 3D triangle model is altered and deformed after the perceptual encryption process. The proposed algorithm is responsive to the various formats of 3D printing models, and its results are superior to those of previous methods, providing both a better method and more security.
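The degree-2 interpolating curve at the heart of such a scheme can be illustrated with plain Lagrange interpolation through three 3D points; the parameter values 0, 1/2, 1 below are an assumption for illustration, not the paper's construction:

```python
def quad_interp(p0, p1, p2, t):
    """Point on the quadratic Lagrange curve through p0 (t=0), p1 (t=0.5), p2 (t=1)."""
    b0 = 2.0 * (t - 0.5) * (t - 1.0)   # Lagrange basis polynomials at nodes 0, 0.5, 1
    b1 = -4.0 * t * (t - 1.0)
    b2 = 2.0 * t * (t - 0.5)
    return tuple(b0 * a + b1 * b + b2 * c for a, b, c in zip(p0, p1, p2))
```

Encrypting the three control points then deforms every curve point consistently, which is why perturbing a handful of spline features distorts the whole facet.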
Data assimilation using Bayesian filters and B-spline geological models
Duan, Lian
2011-04-01
This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.
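Parameterizing a facies boundary by B-spline control points means the curve itself is recovered by de Boor's algorithm; moving a control point (the continuous quantity the ensemble filter updates) deforms the edge smoothly. A generic in-place de Boor evaluation, not tied to the paper's reservoir code:

```python
def de_boor(k, t, knots, ctrl, p):
    """Evaluate a degree-p B-spline at parameter t.

    k is the knot span index with knots[k] <= t < knots[k+1];
    ctrl are the control points (sequences of coordinates).
    """
    d = [list(ctrl[j + k - p]) for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):          # descending j keeps level r-1 values intact
            i = j + k - p
            alpha = (t - knots[i]) / (knots[i + p - r + 1] - knots[i])
            d[j] = [(1 - alpha) * a + alpha * b for a, b in zip(d[j - 1], d[j])]
    return d[p]
```

With a clamped knot vector the curve interpolates its end control points, so the filter can pin the ends of a channel while adjusting its interior shape.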
Thin-plate spline analysis of the cranial base in subjects with Class III malocclusion.
Singh, G D; McNamara, J A; Lozanoff, S
1997-08-01
The role of the cranial base in the emergence of Class III malocclusion is not fully understood. This study determines deformations that contribute to a Class III cranial base morphology, employing thin-plate spline analysis of lateral cephalographs. A total of 73 children of European-American descent aged between 5 and 11 years with Class III malocclusion were compared with an equivalent group of subjects with a normal, untreated, Class I molar occlusion. The cephalographs were traced, checked and subdivided into seven age- and sex-matched groups. Thirteen points on the cranial base were identified and digitized. The datasets were scaled to an equivalent size, and statistical analysis indicated significant differences between average Class I and Class III cranial base morphologies for each group. Thin-plate spline analysis indicated that both affine (uniform) and non-affine transformations contribute toward the total spline for each average cranial base morphology at each age group analysed. For non-affine transformations, partial warps 10, 8 and 7 had high magnitudes, indicating large-scale deformations affecting Bolton point, basion, pterygo-maxillare, Ricketts' point and articulare. In contrast, high eigenvalues associated with partial warps 1-3, indicating localized shape changes, were found at tuberculum sellae, sella, and the frontonasomaxillary suture. It is concluded that large spatial-scale deformations affect the occipital complex of the cranial base and sphenoidal region, in combination with localized distortions at the frontonasal suture. These deformations may contribute to reduced orthocephalization or deficient flattening of the cranial base antero-posteriorly that, in turn, leads to the formation of a Class III malocclusion.
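The thin-plate spline machinery underlying such warp analysis can be sketched directly: fitting a 2-D TPS that maps one landmark set onto another amounts to solving a small linear system built from the radial kernel U(r) = r^2 log r. This is a generic TPS fit, not the partial-warp decomposition used in the study; the landmark coordinates are illustrative.

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2-D thin-plate spline mapping src landmarks onto dst.
    Solves [[K P],[P^T 0]] [w; a] = [dst; 0] for radial weights w
    and affine coefficients a, with kernel U(r) = r^2 log r."""
    n = len(src)
    d2 = np.sum((src[:, None] - src[None, :]) ** 2, axis=-1)
    # r^2 log r = 0.5 * d2 * log(d2); define U(0) = 0
    K = np.where(d2 > 0, 0.5 * d2 * np.log(np.maximum(d2, 1e-300)), 0.0)
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]

def tps_apply(pts, src, w, a):
    """Evaluate the fitted spline at arbitrary points."""
    d2 = np.sum((pts[:, None] - src[None, :]) ** 2, axis=-1)
    U = np.where(d2 > 0, 0.5 * d2 * np.log(np.maximum(d2, 1e-300)), 0.0)
    return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src * np.array([1.2, 0.8]) + np.array([0.1, -0.2])
w, a = tps_fit(src, dst)
mapped = tps_apply(src, src, w, a)
```

The affine block a captures the uniform component; the radial weights w carry the non-affine (partial-warp) deformation.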
Data assimilation using Bayesian filters and B-spline geological models
International Nuclear Information System (INIS)
Duan Lian; Farmer, Chris; Hoteit, Ibrahim; Luo Xiaodong; Moroz, Irene
2011-01-01
This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient-based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporating level set parameterization in this approach could further address the lack of differentiability with respect to facies type, but its practical implementation is based on assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type into the estimation of continuous B-spline control points. As the filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel, or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.
Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L
1997-04-01
This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing and minimal user input, and easily implements either affine (i.e., linear) or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated, including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT, and rotate-translate mapping of abdominal SPECT/CT. A five-point TPS-warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly, the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.
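The similarity criterion that such registrations iteratively maximize can be sketched in a few lines: mutual information computed from the joint gray-level histogram of two images. This is a minimal plug-in estimator for illustration, not the authors' implementation; the bin count and test images are assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two equally sized images,
    estimated from their joint gray-level histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
independent = rng.normal(size=(64, 64))
mi_self = mutual_information(img, img)           # perfectly registered
mi_ind = mutual_information(img, independent)    # unrelated images
```

A registration loop would perturb the transform parameters and keep changes that increase this score.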
A volume of fluid method based on multidimensional advection and spline interface reconstruction
International Nuclear Information System (INIS)
Lopez, J.; Hernandez, J.; Gomez, P.; Faura, F.
2004-01-01
A new volume of fluid method for tracking two-dimensional interfaces is presented. The method involves a multidimensional advection algorithm based on the use of edge-matched flux polygons to integrate the volume fraction evolution equation, and a spline-based reconstruction algorithm. The accuracy and efficiency of the proposed method are analyzed using different tests, and the results are compared with those obtained recently by other authors. Despite its simplicity, the proposed method represents a significant improvement, and compares favorably with other volume of fluid methods as regards the accuracy and efficiency of both the advection and reconstruction steps.
Chen, T; Besio, W; Dai, W
2009-01-01
A comparison of the performance of the tripolar and bipolar concentric, as well as spline, Laplacian electrocardiograms (LECGs) and body surface Laplacian mappings (BSLMs) for localizing and imaging cardiac electrical activation has been carried out based on computer simulation. In the simulation, a simplified eccentric heart-torso sphere-cylinder homogeneous volume conductor model was developed. Multiple dipoles with different orientations were used to simulate the underlying cardiac electrical activity. Results show that the tripolar concentric ring electrodes produce the most accurate LECG and BSLM estimates of the three estimators, with the best spatial resolution.
Gaussian quadrature rules for C^1 quintic splines with uniform knot vectors
Barton, Michael; Ait-Haddou, Rachid; Calo, Victor Manuel
2017-01-01
We provide explicit quadrature rules for spaces of C^1 quintic splines with uniform knot sequences over finite domains. The quadrature nodes and weights are derived via an explicit recursion that avoids numerical solvers. Each rule is optimal, that is, requires the minimal number of nodes for a given function space. For each of the n subintervals, generically, only two nodes are required, which reduces the evaluation cost by 2/3 when compared to the classical Gaussian quadrature for polynomials over each knot span. Numerical experiments show fast convergence, as n grows, to the “two-third” quadrature rule of Hughes et al. (2010) for infinite domains.
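For comparison, the classical per-span Gauss-Legendre baseline that the spline-tailored two-node rules improve on can be sketched directly; three nodes per knot span integrate any quintic exactly. The explicit recursion for the optimal spline rules is in the paper and is not reproduced here.

```python
import numpy as np

def gauss_per_span(f, knots, npts=3):
    """Composite Gauss-Legendre quadrature over each knot span.
    With npts=3, any polynomial of degree <= 5 (hence any quintic
    spline) is integrated exactly span by span; the spline-tailored
    rules achieve the same with only two nodes per span."""
    x, w = np.polynomial.legendre.leggauss(npts)   # nodes/weights on [-1, 1]
    total = 0.0
    for a, b in zip(knots[:-1], knots[1:]):
        # affine map of the reference nodes onto the span [a, b]
        total += (b - a) / 2.0 * np.dot(w, f((b - a) / 2.0 * x + (a + b) / 2.0))
    return total

# integrate x^5 over [0, 2] with two knot spans; exact value is 2^6 / 6
val = gauss_per_span(lambda x: x ** 5, np.array([0.0, 1.0, 2.0]))
```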
Gaussian quadrature rules for C^1 quintic splines with uniform knot vectors
Bartoň, Michael
2017-03-21
We provide explicit quadrature rules for spaces of C^1 quintic splines with uniform knot sequences over finite domains. The quadrature nodes and weights are derived via an explicit recursion that avoids numerical solvers. Each rule is optimal, that is, requires the minimal number of nodes for a given function space. For each of the n subintervals, generically, only two nodes are required, which reduces the evaluation cost by 2/3 when compared to the classical Gaussian quadrature for polynomials over each knot span. Numerical experiments show fast convergence, as n grows, to the “two-third” quadrature rule of Hughes et al. (2010) for infinite domains.
Pseudo-cubic thin-plate type Spline method for analyzing experimental data
Energy Technology Data Exchange (ETDEWEB)
Crecy, F de
1994-12-31
A mathematical tool, using pseudo-cubic thin-plate-type splines, has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with associated uncertainties, usable at any point of the multidimensional parameter space. The smoothing parameter is determined by a generalized cross-validation method. The residual standard deviation obtained is significantly smaller than that of a least-squares regression. An example of use is given for critical heat flux data, showing a significant decrease of the design criterion (the minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs.
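The generalized cross-validation step can be sketched for a generic penalized least-squares smoother: GCV(lam) = n ||(I - H)y||^2 / tr(I - H)^2, where H is the smoother ("hat") matrix at smoothing parameter lam. The plain polynomial basis below is chosen for illustration, not the pseudo-cubic thin-plate splines of the paper.

```python
import numpy as np

def gcv_score(B, y, lam):
    """Generalized cross-validation score for the ridge-type smoother
    with design matrix B and smoothing parameter lam."""
    n = len(y)
    H = B @ np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T)
    resid = y - H @ y
    return n * float(resid @ resid) / (n - np.trace(H)) ** 2

x = np.linspace(0.0, 1.0, 40)
rng = np.random.default_rng(7)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)
B = np.vander(x, 8)                       # illustrative polynomial basis
grid = [10.0 ** k for k in range(-8, 1)]  # candidate smoothing parameters
best = min(grid, key=lambda lam: gcv_score(B, y, lam))
```

Minimizing the GCV score over lam automates the bias-variance trade-off without a held-out data set.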
Registration of segmented histological images using thin plate splines and belief propagation
Kybic, Jan
2014-03-01
We register images based on their multiclass segmentations, for cases when correspondence of local features cannot be established. A discrete mutual information is used as a similarity criterion. It is evaluated at a sparse set of location on the interfaces between classes. A thin-plate spline regularization is approximated by pairwise interactions. The problem is cast into a discrete setting and solved efficiently by belief propagation. Further speedup and robustness is provided by a multiresolution framework. Preliminary experiments suggest that our method can provide similar registration quality to standard methods at a fraction of the computational cost.
Pseudo-cubic thin-plate type Spline method for analyzing experimental data
International Nuclear Information System (INIS)
Crecy, F. de.
1993-01-01
A mathematical tool, using pseudo-cubic thin-plate-type splines, has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with associated uncertainties, usable at any point of the multidimensional parameter space. The smoothing parameter is determined by a generalized cross-validation method. The residual standard deviation obtained is significantly smaller than that of a least-squares regression. An example of use is given for critical heat flux data, showing a significant decrease of the design criterion (the minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs.
Tikhonov regularization method for the numerical inversion of Mellin transforms using splines
International Nuclear Information System (INIS)
Iqbal, M.
2005-01-01
Numerical inversion of the Mellin transform is an ill-posed problem. Such problems arise in many branches of science and engineering. In the typical situation one is interested in recovering the original function from a finite number of noisy measurements of data. In this paper, we convert the Mellin transform to a Laplace transform and then to an integral equation of the first kind of convolution type. We solve the integral equation using Tikhonov regularization with splines as basis functions. The method is applied to various test examples from the literature, and the results are shown in the table.
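The Tikhonov step can be sketched on a discretized first-kind convolution equation: the regularized solution of A f = b minimizes ||A f - b||^2 + alpha ||f||^2. The smoothing kernel, noise term, and alpha below are illustrative assumptions, not the Mellin-derived kernel of the paper.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Tikhonov-regularized solution of the ill-posed system A f = b:
    argmin ||A f - b||^2 + alpha ||f||^2 = (A^T A + alpha I)^{-1} A^T b."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)

# discretized first-kind convolution with a narrow Gaussian kernel
x = np.linspace(0.0, 1.0, 60)
h = x[1] - x[0]
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01) * h
f_true = np.sin(np.pi * x)
b_noisy = K @ f_true + 1e-5 * np.sin(37.0 * x)   # small synthetic "noise"
f_rec = tikhonov_solve(K, b_noisy, alpha=1e-8)
```

Without the alpha term the normal equations amplify the noise; the regularizer trades a small bias for stability.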
The high-level error bound for shifted surface spline interpolation
Luh, Lin-Tian
2006-01-01
Radial function interpolation of scattered data is a frequently used method for multivariate data fitting. One of the most frequently used radial functions is the shifted surface spline, introduced by Dyn, Levin and Rippa in \\cite{Dy1} for $R^{2}$ and later extended to $R^{n}$ for $n\\geq 1$. Many articles have studied its properties, as can be seen in \\cite{Bu,Du,Dy2,Po,Ri,Yo1,Yo2,Yo3,Yo4}. When dealing with this function, the most commonly used error bounds are the ones raised by Wu and S...
A Novel Approach of Cardiac Segmentation In CT Image Based On Spline Interpolation
International Nuclear Information System (INIS)
Gao Yuan; Ma Pengcheng
2011-01-01
Organ segmentation in CT images is the basis of organ model reconstruction, so precisely detecting and extracting the organ boundary is key for reconstruction. In CT images the heart is often adjacent to the surrounding tissues, and the gray-level gradient between them is slight, which makes classical segmentation methods difficult to apply. In this paper we propose a novel algorithm for cardiac segmentation in CT images that combines gray-gradient methods with B-spline interpolation. The algorithm detects the cardiac boundaries well and, because the processing is automatic, remains fast.
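One simple way to turn a ragged detected boundary into a smooth closed contour, in the spirit of the B-spline step above, is Chaikin corner cutting, whose limit curve is exactly the uniform quadratic B-spline of the control polygon. This is a generic sketch, not the authors' algorithm.

```python
import numpy as np

def chaikin(points, iterations=3):
    """Chaikin corner cutting on a closed polygon. Each edge is replaced
    by two points at 1/4 and 3/4 along it; the limit curve is the uniform
    quadratic B-spline defined by the original control polygon."""
    pts = np.asarray(points, float)
    for _ in range(iterations):
        nxt = np.roll(pts, -1, axis=0)        # wrap around: closed contour
        q = 0.75 * pts + 0.25 * nxt
        r = 0.25 * pts + 0.75 * nxt
        pts = np.empty((2 * len(pts), 2))
        pts[0::2], pts[1::2] = q, r
    return pts

# smooth a crude 4-point "boundary" (unit square) into a rounded contour
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
smooth = chaikin(square, 3)
```

Every refined point is a convex combination of the originals, so the smoothed contour never leaves the hull of the detected boundary points.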
Complex wavenumber Fourier analysis of the B-spline based finite element method
Czech Academy of Sciences Publication Activity Database
Kolman, Radek; Plešek, Jiří; Okrouhlík, Miloslav
2014-01-01
Roč. 51, č. 2 (2014), s. 348-359 ISSN 0165-2125 R&D Projects: GA ČR(CZ) GAP101/11/0288; GA ČR(CZ) GAP101/12/2315; GA ČR GPP101/10/P376; GA ČR GA101/09/1630 Institutional support: RVO:61388998 Keywords : elastic wave propagation * dispersion errors * B-spline * finite element method * isogeometric analysis Subject RIV: JR - Other Machinery Impact factor: 1.513, year: 2014 http://www.sciencedirect.com/science/article/pii/S0165212513001479
Wu, J. C.; Tang, H. W.; Chen, Y. Q.; Li, Y. X.
2006-07-01
In this paper, the velocities of 154 stations obtained in the 2001 and 2003 GPS survey campaigns are used to formulate a continuous velocity field by the least-squares collocation method. The strain rate field obtained by the least-squares collocation method shows clearer deformation patterns than that of the conventional discrete triangle method. The significant deformation zones obtained are mainly located in three places: to the north of Tangshan, between Tianjing and Shijiazhuang, and to the north of Datong, which agree with the locations of the Holocene active deformation zones identified by geological investigations. The maximum shear strain rate is located at latitude 38.6°N and longitude 116.8°E, with a magnitude of 0.13 ppm/a. The strain rate field obtained can be used for earthquake prediction research in the North China Basin.
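Least-squares collocation itself can be sketched in a few lines: predictions at new points are formed from scattered observations through a signal covariance model plus observation noise. The Gaussian covariance below is an assumed illustrative model, applied to a scalar field rather than the paper's velocity components.

```python
import numpy as np

def lsc_predict(obs_xy, obs_val, pred_xy, var_signal, corr_len, var_noise):
    """Least-squares collocation prediction: s_hat = C_po (C_oo + N)^{-1} y,
    with a Gaussian signal covariance and uncorrelated noise of variance
    var_noise on the observations."""
    def cov(a, b):
        d2 = np.sum((a[:, None] - b[None, :]) ** 2, axis=-1)
        return var_signal * np.exp(-d2 / (2.0 * corr_len ** 2))
    C_oo = cov(obs_xy, obs_xy) + var_noise * np.eye(len(obs_xy))
    C_po = cov(pred_xy, obs_xy)
    return C_po @ np.linalg.solve(C_oo, obs_val)

# three scattered "stations" with scalar values; predict back at the stations
obs_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
obs_val = np.array([1.0, 2.0, 3.0])
pred = lsc_predict(obs_xy, obs_val, obs_xy, 1.0, 0.5, 1e-8)
```

With near-zero noise the prediction interpolates the observations; larger var_noise smooths them, and strain rates follow by differentiating the predicted field.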
International Nuclear Information System (INIS)
Li, Shengquan; Li, Juan; Mo, Yueping; Zhao, Rong
2014-01-01
A novel active method for multi-mode vibration control of an all-clamped stiffened plate (ACSP) is proposed in this paper, using the extended-state-observer (ESO) approach based on non-collocated acceleration sensors and piezoelectric actuators. Considering the capacity of the ESO to estimate system state variables, output superposition and control coupling of other modes, external excitation, and model uncertainties simultaneously, a composite control method, i.e., the ESO-based vibration control scheme, is employed to ensure rejection of the lumped disturbances and uncertainties in the closed-loop system. The phenomenon of phase hysteresis and time delay, caused by non-collocated sensor/actuator pairs, degrades the performance of the control system, even inducing instability. To solve this problem, a simple proportional-differential (PD) controller and acceleration feed-forward with an output predictor design produce the control law for each vibration mode. The modal frequencies, phase hysteresis loops and phase lag values due to non-collocated placement of the acceleration sensor and piezoelectric patch actuator are experimentally obtained, and the phase lag is compensated by using the Smith predictor technique. In order to improve the vibration control performance, a chaos optimization method based on logistic mapping is employed to auto-tune the parameters of the feedback channel. The experimental control system for the ACSP is tested using the dSPACE real-time simulation platform. Experimental results demonstrate that the proposed composite active control algorithm is an effective approach for suppressing multi-modal vibrations. (paper)
Ocko, Ilissa B.; Ginoux, Paul A.
2017-04-01
Anthropogenic aerosols are a key factor governing Earth's climate and play a central role in human-caused climate change. However, because of aerosols' complex physical, optical, and dynamical properties, aerosols are one of the most uncertain aspects of climate modeling. Fortunately, aerosol measurement networks over the past few decades have led to the establishment of long-term observations for numerous locations worldwide. Further, the availability of datasets from several different measurement techniques (such as ground-based and satellite instruments) can help scientists increasingly improve modeling efforts. This study explores the value of evaluating several model-simulated aerosol properties with data from spatially collocated instruments. We compare aerosol optical depth (AOD; total, scattering, and absorption), single-scattering albedo (SSA), Ångström exponent (α), and extinction vertical profiles in two prominent global climate models (Geophysical Fluid Dynamics Laboratory, GFDL, CM2.1 and CM3) to seasonal observations from collocated instruments (AErosol RObotic NETwork, AERONET, and Cloud-Aerosol Lidar with Orthogonal Polarization, CALIOP) at seven polluted and biomass burning regions worldwide. We find that a multi-parameter evaluation provides key insights into model biases, that data from collocated instruments can reveal underlying aerosol-governing physics, that column properties wash out important vertical distinctions, and that an improved model does not mean all aspects are improved. We conclude that it is important to make use of all available data (parameters and instruments) when evaluating aerosol properties derived by models.
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
Non-stationary covariance function modelling in 2D least-squares collocation
Darbeheshti, N.; Featherstone, W. E.
2009-06-01
Standard least-squares collocation (LSC) assumes 2D stationarity and 3D isotropy, and relies on a covariance function to account for spatial dependence in the observed data. However, the assumption that the spatial dependence is constant throughout the region of interest may sometimes be violated. Assuming a stationary covariance structure can result in over-smoothing of, e.g., the gravity field in mountains and under-smoothing in great plains. We introduce the kernel convolution method from spatial statistics for non-stationary covariance structures, and demonstrate its advantage for dealing with non-stationarity in geodetic data. We then compare stationary and non-stationary covariance functions in 2D LSC for the empirical example of gravity anomaly interpolation near the Darling Fault, Western Australia, where the field is anisotropic and non-stationary. The results with non-stationary covariance functions are better than standard LSC in terms of formal errors and cross-validation against data not used in the interpolation, demonstrating that the use of non-stationary covariance functions can improve upon standard (stationary) LSC.
A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems
Directory of Open Access Journals (Sweden)
Qingzhou Mao
2015-06-01
Full Text Available In environments that are hostile to Global Navigation Satellite Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least-squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, the proposed LSC-based method effectively corrects these errors using LiDAR control points, thereby improving the accuracy of the MLS. The method is also applied to the calibration of misalignment between the laser scanner and the POS. Several datasets from different scenarios have been adopted in order to evaluate the effectiveness of the proposed method. The experimental results indicate that this method represents a significant improvement in the accuracy of the MLS in environments that are essentially hostile to GNSS, and that it is also effective for the calibration of misalignment.
A Least Squares Collocation Approach with GOCE gravity gradients for regional Moho-estimation
Rieser, Daniel; Mayer-Guerr, Torsten
2014-05-01
The depth of the Moho discontinuity is commonly derived from seismic observations, gravity measurements, or combinations of both. In this study, we aim to use the gravity gradient measurements of the GOCE satellite mission in a Least Squares Collocation (LSC) approach for the estimation of the Moho depth on a regional scale. Due to its mission configuration and measurement setup, GOCE is able to contribute valuable information, in particular in the medium wavelengths of the gravity field spectrum, which is also of special interest for the crust-mantle boundary. In contrast to other studies, we use the full information of the gradient tensor in all three dimensions. The problem is formulated as isostatically compensated topography according to the Airy-Heiskanen model. By using a topography model in spherical harmonics representation, the topographic influences can be reduced from the gradient observations. Under the assumption of constant mantle and crustal densities, surface densities are directly derived by LSC on a regional scale, which in turn are converted into Moho depths. First investigations proved the ability of this method to resolve the gravity inversion problem with only a small amount of GOCE data, and comparisons with other seismic and gravimetric Moho models for the European region show promising results. With the recently reprocessed GOCE gradients, an improved data set shall be used for the derivation of the Moho depth. In this contribution the processing strategy is introduced and the most recent developments and results using the currently available GOCE data are presented.
International Nuclear Information System (INIS)
Joshi, J.R.
2000-01-01
The Process, Purification and Stack Buildings are collocated safety-related concrete shear wall structures with plan dimensions in excess of 100 feet. An important aspect of their seismic analysis was the determination of structure-soil-structure interaction (SSSI) effects, if any. The SSSI analysis of the Process Building, with one other building at a time, was performed with the SASSI computer code for up to 50 frequencies. Each combined model had about 1500 interaction nodes. Results of the SSSI analysis were compared with those from soil-structure interaction (SSI) analyses of the individual buildings, done with the ABAQUS and SASSI codes, for three parameters: peak accelerations, seismic forces and the in-structure floor response spectra (FRS). The results may be of wider interest due to the model size and the potential applicability to other deep-soil layered sites. Results obtained from the ABAQUS analysis were consistently higher, as expected, than those from the SSI and SSSI analyses using SASSI. The SSSI effect between the Process and Purification Buildings was not significant. The Process and Stack Building results demonstrated that under certain conditions a massive structure can have an observable effect on the seismic response of a smaller and less stiff structure.
Trajectory Planning of Satellite Formation Flying using Nonlinear Programming and Collocation
Directory of Open Access Journals (Sweden)
Hyung-Chu Lim
2008-12-01
Full Text Available Recently, satellite formation flying has been a topic of significant research interest in the aerospace community because it provides potential benefits compared to a single large spacecraft. Techniques have been proposed to design optimal formation trajectories that minimize fuel consumption in the process of formation configuration or reconfiguration. In this study, a method is introduced to build fuel-optimal trajectories by minimizing a cost function that combines the total fuel consumption of all satellites with the assignment of a fuel consumption rate to each satellite. The approach is based on collocation and nonlinear programming to satisfy constraints for collision avoidance and the final configuration. New nonlinear equality and inequality constraints are derived for the final configuration, and nonlinear inequality constraints are established for collision avoidance. The final-configuration constraints require that three or more satellites form a projected circular orbit and make an equilateral polygon in the horizontal plane. Example scenarios including these constraints and the cost function are simulated with the method to generate optimal trajectories for the formation configuration and reconfiguration of multiple satellites.
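The collocation side of such a transcription can be sketched for the simplest dynamics: with trapezoidal collocation, the continuous equations of motion become algebraic "defect" constraints that the nonlinear program drives to zero while minimizing the fuel-style cost. A double integrator stands in here for the relative orbital dynamics; the trajectory below is an exactly feasible one, not an optimized formation maneuver.

```python
import numpy as np

def trapezoid_defects(x, v, u, dt):
    """Trapezoidal collocation defects for double-integrator dynamics
    x' = v, v' = u on a uniform time grid. A feasible discrete
    trajectory makes every defect zero; an NLP solver enforces this
    as equality constraints alongside collision-avoidance inequalities."""
    dx = x[1:] - x[:-1] - dt / 2.0 * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - dt / 2.0 * (u[1:] + u[:-1])
    return np.concatenate([dx, dv])

dt = 0.1
t = np.arange(0.0, 1.0 + dt / 2, dt)
u = np.ones_like(t)          # constant thrust acceleration
v = t                        # v' = u  ->  v = t
x = 0.5 * t ** 2             # x' = v  ->  x = t^2 / 2
defects = trapezoid_defects(x, v, u, dt)
```

Stacking (x, v, u) for all satellites and all nodes into one decision vector, with these defects as constraints, yields exactly the kind of nonlinear program the abstract describes.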
Energy Technology Data Exchange (ETDEWEB)
Radecki, Peter P [Los Alamos National Laboratory; Farinholt, Kevin M [Los Alamos National Laboratory; Park, Gyuhae [Los Alamos National Laboratory; Bement, Matthew T [Los Alamos National Laboratory
2008-01-01
The machining process is very important in many engineering applications. In high-precision machining, surface finish is strongly correlated with vibrations and the dynamic interactions between the part and the cutting tool. Parameters affecting these vibrations and dynamic interactions, such as spindle speed, cut depth, feed rate, and the part's material properties, can vary in real time, resulting in unexpected or undesirable effects on the surface finish of the machined product. The focus of this research is the development of an improved machining process through the use of active vibration damping. The tool holder employs a high-bandwidth piezoelectric actuator with an adaptive positive position feedback control algorithm for vibration and chatter suppression. In addition, instead of using external sensors, the proposed approach investigates the use of a collocated piezoelectric sensor for measuring the dynamic responses from machining processes. The performance of this method is evaluated by comparing the surface finishes obtained with active vibration control versus baseline uncontrolled cuts. Considerable improvement in surface finish (up to 50%) was observed for applications in modern day machining.
International Nuclear Information System (INIS)
Ozer, Ekin; Feng, Dongming; Feng, Maria Q
2017-01-01
State-of-the-art multisensory technologies and heterogeneous sensor networks offer a wide range of response measurement opportunities for structural health monitoring (SHM). Measuring and fusing different physical quantities related to structural vibrations can provide alternative acquisition methods and improve the quality of modal testing results. This study focuses on a recently introduced SHM concept, SHM with smartphones, utilizing multisensory smartphone features for a hybridized structural vibration response measurement framework. Based on vibration testing of a small-scale multistory laboratory model, displacement and acceleration responses are monitored using two different smartphone sensors, an embedded camera and accelerometer, respectively. Double integration or differentiation between the different measurement types is performed to combine the multisensory measurements on a comparative basis. In addition, distributed sensor signals from collocated devices are processed for modal identification, and the performance of smartphone-based sensing platforms is tested under different configuration scenarios and heterogeneity levels. The results of these tests show a novel and successful implementation of a hybrid motion sensing platform through multiple sensor type and device integration. Despite the heterogeneity of motion data obtained from different smartphone devices and technologies, it is shown that multisensory response measurements can be blended for experimental modal analysis. Benefiting from the accessibility of smartphone technology, similar smartphone-based dynamic testing methodologies can provide innovative SHM solutions with mobile, programmable, and cost-free interfaces. (paper)
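The double-integration/differentiation step that reconciles the two sensor types can be sketched numerically: differentiating a camera-style displacement record twice should recover the accelerometer record, up to edge effects. The signals below are synthetic, not the study's measurements; the sample rate and vibration frequency are assumptions.

```python
import numpy as np

# A camera measures displacement while an accelerometer measures acceleration
# of the same sinusoidal vibration; twice-differentiating the displacement
# should reproduce the acceleration up to numerical edge effects.
fs, f0 = 1000.0, 2.0                       # sample rate and vibration frequency (Hz)
t = np.arange(0.0, 5.0, 1.0 / fs)
disp = np.sin(2 * np.pi * f0 * t)          # camera-style displacement signal
acc = -(2 * np.pi * f0) ** 2 * np.sin(2 * np.pi * f0 * t)   # accelerometer signal
acc_est = np.gradient(np.gradient(disp, t), t)              # d^2(disp)/dt^2
# compare away from the boundaries, where one-sided differences are less accurate
err = np.max(np.abs(acc_est[10:-10] - acc[10:-10])) / np.max(np.abs(acc))
```

The reverse direction (double integration of acceleration) additionally needs detrending, since sensor bias grows quadratically under integration.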
Ozer, Ekin; Feng, Dongming; Feng, Maria Q.
2017-10-01
State-of-the-art multisensory technologies and heterogeneous sensor networks offer a wide range of response measurement opportunities for structural health monitoring (SHM). Measuring and fusing different physical quantities related to structural vibrations can provide alternative acquisition methods and improve the quality of modal testing results. This study focuses on a recently introduced SHM concept, SHM with smartphones, utilizing multisensory smartphone features for a hybridized structural vibration response measurement framework. Based on vibration testing of a small-scale multistory laboratory model, displacement and acceleration responses are monitored using two different smartphone sensors, an embedded camera and accelerometer, respectively. Double integration or differentiation between the different measurement types is performed to combine the multisensory measurements on a comparative basis. In addition, distributed sensor signals from collocated devices are processed for modal identification, and the performance of smartphone-based sensing platforms is tested under different configuration scenarios and heterogeneity levels. The results of these tests show a novel and successful implementation of a hybrid motion sensing platform through multiple sensor type and device integration. Despite the heterogeneity of motion data obtained from different smartphone devices and technologies, it is shown that multisensory response measurements can be blended for experimental modal analysis. Benefiting from the accessibility of smartphone technology, similar smartphone-based dynamic testing methodologies can provide innovative SHM solutions with mobile, programmable, and cost-free interfaces.