WorldWideScience

Sample records for spline collocation approach

  1. A fourth order spline collocation approach for a business cycle model

    Science.gov (United States)

    Sayfy, A.; Khoury, S.; Ibdah, H.

    2013-10-01

    A collocation approach based on fourth-order cubic B-splines is presented for the numerical solution of a Kaleckian business cycle model formulated as a nonlinear delay differential equation. The equation is approximated and the nonlinearity is handled by employing an iterative scheme arising from Newton's method. It is shown that the model exhibits a conditionally dynamically stable cycle. The fourth-order rate of convergence of the scheme is verified numerically for different special cases.
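
To make the collocation idea concrete, here is a minimal sketch of cubic B-spline collocation for a simple linear two-point boundary value problem (not the Kaleckian delay model of the paper): the solution is expanded in a clamped cubic B-spline basis, the differential equation is enforced at the Greville abscissae, and the boundary conditions replace the first and last equations. The test problem, the choice of Greville collocation points, and all names are illustrative assumptions, not the specific fourth-order scheme of the paper.

```python
import numpy as np
from scipy.interpolate import BSpline

# Solve -u'' = pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0; exact solution u = sin(pi x).
a, b, k, n_seg = 0.0, 1.0, 3, 16
t = np.r_[[a] * k, np.linspace(a, b, n_seg + 1), [b] * k]   # clamped knot vector
n = len(t) - k - 1                                          # number of basis functions

# Greville abscissae serve as collocation points (an illustrative choice)
xi = np.array([t[i + 1:i + k + 1].mean() for i in range(n)])

def basis_matrix(x, der=0):
    """Evaluate all B-spline basis functions (or a derivative) at the points x."""
    cols = []
    for j in range(n):
        c = np.zeros(n); c[j] = 1.0
        bj = BSpline(t, c, k)
        cols.append(bj(x) if der == 0 else bj.derivative(der)(x))
    return np.column_stack(cols)

A = -basis_matrix(xi, der=2)                 # collocate -u'' at the interior points
A[0, :] = basis_matrix(np.array([a]))[0]     # boundary condition u(a) = 0
A[-1, :] = basis_matrix(np.array([b]))[0]    # boundary condition u(b) = 0
rhs = np.pi ** 2 * np.sin(np.pi * xi)
rhs[0] = rhs[-1] = 0.0

coef = np.linalg.solve(A, rhs)
u_h = BSpline(t, coef, k)
xx = np.linspace(a, b, 200)
print("max error:", np.max(np.abs(u_h(xx) - np.sin(np.pi * xx))))
```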

  2. Schwarz and multilevel methods for quadratic spline collocation

    Energy Technology Data Exchange (ETDEWEB)

    Christara, C.C. [Univ. of Toronto, Ontario (Canada)]; Smith, B. [Univ. of California, Los Angeles, CA (United States)]

    1994-12-31

    Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for splines of certain degrees. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.

  3. An efficient approach to numerical study of the coupled-BBM system with B-spline collocation method

    Directory of Open Access Journals (Sweden)

    Khalid Ali

    2016-11-01

    Full Text Available In the present paper, a numerical method is proposed for the numerical solution of a coupled-BBM system with appropriate initial and boundary conditions by using the collocation method with cubic trigonometric B-splines on uniform mesh points. The method is shown to be unconditionally stable using the von Neumann technique. To test accuracy, the error norms L2 and L∞ are computed. Furthermore, the interaction of two and three solitary waves is used to discuss the behavior of the solitary waves after the interaction. These results show that the technique introduced here is easy to apply. The nonlinear term is handled by linearization.

  4. B-spline Collocation with Domain Decomposition Method

    International Nuclear Information System (INIS)

    Hidayat, M I P; Parman, S; Ariwahjoedi, B

    2013-01-01

    A global B-spline collocation method has been previously developed and successfully implemented by the present authors for solving elliptic partial differential equations in arbitrary complex domains. However, the global B-spline approximation, which is simply reduced to Bezier approximation of any degree p with C0 continuity, has led to the use of B-spline bases of high order in order to achieve high accuracy. The need for B-spline bases of high order in the global method would be more prominent in domains of large dimension. For an increased number of collocation points, it may also lead to an ill-conditioning problem. In this study, overlapping domain decomposition based on the multiplicative Schwarz algorithm is combined with the global method. Our objective is two-fold: to improve the accuracy with the combination technique, and to investigate the influence of the combination technique on the required B-spline basis orders with respect to the obtained accuracy. It was shown that the combination method produced higher accuracy with a B-spline basis of much lower order than that needed in the initial method. Hence, the approximation stability of the B-spline collocation method was also increased.
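
As a rough illustration of the overlapping multiplicative Schwarz idea mentioned above (decoupled from B-spline collocation for brevity), the sketch below alternately solves -u'' = f on two overlapping subdomains of [0, 1] with a simple finite-difference discretization, taking the current value of the other subdomain's solution as boundary data. The grid size, overlap, and test problem are assumptions chosen purely for illustration.

```python
import numpy as np

# Multiplicative Schwarz for -u'' = f on [0, 1], u(0) = u(1) = 0,
# with two overlapping subdomains [0, 0.6] and [0.4, 1].
N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)          # exact solution is u = sin(pi x)
u = np.zeros(N)                             # global iterate

i1 = np.arange(0, 61)                       # subdomain 1: x in [0, 0.6]
i2 = np.arange(40, 101)                     # subdomain 2: x in [0.4, 1]

def solve_subdomain(idx, left_bc, right_bc):
    """Solve -u'' = f on one subdomain with Dirichlet data from the current iterate."""
    m = len(idx) - 2                         # interior unknowns
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h ** 2
    rhs = f[idx[1:-1]].copy()
    rhs[0] += left_bc / h ** 2
    rhs[-1] += right_bc / h ** 2
    return np.linalg.solve(A, rhs)

for _ in range(20):                          # Schwarz sweeps
    u[i1[1:-1]] = solve_subdomain(i1, u[i1[0]], u[i1[-1]])
    u[i2[1:-1]] = solve_subdomain(i2, u[i2[0]], u[i2[-1]])

print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```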

  5. Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Dong [KyungPook National Univ., Taegu (Korea, Republic of)]

    1996-12-31

    In this talk we discuss the finite element and finite difference techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator $A$ defined by $Au := -\Delta u + a_1 u_x + a_2 u_y + a_0 u$ in $\Omega$ (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points one obtains the matrix $A_N$ ($h = 1/N$).

  6. A fractional spline collocation-Galerkin method for the time-fractional diffusion equation

    Directory of Open Access Journals (Sweden)

    Pezza L.

    2018-03-01

    Full Text Available The aim of this paper is to numerically solve a diffusion differential problem having a time derivative of fractional order. To this end we propose a collocation-Galerkin method that uses fractional splines as approximating functions. The main advantage is that the derivatives of integer and fractional order of the fractional splines can be expressed in a closed form that involves just the generalized finite difference operator. This allows us to construct an accurate and efficient numerical method. Several numerical tests showing the effectiveness of the proposed method are presented.

  7. Spline Collocation Method for Nonlinear Multi-Term Fractional Differential Equation

    OpenAIRE

    Choe, Hui-Chol; Kang, Yong-Suk

    2013-01-01

    We study an approximation method to solve nonlinear multi-term fractional differential equations with initial conditions or boundary conditions. First, we transform the nonlinear multi-term fractional differential equations with initial conditions and boundary conditions to nonlinear fractional integral equations and consider the relations between them. We present a Spline Collocation Method and prove the existence, uniqueness and convergence of the approximate solution as well as error estimation.

  8. Hybrid B-Spline Collocation Method for Solving the Generalized Burgers-Fisher and Burgers-Huxley Equations

    Directory of Open Access Journals (Sweden)

    Imtiaz Wasim

    2018-01-01

    Full Text Available In this study, we introduce a new numerical technique for solving the nonlinear generalized Burgers-Fisher and Burgers-Huxley equations using a hybrid B-spline collocation method. This technique is based on the usual finite difference scheme and the Crank-Nicolson method, which are used to discretize the time derivative and spatial derivatives, respectively. Furthermore, a hybrid B-spline function is utilized as the interpolating function in the spatial dimension. The scheme is shown to be unconditionally stable using the von Neumann (Fourier) method. Several test problems are considered to check the accuracy of the proposed scheme. The numerical results are in good agreement with known exact solutions and the existing schemes in the literature.

  9. On the equivalence of spherical splines with least-squares collocation and Stokes's formula for regional geoid computation

    Science.gov (United States)

    Ophaug, Vegard; Gerlach, Christian

    2017-11-01

    This work is an investigation of three methods for regional geoid computation: Stokes's formula, least-squares collocation (LSC), and spherical radial base functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223-232, 1995. doi: 10.1007/BF00806734) that the equivalence of Stokes's formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has been introduced also in the SK. Ultimately, we present numerical examples comparing Stokes's formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All agree on the millimeter level.

  10. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    Science.gov (United States)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
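
The background/residual separation described above can be reproduced in outline with an ordinary least-squares cubic spline on equidistant knots; this is the standard building block, not the repeating spline approach introduced in the paper, and the synthetic data, knot count, and wave parameters below are assumptions for illustration only.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)                        # time axis
background = 0.5 * t + np.sin(0.3 * t)                 # slowly varying background
wave = 0.3 * np.sin(2 * np.pi * t / 0.8)               # superimposed fluctuation
y = background + wave + 0.05 * rng.standard_normal(t.size)

knots = np.linspace(t[0], t[-1], 12)[1:-1]             # equidistant interior knots
spline = LSQUnivariateSpline(t, y, knots, k=3)         # cubic least-squares spline
residuals = y - spline(t)                              # fluctuations about the background
print("residual std:", residuals.std())
```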

  11. Bessel collocation approach for approximate solutions of Hantavirus infection model

    Directory of Open Access Journals (Sweden)

    Suayip Yuzbasi

    2017-11-01

    Full Text Available In this study, a collocation method is introduced to find the approximate solutions of Hantavirus infection model which is a system of nonlinear ordinary differential equations. The method is based on the Bessel functions of the first kind, matrix operations and collocation points. This method converts Hantavirus infection model into a matrix equation in terms of the Bessel functions of first kind, matrix operations and collocation points. The matrix equation corresponds to a system of nonlinear equations with the unknown Bessel coefficients. The reliability and efficiency of the suggested scheme are demonstrated by numerical applications and all numerical calculations have been done by using a program written in Maple.
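
A heavily simplified sketch of collocating with a Bessel-function basis is given below for a single linear test equation u' + u = 0, u(0) = 1 (not the Hantavirus system, and not the matrix-operational formulation of the paper): the solution is written as a truncated series of Bessel functions of the first kind and the residual is forced to vanish at a set of collocation points. All parameters here are illustrative assumptions.

```python
import numpy as np
from scipy.special import jv, jvp
from scipy.optimize import fsolve

N = 6                                       # highest Bessel order in the basis
ts = np.linspace(0.0, 1.0, N + 1)           # collocation points on [0, 1]

def u(c, t):
    """Truncated series sum_n c_n J_n(t)."""
    return sum(c[n] * jv(n, t) for n in range(N + 1))

def du(c, t):
    """Derivative of the truncated series, using jvp(n, t) = d/dt J_n(t)."""
    return sum(c[n] * jvp(n, t) for n in range(N + 1))

def residual(c):
    res = [u(c, 0.0) - 1.0]                              # initial condition u(0) = 1
    res += [du(c, t) + u(c, t) for t in ts[1:]]          # collocate u' + u = 0
    return res

c = fsolve(residual, np.zeros(N + 1))
tt = np.linspace(0.0, 1.0, 5)
print("max error vs exp(-t):", max(abs(u(c, t) - np.exp(-t)) for t in tt))
```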

  12. English collocations: A novel approach to teaching the language's last bastion

    Directory of Open Access Journals (Sweden)

    Rafe S. Zaabalawi

    2017-01-01

    Full Text Available Collocations are a class of idiomatic expressions comprised of a sequence of words which, for mostly arbitrary reasons, occur together in a prescribed order. Collocations are not necessarily grammatical and/or cannot be generated through knowledge of rules or formulae. Therefore, they are often not easily mastered by EFL learners and typically only dealt with during the latter phase of second language apprenticeship. The literature has mostly examined the phenomenon of collocations from one of two perspectives. First, there are studies focusing on error analysis and contingent pedagogical advice. Second, there is research concerned with theory development; a genre associated with specific methodological limitations. This study reports on data pertaining to a novel approach to learning collocations; one based on a learner's incidental discovery of such structures in written texts. Our research question is: will students who have been introduced to and practiced specific collocations in reading texts be inclined to naturally use such exemplars appropriately in novel/unfamiliar subsequent contexts? Findings have implications for EFL teachers and those concerned with curriculum development.

  13. A modified linear algebraic approach to electron scattering using cubic splines

    International Nuclear Information System (INIS)

    Kinney, R.A.

    1986-01-01

    A modified linear algebraic approach to the solution of the Schrödinger equation for low-energy electron scattering is presented. The method uses a piecewise cubic-spline approximation of the wavefunction. Results in the static-potential and the static-exchange approximations for e⁻ + H s-wave scattering are compared with unmodified linear algebraic and variational linear algebraic methods. (author)

  14. Proxemic Mobile Collocated Interactions

    DEFF Research Database (Denmark)

    Porcheron, Martin; Lucero, Andrés; Quigley, Aaron

    2016-01-01

    Recent research on mobile collocated interactions has been looking at situations in which collocated users engage in collaborative activities using their mobile devices. However, existing practices fail to fully account for the culturally-dependent spatial relationships between people and their digital devices (i.e. the proxemic relationships). Building on the ideas of proxemic interactions, this workshop is motivated by the concept of 'proxemic mobile collocated interactions', to harness new or existing technologies to create engaging and interactionally relevant experiences. Such approaches ... in exploring proxemics and mobile collocated interactions.

  15. The Effects of Input Flood and Consciousness-Raising Approach on Collocation Knowledge Development of Language Learners

    Directory of Open Access Journals (Sweden)

    Elaheh Hamed Mahvelati

    2012-11-01

    Full Text Available Many researchers stress the importance of lexical coherence and emphasize the need for teaching collocations at all levels of language proficiency. Thus, this study was conducted to measure the relative effectiveness of the explicit (consciousness-raising approach) versus implicit (input flood) collocation instruction with regard to learners' knowledge of both lexical and grammatical collocations. Ninety-five upper-intermediate learners, who were randomly assigned to the control and experimental groups, served as the participants of this study. While one of the experimental groups was provided with input flood treatment, the other group received explicit collocation instruction. In contrast, the participants in the control group did not receive any instruction on learning collocations. The results of the study, which were collected through pre-test, immediate post-test and delayed post-test, revealed that although both methods of teaching collocations proved effective, the explicit method of the consciousness-raising approach was significantly superior to the implicit method of the input flood treatment.

  16. SPLINE, Spline Interpolation Function

    International Nuclear Information System (INIS)

    Allouard, Y.

    1977-01-01

    1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to the (2Q-1) order. The program consists of the following two subprograms: ASPLERQ. Transport of relations method for the spline functions of interpolation. SPLQ. Spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10

  17. A Fourier Collocation Approach for Transit-Time Ultrasonic Flowmeter Under Multi-Phase Flow Conditions

    DEFF Research Database (Denmark)

    Simurda, Matej; Lassen, Benny; Duggen, Lars

    2017-01-01

    A numerical model for a clamp-on transit-time ultrasonic flowmeter (TTUF) under multi-phase flow conditions is presented. The method solves equations of linear elasticity for isotropic heterogeneous materials with background flow, where acoustic media are modeled by setting the shear modulus to zero. Spatial derivatives are calculated by a Fourier collocation method allowing the use of the fast Fourier transform (FFT) and time derivatives are approximated by a finite difference (FD) scheme. This approach is sometimes referred to as a pseudospectral time-domain method. Perfectly matched layers (PML) are used to avoid wave-wrapping and staggered grids are implemented to improve stability and efficiency. The method is verified against exact analytical solutions and the effect of the time-staggering and the associated lowest number of points per minimum wavelength is discussed.
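
The core ingredient named above, a Fourier collocation (pseudospectral) evaluation of spatial derivatives via the FFT, can be illustrated in a few lines on a periodic 1D grid. The test function and grid size are assumptions, and the full flowmeter model (elastodynamics, PMLs, staggering) is of course not reproduced here.

```python
import numpy as np

# Pseudospectral first derivative of a smooth periodic function via the FFT.
N = 64
L = 2.0 * np.pi
x = np.arange(N) * L / N                         # periodic grid, endpoint excluded
u = np.exp(np.sin(x))                            # smooth periodic test field
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wavenumbers
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
exact = np.cos(x) * np.exp(np.sin(x))
print("max error:", np.max(np.abs(du - exact)))  # close to machine precision
```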

  18. Rectangular spectral collocation

    KAUST Repository

    Driscoll, Tobin A.; Hale, Nicholas

    2015-01-01

    Boundary conditions in spectral collocation methods are typically imposed by removing some rows of the discretized differential operator and replacing them with others that enforce the required conditions at the boundary. A new approach based upon resampling differentiated polynomials into a lower-degree subspace makes differentiation matrices, and operators built from them, rectangular without any row deletions.

  19. A cubic B-spline Galerkin approach for the numerical simulation of the GEW equation

    Directory of Open Access Journals (Sweden)

    S. Battal Gazi Karakoç

    2016-02-01

    Full Text Available The generalized equal width (GEW wave equation is solved numerically by using lumped Galerkin approach with cubic B-spline functions. The proposed numerical scheme is tested by applying two test problems including single solitary wave and interaction of two solitary waves. In order to determine the performance of the algorithm, the error norms L2 and L∞ and the invariants I1, I2 and I3 are calculated. For the linear stability analysis of the numerical algorithm, von Neumann approach is used. As a result, the obtained findings show that the presented numerical scheme is preferable to some recent numerical methods.  

  20. An efficient Bayesian inference approach to inverse problems based on an adaptive sparse grid collocation method

    International Nuclear Information System (INIS)

    Ma Xiang; Zabaras, Nicholas

    2009-01-01

    A new approach to modeling inverse problems using a Bayesian inference method is introduced. The Bayesian approach considers the unknown parameters as random variables and seeks the probabilistic distribution of the unknowns. By introducing the concept of the stochastic prior state space to the Bayesian formulation, we reformulate the deterministic forward problem as a stochastic one. The adaptive hierarchical sparse grid collocation (ASGC) method is used for constructing an interpolant to the solution of the forward model in this prior space, which is large enough to capture all the variability/uncertainty in the posterior distribution of the unknown parameters. This solution can be considered as a function of the random unknowns and serves as a stochastic surrogate model for the likelihood calculation. A hierarchical Bayesian formulation is used to derive the posterior probability density function (PPDF). The spatial model is represented as a convolution of a smooth kernel and a Markov random field. The state space of the PPDF is explored using Markov chain Monte Carlo algorithms to obtain statistics of the unknowns. The likelihood calculation is performed by directly sampling the approximate stochastic solution obtained through the ASGC method. The technique is assessed on two nonlinear inverse problems: source inversion and permeability estimation in flow through porous media.
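
The sketch below illustrates the surrogate-accelerated Bayesian idea in its simplest form: a cheap interpolant of a toy forward model replaces the expensive solver inside a random-walk Metropolis loop. A one-dimensional Chebyshev interpolant stands in for the adaptive sparse grid collocation surrogate, and the forward model, prior, and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(theta):
    """Toy forward model standing in for an expensive PDE solve (assumption)."""
    return np.sin(3 * theta) + theta ** 2

# Surrogate: Chebyshev interpolant of the forward model on the prior range [-1, 1]
nodes = np.cos(np.pi * (np.arange(17) + 0.5) / 17)          # Chebyshev nodes
coeffs = np.polynomial.chebyshev.chebfit(nodes, forward(nodes), 16)
surrogate = lambda th: np.polynomial.chebyshev.chebval(th, coeffs)

# Synthetic observation generated from the "true" parameter
theta_true, sigma = 0.4, 0.05
d_obs = forward(theta_true) + sigma * rng.normal()

def log_post(theta):
    if abs(theta) > 1.0:                                     # uniform prior on [-1, 1]
        return -np.inf
    return -0.5 * ((d_obs - surrogate(theta)) / sigma) ** 2  # Gaussian likelihood

# Random-walk Metropolis using the surrogate for every likelihood evaluation
theta, samples = 0.0, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + 0.1 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
print("posterior mean/std:", np.mean(samples[5000:]), np.std(samples[5000:]))
```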

  1. RGB color calibration for quantitative image analysis: the "3D thin-plate spline" warping approach.

    Science.gov (United States)

    Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado

    2012-01-01

    In recent years the need to numerically define color by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines to quantitatively compare samples' color during workflow with many devices. Several software programmes are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colours in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches. The first is based on a commercial calibration system (ProfileMaker) and the second on a Partial Least Square analysis. Moreover, to explore device variability and resolution, two different cameras were adopted and for each sensor, three consecutive pictures were acquired under four different light conditions. According to our results, the Thin-Plate Spline approach showed a very high calibration efficiency, opening the possibility of routine in-field colour quantification not only in food sciences, but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, it allows the use of low-cost instruments while still returning scientifically sound quantitative data.
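
For readers who want to experiment with the thin-plate spline warping idea (the paper reports a Matlab implementation), a rough scipy-based analogue is sketched below: a 3D-to-3D thin-plate spline map is fitted from measured chart colours to their reference values and then applied to arbitrary pixels. The synthetic chart values and the simulated device distortion are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical example: 24 measured RGB values of a colour chart and their
# reference (target) RGB values; both arrays have shape (24, 3).
rng = np.random.default_rng(0)
reference = rng.uniform(0, 255, size=(24, 3))
measured = reference * 0.9 + 8 + rng.normal(0, 2, size=(24, 3))   # device distortion

# 3D thin-plate spline warp from device RGB space to reference RGB space
warp = RBFInterpolator(measured, reference, kernel='thin_plate_spline')

image_pixels = rng.uniform(0, 255, size=(1000, 3))   # pixels to be calibrated
calibrated = warp(image_pixels)
print(calibrated.shape)
```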

  2. A Least Squares Collocation Approach with GOCE gravity gradients for regional Moho-estimation

    Science.gov (United States)

    Rieser, Daniel; Mayer-Guerr, Torsten

    2014-05-01

    The depth of the Moho discontinuity is commonly derived by either seismic observations, gravity measurements or combinations of both. In this study, we aim to use the gravity gradient measurements of the GOCE satellite mission in a Least Squares Collocation (LSC) approach for the estimation of the Moho depth on regional scale. Due to its mission configuration and measurement setup, GOCE is able to contribute valuable information in particular in the medium wavelengths of the gravity field spectrum, which is also of special interest for the crust-mantle boundary. In contrast to other studies we use the full information of the gradient tensor in all three dimensions. The problem outline is formulated as isostatically compensated topography according to the Airy-Heiskanen model. By using a topography model in spherical harmonics representation the topographic influences can be reduced from the gradient observations. Under the assumption of constant mantle and crustal densities, surface densities are directly derived by LSC on regional scale, which in turn are converted into Moho depths. First investigations proved the ability of this method to resolve the gravity inversion problem already with a small amount of GOCE data, and comparisons with other seismic and gravimetric Moho models for the European region show promising results. With the recently reprocessed GOCE gradients, an improved data set shall be used for the derivation of the Moho depth. In this contribution the processing strategy will be introduced and the most recent developments and results using the currently available GOCE data shall be presented.
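
Stripped of the GOCE-specific reductions, least-squares collocation predicts a signal from noisy observations through covariance matrices, s_hat = C_sl (C_ll + D)^(-1) l. The one-dimensional toy below uses an assumed Gaussian covariance model and synthetic data purely to show the mechanics, not the geophysical application.

```python
import numpy as np

rng = np.random.default_rng(2)

def cov(a, b, c0=1.0, d0=0.2):
    """Isotropic Gaussian covariance model (an assumption for illustration)."""
    return c0 * np.exp(-((a[:, None] - b[None, :]) / d0) ** 2)

# Noisy observations of an unknown signal at scattered points
x_obs = rng.uniform(0.0, 1.0, 30)
noise = 0.05
l = np.sin(2 * np.pi * x_obs) + noise * rng.normal(size=x_obs.size)

# LSC prediction on a regular grid: s_hat = C_sl (C_ll + D)^(-1) l
x_new = np.linspace(0.0, 1.0, 101)
C_ll = cov(x_obs, x_obs) + noise ** 2 * np.eye(x_obs.size)   # signal + noise covariance
C_sl = cov(x_new, x_obs)                                     # cross-covariance
s_hat = C_sl @ np.linalg.solve(C_ll, l)
print("max prediction error:", np.max(np.abs(s_hat - np.sin(2 * np.pi * x_new))))
```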

  3. Least-squares collocation meshless approach for radiative heat transfer in absorbing and scattering media

    Science.gov (United States)

    Liu, L. H.; Tan, J. Y.

    2007-02-01

    A least-squares collocation meshless method is employed for solving the radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. In addition to the collocation points, which are used to construct the trial functions, a number of auxiliary points are adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution of the problem by minimizing the summation of residuals of all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with the other benchmark approximate solutions. By comparison, the results show that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving the radiative heat transfer in absorbing, emitting and scattering media.

  4. Least-squares collocation meshless approach for radiative heat transfer in absorbing and scattering media

    International Nuclear Information System (INIS)

    Liu, L.H.; Tan, J.Y.

    2007-01-01

    A least-squares collocation meshless method is employed for solving the radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. In addition to the collocation points, which are used to construct the trial functions, a number of auxiliary points are adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution of the problem by minimizing the summation of residuals of all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with the other benchmark approximate solutions. By comparison, the results show that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving the radiative heat transfer in absorbing, emitting and scattering media.

  5. Temporal gravity field modeling based on least square collocation with short-arc approach

    Science.gov (United States)

    ran, jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to derive the Earth's gravity field, and two main objectives are discussed. Firstly, we seek the optimal method to estimate the accelerometer parameters, and secondly, we intend to recover the monthly gravity model based on the least squares collocation method. The method has received less attention than the least squares adjustment method because of its massive computational resource requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters are valuable information to improve the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of our estimated monthly gravity field. The gravity errors outside the continents are significantly reduced based on the selected set of the accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed from the Geoforschungszentrum (GFZ), the Center for Space Research (CSR) and the NASA Jet Propulsion Laboratory (JPL). In terms of the equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.

  6. Target Registration Error minimization involving deformable organs using elastic body splines and Particle Swarm Optimization approach.

    Science.gov (United States)

    Spinczyk, Dominik; Fabian, Sylwester

    2017-12-01

    In minimally invasive surgery one of the main challenges is the precise location of the target during the intervention. The aim of the study is to present the usability of elastic body splines (EBS) to minimize the TRE. A method to find the desired EBS parameter values using a Particle Swarm Optimization approach is presented. This TRE minimization has been achieved for the respiratory phases corresponding to minimum FRE for abdominal (especially liver) surgery. The proposed methodology was verified during experiments conducted on 21 patients diagnosed with liver tumors. This method has been developed to perform operations in real-time on a standard workstation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Teaching collocations

    DEFF Research Database (Denmark)

    Revier, Robert Lee; Henriksen, Birgit

    2006-01-01

    Very little pedagogy has been made available to teachers interested in teaching collocations in foreign and/or second language classrooms. This paper aims to contribute to and promote efforts in developing L2-based pedagogy for the teaching of phraseology. To this end, it presents pedagogical...

  8. The basis spline method and associated techniques

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.

    1989-01-01

    We outline the Basis Spline and Collocation methods for the solution of Partial Differential Equations. Particular attention is paid to the theory of errors, and the handling of non-self-adjoint problems which are generated by the collocation method. We discuss applications to Poisson's equation, the Dirac equation, and the calculation of bound and continuum states of atomic and nuclear systems. 12 refs., 6 figs

  9. Spline methods for conservation equations

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.

    1991-01-01

    We consider the numerical solution of physical theories, in particular hydrodynamics, which can be formulated as systems of conservation laws. To this end we briefly describe the Basis Spline and collocation methods, paying particular attention to representation theory, which provides discrete analogues of the continuum conservation and dispersion relations, and hence a rigorous understanding of errors and instabilities. On this foundation we propose an algorithm for hydrodynamic problems in which most linear and nonlinear instabilities are brought under control. Numerical examples are presented from one-dimensional relativistic hydrodynamics. 9 refs., 10 figs

  10. Time-dependent B-spline R-matrix approach to double ionization of atoms by XUV laser pulses

    Energy Technology Data Exchange (ETDEWEB)

    Guan Xiaoxu; Zatsarinny, Oleg; Bartschat, Klaus [Department of Physics and Astronomy, Drake University, Des Moines, Iowa 50311 (United States); Noble, Clifford J [Computational Science and Engineering Department, Daresbury Laboratory, Warrington WA4 4AD (United Kingdom); Schneider, Barry I, E-mail: xiaoxu.guan@drake.ed, E-mail: klaus.bartschat@drake.ed, E-mail: bschneid@nsf.go [Physics Division, National Science Foundation, Arlington, Virgina 22230 (United States)

    2009-11-01

    We present an ab initio and non-perturbative time-dependent approach to the problem of double ionization of a general atom driven by intense XUV laser pulses. After using a highly flexible B-spline R-matrix method to generate field-free Hamiltonian and electric dipole matrices, the initial state is propagated in time using an efficient Arnoldi-Lanczos scheme. Example results for momentum and energy distributions of the two outgoing electrons in two-color pump-probe processes of He are presented.

  11. A time-dependent B-spline R-matrix approach to double ionization of atoms by XUV laser pulses

    Energy Technology Data Exchange (ETDEWEB)

    Guan Xiaoxu; Zatsarinny, O; Noble, C J; Bartschat, K [Department of Physics and Astronomy, Drake University, Des Moines, IA 50311 (United States); Schneider, B I [Physics Division, National Science Foundation, Arlington, Virgina 22230 (United States)], E-mail: xiaoxu.guan@drake.edu, E-mail: oleg.zatsarinny@drake.edu, E-mail: cjn@maxnet.co.nz, E-mail: klaus.bartschat@drake.edu, E-mail: bschneid@nsf.gov

    2009-07-14

    We present an ab initio and non-perturbative time-dependent approach to the problem of double ionization of a general atom driven by intense XUV laser pulses. After using a highly flexible B-spline R-matrix method to generate field-free Hamiltonian and electric dipole matrices, the initial state is propagated in time using an efficient Arnoldi-Lanczos scheme. Test calculations for double ionization of He by a single laser pulse yield good agreement with benchmark results obtained with other methods. The method is then applied to two-colour pump-probe processes, for which momentum and energy distributions of the two outgoing electrons are presented.
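
The time propagation step mentioned in both records above, applying exp(-iHΔt) to a state with a short Lanczos (Krylov) recurrence, can be sketched independently of the B-spline R-matrix machinery. The toy Hermitian matrix below merely stands in for the field-free-plus-dipole Hamiltonian, and the Krylov dimension and time step are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_expm(H, psi, dt, m=20):
    """Approximate exp(-1j * H * dt) @ psi with an m-step Lanczos (Krylov) scheme."""
    n = psi.size
    V = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    nrm = np.linalg.norm(psi)
    V[:, 0] = psi / nrm
    for j in range(m):
        w = H @ V[:, j]
        alpha[j] = np.real(np.vdot(V[:, j], w))
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    y = evecs @ (np.exp(-1j * evals * dt) * evecs[0, :])   # exp(-i T dt) e_1
    return nrm * (V @ y)

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200))
H = (A + A.T) / 2.0                           # toy Hermitian "Hamiltonian" (assumption)
psi0 = rng.normal(size=200).astype(complex)
psi0 /= np.linalg.norm(psi0)

psi_dt = lanczos_expm(H, psi0, dt=0.1)
exact = expm(-1j * H * 0.1) @ psi0
print("propagation error:", np.linalg.norm(psi_dt - exact))
```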

  12. Correcting bias in the rational polynomial coefficients of satellite imagery using thin-plate smoothing splines

    Science.gov (United States)

    Shen, Xiang; Liu, Bin; Li, Qing-Quan

    2017-03-01

    The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates

  13. A Novel Approach of Cardiac Segmentation In CT Image Based On Spline Interpolation

    International Nuclear Information System (INIS)

    Gao Yuan; Ma Pengcheng

    2011-01-01

    Organ segmentation in CT images is the basis of organ model reconstruction; thus precisely detecting and extracting the organ boundary is key for reconstruction. In CT images the cardiac region is often adjacent to the surrounding tissues and the gray gradient between them is slight, which causes difficulty in applying classical segmentation methods. We propose a novel algorithm for cardiac segmentation in CT images in this paper, which combines gray gradient methods and B-spline interpolation. This algorithm can accurately detect the cardiac boundaries and, because of the automatic processing, it remains time-efficient.

  14. Spline-procedures

    International Nuclear Information System (INIS)

    Schmidt, R.

    1976-12-01

    This report contains a short introduction to spline functions as well as a complete description of the spline procedures presently available in the HMI-library. These include polynomial splines (using either B-splines or one-sided basis representations) and natural splines, as well as their application to interpolation, quasiinterpolation, L2-, and Tchebycheff approximation. Special procedures are included for the case of cubic splines. Complete test examples with input and output are provided for each of the procedures. (orig.)

  15. Spline approximation, Part 1: Basic methodology

    Science.gov (United States)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
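
A minimal example of the "splines from truncated polynomials" construction mentioned above is a least-squares fit in the truncated-power basis; the data, knot positions and noise level below are invented for illustration.

```python
import numpy as np

# Least-squares fit of a cubic spline written in the truncated-power basis:
# s(x) = a0 + a1 x + a2 x^2 + a3 x^3 + sum_j b_j (x - k_j)_+^3
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)
knots = np.linspace(0.0, 1.0, 7)[1:-1]                  # interior knots

A = np.column_stack([x ** p for p in range(4)] +
                    [np.maximum(x - k, 0.0) ** 3 for k in knots])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ coeffs
print("rms residual:", np.sqrt(np.mean((y - fitted) ** 2)))
```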

  16. Testing controlled productive knowledge of adverb-verb collocations ...

    African Journals Online (AJOL)

    A controlled productive test of adverb-verb collocations … The third approach to studying collocations, corpus analysis, … The collocation web model is thought to match Nation's (2001) psychological … Theory, analysis, and applications. … Canadian Modern … Focus on vocabulary: Mastering the Academic Word List.

  17. B-spline solution of a singularly perturbed boundary value problem arising in biology

    International Nuclear Information System (INIS)

    Lin Bin; Li Kaitai; Cheng Zhengxing

    2009-01-01

    We use B-spline functions to develop a numerical method for solving a singularly perturbed boundary value problem associated with biology science. We use the B-spline collocation method, which leads to a tridiagonal linear system. The accuracy of the proposed method is demonstrated by test problems. The numerical results are found to be in good agreement with the exact solution.

  18. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    Science.gov (United States)

    Michael S. Balshi; A. David McGuire; Paul Duffy; Mike Flannigan; John Walsh; Jerry Melillo

    2009-01-01

    We developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was...

  19. Rectangular spectral collocation

    KAUST Repository

    Driscoll, Tobin A.

    2015-02-06

    Boundary conditions in spectral collocation methods are typically imposed by removing some rows of the discretized differential operator and replacing them with others that enforce the required conditions at the boundary. A new approach based upon resampling differentiated polynomials into a lower-degree subspace makes differentiation matrices, and operators built from them, rectangular without any row deletions. Then, boundary and interface conditions can be adjoined to yield a square system. The resulting method is both flexible and robust, and avoids ambiguities that arise when applying the classical row deletion method outside of two-point scalar boundary-value problems. The new method is the basis for ordinary differential equation solutions in Chebfun software, and is demonstrated for a variety of boundary-value, eigenvalue and time-dependent problems.
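
The classical row-replacement technique that this paper improves upon can be shown in a short Chebyshev collocation example (following Trefethen's well-known formulation): boundary rows of the square differentiation matrix are overwritten to impose Dirichlet conditions. The test problem is an assumption chosen so that the exact solution is known; the rectangular resampling approach itself is not reproduced here.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points (Trefethen, Spectral Methods in MATLAB)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Solve u'' = exp(4x) on [-1, 1] with u(-1) = u(1) = 0 via row replacement
N = 16
D, x = cheb(N)
A = D @ D
A[0, :] = 0.0;  A[0, 0] = 1.0        # replace first row: enforce u(1) = 0
A[-1, :] = 0.0; A[-1, -1] = 1.0      # replace last row:  enforce u(-1) = 0
f = np.exp(4 * x); f[0] = 0.0; f[-1] = 0.0
u = np.linalg.solve(A, f)

exact = (np.exp(4 * x) - np.sinh(4) * x - np.cosh(4)) / 16
print("max error:", np.max(np.abs(u - exact)))
```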

  20. Developing and Evaluating a Web-Based Collocation Retrieval Tool for EFL Students and Teachers

    Science.gov (United States)

    Chen, Hao-Jan Howard

    2011-01-01

    The development of adequate collocational knowledge is important for foreign language learners; nonetheless, learners often have difficulties in producing proper collocations in the target language. Among the various ways of learning collocations, the DDL (data-driven learning) approach encourages independent learning of collocations and allows…
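
As a flavour of the data-driven (DDL) extraction that tools of this kind build on (this is not the authors' web-based tool), NLTK's collocation finder can rank candidate word pairs in a corpus by pointwise mutual information; the corpus choice, frequency filter and association measure below are arbitrary.

```python
import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# nltk.download('brown')                            # uncomment on first run
from nltk.corpus import brown

words = [w.lower() for w in brown.words(categories='news') if w.isalpha()]
finder = BigramCollocationFinder.from_words(words)
finder.apply_freq_filter(5)                         # ignore very rare pairs
measures = BigramAssocMeasures()
print(finder.nbest(measures.pmi, 20))               # top 20 collocation candidates
```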

  1. Lexical and Grammatical Collocations in Writing Production of EFL Learners

    Directory of Open Access Journals (Sweden)

    Maryam Bahardoust

    2012-05-01

    Full Text Available Lewis (1993) recognized the significance of word combinations, including collocations, by presenting the lexical approach. Because of the crucial role of collocation in vocabulary acquisition, this research set out to evaluate the rate of collocations in Iranian EFL learners' writing production across L1 and L2. In addition, L1 interference with L2 collocational use in the learners' writing samples was studied. To achieve this goal, 200 Persian EFL learners at BA level were selected. These participants were taking paragraph writing and essay writing courses in two successive semesters. As for the data analysis, the mid-term and final exams, as well as the assignments of the L2 learners, were evaluated. Because of the nominal nature of the data, a chi-square test was utilized for data analysis. Then, the rate of lexical and grammatical collocations was calculated. Results showed that the lexical collocations outnumbered the grammatical collocations. Different categories of lexical collocations were also compared with regard to their frequencies in EFL writing production. The rate of the verb-noun and adjective-noun collocations appeared to be the highest and noun-verb collocations the lowest. The results also showed that L1 had both positive and negative effects on the occurrence of both grammatical and lexical collocations.

  2. Spline Interpolation of Image

    OpenAIRE

    I. Kuba; J. Zavacky; J. Mihalik

    1995-01-01

    This paper presents the use of B-spline functions in various digital signal processing applications. The theory of one-dimensional B-spline interpolation is briefly reviewed, followed by its extension to two dimensions. After presenting one- and two-dimensional spline interpolation, algorithms for image interpolation and resolution enhancement are proposed. Finally, experimental results of computer simulations are presented.

  3. Mobile Collocated Interactions

    DEFF Research Database (Denmark)

    Lucero, Andrés; Clawson, James; Lyons, Kent

    2015-01-01

    Mobile devices such as smartphones and tablets were originally conceived and have traditionally been utilized for individual use. Research on mobile collocated interactions has been looking at situations in which collocated users engage in collaborative activities using their mobile devices, thus going from personal/individual toward shared/multiuser experiences and interactions. However, computers are getting smaller, more powerful, and closer to our bodies. Therefore, mobile collocated interactions research, which originally looked at smartphones and tablets, will inevitably include ever-smaller computers, ones that can be worn on our wrists or other parts of the body. The focus of this workshop is to bring together a community of researchers, designers and practitioners to explore the potential of extending mobile collocated interactions to the use of wearable devices.

  4. Slovene-English Contrastive Phraseology: Lexical Collocations

    Directory of Open Access Journals (Sweden)

    Primož Jurko

    2010-05-01

    Full Text Available Phraseology is seen as one of the key elements and arguably the most productive part of any language. The paper is focused on collocations and separates them from other phraseological units, such as idioms or compounds. Highlighting the difference between a monolingual and a bilingual (i.e. contrastive) approach to collocation, the article presents two distinct classes of collocations: grammatical and lexical. The latter, treated contrastively, represent the focal point of the paper, since they are an unending source of translation errors to both students of translation and professional translators. The author introduces a methodology of systematic classification of lexical collocations applied on the Slovene-English language pair and based on structural (lexical congruence) and semantic (translational predictability) criteria.

  5. Gamma Splines and Wavelets

    Directory of Open Access Journals (Sweden)

    Hannu Olkkonen

    2013-01-01

    Full Text Available In this work we introduce a new family of splines, termed gamma splines, for continuous signal approximation and multiresolution analysis. The gamma splines are obtained by repeated convolution of the exponential with itself. We study the properties of the discrete gamma splines in signal interpolation and approximation. We prove that the gamma splines obey the two-scale equation based on the polyphase decomposition, which allows us to introduce the shift-invariant gamma spline wavelet transform for tree-structured subscale analysis of asymmetric signal waveforms and for systems with asymmetric impulse response. Especially we consider applications in biomedical signal analysis (EEG, ECG, and EMG). Finally, we discuss the suitability of gamma spline signal processing in an embedded VLSI environment.

  6. Fourier Collocation Approach With Mesh Refinement Method for Simulating Transit-Time Ultrasonic Flowmeters Under Multiphase Flow Conditions.

    Science.gov (United States)

    Simurda, Matej; Duggen, Lars; Basse, Nils T; Lassen, Benny

    2018-02-01

    A numerical model for transit-time ultrasonic flowmeters operating under multiphase flow conditions previously presented by us is extended by mesh refinement and grid point redistribution. The method solves modified first-order stress-velocity equations of elastodynamics with additional terms to account for the effect of the background flow. Spatial derivatives are calculated by a Fourier collocation scheme allowing the use of the fast Fourier transform, while the time integration is realized by the explicit third-order Runge-Kutta finite-difference scheme. The method is compared against analytical solutions and experimental measurements to verify the benefit of using mapped grids. Additionally, a study of clamp-on and in-line ultrasonic flowmeters operating under multiphase flow conditions is carried out.

  7. Geostationary satellites collocation

    CERN Document Server

    Li, Hengnian

    2014-01-01

    Geostationary Satellites Collocation aims to find solutions for deploying a safe and reliable collocation control. Focusing on the orbital perturbation analysis, the mathematical foundations for orbit and control of the geostationary satellite are summarized. The mathematical and physical principle of orbital maneuver and collocation strategies for multi geostationary satellites sharing with the same dead band is also stressed. Moreover, the book presents some applications using the above algorithms and mathematical models to help readers master the corrective method for planning station keeping maneuvers. Engineers and scientists in the fields of aerospace technology and space science can benefit from this book. Hengnian Li is the Deputy Director of State Key Laboratory of Astronautic Dynamics, China.

  8. On Collocations and Their Interaction with Parsing and Translation

    Directory of Open Access Journals (Sweden)

    Violeta Seretan

    2013-10-01

    Full Text Available We address the problem of automatically processing collocations—a subclass of multi-word expressions characterized by a high degree of morphosyntactic flexibility—in the context of two major applications, namely, syntactic parsing and machine translation. We show that parsing and collocation identification are processes that are interrelated and that benefit from each other, inasmuch as syntactic information is crucial for acquiring collocations from corpora and, vice versa, collocational information can be used to improve parsing performance. Similarly, we focus on the interrelation between collocations and machine translation, highlighting the use of translation information for multilingual collocation identification, as well as the use of collocational knowledge for improving translation. We give a panorama of the existing relevant work, and we parallel the literature surveys with our own experiments involving a symbolic parser and a rule-based translation system. The results show a significant improvement over approaches in which the corresponding tasks are decoupled.

  9. Interpolating cubic splines

    CERN Document Server

    Knott, Gary D

    2000-01-01

    A spline is a thin flexible strip composed of a material such as bamboo or steel that can be bent to pass through or near given points in the plane, or in 3-space, in a smooth manner. Mechanical engineers and drafting specialists find such (physical) splines useful in designing and in drawing plans for a wide variety of objects, such as for hulls of boats or for the bodies of automobiles where smooth curves need to be specified. These days, physical splines are largely replaced by computer software that can compute the desired curves (with appropriate encouragement). The same mathematical ideas used for computing "spline" curves can be extended to allow us to compute "spline" surfaces. The application of these mathematical ideas is rather widespread. Spline functions are central to computer graphics disciplines. Spline curves and surfaces are used in computer graphics renderings for both real and imaginary objects. Computer-aided-design (CAD) systems depend on algorithms for computing spline func...
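
In code, the interpolating cubic splines the book describes correspond, for example, to scipy's CubicSpline; the design points below are arbitrary.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# An interpolating cubic spline through a handful of design points
x = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
y = np.array([0.0, 0.8, -0.2, 0.5, 0.0])
spline = CubicSpline(x, y, bc_type='natural')   # natural end conditions

xx = np.linspace(0.0, 5.0, 200)
yy = spline(xx)                                 # smooth curve through the points
print(float(spline(2.5)))                       # reproduces the data point: -0.2
```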

  10. APLIKASI SPLINE ESTIMATOR TERBOBOT

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2001-01-01

    Full Text Available We consider the nonparametric regression model Zj = X(tj) + ej, j = 1,2,…,n, where X(tj) is the regression curve. The random errors ej are independently normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of X is obtained by minimizing a weighted least squares criterion. The solution of this optimization is a weighted polynomial spline. Further, we give an application of the weighted spline estimator in nonparametric regression. Abstract in Bahasa Indonesia (translated): Given the nonparametric regression model Zj = X(tj) + ej, j = 1,2,…,n, with X(tj) the regression curve and ej random errors assumed to be normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of the regression curve X that minimizes a weighted penalized least squares criterion is a weighted natural polynomial spline estimator. Next, an application of the weighted spline estimator in nonparametric regression is given. Keywords: weighted spline, nonparametric regression, penalized least squares.
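
A scipy analogue of the weighted spline estimation described above (not the authors' penalized formulation, just the standard weighted smoothing spline) is sketched below; the precision weights b_j, the noise level, and the smoothing level are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 100)
b = 0.5 + rng.uniform(0.0, 1.0, size=t.size)                    # precision weights b_j
z = np.sin(2 * np.pi * t) + rng.normal(scale=0.3 / np.sqrt(b))  # var = sigma^2 / b_j

# Weighted smoothing spline: weight each residual by 1/std, i.e. sqrt(b_j)/sigma
spline = UnivariateSpline(t, z, w=np.sqrt(b) / 0.3, k=3, s=t.size)
x_hat = spline(t)                                               # estimated regression curve
print("rms error:", np.sqrt(np.mean((x_hat - np.sin(2 * np.pi * t)) ** 2)))
```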

  11. Collocations in Marine Engineering English

    Directory of Open Access Journals (Sweden)

    Mirjana Borucinsky

    2016-05-01

    Full Text Available Collocations are very frequent in the English language (Hill, 2000), and they are probably the most common and most representative of English multi-word expressions (Lewis, 2000). Furthermore, as a subset of formulaic sequences, collocations are considered to be a central aspect of communicative competence (Nation, 2001). Hence, the importance of teaching collocations in General English (GE) as well as in English for Specific Purposes (ESP) is undeniable. Understanding and determining the relevant collocations and their mastery are of "utmost importance to a ME instructor" (Cole et al., 2007, p. 137), and collocations are one of the most productive ways of enriching vocabulary and terminology in modern ME. Vişan & Georgescu (2011) have undertaken a relevant study on collocations and "collocational competence" on board ships, including mostly nautical terminology. However, no substantial work on collocations in Marine Engineering English as a sub-register of ME has been carried out. Hence, this paper tries to determine the most important collocations in Marine Engineering English, based on a small corpus of collected e-mails. After determining the most relevant collocations, we suggest how to implement these in the language classroom and how to improve the collocational competence of marine engineering students.

  12. Designing interactively with elastic splines

    DEFF Research Database (Denmark)

    Brander, David; Bærentzen, Jakob Andreas; Fisker, Ann-Sofie

    2018-01-01

    We present an algorithm for designing interactively with C1 elastic splines. The idea is to design the elastic spline using a C1 cubic polynomial spline where each polynomial segment is so close to satisfying the Euler-Lagrange equation for elastic curves that the visual difference becomes negligible. Using a database of cubic Bézier curves we are able to interactively modify the cubic spline such that it remains visually close to an elastic spline.

  13. P-Splines Using Derivative Information

    KAUST Repository

    Calderon, Christopher P.

    2010-01-01

    Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information about complex biomolecular systems. We demonstrate how a collection of Penalized-splines (P-splines) can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale. P-splines assist in "semiparametrically" estimating nonlinear SDEs in situations where a time-dependent external force is applied to a single-molecule system. The P-splines introduced simultaneously use function and derivative scatterplot information to refine curve estimates. We refer to the approach as the PuDI (P-splines using Derivative Information) method. It is shown how generalized least squares ideas fit seamlessly into the PuDI method. Applications demonstrating how utilizing uncertainty information/approximations along with generalized least squares techniques improve PuDI fits are presented. Although the primary application here is in estimating nonlinear SDEs, the PuDI method is applicable to situations where both unbiased function and derivative estimates are available.
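
As background for the P-spline component of the PuDI approach (the derivative information and generalized least squares refinements are not shown), the basic penalized B-spline fit couples a rich B-spline basis with a difference penalty on the coefficients; the knot grid, penalty order, and smoothing parameter below are assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(3 * np.pi * x) + 0.1 * rng.normal(size=x.size)

# Cubic B-spline basis on an equidistant (clamped) knot grid
k, n_seg = 3, 25
inner = np.linspace(0.0, 1.0, n_seg + 1)
t = np.r_[[0.0] * k, inner, [1.0] * k]
n_basis = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(x) for j in range(n_basis)])

# P-spline: least squares plus a second-order difference penalty on the coefficients
D = np.diff(np.eye(n_basis), n=2, axis=0)         # second-difference operator
lam = 1.0                                         # smoothing parameter
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef
print("rms residual:", np.sqrt(np.mean((y - fit) ** 2)))
```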

  14. Collocations and collocation types in ESP textbooks: Quantitative pedagogical analysis

    Directory of Open Access Journals (Sweden)

    Bogdanović Vesna Ž.

    2016-01-01

    Full Text Available The term collocation, even though it is rather common in English grammar, is not a well-known or commonly used term in textbooks and scientific papers written in the Serbian language. Collocating is usually defined as the natural appearance of two (or more) words, which usually occur next to one another even though they can be separated in the text, while collocations are defined as words with natural semantic and/or syntactic relations joined together in a sentence. Collocations are naturally used in all English written texts, including scientific texts and papers. Using two textbooks for English for Specific Purposes (ESP) for intermediate students' courses, this paper presents the frequency of collocations and their typology. The paper tries to investigate the relationship between lexical and grammatical collocations written in the ESP texts and the reasons for their presence. There is an overview of the most used subtypes of lexical collocations as well. Furthermore, by applying basic corpus analysis based on quantitative analysis, the paper presents the number of open, restricted and bound collocations in ESP texts, trying to draw conclusions on their frequency and hence the modes for their learning. There is also a section related to the number and usage of scientific collocations, both common scientific and narrow-professional ones. The conclusion is that the number of collocations present in the selected two textbooks imposes a demand for further analysis of these lexical connections, as well as new modes for their teaching and presentation to English-learning students.

  15. Dryland Agrivoltaics: A novel approach to collocating food production and solar renewable energy to maximize food production, water savings, and energy generation

    Science.gov (United States)

    Barron-Gafford, G.; Escobedo, E. B.; Smith, J.; Raub, H.; Jimenez, J. R.; Sutter, L., Jr.; Barnett-Moreno, I.; Blackett, D. T.; Thompson, M. S.; Minor, R. L.; Pavao-Zuckerman, M.

    2017-12-01

    Conventional understanding of land use asserts an inherent "zero-sum-game" of competition between renewable energy and agricultural food production. This discourse is so fundamentally entrenched that it drives most current policy around conservation practices, land and water allotments for agriculture, and permitting for large-scale renewable energy installations. We are investigating a novel approach to solve a problem key to our environment and economy in drylands by creating a hybrid of collocated "green" agriculture and "grey" solar photovoltaic (PV) infrastructure to maximize agricultural production while improving renewable energy production. We are monitoring atmospheric microclimatic conditions, soil moisture, plant ecophysiological function, and biomass production within both this novel "agrivoltaics" ecosystem and traditional PV installations and agricultural settings (control plot) to quantify tradeoffs associated with this approach. We have found that soil moisture remained higher after each irrigation event under the agrivoltaics installation than in the traditional agricultural setting, due to the shading provided by the PV panels overhead. We initiated a drought treatment, which underscored the water savings under the agrivoltaics installation and the increased water use efficiency in this system. We hypothesized that we would see more temperature and drought stress on photosynthetic capacity and water use efficiency in the control plants relative to the agrivoltaic installation, and we found that several food crops experienced significantly more production within the agrivoltaics area, whereas others achieved nearly equal production at significant water savings. Combined with localized cooling of the PV panels resulting from transpiration of the vegetative "understory", we are finding a win-win-win at the food-water-energy nexus.

  16. Nonrigid registration of dynamic medical imaging data using nD + t B-splines and a groupwise optimization approach.

    Science.gov (United States)

    Metz, C T; Klein, S; Schaap, M; van Walsum, T; Niessen, W J

    2011-04-01

    A registration method for motion estimation in dynamic medical imaging data is proposed. Registration is performed directly on the dynamic image, thus avoiding a bias towards a specifically chosen reference time point. Both spatial and temporal smoothness of the transformations are taken into account. Optionally, cyclic motion can be imposed, which can be useful for visualization (viewing the segmentation sequentially) or model building purposes. The method is based on a 3D (2D+time) or 4D (3D+time) free-form B-spline deformation model, a similarity metric that minimizes the intensity variances over time and constrained optimization using a stochastic gradient descent method with adaptive step size estimation. The method was quantitatively compared with existing registration techniques on synthetic data and 3D+t computed tomography data of the lungs. This showed subvoxel accuracy while delivering smooth transformations, and high consistency of the registration results. Furthermore, the accuracy of semi-automatic derivation of left ventricular volume curves from 3D+t computed tomography angiography data of the heart was evaluated. On average, the deviation from the curves derived from the manual annotations was approximately 3%. The potential of the method for other imaging modalities was shown on 2D+t ultrasound and 2D+t magnetic resonance images. The software is publicly available as an extension to the registration package elastix.
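
    For orientation, the following toy 2D sketch illustrates the free-form B-spline deformation idea that underlies such registration: a coarse control-point displacement grid is spline-upsampled to a dense deformation and used to warp an image. It is not the elastix implementation, and the image and displacement values are made up.

        import numpy as np
        from scipy import ndimage

        # Hypothetical moving image and a coarse grid of control-point displacements.
        img = np.random.default_rng(1).random((64, 64))
        ctrl_dy = 2.0 * np.random.default_rng(2).standard_normal((9, 9))  # y-displacements
        ctrl_dx = 2.0 * np.random.default_rng(3).standard_normal((9, 9))  # x-displacements

        # Cubic-spline upsampling of the control grid gives a smooth dense deformation,
        # in the spirit of a free-form B-spline deformation model.
        zoom = [s / c for s, c in zip(img.shape, ctrl_dy.shape)]
        dy = ndimage.zoom(ctrl_dy, zoom, order=3)
        dx = ndimage.zoom(ctrl_dx, zoom, order=3)

        yy, xx = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]), indexing="ij")
        warped = ndimage.map_coordinates(img, [yy + dy, xx + dx], order=3, mode="nearest")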

  17. Mobile Collocated Interactions With Wearables

    DEFF Research Database (Denmark)

    Lucero, Andrés; Wilde, Danielle; Robinson, Simon

    2015-01-01

    Research on mobile collocated interactions has been looking at situations in which collocated users engage in collaborative activities using their mobile devices, thus going from personal/individual toward shared/multiuser experiences and interactions. However, computers are getting smaller, more...

  18. Collocation Impact on Team Effectiveness

    Directory of Open Access Journals (Sweden)

    M Eccles

    2010-11-01

    The collocation of software development teams is common, especially in agile software development environments. However, little is known about the impact of collocation on a team's effectiveness. This paper explores the impact of collocating agile software development teams on a number of team effectiveness factors. The study focused on South African software development teams and gathered data through the use of questionnaires and interviews. The key finding was that collocation has a positive impact on a number of team effectiveness factors, which can be categorised under team composition, team support, team management and structure, and team communication. Some of the negative impacts of collocation on team effectiveness relate to the perception that less emphasis was placed on roles, that the morale of the group was influenced by individuals, and that collocation was invasive, reduced the level of privacy, and increased the frequency of interruptions. Overall, though, it is proposed that companies should consider collocating their agile software development teams, as collocation might leverage overall team effectiveness.

  19. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    Science.gov (United States)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

    We have designed a cubic spline wavelet decomposition for the Sobolev space H^2_0(I), where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This transform maps discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for initial value boundary problems of nonlinear PDE's. We then test the efficiency of the DWT and apply the collocation method to solve linear and nonlinear PDE's.

  20. Deconvolution using thin-plate splines

    International Nuclear Information System (INIS)

    Toussaint, Udo v.; Gori, Silvio

    2007-01-01

    The ubiquitous problem of estimating 2-dimensional profile information from a set of line-integrated measurements is tackled with Bayesian probability theory by exploiting prior information about local smoothness. For this purpose thin-plate splines (the 2-D minimal-curvature analogue of cubic splines in 1-D) are employed. The optimal number of support points required for the inversion of 2-D tomographic problems is determined using model comparison. Properties of this approach are discussed and the question of suitable priors is addressed. Finally, we illustrate the approach with 2-D inversion results using data from line-integrated measurements from fusion experiments.
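
    A small sketch of the thin-plate-spline building block only (the Bayesian line-integral inversion itself is not reproduced), using SciPy's RBF interpolator with a thin-plate-spline kernel; the support points, values, and smoothing parameter are hypothetical.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(0)
        pts  = rng.uniform(-1, 1, size=(40, 2))                    # support-point locations
        vals = np.exp(-4*np.sum(pts**2, axis=1)) + 0.02*rng.standard_normal(40)

        # Thin-plate spline surface through the support points; `smoothing` trades
        # exact interpolation against smoothness.
        tps = RBFInterpolator(pts, vals, kernel="thin_plate_spline", smoothing=1e-3)

        xi = np.stack(np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50)), axis=-1)
        profile = tps(xi.reshape(-1, 2)).reshape(50, 50)           # reconstructed 2-D profile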

  1. Productive knowledge of collocations may predict academic literacy

    Directory of Open Access Journals (Sweden)

    Van Dyk, Tobie

    2016-12-01

    The present study examines the relationship between productive knowledge of collocations and academic literacy among first-year students at North-West University. Participants were administered a collocation test, the items of which were selected from Nation's (2006) word frequency bands, i.e. the 2000-word, 3000-word and 5000-word bands, and the Academic Word List (Coxhead, 2000). The scores from the collocation test were compared to those from the Test of Academic Literacy Levels (version administered in 2012). The results of this study indicate that, overall, knowledge of collocations is significantly correlated with academic literacy, which is also observed at each of the frequency bands from which the items were selected. These results support Nizonkiza's (2014) findings that a significant correlation exists between mastery of collocations of words from the Academic Word List and academic literacy; this is extended here to words from other frequency bands. They also confirm previous findings that productive knowledge of collocations increases alongside overall proficiency (cf. Gitsaki, 1999; Bonk, 2001; Eyckmans et al., 2004; Boers et al., 2006; Nizonkiza, 2011; among others). This study therefore concludes that growth in productive knowledge of collocations may entail growth in academic literacy, suggesting that productive use of collocations is linked to academic literacy to a considerable extent. In light of these findings, teaching strategies aimed at helping first-year students meet the academic demands posed by higher education, and avenues to explore for further research, are discussed. In particular, we suggest adopting a production-oriented approach to teaching collocations, which we believe may prove useful.

  2. Interpolation of natural cubic spline

    Directory of Open Access Journals (Sweden)

    Arun Kumar

    1992-01-01

    From the result in [1] it follows that there is a unique quadratic spline which bounds the same area as that of the function. The matching of the area for the cubic spline does not follow from the corresponding result proved in [2]. We obtain cubic splines which preserve the area of the function.

  3. Translating English Idioms and Collocations

    Directory of Open Access Journals (Sweden)

    Rochayah Machali

    2004-01-01

    Learners of English should be made aware of the nature, types, and use of English idioms. This paper discusses the nature of idioms and collocations and translation issues related to them.

  4. Modeling Fetal Weight for Gestational Age: A Comparison of a Flexible Multi-level Spline-based Model with Other Approaches

    Science.gov (United States)

    Villandré, Luc; Hutcheon, Jennifer A; Perez Trejo, Maria Esther; Abenhaim, Haim; Jacobsen, Geir; Platt, Robert W

    2011-01-01

    We present a model for longitudinal measures of fetal weight as a function of gestational age. We use a linear mixed model, with a Box-Cox transformation of fetal weight values, and restricted cubic splines, in order to flexibly but parsimoniously model median fetal weight. We systematically compare our model to other proposed approaches. All proposed methods are shown to yield similar median estimates, as evidenced by overlapping pointwise confidence bands, except after 40 completed weeks, where our method seems to produce estimates more consistent with observed data. Sex-based stratification affects the estimates of the random effects variance-covariance structure, without significantly changing sex-specific fitted median values. We illustrate the benefits of including sex-gestational age interaction terms in the model over stratification. The comparison leads to the conclusion that the selection of a model for fetal weight for gestational age can be based on the specific goals and configuration of a given study without affecting the precision or value of median estimates for most gestational ages of interest. PMID:21931571
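
    As an illustration of the restricted (natural) cubic spline ingredient, the sketch below builds Harrell-type basis columns and fits a toy growth curve by ordinary least squares; the knots and data are invented, and OLS stands in for the Box-Cox-transformed linear mixed model used in the paper.

        import numpy as np

        def rcs_basis(x, knots):
            """Restricted (natural) cubic spline basis: linear beyond the end knots."""
            x = np.asarray(x, float)
            t = np.asarray(knots, float)
            k = len(t)
            p = lambda u: np.maximum(u, 0.0) ** 3
            cols = [x]
            for j in range(k - 2):
                cols.append(p(x - t[j])
                            - p(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                            + p(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
            return np.column_stack(cols)

        # Toy fit of a median-type growth curve (ordinary least squares stands in for
        # the transformed linear mixed model; gestational ages and weights are invented).
        ga = np.linspace(22, 42, 300)                       # gestational age, weeks
        w  = 3500*np.exp(-((42 - ga)/12.0)**2) + 50*np.random.default_rng(0).standard_normal(ga.size)
        X  = np.column_stack([np.ones_like(ga), rcs_basis(ga, [24, 28, 32, 36, 40])])
        beta, *_ = np.linalg.lstsq(X, w, rcond=None)
        median_fit = X @ beta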

  5. B-splines and Faddeev equations

    International Nuclear Information System (INIS)

    Huizing, A.J.

    1990-01-01

    Two numerical methods for solving the three-body equations describing relativistic pion-deuteron scattering have been investigated. For separable two-body interactions these equations form a set of coupled one-dimensional integral equations. They are plagued by singularities, which occur in the kernel of the integral equations as well as in the solution. The methods to solve these equations differ in the way they treat the singularities. First the Fuda-Stuivenberg method is discussed. The basic idea of this method is a one-time iteration of the set of integral equations to treat the logarithmic singularities. In the second method, the spline method, the unknown solution is approximated by splines. Cubic splines have been used with cubic B-splines as basis. If the solution is approximated by a linear combination of basis functions, an integral equation can be transformed into a set of linear equations for the expansion coefficients. This set of linear equations is solved by standard means. Splines are determined by points called knots. A proper choice of splines to approximate the solution amounts to a proper choice of the knots. The solution of the three-body scattering equations has a square-root behaviour at a certain point. Hence it was investigated how the knots should be chosen to approximate the square-root function by cubic B-splines in an optimal way. Before applying this method to solve the three-body equations describing pion-deuteron scattering numerically, an analytically solvable example was constructed with a singularity structure of both kernel and solution comparable to that of the three-body equations. The accuracy of the numerical solution was determined to a large extent by the accuracy of the approximation of the square-root part. The results for a pion laboratory energy of 47.4 MeV agree very well with those from the literature. In a complete calculation for 47.7 MeV the spline method turned out to be a factor of a thousand faster than the Fuda-Stuivenberg method.
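
    The spline method described above reduces an integral equation to a linear system for B-spline expansion coefficients. A minimal sketch of that idea on a smooth, non-singular Fredholm model problem (not the Faddeev equations, and without any singularity handling) might look as follows; the kernel, right-hand side, and knot choices are hypothetical.

        import numpy as np
        from scipy.interpolate import BSpline

        # Model problem: f(x) - integral_0^1 K(x, s) f(s) ds = g(x) with a smooth kernel.
        K = lambda x, s: 0.5 * np.exp(-np.abs(x - s))
        g = lambda x: np.cos(np.pi * x)

        k = 3                                                        # cubic B-splines
        t = np.concatenate(([0]*4, np.linspace(0, 1, 9)[1:-1], [1]*4))   # clamped knots
        n = len(t) - k - 1                                           # number of basis functions
        basis = [BSpline.basis_element(t[j:j+k+2], extrapolate=False) for j in range(n)]

        def B(j, x):                                                 # j-th basis, 0 outside its support
            return np.nan_to_num(basis[j](x))

        xc = np.array([t[j+1:j+k+1].mean() for j in range(n)])       # Greville collocation points
        s, wq = np.polynomial.legendre.leggauss(40)                  # quadrature nodes on [0, 1]
        s, wq = 0.5*(s + 1), 0.5*wq

        A = np.empty((n, n))
        for i, xi in enumerate(xc):
            for j in range(n):
                A[i, j] = B(j, xi) - np.sum(wq * K(xi, s) * B(j, s))
        c = np.linalg.solve(A, g(xc))                                # spline coefficients of f
        f = lambda x: sum(c[j] * B(j, x) for j in range(n))          # approximate solution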

  6. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline functions theory. The origin of the book was an effort to show that spline theory parallels Hilbertian Kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise functions type. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  7. Spline techniques for magnetic fields

    International Nuclear Information System (INIS)

    Aspinall, J.G.

    1984-01-01

    This report is an overview of B-spline techniques, oriented toward magnetic field computation. These techniques form a powerful mathematical approximating method for many physics and engineering calculations. In section 1, the concept of a polynomial spline is introduced. Section 2 shows how a particular spline with well-chosen properties, the B-spline, can be used to build any spline. In section 3, the description of how to solve a simple spline approximation problem is completed, and some practical examples of using splines are shown. All these sections deal exclusively in scalar functions of one variable for simplicity. Section 4 is partly a digression. Techniques that are not B-spline techniques, but are closely related, are covered. These methods are not needed for what follows, until the last section on errors. Sections 5, 6, and 7 form a second group which works toward the final goal of using B-splines to approximate a magnetic field. Section 5 demonstrates how to approximate a scalar function of many variables. The necessary mathematics is completed in section 6, where the problems of approximating a vector function in general, and a magnetic field in particular, are examined. Finally, some algorithms and data organization are shown in section 7. Section 8 deals with error analysis.

  8. Splines and variational methods

    CERN Document Server

    Prenter, P M

    2008-01-01

    One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation-theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions.

  9. Straight-sided Spline Optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    and the subject of improving the design. The present paper concentrates on the optimization of splines and the predictions of stress concentrations, which are determined by finite element analysis (FEA). Using design modifications, that do not change the spline load carrying capacity, it is shown that large...

  10. Crosstalk statistics via collocation method

    NARCIS (Netherlands)

    Diouf, F.; Canavero, Flavio

    2009-01-01

    A probabilistic model for the evaluation of transmission lines crosstalk is proposed. The geometrical parameters are assumed to be unknown and the exact solution is decomposed into two functions, one depending solely on the random parameters and the other on the frequency. The stochastic collocation

  11. Marginal longitudinal semiparametric regression via penalized splines

    KAUST Repository

    Al Kadiri, M.

    2010-08-01

    We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been made, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.

  12. Marginal longitudinal semiparametric regression via penalized splines

    KAUST Repository

    Al Kadiri, M.; Carroll, R.J.; Wand, M.P.

    2010-01-01

    We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been made, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.

  13. On Characterization of Quadratic Splines

    DEFF Research Database (Denmark)

    Chen, B. T.; Madsen, Kaj; Zhang, Shuzhong

    2005-01-01

    that the representation can be refined in a neighborhood of a non-degenerate point and a set of non-degenerate minimizers. Based on these characterizations, many existing algorithms for specific convex quadratic splines are also finitely convergent for a general convex quadratic spline. Finally, we study the relationship between the convexity of a quadratic spline function and the monotonicity of the corresponding LCP problem. It is shown that, although both conditions lead to easy solvability of the problem, they are different in general.

  14. Stochastic Collocation Applications in Computational Electromagnetics

    Directory of Open Access Journals (Sweden)

    Dragan Poljak

    2018-01-01

    The paper reviews the application of deterministic-stochastic models in some areas of computational electromagnetics. Namely, in certain problems there is an uncertainty in the input data set as some properties of a system are partly or entirely unknown. Thus, a simple stochastic collocation (SC) method is used to determine relevant statistics about given responses. The SC approach also provides the assessment of related confidence intervals in the set of calculated numerical results. The expansion of the statistical output in terms of mean and variance over a polynomial basis, via the SC method, is shown to be a robust and efficient approach providing a satisfactory convergence rate. This review paper provides computational examples from previous work by the authors illustrating successful application of the SC technique in the areas of ground penetrating radar (GPR), human exposure to electromagnetic fields, and buried lines and grounding systems.

  15. Reference frame access under the effects of great earthquakes: a least squares collocation approach for non-secular post-seismic evolution

    Science.gov (United States)

    Gómez, D. D.; Piñón, D. A.; Smalley, R.; Bevis, M.; Cimbaro, S. R.; Lenzano, L. E.; Barón, J.

    2016-03-01

    The 2010 (Mw 8.8) Maule, Chile, earthquake produced large co-seismic displacements and non-secular, post-seismic deformation within latitudes 28°S-40°S, extending from the Pacific to the Atlantic oceans. Although these effects are easily resolvable by fitting geodetic extended trajectory models (ETM) to continuous GPS (CGPS) time series, the co- and post-seismic deformation cannot be determined at locations without CGPS (e.g., on passive geodetic benchmarks). To estimate the trajectories of passive geodetic benchmarks, we used CGPS time series to fit an ETM that includes the secular South American plate motion and plate boundary deformation, the co-seismic discontinuity, and the non-secular, logarithmic post-seismic transient produced by the earthquake in the Posiciones Geodésicas Argentinas 2007 (POSGAR07) reference frame (RF). We then used least squares collocation (LSC) to model both the background secular inter-seismic and the non-secular post-seismic components of the ETM at the locations without CGPS. We tested the LSC-modeled trajectories using campaign and CGPS data that were not used to generate the model and found standard deviations (95% confidence level) for position estimates of 3.8 and 5.5 mm for the north and east components, respectively, indicating that the model predicts the post-seismic deformation field very well. Finally, we added the co-seismic displacement field, estimated using an elastic finite element model. The final trajectory model allows accessing the POSGAR07 RF using post-Maule-earthquake coordinates within 5 cm for ~91% of the passive test benchmarks.
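
    A compact NumPy sketch of the least squares collocation prediction step itself: residual signals observed at CGPS sites are propagated to passive benchmarks through an assumed signal covariance. The Gaussian covariance model, correlation length, noise level, and coordinates below are illustrative, not the values used for POSGAR07.

        import numpy as np

        def lsc_predict(xy_obs, resid, xy_new, sigma2=25.0, corr_len=150.0, noise=4.0):
            """Least squares collocation: predict the correlated signal at new points
            from residuals observed at CGPS sites (hypothetical Gaussian covariance)."""
            d_oo = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
            d_no = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
            C_oo = sigma2 * np.exp(-(d_oo / corr_len) ** 2)      # signal covariance, obs-obs
            C_no = sigma2 * np.exp(-(d_no / corr_len) ** 2)      # signal covariance, new-obs
            return C_no @ np.linalg.solve(C_oo + noise * np.eye(len(xy_obs)), resid)

        # Example: residual post-seismic displacements (mm) at CGPS sites (km coordinates).
        rng = np.random.default_rng(0)
        xy_obs = rng.uniform(0, 500, size=(30, 2))
        resid  = 20 * np.exp(-xy_obs[:, 0] / 300) + rng.standard_normal(30)
        xy_new = np.array([[120.0, 250.0], [400.0, 80.0]])       # passive benchmarks
        pred = lsc_predict(xy_obs, resid, xy_new)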

  16. Measuring receptive collocational competence across proficiency levels

    Directory of Open Access Journals (Sweden)

    Déogratias Nizonkiza

    2015-12-01

    The present study investigates (i) English as a Foreign Language (EFL) learners' receptive collocational knowledge growth in relation to their linguistic proficiency level; (ii) how much receptive collocational knowledge is acquired as proficiency develops; and (iii) the extent to which receptive knowledge of collocations of EFL learners varies across word frequency bands. A proficiency measure and a collocation test were administered to English majors at the University of Burundi. Results of the study suggest that receptive collocational competence develops alongside EFL learners' linguistic proficiency, which lends empirical support to Gyllstad (2007, 2009) and Author (2011), among others, who reported similar findings. Furthermore, EFL learners' collocation growth seems to be quantifiable, wherein both linguistic proficiency level and word frequency play a crucial role. While more gains, in terms of collocations that EFL learners could potentially add as a result of a change in proficiency, are found at lower levels of proficiency, collocations of words from more frequent word bands seem to be mastered first, and more gains are found at more frequent word bands. These results confirm earlier findings on the non-linear nature of vocabulary growth (cf. Meara 1996) and the fundamental role played by frequency in word knowledge for vocabulary in general (Nation 1983, 1990; Nation and Beglar 2007), which are extended here to collocation knowledge.

  17. USING SPLINE FUNCTIONS FOR THE SUBSTANTIATION OF TAX POLICIES BY LOCAL AUTHORITIES

    Directory of Open Access Journals (Sweden)

    Otgon Cristian

    2011-07-01

    The paper aims to approach innovative financial instruments for the management of public resources. Among these innovative tools are polynomial spline functions, used for budgetary sizing in the substantiation of fiscal and budgetary policies. Using polynomial spline functions involves a number of steps: establishing the nodes, calculating the coefficients of the spline functions, and determining the errors of approximation. The paper also extrapolates a property tax data series using polynomial spline functions of order one. For the spline implementation, two data series were taken, one referring to property tax as the resultative variable and the other referring to building tax, resulting in a correlation indicator R = 0.95. Moreover, the spline functions are easy to compute and, owing to their small approximation errors, have great predictive power, much better than the ordinary least squares method. The research relies on several steps, namely observation, construction of the data series, and processing of the data with spline functions. The data are daily series gathered from the budget account, referring to building tax and property tax. The added value of this paper lies in the possibility of avoiding deficits by using spline functions as innovative instruments in public finance; the original contribution is the averaging of the splines resulting from the data series. The research results lead to the conclusion that polynomial spline functions are recommended for the elaboration of fiscal and budgetary policies, owing to the relatively small errors obtained in the extrapolation of economic processes and phenomena. Future research directions include the study of polynomial spline functions of second order, third

  18. A meshless scheme for partial differential equations based on multiquadric trigonometric B-spline quasi-interpolation

    International Nuclear Information System (INIS)

    Gao Wen-Wu; Wang Zhi-Gang

    2014-01-01

    Based on the multiquadric trigonometric B-spline quasi-interpolant, this paper proposes a meshless scheme for some partial differential equations whose solutions are periodic with respect to the spatial variable. This scheme takes into account the periodicity of the analytic solution by using derivatives of a periodic quasi-interpolant (the multiquadric trigonometric B-spline quasi-interpolant) to approximate the spatial derivatives of the equations. Thus, it overcomes the difficulties of previous schemes based on quasi-interpolation (requiring additional boundary conditions and yielding unwanted high-order discontinuous points at the boundaries of the spatial domain). Moreover, the scheme also overcomes the difficulty of meshless collocation methods (i.e., yielding a notoriously ill-conditioned linear system of equations for large numbers of collocation points). The numerical examples presented at the end of the paper show that the scheme provides excellent approximations to the analytic solutions.

  19. Improving academic literacy by teaching collocations | Nizonkiza ...

    African Journals Online (AJOL)

    This study explores the effect of teaching collocations on building academic vocabulary and hence improving academic writing abilities. ... They were presented with a completion task and an essay-writing task before and after being exposed to a collocation-based syllabus.

  20. Supporting Collocation Learning with a Digital Library

    Science.gov (United States)

    Wu, Shaoqun; Franken, Margaret; Witten, Ian H.

    2010-01-01

    Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on…

  1. Measuring receptive collocational competence across proficiency ...

    African Journals Online (AJOL)

    The present study investigates (i) English as Foreign Language (EFL) learners' receptive collocational knowledge growth in relation to their linguistic proficiency level; (ii) how much receptive collocational knowledge is acquired as linguistic proficiency develops; and (iii) the extent to which receptive knowledge of ...

  2. "Minimum input, maximum output, indeed!" Teaching Collocations ...

    African Journals Online (AJOL)

    Fifty-nine EFL college students participated in the study, and they received two 75-minute instruction sessions between pre- and post-tests: one on the definition of collocation and its importance, and the other on the skill of looking up collocational information in the Naver Dictionary — an English–Korean online dictionary. During ...

  3. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  4. Quasi interpolation with Voronoi splines.

    Science.gov (United States)

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore, this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework.

  5. Symmetric, discrete fractional splines and Gabor systems

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2006-01-01

    In this paper we consider fractional splines as windows for Gabor frames. We introduce two new types of symmetric, fractional splines in addition to one found by Unser and Blu. For the finite, discrete case we present two families of splines: one is created by sampling and periodizing the continuous splines, and one is a truly finite, discrete construction. We discuss the properties of these splines and their usefulness as windows for Gabor frames and Wilson bases.

  6. Isogeometric analysis using T-splines

    KAUST Repository

    Bazilevs, Yuri

    2010-01-01

    We explore T-splines, a generalization of NURBS enabling local refinement, as a basis for isogeometric analysis. We review T-splines as a surface design methodology and then develop it for engineering analysis applications. We test T-splines on some elementary two-dimensional and three-dimensional fluid and structural analysis problems and attain good results in all cases. We summarize the current status of T-splines, their limitations, and future possibilities.

  7. The Challenge of English Language Collocation Learning in an ES/FL Environment: PRC Students in Singapore

    Science.gov (United States)

    Ying, Yang

    2015-01-01

    This study aimed to seek an in-depth understanding about English collocation learning and the development of learner autonomy through investigating a group of English as a Second Language (ESL) learners' perspectives and practices in their learning of English collocations using an AWARE approach. A group of 20 PRC students learning English in…

  8. A New Predictive Model Based on the ABC Optimized Multivariate Adaptive Regression Splines Approach for Predicting the Remaining Useful Life in Aircraft Engines

    Directory of Open Access Journals (Sweden)

    Paulino José García Nieto

    2016-05-01

    Remaining useful life (RUL) estimation is considered one of the most central points in prognostics and health management (PHM). The present paper describes a nonlinear hybrid ABC–MARS-based model for the prediction of the remaining useful life of aircraft engines. Indeed, it is well known that an accurate RUL estimation allows failure prevention in a more controllable way, so that effective maintenance can be carried out in appropriate time to correct impending faults. The proposed hybrid model combines multivariate adaptive regression splines (MARS), which have been successfully adopted for regression problems, with the artificial bee colony (ABC) technique. This optimization technique involves parameter setting in the MARS training procedure, which significantly influences the regression accuracy. However, its use in reliability applications has not yet been widely explored. Bearing this in mind, remaining useful life values have been predicted here with success using the hybrid ABC–MARS-based model from the remaining measured parameters (input variables) for aircraft engines. A correlation coefficient equal to 0.92 was obtained when this hybrid ABC–MARS-based model was applied to experimental data. The agreement of this model with experimental data confirmed its good performance. The main advantage of this predictive model is that it does not require information about the previous operation states of the aircraft engine.
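
    To make the MARS ingredient concrete, the sketch below fits a curve with paired hinge functions at fixed, hypothetical knots using ordinary least squares; real MARS (and the ABC-optimized variant above) selects knots and interaction terms adaptively by a forward pass and pruning, which is omitted here.

        import numpy as np

        # MARS builds its model from paired hinge functions max(0, x - t) and max(0, t - x).
        # Here the knots are fixed and illustrative; the fit is plain least squares.
        rng = np.random.default_rng(0)
        x = rng.uniform(0, 10, 300)
        y = np.where(x < 4, 2*x, 8 + 0.5*(x - 4)) + 0.3*rng.standard_normal(x.size)

        knots = [2.0, 4.0, 6.0, 8.0]
        H = np.column_stack([np.ones_like(x)] +
                            [np.maximum(0, x - t) for t in knots] +
                            [np.maximum(0, t - x) for t in knots])
        coef, *_ = np.linalg.lstsq(H, y, rcond=None)
        y_hat = H @ coef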

  9. Spline and spline wavelet methods with applications to signal and image processing

    CERN Document Server

    Averbuch, Amir Z; Zheludev, Valery A

    This volume provides universal methodologies accompanied by Matlab software to manipulate numerous signal and image processing applications. It is done with discrete and polynomial periodic splines. Various contributions of splines to signal and image processing from a unified perspective are presented. This presentation is based on the Zak transform and on the Spline Harmonic Analysis (SHA) methodology. SHA combines the approximation capabilities of splines with the computational efficiency of the Fast Fourier transform. SHA reduces the design of different spline types, such as splines, spline wavelets (SW), wavelet frames (SWF) and wavelet packets (SWP), and their manipulations to simple operations. Digital filters, produced by the wavelet design process, give birth to subdivision schemes. Subdivision schemes enable fast explicit computation of splines' values at dyadic and triadic rational points. This is used for upsampling signals and images. In addition to the design of a diverse library of splines, SW, SWP a...

  10. GUESSING VERB-ADVERB COLLOCATIONS: ARAB EFL ...

    African Journals Online (AJOL)

    In the sections to follow, the concept and meaning of collocation is defined ... expressions (Alexander 1984); formulaic language or speech (Weinert 1995); multi- ... Two further studies reported Arab EFL learners' overall ignorance of collocations.

  11. The Effect of Input Enhancement of Collocations in Reading on Collocation Learning and Retention of EFL Learners

    Science.gov (United States)

    Goudarzi, Zahra; Moini, M. Raouf

    2012-01-01

    Collocation is one of the most problematic areas in second language learning, and it seems that learners who want to improve their communication in another language should improve their collocation competence. This study attempts to determine the effect of applying three different kinds of collocation on collocation learning and retention of…

  12. Construction of local integro quintic splines

    Directory of Open Access Journals (Sweden)

    T. Zhanlav

    2016-06-01

    In this paper, we show that integro quintic splines can be constructed locally without solving any systems of equations. The new construction does not require any additional end conditions. By virtue of these advantages the proposed algorithm is easy to implement and effective. At the same time, the local integro quintic splines possess approximation properties as good as those of the integro quintic splines. In this paper, we have proved that our local integro quintic spline has superconvergence properties at the knots for the first and third derivatives. The orders of convergence at the knots are six (not five) for the first derivative and four (not three) for the third derivative.

  13. Comparative Analysis for Robust Penalized Spline Smoothing Methods

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-01-01

    Smoothing noisy data is commonly encountered in the engineering domain, and robust penalized regression spline models are currently perceived to be the most promising methods for coping with this issue, due to their flexibility in capturing nonlinear trends in the data and effectively alleviating disturbance from outliers. Against such a background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are re-elaborated starting from their origins, with their derivation process reformulated and the corresponding algorithms reorganized under a unified framework. The performance of these two estimators is thoroughly evaluated in terms of fitting accuracy, robustness, and execution time on the MATLAB platform. Elaborate comparative experiments demonstrate that robust penalized spline smoothing methods possess the capability of resisting noise effects compared with the non-robust penalized LS spline regression method. Furthermore, the M-estimator exerts stable performance only for observations with moderate perturbation error, whereas the S-estimator behaves fairly well even for heavily contaminated observations, but consumes more execution time. These findings can serve as guidance for the selection of an appropriate approach for smoothing noisy data.
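
    A minimal sketch of an M-type robust penalized-spline fit via iteratively reweighted least squares with Huber weights (the S-estimator variant is omitted); the truncated-line basis, knots, penalty, and tuning constant are illustrative choices, not the paper's exact formulation.

        import numpy as np

        def huber_w(r, c=1.345):
            """Huber weights for iteratively reweighted least squares."""
            a = np.abs(r)
            return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 200)
        y = np.sin(2*np.pi*x) + 0.1*rng.standard_normal(x.size)
        y[::25] += 3.0                                       # inject heavy outliers

        # Penalized spline: truncated-line basis with a ridge penalty on the knot terms.
        knots = np.linspace(0.05, 0.95, 20)
        B = np.column_stack([np.ones_like(x), x] + [np.maximum(x - t, 0.0) for t in knots])
        P = np.diag([0, 0] + [1.0]*len(knots))
        lam = 1e-3

        coef = np.linalg.solve(B.T @ B + lam*P, B.T @ y)             # non-robust start
        for _ in range(20):                                          # M-type IRLS iterations
            r = y - B @ coef
            s = 1.4826 * np.median(np.abs(r - np.median(r)))         # robust scale (MAD)
            W = huber_w(r / max(s, 1e-12))
            coef = np.linalg.solve(B.T @ (W[:, None]*B) + lam*P, B.T @ (W*y))
        fit = B @ coef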

  14. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    OpenAIRE

    He, Shanshan; Ou, Daojiang; Yan, Changya; Lee, Chen-Han

    2015-01-01

    Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical...

  15. PySpline: A Modern, Cross-Platform Program for the Processing of Raw Averaged XAS Edge and EXAFS Data

    International Nuclear Information System (INIS)

    Tenderholt, Adam; Hedman, Britt; Hodgson, Keith O.

    2007-01-01

    PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and the Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k^3-weighted EXAFS data.
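
    The generic processing steps mentioned above can be sketched as follows with NumPy/SciPy; this is an illustration of the pipeline (pre-edge polynomial, background spline, FFT of the k^3-weighted fine structure) on synthetic numbers, not PySpline's actual interface or algorithms, and the edge energy, ranges, and smoothing are hypothetical.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        E  = np.linspace(6900, 7800, 1800)                          # eV, hypothetical scan
        mu = 0.02 + 1.0/(1 + np.exp(-(E - 7120)/2.0)) \
             + 0.05*np.sin(0.5*np.sqrt(np.maximum(E - 7120, 0)))    # toy absorption data
        E0 = 7120.0                                                  # assumed edge energy

        pre = np.polyfit(E[E < 7080], mu[E < 7080], 1)               # pre-edge line
        mu_sub = mu - np.polyval(pre, E)

        post = E > E0 + 20
        bkg = UnivariateSpline(E[post], mu_sub[post], k=3, s=0.5)    # smooth background spline
        edge_step = float(bkg(E0 + 50))                              # crude normalization
        k = np.sqrt(0.2625 * (E[post] - E0))                         # photoelectron wavenumber, 1/Angstrom
        chi = (mu_sub[post] - bkg(E[post])) / max(edge_step, 1e-9)

        k_grid = np.linspace(k.min(), k.max(), 512)
        chi_k3 = np.interp(k_grid, k, chi) * k_grid**3
        R_space = np.abs(np.fft.rfft(chi_k3 * np.hanning(k_grid.size)))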

  16. Optimization of straight-sided spline design

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    and the subject of improving the design. The present paper concentrates on the optimization of splines and the predictions of stress concentrations, which are determined by finite element analysis (FEA). Using different design modifications, that do not change the spline load carrying capacity, it is shown...

  17. Comparative Performance of Complex-Valued B-Spline and Polynomial Models Applied to Iterative Frequency-Domain Decision Feedback Equalization of Hammerstein Channels.

    Science.gov (United States)

    Chen, Sheng; Hong, Xia; Khalaf, Emad F; Alsaadi, Fuad E; Harris, Chris J

    2017-12-01

    Complex-valued (CV) B-spline neural network approach offers a highly effective means for identifying and inverting practical Hammerstein systems. Compared with its conventional CV polynomial-based counterpart, a CV B-spline neural network has superior performance in identifying and inverting CV Hammerstein systems, while imposing a similar complexity. This paper reviews the optimality of the CV B-spline neural network approach. Advantages of B-spline neural network approach as compared with the polynomial based modeling approach are extensively discussed, and the effectiveness of the CV neural network-based approach is demonstrated in a real-world application. More specifically, we evaluate the comparative performance of the CV B-spline and polynomial-based approaches for the nonlinear iterative frequency-domain decision feedback equalization (NIFDDFE) of single-carrier Hammerstein channels. Our results confirm the superior performance of the CV B-spline-based NIFDDFE over its CV polynomial-based counterpart.

  18. The relationship between productive knowledge of collocations and ...

    African Journals Online (AJOL)

    This research explores tertiary level L2 students' mastery of the collocations pertaining to the Academic Word List (AWL) and the extent to which their knowledge of collocations grows alongside their academic literacy. A collocation test modelled on Laufer and Nation (1999), with target words selected from Coxhead's (2000) ...

  19. Testing controlled productive knowledge of adverb-verb collocations ...

    African Journals Online (AJOL)

    The study also reveals that controlled productive knowledge of adverbverb collocations is less problematic. Based on these results, teaching strategies aimed at improving the use of adverb-verb collocations among EFL users are proposed. Keywords: academic writing, adverb-verb collocations, productive knowledge of ...

  20. The structure of an Afrikaans collocation and phrase dictionary | Otto ...

    African Journals Online (AJOL)

    As one of the target groups is unsophisticated learners with a limited grammatical background, the ideal would be to enter lexical collocations both at their bases and at the collocators. To save space however, more information such as examples could then be provided at the bases only. Grammatical collocations should be ...

  1. The Presentation and Treatment of Collocations as Secondary ...

    African Journals Online (AJOL)

    Although the discussion primarily focuses on printed dictionaries proposals are also made for the presentation of collocations in online dictionaries. Keywords: Article structure, collocation, complex collocation, cotext, example sentences, integrated microstructure, non-grouped ordering, search zone, semi-integrated ...

  2. Measuring receptive collocational competence across proficiency ...

    African Journals Online (AJOL)

    Kate H

    frequency bands. A proficiency measure and a collocation test were administered to English ... battery may negatively impact the test-takers' performance. ... The major finding is that raising learners' awareness constitutes the best way forward ...

  3. Improving academic literacy by teaching collocations

    African Journals Online (AJOL)

    Kate H

    version of McCarthy and O'Dell's (2005) collocation web model were the techniques adopted ... both cued recall and essay writing, supporting earlier findings (cf. ... from a 'holistic' representation of formulaic sequences in memory (Boers et al. ... their study indicate that non-native speakers also retain words as they appear ...

  4. Evaluating a new test of whole English collocations

    DEFF Research Database (Denmark)

    Revier, Robert Lee

    2009-01-01

    in their own right and, as such, feature formal, semantic, and usage properties similar to those borne by single words. Third, the semantic properties of the constituent words that combine to form collocations are likely to play a role in EFL learners' ability to 'produce' English collocations. Fourth, testing of L2 collocation knowledge needs to focus on the recognition and production of whole collocations. It is this set of assumptions that the new collocation test presented in this chapter is designed to probe. More specifically, the test is designed to assess L2 learners' productive knowledge of whole...

  5. Positivity Preserving Interpolation Using Rational Bicubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2015-01-01

    This paper discusses positivity-preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, 8 of which are free parameters. Sufficient conditions for positivity are derived on every network of four boundary curves on the rectangular patch. A detailed numerical comparison with existing schemes has also been carried out. Based on the Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.

  6. Implementation of exterior complex scaling in B-splines to solve atomic and molecular collision problems

    International Nuclear Information System (INIS)

    McCurdy, C William; MartIn, Fernando

    2004-01-01

    B-spline methods are now well established as widely applicable tools for the evaluation of atomic and molecular continuum states. The mathematical technique of exterior complex scaling has been shown, in a variety of other implementations, to be a powerful method with which to solve atomic and molecular scattering problems, because it allows the correct imposition of continuum boundary conditions without their explicit analytic application. In this paper, an implementation of exterior complex scaling in B-splines is described that can bring the well-developed technology of B-splines to bear on new problems, including multiple ionization and breakup problems, in a straightforward way. The approach is demonstrated for examples involving the continuum motion of nuclei in diatomic molecules as well as electronic continua. For problems involving electrons, a method based on Poisson's equation is presented for computing two-electron integrals over B-splines under exterior complex scaling

  7. P-Splines Using Derivative Information

    KAUST Repository

    Calderon, Christopher P.; Martinez, Josue G.; Carroll, Raymond J.; Sorensen, Danny C.

    2010-01-01

    in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between

  8. Multidimensional splines for modeling FET nonlinearities

    Energy Technology Data Exchange (ETDEWEB)

    Barby, J A

    1986-01-01

    Circuit simulators like SPICE and timing simulators like MOTIS are used extensively for critical path verification of integrated circuits. MOSFET model evaluation dominates the run time of these simulators. Changes in technology result in costly updates, since modifications require reprogramming of the functions and their derivatives. The computational cost of MOSFET models can be reduced by using multidimensional polynomial splines. Since simulators based on the Newton-Raphson algorithm require the function and first derivative, quadratic splines are sufficient for this purpose. The cost of updating the MOSFET model due to technology changes is greatly reduced since splines are derived from a set of points. Crucial for the convergence speed of simulators is the fact that MOSFET characteristic equations are monotonic. This must be maintained by any simulation model. The splines the author designed do maintain monotonicity.
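
    A brief sketch of the underlying idea: a degree-2 tensor-product spline through a tabulated device characteristic supplies both the value and the first derivatives that a Newton-Raphson-based simulator needs. The device equation and table below are toy values, and no monotonicity enforcement (which the report stresses) is included.

        import numpy as np
        from scipy.interpolate import RectBivariateSpline

        # Toy tabulated I_D(V_GS, V_DS) characteristic (simple square-law model).
        vgs = np.linspace(0.0, 5.0, 21)
        vds = np.linspace(0.0, 5.0, 21)
        VG, VD = np.meshgrid(vgs, vds, indexing="ij")
        K, VT = 2e-4, 1.0
        ids = np.where(VG <= VT, 0.0,
              np.where(VD < VG - VT, K*((VG - VT)*VD - 0.5*VD**2), 0.5*K*(VG - VT)**2))

        spl = RectBivariateSpline(vgs, vds, ids, kx=2, ky=2)     # quadratic in both axes
        i_d = spl.ev(2.5, 1.2)                                   # drain current
        gm  = spl.ev(2.5, 1.2, dx=1)                             # dI/dV_GS
        gds = spl.ev(2.5, 1.2, dy=1)                             # dI/dV_DS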

  9. On convexity and Schoenberg's variation diminishing splines

    International Nuclear Information System (INIS)

    Feng, Yuyu; Kozak, J.

    1992-11-01

    In the paper we characterize a convex function by the monotonicity of a particular variation diminishing spline sequence. The result extends the property known for the Bernstein polynomial sequence. (author). 4 refs

  10. Variational collocation on finite intervals

    International Nuclear Information System (INIS)

    Amore, Paolo; Cervantes, Mayra; Fernandez, Francisco M

    2007-01-01

    In this paper, we study a set of functions, defined on an interval of finite width, which are orthogonal and which reduce to the sinc functions when the appropriate limit is taken. We show that these functions can be used within a variational approach to obtain accurate results for a variety of problems. We have applied them to the interpolation of functions on finite domains and to the solution of the Schroedinger equation, and we have compared the performance of the present approach with others

  11. Relation work in collocated and distributed collaboration

    DEFF Research Database (Denmark)

    Christensen, Lars Rune; Jensen, Rasmus Eskild; Bjørn, Pernille

    2014-01-01

    Creating social ties is important for collaborative work; however, in geographically distributed organizations, e.g. global software development, making social ties requires extra work: relation work. We find that relation work is characterized by shared history and experiences, and that it emerges in personal and often humorous situations. Relation work is intertwined with other activities such as articulation work, and it is rhythmic, following the work patterns of the participants. By comparing how relation work is conducted in collocated and geographically distributed settings, we identify basic differences in relation work. Whereas collocated relation work is spontaneous, place-centric, and yet mobile, relation work in a distributed setting is semi-spontaneous, technology-mediated, and requires extra effort.

  12. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    Science.gov (United States)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in the unsaturated flow problem. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical cases of unsaturated flow. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
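
    For reference, a plain stochastic EnKF analysis step (the baseline against which PCKF/RAPCKF is compared) can be sketched as below; the state vector, observation operator, and error statistics are hypothetical, and none of the PCE/adaptive-basis machinery is shown.

        import numpy as np

        def enkf_update(ens, H, y_obs, R, rng):
            """Stochastic EnKF analysis step; ens has shape (n_state, n_ens)."""
            n_ens = ens.shape[1]
            y_pert = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, n_ens).T
            A = ens - ens.mean(axis=1, keepdims=True)             # state anomalies
            HA = H @ A                                            # observation-space anomalies
            P_yy = HA @ HA.T / (n_ens - 1) + R
            P_xy = A @ HA.T / (n_ens - 1)
            K = P_xy @ np.linalg.inv(P_yy)                        # Kalman gain
            return ens + K @ (y_pert - H @ ens)

        rng = np.random.default_rng(0)
        ens = rng.normal(0.3, 0.05, size=(50, 100))               # e.g. a soil-moisture ensemble
        H = np.zeros((3, 50)); H[0, 5] = H[1, 25] = H[2, 45] = 1  # observe three locations
        y = np.array([0.28, 0.33, 0.31])
        R = 0.01**2 * np.eye(3)
        ens_a = enkf_update(ens, H, y, R, rng)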

  13. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62 and 54 % increase in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
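
    The key computational point, drawing a covariate from a double-truncated normal inside a Gibbs sweep, can be sketched with scipy.stats.truncnorm; the component choice and the numbers below are purely illustrative, not the paper's full sampler.

        import numpy as np
        from scipy.stats import truncnorm

        def sample_double_truncated_normal(mean, sd, lower, upper, rng):
            """Draw from N(mean, sd^2) restricted to [lower, upper]."""
            a, b = (lower - mean) / sd, (upper - mean) / sd      # standardized bounds
            return truncnorm.rvs(a, b, loc=mean, scale=sd, random_state=rng)

        # Inside a Gibbs sweep one would first pick a mixture component (one spline
        # segment) and then draw the unobserved covariate from the corresponding
        # double-truncated normal; the numbers here are made up for illustration.
        rng = np.random.default_rng(0)
        x_draw = sample_double_truncated_normal(mean=1.2, sd=0.4, lower=0.5, upper=2.0, rng=rng)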

  14. Semantic Analysis of Verbal Collocations with Lexical Functions

    CERN Document Server

    Gelbukh, Alexander

    2013-01-01

    This book is written for both linguists and computer scientists working in the field of artificial intelligence, as well as for anyone interested in intelligent text processing. Lexical function is a concept that formalizes semantic and syntactic relations between lexical units. Collocational relation is a type of institutionalized lexical relation which holds between the base and its partner in a collocation. Knowledge of collocation is important for natural language processing because collocation comprises the restrictions on how words can be used together. The book shows how collocations can be annotated with lexical functions in a computer-readable dictionary, allowing their precise semantic analysis in texts and their effective use in natural language applications including parsers, high-quality machine translation, periphrasis systems and computer-aided learning of lexica. The book shows how to extract collocations from corpora and annotate them with lexical functions automatically. To train algorithms,...

  15. Preference learning with evolutionary Multivariate Adaptive Regression Spline model

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll

    2015-01-01

    This paper introduces a novel approach for pairwise preference learning through combining an evolutionary method with Multivariate Adaptive Regression Spline (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing...... for function approximation as well as being relatively easy to interpret. MARS models are evolved based on their efficiency in learning pairwise data. The method is tested on two datasets that collectively provide pairwise preference data of five cognitive states expressed by users. The method is analysed...

  16. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Science.gov (United States)

    2010-10-01

    ... standards or any other performance standards. An incumbent LEC that denies collocation of a competitor's equipment, citing safety standards, must provide to the competitive LEC within five business days of the... incumbent LEC contends the competitor's equipment fails to meet. This affidavit must set forth in detail...

  17. An isogeometric boundary element method for electromagnetic scattering with compatible B-spline discretizations

    Science.gov (United States)

    Simpson, R. N.; Liu, Z.; Vázquez, R.; Evans, J. A.

    2018-06-01

    We outline the construction of compatible B-splines on 3D surfaces that satisfy the continuity requirements for electromagnetic scattering analysis with the boundary element method (method of moments). Our approach makes use of Non-Uniform Rational B-splines to represent model geometry and compatible B-splines to approximate the surface current, and adopts the isogeometric concept in which the basis for analysis is taken directly from CAD (geometry) data. The approach allows for high-order approximations and crucially provides a direct link with CAD data structures that allows for efficient design workflows. After outlining the construction of div- and curl-conforming B-splines defined over 3D surfaces we describe their use with the electric and magnetic field integral equations using a Galerkin formulation. We use Bézier extraction to accelerate the computation of NURBS and B-spline terms and employ H-matrices to provide accelerated computations and memory reduction for the dense matrices that result from the boundary integral discretization. The method is verified using the well known Mie scattering problem posed over a perfectly electrically conducting sphere and the classic NASA almond problem. Finally, we demonstrate the ability of the approach to handle models with complex geometry directly from CAD without mesh generation.

  18. Verb-Noun Collocation Proficiency and Academic Years

    Directory of Open Access Journals (Sweden)

    Fatemeh Ebrahimi-Bazzaz

    2014-01-01

    Full Text Available Vocabulary in general, and collocations in particular, play a significant role in language proficiency. A collocation consists of two words that are frequently used together and stored together in the memory of native speakers. There have been many linguistic studies trying to define, describe, and categorise English collocations, which comprise grammatical collocations and lexical collocations involving nouns, adjectives, verbs, and adverbs. In the context of a foreign language environment such as Iran, collocational proficiency can be useful because it helps students improve their language proficiency. This paper investigates the possible relationship between verb-noun collocation proficiency among students from one academic year to the next. To reach this goal, a test of verb-noun collocations was administered to Iranian learners. The participants in the study were 212 students at an Iranian university. They were selected from the second term of the freshman, sophomore, junior, and senior years, and their ages ranged from 18 to 35. The results of ANOVA showed that there was variability in verb-noun collocation proficiency within each academic year and between the four academic years. The results of post hoc multiple comparison tests demonstrated that the means are significantly different between the first year and the third and fourth years, and between the third and the fourth academic years; however, students require at least two years to show significant development in verb-noun collocation proficiency. These findings provide an important implication: lexical collocations are learnt and developed throughout the four academic years of university, but at least two years are required before significant development in language proficiency becomes evident.

  19. Optimization of Low-Thrust Spiral Trajectories by Collocation

    Science.gov (United States)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely long run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.

  20. A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data

    KAUST Repository

    Babuška, Ivo; Nobile, Fabio; Tempone, Raul

    2010-01-01

    This work proposes and analyzes a stochastic collocation method for solving elliptic partial differential equations with random coefficients and forcing terms. These input data are assumed to depend on a finite number of random variables. The method consists of a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space, and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It treats easily a wide range of situations, such as input data that depend nonlinearly on the random variables, diffusivity coefficients with unbounded second moments, and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate exponential convergence of the “probability error” with respect to the number of Gauss points in each direction of the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method. Finally, we include a section with developments posterior to the original publication of this work. There we review sparse grid stochastic collocation methods, which are effective collocation strategies for problems that depend on a moderately large number of random variables.
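    As a toy illustration of the collocation idea (uncoupled deterministic solves at tensor-product Gauss points), the sketch below estimates the mean of a quantity of interest depending on two uniform random variables. The function qoi is a stand-in for a deterministic PDE solve and is purely illustrative.

```python
import numpy as np
from itertools import product

def qoi(y):
    """Stand-in for a deterministic solve returning a scalar quantity of
    interest for one realization y of the random inputs."""
    return np.exp(0.3 * y[0] + 0.1 * y[1] ** 2)

def tensor_gauss_mean(qoi, n_pts, dim):
    """Mean of qoi(Y) for Y ~ U(-1, 1)^dim via tensor-product Gauss-Legendre."""
    nodes, weights = np.polynomial.legendre.leggauss(n_pts)
    weights = weights / 2.0          # normalize to the uniform density on [-1, 1]
    mean = 0.0
    for idx in product(range(n_pts), repeat=dim):
        y = nodes[list(idx)]
        w = np.prod(weights[list(idx)])
        mean += w * qoi(y)           # each evaluation is an uncoupled deterministic solve
    return mean

print(tensor_gauss_mean(qoi, n_pts=5, dim=2))
```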

  1. A stochastic collocation method for the second order wave equation with a discontinuous random speed

    KAUST Repository

    Motamed, Mohammad

    2012-08-31

    In this paper we propose and analyze a stochastic collocation method for solving the second order wave equation with a random wave speed and subjected to deterministic boundary and initial conditions. The speed is piecewise smooth in the physical space and depends on a finite number of random variables. The numerical scheme consists of a finite difference or finite element method in the physical space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space. This approach leads to the solution of uncoupled deterministic problems as in the Monte Carlo method. We consider both full and sparse tensor product spaces of orthogonal polynomials. We provide a rigorous convergence analysis and demonstrate different types of convergence of the probability error with respect to the number of collocation points for full and sparse tensor product spaces and under some regularity assumptions on the data. In particular, we show that, unlike in elliptic and parabolic problems, the solution to hyperbolic problems is not in general analytic with respect to the random variables. Therefore, the rate of convergence may only be algebraic. An exponential/fast rate of convergence is still possible for some quantities of interest and for the wave solution with particular types of data. We present numerical examples, which confirm the analysis and show that the collocation method is a valid alternative to the more traditional Monte Carlo method for this class of problems. © 2012 Springer-Verlag.

  2. Robust Topology Optimization Based on Stochastic Collocation Methods under Loading Uncertainties

    Directory of Open Access Journals (Sweden)

    Qinghai Zhao

    2015-01-01

    Full Text Available A robust topology optimization (RTO) approach that accounts for loading uncertainties is developed in this paper. The stochastic collocation method, combined with a full tensor product grid and the Smolyak sparse grid, transforms the robust formulation into a weighted multiple-loading deterministic problem at the collocation points. The proposed approach is amenable to implementation in existing commercial topology optimization software packages and is thus feasible for practical engineering problems. Numerical examples of two- and three-dimensional topology optimization problems are provided to demonstrate the proposed RTO approach and its applications. The optimal topologies obtained from deterministic and robust topology optimization designs under the tensor product grid and the sparse grid with different levels are compared with one another to investigate the influence of the optimization algorithm on the final topologies, and an extensive Monte Carlo simulation is also performed to verify the proposed approach.

  3. COLLOCATION PHRASES IN RELATION TO OTHER LEXICAL PHRASES IN CROATIAN

    Directory of Open Access Journals (Sweden)

    Goranka Blagus Bartolec

    2012-01-01

    Full Text Available The paper analyzes the semantic and lexicological aspects of collocation phrases in Croatian, with the aim of distinguishing them from other lexical phrases in Croatian (terms, idioms, names). The collocation phrase is defined as a special lexical phrase at the syntagmatic level, based on the semantic correlation of the two individual lexical components, in which their meanings are specified.

  4. Learning and Teaching L2 Collocations: Insights from Research

    Science.gov (United States)

    Szudarski, Pawel

    2017-01-01

    The aim of this article is to present and summarize the main research findings in the area of learning and teaching second language (L2) collocations. Being a large part of naturally occurring language, collocations and other types of multiword units (e.g., idioms, phrasal verbs, lexical bundles) have been identified as important aspects of L2…

  5. Teachability of collocations: The role of word frequency counts ...

    African Journals Online (AJOL)

    ... beginner/low-intermediate students and only exceed the 2 000-word band from the upper-intermediate learning stage onwards, a suggestion in line with Nation's (2006) discussion on how to teach vocabulary. Keywords: collocation size, controlled productive knowledge, teachability of collocations, word frequency counts, ...

  6. First-year University Students' Productive Knowledge of Collocations ...

    African Journals Online (AJOL)

    The present study examines productive knowledge of collocations of tertiary-level second language (L2) learners of English in an attempt to make estimates of the size of their knowledge. Participants involved first-year students at North-West University who sat a collocation test modelled on that developed by Laufer and ...

  7. Collocations and Grammatical Patterns in a Multilingual Online Term ...

    African Journals Online (AJOL)

    This article considers the importance of including various types of collocations in a terminological database, with the aim of making this information available to the user via the user interface. We refer specifically to the inclusion of empirical and phraseological collocations, and information on grammatical patterning.

  8. Collocations of High Frequency Noun Keywords in Prescribed Science Textbooks

    Science.gov (United States)

    Menon, Sujatha; Mukundan, Jayakaran

    2012-01-01

    This paper analyses the discourse of science through the study of collocational patterns of high frequency noun keywords in science textbooks used by upper secondary students in Malaysia. Research has shown that one of the areas of difficulty in science discourse concerns lexis, especially that of collocations. This paper describes a corpus-based…

  9. PEMODELAN REGRESI SPLINE (Studi Kasus: Herpindo Jaya Cabang Ngaliyan

    Directory of Open Access Journals (Sweden)

    I MADE BUDIANTARA PUTRA

    2015-06-01

    Full Text Available Regression analysis is a method of data analysis used to describe the relationship between response variables and predictor variables. There are two approaches to estimating the regression function: parametric and nonparametric. The parametric approach is used when the relationship between the predictor variables and the response variables, i.e. the shape of the regression curve, is known. The nonparametric approach is used when the form of the relationship between the response and predictor variables is unknown, or when no information about the form of the regression function is available. The aims of this study are to determine the best spline nonparametric regression model, with optimal knot points, on data relating product quality, price, and advertising to purchasing decisions for Yamaha motorcycles, and to compare it with multiple linear regression based on the coefficient of determination (R2) and the mean square error (MSE). The optimal knots are defined by two knot points. The result of this analysis is that, for these data, multiple linear regression performs better than spline regression.
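    A minimal one-predictor analogue of the comparison described above, fitting a fixed-knot cubic regression spline and a straight line to synthetic data and reporting R2 and MSE, might look as follows; the data and knot locations are invented for illustration and are unrelated to the study's survey data.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + 0.1 * x + rng.normal(scale=0.3, size=x.size)  # nonlinear truth

# Cubic regression spline with two interior knots (cf. "two knot points")
knots = [3.3, 6.6]
spline = LSQUnivariateSpline(x, y, knots, k=3)
y_spline = spline(x)

# Simple linear regression for comparison
coef = np.polyfit(x, y, deg=1)
y_lin = np.polyval(coef, x)

def r2_mse(y, yhat):
    mse = np.mean((y - yhat) ** 2)
    r2 = 1.0 - mse / np.var(y)
    return r2, mse

print("spline R2/MSE:", r2_mse(y, y_spline))
print("linear R2/MSE:", r2_mse(y, y_lin))
```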

  10. Stabilized Discretization in Spline Element Method for Solution of Two-Dimensional Navier-Stokes Problems

    Directory of Open Access Journals (Sweden)

    Neng Wan

    2014-01-01

    Full Text Available To address the poor geometric adaptability of the spline element method, a geometrically precise spline method, which uses rational Bezier patches to represent the solution domain, is proposed for the two-dimensional viscous incompressible Navier-Stokes equations. Besides fewer unknowns, higher accuracy, and better computational efficiency, it possesses such advantages as the accurate isogeometric representation of object boundaries and the unification of geometric and analysis models. Meanwhile, the selection of B-spline basis functions and the grid definition are studied and a stable discretization format satisfying the inf-sup conditions is proposed. The degree of the spline functions approximating the velocity field is one order higher than that approximating the pressure field, and these functions are defined on a once-refined grid. The Dirichlet boundary conditions are imposed in weak form through the Nitsche variational principle due to the lack of interpolation properties of the B-spline functions. Finally, the validity of the proposed method is verified with some examples.

  11. Limit Stress Spline Models for GRP Composites | Ihueze | Nigerian ...

    African Journals Online (AJOL)

    Spline functions were established on the assumption of three intervals and fitting of quadratic and cubic splines to critical stress-strain responses data. Quadratic ... of data points. Spline model is therefore recommended as it evaluates the function at subintervals, eliminating the error associated with wide range interpolation.

  12. Spline smoothing of histograms by linear programming

    Science.gov (United States)

    Bennett, J. O.

    1972-01-01

    An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.

  13. Scripted Bodies and Spline Driven Animation

    DEFF Research Database (Denmark)

    Erleben, Kenny; Henriksen, Knud

    2002-01-01

    In this paper we will take a close look at the details and technicalities in applying spline driven animation to scripted bodies in the context of dynamic simulation. The main contributions presented in this paper are methods for computing velocities and accelerations in the time domain...

  14. A Study on the Phenomenon of Collocations: Methodology of Teaching English and German Collocations to Russian Students

    Science.gov (United States)

    Varlamova, Elena V.; Naciscione, Anita; Tulusina, Elena A.

    2016-01-01

    Relevance of the issue stated in the article is determined by the fact that there is a lack of research devoted to the methods of teaching English and German collocations. The aim of our work is to determine methods of teaching English and German collocations to Russian university students studying foreign languages through experimental testing.…

  15. The core spline method for solution of quantum-mechanical systems of differential equations for bound states

    International Nuclear Information System (INIS)

    Aleksandrov, L.; Drenska, M.; Karadzhov, D.

    1986-01-01

    A generalization of the core spline method is given for the solution of the general bound state problem for a system of M linear differential equations with coefficients depending on the spectral parameter. The recursion scheme for the construction of basic splines is described. The wave functions are expressed as linear combinations of basic splines, which are approximate partial solutions of the system. The spectral parameter (the eigenvalue) is determined from the condition for the existence of a nontrivial solution of an (MxM) linear algebraic system at the last collocation point. The nontrivial solutions of this system determine (M - 1) coefficients of the linear spans expressing the wave functions. The last unknown coefficient is determined from a boundary (or normalization) condition for the system. The computational aspects of the method are discussed, in particular its concrete algorithmic realization used in the RODSOL program. The numerical solution of the Dirac system for the bound states of a hydrogen atom is given as an example

  16. Applications of the spline filter for areal filtration

    International Nuclear Information System (INIS)

    Tong, Mingsi; Zhang, Hao; Ott, Daniel; Chu, Wei; Song, John

    2015-01-01

    This paper proposes a general use isotropic areal spline filter. This new areal spline filter can achieve isotropy by approximating the transmission characteristic of the Gaussian filter. It can also eliminate the effect of void areas using a weighting factor, and resolve end-effect issues by applying new boundary conditions, which replace the first order finite difference in the traditional spline formulation. These improvements make the spline filter widely applicable to 3D surfaces and extend the applications of the spline filter in areal filtration. (technical note)

  17. Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keller, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Errichello, Robert [GEARTECH, Houston, TX (United States); Halse, Chris [Romax Technology, Nottingham (United Kingdom)

    2013-12-01

    Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.

  18. Combined visualization for noise mapping of industrial facilities based on ray-tracing and thin plate splines

    Science.gov (United States)

    Ovsiannikov, Mikhail; Ovsiannikov, Sergei

    2017-01-01

    The paper presents a combined approach to noise mapping and visualization of the sound pollution of industrial facilities using the forward ray tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for computing the extent of sanitary zones based on a ray tracing algorithm. The computation of sound pressure levels within the clustered zones is based on two-dimensional spline interpolation of data measured on the perimeter and inside each zone.
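    The interpolation stage (not the ray tracing) can be sketched with scipy's thin-plate-spline interpolator, assuming scipy >= 1.7; the measurement points and sound levels below are synthetic placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
# Synthetic sound-level measurements (dB) at scattered points in one zone
pts = rng.uniform(0, 100, size=(40, 2))              # x, y coordinates in metres
levels = 70 - 0.1 * np.hypot(pts[:, 0] - 50, pts[:, 1] - 50) + rng.normal(0, 0.5, 40)

tps = RBFInterpolator(pts, levels, kernel='thin_plate_spline', smoothing=1.0)

# Evaluate on a regular grid to build the noise map of the zone
gx, gy = np.meshgrid(np.linspace(0, 100, 101), np.linspace(0, 100, 101))
grid = np.column_stack([gx.ravel(), gy.ravel()])
noise_map = tps(grid).reshape(gx.shape)
print(noise_map.shape, noise_map.max())
```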

  19. Asymmetric free vibrations of laminated annular cross-ply circular plates including the effects of shear deformation and rotary inertia. Spline method

    Energy Technology Data Exchange (ETDEWEB)

    Viswanathan, K.K.; Kim, Kyung Su; Lee, Jang Hyun [Inha Univ., Incheon (Korea). Dept. of Naval Architecture and Ocean Engineering

    2009-12-15

    Asymmetric free vibrations of annular cross-ply circular plates are studied using spline function approximation. The governing equations are formulated including the effects of shear deformation and rotary inertia. Assumptions are made to study cross-ply layered plates. A system of coupled differential equations is obtained in terms of displacement functions and rotational functions. These functions are approximated using Bickley-type spline functions of suitable order. The system is then converted into an eigenvalue problem by applying the point collocation technique and suitable boundary conditions. Parametric studies have been made to investigate the effect of transverse shear deformation and rotary inertia on the frequency parameter with respect to the circumferential node number, radii ratio and thickness-to-radius ratio for both symmetric and anti-symmetric cross-ply plates using various types of material properties. (orig.)

  20. B-spline tight frame based force matching method

    Science.gov (United States)

    Yang, Jianbin; Zhu, Guanhua; Tong, Dudu; Lu, Lanyuan; Shen, Zuowei

    2018-06-01

    In molecular dynamics simulations, compared with popular all-atom force field approaches, coarse-grained (CG) methods are frequently used for the rapid investigation of long time- and length-scale processes in many important biological and soft matter studies. The typical task in coarse-graining is to derive interaction force functions between different CG site types in terms of their distance, bond angle or dihedral angle. In this paper, an ℓ1-regularized least squares model is applied to form the force functions, which makes additional use of the B-spline wavelet frame transform in order to preserve the important features of the force functions. The B-spline tight frame system has a simple explicit expression which is useful for representing our force functions. Moreover, the redundancy of the system offers more resilience to the effects of noise and is useful in the case of lossy data. Numerical results for molecular systems involving pairwise non-bonded, and three- and four-body bonded interactions are obtained to demonstrate the effectiveness of our approach.
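    A generic sketch of the core idea, ℓ1-regularized least squares over a B-spline basis solved by plain iterative soft-thresholding, is given below. It does not reproduce the paper's tight-frame construction or force-matching data; the target "force" curve, knot vector and regularization weight are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, knots, k=3):
    """Dense design matrix whose columns are the B-spline basis functions."""
    n_basis = len(knots) - k - 1
    B = np.empty((x.size, n_basis))
    for j in range(n_basis):
        c = np.zeros(n_basis)
        c[j] = 1.0
        B[:, j] = BSpline(knots, c, k, extrapolate=False)(x)
    return np.nan_to_num(B)

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def lasso_ista(B, f, lam=0.05, n_iter=2000):
    """Minimize 0.5*||B c - f||^2 + lam*||c||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(B, 2) ** 2     # 1 / Lipschitz constant of the gradient
    c = np.zeros(B.shape[1])
    for _ in range(n_iter):
        c = soft_threshold(c - step * B.T @ (B @ c - f), step * lam)
    return c

# Toy "force curve" sampled on a distance grid
r = np.linspace(0.5, 2.5, 300)
f = 4.0 * (r - 1.0) * np.exp(-3.0 * (r - 1.0) ** 2)   # synthetic target force
knots = np.concatenate(([r[0]] * 4, np.linspace(r[0], r[-1], 20)[1:-1], [r[-1]] * 4))
B = bspline_basis(r, knots, k=3)
c = lasso_ista(B, f)
print("nonzero coefficients:", np.count_nonzero(np.abs(c) > 1e-8), "of", c.size)
```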

  1. Efectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation

    Directory of Open Access Journals (Sweden)

    Ahmad Bilfarsah

    2005-04-01

    Full Text Available The Additive Spline Partial Least Squares (ASPLS) method is a generalization of the Partial Least Squares (PLS) method. The ASPLS method can accommodate nonlinearity and multicollinearity among the predictor variables. In principle, the ASPLS approach is characterized by two ideas. The first is to use parametric transformations of the predictors by spline functions; the second is to make the ASPLS components mutually uncorrelated, to preserve the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fishery economics application, especially tuna fish production.

  2. Modified Chebyshev Collocation Method for Solving Differential Equations

    Directory of Open Access Journals (Sweden)

    M Ziaul Arif

    2015-05-01

    Full Text Available This paper presents the derivation of an alternative numerical scheme for solving differential equations based on modified Chebyshev (Vieta-Lucas) polynomial collocation differentiation matrices. The modified Chebyshev (Vieta-Lucas) polynomial collocation method is applied to both Ordinary Differential Equations (ODEs) and Partial Differential Equations (PDEs). Finally, the performance of the proposed method is compared with the finite difference method and with the exact solution of the example. It is shown that the modified Chebyshev collocation method is more effective and accurate than the FDM for the examples given.
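    For orientation, the sketch below uses the standard Chebyshev (not the modified Vieta-Lucas) collocation differentiation matrix to solve a simple two-point boundary value problem; it only illustrates the general differentiation-matrix mechanism referred to in the abstract.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Chebyshev-Gauss-Lobatto nodes x
    on [-1, 1] (cf. Trefethen, Spectral Methods in MATLAB, program cheb)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Solve u'' = exp(x) on (-1, 1) with u(-1) = u(1) = 0 by collocation
N = 16
D, x = cheb(N)
D2 = D @ D
A = D2[1:-1, 1:-1]              # impose Dirichlet BCs by dropping boundary rows/cols
u_inner = np.linalg.solve(A, np.exp(x[1:-1]))
u = np.concatenate(([0.0], u_inner, [0.0]))

u_exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
print("max error:", np.max(np.abs(u - u_exact)))
```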

  3. Fermat collocation method for the solutions of nonlinear system of second order boundary value problems

    Directory of Open Access Journals (Sweden)

    Salih Yalcinbas

    2016-01-01

    Full Text Available In this study, a numerical approach is proposed to obtain approximate solutions of nonlinear systems of second order boundary value problems. This technique is essentially based on truncated Fermat series and their matrix representations with collocation points. Using the matrix method, we reduce the problem to a system of nonlinear algebraic equations. Numerical examples are also given to demonstrate the validity and applicability of the presented technique. The method is easy to implement and produces accurate results.

  4. Corpus-Aided Business English Collocation Pedagogy: An Empirical Study in Chinese EFL Learners

    Science.gov (United States)

    Chen, Lidan

    2017-01-01

    This study reports an empirical investigation of explicit instruction in corpus-aided Business English collocations and verifies its effectiveness in improving learners' collocation awareness and learner autonomy, which in turn leads to a significant improvement in learners' collocation competence. An eight-week instruction in keywords' collocations,…

  5. Numerical Solution of the Blasius Viscous Flow Problem by Quartic B-Spline Method

    Directory of Open Access Journals (Sweden)

    Hossein Aminikhah

    2016-01-01

    Full Text Available A numerical method is proposed to study the laminar boundary layer about a flat plate in a uniform stream of fluid. The presented method is based on quartic B-spline approximation with minimization of the L2-norm of the error. Theoretical considerations are discussed. The computed results are compared with other numerical results to show the efficiency of the proposed approach.

  6. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  7. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  8. Application of multivariate splines to discrete mathematics

    OpenAIRE

    Xu, Zhiqiang

    2005-01-01

    Using methods developed in multivariate splines, we present an explicit formula for discrete truncated powers, which are defined as the number of non-negative integer solutions of linear Diophantine equations. We further use the formula to study some classical problems in discrete mathematics as follows. First, we extend the partition function of integers in number theory. Second, we exploit the relation between the relative volume of convex polytopes and multivariate truncated powers and giv...

  9. Tomographic reconstruction with B-splines surfaces

    International Nuclear Information System (INIS)

    Oliveira, Eric F.; Dantas, Carlos C.; Melo, Silvio B.; Mota, Icaro V.; Lira, Mailson

    2011-01-01

    Algebraic reconstruction techniques, when applied to a limited amount of data, usually suffer from noise caused by the correction process or by inconsistencies in the data coming from the stochastic process of radioactive emission and from oscillation of the equipment. Post-processing of the reconstructed image with filters can be done to mitigate the presence of noise. In general these processes also attenuate the discontinuities present at edges that distinguish objects or artifacts, causing excessive blurring in the reconstructed image. This paper proposes a built-in noise reduction that at the same time ensures an adequate smoothness level in the reconstructed surface, representing the unknowns as linear combinations of elements of a piecewise polynomial basis, i.e. a B-spline basis. For that, the algebraic technique ART is modified to accommodate first, second and third degree bases, ensuring C0, C1 and C2 smoothness levels, respectively. For comparison, three methodologies are applied: ART, ART post-processed with regular B-spline filters (ART*) and the proposed method with the built-in B-spline filter (BsART). Simulations with input data produced from common mathematical phantoms were conducted. For the phantoms used, the BsART method consistently presented the smallest errors among the three methods. This study has shown the superiority of the change made to embed the filter in ART when compared to the post-filtered ART. (author)
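    The baseline ART (Kaczmarz) iteration that the BsART variant modifies can be sketched as follows on a small synthetic system; the B-spline basis change described in the abstract is not included.

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=0.5):
    """Kaczmarz / ART: cycle through the rows of A, projecting the current
    estimate onto the hyperplane defined by each measurement."""
    x = np.zeros(A.shape[1])
    row_norm2 = np.sum(A ** 2, axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] == 0.0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

rng = np.random.default_rng(4)
x_true = rng.random(20)                   # unknown "image"
A = rng.random((60, 20))                  # toy projection matrix
b = A @ x_true + rng.normal(0, 0.01, 60)  # noisy measurements
x_rec = art(A, b)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```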

  10. Geometric and computer-aided spline hob modeling

    Science.gov (United States)

    Brailov, I. G.; Myasoedova, T. M.; Panchuk, K. L.; Krysova, I. V.; Rogoza, YU A.

    2018-03-01

    The paper considers acquiring the spline hob geometric model. The objective of the research is the development of a mathematical model of spline hob for spline shaft machining. The structure of the spline hob is described taking into consideration the motion in parameters of the machine tool system of cutting edge positioning and orientation. Computer-aided study is performed with the use of CAD and on the basis of 3D modeling methods. Vector representation of cutting edge geometry is accepted as the principal method of spline hob mathematical model development. The paper defines the correlations described by parametric vector functions representing helical cutting edges designed for spline shaft machining with consideration for helical movement in two dimensions. An application for acquiring the 3D model of spline hob is developed on the basis of AutoLISP for AutoCAD environment. The application presents the opportunity for the use of the acquired model for milling process imitation. An example of evaluation, analytical representation and computer modeling of the proposed geometrical model is reviewed. In the mentioned example, a calculation of key spline hob parameters assuring the capability of hobbing a spline shaft of standard design is performed. The polygonal and solid spline hob 3D models are acquired by the use of imitational computer modeling.

  11. RBF Multiscale Collocation for Second Order Elliptic Boundary Value Problems

    KAUST Repository

    Farrell, Patricio; Wendland, Holger

    2013-01-01

    In this paper, we discuss multiscale radial basis function collocation methods for solving elliptic partial differential equations on bounded domains. The approximate solution is constructed in a multilevel fashion, each level using compactly
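    A single-level (Kansa-type) RBF collocation for a 1D Poisson problem gives a flavour of the approach, though it omits the multilevel construction and compactly supported kernels of the cited work; the Gaussian kernel, shape parameter and node count are arbitrary choices for illustration.

```python
import numpy as np

eps = 10.0                                 # shape parameter (assumed, not tuned)
x = np.linspace(0.0, 1.0, 30)              # collocation points = RBF centres
XX = x[:, None] - x[None, :]

phi = np.exp(-(eps * XX) ** 2)                        # Gaussian RBF values
phi_xx = (4 * eps**4 * XX**2 - 2 * eps**2) * phi      # second derivative in x

# Kansa collocation for u'' = f on (0, 1), u(0) = u(1) = 0
A = phi_xx.copy()
A[0, :] = phi[0, :]                        # boundary condition row at x = 0
A[-1, :] = phi[-1, :]                      # boundary condition row at x = 1
rhs = -np.pi**2 * np.sin(np.pi * x)
rhs[0] = rhs[-1] = 0.0

lam, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # RBF coefficients
u = phi @ lam
print("max error vs sin(pi x):", np.max(np.abs(u - np.sin(np.pi * x))))
```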

  12. Collocations and grammatical patterns in a Multilingual Online Term ...

    African Journals Online (AJOL)

    user

    equivalents for key concepts in the African languages, but also additional con- ... for, inter alia, computational identification and extraction of collocations exist; .... sult' is to be followed by a prepositional phrase in which the preposition is.

  13. Recent advances in radial basis function collocation methods

    CERN Document Server

    Chen, Wen; Chen, C S

    2014-01-01

    This book surveys the latest advances in radial basis function (RBF) meshless collocation methods, with an emphasis on recent novel kernel RBFs and new numerical schemes for solving partial differential equations. The RBF collocation methods are inherently free of integration and meshing, and avoid the tedious mesh generation involved in standard finite element and boundary element methods. This book focuses primarily on the numerical algorithms and engineering applications, and highlights a large class of novel boundary-type RBF meshless collocation methods. These methods have shown a clear edge over traditional numerical techniques, especially for problems involving infinite domains, moving boundaries, thin-walled structures, and inverse problems. Due to the rapid development in RBF meshless collocation methods, there is a need to summarize all these new materials so that they are available to scientists, engineers, and graduate students who are interested in applying these newly developed methods for solving real world’s ...

  14. The Use of English Collocations in Reader's Digest

    OpenAIRE

    Sinaga, Yudita Putri Nurani; Sinaga, Lidiman Sahat Martua

    2014-01-01

    This descriptive qualitative study is aimed at identifying and describing the types of free collocations found in the articles of Reader's Digest. By taking a sample of ten articles from different months of each year from 2003 up to 2012, it was found that all four productive free collocation types were present in the data. Type 4 (Determiner + Adjective + Noun) was the dominant type (53.92 %). This was possible because the adjective in the pattern included the present participle and past participle of v...

  15. A Sparse Stochastic Collocation Technique for High-Frequency Wave Propagation with Uncertainty

    KAUST Repository

    Malenova, G.; Motamed, M.; Runborg, O.; Tempone, Raul

    2016-01-01

    We consider the wave equation with highly oscillatory initial data, where there is uncertainty in the wave speed, initial phase, and/or initial amplitude. To estimate quantities of interest related to the solution and their statistics, we combine a high-frequency method based on Gaussian beams with sparse stochastic collocation. Although the wave solution, uϵ, is highly oscillatory in both physical and stochastic spaces, we provide theoretical arguments for simplified problems and numerical evidence that quantities of interest based on local averages of |uϵ|2 are smooth, with derivatives in the stochastic space uniformly bounded in ϵ, where ϵ denotes the short wavelength. This observable related regularity makes the sparse stochastic collocation approach more efficient than Monte Carlo methods. We present numerical tests that demonstrate this advantage.

  16. A Sparse Stochastic Collocation Technique for High-Frequency Wave Propagation with Uncertainty

    KAUST Repository

    Malenova, G.

    2016-09-08

    We consider the wave equation with highly oscillatory initial data, where there is uncertainty in the wave speed, initial phase, and/or initial amplitude. To estimate quantities of interest related to the solution and their statistics, we combine a high-frequency method based on Gaussian beams with sparse stochastic collocation. Although the wave solution, uϵ, is highly oscillatory in both physical and stochastic spaces, we provide theoretical arguments for simplified problems and numerical evidence that quantities of interest based on local averages of |uϵ|2 are smooth, with derivatives in the stochastic space uniformly bounded in ϵ, where ϵ denotes the short wavelength. This observable related regularity makes the sparse stochastic collocation approach more efficient than Monte Carlo methods. We present numerical tests that demonstrate this advantage.

  17. Approximate solutions of the hyperchaotic Rössler system by using the Bessel collocation scheme

    Directory of Open Access Journals (Sweden)

    Şuayip Yüzbaşı

    2015-02-01

    Full Text Available The purpose of this study is to give a Bessel polynomial approximation for the solutions of the hyperchaotic Rössler system. For this purpose, the Bessel collocation method, which has been applied to different problems, is developed for the mentioned system. This method is based on taking truncated Bessel expansions of the functions in the hyperchaotic Rössler system. The suggested scheme converts the problem into a system of nonlinear algebraic equations by means of matrix operations and collocation points. The accuracy and efficiency of the proposed approach are demonstrated by numerical applications performed with the help of a computer code written in Maple. A comparison between our method and the differential transformation method is also made with respect to the accuracy of the solutions.

  18. Application of adaptive hierarchical sparse grid collocation to the uncertainty quantification of nuclear reactor simulators

    Energy Technology Data Exchange (ETDEWEB)

    Yankov, A.; Downar, T. [University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States)

    2013-07-01

    Recent efforts in the application of uncertainty quantification to nuclear systems have utilized methods based on generalized perturbation theory and stochastic sampling. While these methods have proven to be effective, they both have major drawbacks that may impede further progress. A relatively new approach based on spectral elements for uncertainty quantification is applied in this paper to several problems in reactor simulation. Spectral methods based on collocation attempt to couple the approximation-free nature of stochastic sampling methods with the determinism of generalized perturbation theory. The specific spectral method used in this paper employs both the Smolyak algorithm and adaptivity, using Newton-Cotes collocation points along with linear hat basis functions. Using this approach, a surrogate model for the outputs of a computer code is constructed hierarchically by adaptively refining the collocation grid until the interpolant is converged to a user-defined threshold. The method inherently fits into the framework of parallel computing and allows for the extraction of meaningful statistics and data that are not within reach of stochastic sampling and generalized perturbation theory. This paper aims to demonstrate the advantages of spectral methods, especially when compared to current methods used in reactor physics for uncertainty quantification, and to illustrate their full potential. (authors)
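    The adaptive hierarchical idea (hat-function surpluses driving local refinement) can be illustrated in one dimension as below; this is a simplified sketch with an assumed tolerance and a synthetic target function, not the Smolyak-based multidimensional algorithm used in the paper.

```python
import numpy as np

def hat(x, centre, h):
    """Linear hat basis function centred at `centre` with support width 2h."""
    return np.maximum(0.0, 1.0 - np.abs(x - centre) / h)

def adaptive_hierarchical_interp(f, tol=1e-3, max_level=12):
    """1D adaptive hierarchical (hat-function) interpolation on [0, 1].

    Zero boundary values are assumed, as in standard hierarchical hat bases.
    Nodes are refined only where the hierarchical surplus exceeds `tol`.
    """
    nodes, widths, surpluses = [0.5], [0.5], [f(0.5)]    # level-1 node
    active = [(0.5, 0.5)]                                # (centre, h) to refine
    for _ in range(2, max_level + 1):
        children = []
        for centre, h in active:
            for child in (centre - h / 2, centre + h / 2):
                # current interpolant at the candidate child point
                u = sum(s * hat(child, c, w)
                        for s, c, w in zip(surpluses, nodes, widths))
                surplus = f(child) - u
                if abs(surplus) > tol:
                    nodes.append(child)
                    widths.append(h / 2)
                    surpluses.append(surplus)
                    children.append((child, h / 2))
        if not children:
            break
        active = children
    return nodes, widths, surpluses

f = lambda x: np.exp(-100.0 * (x - 0.3) ** 2)            # sharp feature near x = 0.3
nodes, widths, surpluses = adaptive_hierarchical_interp(f)
x_test = np.linspace(0, 1, 1001)
u_test = sum(s * hat(x_test, c, w) for s, c, w in zip(surpluses, nodes, widths))
print(len(nodes), "nodes, max error:", np.max(np.abs(u_test - f(x_test))))
```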

  19. Transport survey calculations using the spectral collocation method

    International Nuclear Information System (INIS)

    Painter, S.L.; Lyon, J.F.

    1989-01-01

    A novel transport survey code has been developed and is being used to study the sensitivity of stellarator reactor performance to various transport assumptions. Instead of following one of the usual approaches, the steady-state transport equations are solved in integral form using the spectral collocation method. This approach effectively combines the computational efficiency of global models with the general nature of 1-D solutions. A compact torsatron reactor test case was used to study the convergence properties and flexibility of the new method. The heat transport model combined Shaing's model for ripple-induced neoclassical transport, the Chang-Hinton model for axisymmetric neoclassical transport, and neoalcator scaling for anomalous electron heat flux. Alpha particle heating, radiation losses, classical electron-ion heat flow, and external heating were included. For the test problem, the method exhibited some remarkable convergence properties. As the number of basis functions was increased, the maximum pointwise error in the integrated power balance decayed exponentially until the numerical noise level was reached. Better than 10% accuracy in the globally-averaged quantities was achieved with only 5 basis functions; better than 1% accuracy was achieved with 10 basis functions. The numerical method was also found to be very general. Extreme temperature gradients at the plasma edge, which sometimes arise from the neoclassical models and are difficult to resolve with finite-difference methods, were easily resolved. 8 refs., 6 figs

  20. Intensity-based hierarchical elastic registration using approximating splines.

    Science.gov (United States)

    Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C

    2014-01-01

    We introduce a new hierarchical approach for elastic medical image registration using approximating splines. In order to obtain a dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in the form of analytical solutions of the Navier equation, it can cope very well with the local as well as global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed GEBS approximating model is integrated into the elastic hierarchical image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The approximating GEBS registration scheme incorporates anisotropic landmark errors as well as rotation information. The anisotropic landmark localization uncertainties can be estimated directly from the image data, in which case they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is transposed into an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model is applied to register 161 image pairs from a digital mammogram database. The obtained results are very encouraging: the proposed approach significantly improved all registrations in terms of mean-square error compared to approximating TPS with rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999). The average error per breast tissue pixel was less than 2.23 pixels, compared to 2.46 pixels for Rueckert's method. The proposed hierarchical elastic image registration approach incorporates the GEBS

  1. Image edges detection through B-Spline filters

    International Nuclear Information System (INIS)

    Mastropiero, D.G.

    1997-01-01

    B-spline signal processing was used to detect the edges of a digital image. This technique is based upon processing the image in the spline transform domain, instead of in the space domain (classical processing). The transformation to the spline transform domain means finding the real coefficients that make it possible to interpolate the grey levels of the original image with a B-spline polynomial. There are basically two ways of carrying out this interpolation, which give rise to two different spline transforms: an exact interpolation of the grey values (direct spline transform), and an approximate interpolation (smoothing spline transform). The latter results in a smoother grey-level distribution function defined by the spline transform coefficients, and is carried out with the aim of obtaining an edge detection algorithm with higher immunity to noise. Finally, the transformed image was processed in order to detect the edges of the original image (the gradient method was used), and the results of the three methods (classical, direct spline transform and smoothing spline transform) were compared. As expected, the smoothing spline transform technique produced a detection algorithm more immune to external noise. On the other hand, the direct spline transform technique emphasizes the edges even more than the classical method. As far as computing time is concerned, the classical method is clearly the fastest and may be applied whenever the presence of noise is not important and whenever highly detailed edges are not required in the final image. (author). 9 refs., 17 figs., 1 tab
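    A loose scipy-based analogue of the procedure described above is sketched below: the image is first mapped to its (direct) cubic B-spline coefficients with ndimage.spline_filter, and a gradient-magnitude edge map is computed in both the space domain and the spline-coefficient domain. The test image and noise level are invented for illustration.

```python
import numpy as np
from scipy import ndimage

# Synthetic test image: a bright square on a dark background plus noise
rng = np.random.default_rng(5)
img = np.zeros((128, 128))
img[40:90, 30:100] = 1.0
img += rng.normal(0, 0.05, img.shape)

# Direct cubic B-spline transform (coefficients that interpolate the image)
coeffs = ndimage.spline_filter(img, order=3)

def gradient_magnitude(a):
    gx = ndimage.sobel(a, axis=0)
    gy = ndimage.sobel(a, axis=1)
    return np.hypot(gx, gy)

edges_classical = gradient_magnitude(img)     # gradient on the raw image
edges_spline = gradient_magnitude(coeffs)     # gradient in the spline-coefficient domain
print(edges_classical.max(), edges_spline.max())
```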

  2. Local Adaptive Calibration of the GLASS Surface Incident Shortwave Radiation Product Using Smoothing Spline

    Science.gov (United States)

    Zhang, X.; Liang, S.; Wang, G.

    2015-12-01

    Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: direct surface measurements are accurate but provide sparse spatial coverage, whereas the global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally for calibration, but transforming their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation is also considered in the proposed method. The point-based reconstructed surface ISR was used as the response variable, and the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables to train the thin-plate spline model. We evaluated the performance of the approach using the cross-validation method at both daily and monthly time scales over China. We also evaluated the estimated ISR based on the thin-plate spline method using independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicate that the thin-plate smoothing spline method can be effectively used to calibrate satellite-derived ISR products using ground measurements to achieve better accuracy.

  3. Recursive B-spline approximation using the Kalman filter

    Directory of Open Access Journals (Sweden)

    Jens Jauch

    2017-02-01

    Full Text Available This paper proposes a novel recursive B-spline approximation (RBA) algorithm which approximates an unbounded number of data points with a B-spline function and achieves lower computational effort compared with previous algorithms. Conventional recursive algorithms based on the Kalman filter (KF) restrict the approximation to a bounded and predefined interval. Conversely, RBA includes a novel shift operation that makes it possible to shift the estimated B-spline coefficients in the state vector of a KF. This allows the interval in which the B-spline function can approximate data points to be adapted at run-time.
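    The basic mechanism, a Kalman filter whose state vector holds the B-spline coefficients and whose measurement row is the basis evaluated at the incoming data point, can be sketched as follows; the paper's shift operation is not implemented, and the knot vector, noise levels and test signal are assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3
knots = np.concatenate(([0.0] * 4, np.linspace(0, 10, 12)[1:-1], [10.0] * 4))
n_coef = len(knots) - k - 1

def basis_row(x):
    """Row vector of all B-spline basis functions evaluated at x."""
    return np.array([BSpline(knots, np.eye(n_coef)[j], k)(x) for j in range(n_coef)])

# Kalman filter with the spline coefficients as a static state
c = np.zeros(n_coef)                 # state estimate (coefficients)
P = np.eye(n_coef) * 10.0            # state covariance
r = 0.05 ** 2                        # measurement noise variance

rng = np.random.default_rng(6)
for _ in range(500):                 # stream of (x, y) measurements
    x = rng.uniform(0, 10)
    y = np.sin(x) + rng.normal(0, 0.05)
    h = basis_row(x)
    S = h @ P @ h + r                # innovation variance (scalar)
    K = P @ h / S                    # Kalman gain
    c += K * (y - h @ c)
    P -= np.outer(K, h @ P)

x_test = np.linspace(0, 10, 200)
fit = BSpline(knots, c, k)(x_test)
print("max fit error:", np.max(np.abs(fit - np.sin(x_test))))
```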

  4. Continuous Groundwater Monitoring Collocated at USGS Streamgages

    Science.gov (United States)

    Constantz, J. E.; Eddy-Miller, C.; Caldwell, R.; Wheeer, J.; Barlow, J.

    2012-12-01

    The USGS Office of Groundwater funded a 2-year pilot study collocating groundwater wells for monitoring water level and temperature at several existing continuous streamgages in Montana and Wyoming, while the U.S. Army Corps of Engineers funded enhancements to streamgages in Mississippi. To increase spatial relevance within a given watershed, study sites were selected where near-stream groundwater was in connection with an appreciable aquifer, and where the logistics and cost of well installations were considered representative. After each well installation and surveying, groundwater level and temperature were easily either radio-transmitted or hardwired to the existing data acquisition system located in the streamgaging shelter. Since USGS field personnel regularly visit streamgages during routine streamflow measurements and streamgage maintenance, the close proximity of the observation wells resulted in minimal extra time to verify electronically transmitted measurements. After the field protocol was tuned, stream and nearby groundwater information were concurrently acquired at streamgages and transmitted to satellite from seven pilot-study sites extending over nearly 2,000 miles (3,200 km) of the central US from October 2009 until October 2011, for evaluating the scientific and engineering add-on value of the enhanced streamgage design. Examination of the four-parameter transmission from the seven pilot-study groundwater gaging stations reveals an internally consistent, dynamic data suite of continuous groundwater elevation and temperature in tandem with ongoing stream stage and temperature data. Qualitatively, the graphical information provides an appreciation of seasonal trends in stream exchanges with shallow groundwater, as well as thermal issues of concern for topics ranging from ice hazards to the suitability of fish refugia, while quantitatively this information provides a means for estimating flux exchanges through the streambed via heat-based inverse-type groundwater modeling. In June

  5. Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines

    KAUST Repository

    Barton, Michael

    2015-10-24

    We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights are, considered as a higher-dimensional point, a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as the source rule chosen.

  6. Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines

    KAUST Repository

    Barton, Michael; Calo, Victor M.

    2015-01-01

    We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights are, considered as a higher-dimensional point, a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as the source rule chosen.

  7. Examination of influential observations in penalized spline regression

    Science.gov (United States)

    Türkan, Semra

    2013-10-01

    In parametric or nonparametric regression models, the results of regression analysis are affected by anomalous observations in the data set, so detection of these observations is one of the major steps in regression analysis. Such observations can be detected by well-known influence measures, of which Pena's statistic is one. In this study, Pena's approach is formulated for penalized spline regression in terms of ordinary residuals and leverages. Real and artificial data are used to illustrate the effectiveness of Pena's statistic relative to Cook's distance in detecting influential observations. The results of the study clearly reveal that the proposed measure is superior to Cook's distance for detecting these observations in large data sets.

  8. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
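    A minimal illustration of a fixed-knot truncated-power spline basis of chosen polynomial degree is given below, fitted here by ordinary least squares only; embedding such a basis in the fixed and random effects of a linear mixed model (as the paper proposes) would require a mixed-model fitter and is not shown. The data and knot are synthetic.

```python
import numpy as np

def truncated_power_basis(x, knots, degree=2):
    """Design matrix [1, x, ..., x^degree, (x - k1)_+^degree, ...] for a
    fixed-knot regression spline of the given polynomial degree."""
    cols = [x ** d for d in range(degree + 1)]
    cols += [np.maximum(x - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0, 4, 120))                       # e.g. time since infection
y = np.piecewise(t, [t < 1.5, t >= 1.5],
                 [lambda s: 5 - 2 * s, lambda s: 2 + 0.3 * (s - 1.5)]) \
    + rng.normal(0, 0.2, t.size)                          # slope change at t = 1.5

X = truncated_power_basis(t, knots=[1.5], degree=1)       # linear spline, one knot
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted slopes before/after knot:", beta[1], beta[1] + beta[2])
```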

  9. Multi-Index Stochastic Collocation (MISC) for random elliptic PDEs

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2016-01-01

    In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.

  10. Multi-Index Stochastic Collocation (MISC) for random elliptic PDEs

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-01-06

    In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in the literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one-dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.

  11. Multi-Index Stochastic Collocation for random PDEs

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-03-28

    In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in the literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one-dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.

  12. Multi-Index Stochastic Collocation for random PDEs

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2016-01-01

    In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in the literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one-dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.

  13. Piecewise linear regression splines with hyperbolic covariates

    International Nuclear Information System (INIS)

    Cologne, John B.; Sposto, Richard

    1992-09-01

    Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and of Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations can be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
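    A hedged sketch of the idea, assuming the usual hyperbolic approximation to a broken line: each join point contributes a covariate sqrt((x - c)^2 + g^2), which tends to |x - c| as g -> 0, so g measures the curvature of the transition. The two-knot model, the starting values, and the use of scipy.optimize.curve_fit are illustrative choices, not the authors' code.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbolic_spline(x, b0, b1, b2, b3, c1, c2, g1, g2):
        # Linear trend plus one hyperbola per join point (c1, c2); g1, g2 control curvature.
        return (b0 + b1 * x
                + b2 * np.sqrt((x - c1)**2 + g1**2)
                + b3 * np.sqrt((x - c2)**2 + g2**2))

    rng = np.random.default_rng(1)
    x = np.linspace(0, 20, 300)
    y = hyperbolic_spline(x, 1.0, 0.5, 0.8, -0.6, 6.0, 14.0, 0.5, 0.5) + 0.2 * rng.normal(size=x.size)

    p0 = [1, 0.5, 1, -1, 5, 15, 1, 1]       # rough starting values, including the join points
    popt, pcov = curve_fit(hyperbolic_spline, x, y, p0=p0)
    ```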

  14. Trajectory control of an articulated robot with a parallel drive arm based on splines under tension

    Science.gov (United States)

    Yi, Seung-Jong

    Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration to place the end-effector at the desired position and orientation within the workspace and on following the specified path, which requires the trajectory planner. In addition, the consideration of joint velocity, acceleration, and jerk trajectories is essential for trajectory planning of industrial robots to obtain smooth operation. The newly designed 6 DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed in the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness, is selected. First, the forward kinematic and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., certain degree polynomial functions, cubic spline functions, and cubic spline functions under tension, are compared to select the best possible method to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension are selected for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. Also, this approach is compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented into the microprocessor based robot controller and
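    SciPy has no spline-under-tension routine, so the sketch below only illustrates the plain clamped cubic-spline alternative that the abstract compares against: via-point times, joint angles, and the zero boundary velocities are made-up values, and the spline derivatives give the velocity, acceleration, and (piecewise-constant) jerk profiles discussed above.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    t_knots = np.array([0.0, 1.0, 2.5, 4.0, 5.0])        # via-point times [s]
    q_knots = np.array([0.0, 30.0, 45.0, 20.0, 0.0])      # one joint's angles [deg]

    # Clamped cubic spline: zero velocity at both ends for a smooth start and stop.
    cs = CubicSpline(t_knots, q_knots, bc_type=((1, 0.0), (1, 0.0)))

    t = np.linspace(0.0, 5.0, 501)
    q, qd, qdd = cs(t), cs(t, 1), cs(t, 2)                # position, velocity, acceleration
    jerk = cs(t, 3)                                       # piecewise-constant jerk
    ```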

  15. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
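    A hedged sketch of the modelling strategy described above, using statsmodels with a patsy natural cubic regression spline basis; the synthetic data frame and its column names are made up, and the continuous-time AR(1) residual term used in the paper is not included.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for the cohort data: one row per child per visit.
    rng = np.random.default_rng(0)
    n_children, n_visits = 50, 8
    age = np.tile(np.linspace(0.1, 4.0, n_visits), n_children)
    child = np.repeat(np.arange(n_children), n_visits)
    child_effect = rng.normal(0, 2, n_children)[child]
    height = 50 + 18 * np.log1p(age) + child_effect + rng.normal(0, 1, age.size)
    growth = pd.DataFrame({"height": height, "age": age, "child_id": child})

    # Population curve via a natural cubic regression spline (patsy's cr), plus
    # subject-specific random intercepts and slopes.
    model = smf.mixedlm("height ~ cr(age, df=5)", data=growth,
                        groups=growth["child_id"], re_formula="~age")
    fit = model.fit(reml=True)
    print(fit.summary())
    ```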

  16. Statistical analysis of sediment toxicity by additive monotone regression splines

    NARCIS (Netherlands)

    Boer, de W.J.; Besten, den P.J.; Braak, ter C.J.F.

    2002-01-01

    Modeling nonlinearity and thresholds in dose-effect relations is a major challenge, particularly in noisy data sets. Here we show the utility of nonlinear regression with additive monotone regression splines. These splines lead almost automatically to the estimation of thresholds. We applied this

  17. Color management with a hammer: the B-spline fitter

    Science.gov (United States)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.

  18. Exponential B-splines and the partition of unity property

    DEFF Research Database (Denmark)

    Christensen, Ole; Massopust, Peter

    2012-01-01

    We provide an explicit formula for a large class of exponential B-splines. Also, we characterize the cases where the integer-translates of an exponential B-spline form a partition of unity up to a multiplicative constant. As an application of this result we construct explicitly given pairs of dual...

  19. About some properties of bivariate splines with shape parameters

    Science.gov (United States)

    Caliò, F.; Marchetti, E.

    2017-07-01

    The paper presents and proves geometrical properties of a particular bivariate spline function, built and algorithmically implemented in previous papers. The properties typical of this family of splines have an impact on the field of computer graphics, in particular on reverse engineering.

  20. LOCALLY REFINED SPLINES REPRESENTATION FOR GEOSPATIAL BIG DATA

    Directory of Open Access Journals (Sweden)

    T. Dokken

    2015-08-01

    When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently, a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-Splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU fp7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing bathymetry of the smooth sea and ocean floor. A drastic reduction is demonstrated in the bulk of the data representation compared to the size of input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection as new sensor data can efficiently be compared to the compact LR B-spline representation.

  1. Linear spline multilevel models for summarising childhood growth trajectories: A guide to their application using examples from five birth cohorts.

    Science.gov (United States)

    Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio Jd; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A

    2016-10-01

    Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. © The Author(s) 2013.

  2. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    Science.gov (United States)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of the terrestrial laser scanner, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but rather their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
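    A hedged sketch of the information-criterion part of the approach: least-squares cubic B-spline curves are fitted with an increasing number of control points and the AIC/BIC minimum is selected. Using the number of control points directly as the parameter count (rather than the VC-dimension-based measure the paper develops) and the uniform knot placement are illustrative simplifications.

    ```python
    import numpy as np
    from scipy.interpolate import make_lsq_spline

    def information_criteria(x, y, n_ctrl, k=3):
        n_interior = n_ctrl - k - 1                     # control points = interior knots + k + 1
        interior = np.linspace(x[0], x[-1], n_interior + 2)[1:-1]
        t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]
        spl = make_lsq_spline(x, y, t, k=k)
        rss = np.sum((y - spl(x))**2)
        n = len(x)
        aic = n * np.log(rss / n) + 2 * n_ctrl
        bic = n * np.log(rss / n) + n_ctrl * np.log(n)
        return aic, bic

    rng = np.random.default_rng(2)
    x = np.linspace(0, 1, 500)
    y = np.sin(6 * np.pi * x) + 0.1 * rng.normal(size=x.size)

    scores = {m: information_criteria(x, y, m) for m in range(5, 31)}
    best_aic = min(scores, key=lambda m: scores[m][0])
    best_bic = min(scores, key=lambda m: scores[m][1])
    ```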

  3. Testing knowledge of whole English collocations available for use in written production

    DEFF Research Database (Denmark)

    Revier, Robert Lee

    2014-01-01

    Testing knowledge of whole English collocations available for use in written production: Developing tests for use with intermediate and advanced Danish learners (Danish summary below) The present foreign language acquisition research derives its impetus from four assumptions regarding knowledge...... of English collocations. These are: (a) collocation knowledge can be conceptualized as an independent knowledge construct, (b) collocations are lexical items in their own right, (c) testing of collocation knowledge should also target knowledge of whole collocations, and (d) the learning burden of a whole...... the development of Danish EFL learners’ productive knowledge of whole English collocations. Five empirical studies were designed to generate information that would shed light on the reliability and validity of the CONTRIX as a measure of collocation knowledge available for use in written production. Study 1...

  4. Landmark-based elastic registration using approximating thin-plate splines.

    Science.gov (United States)

    Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H

    2001-06-01

    We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows landmark localization errors to be taken into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach which is based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
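    A minimal 2-D sketch of the approximating (rather than interpolating) behaviour, assuming SciPy >= 1.7: a positive smoothing weight in RBFInterpolator's thin-plate-spline kernel lets the warp deviate from noisy landmarks, which corresponds to the isotropic-error case above; the anisotropic case needs the paper's own functional. The landmark coordinates below are made up.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    src = np.array([[10., 10.], [80., 15.], [50., 50.], [20., 75.], [85., 80.]])   # source landmarks
    dst = src + np.array([[2., 1.], [-1., 2.], [3., -2.], [1., 1.], [-2., -1.]])   # noisy targets

    # One thin-plate spline per output coordinate; smoothing > 0 gives the approximating spline.
    warp = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=5.0)

    grid = np.stack(np.meshgrid(np.linspace(0, 100, 11),
                                np.linspace(0, 100, 11)), axis=-1).reshape(-1, 2)
    warped_grid = warp(grid)          # where each grid point is mapped by the estimated warp
    ```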

  5. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n3). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  6. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping; Huang, Jianhua Z.; Zhang, Nan

    2015-01-01

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n3). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  7. Presenting collocates in a dictionary of computing and the Internet according to user needs

    DEFF Research Database (Denmark)

    Leroyer, Patrick; L'Homme, Marie-Claude; Jousse, Anne-Laure

    2011-01-01

    This paper presents a novel method for organizing and presenting collocations in a specialized dictionary of computing and the Internet. This work is undertaken in order to meet a specific user need, i.e. that of searching for a collocate (or a short list of collocates) that expresses a specific...

  8. Examining Second Language Receptive Knowledge of Collocation and Factors That Affect Learning

    Science.gov (United States)

    Nguyen, Thi My Hang; Webb, Stuart

    2017-01-01

    This study investigated Vietnamese EFL learners' knowledge of verb-noun and adjective-noun collocations at the first three 1,000 word frequency levels, and the extent to which five factors (node word frequency, collocation frequency, mutual information score, congruency, and part of speech) predicted receptive knowledge of collocation. Knowledge…

  9. Modelling and Simulation of a Packed Bed of Pulp Fibers Using Mixed Collocation Method

    Directory of Open Access Journals (Sweden)

    Ishfaq Ahmad Ganaie

    2013-01-01

    A convenient computational approach for solving a mathematical model of diffusion-dispersion during flow through a packed bed is presented. The algorithm is based on the mixed collocation method, which is particularly useful for solving stiff systems arising in chemical and process engineering. The convergence of the method is found to be of order 2 using the roots of the shifted Chebyshev polynomial. The model is verified using literature data. The method provides a convenient check on the accuracy of the results over a wide range of parameters, namely Peclet numbers. Breakthrough curves are plotted to check the effect of the Peclet number on average and exit solute concentrations.

  10. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    Science.gov (United States)

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R(2)  = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Meshfree Local Radial Basis Function Collocation Method with Image Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Baek, Seung Ki; Kim, Minjae [Pukyong National University, Busan (Korea, Republic of)

    2017-07-15

    We numerically solve two-dimensional heat diffusion problems by using a simple variant of the meshfree local radial-basis function (RBF) collocation method. The main idea is to include an additional set of sample nodes outside the problem domain, similarly to the method of images in electrostatics, to perform collocation on the domain boundaries. We can thereby take into account the temperature profile as well as its gradients specified by boundary conditions at the same time, which holds true even for a node where two or more boundaries meet with different boundary conditions. We argue that the image method is computationally efficient when combined with the local RBF collocation method, whereas the addition of image nodes becomes very costly in case of the global collocation. We apply our modified method to a benchmark test of a boundary value problem, and find that this simple modification reduces the maximum error from the analytic solution significantly. The reduction is small for an initial value problem with simpler boundary conditions. We observe increased numerical instability, which has to be compensated for by a sufficient number of sample nodes and/or more careful parameter choices for time integration.

  12. A Line-Tau Collocation Method for Partial Differential Equations ...

    African Journals Online (AJOL)

    This paper deals with the numerical solution of second order linear partial differential equations with the use of the method of lines coupled with the tau collocation method. The method of lines is used to convert the partial differential equation (PDE) to a sequence of ordinary differential equations (ODEs) which is then ...

  13. Multimodal interaction design in collocated mobile phone use

    NARCIS (Netherlands)

    El-Ali, A.; Lucero, A.; Aaltonen, V.

    2011-01-01

    In the context of the Social and Spatial Interactions (SSI) platform, we explore how multimodal interaction design (input and output) can augment and improve the experience of collocated, collaborative activities using mobile phones. Based largely on our prototype evaluations, we reflect on and

  14. Sinc-collocation method for solving the Blasius equation

    International Nuclear Information System (INIS)

    Parand, K.; Dehghan, Mehdi; Pirkhedri, A.

    2009-01-01

    The sinc-collocation method is applied for solving the Blasius equation, which comes from the boundary layer equations. It is well known that the sinc procedure converges to the solution at an exponential rate. Comparison with Howarth's and Asaithambi's numerical solutions reveals that the proposed method is of high accuracy and reduces the solution of the Blasius equation to the solution of a system of algebraic equations.

  15. Lexical richness and collocational competence in second-language writing

    NARCIS (Netherlands)

    Vedder, I.; Benigno, V.

    2016-01-01

    In this article we report on an experiment set up to investigate lexical richness and collocational competence in the written production of 39 low-intermediate and intermediate learners of Italian L2. Lexical richness was assessed by means of a lexical profiling method inspired by Laufer and Nation

  16. Higher order multipoles and splines in plasma simulations

    International Nuclear Information System (INIS)

    Okuda, H.; Cheng, C.Z.

    1978-01-01

    The reduction of spatial grid effects in plasma simulations has been studied numerically using higher order multipole expansions and the spline method in one dimension. It is found that, while keeping the higher order moments such as quadrupole and octopole moments substantially reduces the grid effects, quadratic and cubic splines in general have better stability properties for numerical plasma simulations when the Debye length is much smaller than the grid size. In particular the spline method may be useful in three-dimensional simulations for plasma confinement where the grid size in the axial direction is much greater than the Debye length. (Auth.)

  17. Higher-order multipoles and splines in plasma simulations

    International Nuclear Information System (INIS)

    Okuda, H.; Cheng, C.Z.

    1977-12-01

    Reduction of spatial grid effects in plasma simulations has been studied numerically using higher order multipole expansions and spline method in one dimension. It is found that, while keeping the higher order moments such as quadrupole and octopole moments substantially reduces the grid effects, quadratic and cubic splines in general have better stability properties for numerical plasma simulations when the Debye length is much smaller than the grid size. In particular, spline method may be useful in three dimensional simulations for plasma confinement where the grid size in the axial direction is much greater than the Debye length

  18. Detrending of non-stationary noise data by spline techniques

    International Nuclear Information System (INIS)

    Behringer, K.

    1989-11-01

    An off-line method for detrending non-stationary noise data has been investigated. It uses a least squares spline approximation of the noise data with equally spaced breakpoints. Subtraction of the spline approximation from the noise signal at each data point gives a residual noise signal. The method acts as a high-pass filter with very sharp frequency cutoff. The cutoff frequency is determined by the breakpoint distance. The steepness of the cutoff is controlled by the spline order. (author) 12 figs., 1 tab., 5 refs
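    A hedged sketch of the detrending scheme, assuming a least-squares cubic spline on equally spaced breakpoints: subtracting the spline from the record leaves a residual that acts as a high-pass filter whose cutoff is set by the breakpoint spacing. The record, breakpoint distance, and spline order below are illustrative choices.

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 100.0, 5000)                                        # time [s]
    signal = 0.05 * t + np.sin(0.05 * t) + 0.2 * rng.normal(size=t.size)     # drifting noise record

    breakpoint_distance = 10.0                                # sets the effective cutoff frequency
    interior_knots = np.arange(t[0] + breakpoint_distance, t[-1], breakpoint_distance)
    trend = LSQUnivariateSpline(t, signal, interior_knots, k=3)

    residual = signal - trend(t)                              # detrended (high-pass) signal
    ```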

  19. Nonlinear bias compensation of ZiYuan-3 satellite imagery with cubic splines

    Science.gov (United States)

    Cao, Jinshan; Fu, Jianhong; Yuan, Xiuxiao; Gong, Jianya

    2017-11-01

    Like many high-resolution satellites such as the ALOS, MOMS-2P, QuickBird, and ZiYuan1-02C satellites, the ZiYuan-3 satellite suffers from different levels of attitude oscillations. As a result of such oscillations, the rational polynomial coefficients (RPCs) obtained using a terrain-independent scenario often have nonlinear biases. In the sensor orientation of ZiYuan-3 imagery based on a rational function model (RFM), these nonlinear biases cannot be effectively compensated by an affine transformation. The sensor orientation accuracy is thereby worse than expected. In order to eliminate the influence of attitude oscillations on the RFM-based sensor orientation, a feasible nonlinear bias compensation approach for ZiYuan-3 imagery with cubic splines is proposed. In this approach, no actual ground control points (GCPs) are required to determine the cubic splines. First, the RPCs are calculated using a three-dimensional virtual control grid generated based on a physical sensor model. Second, one cubic spline is used to model the residual errors of the virtual control points in the row direction and another cubic spline is used to model the residual errors in the column direction. Then, the estimated cubic splines are used to compensate the nonlinear biases in the RPCs. Finally, the affine transformation parameters are used to compensate the residual biases in the RPCs. Three ZiYuan-3 images were tested. The experimental results showed that before the nonlinear bias compensation, the residual errors of the independent check points were nonlinearly biased. Even if the number of GCPs used to determine the affine transformation parameters was increased from 4 to 16, these nonlinear biases could not be effectively compensated. After the nonlinear bias compensation with the estimated cubic splines, the influence of the attitude oscillations could be eliminated. The RFM-based sensor orientation accuracies of the three ZiYuan-3 images reached 0.981 pixels, 0.890 pixels, and 1

  20. Modeling terminal ballistics using blending-type spline surfaces

    Science.gov (United States)

    Pedersen, Aleksander; Bratlie, Jostein; Dalmo, Rune

    2014-12-01

    We explore using GERBS, a blending-type spline construction, to represent deformable thin plates and model terminal ballistics. Strategies to construct geometry for different scenarios of terminal ballistics are proposed.

  1. Topology optimization based on spline-based meshfree method using topological derivatives

    International Nuclear Information System (INIS)

    Hur, Junyoung; Youn, Sung-Kie; Kang, Pilseong

    2017-01-01

    The spline-based meshfree method (SBMFM) originates from isogeometric analysis (IGA), which integrates design and analysis through non-uniform rational B-spline (NURBS) basis functions. SBMFM utilizes the trimming technique of CAD systems by representing the domain using NURBS curves. In this work, an explicit boundary topology optimization using SBMFM is presented with an effective boundary update scheme. There have been similar works on this subject; however, unlike previous works, where a semi-analytic method for calculating design sensitivities is employed, here the design update is done by using topological derivatives. In this research, the topological derivative is used to derive the sensitivity of boundary curves and to create new holes. Based on the values of the topological derivatives, the shape of the boundary curves is updated. The topological change is achieved by insertion and removal of inner holes. The presented approach is validated through several compliance minimization problems.

  2. Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.

    Science.gov (United States)

    Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco

    2015-04-20

    Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize phase retrieval as a regularized inverse problem and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN), which adopts the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced and higher soft-tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information for mammographic investigations.

  3. Finite nucleus Dirac mean field theory and random phase approximation using finite B splines

    International Nuclear Information System (INIS)

    McNeil, J.A.; Furnstahl, R.J.; Rost, E.; Shepard, J.R.; Department of Physics, University of Maryland, College Park, Maryland 20742; Department of Physics, University of Colorado, Boulder, Colorado 80309)

    1989-01-01

    We calculate the finite nucleus Dirac mean field spectrum in a Galerkin approach using finite basis splines. We review the method and present results for the relativistic σ-ω model for the closed-shell nuclei ¹⁶O and ⁴⁰Ca. We study the convergence of the method as a function of the size of the basis and the closure properties of the spectrum using an energy-weighted dipole sum rule. We apply the method to the Dirac random-phase-approximation response and present results for the isoscalar 1⁻ and 3⁻ longitudinal form factors of ¹⁶O and ⁴⁰Ca. We also use a B-spline spectral representation of the positive-energy projector to evaluate partial energy-weighted sum rules and compare with nonrelativistic sum rule results.

  4. Topology optimization based on spline-based meshfree method using topological derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Junyoung; Youn, Sung-Kie [KAIST, Daejeon (Korea, Republic of); Kang, Pilseong [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)

    2017-05-15

    The spline-based meshfree method (SBMFM) originates from isogeometric analysis (IGA), which integrates design and analysis through non-uniform rational B-spline (NURBS) basis functions. SBMFM utilizes the trimming technique of CAD systems by representing the domain using NURBS curves. In this work, an explicit boundary topology optimization using SBMFM is presented with an effective boundary update scheme. There have been similar works on this subject; however, unlike previous works, where a semi-analytic method for calculating design sensitivities is employed, here the design update is done by using topological derivatives. In this research, the topological derivative is used to derive the sensitivity of boundary curves and to create new holes. Based on the values of the topological derivatives, the shape of the boundary curves is updated. The topological change is achieved by insertion and removal of inner holes. The presented approach is validated through several compliance minimization problems.

  5. B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation

    Directory of Open Access Journals (Sweden)

    Frederic Precioso

    2002-06-01

    This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contours are a powerful technique for segmentation; however, most of these methods are implemented using level sets. Although level-set methods provide accurate segmentation, they suffer from a large computational cost. We propose to use a regular B-spline parametric method to provide fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points 2^j, depending on the level of detail desired. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We introduce a length penalty, which improves both smoothness and accuracy. We then show some experiments on real video sequences.

  6. Fast compact algorithms and software for spline smoothing

    CERN Document Server

    Weinert, Howard L

    2012-01-01

    Fast Compact Algorithms and Software for Spline Smoothing investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.
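    For comparison, a minimal sketch of GCV-driven cubic smoothing splines using an off-the-shelf routine (SciPy >= 1.10), not the book's Cholesky/QR/FFT implementations; the test signal is made up.

    ```python
    import numpy as np
    from scipy.interpolate import make_smoothing_spline

    rng = np.random.default_rng(4)
    x = np.linspace(0, 1, 400)
    y = np.exp(-3 * x) * np.sin(8 * np.pi * x) + 0.1 * rng.normal(size=x.size)

    spl = make_smoothing_spline(x, y)     # lam=None: smoothing chosen by minimizing the GCV score
    smoothed = spl(x)
    ```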

  7. RBF Multiscale Collocation for Second Order Elliptic Boundary Value Problems

    KAUST Repository

    Farrell, Patricio

    2013-01-01

    In this paper, we discuss multiscale radial basis function collocation methods for solving elliptic partial differential equations on bounded domains. The approximate solution is constructed in a multilevel fashion, each level using compactly supported radial basis functions of smaller scale on an increasingly fine mesh. On each level, standard symmetric collocation is employed. A convergence theory is given, which builds on recent theoretical advances for multiscale approximation using compactly supported radial basis functions. We are able to show that the convergence is linear in the number of levels. We also discuss the condition numbers of the arising systems and the effect of simple, diagonal preconditioners, now proving rigorously previous numerical observations. © 2013 Society for Industrial and Applied Mathematics.

  8. A collocation finite element method with prior matrix condensation

    International Nuclear Information System (INIS)

    Sutcliffe, W.J.

    1977-01-01

    For thin shells with general loading, sixteen degrees of freedom have been used in a previous finite element solution procedure based on a collocation method instead of the usual variational procedures. Although the number of elements required was relatively small, the final matrix for the simultaneous solution of all unknowns could become large for a complex compound structure. The purpose of the present paper is to demonstrate a method of reducing the final matrix size, allowing solution for large structures with comparatively small computer storage requirements while retaining the accuracy given by high-order displacement functions. Among the collocation conditions are a number of equilibrium conditions which must be satisfied independently of the overall compatibility of forces and deflections for the complete structure. (Auth.)

  9. Part 6. Internationalization and collocation of FBR fuel cycle facilities

    International Nuclear Information System (INIS)

    Stevenson, M.G.; Abramson, P.B.; LeSage, L.G.

    1980-01-01

    This report examines some of the non-proliferation, technical, and institutional aspects of internationalization and/or collocation of major facilities of the Fast Breeder Reactor (FBR) fuel cycle. The national incentives and disincentives for establishment of FBR Fuel Cycle Centers are enumerated. The technical, legal, and administrative considerations in determining the feasibility of FBR Fuel Cycle Centers are addressed by making comparisons with Light Water Reactor (LWR) centers which have been studied in detail by the IAEA and UNSRC

  10. Numerical simulation of GEW equation using RBF collocation method

    Directory of Open Access Journals (Sweden)

    Hamid Panahipour

    2012-08-01

    The generalized equal width (GEW) equation is solved numerically by a meshless method based on a global collocation with standard types of radial basis functions (RBFs). Test problems including propagation of single solitons, interaction of two and three solitons, development of the Maxwellian initial condition pulses, wave undulation and wave generation are used to indicate the efficiency and accuracy of the method. Comparisons are made between the results of the proposed method and some other published numerical methods.

  11. Application of collocation meshless method to eigenvalue problem

    International Nuclear Information System (INIS)

    Saitoh, Ayumu; Matsui, Nobuyuki; Itoh, Taku; Kamitani, Atsushi; Nakamura, Hiroaki

    2012-01-01

    The numerical method for solving the nonlinear eigenvalue problem has been developed by using the collocation Element-Free Galerkin Method (EFGM) and its performance has been numerically investigated. The results of computations show that the approximate solution of the nonlinear eigenvalue problem can be obtained stably by using the developed method. Therefore, it can be concluded that the developed method is useful for solving the nonlinear eigenvalue problem. (author)

  12. Teaching vocabulary using collocations versus using definitions in EFL classes

    OpenAIRE

    Altınok, Şerife İper

    2000-01-01

    Ankara : Institute of Economics and Social Sciences of Bilkent Univ., 2000. Thesis (Master's) -- Bilkent University, 2000. Includes bibliographical references leaves 40-43 Teaching words in collocations is a comparatively new technique and it is accepted as an effective one in vocabulary teaching. The purpose of this study was to find out whether teaching vocabulary would result in better learning and remembering vocabulary items. This study investigated the differences betw...

  13. Let's collocate: student generated worksheets as a motivational tool

    OpenAIRE

    Simpson, Adam John

    2006-01-01

    This article discusses the process of producing collocation worksheets and the values of these worksheets as a motivational tool within a tertiary level preparatory English program. Firstly, the method by which these worksheets were produced is described, followed by an analysis of their effectiveness as a resource in terms of student motivation, personalisation, involvement in the development of the curriculum and in raising awareness of corpus linguistics and its applications.

  14. Pseudospectral collocation methods for fourth order differential equations

    Science.gov (United States)

    Malek, Alaeddin; Phillips, Timothy N.

    1994-01-01

    Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.

  15. Composite multi-modal vibration control for a stiffened plate using non-collocated acceleration sensor and piezoelectric actuator

    International Nuclear Information System (INIS)

    Li, Shengquan; Li, Juan; Mo, Yueping; Zhao, Rong

    2014-01-01

    A novel active method for multi-mode vibration control of an all-clamped stiffened plate (ACSP) is proposed in this paper, using the extended-state-observer (ESO) approach based on non-collocated acceleration sensors and piezoelectric actuators. Considering the estimated capacity of ESO for system state variables, output superposition and control coupling of other modes, external excitation, and model uncertainties simultaneously, a composite control method, i.e., the ESO based vibration control scheme, is employed to ensure the lumped disturbances and uncertainty rejection of the closed-loop system. The phenomenon of phase hysteresis and time delay, caused by non-collocated sensor/actuator pairs, degrades the performance of the control system, even inducing instability. To solve this problem, a simple proportional differential (PD) controller and acceleration feed-forward with an output predictor design produce the control law for each vibration mode. The modal frequencies, phase hysteresis loops and phase lag values due to non-collocated placement of the acceleration sensor and piezoelectric patch actuator are experimentally obtained, and the phase lag is compensated by using the Smith Predictor technology. In order to improve the vibration control performance, the chaos optimization method based on logistic mapping is employed to auto-tune the parameters of the feedback channel. The experimental control system for the ACSP is tested using the dSPACE real-time simulation platform. Experimental results demonstrate that the proposed composite active control algorithm is an effective approach for suppressing multi-modal vibrations. (paper)

  16. ANALYSIS OF SPECIALISED COLLOCATIONS IN THE AREA OF REMOTE SENSING IN THE PERSPECTIVE OF PHRASEOLOGY

    Directory of Open Access Journals (Sweden)

    Diva Cardoso de CAMARGO

    2013-12-01

    The aim of this research is to build and analyze a parallel corpus in the field of remote sensing in order to identify, according to their frequency, specialized collocations in English and then search for their equivalents in Portuguese. The research is based on the interdisciplinary approach of Corpus-Based Translation Studies (BAKER, 1995; CAMARGO, 2007), Corpus Linguistics (BERBER SARDINHA, 2004; TOGNINI-BONELLI, 2001), Phraseology (ORENHA-OTTAIANO, 2009; PAVEL, 1993), and some principles of Terminology (BARROS, 2004). For manipulating the corpora, the program WordSmith Tools (SCOTT, 2012), version 6.0, is used. To support this study, two comparable corpora in English and Portuguese were also built from articles published in both national and international journals in remote sensing. The results show that the collocations in Portuguese seem to be still in the process of conventionalization, as the translators made use of greater variation in their translational options, which can be a way to make the text clearer for the reader.

  17. An empirical understanding of triple collocation evaluation measure

    Science.gov (United States)

    Scipal, Klaus; Doubkova, Marcela; Hegyova, Alena; Dorigo, Wouter; Wagner, Wolfgang

    2013-04-01

    The triple collocation method is an advanced evaluation method that has been used in the soil moisture field for only about half a decade. The method requires three datasets with independent error structures that represent an identical phenomenon. The main advantages of the method are that it a) does not require a reference dataset that has to be considered to represent the truth, b) limits the effect of random and systematic errors of the other two datasets, and c) simultaneously assesses the errors of all three datasets. The objective of this presentation is to assess the triple collocation error (Tc) of the ASAR Global Mode Surface Soil Moisture (GM SSM) 1 km dataset and to highlight problems of the method related to its ability to cancel the effect of errors in the ancillary datasets. In particular, the goal is a) to investigate trends in Tc related to the change in spatial resolution from 5 to 25 km, b) to investigate trends in Tc related to the choice of a hydrological model, and c) to study the relationship between Tc and other absolute evaluation methods (namely RMSE and error propagation, EP). The triple collocation method is implemented using ASAR GM, AMSR-E, and a model (either AWRA-L, GLDAS-NOAH, or ERA-Interim). First, the significance of the relationship between the three soil moisture datasets was tested, which is a prerequisite for the triple collocation method. Second, the trends in Tc related to the choice of the third reference dataset and to scale were assessed. For this purpose the triple collocation is repeated replacing AWRA-L with two different globally available model reanalysis datasets operating at different spatial resolutions (ERA-Interim and GLDAS-NOAH). Finally, the retrieved results were compared to the results of the RMSE and EP evaluation measures. Our results demonstrate that the Tc method does not eliminate the random and time-variant systematic errors of the second and the third dataset used in the Tc. The possible reasons include the fact a) that the TC
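    A hedged numpy sketch of the classical triple collocation error model referred to above: for three collocated, bias-corrected datasets with mutually independent errors, each error variance follows from cross-multiplying the pairwise differences. The inputs are assumed to be already rescaled to a common reference; negative variance estimates (possible with short or dependent records) signal violated assumptions.

    ```python
    import numpy as np

    def triple_collocation_errors(x, y, z):
        # Work with anomalies so that additive biases cancel.
        x, y, z = (v - v.mean() for v in (x, y, z))
        var_x = np.mean((x - y) * (x - z))
        var_y = np.mean((y - x) * (y - z))
        var_z = np.mean((z - x) * (z - y))
        # sqrt of a negative estimate yields NaN, flagging a violated assumption.
        return np.sqrt([var_x, var_y, var_z])      # RMS errors of the three datasets
    ```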

  18. Shape Preserving Interpolation Using C2 Rational Cubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2016-01-01

    This paper discusses the construction of a new C2 rational cubic spline interpolant with a cubic numerator and quadratic denominator. The idea has been extended to shape-preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. The sufficient conditions for positivity are derived on one parameter, γi, while the other two parameters, αi and βi, are free parameters that can be used to change the final shape of the resulting interpolating curves. This enables the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably, our scheme is easy to use, does not require knot insertion, and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Comparisons with existing schemes have also been done in detail. All presented numerical results show that the new C2 rational cubic spline gives very smooth interpolating curves compared to some established rational cubic schemes. An error analysis for the case where the function to be interpolated is f(t) ∈ C3[t0, tn] is also investigated in detail.

  19. A method for stochastic constrained optimization using derivative-free surrogate pattern search and collocation

    International Nuclear Information System (INIS)

    Sankaran, Sethuraman; Audet, Charles; Marsden, Alison L.

    2010-01-01

    Recent advances in coupling novel optimization methods to large-scale computing problems have opened the door to tackling a diverse set of physically realistic engineering design problems. A large computational overhead is associated with computing the cost function for most practical problems involving complex physical phenomena. Such problems are also plagued with uncertainties in a diverse set of parameters. We present a novel stochastic derivative-free optimization approach for tackling such problems. Our method extends the previously developed surrogate management framework (SMF) to allow for uncertainties in both simulation parameters and design variables. The stochastic collocation scheme is employed for stochastic variables whereas Kriging based surrogate functions are employed for the cost function. This approach is tested on four numerical optimization problems and is shown to have significant improvement in efficiency over traditional Monte-Carlo schemes. Problems with multiple probabilistic constraints are also discussed.

  20. Vocabulary and Receptive Knowledge of English Collocations among Swedish Upper Secondary School Students

    OpenAIRE

    Bergström, Kerstin

    2008-01-01

    The aim of this study is to examine the vocabulary and receptive collocation knowledge in English among Swedish upper secondary school students. The primary material consists of two vocabulary tests, one collocation test, and a background questionnaire. The first research question concerns whether the students who receive a major part of their education in English have a higher level of vocabulary and receptive collocation knowledge in English than those who are taught primarily in Swedish. T...

  1. [Multimodal medical image registration using cubic spline interpolation method].

    Science.gov (United States)

    He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-12-01

    Based on the characteristic of the PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which the cubic spline interpolation method is applied to realize the interpolation of PET-CT image series, then registration is carried out by using mutual information algorithm and finally the improved principal component analysis method is used for the fusion of PET-CT multimodal images to enhance the visual effect of PET image, thus satisfied registration and fusion results are obtained. The cubic spline interpolation method is used for reconstruction to restore the missed information between image slices, which can compensate for the shortage of previous registration methods, improve the accuracy of the registration, and make the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing 3D-CRT (3D Conformal Radiation Therapy) system.
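
    The inter-slice interpolation step described above can be illustrated with a short, hedged sketch: cubic-spline resampling of an image volume along the slice axis using SciPy. The volume, slice positions and resampling factor below are synthetic placeholders, not the paper's PET-CT data or implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A small synthetic PET-like volume: 10 axial slices of 64x64 voxels.
rng = np.random.default_rng(5)
z_measured = np.arange(10, dtype=float)          # physical slice positions
volume = rng.random((10, 64, 64))

# Cubic-spline interpolation along the slice axis to restore "missing"
# information between slices (here: roughly 4x finer slice spacing).
cs = CubicSpline(z_measured, volume, axis=0)
z_fine = np.linspace(z_measured[0], z_measured[-1], 37)
volume_fine = cs(z_fine)                         # shape (37, 64, 64)
print(volume_fine.shape)
```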

  2. Illumination estimation via thin-plate spline interpolation.

    Science.gov (United States)

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

    Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k medians are applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
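
    As a rough illustration of the idea (not the authors' code), the sketch below fits a thin-plate spline interpolant from image features to illuminant chromaticity with SciPy's RBFInterpolator; the feature vectors and chromaticities are synthetic stand-ins for the thumbnail training set described above.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Training set: low-dimensional image features (stand-ins for thumbnail
# statistics) paired with the known illuminant chromaticity (r, g).
rng = np.random.default_rng(6)
features = rng.random((150, 4))                       # hypothetical image features
chroma = np.column_stack([                            # synthetic ground-truth (r, g)
    0.3 + 0.1 * features[:, 0],
    0.3 + 0.1 * features[:, 1],
])

# Thin-plate spline interpolant over the nonuniformly sampled feature space.
tps = RBFInterpolator(features, chroma, kernel="thin_plate_spline", smoothing=1e-6)

# Estimate the illuminant chromaticity of a new image from its features.
new_features = rng.random((1, 4))
print(tps(new_features))                              # predicted (r, g)
```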

  3. Bayesian Analysis for Penalized Spline Regression Using WinBUGS

    Directory of Open Access Journals (Sweden)

    Ciprian M. Crainiceanu

    2005-09-01

    Full Text Available Penalized splines can be viewed as BLUPs in a mixed model framework, which allows the use of mixed model software for smoothing. Thus, software originally developed for Bayesian analysis of mixed models can be used for penalized spline regression. Bayesian inference for nonparametric models enjoys the flexibility of nonparametric models and the exact inference provided by the Bayesian inferential machinery. This paper provides a simple, yet comprehensive, set of programs for the implementation of nonparametric Bayesian analysis in WinBUGS. Good mixing properties of the MCMC chains are obtained by using low-rank thin-plate splines, while simulation times per iteration are reduced employing WinBUGS specific computational tricks.

  4. Point based interactive image segmentation using multiquadrics splines

    Science.gov (United States)

    Meena, Sachin; Duraisamy, Prakash; Palniappan, Kannappan; Seetharaman, Guna

    2017-05-01

    Multiquadrics (MQ) are radial basis spline function that can provide an efficient interpolation of data points located in a high dimensional space. MQ were developed by Hardy to approximate geographical surfaces and terrain modelling. In this paper we frame the task of interactive image segmentation as a semi-supervised interpolation where an interpolating function learned from the user provided seed points is used to predict the labels of unlabeled pixel and the spline function used in the semi-supervised interpolation is MQ. This semi-supervised interpolation framework has a nice closed form solution which along with the fact that MQ is a radial basis spline function lead to a very fast interactive image segmentation process. Quantitative and qualitative results on the standard datasets show that MQ outperforms other regression based methods, GEBS, Ridge Regression and Logistic Regression, and popular methods like Graph Cut,4 Random Walk and Random Forest.6
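
    A minimal sketch of the underlying mechanism, under the assumption that pixel features are simply (x, y, intensity): a multiquadric interpolant is fitted to a handful of user seed labels and evaluated on every pixel, and the sign of the interpolated score gives a segmentation. This is only the interpolation core, not the paper's full semi-supervised formulation or its closed-form solution.

```python
import numpy as np

def multiquadric_interpolate(seeds, labels, query, c=1.0):
    """Fit a multiquadric RBF interpolant, phi(r) = sqrt(r^2 + c^2),
    to labelled seed points and evaluate it at the query points."""
    r_ss = np.linalg.norm(seeds[:, None, :] - seeds[None, :, :], axis=-1)
    w = np.linalg.solve(np.sqrt(r_ss ** 2 + c ** 2), labels.astype(float))
    r_qs = np.linalg.norm(query[:, None, :] - seeds[None, :, :], axis=-1)
    return np.sqrt(r_qs ** 2 + c ** 2) @ w

# Toy "image": features are (x, y, intensity); the user marks a few foreground (+1)
# and background (-1) seed pixels, and all other pixels get an interpolated score.
rng = np.random.default_rng(7)
H, W = 32, 32
yy, xx = np.mgrid[0:H, 0:W]
intensity = (np.hypot(xx - 16, yy - 16) < 8).astype(float) + rng.normal(0, 0.05, (H, W))
pixels = np.column_stack([xx.ravel() / W, yy.ravel() / H, intensity.ravel()])

seed_idx = np.array([16 * W + 16, 18 * W + 15, 2 * W + 2, 30 * W + 28])  # 2 fg, 2 bg
seed_lab = np.array([1, 1, -1, -1])
scores = multiquadric_interpolate(pixels[seed_idx], seed_lab, pixels)
segmentation = (scores > 0).reshape(H, W)             # foreground mask
print(segmentation.sum(), "pixels labelled foreground")
```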

  5. Data assimilation using Bayesian filters and B-spline geological models

    KAUST Repository

    Duan, Lian

    2011-04-01

    This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.

  6. Data assimilation using Bayesian filters and B-spline geological models

    International Nuclear Information System (INIS)

    Duan Lian; Farmer, Chris; Hoteit, Ibrahim; Luo Xiaodong; Moroz, Irene

    2011-01-01

    This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.

  7. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    Directory of Open Access Journals (Sweden)

    Shanshan He

    2015-10-01

    Full Text Available Piecewise linear (G01-based tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortages such as numerical instability, lack of chord error constraint, and lack of assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA is an efficient method for data fitting that solves the numerical instability problem. However, it does not consider chord errors and needs more work to ensure ironclad results for commercial applications. In this paper, we use LSPIA method incorporating Energy term (ELSPIA to avoid the numerical instability, and lower chord errors by using stretching energy term. We implement several algorithm improvements, including (1 an improved technique for initial control point determination over Dominant Point Method, (2 an algorithm that updates foot point parameters as needed, (3 analysis of the degrees of freedom of control points to insert new control points only when needed, (4 chord error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented for verification of quality and efficiency. Comparisons with other known solutions are included to evaluate the worthiness of the proposed solution.
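
    For orientation, the sketch below implements only the basic LSPIA iteration for least-squares B-spline curve fitting, without the energy term, chord-error refinement, dominant-point initialization or knot insertion described in the paper; the parameters, knot vector and data are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

# Data points to be approximated by a cubic B-spline curve.
rng = np.random.default_rng(8)
u = np.linspace(0, 1, 300)                       # chord-length-like parameters
Q = np.column_stack([u, np.sin(2 * np.pi * u)]) + rng.normal(0, 0.005, (300, 2))

degree, n_ctrl = 3, 12
knots = np.concatenate([[0.0] * degree,
                        np.linspace(0.0, 1.0, n_ctrl - degree + 1),
                        [1.0] * degree])
# Collocation matrix A[i, j] = B_j(u_i).
A = np.column_stack([BSpline(knots, np.eye(n_ctrl)[j], degree)(u)
                     for j in range(n_ctrl)])

# LSPIA: progressively move control points along accumulated residual directions.
P = Q[np.linspace(0, len(Q) - 1, n_ctrl).astype(int)]   # initial control points
mu = 1.0 / np.linalg.eigvalsh(A.T @ A).max()            # convergence needs mu < 2/lambda_max
for _ in range(200):
    P = P + mu * A.T @ (Q - A @ P)                      # P_j += mu * sum_i B_j(u_i) * residual_i

print(np.max(np.linalg.norm(Q - A @ P, axis=1)))        # maximum fitting deviation
```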

  8. The treatment of lexical collocations in EFL coursebooks in the Estonian secondary school context

    Directory of Open Access Journals (Sweden)

    Liina Vassiljev

    2015-04-01

    Full Text Available The article investigates lexical collocations encountered in English as a Foreign Language (EFL) instruction in Estonian upper secondary schools. This is achieved through a statistical analysis of collocations featuring in three coursebooks where the collocations found are analysed in terms of their type, frequency and usefulness index by studying them through an online language corpus (Collins Wordbanks Online). The coursebooks are systematically compared and contrasted relying upon the data gathered. The results of the study reveal that the frequency and range of lexical collocations in a language corpus have not been regarded as an essential criterion for their selection and practice by any of the coursebook authors under discussion.

  9. 2-Dimensional B-Spline Algorithms with Applications to Ray Tracing in Media of Spatially-Varying Refractive Index

    Science.gov (United States)

    2007-08-01

    In the approach, photon trajectories through a multi-layer biological tissue model are computed using a solution of the Eikonal equation (ray-tracing methods) rather than linear trajectories, with the goal of coupling the radiative transport solution into heat transfer and damage models. Subject terms: B-Splines, Ray-Tracing, Eikonal Equation.

  10. Simulation of electrically driven jet using Chebyshev collocation method

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The model of an electrically driven jet is governed by a series of quasi-1D dimensionless partial differential equations (PDEs). Following the method of lines, the Chebyshev collocation method is employed to discretize the PDEs and obtain a system of differential-algebraic equations (DAEs). By differentiating the constraints in the DAEs twice, the system is transformed into a set of ordinary differential equations (ODEs) with invariants. Then the implicit differential equation solver "ddaskr" is used to solve the ODEs and ...
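
    A hedged sketch of the spatial discretization ingredient: the Chebyshev-Gauss-Lobatto collocation points and first-derivative matrix (Trefethen's classical construction), which a method-of-lines code of this kind would apply to the jet PDEs. The jet model itself and the "ddaskr" DAE solver are not reproduced here.

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and the first-derivative
    collocation (differentiation) matrix on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # collocation points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal via negative row sums
    return D, x

# Spectral approximation of the derivative of u(x) = sin(pi*x):
D, x = cheb(16)
u = np.sin(np.pi * x)
print(np.max(np.abs(D @ u - np.pi * np.cos(np.pi * x))))  # small spectral error
```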

  11. Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)

    Science.gov (United States)

    Hughes, Steven; Knittel, Jeremy; Shoan, Wendy; Kim, Youngkwang; Conway, Claire; Conway, Darrel J.

    2017-01-01

    This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.

  12. Fourier analysis of finite element preconditioned collocation schemes

    Science.gov (United States)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with the use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  13. A nodal collocation approximation for the multi-dimensional PL equations - 2D applications

    International Nuclear Information System (INIS)

    Capilla, M.; Talavera, C.F.; Ginestar, D.; Verdu, G.

    2008-01-01

    A classical approach to solve the neutron transport equation is to apply the spherical harmonics method, obtaining a finite approximation known as the PL equations. In this work, the derivation of the PL equations for multi-dimensional geometries is reviewed and a nodal collocation method is developed to discretize these equations on a rectangular mesh based on the expansion of the neutronic fluxes in terms of orthogonal Legendre polynomials. The performance of the method and the dominant transport Lambda Modes are obtained for a homogeneous 2D problem, a heterogeneous 2D anisotropic scattering problem, a heterogeneous 2D problem and a benchmark problem corresponding to a MOX fuel reactor core.

  14. Counterexamples to the B-spline Conjecture for Gabor Frames

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Nielsen, Kamilla Haahr

    2016-01-01

    The frame set conjecture for B-splines Bn, n≥2, states that the frame set is the maximal set that avoids the known obstructions. We show that any hyperbola of the form ab=r, where r is a rational number smaller than one and a and b denote the sampling and modulation rates, respectively, has infin...

  15. C2-rational cubic spline involving tension parameters

    Indian Academy of Sciences (India)

    preferred which preserves some of the characteristics of the function to be interpolated. In order to tackle such ... Shape preserving properties of the rational (cubic/quadratic) spline interpolant have been studied ... tension parameters which is used to interpolate the given monotonic data is described in. [6]. Shape preserving ...

  16. Spline function fit for multi-sets of correlative data

    International Nuclear Information System (INIS)

    Liu Tingjin; Zhou Hongmo

    1992-01-01

    The Spline fit method for multi-sets of correlative data is developed. The properties of correlative data fit are investigated. The data of the 23Na(n,2n) cross section are fitted in the cases with and without correlation.

  17. Thin-plate spline quadrature of geodetic integrals

    Science.gov (United States)

    Vangysen, Herman

    1989-01-01

    Thin-plate spline functions (known for their flexibility and fidelity in representing experimental data) are especially well-suited for the numerical integration of geodetic integrals in the area where the integration is most sensitive to the data, i.e., in the immediate vicinity of the evaluation point. Spline quadrature rules are derived for the contribution of a circular innermost zone to Stokes' formula, to the formulae of Vening Meinesz, and to the recursively evaluated operator L(n) in the analytical continuation solution of Molodensky's problem. These rules are exact for interpolating thin-plate splines. In cases where the integration data are distributed irregularly, a system of linear equations needs to be solved for the quadrature coefficients. Formulae are given for the terms appearing in these equations. In case the data are regularly distributed, the coefficients may be determined once and for all. Examples are given of some fixed-point rules. With such rules, successive evaluation, within a circular disk, of the terms in Molodensky's series becomes relatively easy. The spline quadrature technique presented complements other techniques such as ring integration for intermediate integration zones.

  18. Differential constraints for bounded recursive identification with multivariate splines

    NARCIS (Netherlands)

    De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2011-01-01

    The ability to perform online model identification for nonlinear systems with unknown dynamics is essential to any adaptive model-based control system. In this paper, a new differential equality constrained recursive least squares estimator for multivariate simplex splines is presented that is able

  19. Multivariate Epi-splines and Evolving Function Identification Problems

    Science.gov (United States)

    2015-04-15

    such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the...previous study [30] dealt with compact intervals of IR. Splines are intimately tied to optimization problems through their variational theory pioneered...approxima- tion. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, financial curve construction

  20. Splines under tension for gridding three-dimensional data

    International Nuclear Information System (INIS)

    Brand, H.R.; Frazer, J.W.

    1982-01-01

    By use of the splines-under-tension concept, a simple algorithm has been developed for the three-dimensional representation of nonuniformly spaced data. The representations provide useful information to the experimentalist when he is attempting to understand the results obtained in a self-adaptive experiment. The shortcomings of the algorithm are discussed as well as the advantages

  1. Acoustic scattering by multiple elliptical cylinders using collocation multipole method

    International Nuclear Information System (INIS)

    Lee, Wei-Ming

    2012-01-01

    This paper presents the collocation multipole method for the acoustic scattering induced by multiple elliptical cylinders subjected to an incident plane sound wave. To satisfy the Helmholtz equation in the elliptical coordinate system, the scattered acoustic field is formulated in terms of angular and radial Mathieu functions which also satisfy the radiation condition at infinity. The sound-soft or sound-hard boundary condition is satisfied by uniformly collocating points on the boundaries. For the sound-hard or Neumann conditions, the normal derivative of the acoustic pressure is determined by using the appropriate directional derivative without requiring the addition theorem of Mathieu functions. By truncating the multipole expansion, a finite linear algebraic system is derived and the scattered field can then be determined according to the given incident acoustic wave. Once the total field is calculated as the sum of the incident field and the scattered field, the near field acoustic pressure along the scatterers and the far field scattering pattern can be determined. For the acoustic scattering of one elliptical cylinder, the proposed results match well with the analytical solutions. The proposed scattered fields induced by two and three elliptical–cylindrical scatterers are critically compared with those provided by the boundary element method to validate the present method. Finally, the effects of the convexity of an elliptical scatterer, the separation between scatterers and the incident wave number and angle on the acoustic scattering are investigated.

  2. Gimme Context – towards New Domain-Specific Collocational Dictionaries

    Directory of Open Access Journals (Sweden)

    Sylvana Krausse

    2011-04-01

    Full Text Available The days of traditional drudgery-filled lexicography are long gone. Fortunately, today computers help in the enormous task of storing and analysing language in order to condense and store the information found in the form of dictionaries. In this paper, the way from a corpus to a small domain-specific collocational dictionary will be described and exemplified using the domain-specific language of mining reclamation; the procedure can be duplicated for other specific languages too. So far, domain-specific dictionaries are mostly rare, as their creation is very labour-intensive and thus costly, and all too often they are just a collection of terms plus translations without any information on how to use them in speech. In particular, small domains which do not involve a lot of users have been disregarded by lexicographers, as there is also always the question of how well such a dictionary sells afterwards. Following this, I will describe the creation of a small collocational dictionary of the language of mining reclamation which is based on the consistent use of corpus information. It is relatively quick to realize in the design phase and is thought to provide the sort of linguistic information engineering experts need when they communicate in English or read specialist texts in the specific domain.

  3. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    Energy Technology Data Exchange (ETDEWEB)

    Alwan, Aravind; Aluru, N.R.

    2013-12-15

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.
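
    To make the propagation step concrete, the following sketch (an assumption-laden illustration, not the authors' KMM implementation) estimates an input PDF from a small sample with Gaussian KDE, one of the baselines discussed above, and pushes samples from it through a cheap polynomial response surface standing in for the stochastic-collocation surrogate, yielding an output mean and variance.

```python
import numpy as np
from scipy.stats import gaussian_kde
from numpy.polynomial import chebyshev

# Limited sample of an uncertain input parameter (e.g. a material property).
rng = np.random.default_rng(9)
data = rng.normal(1.0, 0.15, 40)                 # only 40 observations available

# A cheap response surface standing in for the stochastic-collocation surrogate:
# a Chebyshev interpolant of an "expensive" model g built from a few node evaluations.
g = lambda x: np.exp(-x) * np.sin(3 * x)
nodes = chebyshev.chebpts1(9) * 0.6 + 1.0        # collocation-style nodes on ~[0.4, 1.6]
surrogate = chebyshev.Chebyshev.fit(nodes, g(nodes), 8)

# KDE estimate of the input PDF, then Monte-Carlo propagation through the surrogate.
kde = gaussian_kde(data)
samples = kde.resample(100_000).ravel()
outputs = surrogate(samples)
print("mean:", outputs.mean(), "variance:", outputs.var())
```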

  4. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    International Nuclear Information System (INIS)

    Alwan, Aravind; Aluru, N.R.

    2013-01-01

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems

  5. English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information

    Science.gov (United States)

    Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji

    2012-01-01

    We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…

  6. Corpora and Collocations in Chinese-English Dictionaries for Chinese Users

    Science.gov (United States)

    Xia, Lixin

    2015-01-01

    The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…

  7. Not Just "Small Potatoes": Knowledge of the Idiomatic Meanings of Collocations

    Science.gov (United States)

    Macis, Marijana; Schmitt, Norbert

    2017-01-01

    This study investigated learner knowledge of the figurative meanings of 30 collocations that can be both literal and figurative. One hundred and seven Chilean Spanish-speaking university students of English were asked to complete a meaning-recall collocation test in which the target items were embedded in non-defining sentences. Results showed…

  8. Responding to Research Challenges Related to Studying L2 Collocational Use in Professional Academic Discourse

    DEFF Research Database (Denmark)

    Henriksen, Birgit; Westbrook, Pete

    2017-01-01

    and classifying collocations used by L2 speakers in advanced, domain-specific oral academic discourse. The main findings seem to suggest that to map an informant’s complete collocational use and to get an understanding of disciplinary differences, we need to not only take account of general, academic and domain...

  9. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age

    Directory of Open Access Journals (Sweden)

    Marko Wilke

    2018-02-01

    Full Text Available This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1–75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php. Keywords: MRI template creation, Multivariate adaptive regression splines, DARTEL, Structural MRI

  10. Verb-Noun Collocations in Written Discourse of Iranian EFL Learners

    Directory of Open Access Journals (Sweden)

    Fatemeh Ebrahimi-Bazzaz

    2015-07-01

    Full Text Available When native speakers of English write, they employ both grammatical rules and collocations. Collocations are words that are present in the memory of native speakers as ready-made prefabricated chunks. Non-native speakers who wish to acquire native-like fluency should give appropriate attention to collocations in writing in order not to produce sentences that native speakers may consider odd. The present study explores the use of verb-noun collocations in the written discourse of English as a foreign language (EFL) among Iranian EFL learners from one academic year to the next in Iran. To measure the use of verb-noun collocations in written discourse, there was a 60-minute task of writing a story based on a series of six pictures, whereby for each picture three verb-noun collocations were measured, and nouns were provided to limit the choice of collocations. The results of the statistical analysis (ANOVA) for the research question indicated that there was a significant difference in the use of lexical verb-noun collocations in written discourse both between and within the four academic years. The results of post hoc multiple comparison tests confirmed that the means are significantly different between the first year and the third and fourth years, between the second and the fourth, and between the third and the fourth academic year, which indicates substantial development in verb-noun collocation proficiency. The vital implication is that the learners could use verb-noun collocations in the productive skill of writing.

  11. Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.

    Science.gov (United States)

    Kainu, Annette; Timonen, Kirsi

    2016-07-01

    Background: Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; thus, modern modelling approaches using smoothing spline functions to better depict the transition during growth and ageing have recently been introduced. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in tables. The aim of this study was to produce polynomial estimates for these spline functions to use in lieu of lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods: Linear regression modelling was used to approximate the estimated values for Mspline and Sspline using polynomial functions similar to those in the international GLI2012 reference values. Estimated values were compared to the original calculations in absolute values, in the derived predicted mean, and in individually calculated z-scores using both values. Results: Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation slightly increased for larger predicted volumes, but the maximum difference in FEV1 remained within a range of -0.018 to +0.022 litres, representing ±0.4% of the predicted mean. Conclusions: Polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values.
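
    The core numerical step, approximating a tabulated spline estimate by a polynomial fitted with least squares, can be sketched as follows; the age grid and Mspline values are synthetic placeholders, not the published Finnish reference tables.

```python
import numpy as np

# Hypothetical tabulated spline values (age vs. Mspline), standing in for the
# published lookup tables; the real tables are not reproduced here.
age = np.linspace(6, 80, 75)
mspline = 0.8 * np.log(age) - 0.004 * age          # synthetic smooth trajectory

# Least-squares polynomial approximation of the tabulated spline, to be used
# in lieu of interpolating in the lookup table at prediction time.
degree = 5
coeffs = np.polyfit(age, mspline, degree)
approx = np.polyval(coeffs, age)

print("max abs deviation:", np.max(np.abs(approx - mspline)))
```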

  12. Iteratively re-weighted bi-cubic spline representation of corneal topography and its comparison to the standard methods.

    Science.gov (United States)

    Zhu, Zhongxia; Janunts, Edgar; Eppig, Timo; Sauer, Tomas; Langenbucher, Achim

    2010-01-01

    The aim of this study is to represent the corneal anterior surface by utilizing radius and height data extracted from a TMS-2N topographic system with three different mathematical approaches and to simulate the visual performance. An iteratively re-weighted bi-cubic spline method is introduced for the local representation of the corneal surface. For comparison, two standard mathematical global representation approaches are used: the general quadratic function and the higher order Taylor polynomial approach. First, these methods were applied in simulations using three corneal models. Then, two real eye examples were investigated: one eye with regular astigmatism, and one eye which had undergone refractive surgery. A ray-tracing program was developed to evaluate the imaging performance of these examples with each surface representation strategy at the best focus plane. A 6 mm pupil size was chosen for the simulation. The fitting error (deviation) of the presented methods was compared. It was found that the accuracy of the topography representation was worst using the quadratic function and best with the bi-cubic spline. The quadratic function cannot precisely describe the irregular corneal shape. In order to achieve a sub-micron fitting precision, the order selection of the Taylor polynomial behaves adaptively to the corneal shape. The bi-cubic spline shows more stable performance. Considering the visual performance, the more precise the cornea representation is, the worse the visual performance is. The re-weighted bi-cubic spline method is a reasonable and stable method for representing the anterior corneal surface in measurements using a Placido-ring-pattern-based corneal topographer.

  13. Analytic regularization of uniform cubic B-spline deformation fields.

    Science.gov (United States)

    Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C

    2012-01-01

    Image registration is inherently ill-posed and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is the penalty of thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61 and 1371 times faster than a numerical central differencing solution.
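
    A one-dimensional analogue of the paper's idea (not its 3-D implementation) is sketched below: the bending-type energy of a spline f(t) = sum_j c_j B_j(t) is the quadratic form c^T H c, where H is the Gram matrix of second derivatives of the basis functions and can be assembled once; the sketch checks this against direct numerical integration.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import BSpline

# 1D analogue: for f(t) = sum_j c_j B_j(t), the bending energy
# integral of f''(t)^2 dt equals c^T H c with H_jk = integral B_j'' B_k'' dt.
degree, n_coef = 3, 10
knots = np.concatenate([[0.0] * degree,
                        np.linspace(0.0, 1.0, n_coef - degree + 1),
                        [1.0] * degree])
t = np.linspace(0.0, 1.0, 2001)

# Second derivatives of every basis function on a fine grid (zero outside support).
B2 = np.column_stack([
    BSpline.basis_element(knots[j:j + degree + 2], extrapolate=False).derivative(2)(t)
    for j in range(n_coef)
])
B2 = np.nan_to_num(B2)

# Precomputed quadratic form versus direct numerical integration.
H = trapezoid(B2[:, :, None] * B2[:, None, :], t, axis=0)   # Gram matrix of B''
c = np.random.default_rng(1).normal(size=n_coef)            # spline coefficients
energy_quadratic = c @ H @ c
f2 = BSpline(knots, c, degree).derivative(2)(t)
energy_numeric = trapezoid(f2 ** 2, t)
print(energy_quadratic, energy_numeric)                     # the two should nearly agree
```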

  14. MHD stability analysis using higher order spline functions

    Energy Technology Data Exchange (ETDEWEB)

    Ida, Akihiro [Department of Energy Engineering and Science, Graduate School of Engineering, Nagoya University, Nagoya, Aichi (Japan); Todoroki, Jiro; Sanuki, Heiji

    1999-04-01

    The eigenvalue problem of the linearized magnetohydrodynamic (MHD) equation is formulated by using higher order spline functions as the base functions of a Ritz-Galerkin approximation. When the displacement vector normal to the magnetic surface (in the magnetic surface) is interpolated by B-spline functions of degree p1 (degree p2), which are c1-times (c2-times) continuously differentiable on neighboring finite elements, the sufficient conditions for a good approximation are given by p1 ≥ p2+1, c1 ≤ c2+1 (c1 ≥ 1, p2 ≥ c2 ≥ 0). The influence of the numerical integration upon the convergence of calculated eigenvalues is discussed. (author)

  15. Data approximation using a blending type spline construction

    International Nuclear Information System (INIS)

    Dalmo, Rune; Bratlie, Jostein

    2014-01-01

    Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by Ck-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.

  16. Sequential bayes estimation algorithm with cubic splines on uniform meshes

    International Nuclear Information System (INIS)

    Hossfeld, F.; Mika, K.; Plesser-Walk, E.

    1975-11-01

    After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and the flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de

  17. PEMODELAN B-SPLINE DAN MARS PADA NILAI UJIAN MASUK TERHADAP IPK MAHASISWA JURUSAN DISAIN KOMUNIKASI VISUAL UK. PETRA SURABAYA

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2006-01-01

    Full Text Available Regression analysis is used to capture the influence of independent variables on dependent ones by first examining the pattern of the relationship between those variables. This task of approximating the mean function can be done essentially in two ways. The quite often used parametric approach assumes that the mean curve has some prespecified functional form. Alternatively, the nonparametric approach, i.e., without reference to a specific form, is used when there is no information about the form of the regression function (Haerdle, 1990). The nonparametric approach therefore offers more flexibility than the parametric one. The aim of this research is to find the best-fit model that captures the relationship between the admission test score and the GPA. The data were taken from the Department of Visual Communication Design, Petra Christian University, Surabaya, for the year 1999. Both approaches were used here. In the parametric approach, we use simple linear, quadratic, and cubic regression; in the nonparametric approach, we use B-Spline and Multivariate Adaptive Regression Splines (MARS). Overall, the best model was chosen based on the maximum coefficient of determination; for MARS, the best model was chosen based on the GCV, minimum MSE, and maximum coefficient of determination. Abstract in Bahasa Indonesia (translated): Regression analysis is used to examine the influence of independent variables on the dependent variable by first looking at the pattern of the relationship between those variables. This can be done through two approaches. The most common and frequently used approach is the parametric approach, which assumes that the model form has already been specified. When there is no information at all about the form of the regression function, the nonparametric approach is used (Haerdle, 1990). Because this approach does not depend on an assumed curve form, it offers greater flexibility. The aim of this study

  18. Analysis of an upstream weighted collocation approximation to the transport equation

    International Nuclear Information System (INIS)

    Shapiro, A.; Pinder, G.F.

    1981-01-01

    The numerical behavior of a modified orthogonal collocation method, as applied to the transport equations, can be examined through the use of a Fourier series analysis. The necessity of such a study becomes apparent in the analysis of several techniques which emulate classical upstream weighting schemes. These techniques are employed in orthogonal collocation and other numerical methods as a means of handling parabolic partial differential equations with significant first-order terms. Divergent behavior can be shown to exist in one upstream weighting method applied to orthogonal collocation

  19. A graph-based method for fitting planar B-spline curves with intersections

    Directory of Open Access Journals (Sweden)

    Pengbo Bo

    2016-01-01

    Full Text Available The problem of fitting B-spline curves to planar point clouds is studied in this paper. A novel method is proposed to deal with the most challenging case, where multiple intersecting curves or curves with self-intersections are necessary for shape representation. A method based on Delaunay triangulation of the data points is developed to identify connected components; it is also capable of removing outliers. A skeleton representation is utilized to represent the topological structure, which is further used to create a weighted graph for deciding on the merging of curve segments. Different from existing approaches, which utilize local shape information near intersections, our method considers shape characteristics of curve segments in a larger scope and is thus capable of giving more satisfactory results. By fitting each group of data points with a B-spline curve, we solve the problem of curve structure reconstruction from point clouds, as well as the vectorization of simple line-drawing images by reconstructing the drawn lines.
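
    As a small illustration of the final per-group fitting step mentioned above (not the graph-based grouping itself), the sketch below fits a smoothing cubic B-spline to one group of noisy planar points with SciPy's splprep; the data and smoothing factor are assumptions.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Noisy planar points sampled along one curve segment, i.e. one "group" of
# points after the graph-based grouping step the abstract describes.
t = np.linspace(0, 2 * np.pi, 200)
rng = np.random.default_rng(2)
x = np.cos(t) + rng.normal(0, 0.01, t.size)
y = np.sin(2 * t) + rng.normal(0, 0.01, t.size)

# Smoothing B-spline fit of the parametric curve (x(u), y(u)).
tck, u = splprep([x, y], s=len(x) * 0.01 ** 2, k=3)
xs, ys = splev(np.linspace(0, 1, 500), tck)   # dense evaluation of the fitted curve
print(len(tck[1][0]), "control coefficients per coordinate")
```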

  20. Backfitting in Smoothing Spline Anova, with Application to Historical Global Temperature Data

    Science.gov (United States)

    Luo, Zhen

    In the attempt to estimate the temperature history of the earth using surface observations, various biases can arise. An important source of bias is the incompleteness of sampling over both time and space. A few methods have been proposed to deal with this problem. Although they can correct some biases resulting from incomplete sampling, they have ignored some other significant biases. In this dissertation, a smoothing spline ANOVA approach, which is a multivariate function estimation method, is proposed to deal simultaneously with various biases resulting from incomplete sampling. Besides that, an advantage of this method is that we can obtain various components of the estimated temperature history with a limited amount of information stored. This method can also be used for detecting erroneous observations in the database. The method is illustrated through an example of modeling winter surface air temperature as a function of year and location. Extensions to more complicated models are discussed. The linear system associated with the smoothing spline ANOVA estimates is too large to be solved by full matrix decomposition methods. A computational procedure combining the backfitting (Gauss-Seidel) algorithm and the iterative imputation algorithm is proposed. This procedure takes advantage of the tensor product structure in the data to make the computation feasible in an environment of limited memory. Various related issues are discussed, e.g., the computation of confidence intervals and the techniques to speed up the convergence of the backfitting algorithm such as collapsing and successive over-relaxation.
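
    The backfitting (Gauss-Seidel) idea referred to above can be sketched for a simple additive model with two components; this is an illustrative toy using SciPy smoothing splines, not the dissertation's smoothing spline ANOVA code or its tensor-product storage scheme.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth(x, r, s):
    """Fit a smoothing spline to (x, r) and return its fitted values at x."""
    order = np.argsort(x)
    return UnivariateSpline(x[order], r[order], s=s)(x)

def backfit_additive(x1, x2, y, n_iter=25, s=None):
    """Minimal backfitting (Gauss-Seidel) loop for the additive model
    y = alpha + f1(x1) + f2(x2) + noise: each component is re-smoothed
    against the partial residuals of the other and re-centred."""
    alpha = y.mean()
    f1 = np.zeros_like(y)
    f2 = np.zeros_like(y)
    for _ in range(n_iter):
        f1 = smooth(x1, y - alpha - f2, s)
        f1 -= f1.mean()
        f2 = smooth(x2, y - alpha - f1, s)
        f2 -= f2.mean()
    return alpha, f1, f2

# Synthetic example with two additive effects (rough stand-ins for "year" and "location").
rng = np.random.default_rng(3)
x1 = rng.uniform(0, 1, 400)
x2 = rng.uniform(0, 1, 400)
y = 2.0 + np.sin(2 * np.pi * x1) + 0.5 * (x2 - 0.5) ** 2 + rng.normal(0, 0.1, 400)
alpha, f1, f2 = backfit_additive(x1, x2, y, s=4.0)
print(round(alpha, 3))
```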

  1. The estimation of time-varying risks in asset pricing modelling using B-Spline method

    Science.gov (United States)

    Nurjannah; Solimun; Rinaldo, Adji

    2017-12-01

    Asset pricing modelling has been extensively studied in the past few decades to explore the risk-return relationship. The asset pricing literature typically assumed a static risk-return relationship. However, several studies found a few anomalies in asset pricing modelling that capture the presence of risk instability. A dynamic model has been proposed to offer a better fit. The main problem highlighted in the dynamic model literature is that the set of conditioning information is unobservable and therefore some assumptions have to be made. Hence, the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can also be used as an alternative for estimating risk. The flexibility of the nonparametric setting avoids the problem of misspecification derived from selecting a functional form. This paper investigates the estimation of time-varying asset pricing models using B-splines, as one nonparametric approach. The advantages of the spline method are its computational speed and simplicity, as well as the clarity of controlling curvature directly. Three popular asset pricing models are investigated, namely the CAPM (Capital Asset Pricing Model), the Fama-French 3-factor model, and the Carhart 4-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk instability anomaly. The result is more pronounced in Carhart's 4-factor model.
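
    A hedged sketch of one way to obtain a time-varying beta with a B-spline basis is given below: the CAPM beta is expanded as beta(t) = B(t) @ theta and estimated by least squares on synthetic returns. The basis size, data and estimator details are assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.interpolate import BSpline

# Synthetic monthly excess returns with a slowly drifting "true" beta.
rng = np.random.default_rng(4)
T = 240
time = np.linspace(0.0, 1.0, T)
r_m = rng.normal(0.005, 0.04, T)                    # market excess return
beta_true = 0.8 + 0.6 * np.sin(2 * np.pi * time)    # time-varying risk exposure
r_i = beta_true * r_m + rng.normal(0, 0.02, T)      # asset excess return

# Cubic B-spline basis over time; the time-varying beta is B(t) @ theta.
degree, n_basis = 3, 8
knots = np.concatenate([[0.0] * degree,
                        np.linspace(0.0, 1.0, n_basis - degree + 1),
                        [1.0] * degree])
B = np.column_stack([BSpline(knots, np.eye(n_basis)[j], degree)(time)
                     for j in range(n_basis)])      # (T, n_basis) basis matrix

# Time-varying CAPM: r_i(t) = (B(t) @ theta) * r_m(t) + eps, solved by least squares.
X = B * r_m[:, None]
theta, *_ = np.linalg.lstsq(X, r_i, rcond=None)
beta_hat = B @ theta
print(np.round(np.corrcoef(beta_hat, beta_true)[0, 1], 3))  # close to 1 for this toy
```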

  2. From symplectic integrator to Poincare map: Spline expansion of a map generator in Cartesian coordinates

    International Nuclear Information System (INIS)

    Warnock, R.L.; Ellison, J.A.; Univ. of New Mexico, Albuquerque, NM

    1997-08-01

    Data from orbits of a symplectic integrator can be interpolated so as to construct an approximation to the generating function of a Poincare map. The time required to compute an orbit of the symplectic map induced by the generator can be much less than the time to follow the same orbit by symplectic integration. The construction has been carried out previously for full-turn maps of large particle accelerators, and a big saving in time (for instance a factor of 60) has been demonstrated. A shortcoming of the work to date arose from the use of canonical polar coordinates, which precluded map construction in small regions of phase space near coordinate singularities. This paper shows that Cartesian coordinates can also be used, thus avoiding singularities. The generator is represented in a basis of tensor product B-splines. Under weak conditions the spline expansion converges uniformly as the mesh is refined, approaching the exact generator of the Poincare map as defined by the symplectic integrator, in some parallelepiped of phase space centered at the origin

  3. Collocation mismatch uncertainties in satellite aerosol retrieval validation

    Science.gov (United States)

    Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Rodríguez, Edith; Saponaro, Giulia; de Leeuw, Gerrit

    2018-02-01

    Satellite-based aerosol products are routinely validated against ground-based reference data, usually obtained from sun photometer networks such as AERONET (AEROsol RObotic NETwork). In a typical validation exercise a spatial sample of the instantaneous satellite data is compared against a temporal sample of the point-like ground-based data. The observations do not correspond to exactly the same column of the atmosphere at the same time, and the representativeness of the reference data depends on the spatiotemporal variability of the aerosol properties in the samples. The associated uncertainty is known as the collocation mismatch uncertainty (CMU). The validation results depend on the sampling parameters. While small samples involve less variability, they are more sensitive to the inevitable noise in the measurement data. In this paper we study systematically the effect of the sampling parameters in the validation of AATSR (Advanced Along-Track Scanning Radiometer) aerosol optical depth (AOD) product against AERONET data and the associated collocation mismatch uncertainty. To this end, we study the spatial AOD variability in the satellite data, compare it against the corresponding values obtained from densely located AERONET sites, and assess the possible reasons for observed differences. We find that the spatial AOD variability in the satellite data is approximately 2 times larger than in the ground-based data, and the spatial variability correlates only weakly with that of AERONET for short distances. We interpreted that only half of the variability in the satellite data is due to the natural variability in the AOD, and the rest is noise due to retrieval errors. However, for larger distances (˜ 0.5°) the correlation is improved as the noise is averaged out, and the day-to-day changes in regional AOD variability are well captured. Furthermore, we assess the usefulness of the spatial variability of the satellite AOD data as an estimate of CMU by comparing the

  4. Application of a modified collocation method to the one dimensional, one group neutron transport equation

    International Nuclear Information System (INIS)

    Maschek, W.

    1976-07-01

    A modified collocation method is used for solving the one group criticality problem for a uniform multiplying slab. The critical parameters and the angular fluxes for a number of slabs are displayed and compared with previously published values. (orig.) [de

  5. An adaptive multi-element probabilistic collocation method for statistical EMC/EMI characterization

    KAUST Repository

    Yü cel, Abdulkadir C.; Bagci, Hakan; Michielssen, Eric

    2013-01-01

    polynomial chaos expansion of the observables. While constructing local polynomial expansions on each subdomain, a fast integral-equation-based deterministic field-cable-circuit simulator is used to compute the observable values at the collocation

  6. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison

    KAUST Repository

    Bä ck, Joakim; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2010-01-01

    Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods

  7. A stochastic collocation method for the second order wave equation with a discontinuous random speed

    KAUST Repository

    Motamed, Mohammad; Nobile, Fabio; Tempone, Raul

    2012-01-01

    In this paper we propose and analyze a stochastic collocation method for solving the second order wave equation with a random wave speed and subjected to deterministic boundary and initial conditions. The speed is piecewise smooth in the physical

  8. High-frequency collocations of nouns in research articles across eight disciplines

    Directory of Open Access Journals (Sweden)

    Matthew Peacock

    2012-04-01

    Full Text Available This paper describes a corpus-based analysis of the distribution of the high-frequency collocates of abstract nouns in 320 research articles across eight disciplines: Chemistry, Computer Science, Materials Science, Neuroscience, Economics, Language and Linguistics, Management, and Psychology. Disciplinary variation was also examined – very little previous research seems to have investigated this. The corpus was analysed using WordSmith Tools. The 16 highest-frequency nouns across all eight disciplines were identified, followed by the highest-frequency collocates for each noun. Five disciplines showed over 50% variance from the overall results. Conclusions are that the differing patterns revealed are disciplinary norms and represent standard terminology within the disciplines arising from the topics discussed, research methods, and content of discussions. It is also concluded that the collocations are an important part of the meanings and functions of the nouns, and that this evidence of sharp discipline differences underlines the importance of discipline-specific collocation research.

  9. Collocational Relations in Japanese Language Textbooks and Computer-Assisted Language Learning Resources

    Directory of Open Access Journals (Sweden)

    Irena SRDANOVIĆ

    2011-05-01

    Full Text Available In this paper, we explore the presence of collocational relations in computer-assisted language learning systems and other language resources for Japanese, on the one hand, and in Japanese language learning textbooks and wordlists, on the other. After introducing how important it is to learn collocational relations in a foreign language, we examine their coverage in the various learners' resources for the Japanese language. We particularly concentrate on a few collocations at the beginner's level, where we demonstrate their treatment across various resources. Special attention is paid to what are referred to as unpredictable collocations, which carry a bigger foreign-language learning burden than predictable ones.

  10. A new class of interpolatory $L$-splines with adjoint end conditions

    OpenAIRE

    Bejancu, Aurelian; Al-Sahli, Reyouf S.

    2014-01-01

    A thin plate spline surface for interpolation of smooth transfinite data prescribed along concentric circles was recently proposed by Bejancu, using Kounchev's polyspline method. The construction of the new `Beppo Levi polyspline' surface reduces, via separation of variables, to that of a countable family of univariate $L$-splines, indexed by the frequency integer $k$. This paper establishes the existence, uniqueness and variational properties of the `Beppo Levi $L$-spline' schemes correspond...

  11. Communication Collocations of the Lexeme Geld in General and Business German

    Directory of Open Access Journals (Sweden)

    Mirna Hocenski-Dreiseidl

    2010-07-01

    Full Text Available The authors aim to analyse and compare the lexeme Geld and its collocations on the grammatical and semantic levels in general and in business German. A special emphasis will be put on the importance of the communicative function that this lexeme and its collocations have in the language of banking. The paper also has a practical purpose. Its applicability in teaching is envisaged to improve the communicative competence of students of economics.

  12. Block Hybrid Collocation Method with Application to Fourth Order Differential Equations

    Directory of Open Access Journals (Sweden)

    Lee Ken Yap

    2015-01-01

    The block hybrid collocation method with three off-step points is proposed for the direct solution of fourth order ordinary differential equations. The interpolation and collocation techniques are applied to a basic polynomial to generate the main and additional methods. These methods are implemented in block form to obtain the approximation at seven points simultaneously. Numerical experiments are conducted to illustrate the efficiency of the method. The method is also applied to solve a fourth-order problem from ship dynamics.
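
    The block hybrid scheme itself is not reproduced here; as a point of reference, the sketch below solves a model fourth-order boundary value problem with a general-purpose collocation BVP solver from SciPy, after rewriting u'''' = f as a first-order system. The clamped-beam test problem, mesh and tolerance are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Model problem: u'''' = 1 on [0, 1], u(0)=u'(0)=u(1)=u'(1)=0
# (clamped beam under uniform load; exact solution x^2 (1 - x)^2 / 24).
def rhs(x, y):
    # y = [u, u', u'', u''']
    return np.vstack([y[1], y[2], y[3], np.ones_like(x)])

def bc(ya, yb):
    return np.array([ya[0], ya[1], yb[0], yb[1]])

x = np.linspace(0.0, 1.0, 11)
y0 = np.zeros((4, x.size))
sol = solve_bvp(rhs, bc, x, y0, tol=1e-8)

xs = np.linspace(0.0, 1.0, 201)
err = np.max(np.abs(sol.sol(xs)[0] - xs**2 * (1 - xs)**2 / 24))
print(f"max error vs exact solution: {err:.2e}")
```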

  13. Entropy Stable Spectral Collocation Schemes for the Navier-Stokes Equations: Discontinuous Interfaces

    Science.gov (United States)

    Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.

    2013-01-01

    Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.

  14. Translating Legal Collocations in Contract Agreements by Iraqi EFL Students-Translators

    Directory of Open Access Journals (Sweden)

    Muntaha A. Abdulwahid

    2017-01-01

    Legal translation of contract agreements is a challenge to translators as it involves combining literary translation with technical terminological precision. In translating legal contract agreements, a legal translator must utilize lexical or syntactic precision and, more importantly, pragmatic awareness of the context. This will guarantee an overall communicative process and avoid inconsistency in legal translation. However, the inability of the translator to meet these two functions in translating a contract item not only affects the contractors’ comprehension of the item but also affects the parties’ contractual obligations. In light of this, the purpose of this study was to find out how legal collocations used in contract agreements are translated from Arabic into English by student-translators in terms of (1) purely technical, (2) semi-technical, and (3) everyday vocabulary collocations. For the data collection, a multiple-choice collocation test was administered to 35 Iraqi EFL undergraduate student-translators to identify the weaknesses and strengths of their translations and thus the aspects needing correction. The findings showed that these students had serious problems in translating legal collocations as they lack the linguistic knowledge and pragmatic awareness needed to achieve the legal meaning and effect. They were also unable to distinguish among the three categories of legal collocations: purely technical, semi-technical, and everyday vocabulary collocations. These students should be exposed to more legal translation practice to obtain the experience needed for their future career.

  15. Age of Acquisition Effects in Chinese EFL learners’ Delexicalized Verb and Collocation Acquisition

    Directory of Open Access Journals (Sweden)

    Miao Haiyan

    2015-05-01

    This paper investigates age of acquisition (AoA) effects and the acquisition of delexicalized verbs and collocations in Chinese EFL learners, and explores the underlying reasons for these learners’ acquisition characteristics from the perspective of the connectionist model. The data were collected through a translation test consisting of a delexicalized verb information section and English-Chinese and Chinese-English collocation parts, aiming to assess the learners’ receptive and productive abilities respectively. As Chinese EFL is a nationally classroom-based practice beginning in early primary school, the pedagogical value and different phases of acquisition were taken into consideration in designing the translation test. Research results show that the effects of AoA are significant not only in the learners’ acquisition of individual delexicalized verbs but also in delexicalized collocations. Although learners have long begun to learn delexicalized verbs, their production indicates that early learning does not guarantee total acquisition, because their grasp of delexicalized verbs still stays at the senior middle school level. AoA effects significantly affect the recognition but not the production of collocations. Furthermore, a plateau effect occurs in learners’ acquisition of college-level delexicalized collocations, as their recognition and production have no processing advantages over earlier learned collocations.

  16. Evaluating the performance of collocated optical disdrometers: LPM and PARSIVEL

    Science.gov (United States)

    Angulo-Martinez, Marta; Begueria, Santiago; Latorre, Borja

    2017-04-01

    Optical disdrometers are present-weather sensors with the ability to provide integrated information on precipitation, such as intensity and reflectivity, together with discrete information on the drop size and velocity distribution (DSVD) of the hydrometeors crossing the laser beam sampling area. These sensors constitute a step forward, in comparison with pluviometers, towards a more complete characterisation of precipitation. Their use is spreading in many research fields for several applications. Understanding the differences between instruments helps in the selection of the sensor and points out limitations to be fixed in future versions. One-minute measurements of 800 natural rainfall events from four collocated optical disdrometers, two Laser Precipitation Monitors (LPM, Thies Clima) and two PARSIVEL, were compared. Results showed a general agreement in integrated variables, like intensity or liquid water content. Nevertheless, when comparing raw data, such as the number of particles and the DSVD, large differences were found. The LPM generally measures more and smaller drops than the PARSIVEL, and this difference increases with rainfall intensity. These results may especially affect the reflectivity value each disdrometer provides. A complete description of the measurements obtained, quantifying the differences, is provided, indicating their possible sources.

  17. Multi-element probabilistic collocation method in high dimensions

    International Nuclear Information System (INIS)

    Foo, Jasmine; Karniadakis, George Em

    2010-01-01

    We combine multi-element polynomial chaos with analysis of variance (ANOVA) functional decomposition to enhance the convergence rate of polynomial chaos in high dimensions and in problems with low stochastic regularity. Specifically, we employ the multi-element probabilistic collocation method MEPCM and so we refer to the new method as MEPCM-A. We investigate the dependence of the convergence of MEPCM-A on two decomposition parameters, the polynomial order μ and the effective dimension ν, with ν<< N, and N the nominal dimension. Numerical tests for multi-dimensional integration and for stochastic elliptic problems suggest that ν≥μ for monotonic convergence of the method. We also employ MEPCM-A to obtain error bars for the piezometric head at the Hanford nuclear waste site under stochastic hydraulic conductivity conditions. Finally, we compare the cost of MEPCM-A against Monte Carlo in several hundred dimensions, and we find MEPCM-A to be more efficient for up to 600 dimensions for a specific multi-dimensional integration problem involving a discontinuous function.
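
    The following sketch is a one-dimensional illustration of the probabilistic collocation idea underlying MEPCM (without the multi-element or ANOVA machinery): model statistics are estimated from a handful of Gauss-Hermite collocation nodes and compared with a Monte Carlo reference. The model function is a stand-in, not the Hanford piezometric-head problem.

```python
import numpy as np

def model(xi):
    """Stand-in for an expensive stochastic model output (e.g. a piezometric head)."""
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Probabilistic collocation: Gauss-Hermite nodes/weights for a standard normal input.
nodes, weights = np.polynomial.hermite.hermgauss(8)   # quadrature for weight exp(-x^2)
xi = np.sqrt(2.0) * nodes                             # map nodes to N(0, 1)
w = weights / np.sqrt(np.pi)                          # normalise weights to sum to 1
mean_pc = np.sum(w * model(xi))
var_pc = np.sum(w * model(xi) ** 2) - mean_pc**2

# Monte Carlo reference.
rng = np.random.default_rng(0)
samples = model(rng.standard_normal(200_000))
print(f"collocation: mean={mean_pc:.5f}, var={var_pc:.5f}")
print(f"Monte Carlo: mean={samples.mean():.5f}, var={samples.var():.5f}")
```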

  18. Collocated Dataglyphs for large-message storage and retrieval

    Science.gov (United States)

    Motwani, Rakhi C.; Breidenbach, Jeff A.; Black, John R.

    2004-06-01

    In contrast to the security and integrity of electronic files, printed documents are vulnerable to damage and forgery due to their physical nature. Researchers at Palo Alto Research Center utilize DataGlyph technology to render digital characteristics to printed documents, which provides them with the facility of tamper-proof authentication and damage resistance. This DataGlyph document is known as GlyphSeal. Limited DataGlyph carrying capacity per printed page restricted the application of this technology to a domain of graphically simple and small-sized single-paged documents. In this paper the authors design a protocol motivated by techniques from the networking domain and back-up strategies, which extends the GlyphSeal technology to larger-sized, graphically complex, multi-page documents. This protocol provides fragmentation, sequencing and data loss recovery. The Collocated DataGlyph Protocol renders large glyph messages onto multiple printed pages and recovers the glyph data from rescanned versions of the multi-page documents, even when pages are missing, reordered or damaged. The novelty of this protocol is the application of ideas from RAID to the domain of DataGlyphs. The current revision of this protocol is capable of generating at most 255 pages, if page recovery is desired and does not provide enough data density to store highly detailed images in a reasonable amount of page space.

  19. Automatic Shape Control of Triangular B-Splines of Arbitrary Topology

    Institute of Scientific and Technical Information of China (English)

    Ying He; Xian-Feng Gu; Hong Qin

    2006-01-01

    Triangular B-splines are powerful and flexible in modeling a broader class of geometric objects defined over arbitrary, non-rectangular domains. Despite their great potential and advantages in theory, practical techniques and computational tools with triangular B-splines are less-developed. This is mainly because users have to handle a large number of irregularly distributed control points over arbitrary triangulation. In this paper, an automatic and efficient method is proposed to generate visually pleasing, high-quality triangular B-splines of arbitrary topology. The experimental results on several real datasets show that triangular B-splines are powerful and effective in both theory and practice.

  20. Nested sparse grid collocation method with delay and transformation for subsurface flow and transport problems

    Science.gov (United States)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-06-01

    In numerical modeling of subsurface flow and transport problems, formation properties may not be deterministically characterized, which leads to uncertainty in simulation results. In this study, we propose a sparse grid collocation method, which adopts nested quadrature rules with delay and transformation to quantify the uncertainty of model solutions. We show that the nested Kronrod-Patterson-Hermite quadrature is more efficient than the unnested Gauss-Hermite quadrature. We compare the convergence rates of various quadrature rules including the domain truncation and domain mapping approaches. To further improve accuracy and efficiency, we present a delayed process in selecting quadrature nodes and a transformed process for approximating unsmooth or discontinuous solutions. The proposed method is tested by an analytical function and in one-dimensional single-phase and two-phase flow problems with different spatial variances and correlation lengths. An additional example is given to demonstrate its applicability to three-dimensional black-oil models. It is found from these examples that the proposed method provides a promising approach for obtaining satisfactory estimation of the solution statistics and is much more efficient than the Monte-Carlo simulations.

  1. Spectral collocation for multiparameter eigenvalue problems arising from separable boundary value problems

    Science.gov (United States)

    Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.

    2015-10-01

    In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.

  2. SPLPKG WFCMPR WFAPPX, Wilson-Fowler Spline Generator for Computer Aided Design And Manufacturing (CAD/CAM) Systems

    International Nuclear Information System (INIS)

    Fletcher, S.K.

    2002-01-01

    1 - Description of program or function: The three programs SPLPKG, WFCMPR, and WFAPPX provide the capability for interactively generating, comparing and approximating Wilson-Fowler Splines. The Wilson-Fowler spline is widely used in Computer Aided Design and Manufacturing (CAD/CAM) systems. It is favored for many applications because it produces a smooth, low curvature fit to planar data points. Program SPLPKG generates a Wilson-Fowler spline passing through given nodes (with given end conditions) and also generates a piecewise linear approximation to that spline within a user-defined tolerance. The program may be used to generate a 'desired' spline against which to compare other Splines generated by CAD/CAM systems. It may also be used to generate an acceptable approximation to a desired spline in the event that an acceptable spline cannot be generated by the receiving CAD/CAM system. SPLPKG writes an IGES file of points evaluated on the spline and/or a file containing the spline description. Program WFCMPR computes the maximum difference between two Wilson-Fowler Splines and may be used to verify the spline recomputed by a receiving system. It compares two Wilson-Fowler Splines with common nodes and reports the maximum distance between curves (measured perpendicular to segments) and the maximum difference of their tangents (or normals), both computed along the entire length of the Splines. Program WFAPPX computes the maximum difference between a Wilson- Fowler spline and a piecewise linear curve. It may be used to accept or reject a proposed approximation to a desired Wilson-Fowler spline, even if the origin of the approximation is unknown. The maximum deviation between these two curves, and the parameter value on the spline where it occurs are reported. 2 - Restrictions on the complexity of the problem - Maxima of: 1600 evaluation points (SPLPKG), 1000 evaluation points (WFAPPX), 1000 linear curve breakpoints (WFAPPX), 100 spline Nodes

  3. Thin-plate spline analysis of mandibular growth.

    Science.gov (United States)

    Franchi, L; Baccetti, T; McNamara, J A

    2001-04-01

    The analysis of mandibular growth changes around the pubertal spurt in humans has several important implications for the diagnosis and orthopedic correction of skeletal disharmonies. The purpose of this study was to evaluate mandibular shape and size growth changes around the pubertal spurt in a longitudinal sample of subjects with normal occlusion by means of an appropriate morphometric technique (thin-plate spline analysis). Ten mandibular landmarks were identified on lateral cephalograms of 29 subjects at 6 different developmental phases. The 6 phases corresponded to 6 different maturational stages in cervical vertebrae during accelerative and decelerative phases of the pubertal growth curve of the mandible. Differences in shape between average mandibular configurations at the 6 developmental stages were visualized by means of thin-plate spline analysis and subjected to a permutation test. Centroid size was used as the measure of the geometric size of each mandibular specimen. Differences in size at the 6 developmental phases were tested statistically. The results of graphical analysis indicated a statistically significant change in mandibular shape only for the growth interval from stage 3 to stage 4 in cervical vertebral maturation. Significant increases in centroid size were found at all developmental phases, with evidence of a prepubertal minimum and of a pubertal maximum. The existence of a pubertal peak in human mandibular growth, therefore, is confirmed by thin-plate spline analysis. Significant morphological changes in the mandible during the growth interval from stage 3 to stage 4 in cervical vertebral maturation may be described as an upward-forward direction of condylar growth determining an overall "shrinkage" of the mandibular configuration along the measurement of total mandibular length. This biological mechanism is particularly efficient in compensating for major increments in mandibular size at the adolescent spurt.
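
    A minimal sketch of the thin-plate spline machinery used in such morphometric studies is given below, assuming made-up 2-D landmark configurations rather than the cephalometric data of the study; SciPy's RBF interpolator with a thin-plate kernel supplies the warp between the reference and target shapes.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical 2-D landmark configurations (reference and target), 10 points each.
rng = np.random.default_rng(1)
ref = rng.uniform(0, 100, size=(10, 2))
target = ref + rng.normal(0, 2.0, size=ref.shape)   # small shape change

# Thin-plate spline warp: maps reference space onto target space.
tps = RBFInterpolator(ref, target, kernel="thin_plate_spline")

# Deform a regular grid to visualise the transformation (a "deformation grid").
gx, gy = np.meshgrid(np.linspace(0, 100, 21), np.linspace(0, 100, 21))
grid = np.column_stack([gx.ravel(), gy.ravel()])
warped = tps(grid)
print("max landmark transfer error:", np.abs(tps(ref) - target).max())
```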

  4. A Bayesian-optimized spline representation of the electrocardiogram

    International Nuclear Information System (INIS)

    Guilak, F G; McNames, J

    2013-01-01

    We introduce an implementation of a novel spline framework for parametrically representing electrocardiogram (ECG) waveforms. This implementation enables a flexible means to study ECG structure in large databases. Our algorithm allows researchers to identify key points in the waveform and optimally locate them in long-term recordings with minimal manual effort, thereby permitting analysis of trends in the points themselves or in metrics derived from their locations. In the work described here we estimate the location of a number of commonly-used characteristic points of the ECG signal, defined as the onsets, peaks, and offsets of the P, QRS, T, and R′ waves. The algorithm applies Bayesian optimization to a linear spline representation of the ECG waveform. The location of the knots—which are the endpoints of the piecewise linear segments used in the spline representation of the signal—serves as the estimate of the waveform’s characteristic points. We obtained prior information of knot times, amplitudes, and curvature from a large manually-annotated training dataset and used the priors to optimize a Bayesian figure of merit based on estimated knot locations. In cases where morphologies vary or are subject to noise, the algorithm relies more heavily on the estimated priors for its estimate of knot locations. We compared optimized knot locations from our algorithm to two sets of manual annotations on a prospective test data set comprising 200 beats from 20 subjects not in the training set. Mean errors of characteristic point locations were less than four milliseconds, and standard deviations of errors compared favorably against reference values. This framework can easily be adapted to include additional points of interest in the ECG signal or for other biomedical detection problems on quasi-periodic signals. (paper)
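
    The Bayesian knot optimization itself is not reproduced here; the sketch below only illustrates the underlying representation, a least-squares linear spline (degree-1 B-spline) fitted to a noisy quasi-periodic signal with a fixed, assumed set of interior knots. In the paper it is the knot locations themselves that are optimized.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Synthetic "beat": a smooth waveform plus noise (a stand-in for one ECG beat).
t = np.linspace(0.0, 1.0, 500)
signal = np.exp(-((t - 0.45) / 0.02) ** 2) + 0.2 * np.sin(2 * np.pi * t)
noisy = signal + 0.02 * np.random.default_rng(0).standard_normal(t.size)

# Interior knots (fixed here; the paper optimises their positions with a Bayesian criterion).
interior = np.linspace(0.05, 0.95, 15)
k = 1                                                   # piecewise-linear spline
knots = np.r_[(t[0],) * (k + 1), interior, (t[-1],) * (k + 1)]

spline = make_lsq_spline(t, noisy, knots, k=k)
fitted = spline(t)
print("RMS residual:", np.sqrt(np.mean((fitted - noisy) ** 2)))
```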

  5. An analytical investigation on unsteady motion of vertically falling spherical particles in non-Newtonian fluid by Collocation Method

    Directory of Open Access Journals (Sweden)

    M. Rahimi-Gorji

    2015-06-01

    An analytical investigation is applied to the unsteady motion of a rigid spherical particle in a quiescent shear-thinning power-law fluid. The results were compared with those obtained from the Collocation Method (CM) and an established Numerical Method (fourth-order Runge–Kutta scheme). It was shown that the CM gave accurate results. Both the Collocation Method (CM) and the Numerical Method are used to solve the present problem. We found that the CM, used here to solve a nonlinear differential equation with a fractional power, is simpler and more accurate than series methods such as HPM, which was used in some previous works by others; the newer Akbari-Ganji’s Method (AGM) is also accurate and simple, but slower than the CM for such problems. The terminal settling velocity—that is, the velocity at which the net forces on a falling particle vanish—was investigated for three different spherical particles (made of plastic, glass and steel) and three flow behavior indices n, in three sets of power-law non-Newtonian fluids, based on the polynomial (CM) solution. The analytical results indicated that the time needed to reach the terminal velocity during falling increases significantly with particle size, which was validated with the Numerical Method. Further, as the flow behavior approaches Newtonian behavior from shear-thinning properties (n → 1), the transient time to reach the terminal settling velocity decreases.
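
    As a rough illustration of the settling problem (not the authors' formulation), the sketch below integrates a schematic force balance with a lumped power-law drag term, dv/dt = g(1 - ρf/ρs) - (k/m)v^n, and reads off the terminal velocity as the root of the right-hand side; all coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Schematic force balance for a sphere settling in a power-law fluid:
#   dv/dt = g * (1 - rho_f / rho_s) - (k / m) * v**n
# with k a lumped drag coefficient (illustrative value, not taken from the paper).
g, rho_s, rho_f = 9.81, 2500.0, 1000.0        # dense particle in a dense liquid
d = 1e-3                                       # particle diameter [m]
m = rho_s * np.pi * d**3 / 6.0                 # particle mass [kg]
n, k = 0.8, 5e-4                               # flow behaviour index and drag constant

def dvdt(t, v):
    return g * (1.0 - rho_f / rho_s) - (k / m) * np.abs(v) ** n

sol = solve_ivp(dvdt, (0.0, 0.5), [0.0], dense_output=True, max_step=1e-3)

# Terminal velocity: the root of the right-hand side (net force = 0).
v_term = brentq(lambda v: dvdt(0.0, v), 1e-9, 10.0)
print(f"terminal velocity ~ {v_term:.4f} m/s, v(t=0.5 s) = {sol.y[0, -1]:.4f} m/s")
```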

  6. C1 Rational Quadratic Trigonometric Interpolation Spline for Data Visualization

    Directory of Open Access Journals (Sweden)

    Shengjun Liu

    2015-01-01

    A new C1 piecewise rational quadratic trigonometric spline with four local positive shape parameters in each subinterval is constructed to visualize the given planar data. Constraints are derived on these free shape parameters to generate shape-preserving interpolation curves for positive and/or monotonic data sets. Two of these shape parameters are constrained while the other two can be set free to interactively control the shape of the curves. Moreover, the order of approximation of the developed interpolant is investigated as O(h³). Numerical experiments demonstrate that our method can construct nice shape-preserving interpolation curves efficiently.

  7. THE INVESTIGATION OF PRODUCTIVE AND RECEPTIVE COMPETENCE IN V+N AND ADJ+N COLLOCATIONS AMONG INDONESIAN EFL LEARNERS

    Directory of Open Access Journals (Sweden)

    Saudin Saudin

    2017-05-01

    The important role of collocation in learners’ language proficiency has been acknowledged widely. In Systemic Functional Linguistics (SFL), collocation is known as one prominent member of the super-ordinate lexical cohesion, which contributes significantly to textual coherence, together with grammatical cohesion and structural cohesion (Halliday & Hasan, 1985). Collocation is also viewed as the hallmark of truly advanced English learners, since the higher the learners’ proficiency is, the more they tend to use collocation (Bazzaz & Samad, 2011; Hsu, 2007; Zhang, 1993). Further, knowledge of collocation is regarded as part of native speakers’ communicative competence (Bazzaz & Samad, 2011), and lack of this knowledge is the most important sign of foreignness among foreign language learners (McArthur, 1992; McCarthy, 1990). Taking the importance of collocation into account, this study aims to shed light on Indonesian EFL learners’ levels of collocational competence. In the study, collocational competence is restricted to v+n and adj+n collocations but broken down into productive and receptive competence, about which little work has been done (Henriksen, 2013). For this purpose, 49 second-year students of an English department in a state polytechnic were chosen as the subjects. Two sets of tests (filling in the blanks and multiple-choice) were administered to obtain data on the subjects’ levels of productive and receptive competence and to gain information on which type was more problematic for the learners. The test instruments were designed by referring to Brashi’s (2006) test model and Koya’s (2003). In the analysis of the data, an interpretive-qualitative method was used primarily to obtain broad explanatory information. The data analysis showed that the scores of productive competence were lower than those of receptive competence in both v+n and adj+n collocation. The analysis also revealed that the scores of productive

  8. Preprocessor with spline interpolation for converting stereolithography into cutter location source data

    Science.gov (United States)

    Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo

    2017-06-01

    The authors have previously developed an industrial machining robotic system for foamed polystyrene materials. The developed robotic CAM system provided a simple and effective interface between operators and the machining robot, without the need for any robot language. In this paper, a preprocessor for generating Cutter Location Source data (CLS data) from Stereolithography data (STL data) is first proposed for robotic machining. The preprocessor enables the machining robot to be controlled directly from STL data, without using any commercially provided CAM system. The STL format describes a curved surface geometry by a triangular representation. The preprocessor allows machining robots to be controlled along a zigzag or spiral path calculated directly from STL data. Then, a smart spline interpolation method is proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.
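
    STL parsing and zigzag path generation are omitted here; the sketch below shows only the smoothing step, fitting a parametric cubic smoothing spline through a handful of hypothetical coarse cutter-location points with SciPy and resampling it densely, in the spirit of the interpolation stage described above.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Coarse cutter-location points (hypothetical zigzag path extracted from STL facets).
x = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
y = np.array([0.0,  5.0,  0.0,  5.0,  0.0,  5.0])
z = np.array([2.0,  2.2,  1.9,  2.1,  2.0,  2.2])

# Fit a parametric cubic smoothing spline through the path (s controls the smoothing).
tck, u = splprep([x, y, z], s=0.05, k=3)

# Resample the smoothed path densely for the robot controller.
u_fine = np.linspace(0.0, 1.0, 200)
xs, ys, zs = splev(u_fine, tck)
cls_points = np.column_stack([xs, ys, zs])
print(cls_points.shape)          # (200, 3) smoothed CLS data
```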

  9. Numerical solution of the Black-Scholes equation using cubic spline wavelets

    Science.gov (United States)

    Černá, Dana

    2016-12-01

    The Black-Scholes equation is used in financial mathematics for computation of market values of options at a given time. We use the θ-scheme for time discretization and an adaptive scheme based on wavelets for discretization on the given time level. Advantages of the proposed method are a small number of degrees of freedom, high-order accuracy with respect to the variables representing prices, and a relatively small number of iterations needed to resolve the problem with a desired accuracy. We use several cubic spline wavelet and multi-wavelet bases and discuss their advantages and disadvantages. We also compare isotropic and anisotropic approaches. Numerical experiments are presented for the two-dimensional Black-Scholes equation.
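
    The spline-wavelet spatial discretization is not reproduced here; the sketch below applies the θ-scheme in time to a plain central-difference discretization of the Black-Scholes equation for a European call, which is enough to show where the implicit/explicit weighting enters. Strike, rate, volatility and grid sizes are illustrative assumptions.

```python
import numpy as np

def bs_theta_scheme(K=100.0, r=0.05, sigma=0.2, T=1.0,
                    Smax=300.0, N=300, M=200, theta=0.5):
    """European call via a theta-scheme in time and central differences in S."""
    S = np.linspace(0.0, Smax, N + 1)
    dS, dt = S[1] - S[0], T / M
    V = np.maximum(S - K, 0.0)             # payoff at maturity (tau = 0)

    i = np.arange(1, N)                    # interior nodes
    a = 0.5 * sigma**2 * S[i]**2 / dS**2 - 0.5 * r * S[i] / dS
    b = -sigma**2 * S[i]**2 / dS**2 - r
    c = 0.5 * sigma**2 * S[i]**2 / dS**2 + 0.5 * r * S[i] / dS

    L = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    I = np.eye(N - 1)
    A = I - theta * dt * L                 # implicit part
    B = I + (1.0 - theta) * dt * L         # explicit part

    for m in range(1, M + 1):
        tau = m * dt
        rhs = B @ V[i]
        # Dirichlet boundaries: V(0, tau) = 0, V(Smax, tau) = Smax - K*exp(-r*tau)
        hi_new = Smax - K * np.exp(-r * tau)
        hi_old = Smax - K * np.exp(-r * (tau - dt))
        rhs[-1] += theta * dt * c[-1] * hi_new + (1 - theta) * dt * c[-1] * hi_old
        V[i] = np.linalg.solve(A, rhs)
        V[0], V[-1] = 0.0, hi_new
    return S, V

S, V = bs_theta_scheme()
print("call price at S = K:", np.interp(100.0, S, V))
```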

  10. Full-Wave Analysis of the Shielding Effectiveness of Thin Graphene Sheets with the 3D Unidirectionally Collocated HIE-FDTD Method

    Directory of Open Access Journals (Sweden)

    Arne Van Londersele

    2017-01-01

    Graphene-based electrical components are inherently multiscale, which poses a real challenge for finite-difference time-domain (FDTD) solvers due to the stringent time step upper bound. Here, a unidirectionally collocated hybrid implicit-explicit (UCHIE) FDTD method is put forward that exploits the planar structure of graphene to increase the time step by implicitizing the critical dimension. The method replaces the traditional Yee discretization by a partially collocated scheme that allows a more accurate numerical description of the material boundaries. Moreover, the UCHIE-FDTD method preserves second-order accuracy even for nonuniform discretization in the direction of collocation. The auxiliary differential equation (ADE) approach is used to implement the graphene sheet as a dispersive Drude medium. The finite grid is terminated by a uniaxial perfectly matched layer (UPML) to permit open-space simulations. Special care is taken to elaborate on the efficient implementation of the implicit update equations. The UCHIE-FDTD method is validated by computing the shielding effectiveness of a typical graphene sheet.

  11. SPLINE LINEAR REGRESSION USED FOR EVALUATING FINANCIAL ASSETS

    Directory of Open Access Journals (Sweden)

    Liviu GEAMBAŞU

    2010-12-01

    One of the most important preoccupations of financial market participants was, and still is, the problem of determining more precisely the trend of financial asset prices. Many scientific papers have been written and many mathematical and statistical models developed to better determine this trend. While simple linear models were until recently widely used because they are easy to apply, the financial crisis that affected the world economy starting in 2008 highlighted the necessity of adapting mathematical models to the variation of the economy. Spline linear regression is a model that is simple to use yet adapted to the realities of economic life. This type of regression keeps the regression function continuous, but splits the studied data into intervals with homogeneous characteristics. The characteristics of each interval are highlighted, as well as the evolution of the market over all the intervals, resulting in reduced standard errors. The first objective of the article is the theoretical presentation of spline linear regression, with reference to national and international scientific papers on the subject. The second objective is applying the theoretical model to data from the Bucharest Stock Exchange.
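
    A minimal sketch of spline (continuous piecewise) linear regression is given below, using truncated-linear hinge basis functions and ordinary least squares; the breakpoints and the synthetic "price" series are assumptions for illustration, not Bucharest Stock Exchange data.

```python
import numpy as np

def spline_linear_fit(x, y, breakpoints):
    """Continuous piecewise-linear regression via truncated-linear basis functions."""
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - b, 0.0) for b in breakpoints])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef

# Synthetic "price index" whose trend changes slope at t = 40 and t = 70.
t = np.arange(100, dtype=float)
true = 50 + 0.5 * t - 1.2 * np.maximum(t - 40, 0) + 1.0 * np.maximum(t - 70, 0)
obs = true + np.random.default_rng(0).normal(0, 2.0, t.size)

coef, fitted = spline_linear_fit(t, obs, breakpoints=[40.0, 70.0])
print("intercept, base slope, slope changes:", np.round(coef, 3))
```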

  12. A Simple Time Domain Collocation Method to Precisely Search for the Periodic Orbits of Satellite Relative Motion

    Directory of Open Access Journals (Sweden)

    Xiaokui Yue

    2014-01-01

    A numerical approach for obtaining periodic orbits of satellite relative motion is proposed, based on using the time domain collocation (TDC) method to search for the periodic solutions of an exact J2 nonlinear relative model. The initial conditions for periodic relative orbits of the Clohessy-Wiltshire (C-W) equations or Tschauner-Hempel (T-H) equations can be refined with this approach to generate nearly bounded orbits. With these orbits, a method based on the least-squares principle is then proposed to generate a projected closed orbit (PCO), which serves as a reference for relative motion control. Numerical simulations reveal that the presented TDC searching scheme is effective and simple, and that the projected closed orbit is very fuel-saving.

  13. Regional Densification of a Global VTEC Model Based on B-Spline Representations

    Science.gov (United States)

    Erdogan, Eren; Schmidt, Michael; Dettmering, Denise; Goss, Andreas; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Mrotzek, Niclas

    2017-04-01

    The project OPTIMAP is a joint initiative of the Bundeswehr GeoInformation Centre (BGIC), the German Space Situational Awareness Centre (GSSAC), the German Geodetic Research Institute of the Technical University Munich (DGFI-TUM) and the Institute for Astrophysics at the University of Göttingen (IAG). The main goal of the project is the development of an operational tool for ionospheric mapping and prediction (OPTIMAP). Two key features of the project are the combination of different satellite observation techniques (GNSS, satellite altimetry, radio occultations and DORIS) and the regional densification as a remedy against problems encountered with the inhomogeneous data distribution. Since the data from space-geoscientific mission which can be used for modeling ionospheric parameters, such as the Vertical Total Electron Content (VTEC) or the electron density, are distributed rather unevenly over the globe at different altitudes, appropriate modeling approaches have to be developed to handle this inhomogeneity. Our approach is based on a two-level strategy. To be more specific, in the first level we compute a global VTEC model with a moderate regional and spectral resolution which will be complemented in the second level by a regional model in a densification area. The latter is a region characterized by a dense data distribution to obtain a high spatial and spectral resolution VTEC product. Additionally, the global representation means a background model for the regional one to avoid edge effects at the boundaries of the densification area. The presented approach based on a global and a regional model part, i.e. the consideration of a regional densification is called the Two-Level VTEC Model (TLVM). The global VTEC model part is based on a series expansion in terms of polynomial B-Splines in latitude direction and trigonometric B-Splines in longitude direction. The additional regional model part is set up by a series expansion in terms of polynomial B-splines for

  14. Trajectory Planning of Satellite Formation Flying using Nonlinear Programming and Collocation

    Directory of Open Access Journals (Sweden)

    Hyung-Chu Lim

    2008-12-01

    Recently, satellite formation flying has been a topic of significant research interest in aerospace society because it provides potential benefits compared to a large spacecraft. Some techniques have been proposed to design optimal formation trajectories minimizing fuel consumption in the process of formation configuration or reconfiguration. In this study, a method is introduced to build fuel-optimal trajectories minimizing a cost function that combines the total fuel consumption of all satellites and assignment of fuel consumption rate for each satellite. This approach is based on collocation and nonlinear programming to solve constraints for collision avoidance and the final configuration. New constraints of nonlinear equality or inequality are derived for final configuration, and nonlinear inequality constraints are established for collision avoidance. The final configuration constraints are that three or more satellites should form a projected circular orbit and make an equilateral polygon in the horizontal plane. Example scenarios, including these constraints and the cost function, are simulated by the method to generate optimal trajectories for the formation configuration and reconfiguration of multiple satellites.

  15. Vibration suppression in cutting tools using collocated piezoelectric sensors/actuators with an adaptive control algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Radecki, Peter P [Los Alamos National Laboratory; Farinholt, Kevin M [Los Alamos National Laboratory; Park, Gyuhae [Los Alamos National Laboratory; Bement, Matthew T [Los Alamos National Laboratory

    2008-01-01

    The machining process is very important in many engineering applications. In high precision machining, surface finish is strongly correlated with vibrations and the dynamic interactions between the part and the cutting tool. Parameters affecting these vibrations and dynamic interactions, such as spindle speed, cut depth, feed rate, and the part's material properties can vary in real-time, resulting in unexpected or undesirable effects on the surface finish of the machining product. The focus of this research is the development of an improved machining process through the use of active vibration damping. The tool holder employs a high bandwidth piezoelectric actuator with an adaptive positive position feedback control algorithm for vibration and chatter suppression. In addition, instead of using external sensors, the proposed approach investigates the use of a collocated piezoelectric sensor for measuring the dynamic responses from machining processes. The performance of this method is evaluated by comparing the surface finishes obtained with active vibration control versus baseline uncontrolled cuts. Considerable improvement in surface finish (up to 50%) was observed for applications in modern day machining.

  16. B-Spline Approximations of the Gaussian, their Gabor Frame Properties, and Approximately Dual Frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Kim, Hong Oh; Kim, Rae Young

    2017-01-01

    We prove that Gabor systems generated by certain scaled B-splines can be considered as perturbations of the Gabor systems generated by the Gaussian, with a deviation within an arbitrarily small tolerance whenever the order N of the B-spline is sufficiently large. As a consequence we show that for a

  17. Adaptive estimation of multivariate functions using conditionally Gaussian tensor-product spline priors

    NARCIS (Netherlands)

    Jonge, de R.; Zanten, van J.H.

    2012-01-01

    We investigate posterior contraction rates for priors on multivariate functions that are constructed using tensor-product B-spline expansions. We prove that using a hierarchical prior with an appropriate prior distribution on the partition size and Gaussian prior weights on the B-spline

  18. On using smoothing spline and residual correction to fuse rain gauge observations and remote sensing data

    Science.gov (United States)

    Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei

    2014-01-01

    Highlights: • Partial thin-plate smoothing spline model is used to construct the trend surface. • Correction of the spline estimated trend surface is often necessary in practice. • Cressman weight is modified and applied in residual correction. • The modified Cressman weight performs better than Cressman weight. • A method for estimating the error covariance matrix of gridded field is provided.
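
    The sketch below illustrates the residual-correction step with the classical Cressman weight w = (R² − d²)/(R² + d²) for d < R (the paper's modified weight is not reproduced); gauge locations, residuals and the background field are synthetic assumptions.

```python
import numpy as np

def cressman_weights(d, R):
    """Classical Cressman weight; the paper applies a modified version."""
    w = (R**2 - d**2) / (R**2 + d**2)
    return np.where(d < R, w, 0.0)

def residual_correction(grid_xy, background, gauge_xy, residuals, R=50.0):
    """Add a Cressman-weighted average of gauge residuals to a gridded background field."""
    d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
    w = cressman_weights(d, R)                           # (n_grid, n_gauges)
    wsum = w.sum(axis=1)
    corr = np.where(wsum > 0,
                    (w * residuals).sum(axis=1) / np.maximum(wsum, 1e-12), 0.0)
    return background + corr

# Synthetic example: a 20 km x 20 km grid and 5 rain gauges (all values hypothetical).
gx, gy = np.meshgrid(np.linspace(0, 20, 21), np.linspace(0, 20, 21))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
background = np.full(grid_xy.shape[0], 5.0)              # trend-surface estimate [mm]
gauge_xy = np.array([[3.0, 4.0], [10.0, 12.0], [15.0, 5.0], [7.0, 18.0], [18.0, 16.0]])
residuals = np.array([1.2, -0.8, 0.5, 0.0, -1.5])        # gauge minus trend surface
corrected = residual_correction(grid_xy, background, gauge_xy, residuals, R=8.0)
print(corrected.reshape(21, 21).round(2)[:3, :3])
```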

  19. Efficient GPU-based texture interpolation using uniform B-splines

    NARCIS (Netherlands)

    Ruijters, D.; Haar Romenij, ter B.M.; Suetens, P.

    2008-01-01

    This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and

  20. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    Science.gov (United States)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the most important topics of interest to investors is stock price changes. Investors with long-term goals are sensitive to stock prices and their changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices in this study. The MARS model is a nonparametric, adaptive regression method suited to problems with high dimensions and several variables. The semi-parametric technique used here is based on smoothing splines, a nonparametric regression method. In this study, we used 40 variables (30 accounting variables and 10 economic variables) for predicting stock prices with the MARS model and with the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock prices with the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.

  1. B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy

    Directory of Open Access Journals (Sweden)

    Shilpa Dilipkumar

    2015-03-01

    An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) with quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several imaging modalities of fluorescence microscopy like selective plane illumination microscopy, localization microscopy and STED.

  2. Impact of WhatsApp on Learning and Retention of Collocation Knowledge among Iranian EFL Learners

    Directory of Open Access Journals (Sweden)

    Zahra Ashiyan

    2016-10-01

    In recent years, language learning has been shifting from conventional methods toward instrumental applications. Mobile phones allow people to reach and exchange information through chats (WhatsApp). It is a tool, or mode, whose facilities are used for these main purposes. The unique features of the application are its capacity to exchange information and to enhance communication and relationships. A mobile phone makes it possible to download, upload and store learning materials and information files. The purpose of the current study was to investigate the use and effect of mobile applications such as WhatsApp on school work and out-of-school work. The Oxford Placement Test (OPT) was conducted among 80 learners in order to select intermediate EFL learners. In total, 60 participants whose scores were 70 or higher were selected as the intermediate level and were divided into experimental and control groups. In order to control the reliability of the collocation pretest, the test was pilot studied on 15 learners. Then, the pretest was conducted to measure the learners’ collocation knowledge in both groups. The experimental group used the WhatsApp application to learn and practice new collocations, while the control group did not use any tool for learning them. An immediate posttest was administered after the treatment. The results in each group were statistically evaluated and the findings showed that the experimental group, which used the WhatsApp application in learning collocations, significantly outperformed the control group in the posttest. Thus the use of the WhatsApp application to acquire collocations can reinforce and enhance the process of collocation acquisition and can guarantee retention of collocations. This study also prepares pedagogical implications for utilizing mobile applications as an influential instrument

  3. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation.

    Science.gov (United States)

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating "unusual" populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php.

  4. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age.

    Science.gov (United States)

    Wilke, Marko

    2018-02-01

    This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php.

  5. Synergistic multi-sensor and multi-frequency retrieval of cloud ice water path constrained by CloudSat collocations

    International Nuclear Information System (INIS)

    Islam, Tanvir; Srivastava, Prashant K.

    2015-01-01

    The cloud ice water path (IWP) is one of the major parameters that have a strong influence on the earth's radiation budget. Onboard satellite sensors are recognized as valuable tools to measure the IWP on a global scale. Active sensors such as the Cloud Profiling Radar (CPR) onboard the CloudSat satellite have a better capability to measure the ice water content profile, and thus its vertical integral, the IWP, than any passive microwave (MW) or infrared (IR) sensor. In this study, we investigate the retrieval of IWP from MW and IR sensors, including the AMSU-A, MHS, and HIRS instruments on-board the N19 satellite, such that the retrieval is consistent with the CloudSat IWP estimates. This is achieved through collocations between the passive satellite measurements and CloudSat scenes. The potential benefit of synergistic multi-sensor, multi-frequency retrieval is investigated. Two modeling approaches are explored for the IWP retrieval – generalized linear model (GLM) and neural network (NN). The investigation has been carried out over both ocean and land surface types. The MW/IR synergy is found to retrieve more accurate IWP than the individual AMSU-A, MHS, or HIRS measurements. Both GLM and NN approaches have been able to exploit the synergistic retrievals. - Highlights: • MW/IR synergy is investigated for IWP retrieval. • The IWP retrieval is modeled using CloudSat collocations. • Two modeling approaches are explored – GLM and ANN. • MW/IR synergy performs better than the MW or IR only retrieval

  6. Spline-based automatic path generation of welding robot

    Institute of Scientific and Technical Information of China (English)

    Niu Xuejuan; Li Liangyu

    2007-01-01

    This paper presents a flexible method for the representation of the welded seam based on spline interpolation. In this method, the tool path of the welding robot can be generated automatically from a 3D CAD model. This technique has been implemented and demonstrated in the FANUC Arc Welding Robot Workstation. Based on the method, a software system is developed using VBA of SolidWorks 2006. It offers an interface between SolidWorks and ROBOGUIDE, the off-line programming software of the FANUC robot. It combines the strong modeling function of the former and the simulating function of the latter. It also has the capability of communication with an on-line robot. Experimental results have shown its high accuracy and strong reliability. This method will improve the intelligence and the flexibility of the welding robot workstation.

  7. From cardinal spline wavelet bases to highly coherent dictionaries

    International Nuclear Information System (INIS)

    Andrle, Miroslav; Rebollo-Neira, Laura

    2008-01-01

    Wavelet families arise by scaling and translations of a prototype function, called the mother wavelet. The construction of wavelet bases for cardinal spline spaces is generally carried out within the multi-resolution analysis scheme. Thus, the usual way of increasing the dimension of the multi-resolution subspaces is by augmenting the scaling factor. We show here that, when working on a compact interval, the identical effect can be achieved without changing the wavelet scale but reducing the translation parameter. By such a procedure we generate a redundant frame, called a dictionary, spanning the same spaces as a wavelet basis but with wavelets of broader support. We characterize the correlation of the dictionary elements by measuring their 'coherence' and produce examples illustrating the relevance of highly coherent dictionaries to problems of sparse signal representation. (fast track communication)

  8. Splines employment for inverse problem of nonstationary thermal conduction

    International Nuclear Information System (INIS)

    Nikonov, S.P.; Spolitak, S.I.

    1985-01-01

    An analytical solution has been obtained for an inverse problem of nonstationary thermal conduction which is faced in nonstationary heat transfer data processing when the rewetting in channels with uniform annular fuel element imitators is investigated. In solving the problem both boundary conditions and power density within the imitator are regularized via cubic splines constructed with the use of Reinsch algorithm. The solution can be applied for calculation of temperature distribution in the imitator and the heat flux in two-dimensional approximation (r-z geometry) under the condition that the rewetting front velocity is known, and in one-dimensional r-approximation in cases with negligible axial transport or when there is a lack of data about the temperature disturbance source velocity along the channel

  9. TPSLVM: a dimensionality reduction algorithm based on thin plate splines.

    Science.gov (United States)

    Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming

    2014-10-01

    Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.

  10. Improving the River Discharge Calculation Method Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Budi I. Setiawan

    2007-09-01

    This paper presents an improved method for measuring river discharge using cubic spline interpolation. The spline is used to describe the river cross-section profile continuously, constructed from measurements of distance across the channel and river depth. With this new method, the cross-sectional area and wetted perimeter of the river are computed more easily, quickly and accurately. Likewise, the inverse function is available via the Newton-Raphson method, which simplifies the calculation of area and perimeter when the river water level is known. The new method can directly compute river discharge using the Manning formula and produce a rating curve. The paper presents an example of discharge measurement for the Rudeng River in Aceh. The river is about 120 m wide and 7 m deep and, at the time of measurement, had a discharge of 41.3 m³/s; its rating curve follows the formula Q = 0.1649 × H^2.884, where Q is the discharge (m³/s) and H is the water level above the river bed (m).
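
    The idea can be sketched as follows: fit a cubic spline through surveyed (distance, bed elevation) points, integrate the wetted depth to get the area and wetted perimeter at a given water level, and apply the Manning formula Q = (1/n) A R^(2/3) S^(1/2). The cross-section, roughness n and slope S below are illustrative assumptions, not the Rudeng River survey data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Surveyed cross-section: distance across the channel [m] and bed elevation [m]
# relative to the deepest point (hypothetical survey data).
dist = np.array([0.0, 15.0, 30.0, 50.0, 70.0, 90.0, 105.0, 120.0])
bed = np.array([7.0, 4.5, 2.0, 0.0, 0.5, 3.0, 5.5, 7.0])
profile = CubicSpline(dist, bed)

def manning_discharge(H, n=0.035, S=5e-4, m=4000):
    """Q = (1/n) A R^(2/3) sqrt(S) for a water level H above the deepest point."""
    x = np.linspace(dist[0], dist[-1], m)
    z = profile(x)
    depth = np.clip(H - z, 0.0, None)                 # wetted depth
    area = np.trapz(depth, x)                         # cross-sectional area A
    dz, dx = np.diff(z), np.diff(x)
    wet = (depth[:-1] > 0) & (depth[1:] > 0)          # wetted bed segments
    perimeter = np.sum(np.sqrt(dx[wet]**2 + dz[wet]**2))
    if perimeter == 0.0:
        return 0.0
    R_h = area / perimeter                            # hydraulic radius
    return area * R_h ** (2.0 / 3.0) * np.sqrt(S) / n

for H in (1.0, 3.0, 5.0, 7.0):
    print(f"H = {H:.1f} m  ->  Q ~ {manning_discharge(H):.1f} m^3/s")
```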

  11. Extending Binary Collocations: (Lexicographical Implications of Going beyond the Prototypical a – b

    Directory of Open Access Journals (Sweden)

    Dušan Gabrovšek

    2014-05-01

    The paper focuses primarily on the Sinclairian concept of extended units of meaning in general and on extended collocations in particular, investigating their nature and types. Such extended units are extremely varied and diverse; they are regarded as instances of the functioning of the coselection principle. Some extended forms are used far more commonly than the corresponding prototypical (binary) sequences. The final section delves into the ABCs of extended collocations in the context of lexicography, suggesting that dictionaries should make an effort to include a selection of such strings, especially for encoding tasks, as examples of use. Most dictionaries incorporate very few such “loose” units, probably because of a powerful tradition of including chiefly binary collocations and full sentences as examples of use.

  12. THE CASE FOR VERB-ADJECTIVE COLLOCATIONS: CORPUS-BASED ANALYSIS AND LEXICOGRAPHICAL TREATMENT

    Directory of Open Access Journals (Sweden)

    Moisés Almela

    2011-10-01

    This article explores a type of co-occurrence pattern which cannot be adequately described by existing models of collocation, and for which combinatory dictionaries have so far failed to provide sufficient information. The phenomenon of “oblique inter-collocation”, as I propose to call it, is characterised by a concatenation of syntagmatic preferences which partially contravenes the habitual grammatical order of semantic selection. In particular, I will examine some of the effects which the verb cause exerts on the distribution of attributive adjectives in the context of specific noun classes. The procedure for detecting and describing patterns of oblique inter-collocation is illustrated by means of SketchEngine corpus query tools. Based on the data extracted from a large-scale corpus, this paper carries out a critical analysis of the micro-structure of the Oxford Collocations Dictionary.

  13. Numerical simulation for fractional order stationary neutron transport equation using Haar wavelet collocation method

    Energy Technology Data Exchange (ETDEWEB)

    Saha Ray, S., E-mail: santanusaharay@yahoo.com; Patra, A.

    2014-10-15

    Highlights: • A stationary transport equation has been solved using the Haar wavelet collocation method. • This paper intends to demonstrate the great utility of Haar wavelets for nuclear science problems. • In the present paper, two-dimensional Haar wavelets are applied. • The proposed method is mathematically very simple, easy and fast. - Abstract: In this paper the numerical solution of the fractional order stationary neutron transport equation is presented using the Haar wavelet collocation method (HWCM). The Haar wavelet collocation method is efficient and powerful in solving a wide class of linear and nonlinear differential equations. This paper intends to provide an application of Haar wavelets to nuclear science problems. It describes the application of Haar wavelets to the numerical solution of the fractional order stationary neutron transport equation in a homogeneous medium with isotropic scattering. The proposed method is mathematically very simple, easy and fast. To demonstrate the efficiency and applicability of the method, two test problems are discussed.
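
    The fractional transport solver itself is not reproduced; the sketch below only builds the standard Haar wavelet family evaluated at the usual collocation points and uses it to expand a test function, which is the basic building block of the Haar wavelet collocation method.

```python
import numpy as np

def haar_matrix(M):
    """Haar functions h_1..h_2M evaluated at collocation points t_l = (l - 0.5)/(2M)."""
    t = (np.arange(1, 2 * M + 1) - 0.5) / (2 * M)
    H = np.zeros((2 * M, 2 * M))
    H[0, :] = 1.0                          # scaling function h_1 = 1 on [0, 1)
    i, j = 1, 0
    while 2 ** j < 2 * M:
        m = 2 ** j
        for k in range(m):
            lo, mid, hi = k / m, (k + 0.5) / m, (k + 1) / m
            H[i, (t >= lo) & (t < mid)] = 1.0
            H[i, (t >= mid) & (t < hi)] = -1.0
            i += 1
        j += 1
    return t, H

# Expand a test function in the Haar basis by collocation: solve H^T c = f(t).
M = 16
t, H = haar_matrix(M)
f = np.sin(np.pi * t)
c = np.linalg.solve(H.T, f)
print("max reconstruction error at collocation points:", np.abs(H.T @ c - f).max())
```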

  14. Discretisation of the non-linear heat transfer equation for food freezing processes using orthogonal collocation on finite elements

    Directory of Open Access Journals (Sweden)

    E. D. Resende

    2007-09-01

    Full Text Available The freezing process is considered as a propagation problem and mathematically classified as an "initial value problem." The mathematical formulation involves a complex situation of heat transfer with simultaneous changes of phase and abrupt variation in thermal properties. The objective of the present work is to solve the non-linear heat transfer equation for food freezing processes using orthogonal collocation on finite elements. This technique has not yet been applied to freezing processes and represents an alternative numerical approach in this area. The results obtained confirmed the good capability of the numerical method, which allows the simulation of the freezing process in approximately one minute of computer time, qualifying its application in a mathematical optimising procedure. The influence of the latent heat released during the crystallisation phenomena was identified by the significant increase in heat load in the early stages of the freezing process.

  15. Joint surface modeling with thin-plate splines.

    Science.gov (United States)

    Boyd, S K; Ronsky, J L; Lichti, D D; Salkauskas, K; Chapman, M A; Salkauskas, D

    1999-10-01

    Mathematical joint surface models based on experimentally determined data points can be used to investigate joint characteristics such as curvature, congruency, cartilage thickness, joint contact areas, as well as to provide geometric information well suited for finite element analysis. Commonly, surface modeling methods are based on B-splines, which involve tensor products. These methods have had success; however, they are limited due to the complex organizational aspect of working with surface patches, and modeling unordered, scattered experimental data points. An alternative method for mathematical joint surface modeling is presented based on the thin-plate spline (TPS). It has the advantage that it does not involve surface patches, and can model scattered data points without experimental data preparation. An analytical surface was developed and modeled with the TPS to quantify its interpolating and smoothing characteristics. Some limitations of the TPS include discontinuity of curvature at exactly the experimental surface data points, and numerical problems dealing with data sets in excess of 2000 points. However, suggestions for overcoming these limitations are presented. Testing the TPS with real experimental data, the patellofemoral joint of a cat was measured with multistation digital photogrammetry and modeled using the TPS to determine cartilage thicknesses and surface curvature. The cartilage thickness distribution ranged between 100 to 550 microns on the patella, and 100 to 300 microns on the femur. It was found that the TPS was an effective tool for modeling joint surfaces because no preparation of the experimental data points was necessary, and the resulting unique function representing the entire surface does not involve surface patches. A detailed algorithm is presented for implementation of the TPS.

  16. Collocation methods for uncertainty quantification in PDE models with random data

    KAUST Repository

    Nobile, Fabio

    2014-01-06

    In this talk we consider Partial Differential Equations (PDEs) whose input data are modeled as random fields to account for their intrinsic variability or our lack of knowledge. After parametrizing the input random fields by finitely many independent random variables, we exploit the high regularity of the solution of the PDE as a function of the input random variables and consider sparse polynomial approximations in probability (Polynomial Chaos expansion) by collocation methods. We first address interpolatory approximations where the PDE is solved on a sparse grid of Gauss points in the probability space and the solutions thus obtained interpolated by multivariate polynomials. We present recent results on optimized sparse grids in which the selection of points is based on a knapsack approach and relies on sharp estimates of the decay of the coefficients of the polynomial chaos expansion of the solution. Secondly, we consider regression approaches where the PDE is evaluated on randomly chosen points in the probability space and a polynomial approximation constructed by the least-squares method. We present recent theoretical results on the stability and optimality of the approximation under suitable conditions between the number of sampling points and the dimension of the polynomial space. In particular, we show that for uniform random variables, the number of sampling points has to scale quadratically with the dimension of the polynomial space to maintain the stability and optimality of the approximation. Numerical results show that such a condition is sharp in the monovariate case but seems to be over-constraining in higher dimensions. The regression technique seems therefore to be attractive in higher dimensions.
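    As a toy illustration of the regression (least-squares) variant described above, the sketch below projects a scalar quantity of interest depending on a single uniform random variable onto a Legendre polynomial basis from random samples, using the quadratic over-sampling rule mentioned in the abstract. The "solver" u(y) below is an invented stand-in for a full PDE solve at each sample point.

    import numpy as np
    from numpy.polynomial import legendre

    # Stand-in for the quantity of interest of a PDE with one uniform random input y in [-1, 1].
    def u(y):
        return np.exp(0.5 * y) / (1.0 + 0.1 * y**2)

    rng = np.random.default_rng(0)
    p = 8                                    # polynomial degree (1-D polynomial space of dimension p+1)
    M = 3 * (p + 1)**2                       # number of random samples: quadratic over-sampling

    y = rng.uniform(-1.0, 1.0, M)            # randomly chosen points in probability space
    V = legendre.legvander(y, p)             # Legendre "Vandermonde" matrix at the samples
    coef, *_ = np.linalg.lstsq(V, u(y), rcond=None)   # discrete least-squares projection

    # The mean of u is the coefficient of the constant Legendre polynomial P_0 = 1.
    print("polynomial-chaos mean:", coef[0])
    print("Monte Carlo check:    ", u(rng.uniform(-1.0, 1.0, 200_000)).mean())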

  17. Solutions of First-Order Volterra Type Linear Integrodifferential Equations by Collocation Method

    Directory of Open Access Journals (Sweden)

    Olumuyiwa A. Agbolade

    2017-01-01

    Full Text Available The numerical solutions of linear integrodifferential equations of Volterra type have been considered. A power series is used as the basis polynomial to approximate the solution of the problem. Furthermore, standard and Chebyshev-Gauss-Lobatto collocation points were, respectively, chosen to collocate the approximate solution. Numerical experiments are performed on some sample problems already solved by the homotopy analysis method and finite difference methods. The absolute errors obtained from the present method are compared with those from the aforementioned methods. It is also observed that the absolute errors obtained are very low, establishing convergence and computational efficiency.
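    The sketch below applies the same ingredients — a power-series ansatz collocated at Chebyshev-Gauss-Lobatto points — to a standard test problem u'(x) = 1 + \int_0^x u(t) dt with u(0) = 0, whose exact solution is sinh(x). The test equation is chosen here for illustration and is not necessarily one of the paper's sample problems.

    import numpy as np

    # Test Volterra integro-differential equation on [0, 1]:
    #   u'(x) = 1 + \int_0^x u(t) dt,  u(0) = 0,  exact solution u(x) = sinh(x).
    # Ansatz: u(x) = sum_k a_k x^k.
    N = 10                                                      # highest power retained
    k = np.arange(N + 1)

    # N Chebyshev-Gauss-Lobatto points mapped from [-1, 1] to [0, 1]
    xc = 0.5 * (1.0 + np.cos(np.pi * np.arange(N) / (N - 1)))

    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0], b[0] = 1.0, 0.0                                    # initial condition u(0) = 0
    for i, x in enumerate(xc, start=1):
        # derivative term k a_k x^(k-1) minus the Volterra term a_k x^(k+1)/(k+1)
        A[i] = k * x ** np.clip(k - 1, 0, None) - x ** (k + 1) / (k + 1)
        b[i] = 1.0

    a = np.linalg.solve(A, b)
    xx = np.linspace(0.0, 1.0, 5)
    print(np.max(np.abs(np.polyval(a[::-1], xx) - np.sinh(xx))))   # absolute error, should be tiny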

  18. Non-standard finite difference and Chebyshev collocation methods for solving fractional diffusion equation

    Science.gov (United States)

    Agarwal, P.; El-Sayed, A. A.

    2018-06-01

    In this paper, a new numerical technique for solving the fractional order diffusion equation is introduced. This technique basically depends on the Non-Standard Finite Difference method (NSFD) and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method combined with the NSFD method is used to convert the problem into a system of algebraic equations. These equations are solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some given numerical examples.

  19. Collocation methods for the solution of eigenvalue problems for singular ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Winfried Auzinger

    2006-01-01

    Full Text Available We demonstrate that eigenvalue problems for ordinary differential equations can be recast in a formulation suitable for the solution by polynomial collocation. It is shown that the well-posedness of the two formulations is equivalent in the regular as well as in the singular case. Thus, a collocation code equipped with asymptotically correct error estimation and adaptive mesh selection can be successfully applied to compute the eigenvalues and eigenfunctions efficiently and with reliable control of the accuracy. Numerical examples illustrate this claim.
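    A minimal example of computing differential-equation eigenvalues by polynomial collocation is given below, using a standard Chebyshev differentiation matrix rather than the adaptive collocation code referred to in the abstract: the Dirichlet problem -u'' = lambda*u on [-1, 1], whose exact eigenvalues are (k*pi/2)^2.

    import numpy as np

    def cheb(N):
        """Chebyshev collocation points and differentiation matrix (Trefethen's construction)."""
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        dX = x[:, None] - x[None, :]
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))
        return D, x

    # Dirichlet eigenvalue problem -u'' = lambda * u on [-1, 1]; exact eigenvalues (k*pi/2)^2.
    N = 32
    D, x = cheb(N)
    D2 = (D @ D)[1:-1, 1:-1]                  # second-derivative matrix restricted to interior points
    lam = np.sort(np.linalg.eigvals(-D2).real)
    print(lam[:4])                            # approx. 2.467, 9.870, 22.21, 39.48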

  20. Meshing Force of Misaligned Spline Coupling and the Influence on Rotor System

    Directory of Open Access Journals (Sweden)

    Guang Zhao

    2008-01-01

    Full Text Available The meshing force of a misaligned spline coupling is derived, the dynamic equation of the rotor-spline coupling system is established based on finite element analysis, and the influence of the meshing force on the rotor-spline coupling system is simulated by a numerical integration method. According to the theoretical analysis, the meshing force of the spline coupling is related to coupling parameters, misalignment, transmitting torque, static misalignment, dynamic vibration displacement, and so on. The meshing force increases nonlinearly with increasing spline thickness and static misalignment or decreasing alignment meshing distance (AMD). Stiffness of the coupling relates to dynamic vibration displacement, and static misalignment is not a constant. Dynamic behaviors of the rotor-spline coupling system reveal the following: 1X-rotating speed is the main response frequency of the system when there is no misalignment; while 2X-rotating speed appears when misalignment is present. Moreover, when misalignment increases, vibration of the system gets intricate; the shaft orbit departs from the origin, and magnitudes of all frequencies increase. Research results can provide important criteria for both the optimization design of spline couplings and the troubleshooting of rotor systems.

  1. Adaptive B-spline volume representation of measured BRDF data for photorealistic rendering

    Directory of Open Access Journals (Sweden)

    Hyungjun Park

    2015-01-01

    Full Text Available Measured bidirectional reflectance distribution function (BRDF) data have been used to represent the complex interaction between light and surface materials for photorealistic rendering. However, their massive size makes it hard to adopt them in practical rendering applications. In this paper, we propose an adaptive method for B-spline volume representation of measured BRDF data. It basically performs approximate B-spline volume lofting, which decomposes the problem into three sub-problems of multiple B-spline curve fitting along the u-, v-, and w-parametric directions. In particular, it makes efficient use of knots in the multiple B-spline curve fitting and thereby accomplishes adaptive knot placement along each parametric direction of the resulting B-spline volume. The proposed method is quite useful to realize efficient data reduction while smoothing out noise and keeping the overall features of the BRDF data well. By applying the B-spline volume models of real materials for rendering, we show that the B-spline volume models are effective in preserving the features of material appearance and are suitable for representing BRDF data.

  2. Local and Global Path Generation for Autonomous Vehicles Using Splines

    Directory of Open Access Journals (Sweden)

    Randerson Lemos

    2016-05-01

    Full Text Available Context: Before autonomous vehicles become a reality in daily situations, outstanding issues regarding the techniques of autonomous mobility must be solved. Hence, relevant aspects of path planning for terrestrial vehicles are shown. Method: The proposed path planning technique uses splines to generate the global route. For this goal, waypoints obtained from online map services are used. With the global route parametrized in the arc-length, candidate local paths are computed and the optimal one is selected by cost functions. Results: Different routes are used to show that the number and distribution of waypoints are highly correlated to a satisfactory arc-length parameterization of the global route, which is essential to the proper behavior of the path planning technique. Conclusions: The cubic splines approach to the path planning problem successfully generates the global and local paths. Nevertheless, the use of raw data from the online map services proved to be unfeasible due to inconsistencies in the data. Hence, a preprocessing stage of the raw data is proposed to guarantee the good behavior and robustness of the technique.

  3. Development of quadrilateral spline thin plate elements using the B-net method

    Science.gov (United States)

    Chen, Juan; Li, Chong-Jun

    2013-08-01

    The quadrilateral discrete Kirchhoff thin plate bending element DKQ is based on the isoparametric element Q8; however, the accuracy of the isoparametric quadrilateral elements will drop significantly due to mesh distortions. In a previous work, we constructed an 8-node quadrilateral spline element L8 using the triangular area coordinates and the B-net method, which is insensitive to mesh distortions and possesses second-order completeness in Cartesian coordinates. In this paper, a thin plate spline element is developed based on the spline element L8 and the refined technique. Numerical examples show that the present element indeed possesses higher accuracy than the DKQ element for distorted meshes.

  4. Estimation of Covariance Matrix on Bi-Response Longitudinal Data Analysis with Penalized Spline Regression

    Science.gov (United States)

    Islamiyati, A.; Fatmawati; Chamidah, N.

    2018-03-01

    In bi-response longitudinal data, correlation occurs among the measurements within subjects of observation and between the responses. This causes auto-correlation of the errors, which can be overcome by using a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. The penalized spline involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error value than the model without the covariance matrix.
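    The core of a penalized-spline fit — a B-spline basis combined with a difference penalty whose weight is the smoothing parameter — can be written compactly. The univariate sketch below uses synthetic data and ignores the bi-response covariance weighting discussed in the abstract; it is only meant to show the basic machinery.

    import numpy as np
    from scipy.interpolate import BSpline

    # Synthetic data (illustrative; not the bi-response longitudinal setting of the paper).
    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0.0, 1.0, 120))
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.25, size=x.size)

    k = 3                                              # cubic B-splines
    interior = np.linspace(0.0, 1.0, 22)[1:-1]         # 20 interior knots
    t = np.r_[np.repeat(0.0, k + 1), interior, np.repeat(1.0, k + 1)]
    nb = len(t) - k - 1

    B = BSpline(t, np.eye(nb), k)(x)                   # design matrix of B-spline basis functions
    D = np.diff(np.eye(nb), n=2, axis=0)               # second-order difference penalty on coefficients
    lam = 1.0                                          # smoothing parameter

    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    fit = BSpline(t, coef, k)

    print(float(np.mean((fit(x) - y) ** 2)))           # residual mean square of the penalized fit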

  5. Curve fitting and modeling with splines using statistical variable selection techniques

    Science.gov (United States)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  6. Applying Emax model and bivariate thin plate splines to assess drug interactions.

    Science.gov (United States)

    Kong, Maiying; Lee, J Jack

    2010-01-01

    We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95 per cent point-wise confidence interval as well as its 95 per cent simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies.
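    To make the additivity construction concrete, the sketch below computes the Loewe-additive effect of a dose combination for two drugs that each follow an Emax model with different maximum effects. The parameter values are invented, the root-finding approach is a generic way to enforce Loewe additivity rather than necessarily the authors' procedure, and the thin-plate-spline estimation of the departure from additivity is not shown.

    import numpy as np
    from scipy.optimize import brentq

    def emax(d, e0, emax_, ed50, h):
        """Emax dose-effect model: E(d) = e0 + emax_ * d^h / (ed50^h + d^h)."""
        return e0 + emax_ * d**h / (ed50**h + d**h)

    def inv_emax(e, e0, emax_, ed50, h):
        """Dose of a single drug producing effect e (requires e0 < e < e0 + emax_)."""
        return ed50 * ((e - e0) / (e0 + emax_ - e)) ** (1.0 / h)

    p1 = dict(e0=0.0, emax_=1.0, ed50=1.0, h=1.5)    # drug 1 (invented parameters)
    p2 = dict(e0=0.0, emax_=0.8, ed50=2.0, h=1.0)    # drug 2, with a lower maximum effect

    def loewe_effect(d1, d2):
        """Solve d1/D1(E) + d2/D2(E) = 1 for the additive effect E of the combination (d1, d2)."""
        upper = min(p1["emax_"], p2["emax_"])        # additivity is only defined below both maxima
        f = lambda e: d1 / inv_emax(e, **p1) + d2 / inv_emax(e, **p2) - 1.0
        return brentq(f, 1e-9, upper - 1e-9)

    print(emax(0.5, **p1), emax(1.0, **p2))          # single-drug effects at the component doses
    print(loewe_effect(0.5, 1.0))                    # predicted Loewe-additive combination effect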

  7. Multi-index Stochastic Collocation Convergence Rates for Random PDEs with Parametric Regularity

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2016-01-01

    We analyze the recent Multi-index Stochastic Collocation (MISC) method for computing statistics of the solution of a partial differential equation (PDE) with random data, where the random coefficient is parametrized by means of a countable sequence of terms in a suitable expansion. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data, and naturally, the error analysis uses the joint regularity of the solution with respect to both the variables in the physical domain and parametric variables. In MISC, the number of problem solutions performed at each discretization level is not determined by balancing the spatial and stochastic components of the error, but rather by suitably extending the knapsack-problem approach employed in the construction of the quasi-optimal sparse-grids and Multi-index Monte Carlo methods, i.e., we use a greedy optimization procedure to select the most effective mixed differences to include in the MISC estimator. We apply our theoretical estimates to a linear elliptic PDE in which the log-diffusion coefficient is modeled as a random field, with a covariance similar to a Matérn model, whose realizations have spatial regularity determined by a scalar parameter. We conduct a complexity analysis based on a summability argument showing algebraic rates of convergence with respect to the overall computational work. The rate of convergence depends on the smoothness parameter, the physical dimensionality and the efficiency of the linear solver. Numerical experiments show the effectiveness of MISC in this infinite dimensional setting compared with the Multi-index Monte Carlo method and compare the convergence rate against the rates predicted in our theoretical analysis. © 2016 SFoCM

  9. The Application of the Probabilistic Collocation Method to a Transonic Axial Flow Compressor

    NARCIS (Netherlands)

    Loeven, G.J.A.; Bijl, H.

    2010-01-01

    In this paper the Probabilistic Collocation method is used for uncertainty quantification of operational uncertainties in a transonic axial flow compressor (i.e. NASA Rotor 37). Compressor rotors are components of a gas turbine that are highly sensitive to operational and geometrical uncertainties.

  10. An improved triple collocation algorithm for decomposing autocorrelated and white soil moisture retrieval errors

    Science.gov (United States)

    If not properly accounted for, auto-correlated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...

  11. Strategies in Translating Collocations in Religious Texts from Arabic into English

    Science.gov (United States)

    Dweik, Bader S.; Shakra, Mariam M. Abu

    2010-01-01

    The present study investigated the strategies adopted by students in translating specific lexical and semantic collocations in three religious texts namely, the Holy Quran, the Hadith and the Bible. For this purpose, the researchers selected a purposive sample of 35 MA translation students enrolled in three different public and private Jordanian…

  12. Implementation of optimal Galerkin and Collocation approximations of PDEs with Random Coefficients

    KAUST Repository

    Beck, Joakim; Nobile, F.; Tamellini, L.; Tempone, Raul

    2011-01-01

    We then consider the Stochastic Collocation method and use the previous estimates to introduce a new effective class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, which, again, features better convergence properties compared to standard Smolyak or tensor product grids.

  13. Parallel algorithm of trigonometric collocation method in nonlinear dynamics of rotors

    Czech Academy of Sciences Publication Activity Database

    Musil, Tomáš; Jakl, Ondřej

    2007-01-01

    Roč. 1, č. 2 (2007), s. 555-564 ISSN 1802-680X. [Výpočtová mechanika 2007. Hrad Nečtiny, 05.11.2007-07.11.2007] Institutional research plan: CEZ:AV0Z20760514; CEZ:AV0Z30860518 Keywords : rotor system * trigonometric collocation * parallel computation Subject RIV: JR - Other Machinery

  14. A Comparison of the Performance Improvement by Collocated and Noncollocated Active Damping in Motion Systems

    NARCIS (Netherlands)

    Babakhani, B.; de Vries, Theodorus J.A.; van Amerongen, J.

    2012-01-01

    In this paper, both collocated and noncollocated active vibration control (AVC) of the vibrations in a motion system is considered. Pole-zero plots of both the AVC loop and the motion-control (MC) loop are used to analyze the effect of the applied active damping on the system dynamics. Using

  15. Explicit Gaussian quadrature rules for C^1 cubic splines with symmetrically stretched knot sequence

    KAUST Repository

    Ait-Haddou, Rachid; Barton, Michael; Calo, Victor M.

    2015-01-01

    We provide explicit expressions for quadrature rules on the space of C^1 cubic splines with non-uniform, symmetrically stretched knot sequences. The quadrature nodes and weights are derived via an explicit recursion that avoids an intervention

  16. SPLINE-FUNCTIONS IN THE TASK OF THE FLOW AIRFOIL PROFILE

    Directory of Open Access Journals (Sweden)

    Mikhail Lopatjuk

    2013-12-01

    Full Text Available The method and the algorithm for solving the flow (streamlining) problem are presented. The Neumann boundary problem is reduced to the solution of integral equations with given boundary conditions using cubic spline functions.

  17. Modeling the dispersion of atmospheric pollution using cubic splines and chapeau functions

    Energy Technology Data Exchange (ETDEWEB)

    Pepper, D W; Kern, C D; Long, P E

    1979-01-01

    Two methods that can be used to solve complex, three-dimensional, advection-diffusion transport equations are investigated. A quasi-Lagrangian cubic spline method and a chapeau function method are compared in advecting a passive scalar. The methods are simple to use, computationally fast, and reasonably accurate. Little numerical dissipation is manifested by the schemes. In simple advection tests with equal mesh spacing, the chapeau function method maintains slightly more accurate peak values than the cubic spline method. In tests with unequal mesh spacing, the cubic spline method has less noise, but slightly more damping than the standard chapeau method has. Both cubic splines and chapeau functions can be used to solve the three-dimensional problem of gaseous emissions dispersion without excessive programing complexity or storage requirements. (10 diagrams, 39 references, 2 tables)
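    A one-dimensional, periodic sketch of the quasi-Lagrangian cubic-spline idea — re-interpolating the advected scalar at upstream departure points at each step — is shown below. The velocity, grid and time step are invented, and the chapeau-function alternative is not implemented.

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Semi-Lagrangian advection of a passive scalar with cubic-spline interpolation
    # at the departure points, on a 1-D periodic domain (illustrative only).
    L, nx = 1.0, 101
    x = np.linspace(0.0, L, nx)                     # grid including both endpoints
    u, dt, nsteps = 1.0, 0.002, 250                 # constant advecting velocity, time step, steps

    c = np.exp(-200.0 * (x - 0.3) ** 2)             # initial scalar "puff"
    c[-1] = c[0]                                    # enforce periodicity for the spline

    for _ in range(nsteps):
        spline = CubicSpline(x, c, bc_type="periodic")
        xd = (x - u * dt) % L                       # departure points traced back upstream
        c = spline(xd)
        c[-1] = c[0]

    # Exact solution: the initial puff shifted by u*dt*nsteps = 0.5, with periodic wrap.
    dist = np.minimum(np.abs(x - 0.8), L - np.abs(x - 0.8))
    exact = np.exp(-200.0 * dist ** 2)
    print(np.max(np.abs(c - exact)))                # small dissipation/dispersion error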

  18. Quiet Clean Short-haul Experimental Engine (QCSEE). Ball spline pitch change mechanism design report

    Science.gov (United States)

    1978-01-01

    Detailed design parameters are presented for a variable-pitch change mechanism. The mechanism is a mechanical system containing a ball screw/spline driving two counteracting master bevel gears meshing with pinion gears attached to each of the 18 fan blades.

  19. Numerical solution of system of boundary value problems using B-spline with free parameter

    Science.gov (United States)

    Gupta, Yogesh

    2017-01-01

    This paper deals with a B-spline method of solution for a system of boundary value problems. The differential equations are useful in various fields of science and engineering. Some interesting real life problems involve more than one unknown function. These result in systems of simultaneous differential equations. Such systems have been applied to many problems in mathematics, physics, engineering etc. In the present paper, B-spline and B-spline with free parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points together with the equations of the given system and boundary conditions, resulting in a linear matrix equation.

  20. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    Directory of Open Access Journals (Sweden)

    Van Than Dung

    Full Text Available B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there exist some demands, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the most efficient computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated by using various numerical experimental data, with and without simulated noise, which were generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied for fitting any types of curves ranging from smooth ones to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
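    The flavour of error-driven knot placement can be illustrated with SciPy's least-squares spline: starting from one interior knot, a knot is added where the residual is largest until a tolerance is met. This is only a crude stand-in for the paper's two-step bisection-plus-optimization strategy, with an invented target curve.

    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    # Invented target curve with a localized feature; not data from the paper.
    x = np.linspace(0.0, 1.0, 400)
    y = np.sin(8.0 * x) + 0.3 * np.exp(-200.0 * (x - 0.7) ** 2)

    knots, tol = [0.5], 0.02
    for _ in range(25):
        spl = LSQUnivariateSpline(x, y, knots, k=3)            # least-squares cubic B-spline fit
        resid = np.abs(spl(x) - y)
        if resid.max() < tol:
            break
        new = float(np.clip(x[resid.argmax()], x[3], x[-4]))   # add a knot near the worst residual
        knots = sorted(set(knots + [new]))

    print(len(knots), float(resid.max()))                      # interior knot count and max error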

  1. Acoustic Emission Signatures of Fatigue Damage in Idealized Bevel Gear Spline for Localized Sensing

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    2017-06-01

    Full Text Available In many rotating machinery applications, such as helicopters, the splines of an externally-splined steel shaft that emerges from the gearbox engage with the reverse geometry of an internally splined driven shaft for the delivery of power. The splined section of the shaft is a critical and non-redundant element which is prone to cracking due to complex loading conditions. Thus, early detection of flaws is required to prevent catastrophic failures. The acoustic emission (AE method is a direct way of detecting such active flaws, but its application to detect flaws in a splined shaft in a gearbox is difficult due to the interference of background noise and uncertainty about the effects of the wave propagation path on the received AE signature. Here, to model how AE may detect fault propagation in a hollow cylindrical splined shaft, the splined section is essentially unrolled into a metal plate of the same thickness as the cylinder wall. Spline ridges are cut into this plate, a through-notch is cut perpendicular to the spline to model fatigue crack initiation, and tensile cyclic loading is applied parallel to the spline to propagate the crack. In this paper, the new piezoelectric sensor array is introduced with the purpose of placing them within the gearbox to minimize the wave propagation path. The fatigue crack growth of a notched and flattened gearbox spline component is monitored using a new piezoelectric sensor array and conventional sensors in a laboratory environment with the purpose of developing source models and testing the new sensor performance. The AE data is continuously collected together with strain gauges strategically positioned on the structure. A significant amount of continuous emission due to the plastic deformation accompanied with the crack growth is observed. The frequency spectra of continuous emissions and burst emissions are compared to understand the differences of plastic deformation and sudden crack jump. The

  2. Orbiting Carbon Observatory-2 (OCO-2) cloud screening algorithms; validation against collocated MODIS and CALIOP data

    Science.gov (United States)

    Taylor, T. E.; O'Dell, C. W.; Frankenberg, C.; Partain, P.; Cronk, H. Q.; Savtchenko, A.; Nelson, R. R.; Rosenthal, E. J.; Chang, A. Y.; Fisher, B.; Osterman, G.; Pollock, R. H.; Crisp, D.; Eldering, A.; Gunson, M. R.

    2015-12-01

    The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by clouds and aerosols within the instrument's field of view (FOV). Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 μm O2 A-band, neglecting scattering by clouds and aerosols, which introduce photon path-length (PPL) differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A-Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 μm (weak CO2 band) and 2.06 μm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which key off of different features in the spectra, provides the basis for cloud screening of the OCO-2 data set. To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectrometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning to allow throughputs of ≃ 30 %, agreement between the OCO-2 and MODIS cloud screening methods is found to be

  3. Orbiting Carbon Observatory-2 (OCO-2) cloud screening algorithms: validation against collocated MODIS and CALIOP data

    Science.gov (United States)

    Taylor, Thomas E.; O'Dell, Christopher W.; Frankenberg, Christian; Partain, Philip T.; Cronk, Heather Q.; Savtchenko, Andrey; Nelson, Robert R.; Rosenthal, Emily J.; Chang, Albert Y.; Fisher, Brenden; Osterman, Gregory B.; Pollock, Randy H.; Crisp, David; Eldering, Annmarie; Gunson, Michael R.

    2016-03-01

    The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by clouds and aerosols, i.e., contamination, within the instrument's field of view. Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 µm O2 A band, neglecting scattering by clouds and aerosols, which introduce photon path-length differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 µm (weak CO2 band) and 2.06 µm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which are sensitive to different features in the spectra, provides the basis for cloud screening of the OCO-2 data set.To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectrometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning of algorithmic threshold parameters that allows for processing of ≃ 20-25 % of all OCO-2 soundings

  4. A smoothing spline that approximates Laplace transform functions only known on measurements on the real axis

    International Nuclear Information System (INIS)

    D’Amore, L; Campagna, R; Murli, A; Galletti, A; Marcellino, L

    2012-01-01

    The scientific and application-oriented interest in the Laplace transform and its inversion is testified by more than 1000 publications in the last century. Most of the inversion algorithms available in the literature assume that the Laplace transform function is available everywhere. Unfortunately, such an assumption is not fulfilled in the applications of the Laplace transform. Very often, one only has a finite set of data and one wants to recover an estimate of the inverse Laplace function from that. We propose a fitting model of data. More precisely, given a finite set of measurements on the real axis, arising from an unknown Laplace transform function, we construct a dth degree generalized polynomial smoothing spline, where d = 2m − 1, such that internally to the data interval it is a dth degree polynomial complete smoothing spline minimizing a regularization functional, and outside the data interval, it mimics the Laplace transform asymptotic behavior, i.e. it is a rational or an exponential function (the end behavior model), and at the boundaries of the data set it joins with regularity up to order m − 1, with the end behavior model. We analyze in detail the generalized polynomial smoothing spline of degree d = 3. This choice was motivated by the (ill)conditioning of the numerical computation which strongly depends on the degree of the complete spline. We prove existence and uniqueness of this spline. We derive the approximation error and give a priori and computable bounds of it on the whole real axis. In such a way, the generalized polynomial smoothing spline may be used in any real inversion algorithm to compute an approximation of the inverse Laplace function. Experimental results concerning Laplace transform approximation, numerical inversion of the generalized polynomial smoothing spline and comparisons with the exponential smoothing spline conclude the work. (paper)

  5. On the accurate fast evaluation of finite Fourier integrals using cubic splines

    International Nuclear Information System (INIS)

    Morishima, N.

    1993-01-01

    Finite Fourier integrals based on a cubic-splines fit to equidistant data are shown to be evaluated fast and accurately. Good performance, especially on computational speed, is achieved by the optimization of the spline fit and the internal use of the fast Fourier transform (FFT) algorithm for complex data. The present procedure provides high accuracy with much shorter CPU time than a trapezoidal FFT. (author)

  6. Numerical Solutions for Convection-Diffusion Equation through Non-Polynomial Spline

    Directory of Open Access Journals (Sweden)

    Ravi Kanth A.S.V.

    2016-01-01

    Full Text Available In this paper, numerical solutions for the convection-diffusion equation via non-polynomial splines are studied. We propose an implicit method based on non-polynomial spline functions for solving the convection-diffusion equation. The method is proven to be unconditionally stable by using the von Neumann technique. Numerical results are illustrated to demonstrate the efficiency and stability of the proposed method.

  7. Micropolar Fluids Using B-spline Divergence Conforming Spaces

    KAUST Repository

    Sarmiento, Adel

    2014-06-06

    We discretized the two-dimensional linear momentum, microrotation, energy and mass conservation equations from micropolar fluids theory, with the finite element method, creating divergence conforming spaces based on B-spline basis functions to obtain pointwise divergence free solutions [8]. Weak boundary conditions were imposed using Nitsche's method for tangential conditions, while normal conditions were imposed strongly. Once the exact mass conservation was provided by the divergence free formulation, we focused on evaluating the differences between micropolar fluids and conventional fluids, to show the advantages of using the micropolar fluid model to capture the features of complex fluids. A square and an arc heat-driven cavity were solved as test cases. A variation of the model parameters, along with a variation of the Rayleigh number, was performed for a better understanding of the system. The divergence free formulation was used to guarantee an accurate solution of the flow. This formulation was implemented using the framework PetIGA as a basis, using its parallel structures to achieve high scalability. The results of the square heat-driven cavity test case are in good agreement with those reported earlier.

  8. Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases

    Science.gov (United States)

    Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre

    2011-12-01

    Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by a very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomial (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is being interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and versus the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.

  9. Final report on Production Test No. 105-245-P -- Effectiveness of cadmium coated splines

    Energy Technology Data Exchange (ETDEWEB)

    Carson, A.B.

    1949-05-19

    This report discussed cadmium coated splines which have been developed to supplement the regular control rod systems under emergency shutdown conditions from higher power levels. The objective of this test was to determine the effectiveness of one such spline placed in a tube in the central zone of a pile, and of two splines in the same tube. In addition, the process control group of the P Division asked that probable spline requirements for safe operation at various power levels be estimated, and the details included in this report. The results of the test indicated a reactivity value of 10.5 ± 1.0 ih for a single spline, and 19.0 ± 1.0 ih for two splines in tube 1674-B under the loading conditions of 4-27-49, the date of the test. The temperature rise of the cooling water for this tube under these conditions was found to be 37.2 °C for 275 MW operation.

  10. Language Proficiency, Collocational Knowledge and the Role of L1 Transfer: A Correlational Study of Iranian EFL Learners

    Directory of Open Access Journals (Sweden)

    Mustapha Hajebi

    2017-12-01

    Full Text Available The present study investigates the correlation between language proficiency, collocations and the role of L1 transfer with collocations. This is quantitative research, placing emphasis on collecting data in the form of numbers. It is also experimental research in the sense that it tests participants to measure their variables. The participants of the study were 57 Persian B.A. students, both male and female, from Islamic Azad University of Bandar Abbas, Iran. The results showed that there is a significant relationship between Iranian subjects’ language proficiency, as measured by the Michigan proficiency test, and their knowledge of collocations, as measured by their performance on a collocation test designed for the current study. The results obtained from the research indicate that Iranian EFL learners are more likely to use the right collocation in cases of L1 transfer. This suggests that positive transfer plays a major role when it comes to EFL learners’ ability to produce the right collocations in their L2. The findings of this study have some implications for language teaching. Teachers can put emphasis on the inclusion of selected grammatical and lexical collocations in reading comprehension passages.

  11. Dynamic metabolic flux analysis using B-splines to study the effects of temperature shift on CHO cell metabolism

    Directory of Open Access Journals (Sweden)

    Verónica S. Martínez

    2015-12-01

    Full Text Available Metabolic flux analysis (MFA is widely used to estimate intracellular fluxes. Conventional MFA, however, is limited to continuous cultures and the mid-exponential growth phase of batch cultures. Dynamic MFA (DMFA has emerged to characterize time-resolved metabolic fluxes for the entire culture period. Here, the linear DMFA approach was extended using B-spline fitting (B-DMFA to estimate mass balanced fluxes. Smoother fits were achieved using reduced number of knots and parameters. Additionally, computation time was greatly reduced using a new heuristic algorithm for knot placement. B-DMFA revealed that Chinese hamster ovary cells shifted from 37 °C to 32 °C maintained a constant IgG volume-specific productivity, whereas the productivity for the controls peaked during mid-exponential growth phase and declined afterward. The observed 42% increase in product titer at 32 °C was explained by a prolonged cell growth with high cell viability, a larger cell volume and a more stable volume-specific productivity. Keywords: Dynamic, Metabolism, Flux analysis, CHO cells, Temperature shift, B-spline curve fitting
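    The B-spline ingredient of such a dynamic analysis can be illustrated in isolation: fit a (here synthetic) metabolite time course with a smoothing cubic B-spline and differentiate the fit to obtain time-resolved rates. The knot-placement heuristic and the mass-balancing/flux-estimation steps of B-DMFA are not reproduced.

    import numpy as np
    from scipy.interpolate import splrep, splev

    # Synthetic extracellular metabolite time course (concentration in mM over culture time in h).
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 96.0, 25)
    conc = 30.0 * np.exp(-t / 40.0) + rng.normal(scale=0.3, size=t.size)

    tck = splrep(t, conc, k=3, s=t.size * 0.3**2)        # smoothing cubic B-spline fit
    rate = splev(t, tck, der=1)                          # d(conc)/dt, mM per hour

    print(rate[:3])                                      # time-resolved rates at the first sample times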

  12. MRI non-uniformity correction through interleaved bias estimation and B-spline deformation with a template.

    Science.gov (United States)

    Fletcher, E; Carmichael, O; Decarli, C

    2012-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.
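    The bias-field modelling step lends itself to a compact sketch: scattered patch-mean intensity ratios are smoothed with a thin-plate spline (here via SciPy's RBF interpolator with a thin-plate-spline kernel, which may differ from the authors' implementation) and evaluated over the image grid. All values below are synthetic.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Synthetic patch-mean ratios (template / subject) at scattered patch centres of a 256x256 image.
    rng = np.random.default_rng(4)
    centres = rng.uniform(0.0, 255.0, size=(60, 2))
    true_bias = lambda p: 1.0 + 0.2 * np.sin(p[:, 0] / 80.0) * np.cos(p[:, 1] / 60.0)
    ratios = true_bias(centres) + rng.normal(scale=0.02, size=60)

    # Smoothed thin-plate spline through the scattered ratios.
    tps = RBFInterpolator(centres, ratios, kernel="thin_plate_spline", smoothing=1.0)

    yy, xx = np.mgrid[0:256, 0:256]
    grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    bias_field = tps(grid).reshape(256, 256)             # smooth multiplicative bias field

    # corrected = subject_image * bias_field would then feed back into the template registration.
    print(bias_field.min(), bias_field.max())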

  13. Cortical surface registration using spherical thin-plate spline with sulcal lines and mean curvature as features.

    Science.gov (United States)

    Park, Hyunjin; Park, Jun-Sung; Seong, Joon-Kyung; Na, Duk L; Lee, Jong-Min

    2012-04-30

    Analysis of cortical patterns requires accurate cortical surface registration. Many researchers map the cortical surface onto a unit sphere and perform registration of two images defined on the unit sphere. Here we have developed a novel registration framework for the cortical surface based on spherical thin-plate splines. Small-scale composition of spherical thin-plate splines was used as the geometric interpolant to avoid folding in the geometric transform. Using an automatic algorithm based on anisotropic skeletons, we extracted seven sulcal lines, which we then incorporated as landmark information. Mean curvature was chosen as an additional feature for matching between spherical maps. We employed a two-term cost function to encourage matching of both sulcal lines and the mean curvature between the spherical maps. Application of our registration framework to fifty pairwise registrations of T1-weighted MRI scans resulted in improved registration accuracy, which was computed from sulcal lines. Our registration approach was tested as an additional procedure to improve an existing surface registration algorithm. Our registration framework maintained an accurate registration over the sulcal lines while significantly increasing the cross-correlation of mean curvature between the spherical maps being registered. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Surface quality monitoring for process control by on-line vibration analysis using an adaptive spline wavelet algorithm

    Science.gov (United States)

    Luo, G. Y.; Osypiw, D.; Irle, M.

    2003-05-01

    The dynamic behaviour of wood machining processes affects the surface finish quality of machined workpieces. In order to meet the requirements of increased production efficiency and improved product quality, surface quality information is needed for enhanced process control. However, current methods using high price devices or sophisticated designs, may not be suitable for industrial real-time application. This paper presents a novel approach of surface quality evaluation by on-line vibration analysis using an adaptive spline wavelet algorithm, which is based on the excellent time-frequency localization of B-spline wavelets. A series of experiments have been performed to extract the feature, which is the correlation between the relevant frequency band(s) of vibration with the change of the amplitude and the surface quality. The graphs of the experimental results demonstrate that the change of the amplitude in the selective frequency bands with variable resolution (linear and non-linear) reflects the quality of surface finish, and the root sum square of wavelet power spectrum is a good indication of surface quality. Thus, surface quality can be estimated and quantified at an average level in real time. The results can be used to regulate and optimize the machine's feed speed, maintaining a constant spindle motor speed during cutting. This will lead to higher level control and machining rates while keeping dimensional integrity and surface finish within specification.

  16. A Bézier-Spline-based Model for the Simulation of Hysteresis in Variably Saturated Soil

    Science.gov (United States)

    Cremer, Clemens; Peche, Aaron; Thiele, Luisa-Bianca; Graf, Thomas; Neuweiler, Insa

    2017-04-01

    Most transient variably saturated flow models neglect hysteresis in the p_c-S-relationship (Beven, 2012). Such models tend to inadequately represent matrix potential and saturation distribution. Thereby, when simulating flow and transport processes, fluid and solute fluxes might be overestimated (Russo et al., 1989). In this study, we present a simple, computationally efficient and easily applicable model that enables to adequately describe hysteresis in the p_c-S-relationship for variably saturated flow. This model can be seen as an extension to the existing play-type model (Beliaev and Hassanizadeh, 2001), where scanning curves are simplified as vertical lines between main imbibition and main drainage curve. In our model, we use continuous linear and Bézier-Spline-based functions. We show the successful validation of the model by numerically reproducing a physical experiment by Gillham, Klute and Heermann (1976) describing primary drainage and imbibition in a vertical soil column. With a deviation of 3%, the simple Bézier-Spline-based model performs significantly better that the play-type approach, which deviates by 30% from the experimental results. Finally, we discuss the realization of physical experiments in order to extend the model to secondary scanning curves and in order to determine scanning curve steepness. {Literature} Beven, K.J. (2012). Rainfall-Runoff-Modelling: The Primer. John Wiley and Sons. Russo, D., Jury, W. A., & Butters, G. L. (1989). Numerical analysis of solute transport during transient irrigation: 1. The effect of hysteresis and profile heterogeneity. Water Resources Research, 25(10), 2109-2118. https://doi.org/10.1029/WR025i010p02109. Beliaev, A.Y. & Hassanizadeh, S.M. (2001). A Theoretical Model of Hysteresis and Dynamic Effects in the Capillary Relation for Two-phase Flow in Porous Media. Transport in Porous Media 43: 487. doi:10.1023/A:1010736108256. Gillham, R., Klute, A., & Heermann, D. (1976). Hydraulic properties of a porous

  17. Modelling lecturer performance index of private university in Tulungagung by using survival analysis with multivariate adaptive regression spline

    Science.gov (United States)

    Hasyim, M.; Prastyo, D. D.

    2018-03-01

    Survival analysis models the relationship between independent variables and survival time as the dependent variable. In practice, not all survival data can be recorded completely, for various reasons; in such situations the data are called censored data. Moreover, several models for survival analysis require assumptions. Nonparametric approaches to survival analysis impose more relaxed assumptions. In this research, the nonparametric approach employed is the Multivariate Adaptive Regression Spline (MARS). This study aims to measure the performance of private university lecturers. The survival time in this study is the duration needed by a lecturer to obtain their professional certificate. The results show that research activity is a significant factor, along with developing course materials, good publications in international or national journals, and research collaboration activities.

  18. Evaluation of the spline reconstruction technique for PET

    Energy Technology Data Exchange (ETDEWEB)

    Kastis, George A., E-mail: gkastis@academyofathens.gr; Kyriakopoulou, Dimitra [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Fernández, Yolanda [Centre d’Imatge Molecular Experimental (CIME), CETIR-ERESA, Barcelona 08950 (Spain); Hutton, Brian F. [Institute of Nuclear Medicine, University College London, London NW1 2BU (United Kingdom); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA (United Kingdom)

    2014-04-15

    Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point-source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an{sup 18}F-FDG injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficient of variations (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an {sup 18}F-FDG injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real

  19. Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging

    Science.gov (United States)

    Lin, Bingxiong; Sun, Yu; Qian, Xiaoning

    2013-03-01

    Robust feature point matching for images with large view angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that can improve feature matching performance by exploiting the inherent geometric property of the organ surfaces. Recently, intensity based template image tracking using a Thin Plate Spline (TPS) model has been extended for 3D surface tracking with stereo cameras. The intensity based tracking is also used here for 3D reconstruction of internal organ surfaces. To overcome the small displacement requirement of intensity based tracking, feature point correspondences are used for proper initialization of the nonlinear optimization in the intensity based method. Second, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image. The descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicon phantoms and in vivo images. The experimental results show that our method is much more robust with respect to the view angle changes than other state-of-the-art methods.

  20. Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting

    Science.gov (United States)

    Shamonin, D. P.; Staring, M.; Bakker, M. E.; Xiao, C.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.

    2012-02-01

    We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures, removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data, containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry, and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.

  1. Multivariable Truncated Spline Modeling of National Examination Scores in West Lombok Regency

    Directory of Open Access Journals (Sweden)

    Nurul Fitriyani

    2017-12-01

    Full Text Available A regression model is used to analyze the relationship between a dependent variable and independent variables. If the form of the regression curve is not known, the regression curve can be estimated by a nonparametric regression approach. This study aimed to investigate the relationship between the National Examination score and the factors that influence it. The statistical analysis used was the multivariable truncated spline, in order to analyze the relationship between the variables. The research showed that the best model was obtained by using three knot points. This model produced a minimum GCV value of 44.46 and a determination coefficient of 58.627%. The parameter tests showed that all factors used significantly influence the National Examination score for Senior High School students in West Lombok Regency in 2017. The variables were as follows: National Examination score of Junior High School; School or Madrasah Examination score; the value of the student's report card; the student's house distance to school; and the number of the student's siblings.
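
    To make the truncated-spline idea above concrete, the sketch below fits a one-predictor truncated power basis spline (linear pieces plus three knots) by ordinary least squares and reports a GCV score. It is only an illustration of the basis construction under assumed toy data, knot placement, and helper names; it is not the authors' multivariable model.

      import numpy as np

      def truncated_spline_basis(x, knots, degree=1):
          # Columns: 1, x, ..., x^degree, then (x - k)_+^degree for each knot.
          cols = [x**d for d in range(degree + 1)]
          cols += [np.clip(x - k, 0.0, None)**degree for k in knots]
          return np.column_stack(cols)

      rng = np.random.default_rng(0)
      x = np.sort(rng.uniform(0.0, 10.0, 80))
      y = np.sin(x) + 0.3 * x + rng.normal(scale=0.2, size=x.size)

      knots = np.quantile(x, [0.25, 0.50, 0.75])       # three knot points, as in the record
      X = truncated_spline_basis(x, knots)
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # ordinary least-squares fit
      y_hat = X @ beta

      H = X @ np.linalg.pinv(X.T @ X) @ X.T            # hat matrix for the GCV score
      gcv = x.size * np.sum((y - y_hat)**2) / (x.size - np.trace(H))**2
      print("coefficients:", np.round(beta, 3), " GCV:", round(float(gcv), 4))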

  2. Underground collocation of nuclear power plant reactors and repository to facilitate the post-renaissance expansion of nuclear power

    Energy Technology Data Exchange (ETDEWEB)

    Myers, Carl W [Los Alamos National Laboratory; Elkins, Ned Z [Los Alamos National Laboratory

    2008-01-01

    Underground collocation of nuclear power reactors and the nuclear waste management facilities supporting those reactors, termed an underground nuclear park (UNP), appears to have several advantages compared to the conventional approach to siting reactors and waste management facilities. These advantages include the potential to lower reactor capital and operating cost, lower nuclear waste management cost, and increase margins of physical security and safety. Environmental impacts related to worker health, facility accidents, waste transportation, and sabotage and terrorism appear to be lower for UNPs compared to the current approach. In-place decommissioning of UNP reactors appears to have cost, safety, environmental and waste disposal advantages. The UNP approach has the potential to lead to greater public acceptance for the deployment of new power reactors. Use of the UNP during the post-nuclear renaissance time frame has the potential to enable a greater expansion of U.S. nuclear power generation than might otherwise result. Technical and economic aspects of the UNP concept need more study to determine the viability of the concept.

  3. An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry

    Science.gov (United States)

    2015-12-01

    ARL-SR-0347 ● DEC 2015 ● US Army Research Laboratory ● An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry.

  4. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra

    International Nuclear Information System (INIS)

    Avila, Gustavo; Carrington, Tucker

    2015-01-01

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates

  5. Application of Legendre spectral-collocation method to delay differential and stochastic delay differential equation

    Science.gov (United States)

    Khan, Sami Ullah; Ali, Ishtiaq

    2018-03-01

    Explicit solutions to delay differential equations (DDEs) and stochastic delay differential equations (SDDEs) can rarely be obtained, so numerical methods are adopted to solve them. On the other hand, because of the unstable nature of both DDEs and SDDEs, their numerical solution is not straightforward and requires extra attention. In this study, we derive an efficient numerical scheme for DDEs and SDDEs based on the Legendre spectral-collocation method, which can significantly speed up the computation. The method transforms the given differential equation into a matrix equation by means of Legendre collocation points, which corresponds to a system of algebraic equations with unknown Legendre coefficients. The efficiency of the proposed method is confirmed by some numerical examples. We found that our numerical technique agrees very well with other methods at less computational effort.
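
    The core step described above, turning a differential equation into algebraic equations for unknown Legendre coefficients via collocation points, can be sketched for a plain (non-delay) test problem. Everything here (the test equation u' = -u, the basis size, the variable names) is an illustrative assumption, not the authors' DDE/SDDE scheme.

      import numpy as np
      from numpy.polynomial.legendre import Legendre, leggauss

      # Collocation for the test problem u'(t) = -u(t), u(0) = 1 on [0, 1]:
      # expand u in Legendre polynomials mapped to [0, 1] and enforce the ODE
      # at Legendre-Gauss points plus the initial condition.
      N = 12                                        # number of basis functions (assumed)
      basis = [Legendre.basis(k, domain=[0.0, 1.0]) for k in range(N)]
      dbasis = [p.deriv() for p in basis]

      xg, _ = leggauss(N - 1)                       # Gauss nodes on [-1, 1]
      tc = 0.5 * (xg + 1.0)                         # mapped to [0, 1]

      # Rows: sum_k c_k (P_k'(t_i) + P_k(t_i)) = 0 at each collocation point.
      A = np.array([[dbasis[k](t) + basis[k](t) for k in range(N)] for t in tc])
      b = np.zeros(N - 1)
      A = np.vstack([A, [basis[k](0.0) for k in range(N)]])   # initial-condition row
      b = np.append(b, 1.0)

      c = np.linalg.solve(A, b)                     # unknown Legendre coefficients
      tt = np.linspace(0.0, 1.0, 200)
      u = sum(ck * pk(tt) for ck, pk in zip(c, basis))
      print("max error vs exp(-t):", float(np.max(np.abs(u - np.exp(-tt)))))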

  6. On the hybrid stability of the collocated virtual holonomic constraints basedwalking design

    Czech Academy of Sciences Publication Activity Database

    Anderle, Milan; Čelikovský, Sergej

    2017-01-01

    Roč. 6, č. 2 (2017), s. 47-56 ISSN 2223-7038 R&D Projects: GA ČR(CZ) GA17-04682S Institutional support: RVO:67985556 Keywords : Underactuated walking * Virtual holonomic constraints * Poincaré section method * collocated constraints Subject RIV: BC - Control Systems Theory OBOR OECD: Automation and control systems http://lib.physcon.ru/doc?id=60655c1961ed

  7. Nodal collocation approximation for the multidimensional PL equations applied to transport source problems

    International Nuclear Information System (INIS)

    Verdu, G.; Capilla, M.; Talavera, C. F.; Ginestar, D.

    2012-01-01

    PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)

  8. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    MERCIER , Michael; Glesser , David; Georgiou , Yiannis; Richard , Olivier

    2017-01-01

    Executing Big Data workloads upon High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of their core concepts' differences. This paper focuses on the challenges related to the scheduling of both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...

  9. Nodal collocation approximation for the multidimensional PL equations applied to transport source problems

    Energy Technology Data Exchange (ETDEWEB)

    Verdu, G. [Departamento de Ingenieria Quimica Y Nuclear, Universitat Politecnica de Valencia, Cami de Vera, 14, 46022. Valencia (Spain); Capilla, M.; Talavera, C. F.; Ginestar, D. [Dept. of Nuclear Engineering, Departamento de Matematica Aplicada, Universitat Politecnica de Valencia, Cami de Vera, 14, 46022. Valencia (Spain)

    2012-07-01

    PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)

  10. Fibonacci collocation method with a residual error function to solve linear Volterra integro-differential equations

    Directory of Open Access Journals (Sweden)

    Salih Yalcinbas

    2016-01-01

    Full Text Available In this paper, a new collocation method based on the Fibonacci polynomials is introduced to solve the high-order linear Volterra integro-differential equations under the conditions. Numerical examples are included to demonstrate the applicability and validity of the proposed method and comparisons are made with the existing results. In addition, an error estimation based on the residual functions is presented for this method. The approximate solutions are improved by using this error estimation.

  11. A Legendre Wavelet Spectral Collocation Method for Solving Oscillatory Initial Value Problems

    Directory of Open Access Journals (Sweden)

    A. Karimi Dizicheh

    2013-01-01

    wavelet suitable for large intervals, and then the Legendre-Gauss collocation points of the Legendre wavelet are derived. Using this strategy, the iterative spectral method converts the differential equation to a set of algebraic equations. Solving these algebraic equations yields an approximate solution for the differential equation. The proposed method is illustrated by some numerical examples, and the result is compared with the exponentially fitted Runge-Kutta method. Our proposed method is simple and highly accurate.

  12. A Numerical Method for Lane-Emden Equations Using Hybrid Functions and the Collocation Method

    Directory of Open Access Journals (Sweden)

    Changqing Yang

    2012-01-01

    Full Text Available A numerical method to solve Lane-Emden equations as singular initial value problems is presented in this work. This method is based on the replacement of unknown functions through a truncated series of hybrid of block-pulse functions and Chebyshev polynomials. The collocation method transforms the differential equation into a system of algebraic equations. It also has application in a wide area of differential equations. Corresponding numerical examples are presented to demonstrate the accuracy of the proposed method.

  13. Space cutter compensation method for five-axis nonuniform rational basis spline machining

    Directory of Open Access Journals (Sweden)

    Yanyu Ding

    2015-07-01

    Full Text Available In view of the good machining performance of traditional three-axis nonuniform rational basis spline interpolation and the space cutter compensation issue in multi-axis machining, this article presents a triple nonuniform rational basis spline five-axis interpolation method, which uses three nonuniform rational basis spline curves to describe cutter center location, cutter axis vector, and cutter contact point trajectory, respectively. The relative position of the cutter and workpiece is calculated under the workpiece coordinate system, and the cutter machining trajectory can be described precisely and smoothly using this method. The three nonuniform rational basis spline curves are transformed into a 12-dimensional Bézier curve to carry out discretization during the discrete process. With the cutter contact point trajectory as the precision control condition, the discretization is fast. As for different cutters and corners, the complete description method of the space cutter compensation vector is presented in this article. Finally, the five-axis nonuniform rational basis spline machining method is further verified in a two-turntable five-axis machine.

  14. A collocation--Galerkin finite element model of cardiac action potential propagation.

    Science.gov (United States)

    Rogers, J M; McCulloch, A D

    1994-08-01

    A new computational method was developed for modeling the effects of the geometric complexity, nonuniform muscle fiber orientation, and material inhomogeneity of the ventricular wall on cardiac impulse propagation. The method was used to solve a modification to the FitzHugh-Nagumo system of equations. The geometry, local muscle fiber orientation, and material parameters of the domain were defined using linear Lagrange or cubic Hermite finite element interpolation. Spatial variations of time-dependent excitation and recovery variables were approximated using cubic Hermite finite element interpolation, and the governing finite element equations were assembled using the collocation method. To overcome the deficiencies of conventional collocation methods on irregular domains, Galerkin equations for the no-flux boundary conditions were used instead of collocation equations for the boundary degrees-of-freedom. The resulting system was evolved using an adaptive Runge-Kutta method. Converged two-dimensional simulations of normal propagation showed that this method requires less CPU time than a traditional finite difference discretization. The model also reproduced several other physiologic phenomena known to be important in arrhythmogenesis including: Wenckebach periodicity, slowed propagation and unidirectional block due to wavefront curvature, reentry around a fixed obstacle, and spiral wave reentry. In a new result, we observed wavespeed variations and block due to nonuniform muscle fiber orientation. The findings suggest that the finite element method is suitable for studying normal and pathological cardiac activation and has significant advantages over existing techniques.

  15. An adaptive multi-element probabilistic collocation method for statistical EMC/EMI characterization

    KAUST Repository

    Yücel, Abdulkadir C.

    2013-12-01

    An adaptive multi-element probabilistic collocation (ME-PC) method for quantifying uncertainties in electromagnetic compatibility and interference phenomena involving electrically large, multi-scale, and complex platforms is presented. The method permits the efficient and accurate statistical characterization of observables (i.e., quantities of interest such as coupled voltages) that potentially vary rapidly and/or are discontinuous in the random variables (i.e., parameters that characterize uncertainty in a system's geometry, configuration, or excitation). The method achieves its efficiency and accuracy by recursively and adaptively dividing the domain of the random variables into subdomains using as a guide the decay rate of relative error in a polynomial chaos expansion of the observables. While constructing local polynomial expansions on each subdomain, a fast integral-equation-based deterministic field-cable-circuit simulator is used to compute the observable values at the collocation/integration points determined by the adaptive ME-PC scheme. The adaptive ME-PC scheme requires far fewer (computationally costly) deterministic simulations than traditional polynomial chaos collocation and Monte Carlo methods for computing averages, standard deviations, and probability density functions of rapidly varying observables. The efficiency and accuracy of the method are demonstrated via its applications to the statistical characterization of voltages in shielded/unshielded microwave amplifiers and magnetic fields induced on car tire pressure sensors. © 2013 IEEE.

  16. Using Small Parallel Corpora to Develop Collocation-Centred Activities in Specialized Translation Classes

    Directory of Open Access Journals (Sweden)

    Postolea Sorina

    2016-12-01

    Full Text Available The research devoted to special languages as well as the activities carried out in specialized translation classes tend to focus primarily on one-word or multi-word terminological units. However, a very important part in the making of specialist registers and texts is played by specialised collocations, i.e. relatively stable word combinations that do not designate concepts but are nevertheless of frequent use in a given field of activity. This is why helping students acquire competences relative to the identification and processing of collocations should become an important objective in specialised translation classes. An easily accessible and dependable resource that may be successfully used to this purpose is represented by corpora and corpus analysis tools, whose usefulness in translator training has been highlighted by numerous studies. This article proposes a series of practical, task-based activities, developed with the help of a small-size parallel corpus of specialised texts, that aim to raise the translation trainees' awareness of the collocations present in specialised texts and to provide suggestions about their processing in translation.

  17. A finite strain Eulerian formulation for compressible and nearly incompressible hyperelasticity using high-order B-spline finite elements

    KAUST Repository

    Duddu, Ravindra

    2011-10-05

    We present a numerical formulation aimed at modeling the nonlinear response of elastic materials using large deformation continuum mechanics in three dimensions. This finite element formulation is based on the Eulerian description of motion and the transport of the deformation gradient. When modeling a nearly incompressible solid, the transport of the deformation gradient is decomposed into its isochoric part and the Jacobian determinant as independent fields. A homogeneous isotropic hyperelastic solid is assumed and B-splines-based finite elements are used for the spatial discretization. A variational multiscale residual-based approach is employed to stabilize the transport equations. The performance of the scheme is explored for both compressible and nearly incompressible applications. The numerical results are in good agreement with theory illustrating the viability of the computational scheme. © 2011 John Wiley & Sons, Ltd.

  18. Validating the Kinematic Wave Approach for Rapid Soil Erosion Assessment and Improved BMP Site Selection to Enhance Training Land Sustainability

    Science.gov (United States)

    2014-02-01

    One approach assigned threshold values from the nearest installation based on a Euclidean distance allocation. The second approach interpolated the installation-critical nLS+ thresholds spatially, using a thin-plate spline radial basis function (RBF) technique for the interpolation of installation results.

  19. T-Spline Based Unifying Registration Procedure for Free-Form Surface Workpieces in Intelligent CMM

    Directory of Open Access Journals (Sweden)

    Zhenhua Han

    2017-10-01

    Full Text Available With the development of the modern manufacturing industry, free-form surfaces are widely used in various fields, and the automatic detection of a free-form surface is an important function of future intelligent coordinate measuring machines (CMMs). To improve the intelligence of CMMs, a new visual system is designed based on the characteristics of CMMs. A unified model of the free-form surface is proposed based on T-splines. A discretization method for the T-spline surface model is proposed. Under this discretization, the position and orientation of the workpiece can be recognized by point cloud registration. A high-accuracy method is proposed for evaluating the deviation between the measured point cloud and the T-spline surface model. The experimental results demonstrate that the proposed method has the potential to realize the automatic detection of different free-form surfaces and improve the intelligence of CMMs.

  20. Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year

    Science.gov (United States)

    Kamaruddin, Halim Shukri; Ismail, Noriszura

    2014-06-01

    Nonparametric regression uses the data to derive the best model coefficients from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are used for one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, in which the accuracy of models and forecasts is the main concern of the industry. The original idea of P-splines is extended here to two-dimensional mortality data, indexed by age at death and year of death, with the large data set supplied by the Department of Statistics Malaysia. The extension of this idea constructs the best fitted surface and provides sensible predictions of the underlying mortality rates for the Malaysian mortality case.
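
    A minimal one-dimensional P-spline sketch in the spirit of Eilers and Marx is shown below: an equally spaced cubic B-spline basis combined with a second-order difference penalty on the coefficients, solved as penalized least squares. The toy data, knot layout, smoothing parameter, and helper names are assumptions made for illustration; the two-dimensional age-year extension described in the record is not implemented.

      import numpy as np
      from scipy.interpolate import BSpline

      def bspline_basis(x, xlo, xhi, n_bases=25, degree=3):
          # Equally spaced B-spline basis, evaluated column by column via unit coefficient vectors.
          inner = np.linspace(xlo, xhi, n_bases - degree + 1)
          step = inner[1] - inner[0]
          knots = np.concatenate([xlo - step * np.arange(degree, 0, -1),
                                  inner,
                                  xhi + step * np.arange(1, degree + 1)])
          B = np.empty((x.size, n_bases))
          for j in range(n_bases):
              coef = np.zeros(n_bases)
              coef[j] = 1.0
              B[:, j] = BSpline(knots, coef, degree)(x)
          return B

      # Toy one-dimensional log-mortality curve with noise (illustrative only).
      rng = np.random.default_rng(1)
      age = np.linspace(0.0, 100.0, 101)
      log_rate = -9.0 + 0.085 * age + rng.normal(scale=0.15, size=age.size)

      B = bspline_basis(age, age.min(), age.max())
      D = np.diff(np.eye(B.shape[1]), n=2, axis=0)        # second-order difference penalty
      lam = 10.0                                          # smoothing parameter (assumed)
      alpha = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ log_rate)
      print("residual std:", round(float(np.std(log_rate - B @ alpha)), 4))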

  1. Splines and their reciprocal-bases in volume-integral equations

    International Nuclear Information System (INIS)

    Sabbagh, H.A.

    1993-01-01

    The authors briefly outline the use of higher-order splines and their reciprocal-bases in discretizing the volume-integral equations of electromagnetics. The discretization is carried out by means of the method of moments, in which the expansion functions are the higher-order splines, and the testing functions are the corresponding reciprocal-basis functions. These functions satisfy an orthogonality condition with respect to the spline expansion functions. Thus, the method is not Galerkin, but the structure of the resulting equations is quite regular, nevertheless. The theory is applied to the volume-integral equations for the unknown current density, or unknown electric field, within a scattering body, and to the equations for eddy-current nondestructive evaluation. Numerical techniques for computing the matrix elements are also given

  2. Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data

    Science.gov (United States)

    Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej

    2016-04-01

    GNSS-derived regional ionosphere models are widely used in both precise positioning and ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is a closed solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model allows for providing ionospheric TEC maps with high spatial and temporal resolutions - 0.2x0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data is used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals to calibrated carrier phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The obtained post-fit residuals in the case of UWM maps are lower by one order of magnitude compared to IGS maps. The accuracy of the UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.

  3. Integration of association statistics over genomic regions using Bayesian adaptive regression splines

    Directory of Open Access Journals (Sweden)

    Zhang Xiaohua

    2003-11-01

    Full Text Available In the search for genetic determinants of complex disease, two approaches to association analysis are most often employed, testing single loci or testing a small group of loci jointly via haplotypes for their relationship to disease status. It is still debatable which of these approaches is more favourable, and under what conditions. The former has the advantage of simplicity but suffers severely when alleles at the tested loci are not in linkage disequilibrium (LD) with liability alleles; the latter should capture more of the signal encoded in LD, but is far from simple. The complexity of haplotype analysis could be especially troublesome for association scans over large genomic regions, which, in fact, is becoming the standard design. For these reasons, the authors have been evaluating statistical methods that bridge the gap between single-locus and haplotype-based tests. In this article, they present one such method, which uses non-parametric regression techniques embodied by Bayesian adaptive regression splines (BARS). For a set of markers falling within a common genomic region and a corresponding set of single-locus association statistics, the BARS procedure integrates these results into a single test by examining the class of smooth curves consistent with the data. The non-parametric BARS procedure generally finds no signal when no liability allele exists in the tested region (i.e., it achieves the specified size of the test), and it is sensitive enough to pick up signals when a liability allele is present. The BARS procedure provides a robust and potentially powerful alternative to classical tests of association, diminishes the multiple testing problem inherent in those tests and can be applied to a wide range of data types, including genotype frequencies estimated from pooled samples.

  4. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    Science.gov (United States)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  5. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    Science.gov (United States)

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which will influence the registration results, and localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, an automatic method to extract the landmarks. Combining these two steps, we propose an automatic, exact and robust registration method and have obtained satisfactory registration results.
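
    The thin-plate spline interpolation step mentioned above, mapping one landmark set onto another, can be sketched as the usual bordered linear system with the U(r) = r^2 log r kernel. The landmark coordinates and function names below are illustrative assumptions, and the optimization of the interpolant proposed in the record is not attempted here.

      import numpy as np

      def tps_fit(src, dst):
          # Fit a 2-D thin-plate spline that maps the src landmarks onto dst exactly.
          def U(r):
              with np.errstate(divide="ignore", invalid="ignore"):
                  return np.where(r > 0.0, r**2 * np.log(r), 0.0)

          n = src.shape[0]
          K = U(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1))
          P = np.hstack([np.ones((n, 1)), src])                  # affine part
          L = np.block([[K, P], [P.T, np.zeros((3, 3))]])
          Y = np.vstack([dst, np.zeros((3, 2))])
          params = np.linalg.solve(L, Y)                         # [w; a], one column per coordinate
          w, a = params[:n], params[n:]

          def warp(pts):
              Kp = U(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1))
              return Kp @ w + np.hstack([np.ones((pts.shape[0], 1)), pts]) @ a
          return warp

      # Four hand-picked landmark pairs (coordinates are illustrative assumptions).
      src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      dst = src + np.array([[0.05, 0.02], [-0.03, 0.04], [0.02, -0.05], [0.0, 0.03]])
      warp = tps_fit(src, dst)
      print(np.round(warp(src) - dst, 6))   # landmarks are reproduced to near machine precision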

  6. Spectral collocation method with a flexible angular discretization scheme for radiative transfer in multi-layer graded index medium

    Science.gov (United States)

    Wei, Linyang; Qi, Hong; Sun, Jianping; Ren, Yatao; Ruan, Liming

    2017-05-01

    The spectral collocation method (SCM) is employed to solve the radiative transfer in multi-layer semitransparent media with graded index. A new flexible angular discretization scheme is employed to discretize the solid angle domain freely, to overcome the limit on the number of discrete radiative directions imposed by the traditional SN discrete ordinate scheme. Three radial basis function interpolation approaches, named multi-quadric (MQ), inverse multi-quadric (IMQ) and inverse quadratic (IQ) interpolation, are employed to couple the radiative intensity at the interface between two adjacent layers, and numerical experiments show that MQ interpolation has the highest accuracy and best stability. Variable radiative transfer problems in double-layer semitransparent media with different thermophysical properties are investigated, and the influence of these thermophysical properties on the radiative transfer procedure in double-layer semitransparent media is also analyzed. All the simulated results show that the present SCM with the new angular discretization scheme can predict the radiative transfer in multi-layer semitransparent media with graded index efficiently and accurately.

  7. Spatial and temporal interpolation of satellite-based aerosol optical depth measurements over North America using B-splines

    Science.gov (United States)

    Pfister, Nicolas; O'Neill, Norman T.; Aube, Martin; Nguyen, Minh-Nghia; Bechamp-Laganiere, Xavier; Besnier, Albert; Corriveau, Louis; Gasse, Geremie; Levert, Etienne; Plante, Danick

    2005-08-01

    Satellite-based measurements of aerosol optical depth (AOD) over land are obtained from an inversion procedure applied to dense dark vegetation pixels of remotely sensed images. The limited number of pixels over which the inversion procedure can be applied leaves many areas with little or no AOD data. Moreover, satellite coverage by sensors such as MODIS yields only daily images of a given region with four sequential overpasses required to straddle mid-latitude North America. Ground based AOD data from AERONET sun photometers are available on a more continuous basis but only at approximately fifty locations throughout North America. The object of this work is to produce a complete and coherent mapping of AOD over North America with a spatial resolution of 0.1 degree and a frequency of three hours by interpolating MODIS satellite-based data together with available AERONET ground based measurements. Before being interpolated, the MODIS AOD data extracted from different passes are synchronized to the mapping time using analyzed wind fields from the Global Multiscale Model (Meteorological Service of Canada). This approach amounts to a trajectory type of simplified atmospheric dynamics correction method. The spatial interpolation is performed using a weighted least squares method applied to bicubic B-spline functions defined on a rectangular grid. The least squares method enables one to weight the data accordingly to the measurement errors while the B-splines properties of local support and C2 continuity offer a good approximation of AOD behaviour viewed as a function of time and space.

  8. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

    International Nuclear Information System (INIS)

    Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em

    2008-01-01

    Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L2 error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
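
    The single-element building block of such collocation schemes can be illustrated with one random variable: evaluate the observable at Gauss-Legendre points and combine the values with the quadrature weights to estimate statistical moments. The observable g, the number of points, and the Monte Carlo check are assumptions made purely for this sketch.

      import numpy as np

      def g(xi):
          # Stand-in for an expensive deterministic solve at parameter value xi.
          return np.exp(0.7 * xi) * np.sin(3.0 * xi)

      # Collocate at Gauss-Legendre points; divide weights by 2 for the Uniform(-1, 1) density.
      nodes, weights = np.polynomial.legendre.leggauss(8)
      w = weights / 2.0
      vals = g(nodes)                       # one "simulation" per collocation point

      mean = np.sum(w * vals)
      var = np.sum(w * vals**2) - mean**2
      print("collocation estimate :", mean, var)

      # Crude Monte Carlo reference using many more evaluations.
      rng = np.random.default_rng(0)
      samples = g(rng.uniform(-1.0, 1.0, 200_000))
      print("Monte Carlo reference:", samples.mean(), samples.var())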

  9. Approximation and geometric modeling with simplex B-splines associated with irregular triangles

    NARCIS (Netherlands)

    Auerbach, S.; Gmelig Meyling, R.H.J.; Neamtu, M.; Neamtu, M.; Schaeben, H.

    1991-01-01

    Bivariate quadratic simplicial B-splines defined by their corresponding set of knots derived from a (suboptimal) constrained Delaunay triangulation of the domain are employed to obtain a C1-smooth surface. The generation of triangle vertices is adjusted to the areal distribution of the data in the

  10. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    Science.gov (United States)

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-01

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  11. Isogeometric finite element data structures based on Bézier extraction of T-splines

    NARCIS (Netherlands)

    Scott, M.A.; Borden, M.J.; Verhoosel, C.V.; Sederberg, T.W.; Hughes, T.J.R.

    2011-01-01

    We develop finite element data structures for T-splines based on Bézier extraction generalizing our previous work for NURBS. As in traditional finite element analysis, the extracted Bézier elements are defined in terms of a fixed set of polynomial basis functions, the so-called Bernstein basis. The

  12. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    Science.gov (United States)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

    A greater need for sophisticated autonomous piloting systems has risen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of Three-Dimensional Euclidean Space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
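
    One common (though certainly not the only) way to turn a waypoint list into smooth cubic Bezier segments is to derive the control points from Catmull-Rom style tangents, which keeps the curve C1-continuous through the waypoints. The sketch below uses invented waypoints and helper names and ignores vehicle-dynamics limits, so it only illustrates the control-point generation idea, not the paper's algorithm.

      import numpy as np

      def bezier_controls_from_waypoints(W):
          # Catmull-Rom style tangents give C1-continuous cubic Bezier segments through W.
          W = np.asarray(W, dtype=float)
          P = np.vstack([W[0], W, W[-1]])              # duplicate endpoints as padding
          segs = []
          for i in range(1, len(P) - 2):
              p0, p1, p2, p3 = P[i - 1], P[i], P[i + 1], P[i + 2]
              segs.append(np.array([p1,
                                    p1 + (p2 - p0) / 6.0,
                                    p2 - (p3 - p1) / 6.0,
                                    p2]))
          return segs

      def bezier_point(ctrl, t):
          # Evaluate one cubic Bezier segment in Bernstein form.
          b = np.array([(1 - t)**3, 3 * t * (1 - t)**2, 3 * t**2 * (1 - t), t**3])
          return b @ ctrl

      waypoints = [(0.0, 0.0), (2.0, 1.0), (4.0, 0.0), (6.0, 2.0)]   # illustrative coordinates
      for seg in bezier_controls_from_waypoints(waypoints):
          pts = np.array([bezier_point(seg, t) for t in np.linspace(0.0, 1.0, 5)])
          print(np.round(pts, 2))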

  13. A thin-plate spline analysis of the face and tongue in obstructive sleep apnea patients.

    Science.gov (United States)

    Pae, E K; Lowe, A A; Fleetham, J A

    1997-12-01

    The shape characteristics of the face and tongue in obstructive sleep apnea (OSA) patients were investigated using thin-plate (TP) splines. A relatively new analytic tool, the TP spline method, provides a means of size normalization and image analysis. When shape is one's main concern, various sizes of a biologic structure may be a source of statistical noise. More seriously, the strong size effect could mask underlying, actual attributes of the disease. A set of size normalized data in the form of coordinates was generated from cephalograms of 80 male subjects. The TP spline method envisioned the differences in the shape of the face and tongue between OSA patients and nonapneic subjects and those between the upright and supine body positions. In accordance with OSA severity, the hyoid bone and the submental region positioned inferiorly and the fourth vertebra relocated posteriorly with respect to the mandible. This caused a fanlike configuration of the lower part of the face and neck in the sagittal plane in both upright and supine body positions. TP splines revealed tongue deformations caused by a body position change. Overall, the new morphometric tool adopted here was found to be viable in the analysis of morphologic changes.

  14. Fingerprint Matching by Thin-plate Spline Modelling of Elastic Deformations

    NARCIS (Netherlands)

    Bazen, A.M.; Gerez, Sabih H.

    2003-01-01

    This paper presents a novel minutiae matching method that describes elastic distortions in fingerprints by means of a thin-plate spline model, which is estimated using a local and a global matching stage. After registration of the fingerprints according to the estimated model, the number of matching

  15. Least square fitting of low resolution gamma ray spectra with cubic B-spline basis functions

    International Nuclear Information System (INIS)

    Zhu Menghua; Liu Lianggang; Qi Dongxu; You Zhong; Xu Aoao

    2009-01-01

    In this paper, the least square fitting method with the cubic B-spline basis functions is derived to reduce the influence of statistical fluctuations in the gamma ray spectra. The derived procedure is simple and automatic. The results show that this method is better than the convolution method with a sufficient reduction of statistical fluctuation. (authors)
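
    The least-squares cubic B-spline smoothing idea can be sketched with SciPy's make_lsq_spline on synthetic Poisson-noisy data. The peak shape, knot spacing, and variable names are assumptions for illustration, and no comparison with the convolution method is attempted.

      import numpy as np
      from scipy.interpolate import make_lsq_spline

      # Synthetic "spectrum": a Gaussian peak on a smooth background with Poisson-style noise.
      rng = np.random.default_rng(2)
      channel = np.linspace(0.0, 1.0, 400)
      truth = 50.0 * np.exp(-0.5 * ((channel - 0.45) / 0.04)**2) + 20.0 * (1.0 - channel)
      counts = rng.poisson(truth).astype(float)

      # Cubic B-spline least-squares fit; boundary knots are repeated k+1 times.
      k = 3
      interior = np.linspace(0.05, 0.95, 25)              # knot spacing is an assumption
      knots = np.r_[(channel[0],) * (k + 1), interior, (channel[-1],) * (k + 1)]
      spline = make_lsq_spline(channel, counts, knots, k=k)

      smoothed = spline(channel)
      print("rms of removed fluctuation:", round(float(np.std(counts - smoothed)), 2))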

  16. Application of Cubic Box Spline Wavelets in the Analysis of Signal Singularities

    Directory of Open Access Journals (Sweden)

    Rakowski Waldemar

    2015-12-01

    Full Text Available In the subject literature, wavelets such as the Mexican hat (the second derivative of a Gaussian or the quadratic box spline are commonly used for the task of singularity detection. The disadvantage of the Mexican hat, however, is its unlimited support; the disadvantage of the quadratic box spline is a phase shift introduced by the wavelet, making it difficult to locate singular points. The paper deals with the construction and properties of wavelets in the form of cubic box splines which have compact and short support and which do not introduce a phase shift. The digital filters associated with cubic box wavelets that are applied in implementing the discrete dyadic wavelet transform are defined. The filters and the algorithme à trous of the discrete dyadic wavelet transform are used in detecting signal singularities and in calculating the measures of signal singularities in the form of a Lipschitz exponent. The article presents examples illustrating the use of cubic box spline wavelets in the analysis of signal singularities.

  17. Integration by cell algorithm for Slater integrals in a spline basis

    International Nuclear Information System (INIS)

    Qiu, Y.; Fischer, C.F.

    1999-01-01

    An algorithm for evaluating Slater integrals in a B-spline basis is introduced. Based on the piecewise property of the B-splines, the algorithm divides the two-dimensional (r1, r2) region into a number of rectangular cells according to the chosen grid and implements the two-dimensional integration over each individual cell using Gaussian quadrature. Over the off-diagonal cells, the integrands are separable so that each two-dimensional cell-integral is reduced to a product of two one-dimensional integrals. Furthermore, the scaling invariance of the B-splines in the logarithmic region of the chosen grid is fully exploited such that only some of the cell integrations need to be implemented. The values of given Slater integrals are obtained by assembling the cell integrals. This algorithm significantly improves the efficiency and accuracy of the traditional method that relies on the solution of differential equations and renders the B-spline method more effective when applied to multi-electron atomic systems.

  18. Evaluation of optimization methods for nonrigid medical image registration using mutual information and B-splines

    NARCIS (Netherlands)

    Klein, S.; Staring, M.; Pluim, J.P.W.

    2007-01-01

    A popular technique for nonrigid registration of medical images is based on the maximization of their mutual information, in combination with a deformation field parameterized by cubic B-splines. The coordinate mapping that relates the two images is found using an iterative optimization procedure.

  19. A Comparative Study of the Use of Collocation in Iranian High School Textbooks and American English File Books

    Directory of Open Access Journals (Sweden)

    Mohsen Shahrokhi

    2014-05-01

    Full Text Available The present study investigates the extent to which lexical and grammatical collocations are used in Iranian high school English textbooks, compared with the American English File books. To achieve the purposes of this study, the study was carried out in two phases. In the first phase, the content of the instructional textbooks, that is, the American English File series, Book 2, and Iranian high school English Book 3, was analyzed to find the frequencies and proportions of the collocations used in the textbooks. Since the instructional textbooks used in the two teaching environments (i.e., Iranian high schools and language institutes) were not equal with regard to the density of texts, from each textbook just the first 6000 words, content words as well as function words, were considered. Then, the frequencies of the collocations among the first 6000 words in high school English Book 3 and American English File Book 2 were determined. The results of the statistical analyses revealed that the two textbook series differ marginally in terms of frequency and type of collocations. A major difference existed between them when it came to lexical collocations in American English File Book 2.
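
    A crude way to gather the kind of frequency information used in such comparisons is to count co-occurring word pairs in the first portion of each text; deciding which pairs are genuine lexical or grammatical collocations still requires manual judgement, as in the study. The snippet below shows only that counting step, on an invented sample text.

      from collections import Counter
      import re

      def candidate_collocations(text, top=10):
          # Count adjacent word pairs as crude collocation candidates.
          words = re.findall(r"[a-z']+", text.lower())
          return Counter(zip(words, words[1:])).most_common(top)

      sample = ("make a decision take a break make a decision do homework "
                "take a break make progress take a break")
      for pair, freq in candidate_collocations(sample, top=5):
          print(" ".join(pair), freq)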

  20. Implementation of optimal Galerkin and Collocation approximations of PDEs with Random Coefficients

    KAUST Repository

    Beck, Joakim

    2011-12-22

    In this work we first focus on the Stochastic Galerkin approximation of the solution u of an elliptic stochastic PDE. We rely on sharp estimates for the decay of the coefficients of the spectral expansion of u on orthogonal polynomials to build a sequence of polynomial subspaces that features better convergence properties compared to standard polynomial subspaces such as Total Degree or Tensor Product. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new effective class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids.

  1. Metaphors in terminological collocations in English language and their equivalents in Serbian

    Directory of Open Access Journals (Sweden)

    Orčić Lidija S.

    2017-01-01

    Full Text Available The framework of this paper is the theory of conceptual metaphor where metaphor is the transfer of a more concrete source domain into a more abstract target domain. Metaphor is a fundamental human ability to speak about abstract concepts using specific terms where the meaning of a term is transferred to another, thus achieving semantic extensions. Although it was thought that in terminology polysemantic expressions are not desirable, in recent decades this traditional view has been abandoned. Metaphor is used not only as a linguistic decoration in language, but as a means of argumentation. It may be noted that the metaphor, as a universal phenomenon, is also common in business English discourse. The subject of our interest is to investigate collocations made up of those nouns and adjectives, which, according to the Oxford Business English Dictionary for Learners of English, are most frequently used in this field. The main objective of this work is to identify and analyze the source and target domains in metaphors in English collocations that contain these nouns and adjectives, and detect mechanisms applied in translating into Serbian. We categorised metaphors in collocations into four groups. The first group consists of metaphors in which the source domain is expressed with the living beings: inanimate entities are described as if they were alive. In these examples, the personification is used to explain abstract concepts, forces and processes in order to present them in a more understandable way. The second group consists of metaphors in which animals are the source domain and their behavior and characteristics serve as a starting point. In business discourse people and institutions are described with such metaphors. In the third group we included the metaphors based on objects that users are familiar with in everyday life. The fourth group consists of metaphors in which the source domain are natural phenomena. When translating a metaphor we

  2. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Webster, Clayton G [ORNL; Zhang, Guannan [ORNL; Gunzburger, Max D [ORNL

    2012-10-01

    Accurate predictive simulations of complex real world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus, optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.

  3. Collocational Networks Supporting Strategic Planning of Brand Communication: Analysis of Quarterly Reports of Telecommunication Companies

    Directory of Open Access Journals (Sweden)

    Pentti Järvi

    2004-10-01

    Full Text Available This study addresses analysing quarterly reports from a brand-theoretical viewpoint. The study addresses the issue through a method which combines a quantitative tool based on linguistic theory with qualitative decisions of the researchers. The research objects of this study are two quarterly reports from each of three telecommunications companies: Ericsson, Motorola and Nokia. The method used is a collocational network. The analyses show that there are differences in communication and message strategies among the investigated companies and also changes during a quite short period in each company.

  4. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    Science.gov (United States)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate sixth-order boundary value problems. The proposed method is based on the Legendre wavelet, in which the Legendre polynomial is used. The mechanism of the method is to use collocation points that convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate and close to the exact solution as well as to those of other methods. The proposed method is computationally more effective and leads to more accurate results compared to other methods from the literature.

  5. A nodal collocation method for the calculation of the lambda modes of the PL equations

    International Nuclear Information System (INIS)

    Capilla, M.; Talavera, C.F.; Ginestar, D.; Verdu, G.

    2005-01-01

    PL equations are classical approximations to the neutron transport equation admitting a diffusive form. Using this property, a nodal collocation method is developed for the PL approximations, which is based on the expansion of the flux in terms of orthonormal Legendre polynomials. This method approximates the differential lambda modes problem by an algebraic eigenvalue problem from which the fundamental and the subcritical modes of the system can be calculated. To test the performance of this method, two problems have been considered, a homogeneous slab, which admits an analytical solution, and a seven-region slab corresponding to a more realistic problem.

  6. A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation

    Science.gov (United States)

    Jones, Brandon A.; Anderson, Rodney L.

    2012-01-01

    Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.

  7. Prostate multimodality image registration based on B-splines and quadrature local energy.

    Science.gov (United States)

    Mitra, Jhimli; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C; Meriaudeau, Fabrice

    2012-05-01

    Needle biopsy of the prostate is guided by Transrectal Ultrasound (TRUS) imaging. The TRUS images do not provide proper spatial localization of malignant tissues due to the poor sensitivity of TRUS to visualize early malignancy. Magnetic Resonance Imaging (MRI) has been shown to be sensitive for the detection of early stage malignancy, and therefore, a novel 2D deformable registration method that overlays pre-biopsy MRI onto TRUS images has been proposed. The registration method involves B-spline deformations with Normalized Mutual Information (NMI) as the similarity measure computed from the texture images obtained from the amplitude responses of the directional quadrature filter pairs. Registration accuracy of the proposed method is evaluated by computing the Dice Similarity coefficient (DSC) and 95% Hausdorff Distance (HD) values for 20 patients' prostate mid-gland slices and Target Registration Error (TRE) for the 18 patients where homologous structures are visible in both the TRUS and transformed MR images. The proposed method and B-splines using NMI computed from intensities provide average TRE values of 2.64 ± 1.37 and 4.43 ± 2.77 mm respectively. Our method shows statistically significant improvement in TRE when compared with B-splines using NMI computed from intensities (Student's t test, p = 0.02). The proposed method shows 1.18 times improvement over thin-plate splines registration with average TRE of 3.11 ± 2.18 mm. The mean DSC and the mean 95% HD values obtained with the proposed method of B-spline with NMI computed from texture are 0.943 ± 0.039 and 4.75 ± 2.40 mm respectively. The texture energy computed from the quadrature filter pairs provides better registration accuracy for multimodal images than raw intensities. Low TRE values of the proposed registration method add to the feasibility of it being used during TRUS-guided biopsy.

  8. Collocated Interaction

    DEFF Research Database (Denmark)

    E. Fischer, Joel; Porcheron, Martin; Lucero, Andrés

    2016-01-01

    In the 25 years since Ellis, Gibbs, and Rein proposed the time-space taxonomy, research in the ‘same time, same place’ quadrant has diversified, perhaps even fragmented. This one-day workshop will bring together researchers with diverse, yet convergent interests in tabletop, surface, mobile... interactions. Yet, new challenges abound as people wear and carry more devices than ever, creating fragmented device ecologies at work, and changing the ways we socialise with each other. In this workshop we seek to start a dialogue to look back as well as forward, review best practices, discuss and design...

  9. A Highly Accurate Regular Domain Collocation Method for Solving Potential Problems in the Irregular Doubly Connected Domains

    Directory of Open Access Journals (Sweden)

    Zhao-Qing Wang

    2014-01-01

    Full Text Available Embedding the irregular doubly connected domain into an annular regular region, the unknown functions can be approximated by barycentric Lagrange interpolation in the regular region. A highly accurate regular domain collocation method is proposed for solving potential problems on irregular doubly connected domains in a polar coordinate system. The formulation is constructed by applying the barycentric Lagrange interpolation collocation method on the regular domain in the polar coordinate system. The boundary conditions are discretized by barycentric Lagrange interpolation within the regular domain and imposed by an additional method. The resulting overconstrained equations can be solved by the least squares method. The function values at points in the irregular doubly connected domain can then be calculated by barycentric Lagrange interpolation within the regular domain. Some numerical examples demonstrate the effectiveness and accuracy of the presented method.
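
    As a hedged illustration of the interpolation underlying this method, the sketch below evaluates a barycentric Lagrange interpolant on Chebyshev-type nodes; the node layout and test function are illustrative and not taken from the paper.

    ```python
    import numpy as np

    def barycentric_weights(x):
        """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
        n = len(x)
        w = np.ones(n)
        for j in range(n):
            for k in range(n):
                if k != j:
                    w[j] /= (x[j] - x[k])
        return w

    def barycentric_interpolate(x, f, w, t):
        """Evaluate the interpolant of (x, f) at points t (second barycentric form)."""
        t = np.atleast_1d(t)
        out = np.empty_like(t, dtype=float)
        for i, ti in enumerate(t):
            diff = ti - x
            exact = np.isclose(diff, 0.0)
            if exact.any():
                out[i] = f[np.argmax(exact)]
            else:
                c = w / diff
                out[i] = np.dot(c, f) / c.sum()
        return out

    # Illustrative use: interpolate cos(3x) on Chebyshev points of the second kind
    n = 16
    x = np.cos(np.pi * np.arange(n + 1) / n)
    f = np.cos(3 * x)
    w = barycentric_weights(x)
    t = np.linspace(-1, 1, 5)
    print(np.max(np.abs(barycentric_interpolate(x, f, w, t) - np.cos(3 * t))))
    ```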

  10. An h-adaptive stochastic collocation method for stochastic EMC/EMI analysis

    KAUST Repository

    Yücel, Abdulkadir C.

    2010-07-01

    The analysis of electromagnetic compatibility and interference (EMC/EMI) phenomena is often fraught by randomness in a system's excitation (e.g., the amplitude, phase, and location of internal noise sources) or configuration (e.g., the routing of cables, the placement of electronic systems, component specifications, etc.). To bound the probability of system malfunction, fast and accurate techniques to quantify the uncertainty in system observables (e.g., voltages across mission-critical circuit elements) are called for. Recently proposed stochastic frameworks [1-2] combine deterministic electromagnetic (EM) simulators with stochastic collocation (SC) methods that approximate system observables using generalized polynomial chaos expansion (gPC) [3] (viz. orthogonal polynomials spanning the entire random domain) to estimate their statistical moments and probability density functions (pdfs). When constructing gPC expansions, the EM simulator is used solely to evaluate system observables at collocation points prescribed by the SC-gPC scheme. The frameworks in [1-2] therefore are non-intrusive and straightforward to implement. That said, they become inefficient and inaccurate for system observables that vary rapidly or are discontinuous in the random variables (as their representations may require very high-order polynomials). © 2010 IEEE.
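
    The sketch below illustrates the non-intrusive collocation idea in a single random dimension: a placeholder "simulator" is evaluated only at Gauss-Hermite collocation points, from which the mean and variance of an observable are estimated. It is a schematic of stochastic collocation in general, not of the specific frameworks cited in [1-2].

    ```python
    import numpy as np

    def observable(xi):
        """Placeholder for an expensive EM simulation; xi is a standard normal parameter."""
        return np.sin(1.5 * xi) + 0.1 * xi**2

    # Gauss-Hermite (probabilists') rule for a standard normal random variable
    n = 9
    nodes, weights = np.polynomial.hermite_e.hermegauss(n)
    weights = weights / np.sqrt(2.0 * np.pi)   # normalize to a probability measure

    values = observable(nodes)      # simulator calls at the collocation points only
    mean = np.dot(weights, values)
    var = np.dot(weights, values**2) - mean**2
    print(mean, var)
    ```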

  11. Comparison of multiphase mixing simulations performed on a staggered and a collocated grid

    International Nuclear Information System (INIS)

    Leskovar, M.

    2000-01-01

    During a severe reactor accident, following core meltdown, a steam explosion may occur when the molten fuel comes into contact with the coolant water. The premixing phase of a steam explosion covers the interaction of the melt jet or droplets with the water prior to any steam explosion occurrence. To gain better insight into the hydrodynamic processes during the premixing phase, cold isothermal premixing experiments are performed in addition to hot premixing experiments, in which water evaporation is significant. To analyze the cold premixing experiments the computer code ESE has been developed. The specialty of ESE is that it uses a combined single-multiphase flow model. Because of problems with the convergence of the momentum equation written in conservative form on a staggered grid, the development of a collocated grid version of ESE was planned. But since we obtained the commercial code CFX-4.3, which uses a collocated variable arrangement, we decided first to test the capabilities of CFX-4.3. With ESE and CFX-4.3 the cold premixing experiment Q08 has been simulated. In the paper, the simulation results obtained with both codes are presented and discussed in comparison with experimental data. (author)

  12. On multilevel RBF collocation to solve nonlinear PDEs arising from endogenous stochastic volatility models

    Science.gov (United States)

    Bastani, Ali Foroush; Dastgerdi, Maryam Vahid; Mighani, Abolfazl

    2018-06-01

    The main aim of this paper is the analytical and numerical study of a time-dependent second-order nonlinear partial differential equation (PDE) arising from the endogenous stochastic volatility model, introduced in [Bensoussan, A., Crouhy, M. and Galai, D., Stochastic equity volatility related to the leverage effect (I): equity volatility behavior. Applied Mathematical Finance, 1, 63-85, 1994]. As the first step, we derive a consistent set of initial and boundary conditions to complement the PDE, when the firm is financed by equity and debt. In the sequel, we propose a Newton-based iteration scheme for nonlinear parabolic PDEs which is an extension of a method for solving elliptic partial differential equations introduced in [Fasshauer, G. E., Newton iteration with multiquadrics for the solution of nonlinear PDEs. Computers and Mathematics with Applications, 43, 423-438, 2002]. The scheme is based on multilevel collocation using radial basis functions (RBFs) to solve the resulting locally linearized elliptic PDEs obtained at each level of the Newton iteration. We show the effectiveness of the resulting framework by solving a prototypical example from the field and compare the results with those obtained from three different techniques: (1) a finite difference discretization; (2) a naive RBF collocation and (3) a benchmark approximation, introduced for the first time in this paper. The numerical results confirm the robustness, higher convergence rate and good stability properties of the proposed scheme compared to other alternatives. We also comment on some possible research directions in this field.
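
    For orientation, the sketch below shows the single-level, linear building block that such schemes rest on: unsymmetric (Kansa) RBF collocation with multiquadrics for a 1D two-point boundary value problem. It is not the multilevel Newton iteration of the paper; the shape parameter, node count, and test problem are illustrative.

    ```python
    import numpy as np

    # Kansa collocation for u''(x) = f(x), u(0) = u(1) = 0, with multiquadric RBFs.
    def mq(r, c):
        return np.sqrt(r**2 + c**2)

    def mq_xx(r, c):
        return c**2 / (r**2 + c**2) ** 1.5     # d^2/dx^2 of the multiquadric in 1D

    n, c = 25, 0.3
    x = np.linspace(0.0, 1.0, n)               # collocation points = RBF centres
    r = x[:, None] - x[None, :]

    A = mq_xx(r, c)                            # interior rows: L phi_j(x_i)
    A[0, :] = mq(r[0, :], c)                   # boundary row at x = 0
    A[-1, :] = mq(r[-1, :], c)                 # boundary row at x = 1

    rhs = -np.pi**2 * np.sin(np.pi * x)        # f(x) for the exact solution sin(pi x)
    rhs[0] = rhs[-1] = 0.0

    coeff = np.linalg.solve(A, rhs)
    u = mq(r, c) @ coeff                       # reconstruct u at the nodes
    print(np.max(np.abs(u - np.sin(np.pi * x))))
    ```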

  13. Entropy Stable Staggered Grid Spectral Collocation for the Burgers' and Compressible Navier-Stokes Equations

    Science.gov (United States)

    Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.

    2015-01-01

    Staggered grid, entropy stable discontinuous spectral collocation operators of any order are developed for Burgers' and the compressible Navier-Stokes equations on unstructured hexahedral elements. This generalization of previous entropy stable spectral collocation work [1, 2], extends the applicable set of points from tensor product, Legendre-Gauss-Lobatto (LGL) to a combination of tensor product Legendre-Gauss (LG) and LGL points. The new semi-discrete operators discretely conserve mass, momentum, energy and satisfy a mathematical entropy inequality for both Burgers' and the compressible Navier-Stokes equations in three spatial dimensions. They are valid for smooth as well as discontinuous flows. The staggered LG and conventional LGL point formulations are compared on several challenging test problems. The staggered LG operators are significantly more accurate, although more costly to implement. The LG and LGL operators exhibit similar robustness, as is demonstrated using test problems known to be problematic for operators that lack a nonlinearly stability proof for the compressible Navier-Stokes equations (e.g., discontinuous Galerkin, spectral difference, or flux reconstruction operators).

  14. Corpus-Based Websites to Promote Learner Autonomy in Correcting Writing Collocation Errors

    Directory of Open Access Journals (Sweden)

    Pham Thuy Dung

    2016-12-01

    Full Text Available The recent yet powerful emergence of e-learning and of using online resources in learning EFL (English as a Foreign Language) has helped promote learner autonomy in language acquisition, including self-correction of mistakes. This pilot study, despite being conducted on a modest sample of 25 second-year students majoring in Business English at Hanoi Foreign Trade University, is an initial attempt to investigate the feasibility of using corpus-based websites to promote learner autonomy in correcting collocation errors in EFL writing. The data is collected using a pre-questionnaire and a post-interview aiming to find out the participants' change in belief and attitude toward learner autonomy in correcting collocation errors in writing, the extent of their success in using the corpus-based websites to self-correct the errors, and the change in their confidence in self-correcting the errors using the websites. The findings show that a significant majority of students shifted their belief and attitude toward a more autonomous mode of learning, enjoyed fair success in using the websites to self-correct the errors, and became more confident. The study also yields the implication that face-to-face training in how to use these online tools is vital to the later confidence and success of the learners.

  15. Vs30 and spectral response from collocated shallow, active- and passive-source Vs data at 27 sites in Puerto Rico

    Science.gov (United States)

    Odum, Jack K.; Stephenson, William J.; Williams, Robert A.; von Hillebrandt-Andrade, Christa

    2013-01-01

    Shear‐wave velocity (VS) and time‐averaged shear‐wave velocity to 30 m depth (VS30) are the key parameters used in seismic site response modeling and earthquake engineering design. Where VS data are limited, available data are often used to develop and refine map‐based proxy models of VS30 for predicting ground‐motion intensities. In this paper, we present shallow VS data from 27 sites in Puerto Rico. These data were acquired using a multimethod acquisition approach consisting of noninvasive, collocated, active‐source body‐wave (refraction/reflection), active‐source surface wave at nine sites, and passive‐source surface‐wave refraction microtremor (ReMi) techniques. VS‐versus‐depth models are constructed and used to calculate spectral response plots for each site. Factors affecting method reliability are analyzed with respect to site‐specific differences in bedrock VS and spectral response. At many but not all sites, body‐ and surface‐wave methods generally determine similar depths to bedrock, and it is the difference in bedrock VS that influences site amplification. The predicted resonant frequencies for the majority of the sites are observed to be within a relatively narrow bandwidth of 1–3.5 Hz. For a first‐order comparison of peak frequency position, predictive spectral response plots from eight sites are plotted along with seismograph instrument spectra derived from the time series of the 16 May 2010 Puerto Rico earthquake. We show how a multimethod acquisition approach using collocated arrays complements and corroborates VS results, thus adding confidence that reliable site characterization information has been obtained.
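
    The key site parameter above follows directly from a layered velocity profile: Vs30 is the time-averaged shear-wave velocity over the top 30 m, Vs30 = 30 / sum(h_i / Vs_i). The sketch below computes it for a hypothetical three-layer model (not one of the Puerto Rico sites from the study).

    ```python
    def vs30(thicknesses_m, velocities_mps):
        """Time-averaged shear-wave velocity to 30 m depth: 30 / sum(h_i / Vs_i)."""
        depth, travel_time = 0.0, 0.0
        for h, v in zip(thicknesses_m, velocities_mps):
            use = min(h, 30.0 - depth)     # clip the layer at 30 m depth
            travel_time += use / v
            depth += use
            if depth >= 30.0:
                break
        if depth < 30.0:
            raise ValueError("velocity profile shallower than 30 m")
        return 30.0 / travel_time

    # Hypothetical profile: 5 m at 180 m/s, 12 m at 350 m/s, bedrock at 760 m/s
    print(vs30([5.0, 12.0, 20.0], [180.0, 350.0, 760.0]))
    ```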

  16. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    Science.gov (United States)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As stated in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations that capture surface effects through diffuse-interface models [27]. Numerical simulations can be used to evaluate this practical model. The present paper investigates the solution of the tumor growth model with meshless techniques. Meshless methods based on the collocation technique are applied, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantage of these choices stems from the natural behavior of meshless approaches: a meshless method can be applied easily to solve partial differential equations in high dimensions using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor growth model. For the time discretization, two procedures are used. One of them is a semi-implicit finite difference method based on the Crank-Nicolson scheme, and the other is based on explicit Runge-Kutta time integration. The first case gives a linear system of algebraic equations which is solved at each time step. The second case is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.

  17. A Modified Generalized Laguerre-Gauss Collocation Method for Fractional Neutral Functional-Differential Equations on the Half-Line

    Directory of Open Access Journals (Sweden)

    Ali H. Bhrawy

    2014-01-01

    Full Text Available The modified generalized Laguerre-Gauss collocation (MGLC) method is applied to obtain an approximate solution of fractional neutral functional-differential equations with proportional delays on the half-line. The proposed technique is based on modified generalized Laguerre polynomials and Gauss quadrature integration of such polynomials. The main advantage of the present method is that it reduces the solution of fractional neutral functional-differential equations to a system of algebraic equations. Reasonable numerical results are achieved by choosing only a few modified generalized Laguerre-Gauss collocation points. Numerical results demonstrate the accuracy, efficiency, and versatility of the proposed method on the half-line.
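
    For context, the sketch below generates plain Laguerre-Gauss nodes and weights on the half-line, the basic ingredient that such collocation schemes build on; the paper itself uses modified generalized Laguerre polynomials, which are not reproduced here.

    ```python
    import numpy as np

    # Laguerre-Gauss rule: integral_0^inf f(x) exp(-x) dx ~ sum_i w_i f(x_i)
    n = 8
    nodes, weights = np.polynomial.laguerre.laggauss(n)

    # Sanity check: integral_0^inf x exp(-x) dx = 1
    print(np.dot(weights, nodes))
    ```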

  18. Quintic hyperbolic nonpolynomial spline and finite difference method for nonlinear second order differential equations and its application

    Directory of Open Access Journals (Sweden)

    Navnit Jha

    2014-04-01

    Full Text Available An efficient numerical method based on a quintic nonpolynomial spline basis and high-order finite difference approximations is presented. The scheme employs a space containing hyperbolic and polynomial functions as the spline basis. With the help of the spline functions we derive consistency conditions and high-order discretizations of the differential equation with a significant first-order derivative. The error analysis of the new method is discussed briefly. The efficiency of the new method is analyzed using physical problems. The order and accuracy of the proposed method are assessed in terms of maximum errors and root mean square errors.

  19. Synchronous and Asynchronous Multimedia and Iranian EFL Learners’ Learning of Collocations

    Directory of Open Access Journals (Sweden)

    Goudarz Alibakhshi

    2016-07-01

    Full Text Available The use of effective multimedia instruction such as mobile phones, computers, and the Internet in language learning has proved useful over the last few decades. The impact of multimedia and synchronous approaches to computer-assisted language learning (CALL) on English as a Foreign Language (EFL) learners' learning of language skills and components has been studied to some extent. However, the impact of computer-mediated instruction through multimedia (text and graphics) on learning collocations requires further investigation. This study aimed at investigating whether synchronous and asynchronous multimedia components (text, and text with added graphics) had any effects on EFL learners' learning of collocations. In doing so, 150 male EFL learners at pre-intermediate proficiency level were selected through convenience sampling and divided into six groups. The results of the study showed that computer-mediated instruction was more effective than non-computerized instruction, and that synchronous computerized instruction was more effective than asynchronous computerized instruction. The results also showed that presentation through text with added graphics was more effective than presentation through simple text. The results are discussed and some pedagogical implications are presented. Persian abstract (translated): The use of effective multimedia instruction such as mobile phones, computers, and the Internet in language teaching has become common over the last few decades. The impact of multimedia and synchronous methods of computer-assisted language learning (CALL) on EFL learners' learning of English language components and skills has been studied to some extent. However, the impact of computer-based instruction through...

  20. An Adaptive B-Spline Method for Low-order Image Reconstruction Problems - Final Report - 09/24/1997 - 09/24/2000

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael

    2000-04-11

    A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data arising e.g. from noise can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-splines are used to approximate the object. If a 'proper' collection of B-splines is chosen, such that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, in which the intuition is that a relatively dense set of fine-scale basis elements should cluster near regions of high curvature, while a sparse collection of basis vectors is required to adequately represent the object over spatially smooth areas. The pruning part is a greedy...
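
    As a hedged illustration of the low-order idea, the sketch below fits a small cubic B-spline expansion to noisy 1D data with Tikhonov regularization; it does not implement the adaptive refining and pruning algorithm of the report, and the knot layout, noise level, and "true" object are invented for the example.

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    # Tikhonov-regularized least squares on a low-order cubic B-spline model:
    # minimize ||A c - y||^2 + lam * ||c||^2
    k = 3
    knots = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, 11), [1.0] * k))
    n_basis = len(knots) - k - 1

    x = np.linspace(0.01, 0.99, 200)
    A = np.zeros((x.size, n_basis))
    for j in range(n_basis):
        A[:, j] = BSpline.basis_element(knots[j:j + k + 2], extrapolate=False)(x)
    A = np.nan_to_num(A)                    # basis elements are NaN outside their support

    rng = np.random.default_rng(1)
    truth = np.exp(-40.0 * (x - 0.4) ** 2)  # hypothetical object
    y = truth + 0.05 * rng.standard_normal(x.size)

    lam = 1e-3
    c = np.linalg.solve(A.T @ A + lam * np.eye(n_basis), A.T @ y)
    print(np.sqrt(np.mean((A @ c - truth) ** 2)))
    ```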

  1. Gauss-Galerkin quadrature rules for quadratic and cubic spline spaces and their application to isogeometric analysis

    KAUST Repository

    Barton, Michael; Calo, Victor M.

    2016-01-01

    We introduce Gaussian quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. By definition, these spaces are of even degrees. The optimal quadrature rules we recently derived

  2. A scalable block-preconditioning strategy for divergence-conforming B-spline discretizations of the Stokes problem

    KAUST Repository

    Cortes, Adriano Mauricio; Dalcin, Lisandro; Sarmiento, Adel; Collier, N.; Calo, Victor M.

    2016-01-01

    The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity-pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence...

  3. Selected Aspects of Wear Affecting Keyed Joints and Spline Connections During Operation of Aircrafts

    Directory of Open Access Journals (Sweden)

    Gębura Andrzej

    2014-12-01

    Full Text Available The paper deals with selected deficiencies of spline connections, such as angular or parallel misalignment (eccentricity) and excessive play. It is emphasized how important these deficiencies are for the smooth operation of entire driving units. The aim of the study is to provide a reference list of such deficiencies, together with visual symptoms of wear, specifications of mechanical measurements for mating surfaces, a mathematical description of waveforms for the dynamic variability of motion in such connections, and visualizations of the connection behaviour acquired with the use of the FAM-C and FDM-A. Attention is paid to hazards to flight safety when excessively worn spline connections are operated for long periods of time.

  4. Numerical simulation of reaction-diffusion systems by modified cubic B-spline differential quadrature method

    International Nuclear Information System (INIS)

    Mittal, R.C.; Rohila, Rajni

    2016-01-01

    In this paper, we have applied the modified cubic B-spline based differential quadrature method to obtain numerical solutions of one-dimensional reaction-diffusion systems such as the linear reaction-diffusion system, the Brusselator system, the Isothermal system and the Gray-Scott system. The models represented by these systems have important applications in different areas of science and engineering. The most striking and interesting part of the work is the solution patterns obtained for the Gray-Scott model, reminiscent of patterns often seen in nature. We have used cubic B-spline functions for space discretization to get a system of ordinary differential equations. This system of ODEs is solved by the highly stable SSP-RK43 method to get the solution at the knots. The computed results are very accurate and shown to be better than those available in the literature. The method is easy and simple to apply and gives solutions with less computational effort.
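
    The method-of-lines pairing described above (a spline-based space discretization advanced by a strong-stability-preserving Runge-Kutta integrator) can be sketched as follows. The example uses the classic three-stage SSP-RK scheme and a plain finite-difference Laplacian standing in for the B-spline differential-quadrature weights, so it only illustrates the time-stepping structure, not the authors' SSP-RK43 scheme or their spatial operator.

    ```python
    import numpy as np

    def ssp_rk3_step(u, rhs, dt):
        """One strong-stability-preserving RK3 (Shu-Osher) step for du/dt = rhs(u)."""
        u1 = u + dt * rhs(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
        return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

    # Toy reaction-diffusion problem on a periodic grid (illustrative only)
    n, nu = 128, 1e-3
    dx = 1.0 / n
    x = np.arange(n) * dx
    u = np.exp(-100.0 * (x - 0.5) ** 2)

    def rhs(u):
        lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
        return nu * lap + u * (1.0 - u)     # diffusion + logistic reaction

    dt = 0.2 * dx**2 / nu
    for _ in range(200):
        u = ssp_rk3_step(u, rhs, dt)
    print(u.min(), u.max())
    ```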

  5. [Non-rigid medical image registration based on mutual information and thin-plate spline].

    Science.gov (United States)

    Cao, Guo-gang; Luo, Li-min

    2009-01-01

    To get precise and complete details, comparison of different images is needed in medical diagnosis and computer-assisted treatment. Image registration is the basis of such comparison, but regular rigid registration does not satisfy clinical requirements. A non-rigid medical image registration method based on mutual information and thin-plate splines is presented. First, the two images are registered globally based on mutual information; second, the reference image and the globally registered image are divided into blocks and the blocks are registered; then the thin-plate spline transformation is obtained from the shifts of the block centers; finally, the transformation is applied to the globally registered image. The results show that the method is more precise than global rigid registration based on mutual information; by obtaining the control points of the thin-plate transformation automatically, it reduces the complexity of getting control points and better satisfies clinical requirements.
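
    The transformation at the heart of this method can be sketched directly from its definition: a 2D thin-plate spline with kernel U(r) = r^2 log r plus an affine part, fitted to control-point correspondences. The control points below are invented for illustration; in the paper they come from the shifts of block centers after block-wise registration.

    ```python
    import numpy as np

    def tps_kernel(r):
        """Thin-plate spline radial kernel U(r) = r^2 log(r), with U(0) = 0."""
        out = np.zeros_like(r)
        nz = r > 0
        out[nz] = r[nz] ** 2 * np.log(r[nz])
        return out

    def fit_tps(src, dst):
        """Fit TPS coefficients mapping 2D points src -> dst (one spline per coordinate)."""
        n = src.shape[0]
        K = tps_kernel(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2))
        P = np.hstack([np.ones((n, 1)), src])        # affine part [1, x, y]
        A = np.zeros((n + 3, n + 3))
        A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
        b = np.zeros((n + 3, 2))
        b[:n] = dst
        return np.linalg.solve(A, b)

    def eval_tps(coeffs, src, pts):
        r = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
        basis = np.hstack([tps_kernel(r), np.ones((pts.shape[0], 1)), pts])
        return basis @ coeffs

    # Hypothetical control points (block centres) and their matched displacements
    src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
    dst = src + np.array([[0.02, 0.0], [0.0, -0.03], [0.01, 0.01], [0.0, 0.0], [-0.02, 0.04]])
    coeffs = fit_tps(src, dst)
    print(eval_tps(coeffs, src, src) - dst)          # ~0 at the control points
    ```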

  6. Using Spline Regression in Semi-Parametric Stochastic Frontier Analysis: An Application to Polish Dairy Farms

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    ...of specifying an unsuitable functional form and thus model misspecification and biased parameter estimates. Given these problems of the DEA and the SFA, Fan, Li and Weersink (1996) proposed a semi-parametric stochastic frontier model that estimates the production function (frontier) by non..., Kumbhakar et al. (2007), and Henningsen and Kumbhakar (2009). The aim of this paper and its main contribution to the existing literature is the estimation of semi-parametric stochastic frontier models using a different non-parametric estimation technique: spline regression (Ma et al. 2011). We apply... efficiency of Polish dairy farms contributes to the insight into this dynamic process. Furthermore, we compare and evaluate the results of this spline-based semi-parametric stochastic frontier model with the results of other semi-parametric stochastic frontier models and of traditional parametric stochastic...

  7. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

    Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR) filter structure, and the method for correcting the interpolation compensation filter coefficients is derived. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective in attenuating the spurious spurs and improving the dynamic performance of the system.
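
    A hedged sketch of the underlying idea follows: one interleaved channel, sampled with a timing skew, is resampled back onto the ideal instants with a cubic spline. This uses scipy's CubicSpline directly rather than the FIR-structured compensation filter of the paper, and the sample rate, tone frequency, and skew are illustrative values only.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    fs, f0, dt_err = 4.0e9, 97.0e6, 3.0e-12     # sample rate, test tone, channel-B skew
    n = 2048
    t_ideal = np.arange(n) / fs

    def tone(t):
        return np.sin(2 * np.pi * f0 * t)

    ch_a = tone(t_ideal[0::2])                  # channel A samples on time
    ch_b = tone(t_ideal[1::2] + dt_err)         # channel B samples are skewed by dt_err

    # Correct channel B by spline-resampling it onto the ideal sampling instants
    ch_b_corr = CubicSpline(t_ideal[1::2] + dt_err, ch_b)(t_ideal[1::2])

    interleaved = np.empty(n)
    interleaved[0::2], interleaved[1::2] = ch_a, ch_b_corr
    print(np.max(np.abs(interleaved - tone(t_ideal))))   # residual error after correction
    ```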

  8. Effects of Tightening Torque on Dynamic Characteristics of Low Pressure Rotors Connected by a Spline Coupling

    Institute of Scientific and Technical Information of China (English)

    Chen Xi; Liao Mingfu; Li Quankun

    2017-01-01

    A rotor dynamic model is built up for investigating the effects of tightening torque on dynamic characteristics of low pressure rotors connected by a spline coupling. The experimental rotor system is established using a fluted disk and a speed sensor which is applied in an actual aero engine for speed measurement. Through simulating calculation and experiments, the effects of tightening torque on the dynamic characteristics of the rotor system connected by a spline coupling, including critical speeds, vibration modes and unbalance responses, are analyzed. The results show that when increasing the tightening torque, the first two critical speeds and the amplitudes of unbalance response gradually increase in varying degrees while the vibration modes are essentially unchanged. In addition, changing axial and circumferential positions of the mass unbalance can lead to various amplitudes of unbalance response and even the rates of change.

  9. Investigation of confined hydrogen atom in spherical cavity, using B-splines basis set

    Directory of Open Access Journals (Sweden)

    M Barezi

    2011-03-01

    Full Text Available Studying confined quantum systems (CQS) is very important in nanotechnology. One of the basic CQS is a hydrogen atom confined in a spherical cavity. In this article, eigenenergies and eigenfunctions of the hydrogen atom in a spherical cavity are calculated using the linear variational method. B-splines are used as basis functions, with which trial wave functions with appropriate boundary conditions can easily be constructed. The main characteristics of B-splines are their high localization and flexibility. Besides, these functions are numerically stable and can handle a high volume of calculation with good accuracy. The energy levels as a function of cavity radius are analyzed. To check the validity and efficiency of the proposed method, an extensive convergence test of the eigenenergies for different cavity sizes has been carried out.

  10. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    Science.gov (United States)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  11. Cubic spline numerical solution of an ablation problem with convective backface cooling

    Science.gov (United States)

    Lin, S.; Wang, P.; Kahawita, R.

    1984-08-01

    An implicit numerical technique using cubic splines is presented for solving an ablation problem on a thin wall with convective cooling. A non-uniform computational mesh with 6 grid points has been used for the numerical integration. The method has been found to be computationally efficient, providing, for the case under consideration, an overall error of about 1 percent. The results obtained indicate that the convective cooling is an important factor in reducing the ablation thickness.

  12. A splitting algorithm for the wavelet transform of cubic splines on a nonuniform grid

    Science.gov (United States)

    Sulaimanov, Z. M.; Shumilov, B. M.

    2017-10-01

    For cubic splines with nonuniform nodes, splitting with respect to the even and odd nodes is used to obtain a wavelet expansion algorithm in the form of the solution to a tridiagonal system of linear algebraic equations for the coefficients. Hand computations are used to investigate the application of this algorithm to numerical differentiation. The results are illustrated by solving a prediction problem.
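
    Since the expansion reduces to a tridiagonal linear system, the classical Thomas algorithm applies. The sketch below is a generic O(n) tridiagonal solver on an invented, diagonally dominant example, not the spline-wavelet system itself.

    ```python
    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system: sub-diagonal a, diagonal b, super-diagonal c,
        right-hand side d (a[0] and c[-1] are unused)."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Illustrative diagonally dominant system
    n = 6
    a = np.full(n, 1.0); b = np.full(n, 4.0); c = np.full(n, 1.0)
    d = np.arange(1.0, n + 1)
    x = thomas(a, b, c, d)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print(np.allclose(A @ x, d))
    ```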

  13. Discrete quintic spline for boundary value problem in plate deflation theory

    Science.gov (United States)

    Wong, Patricia J. Y.

    2017-07-01

    We propose a numerical scheme for a fourth-order boundary value problem arising from plate deflation theory. The scheme involves a discrete quintic spline, and it is of order 4 if a parameter takes a specific value, else it is of order 2. We also present a well known numerical example to illustrate the efficiency of our method as well as to compare with other numerical methods proposed in the literature.

  14. Nonlinear Multivariate Spline-Based Control Allocation for High-Performance Aircraft

    OpenAIRE

    Tol, H.J.; De Visser, C.C.; Van Kampen, E.; Chu, Q.P.

    2014-01-01

    High performance flight control systems based on the nonlinear dynamic inversion (NDI) principle require highly accurate models of aircraft aerodynamics. In general, the accuracy of the internal model determines to what degree the system nonlinearities can be canceled; the more accurate the model, the better the cancellation, and with that, the higher the performance of the controller. In this paper a new control system is presented that combines NDI with multivariate simplex spline based con...

  15. Mandibular transformations in prepubertal patients following treatment for craniofacial microsomia: thin-plate spline analysis.

    Science.gov (United States)

    Hay, A D; Singh, G D

    2000-01-01

    To analyze correction of mandibular deformity using an inverted L osteotomy and autogenous bone graft in patients exhibiting unilateral craniofacial microsomia (CFM), thin-plate spline analysis was undertaken. Preoperative, early postoperative, and approximately 3.5-year postoperative posteroanterior cephalographs of 15 children (age 10+/-3 years) with CFM were scanned, and eight homologous mandibular landmarks digitized. Average mandibular geometries, scaled to an equivalent size, were generated using Procrustes superimposition. Results indicated that the mean pre- and postoperative mandibular configurations differed statistically (P < ...). Thin-plate spline analysis indicated that the total spline (Cartesian transformation grid) of the pre- to early postoperative configuration showed mandibular body elongation on the treated side and inferior symphyseal displacement. The affine component of the total spline revealed a clockwise rotation of the preoperative configuration, whereas the nonaffine component was responsible for ramus, body, and symphyseal displacements. The transformation grid for the early and late postoperative comparison showed bilateral ramus elongation. A superior symphyseal displacement contrasted with its earlier inferior displacement; the affine component had translocated the symphyseal landmarks towards the midline. The nonaffine component demonstrated bilateral ramus lengthening, and partial warps suggested that these elongations were slightly greater on the nontreated side. The affine component of the pre- and late postoperative comparison also demonstrated a clockwise rotation. The nonaffine component produced the bilateral ramus elongations, with the nontreated-side ramus lengthening slightly more than the treated side. It is concluded that an inverted L osteotomy improves mandibular morphology significantly in CFM patients and permits continued bilateral ramus growth. Copyright 2000 Wiley-Liss, Inc.

  16. Thin-plate spline (TPS) graphical analysis of the mandible on cephalometric radiographs.

    Science.gov (United States)

    Chang, H P; Liu, P H; Chang, H F; Chang, C H

    2002-03-01

    We describe two cases of Class III malocclusion with and without orthodontic treatment. A thin-plate spline (TPS) analysis of lateral cephalometric radiographs was used to visualize transformations of the mandible. The actual sites of mandibular skeletal change are not detectable with conventional cephalometric analysis. These case analyses indicate that specific patterns of mandibular transformation are associated with Class III malocclusion with or without orthopaedic therapy, and visualization of these deformations is feasible using TPS graphical analysis.

  17. Explicit Gaussian quadrature rules for C^1 cubic splines with symmetrically stretched knot sequence

    KAUST Repository

    Ait-Haddou, Rachid

    2015-06-19

    We provide explicit expressions for quadrature rules on the space of C^1 cubic splines with non-uniform, symmetrically stretched knot sequences. The quadrature nodes and weights are derived via an explicit recursion that avoids an intervention of any numerical solver, and the rule is optimal, that is, it requires a minimal number of nodes. Numerical experiments validating the theoretical results and the error estimates of the quadrature rules are also presented.

  18. Free vibration of symmetric angle ply truncated conical shells under different boundary conditions using spline method

    Energy Technology Data Exchange (ETDEWEB)

    Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y. [Universiti Teknologi Malaysia, Johor Bahru (Malaysia); Pullepu, Babuji [S R M University, Chennai (India)

    2015-05-15

    Free vibration of symmetric angle-ply laminated truncated conical shells is analyzed to determine the effects of the frequency parameter and angular frequencies under different boundary conditions, ply angles, different material properties and other parameters. The governing equations of motion for the truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines, resulting in a generalized eigenvalue problem. Parametric studies have been made and discussed.

  19. Free vibration of symmetric angle ply truncated conical shells under different boundary conditions using spline method

    International Nuclear Information System (INIS)

    Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y.; Pullepu, Babuji

    2015-01-01

    Free vibration of symmetric angle-ply laminated truncated conical shells is analyzed to determine the effects of the frequency parameter and angular frequencies under different boundary conditions, ply angles, different material properties and other parameters. The governing equations of motion for the truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines, resulting in a generalized eigenvalue problem. Parametric studies have been made and discussed.

  20. Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.

    Science.gov (United States)

    Oliveira, Francisco P M; Tavares, João Manuel R S

    2013-03-01

    This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) that was acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, an outstanding accuracy was achieved with the cubic B-splines; this accuracy was significantly better (p < ...). When used to align real image sequences with unknown transformations involved, the alignment based on cubic B-splines also achieved results superior to those of our previous methodology (p < ...). The effect of the temporal alignment on the dynamic center of pressure (COP) displacement was also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences. These findings also show that the temporal alignment can increase the consistency of the COP displacement on related acquired plantar pressure image sequences.

  1. Optimization and parallelization of B-spline based orbital evaluations in QMC on multi/many-core shared memory processors

    OpenAIRE

    Mathuriya, Amrita; Luo, Ye; Benali, Anouar; Shulenburger, Luke; Kim, Jeongnim

    2016-01-01

    B-spline based orbital representations are widely used in Quantum Monte Carlo (QMC) simulations of solids, historically taking as much as 50% of the total run time. Random accesses to a large four-dimensional array make it challenging to efficiently utilize caches and wide vector units of modern CPUs. We present node-level optimizations of B-spline evaluations on multi/many-core shared memory processors. To increase SIMD efficiency and bandwidth utilization, we first apply data layout transfo...

  2. Vector splines on the sphere with application to the estimation of vorticity and divergence from discrete, noisy data

    Science.gov (United States)

    Wahba, G.

    1982-01-01

    Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.

  3. A consistent method for finite volume discretization of body forces on collocated grids applied to flow through an actuator disk

    DEFF Research Database (Denmark)

    Troldborg, Niels; Sørensen, Niels N.; Réthoré, Pierre-Elouan

    2015-01-01

    This paper describes a consistent algorithm for eliminating the numerical wiggles appearing when solving the finite volume discretized Navier-Stokes equations with discrete body forces in a collocated grid arrangement. The proposed method is a modification of the Rhie-Chow algorithm where the for...

  4. Improvement of neutron kinetics module in TRAC-BF1code: one-dimensional nodal collocation method

    Energy Technology Data Exchange (ETDEWEB)

    Jambrina, Ana; Barrachina, Teresa; Miro, Rafael; Verdu, Gumersindo, E-mail: ajambrina@iqn.upv.es, E-mail: tbarrachina@iqn.upv.es, E-mail: rmiro@iqn.upv.es, E-mail: gverdu@iqn.upv.es [Universidade Politecnica de Valencia (UPV), Valencia (Spain); Soler, Amparo, E-mail: asoler@iberdrola.es [SEA Propulsion S.L., Madrid (Spain); Concejal, Alberto, E-mail: acbe@iberdrola.es [Iberdrola Ingenieria y Construcion S.A.U., Madrid (Spain)

    2013-07-01

    The TRAC-BF1 one-dimensional kinetic model is a formulation of the neutron diffusion equation in the two-energy-group approximation, based on the analytical nodal method (ANM). The advantage compared with a zero-dimensional kinetic model is that the axial power profile may vary with time due to thermal-hydraulic parameter changes and/or actions of the control systems, but it has the disadvantage that in unusual situations it fails to converge. The nodal collocation method developed for the neutron diffusion equation and applied to the kinetics resolution of the TRAC-BF1 thermal-hydraulics is an adaptation of the traditional collocation methods for the discretization of partial differential equations, based on the development of the solution as a linear combination of analytical functions. A nodal collocation method based on an expansion of the neutron fluxes in Legendre polynomials in each cell was chosen. The qualification is carried out by the analysis of the turbine trip transient from the NEA benchmark at Peach Bottom NPP, using both the original 1D kinetics implemented in TRAC-BF1 and the 1D nodal collocation method. (author)

  5. Application of a nodal collocation approximation for the multidimensional PL equations to the 3D Takeda benchmark problems

    International Nuclear Information System (INIS)

    Capilla, M.; Talavera, C.F.; Ginestar, D.; Verdú, G.

    2012-01-01

    Highlights: ► The multidimensional P_L approximation to the nuclear transport equation is reviewed. ► A nodal collocation method is developed for the spatial discretization of the P_L equations. ► Advantages of the method are its lower dimension and the good characteristics of the associated algebraic eigenvalue problem. ► The P_L nodal collocation method is implemented in the computer code SHNC. ► The SHNC code is verified with 2D and 3D benchmark eigenvalue problems from Takeda and Ikeda, giving satisfactory results. - Abstract: P_L equations are classical approximations to the neutron transport equations, which are obtained by expanding the angular neutron flux in terms of spherical harmonics. These approximations are useful to study the behavior of reactor cores with complex fuel assemblies, for the homogenization of nuclear cross-sections, etc., and most of these applications are in three-dimensional (3D) geometries. In this work, we review the multi-dimensional P_L equations and describe a nodal collocation method for the spatial discretization of these equations for arbitrary odd order L, which is based on the expansion of the spatial dependence of the fields in terms of orthonormal Legendre polynomials. The performance of the nodal collocation method is studied by obtaining the k_eff and the stationary power distribution of several 3D benchmark problems. The solutions obtained are compared with a finite element method and a Monte Carlo method.

  6. Correlation studies for B-spline modeled F2 Chapman parameters obtained from FORMOSAT-3/COSMIC data

    Directory of Open Access Journals (Sweden)

    M. Limberger

    2014-12-01

    Full Text Available The determination of ionospheric key quantities such as the maximum electron density of the F2 layer NmF2, the corresponding F2 peak height hmF2 and the F2 scale height HF2 is of high relevance in 4-D ionosphere modeling to provide information on the vertical structure of the electron density (Ne). The Ne distribution with respect to height can, for instance, be modeled by the commonly accepted F2 Chapman layer. An adequate and observation-driven description of the vertical Ne variation can be obtained from electron density profiles (EDPs) derived by ionospheric radio occultation measurements between GPS and low Earth orbiter (LEO) satellites. For these purposes, the six FORMOSAT-3/COSMIC (F3/C) satellites provide an excellent opportunity to collect EDPs that cover most of the ionospheric region, in particular the F2 layer. For the contents of this paper, F3/C EDPs have been exploited to determine NmF2, hmF2 and HF2 within a regional modeling approach. As mathematical base functions, endpoint-interpolating polynomial B-splines are considered to model the key parameters with respect to longitude, latitude and time. The description of deterministic processes and the verification of this modeling approach have been published previously in Limberger et al. (2013), whereas this paper should be considered as an extension dealing with related correlation studies, a topic to which less attention has been paid in the literature. Relations between the B-spline series coefficients regarding specific key parameters as well as dependencies between the three F2 Chapman key parameters are in the main focus. Dependencies are interpreted from the post-derived correlation matrices as a result of (1) a simulated scenario without data gaps, taking dense, homogeneously distributed profiles into account, and (2) two real data scenarios on 1 July 2008 and 1 July 2012 including sparsely, inhomogeneously distributed F3/C EDPs. Moderate correlations between hmF2 and HF2 as...
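
    For reference, the three key parameters define a vertical electron-density profile through one common form of the F2 Chapman layer, Ne(h) = NmF2 exp(0.5 (1 - z - exp(-z))) with z = (h - hmF2)/HF2. The parameter values in the sketch are made up for illustration and are not fitted F3/C results.

    ```python
    import numpy as np

    def chapman_f2(h_km, NmF2, hmF2_km, HF2_km):
        """F2 Chapman layer: Ne(h) = NmF2 * exp(0.5 * (1 - z - exp(-z))), z = (h - hmF2)/HF2."""
        z = (h_km - hmF2_km) / HF2_km
        return NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

    h = np.linspace(150.0, 800.0, 14)
    ne = chapman_f2(h, NmF2=1.0e12, hmF2_km=300.0, HF2_km=60.0)   # electrons per m^3
    print(ne.max(), h[ne.argmax()])                               # peak sits at hmF2
    ```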

  7. An adaptive multi-spline refinement algorithm in simulation based sailboat trajectory optimization using onboard multi-core computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2016-06-01

    Full Text Available A new dynamic programming based parallel algorithm adapted to on-board heterogeneous computers for simulation based trajectory optimization is studied in the context of “high-performance sailing”. The algorithm uses a new discrete space of continuously differentiable functions called the multi-splines as its search space representation. A basic version of the algorithm is presented in detail (pseudo-code, time and space complexity, search space auto-adaptation properties). Possible extensions of the basic algorithm are also described. The presented experimental results show that contemporary heterogeneous on-board computers can be effectively used for solving simulation based trajectory optimization problems. These computers can be considered micro high performance computing (HPC) platforms: they offer high performance while remaining energy and cost efficient. The simulation based approach can potentially give highly accurate results since the mathematical model that the simulator is built upon may be as complex as required. The approach described is applicable to many trajectory optimization problems due to its black-box represented performance measure and use of OpenCL.

  8. Curvelet-domain multiple matching method combined with cubic B-spline function

    Science.gov (United States)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Since the large amount of surface-related multiples in marine data would seriously influence the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, a small number of unknowns are selected as the basis points of the matching coefficient; second, the cubic B-spline function is applied on these basis points to reconstruct the matching array; third, a constrained solving equation is built based on the relationships of the predicted multiple, the matching coefficients, and the actual data; finally, the BFGS algorithm is used to iterate and realize the fast solution of the sparse-constrained multiple matching algorithm. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiple and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1 norm constraint. The applications to synthetic and field-derived data both validate the practicability and validity of the method.

  9. Motion characteristic between die and workpiece in spline rolling process with round dies

    Directory of Open Access Journals (Sweden)

    Da-Wei Zhang

    2016-06-01

    Full Text Available In the spline rolling process with round dies, additional kinematic compensation is an essential mechanism for improving the division of teeth and pitch accuracy as well as surface quality. The motion characteristic between the die and workpiece under varied center distance in the spline rolling process was investigated. Mathematical models of the instantaneous center of rotation, transmission ratio, and centrodes in the rolling process were established. The models were used to analyze the rolling process of the involute spline with circular dedendum, and the results indicated that (1) with the reduction in the center distance, the instantaneous center moves toward the workpiece, and the transmission ratio increases at first and then decreases; (2) the variations in the instantaneous center and transmission ratio are discontinuous, presenting an interruption when the involute flank begins to be formed; (3) the change in transmission ratio at the forming stage of the workpiece with the involute flank can be negligible; and (4) the centrode of the workpiece is an Archimedes line whose polar radius reduces, and the centrode of the rolling die is similar to an Archimedes line when the workpiece has the involute flank.

  10. Ethnicity and skeletal Class III morphology: a pubertal growth analysis using thin-plate spline analysis.

    Science.gov (United States)

    Alkhamrah, B; Terada, K; Yamaki, M; Ali, I M; Hanada, K

    2001-01-01

    A longitudinal retrospective study using thin-plate spline analysis was used to investigate skeletal Class III etiology in Japanese female adolescents. Headfilms of 40 subjects were chosen from the archives of the Orthodontic department at Niigata University Dental Hospital, and were traced at IIIB and IVA Hellman dental ages. Twenty-eight homologous landmarks, representing hard and soft tissue, were digitized. These were used to reproduce a consensus for the profilogram, craniomaxillary complex, mandible, and soft tissue for each age and skeletal group. Generalized least-square analysis revealed a significant shape difference between age-matched groups (P < ...). The maxillary total spline and partial warps (PW) 3 and 2 showed a maxillary retrusion at stage IIIB opposite an acute cranial base at stage IVA. The mandibular total spline and PW4, 5 showed changes affecting most landmarks and their spatial interrelationship, especially a stretch along the articulare-pogonion axis. In soft tissue analysis, PW8 showed large and local changes which paralleled the underlying hard tissue components. Allometry of the mandible and anisotropy of the cranial base, the maxilla, and the mandible asserted the complexity of craniofacial growth and the difficulty of predicting its outcome.

  11. Study on signal processing in Eddy current testing for defects in spline gear

    International Nuclear Information System (INIS)

    Lee, Jae Ho; Park, Tae Sug; Park, Ik Keun

    2016-01-01

    Eddy current testing (ECT) is commonly applied in the inspection of automated production lines of metallic products, because it has a high inspection speed and a reasonable price. When ECT is applied to the inspection of a metallic object having an uneven target surface, such as the spline gear of a spline shaft, it is difficult to distinguish the signal generated by a defect from the original signal obtained from the sensor, because the relatively large surface signals have similar frequency distributions. To facilitate the detection of defect signals from the spline gear, implementation of high-order filters is essential, so that the fault signals can be distinguished from the surrounding noise signals and, simultaneously, the pass-band of the filter can be adjusted according to the status of each production line and the object to be inspected. We examine the infinite impulse response (IIR) filters available for implementing an advanced filter for ECT, and attempt to detect the flaw signals through optimization of system design parameters for detecting the signals at the system level.
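
    A hedged sketch of the filtering idea: an adjustable IIR band-pass (here a Butterworth design in second-order sections via scipy) separating a weak in-band defect tone from a strong low-frequency surface signal. The sample rate, band edges, filter order, and signal amplitudes are illustrative, not production-line settings.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 100_000.0                         # Hz, illustrative acquisition rate
    lo, hi, order = 4_000.0, 6_000.0, 6    # adjustable pass-band and filter order
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")

    t = np.arange(0, 0.05, 1.0 / fs)
    surface = 0.8 * np.sin(2 * np.pi * 500.0 * t)      # strong gear-surface modulation
    defect = 0.05 * np.sin(2 * np.pi * 5_000.0 * t)    # weak in-band defect signature
    noise = 0.02 * np.random.default_rng(0).standard_normal(t.size)
    raw = surface + defect + noise

    filtered = sosfiltfilt(sos, raw)       # zero-phase filtering of the recorded trace
    print(np.std(filtered), np.std(defect))
    ```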

  12. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  13. Effects of early activator treatment in patients with class II malocclusion evaluated by thin-plate spline analysis.

    Science.gov (United States)

    Lux, C J; Rübel, J; Starke, J; Conradt, C; Stellzig, P A; Komposch, P G

    2001-04-01

    The aim of the present longitudinal cephalometric study was to evaluate the dentofacial shape changes induced by activator treatment between 9.5 and 11.5 years in male Class II patients. For a rigorous morphometric analysis, a thin-plate spline analysis was performed to assess and visualize dental and skeletal craniofacial changes. Twenty male patients with a skeletal Class II malrelationship and increased overjet who had been treated at the University of Heidelberg with a modified Andresen-Häupl-type activator were compared with a control group of 15 untreated male subjects of the Belfast Growth Study. The shape changes for each group were visualized on thin-plate splines, with one spline comprising all 13 landmarks to show all the craniofacial shape changes, including skeletal and dento-alveolar reactions, and a second spline based on 7 landmarks to visualize only the skeletal changes. In the activator group, the grid deformation of the total spline pointed to a strong activator-induced reduction of the overjet that was caused both by a tipping of the incisors and by a moderation of sagittal discrepancies, particularly a slight advancement of the mandible. In contrast, in the control group, only slight localized shape changes could be detected. Both in the 7- and 13-landmark configurations, the shape changes between the groups differed significantly (P < ...). Thin-plate spline analysis turned out to be a useful morphometric supplement to conventional cephalometrics because the complex patterns of shape change could be suggestively visualized.

  14. FRBRization of a Library Catalog: Better Collocation of Records, Leading to Enhanced Search, Retrieval, and Display

    Directory of Open Access Journals (Sweden)

    Timothy J. Dickey

    2008-03-01

    Full Text Available The Functional Requirements for Bibliographic Records (FRBR)'s hierarchical system defines families of bibliographic relationships between records and collocates them better than most extant bibliographic systems. Certain library materials (especially audio-visual formats) pose notable challenges to search and retrieval; the first benefits of a FRBRized system would be felt in music libraries, but research has already proven its advantages for fine arts, theology, and literature, which make up the bulk of the non-science, technology, and mathematics collections. This report will summarize the benefits of FRBR to next-generation library catalogs and OPACs, and will review the handful of ILS and catalog systems currently operating with its theoretical structure.

  15. Analytic regularity and collocation approximation for elliptic PDEs with random domain deformations

    KAUST Repository

    Castrillon, Julio

    2016-03-02

    In this work we consider the problem of approximating the statistics of a given Quantity of Interest (QoI) that depends on the solution of a linear elliptic PDE defined over a random domain parameterized by N random variables. The elliptic problem is remapped onto a corresponding PDE with a fixed deterministic domain. We show that the solution can be analytically extended to a well-defined region in C^N with respect to the random variables. A sparse grid stochastic collocation method is then used to compute the mean and variance of the QoI. Finally, convergence rates for the mean and variance of the QoI are derived and compared to those obtained in numerical experiments.
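    As a hedged illustration of the collocation idea (though not the sparse grid or the analytic-extension argument of the record), the sketch below estimates the mean and variance of a QoI depending on two uniform random variables using a plain tensor-product Gauss-Legendre grid; the QoI here is an analytic stand-in for a PDE solve.

        import numpy as np
        from numpy.polynomial.legendre import leggauss

        # Quantity of interest depending on two uniform random variables on [-1, 1];
        # in practice each evaluation would involve solving the remapped PDE.
        def qoi(y1, y2):
            return np.exp(0.3 * y1) * (1.0 + 0.5 * y2 ** 2)

        # Tensor-product Gauss-Legendre collocation: nodes/weights per dimension.
        n = 6
        nodes, weights = leggauss(n)      # exact for polynomials of degree 2n-1 on [-1, 1]
        weights = weights / 2.0           # rescale to the uniform density 1/2 per dimension

        mean = 0.0
        second = 0.0
        for i in range(n):
            for j in range(n):
                w = weights[i] * weights[j]
                q = qoi(nodes[i], nodes[j])
                mean += w * q
                second += w * q ** 2
        var = second - mean ** 2
        print(mean, var)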

  16. Coefficient of restitution in fractional viscoelastic compliant impacts using fractional Chebyshev collocation

    Science.gov (United States)

    Dabiri, Arman; Butcher, Eric A.; Nazari, Morad

    2017-02-01

    Compliant impacts can be modeled using linear viscoelastic constitutive models. While impact models for realistic viscoelastic materials that use integer-order derivatives of force and displacement usually require a large number of parameters, compliant impact models obtained using fractional calculus can be advantageous, since such models use fewer parameters and successfully capture the hereditary property. In this paper, we introduce the fractional Chebyshev collocation (FCC) method as an approximation tool for numerical simulation of several linear fractional viscoelastic compliant impact models in which the overall coefficient of restitution for the impact is studied as a function of the fractional model parameters for the first time. Other relevant impact characteristics such as hysteresis curves, impact force gradient, and penetration and separation depths are also studied.

  17. Finite Volume Methods for Incompressible Navier-Stokes Equations on Collocated Grids with Nonconformal Interfaces

    DEFF Research Database (Denmark)

    Kolmogorov, Dmitry

    ... turbine computations, collocated grid-based SIMPLE-like algorithms are developed for computations on block-structured grids with nonconformal interfaces. A technique to enhance both the convergence speed and the solution accuracy of the SIMPLE-like algorithms is presented. The erroneous behavior, which...... versions of the SIMPLE algorithm. The new technique is implemented in an existing conservative 2nd-order finite-volume flow solver (EllipSys), which is extended to cope with grids with nonconformal interfaces. The behavior of the discrete Navier-Stokes equations is discussed in detail...... The Block LU relaxation scheme is shown to possess several optimal conditions, which enables it to preserve the high efficiency of the multigrid solver on both conformal and nonconformal grids. The developments are done using a parallel MPI algorithm, which can handle multiple numbers of interfaces with multiple...

  18. Block preconditioners for linear systems arising from multiscale collocation with compactly supported RBFs

    KAUST Repository

    Farrell, Patricio

    2015-04-30

    © 2015 John Wiley & Sons, Ltd. Symmetric collocation methods with RBFs allow approximation of the solution of a partial differential equation, even if the right-hand side is only known at scattered data points, without needing to generate a grid. However, the benefit of a guaranteed symmetric positive definite block system comes at a high computational cost. This cost can be alleviated somewhat by considering compactly supported RBFs and a multiscale technique. But the condition number and sparsity will still deteriorate with the number of data points. Therefore, we study certain block diagonal and triangular preconditioners. We investigate ideal preconditioners and determine the spectra of the preconditioned matrices before proposing more practical preconditioners based on a restricted additive Schwarz method with coarse grid correction. Numerical results verify the effectiveness of the preconditioners.

  19. An embedded formula of the Chebyshev collocation method for stiff problems

    Science.gov (United States)

    Piao, Xiangfan; Bu, Sunyoung; Kim, Dojin; Kim, Philsu

    2017-12-01

    In this study, we have developed an embedded formula of the Chebyshev collocation method for stiff problems, based on the zeros of the generalized Chebyshev polynomials. A new strategy for the embedded formula, using a pair of methods to estimate the local truncation error as in traditional embedded Runge-Kutta schemes, is proposed. The method is designed so that not only can the stability region of the embedded formula be widened, but, by allowing the use of larger time step sizes, the total computational cost can also be reduced. A concrete convergence and stability analysis shows that the constructed algorithm has 8th-order convergence and exhibits A-stability. Through several numerical experiments, we demonstrate that the proposed method is numerically more efficient than several existing implicit methods.

  20. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    Directory of Open Access Journals (Sweden)

    S. S. Motsa

    2014-01-01

    Full Text Available This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature.

  1. Collocated electrodynamic FDTD schemes using overlapping Yee grids and higher-order Hodge duals

    Science.gov (United States)

    Deimert, C.; Potter, M. E.; Okoniewski, M.

    2016-12-01

    The collocated Lebedev grid has previously been proposed as an alternative to the Yee grid for electromagnetic finite-difference time-domain (FDTD) simulations. While it performs better in anisotropic media, it performs poorly in isotropic media because it is equivalent to four overlapping, uncoupled Yee grids. We propose to couple the four Yee grids and fix the Lebedev method using discrete exterior calculus (DEC) with higher-order Hodge duals. We find that higher-order Hodge duals do improve the performance of the Lebedev grid, but they also improve the Yee grid by a similar amount. The effectiveness of coupling overlapping Yee grids with a higher-order Hodge dual is thus questionable. However, the theoretical foundations developed to derive these methods may be of interest in other problems.

  2. A bivariate Chebyshev spectral collocation quasilinearization method for nonlinear evolution parabolic equations.

    Science.gov (United States)

    Motsa, S S; Magagula, V M; Sibanda, P

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature.
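    The following is a minimal sketch of the basic Chebyshev collocation building block used by methods of this kind: a differentiation matrix on Chebyshev-Gauss-Lobatto nodes applied to a linear two-point boundary value problem. The quasilinearization step and the bivariate Lagrange interpolation in time described in the record are not reproduced; the test problem is illustrative.

        import numpy as np

        def cheb(N):
            """Chebyshev differentiation matrix and Gauss-Lobatto nodes on [-1, 1]."""
            if N == 0:
                return np.zeros((1, 1)), np.array([1.0])
            x = np.cos(np.pi * np.arange(N + 1) / N)
            c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
            X = np.tile(x, (N + 1, 1)).T
            dX = X - X.T
            D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
            D -= np.diag(D.sum(axis=1))
            return D, x

        # Solve u'' = exp(x) on (-1, 1) with u(-1) = u(1) = 0 by collocation.
        N = 16
        D, x = cheb(N)
        D2 = D @ D
        A = D2[1:-1, 1:-1]               # drop boundary rows/columns (Dirichlet BCs)
        f = np.exp(x[1:-1])
        u_inner = np.linalg.solve(A, f)
        u = np.concatenate([[0.0], u_inner, [0.0]])

        # Exact solution of the test problem, for comparison.
        exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
        print(np.max(np.abs(u - exact)))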

  3. Between initial familiarity and future use – a case of Collocated Collaborative Writing

    DEFF Research Database (Denmark)

    Bødker, Susanne; Polli, Anna Maria

    2014-01-01

    This paper reports on a design experiment in an art gallery, where we explored visitor practices of commenting on art, and how they were shaped in interaction with a newly designed collocated, collaborative writing technology. In particular we investigate what potentials previous practices carry with them that may affect early use and further development of use. We base our analyses on interviews in the art gallery and on socio-cultural theories of artefact-mediated learning and collaboration. The analyses help identify three forms of collaborative writing, which are placed in the space between ... these with the three above forms of practice. The initial familiarity leads to two different early practices that get in the way of each other, and the collaborative writing idea. They point instead towards a discursive sharing of individual feelings, a different kind of past experiences than anticipated in design.

  4. Adaptive collocation method for simultaneous heat and mass diffusion with phase change

    International Nuclear Information System (INIS)

    Chawla, T.C.; Leaf, G.; Minkowycz, W.J.; Pedersen, D.R.; Shouman, A.R.

    1983-01-01

    The present study is carried out to determine melting rates of a lead slab of various thicknesses by contact with sodium coolant and to evaluate the extent of penetration and the mixing rates of molten lead into liquid sodium by molecular diffusion alone. The study shows that these two calculations cannot be performed simultaneously without the use of adaptive coordinates which cause considerable stretching of the physical coordinates for mass diffusion. Because of the large difference in densities of these two liquid metals, the traditional constant density approximation for the calculation of mass diffusion cannot be used for studying their interdiffusion. The use of orthogonal collocation method along with adaptive coordinates produces extremely accurate results which are ascertained by comparing with the existing analytical solutions for concentration distribution for the case of constant density approximation and for melting rates for the case of infinite lead slab

  5. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison

    KAUST Repository

    Bäck, Joakim

    2010-09-17

    Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods. By introducing a suitable generalization of the classical sparse grid SC method, we are able to compare SG and SC on the same underlying multivariate polynomial space in terms of accuracy vs. computational work. The approximation spaces considered here include isotropic and anisotropic versions of Tensor Product (TP), Total Degree (TD), Hyperbolic Cross (HC) and Smolyak (SM) polynomials. Numerical results for linear elliptic SPDEs indicate a slight computational work advantage of isotropic SC over SG, with SC-SM and SG-TD being the best choices of approximation spaces for each method. Finally, numerical results corroborate the optimality of the theoretical estimate of anisotropy ratios introduced by the authors in a previous work for the construction of anisotropic approximation spaces. © 2011 Springer.

  6. Application of the orthogonal collocation method to determination of temperature distribution in cylindrical conductors

    International Nuclear Information System (INIS)

    Fortini, Maria A.; Stamoulis, Michel N.; Ferreira, Angela F.M.; Pereira, Claubia; Costa, Antonella L.; Silva, Clarysson A.M.

    2008-01-01

    In this work, an analytical model for the determination of the temperature distribution in cylindrical heater components with characteristics of nuclear fuel rods is presented. The heat conductor is characterized by an arbitrary number of solid walls and different types of materials, whose thermal properties are taken as functions of temperature. The heat conduction fundamental equation is solved numerically with the method of weighted residuals (MWR) using a technique of orthogonal collocation. The results obtained with the proposed method are compared with the experimental data from tests performed in the TRIGA IPR-R1 research reactor located at CDTN/CNEN (Centro de Desenvolvimento da Tecnologia Nuclear/Comissao Nacional de Energia Nuclear) in Belo Horizonte, Brazil.

  7. Demonstration of non-collocated vibration control of a flexible manipulator using electrical dynamic absorbers

    International Nuclear Information System (INIS)

    Kim, Sang-Myeong; Kim, Heungseob; Boo, Kwangsuck; Brennan, Michael J

    2013-01-01

    This paper describes an experimental study into the vibration control of a servo system comprising a servo motor and a flexible manipulator. Two modes of the system are controlled by using the servo motor and an accelerometer attached to the tip of the flexible manipulator. The control system is thus non-collocated. It consists of two electrical dynamic absorbers, each of which consists of a modal filter and, in case of an out-of-phase mode, a phase inverter. The experimental results show that each absorber acts as a mechanical dynamic vibration absorber attached to each mode and significantly reduces the settling time for the system response to a step input. (technical note)

  8. A fast collocation method for a variable-coefficient nonlocal diffusion model

    Science.gov (United States)

    Wang, Che; Wang, Hong

    2017-02-01

    We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N^3) required by a commonly used direct solver to O(N log N) per iteration and the memory requirement from O(N^2) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N^2) to O(N). Numerical results are presented to show the utility of the fast method.
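    The record does not give the details of its fast matrix-vector product, so the sketch below only illustrates the generic structure-exploiting idea behind O(N log N) products with dense structured matrices: embedding a Toeplitz matrix in a circulant and multiplying via the FFT. Function and variable names are hypothetical.

        import numpy as np

        def toeplitz_matvec(first_col, first_row, x):
            """Multiply a Toeplitz matrix by a vector in O(N log N) via circulant embedding."""
            n = len(x)
            # First column of a 2n x 2n circulant whose top-left block is the Toeplitz matrix.
            c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
            fx = np.fft.fft(np.concatenate([x, np.zeros(n)]))
            y = np.fft.ifft(np.fft.fft(c) * fx).real
            return y[:n]

        # Check against a dense multiplication.
        rng = np.random.default_rng(1)
        n = 8
        col = rng.normal(size=n)
        row = np.concatenate([[col[0]], rng.normal(size=n - 1)])
        T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
                      for i in range(n)])
        x = rng.normal(size=n)
        print(np.max(np.abs(T @ x - toeplitz_matvec(col, row, x))))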

  9. Forecasting the daily power output of a grid-connected photovoltaic system based on multivariate adaptive regression splines

    International Nuclear Information System (INIS)

    Li, Yanting; He, Yong; Su, Yan; Shu, Lianjie

    2016-01-01

    Highlights: • Suggests a nonparametric model based on MARS for output power prediction. • Compares the MARS model with a wide variety of prediction models. • Shows that the MARS model is able to provide an overall good performance in both the training and testing stages. - Abstract: Both linear and nonlinear models have been proposed for forecasting the power output of photovoltaic systems. Linear models are simple to implement but less flexible. Due to the stochastic nature of the power output of PV systems, nonlinear models tend to provide better forecasts than linear models. Motivated by this, this paper suggests a fairly simple nonlinear regression model known as multivariate adaptive regression splines (MARS) as an alternative for forecasting solar power output. The MARS model is a data-driven modeling approach without any assumption about the relationship between the power output and predictors. It maintains the simplicity of the classical multiple linear regression (MLR) model while possessing the capability of handling nonlinearity. It is simpler in format than other nonlinear models such as ANN, k-nearest neighbors (KNN), classification and regression trees (CART), and support vector machines (SVM). The MARS model was applied to the daily output of a grid-connected 2.1 kW PV system to provide the 1-day-ahead mean daily forecast of the power output. The comparisons with a wide variety of forecast models show that the MARS model is able to provide reliable forecast performance.
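    As a rough illustration of the basis functions underlying MARS (and assuming nothing about the paper's predictors or data), the sketch below fits a toy PV-output curve with fixed hinge (truncated linear) functions by ordinary least squares; real MARS selects knots adaptively with forward selection and backward pruning.

        import numpy as np

        # Hinge (truncated linear) basis functions, the building block of MARS.
        def hinge(x, knot):
            return np.maximum(0.0, x - knot), np.maximum(0.0, knot - x)

        # Toy data: daily PV output as a nonlinear function of irradiance plus noise.
        rng = np.random.default_rng(2)
        irradiance = rng.uniform(0.0, 1.0, size=200)
        power = 2.0 * np.maximum(0.0, irradiance - 0.3) + rng.normal(scale=0.05, size=200)

        # Fixed candidate knots; coefficients estimated by least squares.
        knots = [0.2, 0.4, 0.6, 0.8]
        columns = [np.ones_like(irradiance)]
        for k in knots:
            plus, minus = hinge(irradiance, k)
            columns.extend([plus, minus])
        X = np.column_stack(columns)

        coef, *_ = np.linalg.lstsq(X, power, rcond=None)
        print(coef)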

  10. Estimation of arterial arrival time and cerebral blood flow from QUASAR arterial spin labeling using stable spline.

    Science.gov (United States)

    Castellaro, Marco; Peruzzo, Denis; Mehndiratta, Amit; Pillonetto, Gianluigi; Petersen, Esben Thade; Golay, Xavier; Chappell, Michael A; Bertoldo, Alessandra

    2015-12-01

    QUASAR arterial spin labeling (ASL) permits the application of deconvolution approaches for the absolute quantification of cerebral perfusion. Currently, oscillation index regularized singular value decomposition (oSVD) combined with edge-detection (ED) is the most commonly used method. Its major drawbacks are nonphysiological oscillations in the impulse response function and underestimation of perfusion. The aim of this work is to introduce a novel method to overcome these limitations. A system identification method, stable spline (SS), was extended to address ASL peculiarities such as the delay in arrival of the arterial blood in the tissue. The proposed framework was compared with oSVD + ED in both simulated and real data. SS was used to investigate the validity of using a voxel-wise tissue T1 value instead of a single global value (of blood T1). SS outperformed oSVD + ED in 79.9% of simulations. When applied to real data, SS exhibited a physiologically realistic range for perfusion and a higher mean value than oSVD + ED (SS: 55.5 ± 9.5 vs. oSVD + ED: 34.9 ± 5.2 mL/100 g/min). SS can represent an alternative to oSVD + ED for the quantification of QUASAR ASL data. Analysis of the retrieved impulse response function revealed that using a voxel-wise tissue T1 might be suboptimal. © 2014 Wiley Periodicals, Inc.

  11. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
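    A minimal one-parameter sketch of the two-step idea, assuming a simple exponential-decay ODE: the noisy state is smoothed with a (penalized) smoothing spline and the smoothed values are plugged into a trapezoidal-rule estimating equation that is linear in the unknown parameter. The article's multi-parameter estimators and their asymptotics are not reproduced.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        # True model: dx/dt = -theta * x with theta = 0.5, observed with noise.
        rng = np.random.default_rng(3)
        theta_true = 0.5
        t = np.linspace(0.0, 10.0, 51)
        x_true = np.exp(-theta_true * t)
        x_obs = x_true + rng.normal(scale=0.02, size=t.size)

        # Step 1: smooth the noisy state with a smoothing (penalized) spline.
        spline = UnivariateSpline(t, x_obs, s=len(t) * 0.02 ** 2)
        x_hat = spline(t)

        # Step 2: trapezoidal-rule estimating equation
        #   x_{i+1} - x_i ≈ (h/2) * (f(x_i) + f(x_{i+1})), with f(x) = -theta * x,
        # which is linear in theta and solved by least squares.
        h = np.diff(t)
        dx = np.diff(x_hat)
        a = 0.5 * h * (x_hat[:-1] + x_hat[1:])
        theta_est = np.linalg.lstsq(a[:, None], -dx, rcond=None)[0][0]
        print(theta_est)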

  12. An Analysis of Peak Wind Speed Data from Collocated Mechanical and Ultrasonic Anemometers

    Science.gov (United States)

    Short, David A.; Wells, Leonard; Merceret, Francis J.; Roeder, William P.

    2007-01-01

    This study compared peak wind speeds reported by mechanical and ultrasonic anemometers at Cape Canaveral Air Force Station and Kennedy Space Center (CCAFS/KSC) on the east central coast of Florida and Vandenberg Air Force Base (VAFB) on the central coast of California. Launch Weather Officers, forecasters, and Range Safety analysts need to understand the performance of wind sensors at CCAFS/KSC and VAFB for weather warnings, watches, advisories, special ground processing operations, launch pad exposure forecasts, user Launch Commit Criteria (LCC) forecasts and evaluations, and toxic dispersion support. The legacy CCAFS/KSC and VAFB weather tower wind instruments are being changed from propeller-and-vane (CCAFS/KSC) and cup-and-vane (VAFB) sensors to ultrasonic sensors under the Range Standardization and Automation (RSA) program. Mechanical and ultrasonic wind measuring techniques are known to cause differences in the statistics of peak wind speed as shown in previous studies. The 45th Weather Squadron (45 WS) and the 30th Weather Squadron (30 WS) requested the Applied Meteorology Unit (AMU) to compare data between the RSA ultrasonic and legacy mechanical sensors to determine if there are significant differences. Note that the instruments were sited outdoors under naturally varying conditions and that this comparison was not designed to verify either technology. Approximately 3 weeks of mechanical and ultrasonic wind data from each range from May and June 2005 were used in this study. The CCAFS/KSC data spanned the full diurnal cycle, while the VAFB data were confined to 1000-1600 local time. The sample of 1-minute data from numerous levels on five different towers on each range totaled more than 500,000 minutes of data (482,979 minutes of data after quality control). The ten towers were instrumented at several levels, ranging from 12 ft to 492 ft above ground level. The ultrasonic sensors were collocated at the same vertical levels as the mechanical sensors and

  13. Interpolating Spline Curve-Based Perceptual Encryption for 3D Printing Models

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the development of 3D printing technology, 3D printing has recently been applied to many areas of life, including healthcare and the automotive industry. Because 3D printing models are valuable, they are often attacked by hackers and distributed without agreement from the original providers. Furthermore, certain special models and anti-weapon models in 3D printing must be protected against unauthorized users. Therefore, in order to prevent attacks and illegal copying and to ensure that all access is authorized, 3D printing models should be encrypted before being transmitted and stored. A novel perceptual encryption algorithm for 3D printing models for secure storage and transmission is presented in this paper. A facet of the 3D printing model is extracted to interpolate a spline curve of degree 2 in three-dimensional space that is determined by three control points, the curvature coefficients of degree 2, and an interpolating vector. The three control points, the curvature coefficients, and the interpolating vector of the spline curve of degree 2 are encrypted by a secret key. The encrypted features of the spline curve are then used to obtain the encrypted 3D printing model by inverse interpolation and geometric distortion. The results of experiments and evaluations prove that the entire 3D triangle model is altered and deformed after the perceptual encryption process. The proposed algorithm is applicable to the various formats of 3D printing models. The results of the perceptual encryption process are superior to those of previous methods. The proposed algorithm also provides a better method and more security than previous methods.

  14. Thin-plate spline analysis of the cranial base in subjects with Class III malocclusion.

    Science.gov (United States)

    Singh, G D; McNamara, J A; Lozanoff, S

    1997-08-01

    The role of the cranial base in the emergence of Class III malocclusion is not fully understood. This study determines deformations that contribute to a Class III cranial base morphology, employing thin-plate spline analysis on lateral cephalographs. A total of 73 children of European-American descent aged between 5 and 11 years with Class III malocclusion were compared with an equivalent group of subjects with a normal, untreated, Class I molar occlusion. The cephalographs were traced, checked, and subdivided into seven age- and sex-matched groups. Thirteen points on the cranial base were identified and digitized. The datasets were scaled to an equivalent size, and statistical analysis indicated significant differences between average Class I and Class III cranial base morphologies for each group. Thin-plate spline analysis indicated that both affine (uniform) and non-affine transformations contribute toward the total spline for each average cranial base morphology at each age group analysed. For non-affine transformations, Partial warps 10, 8 and 7 had high magnitudes, indicating large-scale deformations affecting Bolton point, basion, pterygo-maxillare, Ricketts' point and articulare. In contrast, high eigenvalues associated with Partial warps 1-3, indicating localized shape changes, were found at tuberculum sellae, sella, and the frontonasomaxillary suture. It is concluded that large spatial-scale deformations affect the occipital complex of the cranial base and sphenoidal region, in combination with localized distortions at the frontonasal suture. These deformations may contribute to reduced orthocephalization or deficient flattening of the cranial base antero-posteriorly that, in turn, leads to the formation of a Class III malocclusion.
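    For reference, the sketch below shows the core thin-plate spline computation that underlies this kind of shape analysis: fitting a 2D TPS that maps one landmark configuration onto another. The landmark coordinates are synthetic stand-ins, and the decomposition into affine components and partial warps reported in the study is not implemented.

        import numpy as np

        def thin_plate_spline(src, dst):
            """Fit a 2D thin-plate spline mapping src landmarks onto dst landmarks.

            Returns a function that warps arbitrary (m, 2) points.
            """
            n = src.shape[0]

            def U(r2):
                # Radial kernel r^2 * log(r^2), with U(0) = 0.
                with np.errstate(divide="ignore", invalid="ignore"):
                    return np.where(r2 > 0, r2 * np.log(r2), 0.0)

            d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
            K = U(d2)
            P = np.hstack([np.ones((n, 1)), src])
            L = np.zeros((n + 3, n + 3))
            L[:n, :n] = K
            L[:n, n:] = P
            L[n:, :n] = P.T
            rhs = np.zeros((n + 3, 2))
            rhs[:n] = dst
            params = np.linalg.solve(L, rhs)          # spline weights + affine part
            w, a = params[:n], params[n:]

            def warp(pts):
                d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
                return U(d2) @ w + np.hstack([np.ones((pts.shape[0], 1)), pts]) @ a

            return warp

        # Toy landmark sets standing in for two cranial-base configurations.
        src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
        dst = src + np.array([[0.0, 0.0], [0.1, 0.0], [0.1, 0.15], [0.0, 0.1], [0.05, 0.0]])
        warp = thin_plate_spline(src, dst)
        print(np.max(np.abs(warp(src) - dst)))        # the spline interpolates the landmarks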

  15. Estimating trajectories of energy intake through childhood and adolescence using linear-spline multilevel models.

    Science.gov (United States)

    Anderson, Emma L; Tilling, Kate; Fraser, Abigail; Macdonald-Wallis, Corrie; Emmett, Pauline; Cribb, Victoria; Northstone, Kate; Lawlor, Debbie A; Howe, Laura D

    2013-07-01

    Methods for the assessment of changes in dietary intake across the life course are underdeveloped. We demonstrate the use of linear-spline multilevel models to summarize energy-intake trajectories through childhood and adolescence and their application as exposures, outcomes, or mediators. The Avon Longitudinal Study of Parents and Children assessed children's dietary intake several times between ages 3 and 13 years, using both food frequency questionnaires (FFQs) and 3-day food diaries. We estimated energy-intake trajectories for 12,032 children using linear-spline multilevel models. We then assessed the associations of these trajectories with maternal body mass index (BMI), and later offspring BMI, and also their role in mediating the relation between maternal and offspring BMIs. Models estimated average and individual energy intake at 3 years, and linear changes in energy intake from age 3 to 7 years and from age 7 to 13 years. By including the exposure (in this example, maternal BMI) in the multilevel model, we were able to estimate the average energy-intake trajectories across levels of the exposure. When energy-intake trajectories are the exposure for a later outcome (in this case offspring BMI) or a mediator (between maternal and offspring BMI), results were similar, whether using a two-step process (exporting individual-level intercepts and slopes from multilevel models and using these in linear regression/path analysis), or a single-step process (multivariate multilevel models). Trajectories were similar when FFQs and food diaries were assessed either separately, or when combined into one model. Linear-spline multilevel models provide useful summaries of trajectories of dietary intake that can be used as an exposure, outcome, or mediator.
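    A minimal sketch of the linear-spline shape used here, fitted to a single illustrative child by ordinary least squares (the intake values are made up); the article's multilevel model pools all children and adds random effects, which are not shown.

        import numpy as np

        # Linear-spline basis for an energy-intake trajectory with a knot at age 7:
        # intercept at age 3, slope from 3 to 7, and slope from 7 onwards.
        def basis(age):
            return np.column_stack([
                np.ones_like(age),
                np.clip(age, 3.0, 7.0) - 3.0,
                np.maximum(age - 7.0, 0.0),
            ])

        ages = np.array([3.0, 4.5, 7.0, 9.0, 10.5, 13.0])
        intake = np.array([1150.0, 1290.0, 1520.0, 1650.0, 1730.0, 1880.0])  # kcal/day, illustrative
        coef, *_ = np.linalg.lstsq(basis(ages), intake, rcond=None)
        print(coef)   # [intake at age 3, slope ages 3-7, slope ages 7-13]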

  16. A volume of fluid method based on multidimensional advection and spline interface reconstruction

    International Nuclear Information System (INIS)

    Lopez, J.; Hernandez, J.; Gomez, P.; Faura, F.

    2004-01-01

    A new volume of fluid method for tracking two-dimensional interfaces is presented. The method involves a multidimensional advection algorithm based on the use of edge-matched flux polygons to integrate the volume fraction evolution equation, and a spline-based reconstruction algorithm. The accuracy and efficiency of the proposed method are analyzed using different tests, and the results are compared with those obtained recently by other authors. Despite its simplicity, the proposed method represents a significant improvement, and compares favorably with other volume of fluid methods as regards the accuracy and efficiency of both the advection and reconstruction steps

  17. Computer simulation comparison of tripolar, bipolar, and spline Laplacian electrocardiogram estimators.

    Science.gov (United States)

    Chen, T; Besio, W; Dai, W

    2009-01-01

    The performance of tripolar and bipolar concentric as well as spline Laplacian electrocardiograms (LECGs) and body surface Laplacian mappings (BSLMs) for localizing and imaging cardiac electrical activation has been compared based on computer simulation. In the simulation, a simplified eccentric heart-torso sphere-cylinder homogeneous volume conductor model was developed. Multiple dipoles with different orientations were used to simulate the underlying cardiac electrical activities. Results show that the tripolar concentric ring electrodes produce the most accurate LECG and BSLM estimation among the three estimators, with the best performance in spatial resolution.

  18. Gaussian quadrature rules for C1 quintic splines with uniform knot vectors

    KAUST Repository

    Barton, Michael; Ait-Haddou, Rachid; Calo, Victor Manuel

    2017-01-01

    We provide explicit quadrature rules for spaces of C1 quintic splines with uniform knot sequences over finite domains. The quadrature nodes and weights are derived via an explicit recursion that avoids numerical solvers. Each rule is optimal, that is, requires the minimal number of nodes, for a given function space. For each of the n subintervals, generically, only two nodes are required, which reduces the evaluation cost by 2/3 when compared to the classical Gaussian quadrature for polynomials over each knot span. Numerical experiments show fast convergence, as n grows, to the “two-third” quadrature rule of Hughes et al. (2010) for infinite domains.

  19. Gaussian quadrature rules for C1 quintic splines with uniform knot vectors

    KAUST Repository

    Bartoň, Michael

    2017-03-21

    We provide explicit quadrature rules for spaces of C1 quintic splines with uniform knot sequences over finite domains. The quadrature nodes and weights are derived via an explicit recursion that avoids numerical solvers. Each rule is optimal, that is, requires the minimal number of nodes, for a given function space. For each of the n subintervals, generically, only two nodes are required, which reduces the evaluation cost by 2/3 when compared to the classical Gaussian quadrature for polynomials over each knot span. Numerical experiments show fast convergence, as n grows, to the “two-third” quadrature rule of Hughes et al. (2010) for infinite domains.
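    The classical per-knot-span Gauss-Legendre rule that the abstract uses as its baseline can be sketched as follows; three nodes per span integrate quintic polynomials exactly, whereas the optimal spline rules of the paper need only about two nodes per span and rely on the explicit recursion from the article, which is not reproduced here.

        import numpy as np
        from numpy.polynomial.legendre import leggauss

        def integrate_per_span(f, knots, npts=3):
            """Classical Gauss-Legendre quadrature applied span by span."""
            nodes, weights = leggauss(npts)
            total = 0.0
            for a, b in zip(knots[:-1], knots[1:]):
                x = 0.5 * (b - a) * nodes + 0.5 * (a + b)   # map [-1, 1] -> [a, b]
                total += 0.5 * (b - a) * np.dot(weights, f(x))
            return total

        knots = np.linspace(0.0, 1.0, 11)
        f = lambda x: x ** 5 - 2.0 * x ** 3 + x             # a quintic test integrand
        print(integrate_per_span(f, knots), 1.0 / 6 - 2.0 / 4 + 1.0 / 2)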

  20. Numerical solution of the controlled Duffing oscillator by semi-orthogonal spline wavelets

    International Nuclear Information System (INIS)

    Lakestani, M; Razzaghi, M; Dehghan, M

    2006-01-01

    This paper presents a numerical method for solving the controlled Duffing oscillator. The method can be extended to nonlinear calculus of variations and optimal control problems. The method is based upon compactly supported linear semi-orthogonal B-spline wavelets. The differential and integral expressions which arise in the system dynamics, the performance index and the boundary conditions are converted into some algebraic equations which can be solved for the unknown coefficients. Illustrative examples are included to demonstrate the validity and applicability of the technique

  1. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Crecy, F de

    1994-12-31

    A mathematical tool, using pseudo-cubic thin-plate type Spline, has been developed for analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with related uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross validation method. The residual standard deviation obtained is significantly smaller than that of a least square regression. An example of use is given with critical heat flux data, showing a significant decrease of the conception criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs.
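    As a loose, low-dimensional analogue of the smoothing-plus-generalized-cross-validation idea (not the multidimensional pseudo-cubic thin-plate spline of the record), the sketch below smooths 1D data with a second-difference roughness penalty and picks the smoothing parameter by minimizing the GCV score.

        import numpy as np

        # Noisy 1D data to be smoothed with a roughness penalty.
        rng = np.random.default_rng(4)
        n = 80
        x = np.linspace(0.0, 1.0, n)
        y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=n)

        # Second-difference penalty matrix, a discrete analogue of curvature.
        D = np.diff(np.eye(n), n=2, axis=0)

        def gcv_score(lam):
            H = np.linalg.solve(np.eye(n) + lam * D.T @ D, np.eye(n))  # hat matrix
            resid = y - H @ y
            return n * (resid @ resid) / (n - np.trace(H)) ** 2

        lams = np.logspace(-6, 2, 30)
        best = lams[np.argmin([gcv_score(l) for l in lams])]
        fit = np.linalg.solve(np.eye(n) + best * D.T @ D, y)
        print(best, np.std(y - fit))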

  2. Registration of segmented histological images using thin plate splines and belief propagation

    Science.gov (United States)

    Kybic, Jan

    2014-03-01

    We register images based on their multiclass segmentations, for cases when correspondence of local features cannot be established. A discrete mutual information is used as a similarity criterion. It is evaluated at a sparse set of location on the interfaces between classes. A thin-plate spline regularization is approximated by pairwise interactions. The problem is cast into a discrete setting and solved efficiently by belief propagation. Further speedup and robustness is provided by a multiresolution framework. Preliminary experiments suggest that our method can provide similar registration quality to standard methods at a fraction of the computational cost.

  3. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    International Nuclear Information System (INIS)

    Crecy, F. de.

    1993-01-01

    A mathematical tool, using pseudo-cubic thin-plate type Spline, has been developed for analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with related uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross validation method. The residual standard deviation obtained is significantly smaller than that of a least square regression. An example of use is given with critical heat flux data, showing a significant decrease of the conception criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs

  4. Tikhonov regularization method for the numerical inversion of Mellin transforms using splines

    International Nuclear Information System (INIS)

    Iqbal, M.

    2005-01-01

    The numerical inversion of the Mellin transform is an ill-posed problem. Such problems arise in many branches of science and engineering. In the typical situation one is interested in recovering the original function, given a finite number of noisy measurements of data. In this paper, we shall convert the Mellin transform to a Laplace transform and then to an integral equation of the first kind of convolution type. We solve the integral equation using Tikhonov regularization with splines as basis functions. The method is applied to various test examples in the literature and results are shown in the table.
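    A hedged sketch of the Tikhonov step on a synthetic first-kind convolution equation is given below; the kernel, noise level, and regularization parameter are illustrative, and the Mellin-to-Laplace conversion and spline basis of the paper are not implemented.

        import numpy as np

        # First-kind convolution-type equation: b(t) = ∫ k(t - s) x(s) ds, discretized.
        n = 100
        s = np.linspace(0.0, 1.0, n)
        h = s[1] - s[0]
        K = h * np.exp(-30.0 * (s[:, None] - s[None, :]) ** 2)   # smooth kernel -> ill-posed
        x_true = np.sin(2 * np.pi * s) ** 2
        rng = np.random.default_rng(5)
        b = K @ x_true + rng.normal(scale=1e-3, size=n)

        # Tikhonov regularization: minimize ||K x - b||^2 + lam^2 ||x||^2,
        # solved as an ordinary least-squares problem on a stacked system.
        lam = 1e-2
        A = np.vstack([K, lam * np.eye(n)])
        rhs = np.concatenate([b, np.zeros(n)])
        x_reg = np.linalg.lstsq(A, rhs, rcond=None)[0]
        print(np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))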

  5. The high-level error bound for shifted surface spline interpolation

    OpenAIRE

    Luh, Lin-Tian

    2006-01-01

    Radial function interpolation of scattered data is a frequently used method for multivariate data fitting. One of the most frequently used radial functions is the shifted surface spline, introduced by Dyn, Levin and Rippa in \cite{Dy1} for $R^{2}$. It was then extended to $R^{n}$ for $n \geq 1$. Many articles have studied its properties, as can be seen in \cite{Bu,Du,Dy2,Po,Ri,Yo1,Yo2,Yo3,Yo4}. When dealing with this function, the most commonly used error bounds are the one raised by Wu and S...

  6. About the Modeling of Radio Source Time Series as Linear Splines

    Science.gov (United States)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2016-12-01

    Many of the time series of radio sources observed in geodetic VLBI show variations, caused mainly by changes in source structure. However, until now it has been common practice to consider source positions as invariant, or to exclude known misbehaving sources from the datum conditions. This may lead to a degradation of the estimated parameters, as unmodeled apparent source position variations can propagate to the other parameters through the least squares adjustment. In this paper we will introduce an automated algorithm capable of parameterizing the radio source coordinates as linear splines.

  7. Complex wavenumber Fourier analysis of the B-spline based finite element method

    Czech Academy of Sciences Publication Activity Database

    Kolman, Radek; Plešek, Jiří; Okrouhlík, Miloslav

    2014-01-01

    Roč. 51, č. 2 (2014), s. 348-359 ISSN 0165-2125 R&D Projects: GA ČR(CZ) GAP101/11/0288; GA ČR(CZ) GAP101/12/2315; GA ČR GPP101/10/P376; GA ČR GA101/09/1630 Institutional support: RVO:61388998 Keywords : elastic wave propagation * dispersion errors * B-spline * finite element method * isogeometric analysis Subject RIV: JR - Other Machinery Impact factor: 1.513, year: 2014 http://www.sciencedirect.com/science/article/pii/S0165212513001479

  8. Generalized Lagrangian Jacobi Gauss collocation method for solving unsteady isothermal gas through a micro-nano porous medium

    Science.gov (United States)

    Parand, Kourosh; Latifi, Sobhan; Delkhosh, Mehdi; Moayeri, Mohammad M.

    2018-01-01

    In the present paper, a new method based on the Generalized Lagrangian Jacobi Gauss (GLJG) collocation method is proposed. The nonlinear Kidder equation, which describes unsteady isothermal gas flow through a micro-nano porous medium, is a second-order two-point boundary value ordinary differential equation on the unbounded interval [0, ∞). Firstly, using the quasilinearization method, the equation is converted to a sequence of linear ordinary differential equations. Then, by using the GLJG collocation method, the problem is reduced to solving a system of algebraic equations. It must be mentioned that this equation is solved without domain truncation and variable changes. A comparison with some numerical solutions is made, and the obtained results indicate that the presented solution is highly accurate. The important value of the initial slope, y'(0), is obtained as -1.191790649719421734122828603800159364 for η = 0.5. Compared to the best result obtained so far, it is accurate to 36 decimal places.

  9. The current strain distribution in the North China Basin of eastern China by least-squares collocation

    Science.gov (United States)

    Wu, J. C.; Tang, H. W.; Chen, Y. Q.; Li, Y. X.

    2006-07-01

    In this paper, the velocities of 154 stations obtained in 2001 and 2003 GPS survey campaigns are applied to formulate a continuous velocity field by the least-squares collocation method. The strain rate field obtained by the least-squares collocation method shows more clear deformation patterns than that of the conventional discrete triangle method. The significant deformation zones obtained are mainly located in three places, to the north of Tangshan, between Tianjing and Shijiazhuang, and to the north of Datong, which agree with the places of the Holocene active deformation zones obtained by geological investigations. The maximum shear strain rate is located at latitude 38.6°N and longitude 116.8°E, with a magnitude of 0.13 ppm/a. The strain rate field obtained can be used for earthquake prediction research in the North China Basin.

  10. Classical solutions of two dimensional Stokes problems on non smooth domains. 2: Collocation method for the Radon equation

    International Nuclear Information System (INIS)

    Lubuma, M.S.

    1991-05-01

    The non-uniquely solvable Radon boundary integral equation for the two-dimensional Stokes-Dirichlet problem on a non-smooth domain is transformed into a well-posed one by a suitable compact perturbation of the velocity double-layer potential operator. The solution to the modified equation is decomposed into a regular part and a finite linear combination of intrinsic singular functions whose coefficients are computed from explicit formulae. Using these formulae, the classical collocation method, defined by continuous piecewise linear vector-valued basis functions, which converges slowly because of the lack of regularity of the solution, is improved into a collocation dual singular function method with optimal rates of convergence for the solution and for the coefficients of singularities. (author). 34 refs.

  11. Runge-Kutta and Hermite Collocation for a biological invasion problem modeled by a generalized Fisher equation

    International Nuclear Information System (INIS)

    Athanasakis, I E; Papadopoulou, E P; Saridakis, Y G

    2014-01-01

    Fisher's equation has been widely used to model the biological invasion of single-species communities in homogeneous one-dimensional habitats. In this study we develop high-order numerical methods to accurately capture the spatiotemporal dynamics of the generalized Fisher equation, a nonlinear reaction-diffusion equation characterized by density-dependent nonlinear diffusion. Working in this direction, we consider strong stability preserving Runge-Kutta (RK) temporal discretization schemes coupled with the Hermite cubic Collocation (HC) spatial discretization method. We investigate their convergence and stability properties to reveal efficient HC-RK pairs for the numerical treatment of the generalized Fisher equation. The Hadamard product is used to characterize the collocation-discretized nonlinear equation terms as a first step towards the treatment of generalized systems of relevant equations. Numerical experimentation is included to demonstrate the performance of the methods.

  12. Two-dimensional Haar wavelet Collocation Method for the solution of Stationary Neutron Transport Equation in a homogeneous isotropic medium

    International Nuclear Information System (INIS)

    Patra, A.; Saha Ray, S.

    2014-01-01

    Highlights: • A stationary transport equation has been solved using the technique of the Haar wavelet Collocation Method. • This paper intends to demonstrate the utility of Haar wavelets for nuclear science problems. • In the present paper, two-dimensional Haar wavelets are applied. • The proposed method is mathematically very simple, easy and fast. - Abstract: This paper emphasizes finding the solution of a stationary transport equation using the technique of the Haar wavelet Collocation Method (HWCM). The Haar wavelet Collocation Method is efficient and powerful for solving a wide class of linear and nonlinear differential equations. Recently, the Haar wavelet transform has gained a reputation as a very effective tool for many practical applications. This paper intends to demonstrate the utility of Haar wavelets for nuclear science problems. In the present paper, two-dimensional Haar wavelets are applied for the solution of the stationary Neutron Transport Equation in a homogeneous isotropic medium. The proposed method is mathematically very simple, easy and fast. To demonstrate the efficiency of the method, one test problem is discussed. It can be observed from the computational simulation that the numerical approximate solution is much closer to the exact solution.

  13. Comparing multiple model-derived aerosol optical properties to spatially collocated ground-based and satellite measurements

    Science.gov (United States)

    Ocko, Ilissa B.; Ginoux, Paul A.

    2017-04-01

    Anthropogenic aerosols are a key factor governing Earth's climate and play a central role in human-caused climate change. However, because of aerosols' complex physical, optical, and dynamical properties, aerosols are one of the most uncertain aspects of climate modeling. Fortunately, aerosol measurement networks over the past few decades have led to the establishment of long-term observations for numerous locations worldwide. Further, the availability of datasets from several different measurement techniques (such as ground-based and satellite instruments) can help scientists increasingly improve modeling efforts. This study explores the value of evaluating several model-simulated aerosol properties with data from spatially collocated instruments. We compare aerosol optical depth (AOD; total, scattering, and absorption), single-scattering albedo (SSA), Ångström exponent (α), and extinction vertical profiles in two prominent global climate models (Geophysical Fluid Dynamics Laboratory, GFDL, CM2.1 and CM3) to seasonal observations from collocated instruments (AErosol RObotic NETwork, AERONET, and Cloud-Aerosol Lidar with Orthogonal Polarization, CALIOP) at seven polluted and biomass burning regions worldwide. We find that a multi-parameter evaluation provides key insights on model biases, that data from collocated instruments can reveal underlying aerosol-governing physics, that column properties wash out important vertical distinctions, and that an improved model does not mean all aspects are improved. We conclude that it is important to make use of all available data (parameters and instruments) when evaluating aerosol properties derived by models.

  14. Non-stationary covariance function modelling in 2D least-squares collocation

    Science.gov (United States)

    Darbeheshti, N.; Featherstone, W. E.

    2009-06-01

    Standard least-squares collocation (LSC) assumes 2D stationarity and 3D isotropy, and relies on a covariance function to account for spatial dependence in the observed data. However, the assumption that the spatial dependence is constant throughout the region of interest may sometimes be violated. Assuming a stationary covariance structure can result in over-smoothing of, e.g., the gravity field in mountains and under-smoothing in great plains. We introduce the kernel convolution method from spatial statistics for non-stationary covariance structures, and demonstrate its advantage for dealing with non-stationarity in geodetic data. We then compared stationary and non-stationary covariance functions in 2D LSC to the empirical example of gravity anomaly interpolation near the Darling Fault, Western Australia, where the field is anisotropic and non-stationary. The results with non-stationary covariance functions are better than standard LSC in terms of formal errors and cross-validation against data not used in the interpolation, demonstrating that the use of non-stationary covariance functions can improve upon standard (stationary) LSC.
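    For context, a minimal stationary least-squares collocation predictor with a Gaussian covariance model is sketched below; the kernel-convolution construction of non-stationary covariances that the record advocates is not implemented, and all numbers are synthetic.

        import numpy as np

        # Least-squares collocation: predict a signal at new points from noisy
        # observations, using a stationary, isotropic Gaussian covariance model.
        def lsc_predict(obs_xy, obs_val, new_xy, c0=1.0, corr_len=50.0, noise=0.1):
            def cov(a, b):
                d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
                return c0 * np.exp(-d2 / (2.0 * corr_len ** 2))
            Cxx = cov(obs_xy, obs_xy) + noise ** 2 * np.eye(len(obs_xy))
            Csx = cov(new_xy, obs_xy)
            return Csx @ np.linalg.solve(Cxx, obs_val)

        # Toy example: interpolate scattered values onto a small grid.
        rng = np.random.default_rng(6)
        stations = rng.uniform(0.0, 200.0, size=(30, 2))
        values = np.sin(stations[:, 0] / 60.0) + 0.05 * rng.normal(size=30)
        grid = np.array([[x, y] for x in (50.0, 100.0, 150.0) for y in (50.0, 100.0, 150.0)])
        print(lsc_predict(stations, values, grid))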

  15. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    Directory of Open Access Journals (Sweden)

    Qingzhou Mao

    2015-06-01

    Full Text Available In environments that are hostile to Global Navigation Satellite Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, the proposed LSC-based method effectively corrects these errors using LiDAR control points, thereby improving the accuracy of the MLS. This method is also applied to the calibration of misalignment between the laser scanner and the POS. Several datasets from different scenarios have been adopted in order to evaluate the effectiveness of the proposed method. The results from experiments indicate that this method represents a significant improvement in the accuracy of the MLS in environments that are hostile to GNSS and is also effective regarding the calibration of misalignment.

  16. Structure soil structure interaction effects: Seismic analysis of safety related collocated concrete structures

    International Nuclear Information System (INIS)

    Joshi, J.R.

    2000-01-01

    The Process, Purification and Stack Buildings are collocated safety related concrete shear wall structures with plan dimensions in excess of 100 feet. An important aspect of their seismic analysis was the determination of structure soil structure interaction (SSSI) effects, if any. The SSSI analysis of the Process Building, with one other building at a time, was performed with the SASSI computer code for up to 50 frequencies. Each combined model had about 1500 interaction nodes. Results of the SSSI analysis were compared with those from soil structure interaction (SSI) analysis of the individual buildings, done with ABAQUS and SASSI codes, for three parameters: peak accelerations, seismic forces and the in-structure floor response spectra (FRS). The results may be of wider interest due to the model size and the potential applicability to other deep soil layered sites. Results obtained from the ABAQUS analysis were consistently higher, as expected, than those from the SSI and SSSI analyses using the SASSI. The SSSI effect between the Process and Purification Buildings was not significant. The Process and Stack Building results demonstrated that under certain conditions a massive structure can have an observable effect on the seismic response of a smaller and less stiff structure

  17. On the optimal polynomial approximation of stochastic PDEs by galerkin and collocation methods

    KAUST Repository

    Beck, Joakim; Tempone, Raul; Nobile, Fabio; Tamellini, Lorenzo

    2012-01-01

    In this work we focus on the numerical approximation of the solution u of a linear elliptic PDE with stochastic coefficients. The problem is rewritten as a parametric PDE and the functional dependence of the solution on the parameters is approximated by multivariate polynomials. We first consider the stochastic Galerkin method, and rely on sharp estimates for the decay of the Fourier coefficients of the spectral expansion of u on an orthogonal polynomial basis to build a sequence of polynomial subspaces that features better convergence properties, in terms of error versus number of degrees of freedom, than standard choices such as Total Degree or Tensor Product subspaces. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids. Numerical results show the effectiveness of the newly introduced polynomial spaces and sparse grids. © 2012 World Scientific Publishing Company.

  18. Hybrid motion sensing and experimental modal analysis using collocated smartphone camera and accelerometers

    International Nuclear Information System (INIS)

    Ozer, Ekin; Feng, Dongming; Feng, Maria Q

    2017-01-01

    State-of-the-art multisensory technologies and heterogeneous sensor networks offer a wide range of response measurement opportunities for structural health monitoring (SHM). Measuring and fusing different physical quantities in terms of structural vibrations can provide alternative acquisition methods and improve the quality of modal testing results. This study focuses on a recently introduced SHM concept, SHM with smartphones, utilizing multisensory smartphone features for a hybridized structural vibration response measurement framework. Based on vibration testing of a small-scale multistory laboratory model, displacement and acceleration responses are monitored using two different smartphone sensors, an embedded camera and an accelerometer, respectively. Double-integration or differentiation among different measurement types is performed to combine multisensory measurements on a comparative basis. In addition, distributed sensor signals from collocated devices are processed for modal identification, and the performance of smartphone-based sensing platforms is tested under different configuration scenarios and heterogeneity levels. The results of these tests show a novel and successful implementation of a hybrid motion sensing platform through the integration of multiple sensor types and devices. Despite the heterogeneity of the motion data obtained from different smartphone devices and technologies, it is shown that multisensory response measurements can be blended for experimental modal analysis. Benefiting from the accessibility of smartphone technology, similar smartphone-based dynamic testing methodologies can provide innovative SHM solutions with mobile, programmable, and cost-free interfaces. (paper)

  19. Hybrid motion sensing and experimental modal analysis using collocated smartphone camera and accelerometers

    Science.gov (United States)

    Ozer, Ekin; Feng, Dongming; Feng, Maria Q.

    2017-10-01

    State-of-the-art multisensory technologies and heterogeneous sensor networks offer a wide range of response measurement opportunities for structural health monitoring (SHM). Measuring and fusing different physical quantities in terms of structural vibrations can provide alternative acquisition methods and improve the quality of modal testing results. This study focuses on a recently introduced SHM concept, SHM with smartphones, utilizing multisensory smartphone features for a hybridized structural vibration response measurement framework. Based on vibration testing of a small-scale multistory laboratory model, displacement and acceleration responses are monitored using two different smartphone sensors, an embedded camera and an accelerometer, respectively. Double-integration or differentiation among different measurement types is performed to combine multisensory measurements on a comparative basis. In addition, distributed sensor signals from collocated devices are processed for modal identification, and the performance of smartphone-based sensing platforms is tested under different configuration scenarios and heterogeneity levels. The results of these tests show a novel and successful implementation of a hybrid motion sensing platform through the integration of multiple sensor types and devices. Despite the heterogeneity of the motion data obtained from different smartphone devices and technologies, it is shown that multisensory response measurements can be blended for experimental modal analysis. Benefiting from the accessibility of smartphone technology, similar smartphone-based dynamic testing methodologies can provide innovative SHM solutions with mobile, programmable, and cost-free interfaces.

  20. Prediction of Navigation Satellite Clock Bias Considering Clock's Stochastic Variation Behavior with Robust Least Square Collocation

    Directory of Open Access Journals (Sweden)

    WANG Yupu

    2016-06-01

    Full Text Available In order to better express the characteristics of satellite clock bias (SCB) and further improve its prediction precision, a new SCB prediction model is proposed, which takes the physical features, cyclic variation, and stochastic variation behavior of the space-borne atomic clock into consideration by using a robust least squares collocation (LSC) method. The proposed model first uses a quadratic polynomial model with periodic terms to fit and abstract the trend and cyclic terms of the SCB. Then, for the residual stochastic variation part and possible gross errors hidden in the SCB data, the model employs a robust LSC method to process them. The covariance function of the LSC is determined by selecting an empirical function and combining SCB prediction tests. Using the final precise IGS SCB products to conduct prediction tests, the results show that the proposed model achieves better prediction performance. Specifically, the prediction accuracy improves by 0.457 ns and 0.948 ns, respectively, and the corresponding prediction stability improves by 0.445 ns and 1.233 ns, compared with the results of the quadratic polynomial model and the grey model. In addition, the results also show that the proposed covariance function corresponding to the new model is reasonable.
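    A minimal sketch of the deterministic first stage only (quadratic trend plus one periodic term fitted by least squares) is given below on synthetic clock-bias data; the residuals would then feed the robust LSC step described in the record, which is not implemented, and the sampling interval, period, and noise level are assumptions.

        import numpy as np

        # Fit the deterministic part of a satellite clock bias series:
        # quadratic trend plus one periodic term.
        rng = np.random.default_rng(7)
        t = np.arange(0.0, 2.0, 1.0 / 96)                  # two days of 15-min epochs (days)
        period = 0.5                                       # assumed half-day periodicity
        scb = 5.0 + 2.0 * t + 0.1 * t ** 2 \
              + 0.3 * np.sin(2 * np.pi * t / period) \
              + rng.normal(scale=0.02, size=t.size)        # synthetic bias (ns)

        A = np.column_stack([
            np.ones_like(t), t, t ** 2,
            np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period),
        ])
        coef, *_ = np.linalg.lstsq(A, scb, rcond=None)
        residual = scb - A @ coef                          # left for the (robust) LSC stage
        print(coef, residual.std())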