WorldWideScience

Sample records for spline regression functions

  1. SPLINE, Spline Interpolation Function

    International Nuclear Information System (INIS)

    Allouard, Y.

    1977-01-01

1 - Nature of physical problem solved: The problem is to obtain an interpolating function, as smooth as possible, that passes through given points. The derivatives of this function are continuous up to order (2Q-1). The program consists of the following two subprograms: ASPLERQ, transport of relations method for spline interpolation functions; SPLQ, spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10
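
    The interpolation problem this record describes -- a curve, as smooth as possible, through given points -- can be illustrated with the common Q=1 case, the natural cubic spline. The sketch below is a generic pure-Python implementation (illustrative, not the ASPLERQ/SPLQ code, which handles arbitrary smoothness order): it imposes natural boundary conditions and solves the tridiagonal system for the second derivatives at the knots.

```python
def natural_cubic_spline(xs, ys):
    """Evaluator for the natural cubic spline through (xs, ys).

    "Natural" boundary conditions set the second derivative to zero at
    the end knots; the interior second derivatives m[1..n-1] solve a
    tridiagonal system (Thomas algorithm below).
    """
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    a = [0.0] * (n + 1)  # sub-diagonal
    b = [1.0] * (n + 1)  # diagonal
    c = [0.0] * (n + 1)  # super-diagonal
    d = [0.0] * (n + 1)  # right-hand side
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i]
                      - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n + 1):  # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n + 1)  # second derivatives at the knots
    m[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):  # back substitution
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]

    def evaluate(x):
        i = n - 1
        for j in range(n):  # locate the interval containing x
            if x <= xs[j + 1]:
                i = j
                break
        t = x - xs[i]
        return (ys[i]
                + t * ((ys[i + 1] - ys[i]) / h[i]
                       - h[i] * (2.0 * m[i] + m[i + 1]) / 6.0)
                + t * t * m[i] / 2.0
                + t ** 3 * (m[i + 1] - m[i]) / (6.0 * h[i]))

    return evaluate
```

    For data lying on a straight line the right-hand side of the system vanishes, so the spline reduces to linear interpolation -- a convenient sanity check.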

  2. Flexible regression models with cubic splines.

    Science.gov (United States)

    Durrleman, S; Simon, R

    1989-05-01

    We describe the use of cubic splines in regression models to represent the relationship between the response variable and a vector of covariates. This simple method can help prevent the problems that result from inappropriate linearity assumptions. We compare restricted cubic spline regression to non-parametric procedures for characterizing the relationship between age and survival in the Stanford Heart Transplant data. We also provide an illustrative example in cancer therapeutics.
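
    Restricted cubic splines of the kind Durrleman and Simon describe constrain the fit to be linear beyond the boundary knots. A minimal sketch of one common parameterization (the truncated-power form; the function name and knot placement here are illustrative, not the paper's code):

```python
def rcs_basis(x, knots):
    """Restricted cubic spline basis in truncated-power form.

    Returns [x, B_1(x), ..., B_{k-2}(x)] for knots t_1 < ... < t_k.
    Each B_j combines truncated cubics so that the cubic and quadratic
    terms cancel beyond the boundary knots -- the "restriction" that
    keeps the tails linear.
    """
    t = knots
    k = len(t)

    def p3(u):  # truncated cubic (u)_+^3
        return max(u, 0.0) ** 3

    cols = [x]
    for j in range(k - 2):
        cols.append(p3(x - t[j])
                    - p3(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                    + p3(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
    return cols
```

    Feeding these columns to any ordinary least-squares routine yields a restricted cubic spline regression; below the first knot the basis is identically zero apart from the linear term, and beyond the last knot the curve continues linearly.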

  3. Segmented Regression Based on B-Splines with Solved Examples

    Directory of Open Access Journals (Sweden)

    Miloš Kaňka

    2015-12-01

Full Text Available The subject of the paper is segmented linear, quadratic, and cubic regression based on B-spline basis functions. In this article we present the formulas for the computation of B-splines of order one, two, and three that are needed to construct linear, quadratic, and cubic regression. We list some interesting properties of these functions. For a clearer understanding we give solutions to a couple of elementary exercises involving these functions.
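
    The B-splines underlying this kind of segmented regression are conventionally computed with the Cox-de Boor recursion. A short pure-Python sketch (illustrative, not the paper's own formulas; here `k` is the spline order, i.e. degree k - 1):

```python
def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: the i-th B-spline of order k
    (degree k - 1) on knot vector t, evaluated at x."""
    if k == 1:  # order one: indicator of the knot interval
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + k - 1] != t[i]:
        left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k] != t[i + 1]:
        right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right
```

    One of the properties worth verifying is partition of unity: within the valid range of the knot vector, the basis functions of a given order sum to one, so a regression on them reproduces constants exactly.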

  4. Marginal longitudinal semiparametric regression via penalized splines

    KAUST Repository

    Al Kadiri, M.

    2010-08-01

We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been made, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.

  5. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise-defined functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  6. Piecewise linear regression splines with hyperbolic covariates

    International Nuclear Information System (INIS)

    Cologne, John B.; Sposto, Richard

    1992-09-01

    Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
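
    The hyperbolic-covariate idea can be sketched directly: the kink term |x - c| of a broken-stick regression is replaced by the smooth hyperbola sqrt((x - c)^2 + gamma^2), which approaches |x - c| as gamma shrinks. A hypothetical two-segment version in Python (names and parameterization are illustrative, not the authors' code):

```python
import math

def hyperbolic_covariate(x, join, gamma):
    """Smooth stand-in for the kink term |x - join| of a broken-stick
    model: sqrt((x - join)^2 + gamma^2) -> |x - join| as gamma -> 0."""
    return math.sqrt((x - join) ** 2 + gamma ** 2)

def two_phase_mean(x, b0, b1, b2, join, gamma):
    """Two linear phases with slopes b1 - b2 and b1 + b2 far from the
    join, connected smoothly; gamma controls the curvature there."""
    return b0 + b1 * (x - join) + b2 * hyperbolic_covariate(x, join, gamma)
```

    Fitting b0, b1, b2, the join point, and gamma by nonlinear least squares gives the smoothly joined segments described above, with gamma estimating the degree of curvature at the transition.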

  7. Genetic evaluation and selection response for growth in meat-type quail through random regression models using B-spline functions and Legendre polynomials.

    Science.gov (United States)

    Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M

    2018-04-01

The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials, B-spline functions and multi-trait models, aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions with quadratic and cubic polynomials for each individual segment, and using Legendre polynomials for age. Residual variances were grouped in four age classes. For the B-splines, direct additive genetic and permanent environmental effects were modeled using 2 to 4 segments; for the Legendre polynomials, orders of fit ranged from 2 to 4. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials presented an underestimation of the residual variance. Lower heritability estimates were observed for multi-trait models in comparison with RRM for the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. Genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW in the evaluated ages was greater for RRM than for multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and four segments were the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in genetic evaluation of breeding programs.

  8. Preference learning with evolutionary Multivariate Adaptive Regression Spline model

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll

    2015-01-01

    This paper introduces a novel approach for pairwise preference learning through combining an evolutionary method with Multivariate Adaptive Regression Spline (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing...... for function approximation as well as being relatively easy to interpret. MARS models are evolved based on their efficiency in learning pairwise data. The method is tested on two datasets that collectively provide pairwise preference data of five cognitive states expressed by users. The method is analysed...

  9. Random regression analyses using B-splines to model growth of Australian Angus cattle

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2005-09-01

Full Text Available Abstract Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between the level of detail of the model, the number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error.

  10. Semisupervised feature selection via spline regression for video semantic recognition.

    Science.gov (United States)

    Han, Yahong; Yang, Yi; Yan, Yan; Ma, Zhigang; Sebe, Nicu; Zhou, Xiaofang

    2015-02-01

To improve both the efficiency and accuracy of video semantic recognition, we can perform feature selection on the extracted video features to select a subset of features from the high-dimensional feature set for a compact and accurate video data representation. When the number of labeled videos is small, supervised feature selection could fail to identify the relevant features that are discriminative to target classes. In many applications, abundant unlabeled videos are easily accessible. This motivates us to develop semisupervised feature selection algorithms to better identify the relevant video features, which are discriminative to target classes, by effectively exploiting the information underlying the huge amount of unlabeled video data. In this paper, we propose a framework of video semantic recognition by semisupervised feature selection via spline regression (S²FS²R). Two scatter matrices are combined to capture both the discriminative information and the local geometry structure of labeled and unlabeled training videos: a within-class scatter matrix encoding discriminative information of labeled training videos, and a spline scatter output from a local spline regression encoding data distribution. An l2,1-norm is imposed as a regularization term on the transformation matrix to ensure it is sparse in rows, making it particularly suitable for feature selection. To efficiently solve S²FS²R, we develop an iterative algorithm and prove its convergence. In the experiments, three typical tasks of video semantic recognition, namely video concept detection, video classification, and human action recognition, are used to demonstrate that the proposed S²FS²R achieves better performance compared with the state-of-the-art methods.
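
    The row-sparsity argument for the l2,1 regularizer is easy to see numerically: the norm sums the Euclidean lengths of the rows of the transformation matrix, so penalizing it drives entire rows (i.e., entire features) to zero. A tiny illustrative sketch, not the paper's algorithm:

```python
import math

def l21_norm(W):
    """l_{2,1} norm of a matrix: the sum of the Euclidean norms of its
    rows. Unlike an entrywise l1 penalty, shrinking this norm zeroes
    out whole rows at once, which is what makes it suitable for
    feature selection."""
    return sum(math.sqrt(sum(v * v for v in row)) for row in W)
```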

  11. Univariate Cubic L1 Interpolating Splines: Spline Functional, Window Size and Analysis-based Algorithm

    Directory of Open Access Journals (Sweden)

    Shu-Cherng Fang

    2010-08-01

    Full Text Available We compare univariate L1 interpolating splines calculated on 5-point windows, on 7-point windows and on global data sets using four different spline functionals, namely, ones based on the second derivative, the first derivative, the function value and the antiderivative. Computational results indicate that second-derivative-based 5-point-window L1 splines preserve shape as well as or better than the other types of L1 splines. To calculate second-derivative-based 5-point-window L1 splines, we introduce an analysis-based, parallelizable algorithm. This algorithm is orders of magnitude faster than the previously widely used primal affine algorithm.

  12. Placing Spline Knots in Neural Networks Using Splines as Activation Functions

    Czech Academy of Sciences Publication Activity Database

    Hlaváčková, Kateřina; Verleysen, M.

    1997-01-01

Roč. 17, 3/4 (1997), s. 159-166 ISSN 0925-2312 R&D Projects: GA ČR GA201/93/0427; GA ČR GA201/96/0971 Keywords: cubic-spline function * approximation error * knots of spline function * feedforward neural network Impact factor: 0.422, year: 1997

  13. Cubic spline functions for curve fitting

    Science.gov (United States)

    Young, J. D.

    1972-01-01

FORTRAN cubic spline routine mathematically fits a curve through a given ordered set of points so that the fitted curve closely approximates the curve generated by passing an infinitely thin spline through the set of points. The generalized formulation includes trigonometric, hyperbolic, and damped cubic spline fits of third order.

  14. Non-Stationary Hydrologic Frequency Analysis using B-Splines Quantile Regression

    Science.gov (United States)

    Nasri, B.; St-Hilaire, A.; Bouezmarni, T.; Ouarda, T.

    2015-12-01

Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic structures and water resources systems under the assumption of stationarity. However, with increasing evidence of a changing climate, it is possible that the assumption of stationarity is no longer valid and the results of conventional analysis would become questionable. In this study, we consider a framework for frequency analysis of extreme flows based on B-spline quantile regression, which allows modelling of non-stationary data that depend on covariates; the dependence may be linear or nonlinear. A Markov chain Monte Carlo (MCMC) algorithm is used to estimate quantiles and their posterior distributions. A coefficient of determination for quantile regression is proposed to evaluate the estimation of the proposed model at each quantile level. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in these variables and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharge with high annual non-exceedance probabilities. Keywords: Quantile regression, B-spline functions, MCMC, Streamflow, Climate indices, Non-stationarity.

  15. Using Spline Regression in Semi-Parametric Stochastic Frontier Analysis: An Application to Polish Dairy Farms

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

-parametric regression based on kernel estimators. This approach combines the virtues of the DEA and the SFA, while avoiding their drawbacks: it avoids the specification of a functional form and at the same time accounts for statistical noise. More recently, this approach was used by Henderson and Simar (2005...... is criticised, because it cannot account for statistical noise such as random production shocks and measurement errors, which are inherent in more or less all production data sets. In contrast, the SFA is criticised, because it requires the specification of a functional form, which involves the risk......), Kumbhakar et al. (2007), and Henningsen and Kumbhakar (2009). The aim of this paper and its main contribution to the existing literature is the estimation of semi-parametric stochastic frontier models using a different non-parametric estimation technique: spline regression (Ma et al. 2011). We apply...

  16. A Spline-Based Lack-Of-Fit Test for Independent Variable Effect in Poisson Regression.

    Science.gov (United States)

    Li, Chin-Shang; Tu, Wanzhu

    2007-05-01

In regression analysis of count data, independent variables are often modeled by their linear effects under the assumption of log-linearity. In reality, the validity of such an assumption is rarely tested, and its use is at times unjustifiable. A lack-of-fit test is proposed for the adequacy of a postulated functional form of an independent variable within the framework of semiparametric Poisson regression models based on penalized splines. It offers added flexibility in accommodating the potentially non-loglinear effect of the independent variable. A likelihood ratio test is constructed for the adequacy of the postulated parametric form, for example log-linearity, of the independent variable effect. Simulations indicate that the proposed model performs well, and that a misspecified parametric model has much reduced power. An example is given.

  17. Biomechanical Analysis with Cubic Spline Functions

    Science.gov (United States)

    McLaughlin, Thomas M.; And Others

    1977-01-01

    Results of experimentation suggest that the cubic spline is a convenient and consistent method for providing an accurate description of displacement-time data and for obtaining the corresponding time derivatives. (MJB)

  18. Using Spline Regression in Semi-Parametric Stochastic Frontier Analysis: An Application to Polish Dairy Farms

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

The estimation of the technical efficiency comprises a vast literature in the field of applied production economics. There are two predominant approaches: the non-parametric and non-stochastic Data Envelopment Analysis (DEA) and the parametric Stochastic Frontier Analysis (SFA). The DEA...... of specifying an unsuitable functional form and thus, model misspecification and biased parameter estimates. Given these problems of the DEA and the SFA, Fan, Li and Weersink (1996) proposed a semi-parametric stochastic frontier model that estimates the production function (frontier) by non-parametric......), Kumbhakar et al. (2007), and Henningsen and Kumbhakar (2009). The aim of this paper and its main contribution to the existing literature is the estimation of semi-parametric stochastic frontier models using a different non-parametric estimation technique: spline regression (Ma et al. 2011). We apply...

  19. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95% CI 0.64 to 0.68; p < 0.001), which we modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed-effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves rather than on the parameters themselves.

  20. A smoothing algorithm using cubic spline functions

    Science.gov (United States)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.

  1. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines

    Directory of Open Access Journals (Sweden)

    Laura M. Grajeda

    2016-01-01

Full Text Available Abstract Background Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. Methods We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Results Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95% CI 0.64 to 0.68; p < 0.001), which we modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height.
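
    A cubic regression spline of the kind used in this growth model can be written with the truncated power basis and fit by ordinary least squares; the mixed-effect and autocorrelation structure of the paper is beyond a short sketch, but the fixed-effects backbone looks like this (illustrative pure Python; a real analysis would use a statistics package):

```python
def cubic_spline_design(xs, knots):
    """Truncated power basis for a cubic regression spline:
    1, x, x^2, x^3, plus (x - k)^3_+ for each interior knot k."""
    rows = []
    for x in xs:
        row = [1.0, x, x ** 2, x ** 3]
        row += [max(x - k, 0.0) ** 3 for k in knots]
        rows.append(row)
    return rows

def ols(X, y):
    """Ordinary least squares via the normal equations, solved with
    Gaussian elimination with partial pivoting (adequate for this
    small, well-scaled demo)."""
    n, p = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    for i in range(p):  # forward elimination
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):  # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, p))) / A[i][i]
    return beta
```

    With noise-free data generated from known coefficients, the fit recovers them, which is a quick correctness check for the basis construction.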

  2. Application of Semiparametric Spline Regression Model in Analyzing Factors that Influence Population Density in Central Java

    Science.gov (United States)

    Sumantari, Y. D.; Slamet, I.; Sugiyanto

    2017-06-01

Semiparametric regression is a statistical analysis method that combines parametric and nonparametric regression. There are various approach techniques in nonparametric regression; one of them is the spline. Central Java is one of the most densely populated provinces in Indonesia. Population density in this province can be modeled by semiparametric regression because it involves both parametric and nonparametric components. Therefore, the purpose of this paper is to determine the factors that influence population density in Central Java using the semiparametric spline regression model. The result shows that the factors which influence population density in Central Java are the number of active Family Planning (FP) participants and the district minimum wage.

  3. Non-stationary hydrologic frequency analysis using B-spline quantile regression

    Science.gov (United States)

    Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.

    2017-11-01

Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, it is possible that the assumption of stationarity, which is a prerequisite for traditional frequency analysis, is no longer valid, and hence the results of conventional analysis would become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows modelling of data in the presence of non-stationarity and/or dependence on covariates, with linear and non-linear dependence. A Markov chain Monte Carlo (MCMC) algorithm was used to estimate quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used to select the best model, i.e. for each quantile we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharge with high annual non-exceedance probabilities.

  4. PM10 modeling in the Oviedo urban area (Northern Spain) by using multivariate adaptive regression splines

    Science.gov (United States)

    Nieto, Paulino José García; Antón, Juan Carlos Álvarez; Vilán, José Antonio Vilán; García-Gonzalo, Esperanza

    2014-10-01

The aim of this research work is to build a regression model of particulate matter up to 10 micrometers in size (PM10) by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (Northern Spain) at local scale. MARS is a nonparametric regression algorithm with the ability to approximate the relationship between the inputs and outputs, and to express that relationship mathematically. In this sense, hazardous air pollutants or toxic air contaminants refer to any substance that may cause or contribute to an increase in mortality or serious illness, or that may pose a present or potential hazard to human health. To accomplish the objective of this study, experimental datasets of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3) and dust (PM10) were collected over 3 years (2006-2008) and used to create a highly nonlinear model of PM10 in the Oviedo urban nucleus based on the MARS technique. One main objective of this model is to obtain a preliminary estimate of the dependence of PM10 on the other measured pollutants in the Oviedo urban area at local scale. A second aim is to determine the factors with the greatest bearing on air quality with a view to proposing health and lifestyle improvements. The United States National Ambient Air Quality Standards (NAAQS) establish the limit values of the main pollutants in the atmosphere in order to protect public health. Firstly, this MARS regression model captures the main insights of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of
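
    MARS models like the one described here are built from hinge functions max(0, x - t) and max(0, t - x); the forward pass adds such terms (and their products, for interactions) greedily. A minimal illustrative evaluator, not the authors' fitted model:

```python
def hinge(x, t, sign):
    """MARS basis function: max(0, x - t) for sign = +1,
    max(0, t - x) for sign = -1, with knot t."""
    return max(0.0, sign * (x - t))

def mars_predict(x, intercept, terms):
    """Evaluate an additive MARS-style model: the intercept plus
    weighted hinge terms given as (weight, knot, sign) triples.
    Products of hinges would add interaction terms."""
    return intercept + sum(w * hinge(x, t, s) for w, t, s in terms)
```

    The piecewise-linear shape produced by summed hinges is what makes MARS models simple to read off: each term contributes a slope change at its knot.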

  5. ESTIMATION OF GENETIC PARAMETERS IN TROPICARNE CATTLE WITH RANDOM REGRESSION MODELS USING B-SPLINES

    Directory of Open Access Journals (Sweden)

    Joel Domínguez Viveros

    2015-04-01

Full Text Available The objectives were to estimate variance components, and direct (h2) and maternal (m2) heritability for growth in Tropicarne cattle, based on a random regression model using B-splines to model the random effects. Information from 12 890 monthly weighings of 1787 calves, from birth to 24 months of age, was analyzed. The pedigree included 2504 animals. The random effects model included genetic and permanent environmental effects (direct and maternal) of cubic order, and residuals. The fixed effects included contemporary groups (year-season of weighing), sex and the covariate age of the cow (linear and quadratic). The B-splines were defined with four knots across the growth period analyzed. Analyses were performed with the software Wombat. The phenotypic and residual variances presented a similar behavior: from 7 to 12 months of age they had a negative trend; from birth to 6 months and from 13 to 18 months a positive trend; after 19 months they remained constant. The m2 estimates were low and near zero, with an average of 0.06 in an interval of 0.04 to 0.11; the h2 estimates were also close to zero, with an average of 0.10 in an interval of 0.03 to 0.23.

  6. The Norwegian Healthier Goats program--modeling lactation curves using a multilevel cubic spline regression model.

    Science.gov (United States)

    Nagel-Alne, G E; Krontveit, R; Bohlin, J; Valle, P S; Skjerve, E; Sølverød, L S

    2014-07-01

In 2001, the Norwegian Goat Health Service initiated the Healthier Goats program (HG), with the aim of eradicating caprine arthritis encephalitis, caseous lymphadenitis, and Johne's disease (caprine paratuberculosis) in Norwegian goat herds. The aim of the present study was to explore how control and eradication of the above-mentioned diseases by enrolling in HG affected milk yield by comparison with herds not enrolled in HG. Lactation curves were modeled using a multilevel cubic spline regression model where farm, goat, and lactation were included as random effect parameters. The data material contained 135,446 registrations of daily milk yield from 28,829 lactations in 43 herds. The multilevel cubic spline regression model was applied to 4 categories of data: enrolled early, control early, enrolled late, and control late. For enrolled herds, the early and late notations refer to the situation before and after enrolling in HG; for nonenrolled herds (controls), they refer to development over time, independent of HG. Total milk yield increased in the enrolled herds after eradication: the total milk yields in the fourth lactation were 634.2 and 873.3 kg in enrolled early and enrolled late herds, respectively, and 613.2 and 701.4 kg in the control early and control late herds, respectively. Day of peak yield differed between enrolled and control herds. The day of peak yield came on d 6 of lactation for the control early category for parities 2, 3, and 4, indicating an inability of the goats to further increase their milk yield from the initial level. For enrolled herds, on the other hand, peak yield came between d 49 and 56, indicating a gradual increase in milk yield after kidding. Our results indicate that enrollment in the HG disease eradication program improved the milk yield of dairy goats considerably, and that the multilevel cubic spline regression was a suitable model for exploring effects of disease control and eradication on milk yield.

  7. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX.

    Science.gov (United States)

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    We propose a generalized partially linear functional single-index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and B-splines to estimate the unspecified function of the single-index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large-sample properties of the estimation procedure and show the different convergence rates of the components of the model. The asymptotic properties that arise when the kernel and regression spline methods are combined in a nested fashion had not been studied prior to this work, even in the independent-data case.

  8. Quantile regression and restricted cubic splines are useful for exploring relationships between continuous variables.

    Science.gov (United States)

    Marrie, Ruth Ann; Dawson, Neal V; Garland, Allan

    2009-05-01

    Ordinary least squares (OLS) regression, commonly called linear regression, is often used to assess, or adjust for, the relationship between a continuous independent variable and the mean of a continuous dependent variable, implicitly assuming a linear relationship between them. Linearity may not hold, however, and analyzing the mean of the dependent variable may not capture the full nature of such relationships. Our goal is to demonstrate how combined use of quantile regression and restricted cubic splines (RCS) can reveal the true nature and complexity of relationships between continuous variables. We provide a review of methodologic concepts, followed by two examples using real data sets. In the first example, we analyzed the relationship between cognition and disease duration in multiple sclerosis. In the second example, we analyzed the relationship between length of stay (LOS) and severity of illness in the intensive care unit (ICU). In both examples, quantile regression showed that the relationship between the variables of interest was heterogeneous. In the second example, RCS uncovered nonlinearity of the relationship between severity of illness and length of stay. Together, quantile regression and RCS are a powerful combination for exploring relationships between continuous variables.
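
    The combination the abstract describes can be sketched in a few lines: build a restricted cubic spline basis (Harrell's parameterization, cubic between the knots and linear in the tails) and minimize the pinball loss for the quantile of interest. A minimal illustration on synthetic data; the knot placement, noise model, and optimizer below are arbitrary choices, not those of the cited study.

```python
import numpy as np
from scipy.optimize import minimize

def rcs_basis(x, knots):
    """Restricted cubic spline basis: cubic between knots, linear in the tails."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    pos3 = lambda u: np.where(u > 0.0, u, 0.0) ** 3
    denom = t[-1] - t[-2]
    cols = [x]  # the linear term
    for j in range(len(t) - 2):
        cols.append(pos3(x - t[j])
                    - pos3(x - t[-2]) * (t[-1] - t[j]) / denom
                    + pos3(x - t[-1]) * (t[-2] - t[j]) / denom)
    return np.column_stack(cols)

def fit_quantile(X, y, q):
    """Minimize the pinball (check) loss to estimate the conditional q-quantile."""
    Xd = np.column_stack([np.ones(len(y)), X])
    pinball = lambda b: np.mean(np.where(y - Xd @ b >= 0,
                                         q * (y - Xd @ b),
                                         (q - 1) * (y - Xd @ b)))
    return minimize(pinball, np.zeros(Xd.shape[1]), method="Powell").x

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 400)
y = np.sin(x) + rng.normal(0.0, 0.3 + 0.05 * x)   # heteroscedastic noise
knots = np.percentile(x, [5, 35, 65, 95])
X = rcs_basis(x, knots)
beta_med = fit_quantile(X, y, 0.5)   # conditional median
beta_p90 = fit_quantile(X, y, 0.9)   # upper tail, which may behave differently
```

    Because the basis is linear beyond the boundary knots, the fitted quantile curves cannot oscillate wildly in the data-sparse tails, which is the practical appeal of RCS over unrestricted cubics.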

  9. A Note on Penalized Regression Spline Estimation in the Secondary Analysis of Case-Control Data

    KAUST Repository

    Gazioglu, Suzan

    2013-05-25

    Primary analysis of case-control studies focuses on the relationship between disease (D) and a set of covariates of interest (Y, X). A secondary application of the case-control study, often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated due to the case-control sampling, and to avoid the biased sampling that arises from the design, it is typical to use the control data only. In this paper, we develop penalized regression spline methodology that uses all the data, and improves precision of estimation compared to using only the controls. A simulation study and an empirical example are used to illustrate the methodology.
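
    A penalized regression spline in the Eilers-Marx style (a B-spline basis plus a difference penalty on the coefficients) can be sketched as follows. This is a generic P-spline smoother, not the case-control estimator of the paper; the basis size and penalty weight are illustrative choices.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, n_bases=20, degree=3):
    """B-spline design matrix on a clamped, equally spaced knot grid."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    inner = np.linspace(lo, hi, n_bases - degree + 1)
    t = np.r_[[lo] * degree, inner, [hi] * degree]   # clamped knot vector
    B = np.empty((len(x), n_bases))
    for i in range(n_bases):                          # evaluate each basis function
        c = np.zeros(n_bases); c[i] = 1.0
        B[:, i] = BSpline(t, c, degree)(x)
    return B

def pspline_fit(x, y, lam=0.1, n_bases=20, degree=3):
    """Penalized least squares with a second-order difference penalty."""
    B = bspline_design(x, n_bases, degree)
    D = np.diff(np.eye(n_bases), n=2, axis=0)         # difference penalty matrix
    coef = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    return B @ coef

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 300))
truth = np.sin(2.0 * np.pi * x)
y = truth + rng.normal(0.0, 0.2, 300)
smooth = pspline_fit(x, y, lam=0.1)
```

    The penalty weight trades smoothness against fidelity; in practice it would be chosen by cross-validation or a mixed-model criterion.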

  10. USING SPLINE FUNCTIONS FOR THE SUBSTANTIATION OF TAX POLICIES BY LOCAL AUTHORITIES

    Directory of Open Access Journals (Sweden)

    Otgon Cristian

    2011-07-01

    Full Text Available The paper approaches innovative financial instruments for the management of public resources. Among these innovative tools are polynomial spline functions, used for budgetary sizing in the substantiation of fiscal and budgetary policies. Using polynomial spline functions involved several steps: establishing the nodes, calculating the specific coefficients of the spline functions, and developing and determining the errors of approximation. The paper also extrapolates a series of property tax data using first-order polynomial spline functions. The implementation used two data series, one for property tax as the outcome variable and one for building tax, yielding a correlation indicator R = 0.95. Moreover, the spline functions are easy to compute and, owing to their small approximation errors, have much better predictive power than the ordinary least squares method. The research proceeded in several steps, namely observation, construction of the data series, and processing of the data with spline functions. The data form a daily series gathered from the budget account, referring to building tax and property tax. The added value of this paper lies in the possibility of avoiding deficits by using spline functions as innovative instruments in public finance; the original contribution is the averaging of splines resulting from the data series. The research results lead to the conclusion that polynomial spline functions are recommended for the elaboration of fiscal and budgetary policies, owing to the relatively small errors obtained in the extrapolation of economic processes and phenomena. Future research directions include the study of polynomial spline functions of second and third order.
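
    First-order (linear) spline extrapolation of a revenue series, as used in the paper, amounts to extending the last linear segment past the final node. A toy sketch with made-up receipt figures (the values below are placeholders, not the paper's data):

```python
import numpy as np
from scipy.interpolate import interp1d

# Placeholder monthly property-tax receipts, in millions of currency units.
month = np.arange(1, 10)
receipts = np.array([4.1, 4.3, 4.2, 4.6, 4.8, 4.7, 5.0, 5.1, 5.2])

# First-order spline through the observations; fill_value="extrapolate"
# extends the first/last segments beyond the observed range.
spline1 = interp1d(month, receipts, kind="linear", fill_value="extrapolate")
forecast = spline1(np.array([10.0, 11.0]))
```

    The forecast simply continues the slope of the final segment (0.1 per month here), which is the source of both the simplicity and the limitations of order-one splines for extrapolation.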

  11. Evaluation of Logistic Regression and Multivariate Adaptive Regression Spline Models for Groundwater Potential Mapping Using R and GIS

    Directory of Open Access Journals (Sweden)

    Soyoung Park

    2017-07-01

    Full Text Available This study mapped and analyzed groundwater potential using two different models, logistic regression (LR) and multivariate adaptive regression splines (MARS), and compared the results. A spatial database was constructed for groundwater well data and groundwater influence factors. Groundwater well data with a high potential yield of ≥70 m3/d were extracted, and 859 locations (70%) were used for model training, whereas the other 365 locations (30%) were used for model validation. We analyzed 16 groundwater influence factors including altitude, slope degree, slope aspect, plan curvature, profile curvature, topographic wetness index, stream power index, sediment transport index, distance from drainage, drainage density, lithology, distance from fault, fault density, distance from lineament, lineament density, and land cover. Groundwater potential maps (GPMs) were constructed using LR and MARS models and tested using a receiver operating characteristics curve. Based on this analysis, the area under the curve (AUC) for the success rate curve of GPMs created using the MARS and LR models was 0.867 and 0.838, and the AUC for the prediction rate curve was 0.836 and 0.801, respectively. This implies that the MARS model is useful and effective for groundwater potential analysis in the study area.
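
    The core of MARS, used in this and several of the studies below, is a greedy forward search over paired hinge functions max(0, x − c) and max(0, c − x). A one-dimensional toy sketch of the forward pass only (real MARS also considers interactions among predictors and prunes terms backward using generalized cross-validation):

```python
import numpy as np

def forward_mars_1d(x, y, max_terms=6):
    """Greedy MARS forward pass in one dimension: repeatedly add the pair of
    hinge functions whose knot most reduces the residual sum of squares."""
    X = np.ones((len(x), 1))  # start from the intercept
    candidates = list(np.quantile(x, np.linspace(0.05, 0.95, 19)))
    knots = []
    while X.shape[1] < max_terms and candidates:
        best = None
        for c in candidates:
            Xc = np.column_stack([X, np.maximum(0.0, x - c), np.maximum(0.0, c - x)])
            # lstsq tolerates the rank deficiency the paired hinges can introduce
            beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
            rss = np.sum((y - Xc @ beta) ** 2)
            if best is None or rss < best[0]:
                best = (rss, c, Xc)
        _, c, X = best
        knots.append(c)
        candidates.remove(c)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta, sorted(knots)

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 300)
truth = np.abs(x - 0.5)               # piecewise-linear target with a kink at 0.5
y = truth + rng.normal(0.0, 0.05, 300)
fitted, knots = forward_mars_1d(x, y)
```

    Because each basis function is locally supported, the selected knots tend to land where the response actually bends, which is what makes the fitted models easy to interpret.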

  12. Air quality modeling in the Oviedo urban area (NW Spain) by using multivariate adaptive regression splines.

    Science.gov (United States)

    Nieto, P J García; Antón, J C Álvarez; Vilán, J A Vilán; García-Gonzalo, E

    2015-05-01

    The aim of this research work is to build a regression model of air quality by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (northern Spain) at a local scale. To accomplish the objective of this study, an experimental data set of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3), and dust (PM10) was collected over 3 years (2006-2008). The US National Ambient Air Quality Standards (NAAQS) establish limit values for the main pollutants in the atmosphere in order to protect public health. Firstly, the MARS regression model draws on statistical learning theory to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of these numerical calculations using the MARS technique, the conclusions of this research work are presented.

  13. A Parametric Model of the LARCODEMS Heavy Media Separator by Means of Multivariate Adaptive Regression Splines

    Directory of Open Access Journals (Sweden)

    Mario Menéndez Álvarez

    2017-06-01

    Full Text Available Modeling of a cylindrical heavy media separator has been conducted in order to predict its optimum operating parameters. To the authors' knowledge, this is the first such application in the literature. The aim of the present research is to predict the separation efficiency based on the adjustment of the device's dimensions and media flow rates. A variety of heavy media separators exist and are extensively used to separate particles by density, and their application in the recycling sector is of growing importance. The cylindrical variety is reported to be the most suited for processing a large range of particle sizes, but optimizing its operating parameters remains to be documented. The multivariate adaptive regression splines methodology has been applied in order to predict the separation efficiencies using, as inputs, the device dimension and media flow rate variables. The results obtained show that it is possible to predict the device separation efficiency according to the laboratory experiments performed and, therefore, to forecast the results obtainable under different operating conditions.

  14. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age

    Directory of Open Access Journals (Sweden)

    Marko Wilke

    2018-02-01

    Full Text Available This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1–75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php. Keywords: MRI template creation, Multivariate adaptive regression splines, DARTEL, Structural MRI

  15. Transport modeling and multivariate adaptive regression splines for evaluating performance of ASR systems in freshwater aquifers

    Science.gov (United States)

    Forghani, Ali; Peralta, Richard C.

    2017-10-01

    The study presents a procedure using solute transport and statistical models to evaluate the performance of aquifer storage and recovery (ASR) systems designed to earn additional water rights in freshwater aquifers. The recovery effectiveness (REN) index quantifies the performance of these ASR systems. REN is the proportion of the injected water that the same ASR well can recapture during subsequent extraction periods. To estimate REN for individual ASR wells, the presented procedure uses finely discretized groundwater flow and contaminant transport modeling. Then, the procedure uses multivariate adaptive regression splines (MARS) analysis to identify the significant variables affecting REN and the most recovery-effective wells. The operator of the studied 14-well ASR system aims to achieve REN values close to 100%. This recovery is feasible for most of the ASR wells by extracting three times the injectate volume during the same year as injection. Most of the wells would achieve RENs below 75% if extracting merely the same volume as they injected. In other words, recovering almost all the same water molecules that are injected requires having a pre-existing water right to extract groundwater annually. MARS shows that REN correlates most significantly with groundwater flow velocity, or hydraulic conductivity and hydraulic gradient. MARS results also demonstrate that maximizing REN requires utilizing the wells located in areas with background Darcian groundwater velocities less than 0.03 m/d. The study also highlights the superiority of MARS over ordinary multiple linear regression for identifying the wells that can provide the maximum REN. This is the first reported application of MARS for evaluating the performance of an ASR system in freshwater aquifers.

  16. Multivariate adaptive regression splines and neural network models for prediction of pile drivability

    Directory of Open Access Journals (Sweden)

    Wengang Zhang

    2016-01-01

    Full Text Available Piles are long, slender structural elements used to transfer the loads from the superstructure through weak strata onto stiffer soils or rocks. For driven piles, the impact of the piling hammer induces compression and tension stresses in the piles. Hence, an important design consideration is to check that the strength of the pile is sufficient to resist the stresses caused by the impact of the pile hammer. Due to its complexity, pile drivability lacks a precise analytical solution with regard to the phenomena involved. In situations where measured data or numerical hypothetical results are available, neural networks stand out in mapping the nonlinear interactions and relationships between the system's predictors and dependent responses. In addition, unlike most computational tools, no assumption about the mathematical relationship between the dependent and independent variables has to be made. Nevertheless, neural networks have been criticized for their long trial-and-error training process, since the optimal configuration is not known a priori. This paper investigates the use of a fairly simple nonparametric regression algorithm known as multivariate adaptive regression splines (MARS) as an alternative to neural networks, to approximate the relationship between the inputs and the dependent response, and to mathematically interpret the relationship between the various parameters. In this paper, back-propagation neural network (BPNN) and MARS models are developed for assessing pile drivability in relation to the prediction of the maximum compressive stresses (MCS), maximum tensile stresses (MTS), and blows per foot (BPF). A database of more than four thousand piles is utilized for model development and comparative performance between BPNN and MARS predictions.

  17. Predictors of anemia after bariatric surgery using multivariate adaptive regression splines.

    Science.gov (United States)

    Lee, Yi-Chih; Lee, Tian-Shyug; Lee, Wei-Jei; Lin, Yang-Chu; Lee, Chia-Ko; Liew, Phui-Ly

    2012-01-01

    Anemia is the most common nutritional deficiency after bariatric surgery. The predictors of anemia have not been clearly identified. This issue is relevant for selecting an appropriate surgical procedure for morbid obesity. From December 2000 to October 2007, a retrospective study of 442 obese patients after bariatric surgery with two years' follow-up data was conducted. Anemia was defined by hemoglobin (Hb) under 13 g/dL in males and 11.5 g/dL in females. We analyzed the clinical information and laboratory data obtained during the initial evaluation of patients referred to bariatric surgery for predictors of anemia development after surgery. All data were analyzed by using the multivariate adaptive regression splines (MARS) method. The mean age of the patients was 30.8±8.6 years, mean BMI was 40.7±7.8 kg/m2, and preoperative mean hemoglobin was 13.7±1.5 g/dL. The prevalence of anemia increased from 5.4% preoperatively to 38.0% two years after surgery. Mean Hb was significantly lower in patients receiving gastric bypass than in those receiving restrictive-type surgery (11.9 g/dL vs. 13.1 g/dL, p=0.040) two years after surgery. Besides, the optimal preoperative value of hemoglobin to predict future anemia in the MARS model was 15.6 g/dL. The prevalence of anemia increased to 38.0% two years after bariatric surgery. We obtained an optimal preoperative hemoglobin value of 15.6 g/dL to predict postoperative anemia, which is important in the preoperative assessment for bariatric surgery. Patients who underwent gastric bypass surgery developed more severe anemia than those who had gastric banding or sleeve gastrectomy.

  18. USING MULTIVARIATE ADAPTIVE REGRESSION SPLINE AND ARTIFICIAL NEURAL NETWORK TO SIMULATE URBANIZATION IN MUMBAI, INDIA

    Directory of Open Access Journals (Sweden)

    M. Ahmadlou

    2015-12-01

    Full Text Available Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression splines (MARS) and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of the two models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 × 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.

  19. Comprehensive modeling of monthly mean soil temperature using multivariate adaptive regression splines and support vector machine

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2017-07-01

    Soil temperature (Ts) and its thermal regime are the most important factors in plant growth, biological activities, and water movement in soil. Due to the scarcity of Ts data, estimation of soil temperature is an important issue in different fields of science. The main objective of the present study is to investigate the accuracy of multivariate adaptive regression splines (MARS) and support vector machine (SVM) methods for estimating Ts. For this aim, the monthly mean data of Ts (at depths of 5, 10, 50, and 100 cm) and meteorological parameters of 30 synoptic stations in Iran were utilized. To develop the MARS and SVM models, various combinations of minimum, maximum, and mean air temperatures (Tmin, Tmax, T); actual and maximum possible sunshine duration and sunshine duration ratio (n, N, n/N); actual, net, and extraterrestrial solar radiation data (Rs, Rn, Ra); precipitation (P); relative humidity (RH); wind speed at 2 m height (u2); and water vapor pressure (Vp) were used as input variables. Three error statistics, including root-mean-square error (RMSE), mean absolute error (MAE), and the determination coefficient (R2), were used to check the performance of the MARS and SVM models. The results indicated that MARS was superior to SVM at different depths. In the test and validation phases, the most accurate estimations for MARS were obtained at the depth of 10 cm for the Tmax, Tmin, T inputs (RMSE = 0.71 °C, MAE = 0.54 °C, and R2 = 0.995) and for the RH, Vp, P, and u2 inputs (RMSE = 0.80 °C, MAE = 0.61 °C, and R2 = 0.996), respectively.

  20. Using Multivariate Adaptive Regression Spline and Artificial Neural Network to Simulate Urbanization in Mumbai, India

    Science.gov (United States)

    Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.

    2015-12-01

    Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS), and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of the two models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 × 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.

  1. B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy

    Directory of Open Access Journals (Sweden)

    Shilpa Dilipkumar

    2015-03-01

    Full Text Available An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution, and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence, and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) techniques with a quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several imaging modalities of fluorescence microscopy, like selective plane illumination microscopy, localization microscopy, and STED.

  2. Subpixel Snow Cover Mapping from MODIS Data by Nonparametric Regression Splines

    Science.gov (United States)

    Akyurek, Z.; Kuter, S.; Weber, G. W.

    2016-12-01

    Spatial extent of snow cover is often considered one of the key parameters in climatological, hydrological, and ecological modeling due to its energy storage, high reflectance in the visible and NIR regions of the electromagnetic spectrum, significant heat capacity, and insulating properties. A significant challenge in snow mapping by remote sensing (RS) is the trade-off between the temporal and spatial resolution of satellite imagery. In order to tackle this issue, machine learning-based subpixel snow mapping methods from low or moderate resolution images, like Artificial Neural Networks (ANNs), have been proposed. Multivariate Adaptive Regression Splines (MARS) is a nonparametric regression tool that can build flexible models for high-dimensional and complex nonlinear data. Although MARS is not often employed in RS, it has various successful implementations, such as estimation of vertical total electron content in the ionosphere, atmospheric correction, and classification of satellite images. This study is the first attempt in RS to evaluate the applicability of MARS for subpixel snow cover mapping from MODIS data. In total, 16 MODIS-Landsat ETM+ image pairs taken over the European Alps between March 2000 and April 2003 were used in the study. MODIS top-of-atmosphere reflectance, NDSI, NDVI, and land cover classes were used as predictor variables. Cloud-covered, cloud shadow, water, and bad-quality pixels were excluded from further analysis by a spatial mask. MARS models were trained and validated by using reference fractional snow cover (FSC) maps generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also developed. The MARS and ANN models were then compared on independent test areas. The MARS model performed better than the ANN model, with an average RMSE of 0.1288 over the independent test areas, whereas the average RMSE of the ANN model

  3. Functional data analysis of generalized regression quantiles

    KAUST Repository

    Guo, Mengmeng

    2013-11-05

    Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. © 2013 Springer Science+Business Media New York.
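
    The asymmetric-loss idea behind expectiles, and the iterative least asymmetrically weighted squares algorithm the abstract mentions, can be illustrated in the simplest (scalar) case. A hedged sketch; the regression version replaces the weighted mean below with a weighted least squares solve on a spline basis.

```python
import numpy as np

def expectile(y, tau, tol=1e-10, max_iter=200):
    """tau-expectile of a sample by iteratively reweighted least squares:
    observations above the current estimate get weight tau, those below 1 - tau,
    and the estimate is the resulting weighted mean, iterated to convergence."""
    y = np.asarray(y, dtype=float)
    m = y.mean()
    for _ in range(max_iter):
        w = np.where(y > m, tau, 1.0 - tau)
        m_new = np.sum(w * y) / np.sum(w)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

rng = np.random.default_rng(4)
sample = rng.normal(0.0, 1.0, 2000)
e10, e50, e90 = (expectile(sample, t) for t in (0.1, 0.5, 0.9))
```

    Just as quantiles generalize the median, expectiles generalize the mean: the 0.5-expectile is the sample mean, and the family sweeps through the tails as tau moves toward 0 or 1.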

  4. Cubic spline function interpolation in atmosphere models for the software development laboratory: Formulation and data

    Science.gov (United States)

    Kirkpatrick, J. C.

    1976-01-01

    A tabulation of selected altitude-correlated values of pressure, density, speed of sound, and coefficient of viscosity for each of six models of the atmosphere is presented in block data format. Interpolation for the desired atmospheric parameters is performed by using cubic spline functions. The recursive relations necessary to compute the cubic spline function coefficients are derived and implemented in subroutine form. Three companion subprograms, which form the preprocessor and processor, are also presented. These subprograms, together with the data element, compose the spline fit atmosphere package. Detailed FLOWGM flow charts and FORTRAN listings of the atmosphere package are presented in the appendix.
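
    The interpolation step can be reproduced today with SciPy's CubicSpline in place of the hand-derived recursive relations. The altitude/pressure table below is an illustrative exponential profile, not one of the report's six atmosphere models.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative table: pressure (kPa) vs altitude (km), an exponential profile
# with an 8.5 km scale height (placeholder values, not real model data).
alt_km = np.arange(0.0, 31.0, 5.0)
p_kpa = 101.325 * np.exp(-alt_km / 8.5)

spline = CubicSpline(alt_km, p_kpa)   # the tridiagonal coefficient solve happens inside
p12 = spline(12.0)                    # interpolated pressure at 12 km
dp12 = spline.derivative()(12.0)      # and its altitude gradient
```

    For strongly exponential quantities such as pressure, interpolating the logarithm and exponentiating afterwards usually gives smaller interpolation error than splining the raw values.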

  5. Estimates of genetic parameters for Holstein cows for test-day yield traits with a random regression cubic spline model.

    Science.gov (United States)

    DeGroot, B J; Keown, J F; Van Vleck, L D; Kachman, S D

    2007-06-30

    Genetic parameters were estimated with restricted maximum likelihood for individual test-day milk, fat, and protein yields and somatic cell scores with a random regression cubic spline model. Test-day records of Holstein cows that calved from 1994 through early 1999 were obtained from Dairy Records Management Systems in Raleigh, North Carolina, for the analysis. Estimates of heritability for individual test days, and of genetic and phenotypic correlations between test days, were obtained from the estimates of variances and covariances from the cubic spline analysis. Genetic parameters were also calculated for the averages of the test days within each of the ten 30-day test intervals. The model included herd test-day, age at first calving, and bovine somatotropin treatment as fixed factors. Cubic splines were fitted for the overall lactation curve and for random additive genetic and permanent environmental effects, with five predetermined knots (four intervals) at days 0, 50, 135, 220, and 305. Estimates of heritability for lactation one ranged from 0.10 to 0.15, 0.06 to 0.10, 0.09 to 0.15, and 0.02 to 0.06 from test-day one to test-day 10 for milk, fat, and protein yields and somatic cell scores, respectively. Estimates of heritability were greater in lactations two and three, and increased over the course of the lactation. Estimates of genetic and phenotypic correlations were smaller for test days further apart.
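
    A fixed-knot regression cubic spline like the one described (knots at days 0, 50, 135, 220, and 305) can be written with a truncated power basis. The lactation-like target curve below is a made-up smooth function, used only to show the mechanics, not herd data.

```python
import numpy as np

def cubic_spline_design(d, knots):
    """Truncated-power cubic spline design: a global cubic plus one
    (d - k)_+^3 term per interior knot."""
    d = np.asarray(d, dtype=float)
    cols = [np.ones_like(d), d, d ** 2, d ** 3]
    for k in knots[1:-1]:  # interior knots only
        cols.append(np.maximum(0.0, d - k) ** 3)
    return np.column_stack(cols)

knots = [0, 50, 135, 220, 305]   # the abstract's knot placement
days = np.arange(5, 306, 10)
X = cubic_spline_design(days, knots)

# Illustrative smooth lactation-like target (Wilmink-style shape, placeholder values).
target = 17.0 - 10.0 * np.exp(-0.03 * days) - 0.01 * days
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
curve = X @ beta
```

    With the knots fixed in advance, the fit reduces to ordinary least squares in the spline basis, which is what makes the random regression formulation tractable at scale.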

  6. SPLINE-FUNCTIONS IN THE TASK OF THE FLOW AIRFOIL PROFILE

    Directory of Open Access Journals (Sweden)

    Mikhail Lopatjuk

    2013-12-01

    Full Text Available The method and algorithm for solving the flow problem are presented. The Neumann boundary problem is reduced to the solution of integral equations with the given boundary conditions using cubic spline functions.

  7. Vibration Analysis of Suspension Cable with Attached Masses by Non-linear Spline Function Method

    Directory of Open Access Journals (Sweden)

    Qin Jian

    2016-01-01

    Full Text Available The nonlinear strain and stress expressions for a suspension cable are established from the basic conditions of the suspension structure in Lagrange coordinates, and the equilibrium equation of the suspension structure is obtained. The dynamic equations of motion of the suspended cable with attached masses are derived from the virtual work principle. Using spline functions as interpolation functions for the displacement and spatial position, a spline function method for the dynamic equations of the suspension cable is formed in which the stiffness matrix is expressed by spline functions, and a solution method for the stiffness matrix, a matrix assembly method based on spline integration, is put forward, which saves computation time. The vibration frequency of the suspension cable is calculated for different attached masses, which provides a theoretical basis for setting the safety coefficient of the bearing cable of the cableway.

  8. Spline function fit for multi-sets of correlative data

    International Nuclear Information System (INIS)

    Liu Tingjin; Zhou Hongmo

    1992-01-01

    The spline fit method for multi-sets of correlative data is developed. The properties of correlative data fitting are investigated. The data of the ²³Na(n,2n) cross section are fitted in the cases with and without correlation

  9. Random regression models using Legendre polynomials or linear splines for test-day milk yield of dairy Gyr (Bos indicus) cattle.

    Science.gov (United States)

    Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G

    2013-01-01

    Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for additive genetic and permanent environmental effects, indicates that, in practice, the same bulls would be selected by both models. The last model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
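
In the Legendre parameterization compared above, days in milk is first standardized to [-1, 1] and the polynomial values form the covariables of the random regression. A brief sketch (the lactation-length bounds of 5 and 305 days are assumptions for illustration):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(dim, dim_min=5.0, dim_max=305.0, order=3):
    """Columns P_0..P_order of the Legendre basis on days in milk mapped to [-1, 1]."""
    x = 2.0 * (np.asarray(dim, dtype=float) - dim_min) / (dim_max - dim_min) - 1.0
    # legvander returns the matrix [P_0(x), P_1(x), ..., P_order(x)] column by column.
    return legendre.legvander(x, order)

# First, middle, and last test day map to x = -1, 0, and 1, respectively.
Z = legendre_basis([5, 155, 305])
```

These columns play the same role as the spline covariables: each animal's additive genetic and permanent environmental curves are linear combinations of them, with the combination coefficients treated as random effects.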

  10. The analysis of internet addiction scale using multivariate adaptive regression splines.

    Science.gov (United States)

    Kayri, M

    2010-01-01

    Determining the real effects on Internet dependency requires an unbiased and robust statistical method. MARS is a relatively new non-parametric method used in the literature for parameter estimation in cause-and-effect research. MARS can both produce legible model curves and make unbiased parametric predictions. In order to examine the performance of MARS, the MARS findings are compared to Classification and Regression Tree (C&RT) findings, which are considered in the literature to be efficient in revealing correlations between variables. The data set for the study is taken from "The Internet Addiction Scale" (IAS), which attempts to reveal addiction levels of individuals. The population of the study consists of 754 secondary school students (301 female and 443 male students, with 10 missing data). The MARS 2.0 trial version was used for the MARS analysis, and the C&RT analysis was done with SPSS. MARS obtained six basis functions for the model, and from these six functions the regression equation of the model was found. MARS showed that average daily Internet-use time, the purpose of Internet use, students' grade, and mothers' occupation had a significant effect on dependency-level prediction. The study also showed the extent to which each variable considered significant changes the character of the model.

  11. About a family of C2 splines with one free generating function

    Directory of Open Access Journals (Sweden)

    Igor Verlan

    2005-01-01

    Full Text Available The problem of interpolation of a discrete set of data on the interval [a, b] representing the function f is investigated. A family of C² splines with one free generating function is introduced in order to solve this problem. Cubic C² splines belong to this family. The conditions which the generating function must satisfy in order to obtain explicit interpolants are presented, and examples of generating functions are given. Mathematics Subject Classification (2000): 65D05, 65D07, 41A05, 41A15.

  12. Forecasting the daily power output of a grid-connected photovoltaic system based on multivariate adaptive regression splines

    International Nuclear Information System (INIS)

    Li, Yanting; He, Yong; Su, Yan; Shu, Lianjie

    2016-01-01

    Highlights: • Suggests a nonparametric model based on MARS for output power prediction. • Compares the MARS model with a wide variety of prediction models. • Shows that the MARS model provides an overall good performance in both the training and testing stages. - Abstract: Both linear and nonlinear models have been proposed for forecasting the power output of photovoltaic systems. Linear models are simple to implement but less flexible. Due to the stochastic nature of the power output of PV systems, nonlinear models tend to provide better forecasts than linear models. Motivated by this, this paper suggests a fairly simple nonlinear regression model known as multivariate adaptive regression splines (MARS) as an alternative for forecasting solar power output. The MARS model is a data-driven modeling approach without any assumption about the relationship between the power output and predictors. It maintains the simplicity of the classical multiple linear regression (MLR) model while possessing the capability of handling nonlinearity. It is simpler in format than other nonlinear models such as ANN, k-nearest neighbors (KNN), classification and regression tree (CART), and support vector machine (SVM). The MARS model was applied on the daily output of a grid-connected 2.1 kW PV system to provide the 1-day-ahead mean daily forecast of the power output. Comparisons with a wide variety of forecast models show that the MARS model is able to provide reliable forecast performance.
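
MARS builds its fit from mirrored hinge functions max(0, x − t) and max(0, t − x). A stripped-down illustration with a single fixed knot follows; real MARS selects knots and terms adaptively, and the data here are synthetic.

```python
import numpy as np

def hinge(x, t):
    """MARS building block: the hinge function max(0, x - t)."""
    return np.maximum(0.0, x - t)

# Synthetic piecewise-linear target with a kink at x = 3.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 200))
y = 2.0 + 0.5 * x + 1.5 * hinge(x, 3.0)

# A fixed-knot MARS-style model: intercept plus the mirrored hinge pair at the knot.
B = np.column_stack([np.ones_like(x), hinge(x, 3.0), np.maximum(0.0, 3.0 - x)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fitted = B @ coef
```

Because x = 3 + max(0, x − 3) − max(0, 3 − x), the basis reproduces the target exactly; the fitted coefficients come out as [3.5, 2.0, −0.5].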

  13. Cubic spline interpolation of functions with high gradients in boundary layers

    Science.gov (United States)

    Blatov, I. A.; Zadorin, A. I.; Kitaeva, E. V.

    2017-01-01

    The cubic spline interpolation of grid functions with high-gradient regions is considered. Uniform meshes are proved to be inefficient for this purpose. In the case of widely applied piecewise uniform Shishkin meshes, asymptotically sharp two-sided error estimates are obtained in the class of functions with an exponential boundary layer. It is proved that the error estimates of traditional spline interpolation are not uniform with respect to a small parameter, and the error can increase indefinitely as the small parameter tends to zero while the number of nodes N is fixed. A modified cubic interpolation spline is proposed, for which O((ln N/N)⁴) error estimates that are uniform with respect to the small parameter are obtained.
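
The inefficiency of uniform meshes for boundary-layer functions is easy to reproduce numerically. A small sketch interpolating exp(−x/ε) with a fixed number of uniform nodes (the node count and ε values are illustrative choices):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def max_interp_error(eps, n_nodes=11):
    """Max error of cubic-spline interpolation of exp(-x/eps) on a uniform mesh of [0, 1]."""
    nodes = np.linspace(0.0, 1.0, n_nodes)
    f = lambda x: np.exp(-x / eps)
    cs = CubicSpline(nodes, f(nodes))          # interpolant on the uniform mesh
    xf = np.linspace(0.0, 1.0, 2001)           # fine evaluation grid
    return np.max(np.abs(cs(xf) - f(xf)))

err_smooth = max_interp_error(0.5)             # no sharp layer: tiny error
err_layer = max_interp_error(0.01)             # boundary layer of width ~eps: large error
```

With N fixed, the maximum error grows toward O(1) as ε shrinks, which is the non-uniformity the abstract describes; Shishkin meshes avoid this by condensing nodes inside the layer.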

  14. Smoothing X-ray spectra with regression splines and fast Fourier transform; Wygladzanie widm promieniowania X metodami regresyjnych funkcji sklejanych i szybkiej transformaty Fouriera

    Energy Technology Data Exchange (ETDEWEB)

    Antoniak, W.; Urbanski, P. [Institute of Nuclear Chemistry and Technology, Warsaw (Poland)

    1996-12-31

    Regression splines and fast Fourier transform (FFT) methods were used for smoothing the X-ray spectra obtained from proportional counters. The programs for computation and optimization of the smoothed spectra were written in the MATLAB language. It was shown that application of the smoothed spectra in multivariate calibration can result in a considerable reduction of measurement errors. (author). 8 refs, 9 figs.
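
One simple FFT-based smoother of the kind mentioned above zeroes the high-frequency bins of the spectrum. A sketch (the cut-off and noise level are arbitrary choices, not the authors' settings):

```python
import numpy as np

def fft_lowpass(y, keep):
    """Smooth a uniformly sampled signal by zeroing all but the lowest `keep`
    frequency bins -- a crude spectral low-pass filter."""
    Y = np.fft.rfft(y)
    Y[keep:] = 0.0
    return np.fft.irfft(Y, n=len(y))

# Synthetic "spectrum": a slow oscillation buried in noise.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t)
noisy = signal + rng.normal(0.0, 0.3, t.size)
smoothed = fft_lowpass(noisy, keep=8)          # keep bins 0..7 (signal sits in bin 3)
```

The retained low-frequency bins carry the signal while most of the broadband noise power is discarded, so the smoothed trace is much closer to the true curve than the raw one.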

  15. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    Science.gov (United States)

    Michael S. Balshi; A. David McGuire; Paul Duffy; Mike Flannigan; John Walsh; Jerry Melillo

    2009-01-01

    We developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was...

  16. Boosted regression trees, multivariate adaptive regression splines and their two-step combinations with multiple linear regression or partial least squares to predict blood-brain barrier passage: a case study.

    Science.gov (United States)

    Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y

    2008-02-18

    The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structurally and pharmacologically diverse drugs, originating from a structured knowledge-based database. Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that the use of combinations of a linear with a non-linear modeling technique results in some improved properties compared to the individual linear and non-linear models and that, when the use of such a combination is appropriate, combinations using MARS as non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches.

  17. Modeling relationship between mean years of schooling and household expenditure at Central Sulawesi using constrained B-splines (COBS) in quantile regression

    Science.gov (United States)

    Hudoyo, Luhur Partomo; Andriyana, Yudhie; Handoko, Budhi

    2017-03-01

    Quantile regression describes the conditional distribution of a response variable at various desired quantile levels. Each quantile characterizes a certain point (center or tail) of the conditional distribution. This analysis is very useful for asymmetric conditional distributions, e.g., distributions that are dense in the tails, truncated distributions, and data with outliers. One nonparametric approach to estimating the conditional quantile function is Constrained B-Splines (COBS). COBS is a smoothing technique that accommodates additional constraints such as monotonicity, convexity, and periodicity. In this study, we cast the conditional quantile objective function minimized in COBS as a linear programming problem, that is, the problem of minimizing or maximizing a linear function subject to linear constraints, which may be equalities or inequalities. This research discusses the relationship between education (mean years of schooling) and economic (household expenditure) levels in Central Sulawesi Province in 2014, for which household-level data provide more systematic evidence of a positive relationship. Therefore, monotonicity (increasing) constraints are used in the COBS quantile regression model.
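
The reduction of the quantile objective to a linear program can be sketched directly: the check-loss minimization is rewritten with nonnegative slack variables u, v for positive and negative residuals. This is the generic quantile-regression LP, not the COBS implementation (which adds B-spline bases and shape constraints as further linear constraints):

```python
import numpy as np
from scipy.optimize import linprog

def quantile_fit(X, y, tau):
    """Solve the tau-th quantile regression as a linear program:
    minimize tau*sum(u) + (1-tau)*sum(v)  s.t.  X @ b + u - v = y,  u, v >= 0,
    with b split into nonnegative parts b_plus - b_minus."""
    n, p = X.shape
    c = np.concatenate([np.zeros(2 * p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * p + 2 * n), method="highs")
    return res.x[:p] - res.x[p:2 * p]

# Median (tau = 0.5) regression on a small sample with one gross outlier:
x = np.arange(10, dtype=float)
y = 1.0 + 2.0 * x
y[9] += 100.0                      # the outlier barely moves the median fit
X = np.column_stack([np.ones_like(x), x])
beta = quantile_fit(X, y, tau=0.5)
```

Monotonicity constraints of the kind used in the study would enter as additional inequality rows (A_ub) enforcing nondecreasing fitted values over the covariate grid.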

  18. Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions

    Energy Technology Data Exchange (ETDEWEB)

    Jerome Blair

    2008-05-15

    An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
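
Least-squares spline fitting with explicitly chosen knots, the building block of such smoothing algorithms, is available in SciPy. A sketch with a fixed (not optimized) knot set on synthetic noisy data:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 400)
signal = np.sin(2 * np.pi * t)
noisy = signal + rng.normal(0.0, 0.2, t.size)

# Least-squares cubic spline with a handful of interior knots; the knot count
# plays the role of the filter bandwidth (fewer knots -> smoother fit).
knots = np.linspace(0.1, 0.9, 7)     # interior knots, hypothetical placement
spl = LSQUnivariateSpline(t, noisy, knots, k=3)
smoothed = spl(t)
```

An automatic algorithm like the one described would vary the knot density with time instead of fixing it globally, tightening the mesh where the signal changes quickly.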

  19. Numerical solution of Riccati equation using the cubic B-spline scaling functions and Chebyshev cardinal functions

    Science.gov (United States)

    Lakestani, Mehrdad; Dehghan, Mehdi

    2010-05-01

    Two numerical techniques are presented for solving the Riccati differential equation. These methods use cubic B-spline scaling functions and Chebyshev cardinal functions, and consist of expanding the required approximate solution in terms of the elements of the cubic B-spline scaling functions or the Chebyshev cardinal functions. Using the operational matrix of derivative, the problem is reduced to a set of algebraic equations. Some numerical examples are included to demonstrate the validity and applicability of the new techniques. The methods are easy to implement and produce very accurate results.
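
The cubic B-spline basis underlying such expansions can be inspected directly; on a uniform knot vector the basis functions are nonnegative and sum to one over the base interval (partition of unity). A sketch using SciPy's design-matrix helper (available in SciPy 1.8+):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                    # cubic
t = np.arange(-3.0, 8.0)                 # uniform knot vector: -3, -2, ..., 7
# Evaluation points inside the base interval [t[k], t[-k-1]] = [0, 4].
x = np.linspace(0.0, 4.0, 41)
# Rows of D hold the values B_j(x_i) of all basis functions at each point.
D = BSpline.design_matrix(x, t, k).toarray()
row_sums = D.sum(axis=1)
```

Expanding an unknown solution in this basis turns derivatives and boundary conditions into linear operations on the coefficient vector, which is exactly how the operational-matrix reduction to algebraic equations proceeds.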

  20. Modelling daily dissolved oxygen concentration using least square support vector machine, multivariate adaptive regression splines and M5 model tree

    Science.gov (United States)

    Heddam, Salim; Kisi, Ozgur

    2018-04-01

    In the present study, three types of artificial intelligence techniques, least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5T), are applied for modeling daily dissolved oxygen (DO) concentration using several water quality variables as inputs. The DO concentration and water quality data from three stations operated by the United States Geological Survey (USGS) were used for developing the three models. The selected water quality data consisted of daily measurements of water temperature (TE, °C), pH (standard units), specific conductance (SC, μS/cm) and discharge (DI, cfs), which are used as inputs to the LSSVM, MARS and M5T models. The three models were applied to each station separately and compared to each other. According to the results obtained, it was found that (i) the DO concentration could be successfully estimated using the three models and (ii) the best model differs from one station to another.

  1. Multivariate Adaptative Regression Splines (MARS), an alternative for time series analysis

    Directory of Open Access Journals (Sweden)

    Jairo Vanegas

    2017-05-01

    Full Text Available Multivariate Adaptative Regression Splines (MARS) is a nonparametric modeling method that extends the linear model by incorporating nonlinearities and variable interactions. It is a flexible tool that automates the construction of prediction models: it selects relevant variables, transforms the predictor variables, handles missing values, and prevents overfitting through self-testing. It also allows prediction that takes into account structural factors that may influence the response variable, generating hypothetical models. The final result can be used to identify relevant cut-off points in data series. It is little used in the health field, so it is proposed here as an additional tool for the evaluation of relevant public health indicators. For demonstration purposes, data series on mortality of children under 5 years of age in Costa Rica over the period 1978-2008 were used.

  2. Quantitative structure-activity relationship study on BTK inhibitors by modified multivariate adaptive regression spline and CoMSIA methods.

    Science.gov (United States)

    Xu, A; Zhang, Y; Ran, T; Liu, H; Lu, S; Xu, J; Xiong, X; Jiang, Y; Lu, T; Chen, Y

    2015-01-01

    Bruton's tyrosine kinase (BTK) plays a crucial role in B-cell activation and development, and has emerged as a new molecular target for the treatment of autoimmune diseases and B-cell malignancies. In this study, two- and three-dimensional quantitative structure-activity relationship (2D and 3D-QSAR) analyses were performed on a series of pyridine and pyrimidine-based BTK inhibitors by means of genetic algorithm optimized multivariate adaptive regression spline (GA-MARS) and comparative molecular similarity index analysis (CoMSIA) methods. Here, we propose a modified MARS algorithm to develop 2D-QSAR models. The top ranked models showed satisfactory statistical results (2D-QSAR: Q² = 0.884, r² = 0.929, r²pred = 0.878; 3D-QSAR: q² = 0.616, r² = 0.987, r²pred = 0.905). Key descriptors selected by 2D-QSAR were in good agreement with the conclusions of 3D-QSAR, and the 3D-CoMSIA contour maps facilitated interpretation of the structure-activity relationship. A new molecular database was generated by molecular fragment replacement (MFR) and further evaluated with GA-MARS and CoMSIA prediction. Twenty-five pyridine and pyrimidine derivatives as novel potential BTK inhibitors were finally selected for further study. These results also demonstrated that our method can be a very efficient tool for the discovery of novel potent BTK inhibitors.

  3. Natural Cubic Spline Regression Modeling Followed by Dynamic Network Reconstruction for the Identification of Radiation-Sensitivity Gene Association Networks from Time-Course Transcriptome Data.

    Science.gov (United States)

    Michna, Agata; Braselmann, Herbert; Selmansberger, Martin; Dietz, Anne; Hess, Julia; Gomolka, Maria; Hornhardt, Sabine; Blüthgen, Nils; Zitzelsberger, Horst; Unger, Kristian

    2016-01-01

    Gene expression time-course experiments allow to study the dynamics of transcriptomic changes in cells exposed to different stimuli. However, most approaches for the reconstruction of gene association networks (GANs) do not propose prior-selection approaches tailored to time-course transcriptome data. Here, we present a workflow for the identification of GANs from time-course data using prior selection of genes differentially expressed over time identified by natural cubic spline regression modeling (NCSRM). The workflow comprises three major steps: 1) the identification of differentially expressed genes from time-course expression data by employing NCSRM, 2) the use of regularized dynamic partial correlation as implemented in GeneNet to infer GANs from differentially expressed genes and 3) the identification and functional characterization of the key nodes in the reconstructed networks. The approach was applied on a time-resolved transcriptome data set of radiation-perturbed cell culture models of non-tumor cells with normal and increased radiation sensitivity. NCSRM detected significantly more genes than another commonly used method for time-course transcriptome analysis (BETR). While most genes detected with BETR were also detected with NCSRM the false-detection rate of NCSRM was low (3%). The GANs reconstructed from genes detected with NCSRM showed a better overlap with the interactome network Reactome compared to GANs derived from BETR detected genes. After exposure to 1 Gy the normal sensitive cells showed only sparse response compared to cells with increased sensitivity, which exhibited a strong response mainly of genes related to the senescence pathway. After exposure to 10 Gy the response of the normal sensitive cells was mainly associated with senescence and that of cells with increased sensitivity with apoptosis. We discuss these results in a clinical context and underline the impact of senescence-associated pathways in acute radiation response of normal

  4. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper first demonstrates the broad usage of the polynomial function and deduces its parameters with ordinary least squares estimation. A significance test method for the polynomial regression function is then derived, considering the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)

  5. Assessing time-by-covariate interactions in relative survival models using restrictive cubic spline functions.

    Science.gov (United States)

    Bolard, P; Quantin, C; Abrahamowicz, M; Esteve, J; Giorgi, R; Chadha-Boreham, H; Binquet, C; Faivre, J

    2002-01-01

    The Cox model is widely used in the evaluation of prognostic factors in clinical research. However, in population-based studies, which assess long-term survival of unselected populations, relative-survival models are often considered more appropriate. In both approaches, the validity of the proportional hazards hypothesis should be evaluated. We propose a new method in which restricted cubic spline functions are employed to model time-by-covariate interactions in relative survival analyses. The method allows investigation of the shape of possible dependence of the covariate effect on time without having to specify a particular functional form. Restricted cubic spline functions allow graphing of such time-by-covariate interactions, to test formally the proportional hazards assumption, and also to test the linearity of the time-by-covariate interaction. Application of our new method to assess mortality in colon cancer provides strong evidence against the proportional hazards hypothesis, which is rejected for all prognostic factors. The results corroborate previous analyses of similar datasets, suggesting the importance of both modelling of non-proportional hazards and the relative survival approach. We also demonstrate the advantages of using restricted cubic spline functions for modelling non-proportional hazards in relative-survival analysis. The results provide new insights into the estimated impact of older age and of period of diagnosis. Using restricted cubic splines in a relative survival model allows the representation of both simple and complex patterns of changes in relative risks over time, with a single parsimonious model without a priori assumptions about the functional form of these changes.
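
A restricted cubic spline basis of the kind used here constrains the fit to be linear beyond the boundary knots. The following sketches one common construction (Harrell's formula); the knot placement is hypothetical, and interacting these columns with a covariate yields the flexible time-by-covariate terms described above.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted (natural) cubic spline basis with linear tails, following
    Harrell's construction: columns [x, X_1(x), ..., X_{k-2}(x)] for k knots."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    norm = (t[-1] - t[0]) ** 2            # common scaling for comparable columns

    def p3(u):                            # truncated cube (x)_+^3
        return np.clip(u, 0.0, None) ** 3

    cols = [x]
    for j in range(k - 2):
        cj = (p3(x - t[j])
              - p3(x - t[-2]) * (t[-1] - t[j]) / (t[-1] - t[-2])
              + p3(x - t[-1]) * (t[-2] - t[j]) / (t[-1] - t[-2])) / norm
        cols.append(cj)
    return np.column_stack(cols)

knots = [0.05, 0.35, 0.65, 0.95]          # hypothetical follow-up-time knots
B = rcs_basis(np.linspace(0.0, 1.0, 11), knots)
```

By construction the cubic and quadratic terms cancel beyond the last knot, so each column is exactly linear outside the knot range, which is what keeps the tails of the fitted hazard-ratio curves stable.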

  6. Survival estimation through the cumulative hazard function with monotone natural cubic splines.

    Science.gov (United States)

    Bantis, Leonidas E; Tsimikas, John V; Georgiou, Stelios D

    2012-07-01

    In this paper we explore the estimation of survival probabilities via a smoothed version of the survival function, in the presence of censoring. We investigate the fit of a natural cubic spline on the cumulative hazard function under appropriate constraints. Under the proposed technique the problem reduces to a restricted least squares one, leading to convex optimization. The approach taken in this paper is evaluated and compared via simulations to other known methods such as the Kaplan-Meier and logspline estimators. Our approach is easily extended to address estimation of survival probabilities in the presence of covariates when the proportional hazards model assumption holds. In this case the method is compared to a restricted cubic spline approach that involves maximum likelihood. The proposed approach can also be adjusted to accommodate left censoring.
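
The constrained least-squares idea, fitting a nondecreasing cumulative hazard, can be illustrated in a much simplified form: parametrize H at the observation times by nonnegative increments and solve a nonnegative least-squares problem. This is a stand-in for the monotone-spline fit, not the authors' method.

```python
import numpy as np
from scipy.optimize import nnls

# Noisy "estimates" of a cumulative hazard (constant hazard 0.3, illustrative).
t = np.linspace(0.1, 5.0, 50)
true_H = 0.3 * t
rng = np.random.default_rng(4)
noisy_H = true_H + rng.normal(0.0, 0.05, t.size)

# H(t_i) = sum of nonnegative increments up to i; the cumulative-sum design
# matrix turns monotonicity into simple nonnegativity constraints.
A = np.tril(np.ones((t.size, t.size)))
inc, _ = nnls(A, noisy_H)
H_fit = A @ inc                       # nondecreasing by construction
```

Because the fit is the Euclidean projection of the noisy curve onto the convex cone of nondecreasing sequences, it is never farther from the true (monotone) hazard than the raw estimates are.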

  7. APLIKASI SPLINE ESTIMATOR TERBOBOT (Application of the Weighted Spline Estimator)

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2001-01-01

    Full Text Available We consider the nonparametric regression model: Zj = X(tj) + ej, j = 1,2,…,n, where X(tj) is the regression curve. The random errors ej are independently normally distributed with zero mean and variance σ²/bj, bj > 0. The estimate of X is obtained by minimizing a weighted least squares criterion; the solution of this optimization is a weighted polynomial spline. Further, we give an application of the weighted spline estimator in nonparametric regression. [Abstract in Bahasa Indonesia, translated:] Given the nonparametric regression model Zj = X(tj) + ej, j = 1,2,…,n, with regression curve X(tj) and random errors ej assumed normally distributed with zero mean and variance σ²/bj, bj > 0. The estimate of the regression curve X that minimizes a weighted penalized least squares criterion is a weighted natural polynomial spline estimator. An application of the weighted spline estimator in nonparametric regression is then given. Keywords: weighted spline, nonparametric regression, penalized least squares.

  8. Groundwater potential mapping using C5.0, random forest, and multivariate adaptive regression spline models in GIS.

    Science.gov (United States)

    Golkarian, Ali; Naghibi, Seyed Amir; Kalantar, Bahareh; Pradhan, Biswajeet

    2018-02-17

    Ever increasing demand for water resources for different purposes makes it essential to have better understanding and knowledge about water resources. Groundwater resources are one of the main water resources, especially in countries with arid climatic conditions. Thus, this study seeks to provide groundwater potential maps (GPMs) employing new algorithms. Accordingly, this study aims to validate the performance of C5.0, random forest (RF), and multivariate adaptive regression splines (MARS) algorithms for generating GPMs in the eastern part of Mashhad Plain, Iran. For this purpose, a dataset was produced consisting of spring locations as indicators and groundwater-conditioning factors (GCFs) as input. In this research, 13 GCFs were selected, including altitude, slope aspect, slope angle, plan curvature, profile curvature, topographic wetness index (TWI), slope length, distance from rivers and faults, river and fault density, land use, and lithology. The dataset was divided into training and validation classes with 70 and 30% of the springs, respectively. Then, C5.0, RF, and MARS algorithms were employed using R statistical software, and the final values were transformed into GPMs. Finally, two evaluation criteria, Kappa and area under the receiver operating characteristics curve (AUC-ROC), were calculated. According to the findings of this research, MARS had the best performance with an AUC-ROC of 84.2%, followed by the RF and C5.0 algorithms with AUC-ROC values of 79.7 and 77.3%, respectively. The results indicated that AUC-ROC values for the employed models are more than 70%, which shows their acceptable performance. As a conclusion, the proposed methodology could be used in other geographical areas. GPMs could be used by water resource managers and related organizations to accelerate and facilitate water resource exploitation.

  9. Drought forecasting in eastern Australia using multivariate adaptive regression spline, least square support vector machine and M5Tree model

    Science.gov (United States)

    Deo, Ravinesh C.; Kisi, Ozgur; Singh, Vijay P.

    2017-02-01

    Drought forecasting using standardized metrics of rainfall is a core task in hydrology and water resources management. The Standardized Precipitation Index (SPI) is a rainfall-based metric that caters for the different time-scales at which drought occurs and, due to its standardization, is well-suited for forecasting drought at different periods in climatically diverse regions. This study advances drought modelling using multivariate adaptive regression splines (MARS), least square support vector machine (LSSVM), and M5Tree models by forecasting SPI in eastern Australia. The MARS model incorporated rainfall as a mandatory predictor, with month (periodicity), Southern Oscillation Index, Pacific Decadal Oscillation Index, Indian Ocean Dipole, ENSO Modoki and Nino 3.0, 3.4 and 4.0 data added gradually. Performance was evaluated with root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (r²). The best MARS model required different input combinations: rainfall, sea surface temperature and periodicity were used for all stations, but the ENSO Modoki and Pacific Decadal Oscillation indices were not required for Bathurst, Collarenebri and Yamba, and the Southern Oscillation Index was not required for Collarenebri. Inclusion of periodicity increased the r² value by 0.5-8.1% and reduced RMSE by 3.0-178.5%. Comparisons showed that MARS surpassed the other models for three out of five stations, with lower MAE by 15.0-73.9% and 7.3-42.2%, respectively. For the other stations, M5Tree was better than MARS/LSSVM, with lower MAE by 13.8-13.4% and 25.7-52.2%, respectively, and for Bathurst, LSSVM yielded the more accurate result. For droughts identified by SPI ≤ -0.5, accurate forecasts were attained by MARS/M5Tree for Bathurst, Yamba and Peak Hill, whereas for Collarenebri and Barraba, M5Tree was better than LSSVM/MARS. Seasonal analysis revealed disparate results where MARS/M5Tree was better than LSSVM. The results highlight the

  10. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

    Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables.Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime

  11. Interpolation of natural cubic spline

    Directory of Open Access Journals (Sweden)

    Arun Kumar

    1992-01-01

    Full Text Available From the result in [1] it follows that there is a unique quadratic spline which bounds the same area as that of the function. The matching of the area for the cubic spline does not follow from the corresponding result proved in [2]. We obtain cubic splines which preserve the area of the function.

  12. A function using cubic splines for the analysis of alpha-particle spectra from silicon detectors

    CERN Document Server

    Lozano, J C; Fernández, F

    2000-01-01

    A function based on the characteristics of the alpha-particle lines obtained with silicon semiconductor detectors, modified by using cubic splines, is proposed to parametrize the shape of the peaks. A reduction in the number of parameters initially considered in other proposals was carried out in order to improve the stability of the optimization process; this reduction was imposed by the boundary conditions for the cubic-splines term. The resulting function was able to describe peaks with highly anomalous shapes with respect to those expected from this type of detector. Some criteria were implemented to correctly determine the area of the peaks and their errors. Comparisons with other well-established functions revealed excellent agreement in the final values obtained from both fits. Detailed studies on the reliability of the fitting results were carried out and the application of the function is proposed. Although the aim was to correct anomalies in peak shapes, peaks showing the expected shapes were also well fitted. Ac...

  13. The EH Interpolation Spline and Its Approximation

    Directory of Open Access Journals (Sweden)

    Jin Xie

    2014-01-01

    Full Text Available A new interpolation spline with two parameters, called the EH interpolation spline, is presented in this paper; it extends the standard cubic Hermite interpolation spline and inherits its properties. Given fixed interpolation conditions, the shape of the proposed spline can be adjusted by changing the values of the parameters. Moreover, by a new algorithm, the proposed spline can approximate the interpolated function better than the standard cubic Hermite interpolation spline and the quartic Hermite interpolation splines with a single parameter.
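
The standard cubic Hermite interpolation spline that the EH spline extends matches both function values and first derivatives at the nodes. A sketch with SciPy (the test function is an arbitrary choice):

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Interpolate sin(x) on [0, pi] from values and exact derivatives at 6 nodes.
xk = np.linspace(0.0, np.pi, 6)
yk = np.sin(xk)
dk = np.cos(xk)                       # derivative data at the nodes
h = CubicHermiteSpline(xk, yk, dk)

x = np.linspace(0.0, np.pi, 201)
max_err = np.max(np.abs(h(x) - np.sin(x)))   # O(h^4) interpolation error
```

The EH spline keeps these interpolation conditions but adds free parameters, so the curve between nodes can be reshaped without moving the matched values and derivatives.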

  14. Interpolating cubic splines

    CERN Document Server

    Knott, Gary D

    2000-01-01

    A spline is a thin flexible strip composed of a material such as bamboo or steel that can be bent to pass through or near given points in the plane, or in 3-space, in a smooth manner. Mechanical engineers and drafting specialists find such (physical) splines useful in designing and in drawing plans for a wide variety of objects, such as for hulls of boats or for the bodies of automobiles where smooth curves need to be specified. These days, physical splines are largely replaced by computer software that can compute the desired curves (with appropriate encouragement). The same mathematical ideas used for computing "spline" curves can be extended to allow us to compute "spline" surfaces. The application of these mathematical ideas is rather widespread. Spline functions are central to computer graphics disciplines. Spline curves and surfaces are used in computer graphics renderings for both real and imaginary objects. Computer-aided-design (CAD) systems depend on algorithms for computing spline func...
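
    A minimal SciPy sketch of the interpolating cubic spline described above: it passes through the given points and, like the physical strip it imitates, keeps the second derivative continuous across the interior knots:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# An interpolating cubic spline through five points.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 0.0, -1.0, 0.0])

cs = CubicSpline(x, y)

# It passes through the data...
interp_ok = bool(np.allclose(cs(x), y))

# ...and the second derivative is continuous at an interior knot:
# evaluate it just left and just right of x = 2.
left = float(cs(2.0 - 1e-9, 2))   # second derivative from the left
right = float(cs(2.0 + 1e-9, 2))  # second derivative from the right
```

The same construction generalizes to tensor-product surfaces, which is how the curve machinery extends to "spline" surfaces.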

  15. Numerical Evaluation of Arbitrary Singular Domain Integrals Using Third-Degree B-Spline Basis Functions

    Directory of Open Access Journals (Sweden)

    Jin-Xiu Hu

    2014-01-01

    Full Text Available A new approach is presented for the numerical evaluation of arbitrary singular domain integrals. In this method, singular domain integrals are transformed into a boundary integral and a radial integral which contains singularities by using the radial integration method. The analytical elimination of singularities condensed in the radial integral formulas can be accomplished by expressing the nonsingular part of the integration kernels as a series of cubic B-spline basis functions of the distance r and using the intrinsic features of the radial integral. In the proposed method, singularities involved in the domain integrals are explicitly transformed to the boundary integrals, so no singularities exist at internal points. A few numerical examples are provided to verify the correctness and robustness of the presented method.

  16. On Characterization of Quadratic Splines

    DEFF Research Database (Denmark)

    Chen, B. T.; Madsen, Kaj; Zhang, Shuzhong

    2005-01-01

    A quadratic spline is a differentiable piecewise quadratic function. Many problems in numerical analysis and optimization literature can be reformulated as unconstrained minimizations of quadratic splines. However, only special cases of quadratic splines are studied in the existing literature, and algorithms are developed on a case by case basis. There lacks an analytical representation of a general or even a convex quadratic spline. The current paper fills this gap by providing an analytical representation of a general quadratic spline. Furthermore, for convex quadratic splines, it is shown ... between the convexity of a quadratic spline function and the monotonicity of the corresponding LCP problem. It is shown that, although both conditions lead to easy solvability of the problem, they are different in general.

  17. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n3). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
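
    The core idea above — evaluating the spline on a much smaller set of basis functions than one per observation — can be illustrated with a least-squares regression spline on a handful of knots. This is a generic sketch, not the paper's adaptive basis sampling scheme (which selects basis functions using values of the response variable):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
n = 2000
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, n)

# Instead of one basis function per observation (O(n^3) in general),
# fit a least-squares cubic spline on a small set of interior knots.
knots = np.quantile(x, np.linspace(0.1, 0.9, 9))
fit = LSQUnivariateSpline(x, y, knots, k=3)

# Error against the true underlying function.
rmse = float(np.sqrt(np.mean((fit(x) - np.sin(2 * np.pi * x)) ** 2)))
```

With 2000 observations but only a dozen or so basis functions, the linear algebra is tiny, which is the computational point the paper formalizes and makes adaptive.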

  18. Improvement of the Cubic Spline Function Sets for a Synthesis of the Axial Power Distribution of a Core Protection System

    International Nuclear Information System (INIS)

    Koo, Bon-Seung; Lee, Chung-Chan; Zee, Sung-Quun

    2006-01-01

    Online digital core protection system (SCOPS) for a system-integrated modular reactor is being developed as a part of a plant protection system at KAERI. SCOPS calculates the minimum CHFR and maximum LPD based on several online measured system parameters including 3-level ex-core detector signals. In conventional ABB-CE digital power plants, the cubic spline synthesis technique has been used in online calculations of the core axial power distributions using ex-core detector signals once every 1 second in the CPC. In the CPC, pre-determined cubic spline function sets are used depending on the characteristics of the ex-core detector responses. But this method shows a non-negligible power distribution error for extremely skewed axial shapes because of the restrictive function sets. Therefore, this paper describes the cubic spline method for the synthesis of an axial power distribution and generates several new cubic spline function sets for the application of the core protection system, especially for the severely distorted power shapes needed reactor type

  19. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    Science.gov (United States)

    Balshi, M. S.; McGuire, A.D.; Duffy, P.; Flannigan, M.; Walsh, J.; Melillo, J.

    2009-01-01

    Fire is a common disturbance in the North American boreal forest that influences ecosystem structure and function. The temporal and spatial dynamics of fire are likely to be altered as climate continues to change. In this study, we ask the question: how will area burned in boreal North America by wildfire respond to future changes in climate? To evaluate this question, we developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was substantially more predictable in the western portion of boreal North America than in eastern Canada. Burned area was also not very predictable in areas of substantial topographic relief and in areas along the transition between boreal forest and tundra. At the scale of Alaska and western Canada, the empirical fire models explain on the order of 82% of the variation in annual area burned for the period 1960-2002. July temperature was the most frequently occurring predictor across all models, but the fuel moisture codes for the months June through August (as a group) entered the models as the most important predictors of annual area burned. To predict changes in the temporal and spatial dynamics of fire under future climate, the empirical fire models used output from the Canadian Climate Center CGCM2 global climate model to predict annual area burned through the year 2100 across Alaska and western Canada. Relative to 1991-2000, the results suggest that average area burned per decade will double by 2041-2050 and will increase on the order of 3.5-5.5 times by the last decade of the 21st century. To improve the ability to better predict wildfire across Alaska and Canada, future research should focus on incorporating additional effects of long-term and successional

  20. Diffeomorphism Spline

    Directory of Open Access Journals (Sweden)

    Wei Zeng

    2015-04-01

    Full Text Available Conventional splines offer powerful means for modeling surfaces and volumes in three-dimensional Euclidean space. A one-dimensional quaternion spline has been applied for animation purpose, where the splines are defined to model a one-dimensional submanifold in the three-dimensional Lie group. Given two surfaces, all of the diffeomorphisms between them form an infinite dimensional manifold, the so-called diffeomorphism space. In this work, we propose a novel scheme to model finite dimensional submanifolds in the diffeomorphism space by generalizing conventional splines. According to quasiconformal geometry theorem, each diffeomorphism determines a Beltrami differential on the source surface. Inversely, the diffeomorphism is determined by its Beltrami differential with normalization conditions. Therefore, the diffeomorphism space has one-to-one correspondence to the space of a special differential form. The convex combination of Beltrami differentials is still a Beltrami differential. Therefore, the conventional spline scheme can be generalized to the Beltrami differential space and, consequently, to the diffeomorphism space. Our experiments demonstrate the efficiency and efficacy of diffeomorphism splines. The diffeomorphism spline has many potential applications, such as surface registration, tracking and animation.
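
    The closure property that makes this construction work — a convex combination of Beltrami coefficients (complex-valued functions with sup-norm strictly below 1) is again a Beltrami coefficient — can be checked numerically on sampled values. This is a toy verification of that one fact, not an implementation of diffeomorphism splines:

```python
import numpy as np

# Sample two "Beltrami coefficients" as arrays of complex values with
# modulus bounded away from 1.
rng = np.random.default_rng(1)
m = 1000
mu1 = 0.9 * rng.uniform(0, 1, m) * np.exp(2j * np.pi * rng.uniform(0, 1, m))
mu2 = 0.9 * rng.uniform(0, 1, m) * np.exp(2j * np.pi * rng.uniform(0, 1, m))

# Any convex combination stays inside the unit disc pointwise, so it is
# again an admissible Beltrami coefficient.
t = 0.37  # any t in [0, 1]
mu = (1 - t) * mu1 + t * mu2
max_mod = float(np.max(np.abs(mu)))
```

Because the Beltrami differential space is convex in this sense, the usual spline blending formulas (which are convex combinations of control data) carry over verbatim, which is the paper's central observation.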

  1. Application of random forest time series, support vector regression and multivariate adaptive regression splines models in prediction of snowfall (a case study of Alvand in the middle Zagros, Iran)

    Science.gov (United States)

    Hamidi, Omid; Tapak, Leili; Abbasi, Hamed; Maryanaji, Zohreh

    2017-10-01

    We have conducted a case study to investigate the performance of support vector machine, multivariate adaptive regression splines, and random forest time series methods in snowfall modeling. These models were applied to a data set of monthly snowfall collected during six cold months at Hamadan Airport sample station located in the Zagros Mountain Range in Iran. We considered monthly data of snowfall from 1981 to 2008 during the period from October/November to April/May as the training set and the data from 2009 to 2015 as the testing set. The root mean square error (RMSE), mean absolute error (MAE), determination coefficient (R²), coefficient of efficiency (E%), and intra-class correlation coefficient (ICC) statistics were used as evaluation criteria. Our results indicated that the random forest time series model outperformed the support vector machine and multivariate adaptive regression splines models in predicting monthly snowfall in terms of several criteria. The RMSE, MAE, R², E, and ICC for the testing set were 7.84, 5.52, 0.92, 0.89, and 0.93, respectively. The overall results indicated that the random forest time series model could be successfully used to estimate monthly snowfall values. Moreover, the support vector machine model showed substantial performance as well, suggesting it may also be applied to forecast snowfall in this area.
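
    The evaluation criteria used above (RMSE, MAE, R²) are straightforward to compute directly. A small sketch with illustrative numbers — the `obs`/`pred` values here are made up for demonstration, not the study's data:

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

def mae(obs, pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(obs) - np.asarray(pred))))

def r2(obs, pred):
    """Coefficient of determination."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Illustrative observed vs. predicted monthly values.
obs = np.array([10.0, 12.0, 8.0, 15.0, 11.0])
pred = np.array([9.0, 13.0, 8.5, 14.0, 11.5])
```

The coefficient of efficiency (Nash-Sutcliffe E) has the same algebraic form as `r2` above when computed against observations, which is why the two often track each other closely in such comparisons.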

  2. A New Predictive Model Based on the ABC Optimized Multivariate Adaptive Regression Splines Approach for Predicting the Remaining Useful Life in Aircraft Engines

    Directory of Open Access Journals (Sweden)

    Paulino José García Nieto

    2016-05-01

    Full Text Available Remaining useful life (RUL estimation is considered as one of the most central points in the prognostics and health management (PHM. The present paper describes a nonlinear hybrid ABC–MARS-based model for the prediction of the remaining useful life of aircraft engines. Indeed, it is well-known that an accurate RUL estimation allows failure prevention in a more controllable way so that the effective maintenance can be carried out in appropriate time to correct impending faults. The proposed hybrid model combines multivariate adaptive regression splines (MARS, which have been successfully adopted for regression problems, with the artificial bee colony (ABC technique. This optimization technique involves parameter setting in the MARS training procedure, which significantly influences the regression accuracy. However, its use in reliability applications has not yet been widely explored. Bearing this in mind, remaining useful life values have been predicted here by using the hybrid ABC–MARS-based model from the remaining measured parameters (input variables for aircraft engines with success. A correlation coefficient equal to 0.92 was obtained when this hybrid ABC–MARS-based model was applied to experimental data. The agreement of this model with experimental data confirmed its good performance. The main advantage of this predictive model is that it does not require information about the previous operation states of the aircraft engine.

  3. PARAMETRIC AND NON PARAMETRIC (MARS: MULTIVARIATE ADDITIVE REGRESSION SPLINES) LOGISTIC REGRESSIONS FOR PREDICTION OF A DICHOTOMOUS RESPONSE VARIABLE WITH AN EXAMPLE FOR PRESENCE/ABSENCE OF AMPHIBIANS

    Science.gov (United States)

    The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...

  4. Point based interactive image segmentation using multiquadrics splines

    Science.gov (United States)

    Meena, Sachin; Duraisamy, Prakash; Palniappan, Kannappan; Seetharaman, Guna

    2017-05-01

    Multiquadrics (MQ) are radial basis spline functions that can provide an efficient interpolation of data points located in a high dimensional space. MQ were developed by Hardy to approximate geographical surfaces and terrain modelling. In this paper we frame the task of interactive image segmentation as a semi-supervised interpolation where an interpolating function learned from the user-provided seed points is used to predict the labels of unlabeled pixels, and the spline function used in the semi-supervised interpolation is MQ. This semi-supervised interpolation framework has a nice closed form solution which, along with the fact that MQ is a radial basis spline function, leads to a very fast interactive image segmentation process. Quantitative and qualitative results on the standard datasets show that MQ outperforms other regression based methods, GEBS, Ridge Regression and Logistic Regression, and popular methods like Graph Cut, Random Walk and Random Forest.
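
    Multiquadric RBF interpolation of scattered seed points can be sketched with SciPy's `RBFInterpolator`. This is a generic illustration of the MQ kernel and its closed-form (linear-solve) fit, not the paper's semi-supervised segmentation pipeline:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# 2-D scattered "seed" points with scalar values attached, standing in
# for user-labeled pixels.
rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, (50, 2))
vals = np.sin(2 * np.pi * pts[:, 0]) * np.cos(2 * np.pi * pts[:, 1])

# Multiquadric kernel; epsilon is the shape parameter required for
# kernels that are not scale invariant.
interp = RBFInterpolator(pts, vals, kernel="multiquadric", epsilon=1.0)

# With no smoothing, the interpolant reproduces the data at the seeds.
max_err_at_seeds = float(np.max(np.abs(interp(pts) - vals)))
```

In the segmentation setting, the same interpolant evaluated at every unlabeled pixel yields a label score, which is where the speed of the closed-form solution pays off.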

  5. Complex Regression Functional And Load Tests Development

    Directory of Open Access Journals (Sweden)

    Anton Andreevich Krasnopevtsev

    2015-10-01

    Full Text Available The article describes practical approaches to implementing automated regression functional and load testing on an arbitrary hardware-software complex, using «MARSh 3.0» as an example. Test automation is implemented to increase the information security of «MARSh 3.0».

  6. Nonlinear wavelet regression function estimator for censored ...

    African Journals Online (AJOL)

    Let (Y;C;X) be a vector of random variables where Y; C and X are, respectively, the interest variable, a right censoring and a covariable (predictor). In this paper, we introduce a new nonlinear wavelet-based estimator of the regression function in the right censorship model. An asymptotic expression for the mean integrated ...

  7. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution

    Science.gov (United States)

    Kisi, Ozgur; Parmar, Kulwinder Singh

    2016-03-01

    This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin, Delhi Yamuna River in India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy, and both performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The average root mean square error (RMSE) of the LSSVM and M5Tree models was reduced by 1.47% and 19.1%, respectively, by the MARS model. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used in estimating monthly river water pollution levels by using the AMM, TKN and WT parameters as inputs.

  8. Bisphenol-A exposures and behavioural aberrations: median and linear spline and meta-regression analyses of 12 toxicity studies in rodents.

    Science.gov (United States)

    Peluso, Marco E M; Munnia, Armelle; Ceppi, Marcello

    2014-11-05

    Exposures to bisphenol-A, a weak estrogenic chemical, largely used for the production of plastic containers, can affect the rodent behaviour. Thus, we examined the relationships between bisphenol-A and the anxiety-like behaviour, spatial skills, and aggressiveness, in 12 toxicity studies of rodent offspring from females orally exposed to bisphenol-A, while pregnant and/or lactating, by median and linear splines analyses. Subsequently, the meta-regression analysis was applied to quantify the behavioural changes. U-shaped, inverted U-shaped and J-shaped dose-response curves were found to describe the relationships between bisphenol-A with the behavioural outcomes. The occurrence of anxiogenic-like effects and spatial skill changes displayed U-shaped and inverted U-shaped curves, respectively, providing examples of effects that are observed at low-doses. Conversely, a J-dose-response relationship was observed for aggressiveness. When the proportion of rodents expressing certain traits or the time that they employed to manifest an attitude was analysed, the meta-regression indicated that a borderline significant increment of anxiogenic-like effects was present at low-doses regardless of sexes (β)=-0.8%, 95% C.I. -1.7/0.1, P=0.076, at ≤120 μg bisphenol-A. Whereas, only bisphenol-A-males exhibited a significant inhibition of spatial skills (β)=0.7%, 95% C.I. 0.2/1.2, P=0.004, at ≤100 μg/day. A significant increment of aggressiveness was observed in both the sexes (β)=67.9,C.I. 3.4, 172.5, P=0.038, at >4.0 μg. Then, bisphenol-A treatments significantly abrogated spatial learning and ability in males (P<0.001 vs. females). Overall, our study showed that developmental exposures to low-doses of bisphenol-A, e.g. ≤120 μg/day, were associated to behavioural aberrations in offspring. Copyright © 2014. Published by Elsevier Ireland Ltd.

  9. Modelling the trend of bovine spongiform encephalopathy prevalence in France: Use of restricted cubic spline regression in age-period-cohort models to estimate the efficiency of control measures.

    Science.gov (United States)

    Sala, Carole; Morignat, Eric; Ducrot, Christian; Calavas, Didier

    2009-07-01

    An age-period-cohort (APC) analysis was used to assess the trend in prevalence of bovine spongiform encephalopathy (BSE) in France over time in relation to the control measures adopted since onset of the epidemic. Restricted cubic regression splines were used to model the functional forms of the non-linear effects of age at screening, birth cohort and date of diagnosis of the tested animals. The data of the 2001-2007 period of surveillance was analysed using 1-year categorisation. A categorical analysis was performed as control to check the accuracy of the sets of knots in the spline models, which were selected according to the Akaike Information Criterion (AIC). Knot selection was based on a priori knowledge of the disease and the dates of implementation of the five main BSE control measures. It was assumed that disease prevalence was a function of exposure to BSE and that changes in the exposure of cattle to BSE were mainly due to the control measures. The effects of the five main control measures were discussed in relation to the trend in BSE risk for the successive birth cohorts. The six selected models confirmed that all measures participated in disease control. However, characterization of the respective effect of individual measures was not straightforward due to the very low disease prevalence, incompletely tested cohorts and probably cumulative and overlapping effects of successive measures. The ban of importation of meat and bone meal (MBM) from the UK and the ban of use of MBM in bovines were insufficient to control the epidemic. The decline in the BSE epidemic more likely originated from implementation of the ban of MBM use in all ruminants in 1994, whose effect was probably reinforced by the evolution in perception of the BSE risk following evidence of BSE transmission to humans. Finally, the respective effects of the last two measures (prohibition of the use of specific risk material in 1996 and total MBM ban in 2000) could not be characterized as

  10. Comparison of parametric, orthogonal, and spline functions to model individual lactation curves for milk yield in Canadian Holsteins

    Directory of Open Access Journals (Sweden)

    Corrado Dimauro

    2010-11-01

    Full Text Available Test day records for milk yield of 57,390 first lactation Canadian Holsteins were analyzed with a linear model that included the fixed effects of herd-test date and days in milk (DIM) interval nested within age and calving season. Residuals from this model were analyzed as a new variable and fitted with a five-parameter model, fourth-order Legendre polynomials, and linear, quadratic and cubic spline models with three knots. The fit of the models was rather poor, with about 30-40% of the curves showing an adjusted R-square lower than 0.20 across all models. Results underline a great difficulty in modelling individual deviations around the mean curve for milk yield. However, the Ali and Schaeffer five-parameter model and the fourth-order Legendre polynomials were able to detect two basic shapes of individual deviations around the mean curve. Quadratic and, especially, cubic spline functions had better fitting performances but a poor predictive ability due to their great flexibility, which results in an abrupt change of the estimated curve when data are missing. Parametric and orthogonal polynomials seem to be robust and affordable from this standpoint.
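
    Fitting fourth-order Legendre polynomials to individual deviations, as in the comparison above, reduces to an ordinary least-squares fit in the Legendre basis after mapping days in milk to [-1, 1]. A sketch on synthetic residuals — the data here are simulated, not the Canadian Holstein records:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Days in milk mapped onto the Legendre domain [-1, 1].
dim = np.arange(5, 306, 5, dtype=float)
t = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0

# Simulated individual deviations: a P1/P2 signal plus noise.
rng = np.random.default_rng(3)
resid = 0.5 * t - 0.8 * (3 * t**2 - 1) / 2 + rng.normal(0, 0.05, t.size)

# Least-squares fit of a fourth-order Legendre polynomial.
coef = L.legfit(t, resid, deg=4)   # coefficients in the Legendre basis
fitted = L.legval(t, coef)
r2 = float(1 - np.sum((resid - fitted) ** 2)
             / np.sum((resid - resid.mean()) ** 2))
```

Because the basis is orthogonal, the low-order coefficients summarize the dominant deviation shapes directly, which is how the two basic shapes mentioned above can be detected.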

  11. Fit of different functions to the individual deviations in random regression test day models for milk yield in dairy cattle

    Directory of Open Access Journals (Sweden)

    L.R. Schaeffer

    2010-04-01

    Full Text Available The shape of individual deviations of milk yield for dairy cattle from the fixed part of a random regression test day model (RRTDM) was investigated. Data were 53,217 TD records for milk yield of 6,229 first lactation Canadian Holsteins in Ontario. Data were fitted with a model that included the fixed effects of herd-test date and DIM interval nested within age and season of calving. Residuals of the model were then fitted with the following functions: the Ali and Schaeffer five-parameter model, fourth-order Legendre polynomials, and cubic splines with three, four or five knots. Results confirm the great variability of shape that can be found when individual lactations are modeled. Cubic splines gave better fitting performances, although with a marked tendency to yield aberrant estimates at the edges of the lactation trajectory.

  12. Adaptive Confidence Bands for Nonparametric Regression Functions.

    Science.gov (United States)

    Cai, T Tony; Low, Mark; Ma, Zongming

    2014-01-01

    A new formulation for the construction of adaptive confidence bands in non-parametric function estimation problems is proposed. Confidence bands are constructed which have size that adapts to the smoothness of the function while guaranteeing that both the relative excess mass of the function lying outside the band and the measure of the set of points where the function lies outside the band are small. It is shown that the bands adapt over a maximum range of Lipschitz classes. The adaptive confidence band can be easily implemented in standard statistical software with wavelet support. Numerical performance of the procedure is investigated using both simulated and real datasets. The numerical results agree well with the theoretical analysis. The procedure can be easily modified and used for other nonparametric function estimation models.

  13. Limit Stress Spline Models for GRP Composites | Ihueze | Nigerian ...

    African Journals Online (AJOL)

    Spline functions were established on the assumption of three intervals and fitting of quadratic and cubic splines to critical stress-strain responses data. Quadratic ... of data points. Spline model is therefore recommended as it evaluates the function at subintervals, eliminating the error associated with wide range interpolation.

  14. Spatial Variation of Seismic B-Values of the Empirical Law of the Magnitude-Frequency Distribution from a Bayesian Approach Based On Spline (B-Spline) Function in the North Anatolian Fault Zone, North of Turkey

    Science.gov (United States)

    Türker, Tugba; Bayrak, Yusuf

    2017-12-01

    In this study, a Bayesian approach based on spline (B-spline) functions is used to estimate the spatial variations of the seismic b-values of the empirical magnitude-frequency law (G-R law) in the North Anatolian Fault Zone (NAFZ), North of Turkey. The B-spline function method was developed for the estimation and interpolation of b-values. Spatial variations in b-values are known to reflect the stress field and can be used in earthquake hazard analysis; we propose that b-values be interpreted together with the seismicity and tectonic background. The β = b·ln(10) function (derived from the G-R law) is used within a Bayesian approach to estimate the b-values and their standard deviations. A homogeneous instrumental catalog covering the period 1900-2017 is used. The NAFZ was divided into ten seismic source regions based on epicenter distribution, tectonics, seismicity and faults. Three historical earthquakes (1343, Ms = 7.5; 1766, Ms = 7.3; 1894, Ms = 7.0) are included in region 2 (Marmara Sea: Tekirdağ, Merkez, Kumburgaz and Çınarcık Basins), where a large earthquake is expected in the near future because none has been observed during the instrumental period. The spatial variations in the ten seismogenic regions of the NAFZ are estimated; the b-values range between 0.52±0.07 and 0.86±0.13. High b-values are estimated for the Southern Branch of the NAFZ (Edremit Fault Zones, Yenice-Gönen, Mustafa Kemal Paşa and Ulubat Faults) region, which is related to low stress. Low b-values are estimated for the Tokat-Erzincan region, which is related to high stress. Maps of the 2D and 3D spatial variations of the b-values (2D contour maps, classed post maps (grouping the data into discrete classes), image maps (raster maps based on grid files), 3D wireframes (three-dimensional representations of grid files) and 3D surfaces) are plotted. The spatial variations of b-values can be used in earthquake hazard analysis for the NAFZ.
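
    The b-value of the G-R law that the Bayesian B-spline method maps spatially is classically estimated with the Aki/Utsu maximum-likelihood formula. A sketch of that standard estimator on a synthetic catalog — this illustrates the quantity being mapped, not the paper's Bayesian B-spline approach:

```python
import numpy as np

def b_value_mle(mags, m_min, dm=0.0):
    """Aki maximum-likelihood b-value for magnitudes >= m_min, with
    Utsu's dm/2 correction for magnitude binning; returns (b, std err)."""
    m = np.asarray(mags, float)
    m = m[m >= m_min]
    b = np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))
    return b, b / np.sqrt(m.size)   # Aki (1965) standard error

# Synthetic catalog drawn from the G-R law with a known b-value of 1.0:
# magnitudes above m_min are exponential with rate beta = b * ln(10).
rng = np.random.default_rng(4)
b_true, m_min = 1.0, 3.0
mags = m_min + rng.exponential(1.0 / (b_true * np.log(10)), 20000)
b_hat, b_err = b_value_mle(mags, m_min)
```

The β = b·ln(10) parameterization used in the abstract is exactly the rate of this exponential magnitude distribution, which is why a Bayesian treatment of β translates directly into b-values and their standard deviations.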

  15. Splines and variational methods

    CERN Document Server

    Prenter, P M

    2008-01-01

    One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions

  16. Spline fitting for multi-set data

    International Nuclear Information System (INIS)

    Zhou Hongmo; Liu Renqiu; Liu Tingjin

    1987-01-01

    A spline fit method and program for multi-set data have been developed. Improvements provide new capabilities: a spline basis of any order, knot optimization, and accurate calculation of the error of the fitted value. The program has been used for practical evaluations of nuclear data.

  17. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related...

  18. Asymptotic Confidence Bands for Density and Regression Functions ...

    African Journals Online (AJOL)

    Abstract. In this paper, we obtain asymptotic confidence bands for both the density and regression functions in the framework of nonparametric estimation. Beforehand, the asymptotic behaviors in probability of the kernel estimator of the density and the Nadaraya-Watson estimator of the regression function are described ...

  19. On convexity and Schoenberg's variation diminishing splines

    International Nuclear Information System (INIS)

    Feng, Yuyu; Kozak, J.

    1992-11-01

    In the paper we characterize a convex function by the monotonicity of a particular variation diminishing spline sequence. The result extends the property known for the Bernstein polynomial sequence. (author). 4 refs

  20. Iterative algorithm based on a combination of vector similarity measure and B-spline functions for particle analysis in forward scattering

    Science.gov (United States)

    Wang, Tian'en; Shen, Jianqi; Lin, Chengjun

    2017-06-01

    The vector similarity measure (VSM) was recently introduced into the inverse problem for particle analysis based on forward light scattering and its modified version was proposed to adapt for multi-modal particle systems. It is found that the algorithm is stable and efficient but the extracted solutions are usually oscillatory, especially for widely distributed particle systems. In order to improve this situation, an iterative VSM method combined with cubic B-spline functions (B-VSM) is presented. Simulations and experiments show that, compared with the old versions, this modification is more robust and efficient.
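
    The cubic B-spline basis functions on which the B-VSM parameterization rests can be generated with SciPy. A generic sketch showing the clamped knot-vector construction and the partition-of-unity property of the basis:

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic (degree-3) B-spline basis on a clamped uniform knot vector.
degree = 3
inner = np.linspace(0.0, 1.0, 11)
knots = np.r_[[0.0] * degree, inner, [1.0] * degree]  # repeat end knots
n_basis = len(knots) - degree - 1

# Evaluate each basis function by giving it a unit coefficient vector.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
basis = np.empty((n_basis, x.size))
for i in range(n_basis):
    c = np.zeros(n_basis)
    c[i] = 1.0
    basis[i] = BSpline(knots, c, degree)(x)

# B-spline basis functions form a partition of unity on the interval.
max_dev = float(np.max(np.abs(basis.sum(axis=0) - 1.0)))
```

Expanding a size distribution (or, as above, the nonsingular part of a kernel) in this basis keeps each coefficient's influence local, which is what damps the oscillations the plain VSM solutions exhibit.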

  1. PSPLINE: Princeton Spline and Hermite cubic interpolation routines

    Science.gov (United States)

    McCune, Doug

    2017-10-01

    PSPLINE is a collection of Spline and Hermite interpolation tools for 1D, 2D, and 3D datasets on rectilinear grids. Spline routines give full control over boundary conditions, including periodic, 1st or 2nd derivative match, or divided difference-based boundary conditions on either end of each grid dimension. Hermite routines take the function value and derivatives at each grid point as input, giving back a representation of the function between grid points. Routines are provided for creating Hermite datasets, with appropriate boundary conditions applied. The 1D spline and Hermite routines are based on standard methods; the 2D and 3D spline or Hermite interpolation functions are constructed from 1D spline or Hermite interpolation functions in a straightforward manner. Spline and Hermite interpolation functions are often much faster to evaluate than other representations using e.g. Fourier series or otherwise involving transcendental functions.
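
    The boundary-condition control described above can be sketched with SciPy's `CubicSpline`, whose `bc_type` argument covers the periodic, derivative-match and natural cases. This is a generic illustration of those boundary conditions, not PSPLINE itself:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sample sin(x) over one full period.
x = np.linspace(0.0, 2.0 * np.pi, 9)
y = np.sin(x)
y[-1] = y[0]  # enforce exact periodicity for the periodic fit

periodic = CubicSpline(x, y, bc_type="periodic")
# Clamped ends: match the 1st derivative, y'(0) = y'(2*pi) = cos(0) = 1.
clamped = CubicSpline(x, y, bc_type=((1, 1.0), (1, 1.0)))
# Natural ends: 2nd derivative forced to zero at both ends.
natural = CubicSpline(x, y, bc_type="natural")

# The periodic spline matches derivatives across the seam.
seam_gap = float(abs(periodic(0.0, 1) - periodic(2.0 * np.pi, 1)))
```

The 2D and 3D routines mentioned above apply the same 1D constructions dimension by dimension, so these boundary options carry over per grid axis.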

  2. Straight-sided Spline Optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    Spline connection of shaft and hub is commonly applied when large torque capacity is needed together with the possibility of disassembly. The designs of these splines are generally controlled by different standards. In view of the common use of splines, it seems that few papers deal with splines ...

  3. P-Splines Using Derivative Information

    KAUST Repository

    Calderon, Christopher P.

    2010-01-01

    Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information about complex biomolecular systems. We demonstrate how a collection of Penalized-splines (P-splines) can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale. P-splines assist in "semiparametrically" estimating nonlinear SDEs in situations where a time-dependent external force is applied to a single-molecule system. The P-splines introduced simultaneously use function and derivative scatterplot information to refine curve estimates. We refer to the approach as the PuDI (P-splines using Derivative Information) method. It is shown how generalized least squares ideas fit seamlessly into the PuDI method. Applications demonstrating how utilizing uncertainty information/approximations along with generalized least squares techniques improve PuDI fits are presented. Although the primary application here is in estimating nonlinear SDEs, the PuDI method is applicable to situations where both unbiased function and derivative estimates are available.
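
    As background for the P-splines used here, a bare-bones Eilers-Marx style fit (B-spline basis plus a second-order difference penalty on the coefficients) can be sketched as follows; this is illustrative only and does not implement the PuDI method's use of derivative information:

```python
import numpy as np

def bspline_basis(x, t, k):
    """Cox-de Boor evaluation of all degree-k B-spline basis functions on knots t."""
    B = np.array([(t[i] <= x) & (x < t[i + 1]) for i in range(len(t) - 1)], float).T
    for d in range(1, k + 1):
        Bn = np.zeros((len(x), B.shape[1] - 1))
        for i in range(B.shape[1] - 1):
            d1 = t[i + d] - t[i]
            d2 = t[i + d + 1] - t[i + 1]
            if d1 > 0:
                Bn[:, i] += (x - t[i]) / d1 * B[:, i]
            if d2 > 0:
                Bn[:, i] += (t[i + d + 1] - x) / d2 * B[:, i + 1]
        B = Bn
    return B

def pspline_fit(x, y, nseg=20, k=3, lam=0.01):
    """Penalized B-spline (P-spline) fit with a 2nd-order difference penalty."""
    dx = (x.max() - x.min()) / nseg
    t = x.min() + dx * np.arange(-k, nseg + k + 1)   # equally spaced, extended knots
    B = bspline_basis(x, t, k)
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)     # 2nd-order difference matrix
    a = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ a

x = np.linspace(0.0, 2 * np.pi, 100)
yhat = pspline_fit(x, np.sin(x))
```

    The penalty weight `lam` plays the role of the smoothing parameter; the number of segments `nseg` is deliberately generous, as is usual for P-splines.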

  4. Covariance Functions and Random Regression Models in the ...

    African Journals Online (AJOL)

    ARC-IRENE

    CFs were on age of the cow expressed in months (AM) using quadratic (order three) regressions based on orthogonal (Legendre) polynomials, initially proposed by Kirkpatrick & Heckman (1989). The matrices of coefficients KG and KC (corresponding to the additive genetic and permanent environmental functions, G.

  5. Application of multivariate adaptive regression spline-assisted objective function on optimization of heat transfer rate around a cylinder

    Energy Technology Data Exchange (ETDEWEB)

    Dey, Prasenjit; Das, Ajoy K. [Mechanical Engineering Department, National Institute of Technology, Agartala (India)

    2016-12-15

    The present study aims to predict the heat transfer characteristics around a square cylinder with different corner radii using multivariate adaptive regression splines (MARS). Further, the MARS-generated objective function is optimized by particle swarm optimization. The data for the prediction are taken from the recently published article by the present authors [P. Dey, A. Sarkar, A.K. Das, Development of GEP and ANN model to predict the unsteady forced convection over a cylinder, Neural Comput. Appl. (2015)]. Further, the MARS model is compared with artificial neural network and gene expression programming. It has been found that the MARS model is very efficient in predicting the heat transfer characteristics. It has also been found that MARS is more efficient than artificial neural network and gene expression programming in predicting the forced convection data, and that particle swarm optimization can efficiently optimize the heat transfer rate.

  6. Designing interactively with elastic splines

    DEFF Research Database (Denmark)

    Brander, David; Bærentzen, Jakob Andreas; Fisker, Ann-Sofie

    2018-01-01

    We present an algorithm for designing interactively with C1 elastic splines. The idea is to design the elastic spline using a C1 cubic polynomial spline in which each polynomial segment is so close to satisfying the Euler-Lagrange equation for elastic curves that the visual difference becomes negligible. Using a database of cubic Bézier curves we are able to interactively modify the cubic spline such that it remains visually close to an elastic spline.

  7. Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year

    Science.gov (United States)

    Kamaruddin, Halim Shukri; Ismail, Noriszura

    2014-06-01

    Nonparametric regression uses the data to derive the best coefficients of a model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a smoothing method for generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are applied to one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, in which the accuracy of models and forecasts is the industry's main concern. The original idea of P-splines is extended here to two-dimensional mortality data, indexed by age of death and year of death, with the large data set supplied by the Department of Statistics Malaysia. The extension constructs the best fitted surface and provides sensible predictions of the underlying mortality rate in Malaysia.

  8. Genetic and environmental smoothing of lactation curves with cubic splines.

    Science.gov (United States)

    White, I M; Thompson, R; Brotherstone, S

    1999-03-01

    Most approaches to modeling lactation curves involve parametric curves with fixed or random coefficients. In either case, the resulting models require the specification of an underlying parametric curve. The fitting of splines represents a semiparametric approach to the problem. In the context of animal breeding, cubic smoothing splines are particularly convenient because they can be incorporated into a suitably constructed mixed model. The potential for the use of splines in modeling lactation curves is explored with a simple example, and the results are compared with those using a random regression model. The spline model provides greater flexibility at the cost of additional computation. Splines are shown to be capable of picking up features of the lactation curve that are missed by the random regression model.
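
    As a rough sketch of fitting a cubic smoothing spline to lactation-shaped data, SciPy's UnivariateSpline can be used; the data, the Wood-type curve, and all settings below are synthetic assumptions of mine, not the paper's mixed-model formulation:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
days = np.linspace(5.0, 305.0, 60)                     # days in milk (synthetic)
true_yield = 20.0 * days**0.2 * np.exp(-0.004 * days)  # Wood-type lactation curve
milk = true_yield + rng.normal(scale=0.5, size=days.size)

# The smoothing parameter s is a target for the residual sum of squares; a
# common heuristic is s = n * sigma^2 for an assumed noise standard deviation.
spl = UnivariateSpline(days, milk, k=3, s=days.size * 0.5**2)
fitted = spl(days)
```

    Varying `s` trades off smoothness against fidelity, which is the flexibility-versus-computation trade-off the abstract alludes to.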

  9. Recursive B-spline approximation using the Kalman filter

    Directory of Open Access Journals (Sweden)

    Jens Jauch

    2017-02-01

    Full Text Available This paper proposes a novel recursive B-spline approximation (RBA) algorithm which approximates an unbounded number of data points with a B-spline function and achieves lower computational effort compared with previous algorithms. Conventional recursive algorithms based on the Kalman filter (KF) restrict the approximation to a bounded and predefined interval. Conversely, RBA includes a novel shift operation that makes it possible to shift estimated B-spline coefficients in the state vector of a KF. This allows the interval in which the B-spline function can approximate data points to be adapted during run-time.
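
    The core recursion (B-spline coefficients as the Kalman filter state, with the basis-function values at each abscissa forming the measurement matrix) can be sketched as follows; the paper's novel shift operation for unbounded intervals is omitted, and the knot layout, noise settings, and test coefficients are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import BSpline

k, nseg = 3, 6
t = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, nseg + 1), [1.0] * k))
n = len(t) - k - 1                        # number of B-spline coefficients (= 9)

def basis_row(x):
    """Row of the measurement matrix: all basis-function values at a scalar x."""
    return np.array([BSpline(t, np.eye(n)[i], k)(x) for i in range(n)])

c_true = np.array([0.0, 0.5, 1.5, 1.0, 2.0, -0.5, 0.3, 1.2, 0.7])
rng = np.random.default_rng(1)
xs = rng.uniform(0.0, 1.0, 200)
ys = BSpline(t, c_true, k)(xs)            # noise-free measurements of a known spline

a = np.zeros(n)                           # coefficient estimate (KF state)
P = 1e3 * np.eye(n)                       # large prior covariance
R = 1e-4                                  # assumed measurement-noise variance
for x, y in zip(xs, ys):
    H = basis_row(x)
    K = P @ H / (H @ P @ H + R)           # Kalman gain for a scalar measurement
    a = a + K * (y - H @ a)
    P = P - np.outer(K, H) @ P
```

    With noise-free data the recursion converges to the generating coefficients; RBA's contribution is shifting this state vector so the active interval can follow the data stream.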

  10. Non polynomial B-splines

    Science.gov (United States)

    Laksâ, Arne

    2015-11-01

    B-splines are the de facto industrial standard for surface modelling in Computer Aided Design. They are comparable to flexible rods of wood or metal being bent: a flexible rod minimizes its energy when bending, and a third-degree polynomial spline curve minimizes its second derivatives. B-splines are a convenient way of representing polynomial splines; the representation connects polynomial splines to corner-cutting techniques, which induces many nice and useful properties. However, the B-spline representation can be expanded to what we may call general B-splines, i.e. both polynomial and non-polynomial splines. We show how this expansion can be done, the properties it induces, and examples of non-polynomial B-splines.

  11. Fast function-on-scalar regression with penalized basis expansions.

    Science.gov (United States)

    Reiss, Philip T; Huang, Lei; Mennes, Maarten

    2010-01-01

    Regression models for functional responses and scalar predictors are often fitted by means of basis functions, with quadratic roughness penalties applied to avoid overfitting. The fitting approach described by Ramsay and Silverman in the 1990s amounts to a penalized ordinary least squares (P-OLS) estimator of the coefficient functions. We recast this estimator as a generalized ridge regression estimator, and present a penalized generalized least squares (P-GLS) alternative. We describe algorithms by which both estimators can be implemented, with automatic selection of optimal smoothing parameters, in a more computationally efficient manner than has heretofore been available. We discuss pointwise confidence intervals for the coefficient functions, simultaneous inference by permutation tests, and model selection, including a novel notion of pointwise model selection. P-OLS and P-GLS are compared in a simulation study. Our methods are illustrated with an analysis of age effects in a functional magnetic resonance imaging data set, as well as a reanalysis of a now-classic Canadian weather data set. An R package implementing the methods is publicly available.

  12. A simultaneous confidence band for sparse longitudinal regression

    KAUST Repository

    Ma, Shujie

    2012-01-01

    Functional data analysis has received considerable recent attention and a number of successful applications have been reported. In this paper, asymptotically simultaneous confidence bands are obtained for the mean function of the functional regression model, using piecewise constant spline estimation. Simulation experiments corroborate the asymptotic theory. The confidence band procedure is illustrated by analyzing CD4 cell counts of HIV infected patients.

  13. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the many sensitivity analysis techniques in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF combines an orthogonal-polynomials kernel function with a Gaussian radial basis kernel function, so it possesses both the global characteristic advantage of the polynomial kernel and the local characteristic advantage of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Its performance is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
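
    A numpy sketch of the mixed-kernel idea, using kernel ridge regression as a simplified stand-in for the SVR meta-model; the weight `w`, the kernel settings, and the toy response surface are illustrative assumptions, and an ordinary polynomial kernel replaces the paper's orthogonal-polynomials kernel:

```python
import numpy as np

def mixed_kernel(X1, X2, w=0.5, degree=2, gamma=1.0):
    """Convex combination of a polynomial kernel and a Gaussian RBF kernel."""
    poly = (X1 @ X2.T + 1.0) ** degree
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    rbf = np.exp(-gamma * d2)
    return w * poly + (1.0 - w) * rbf

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(80, 2))
y = X[:, 0] ** 2 + np.sin(3.0 * X[:, 1])       # toy response surface

lam = 1e-3                                      # ridge regularization
alpha = np.linalg.solve(mixed_kernel(X, X) + lam * np.eye(len(X)), y)
y_fit = mixed_kernel(X, X) @ alpha              # in-sample predictions
```

    A convex combination of positive semi-definite kernels is itself positive semi-definite, which is why the mixture is a valid kernel for either SVR or kernel ridge regression.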

  14. Image edges detection through B-Spline filters

    International Nuclear Information System (INIS)

    Mastropiero, D.G.

    1997-01-01

    B-Spline signal processing was used to detect the edges of a digital image. This technique is based upon processing the image in the Spline transform domain, instead of in the space domain (classical processing). The transformation to the Spline transform domain means finding the real coefficients that make it possible to interpolate the grey levels of the original image with a B-Spline polynomial. There exist basically two methods of carrying out this interpolation, which gives rise to two different Spline transforms: an exact interpolation of the grey values (direct Spline transform), and an approximated interpolation (smoothing Spline transform). The latter results in a higher smoothness of the grey distribution function defined by the Spline transform coefficients, and is carried out with the aim of obtaining an edge detection algorithm with higher immunity to noise. Finally, the transformed image was processed in order to detect the edges of the original image (the gradient method was used), and the results of the three methods (classical, direct Spline transform and smoothing Spline transform) were compared. As expected, the smoothing Spline transform technique produced a detection algorithm more immune to external noise. On the other hand, the direct Spline transform technique emphasizes the edges even more than the classical method. As far as computing time is concerned, the classical method is clearly the fastest one, and may be applied whenever the presence of noise is not important and edges with high detail are not required in the final image. (author). 9 refs., 17 figs., 1 tab

  15. quadratic spline finite element method

    Directory of Open Access Journals (Sweden)

    A. R. Bahadir

    2002-01-01

    Full Text Available The problem of heat transfer in a Positive Temperature Coefficient (PTC thermistor, which may form one element of an electric circuit, is solved numerically by a finite element method. The approach used is based on Galerkin finite element using quadratic splines as shape functions. The resulting system of ordinary differential equations is solved by the finite difference method. Comparison is made with numerical and analytical solutions and the accuracy of the computed solutions indicates that the method is well suited for the solution of the PTC thermistor problem.

  16. About some properties of bivariate splines with shape parameters

    Science.gov (United States)

    Caliò, F.; Marchetti, E.

    2017-07-01

    The paper presents and proves geometrical properties of a particular bivariate spline function, built and algorithmically implemented in previous papers. The properties typical of this family of splines have an impact on the field of computer graphics, in particular on reverse engineering.

  17. Inferring gene expression dynamics via functional regression analysis

    Directory of Open Access Journals (Sweden)

    Leng Xiaoyan

    2008-01-01

    Full Text Available Abstract Background Temporal gene expression profiles characterize the time-dynamics of expression of specific genes and are increasingly collected in current gene expression experiments. In the analysis of experiments where gene expression is obtained over the life cycle, it is of interest to relate temporal patterns of gene expression associated with different developmental stages to each other to study patterns of long-term developmental gene regulation. We use tools from functional data analysis to study dynamic changes by relating temporal gene expression profiles of different developmental stages to each other. Results We demonstrate that functional regression methodology can pinpoint relationships that exist between temporal gene expression profiles for different life cycle phases and incorporates dimension reduction as needed for these high-dimensional data. By applying these tools, gene expression profiles for pupa and adult phases are found to be strongly related to the profiles of the same genes obtained during the embryo phase. Moreover, one can distinguish between gene groups that exhibit relationships with positive and others with negative associations between later life and embryonal expression profiles. Specifically, we find a positive relationship in expression for muscle development related genes, and a negative relationship for strictly maternal genes for Drosophila, using temporal gene expression profiles. Conclusion Our findings point to specific reactivation patterns of gene expression during the Drosophila life cycle which differ in characteristic ways between various gene groups. Functional regression emerges as a useful tool for relating gene expression patterns from different developmental stages, and avoids the problems with large numbers of parameters and multiple testing that affect alternative approaches.

  18. Dynamic prediction of cumulative incidence functions by direct binomial regression.

    Science.gov (United States)

    Grand, Mia K; de Witte, Theo J M; Putter, Hein

    2018-03-25

    In recent years there have been a series of advances in the field of dynamic prediction. Among those is the development of methods for dynamic prediction of the cumulative incidence function in a competing risk setting. These models enable the predictions to be updated as time progresses and more information becomes available, for example when a patient comes back for a follow-up visit after completing a year of treatment, the risk of death, and adverse events may have changed since treatment initiation. One approach to model the cumulative incidence function in competing risks is by direct binomial regression, where right censoring of the event times is handled by inverse probability of censoring weights. We extend the approach by combining it with landmarking to enable dynamic prediction of the cumulative incidence function. The proposed models are very flexible, as they allow the covariates to have complex time-varying effects, and we illustrate how to investigate possible time-varying structures using Wald tests. The models are fitted using generalized estimating equations. The method is applied to bone marrow transplant data and the performance is investigated in a simulation study. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Error bounds for two even degree tridiagonal splines

    Directory of Open Access Journals (Sweden)

    Gary W. Howell

    1990-01-01

    Full Text Available We study a C1 parabolic and a C2 quartic spline which are determined by solution of a tridiagonal matrix and which interpolate subinterval midpoints. In contrast to the cubic C2 spline, both of these algorithms converge to any continuous function as the length of the largest subinterval goes to zero, regardless of "mesh ratios". For parabolic splines, this convergence property was discovered by Marsden [1974]. The quartic spline introduced here achieves this convergence by choosing the second derivative zero at the breakpoints. Many of Marsden's bounds are substantially tightened here. We show that for functions with two or fewer continuous derivatives the quartic spline gives yet better bounds. Several of the bounds given here are optimal.
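
    Tridiagonal spline systems of this kind are conventionally solved with the Thomas algorithm; the following is a generic sketch, not tied to the particular parabolic or quartic schemes of the paper, and the example system is hypothetical:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system Tx = d with sub-diagonal a (a[0] unused),
    main diagonal b, and super-diagonal c (c[-1] unused), via forward
    elimination and back substitution in O(n) operations."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant system of the shape typical of midpoint-interpolating
# spline equations (1, 4, 1 on the three diagonals):
n = 8
a = np.ones(n); b = 4.0 * np.ones(n); c = np.ones(n)
d = np.arange(1.0, n + 1.0)
x = thomas(a, b, c, d)
```

    Diagonal dominance of the spline matrices is what makes the elimination stable without pivoting.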

  20. Deficiencies in the Theory of Free-Knot and Variable-Knot Spline ...

    African Journals Online (AJOL)

    This paper revisits the theory and practical implementation of graduation of mortality rates using spline functions, and in particular, variable-knot cubic spline graduation. The paper contrasts the actuarial literature on free-knot splines with the mathematical literature. It finds that the practical difficulties of implementing ...

  1. Input point distribution for regular stem form spline modeling

    Directory of Open Access Journals (Sweden)

    Karel Kuželka

    2015-04-01

    Full Text Available Aim of study: To optimize an interpolation method and distribution of measured diameters to represent the regular stem form of coniferous trees using a set of discrete points. Area of study: Central-Bohemian highlands, Czech Republic; a region that represents average stand conditions of production forests of Norway spruce (Picea abies [L.] Karst.) in central Europe. Material and methods: The accuracy of stem curves modeled using natural cubic splines from a set of measured diameters was evaluated for 85 closely measured stems of Norway spruce using five statistical indicators and compared to the accuracy of three additional models based on different spline types selected for their ability to represent stem curves. The optimal positions to measure diameters were identified using an aggregate objective function approach. Main results: The optimal positions of the input points vary depending on the properties of each spline type. If the optimal input points for each spline are used, then all spline types are able to give reasonable results with higher numbers of input points. The commonly used natural cubic spline was outperformed by other spline types. The lowest errors occur when interpolating the points using the Catmull-Rom spline, which gives accurate and unbiased volume estimates, even with only five input points. Research highlights: The study contributes to more accurate representation of stem form and therefore more accurate estimation of stem volume using data obtained from terrestrial imagery or other close-range remote sensing methods.
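
    The Catmull-Rom spline singled out in the results is a chain of cubic Hermite segments whose tangents are central differences of neighbouring points; a minimal sketch with hypothetical stem diameters (not the authors' code):

```python
import numpy as np

def catmull_rom(P, samples=20):
    """Sample a uniform Catmull-Rom spline through the 2D points P.

    Only the interior segments P[1]..P[-2] are drawn, as the scheme needs one
    extra neighbour on each side of a segment.
    """
    P = np.asarray(P, float)
    out = []
    for i in range(1, len(P) - 2):           # segment from P[i] to P[i+1]
        m0 = 0.5 * (P[i + 1] - P[i - 1])     # central-difference tangents
        m1 = 0.5 * (P[i + 2] - P[i])
        for t in np.linspace(0.0, 1.0, samples, endpoint=False):
            h00 = 2 * t**3 - 3 * t**2 + 1    # cubic Hermite basis
            h10 = t**3 - 2 * t**2 + t
            h01 = -2 * t**3 + 3 * t**2
            h11 = t**3 - t**2
            out.append(h00 * P[i] + h10 * m0 + h01 * P[i + 1] + h11 * m1)
    out.append(P[-2])                        # close at the last interior point
    return np.array(out)

# (height along stem in m, diameter in cm): hypothetical measurements
pts = np.array([[0.0, 32.0], [1.3, 28.5], [5.0, 24.0], [10.0, 18.0], [15.0, 9.0]])
curve = catmull_rom(pts)
```

    Unlike a natural cubic spline, each Catmull-Rom segment depends only on four nearby points, which is one reason it behaves well with few, well-placed input diameters.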

  2. Smoothing quadratic and cubic splines

    OpenAIRE

    Oukropcová, Kateřina

    2014-01-01

    Title: Smoothing quadratic and cubic splines Author: Kateřina Oukropcová Department: Department of Numerical Mathematics Supervisor: RNDr. Václav Kučera, Ph.D., Department of Numerical Mathematics Abstract: The aim of this bachelor thesis is to study the topic of smoothing quadratic and cubic splines on uniform partitions. First, we define the basic con- cepts in the field of splines, next we introduce interpolating splines with a focus on their minimizing properties for odd degree and quadra...

  3. Solving Buckmaster equation using cubic B-spline and cubic trigonometric B-spline collocation methods

    Science.gov (United States)

    Chanthrasuwan, Maveeka; Asri, Nur Asreenawaty Mohd; Hamid, Nur Nadiah Abd; Majid, Ahmad Abd.; Azmi, Amirah

    2017-08-01

    The cubic B-spline and cubic trigonometric B-spline functions are used to set up the collocation in finding solutions for the Buckmaster equation. These splines are applied as interpolating functions in the spatial dimension while the finite difference method (FDM) is used to discretize the time derivative. The Buckmaster equation is linearized using Taylor's expansion and solved using two schemes, namely Crank-Nicolson and fully implicit. The von Neumann stability analysis is carried out on the two schemes and they are shown to be conditionally stable. In order to demonstrate the capability of the schemes, some problems are solved and compared with analytical and FDM solutions. The proposed methods are found to generate more accurate results than the FDM.

  4. On the numerical stability of spline function approximations to solutions of Volterra integral equations of the second kind

    CERN Document Server

    El-Tom, M E A

    1974-01-01

    A procedure, using spline functions of degree m, deficiency k-1, for obtaining approximate solutions to nonlinear Volterra integral equations of the second kind is presented. The paper is an investigation of the numerical stability of the procedure for various values of m and k. (5 refs).

  5. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    Science.gov (United States)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. Our concern is to apply the methodology to smoothing experimental data where some knowledge about the approximate shape, local inhomogeneities, or points where the desired function changes its curvature is available a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  6. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    Full Text Available Support vector regression algorithms are widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression based on adaptive fusion of the mixed kernel function is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined together as the state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem, and we use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.

  7. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    Energy Technology Data Exchange (ETDEWEB)

    Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)

    2016-01-15

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called "data gaps", significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not in between two adjacent grid points. The proposed technique is found to give the most accurate results, and its computational time is short. It is thus feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
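
    The general recipe (transform positive, non-uniformly sampled data to log-log scale, interpolate with a local monotonicity-preserving cubic, transform back) can be sketched with SciPy's PCHIP interpolator standing in for the Steffen spline, since both are local, monotonicity-preserving cubic schemes; the sample values below are hypothetical:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# hypothetical positive, non-uniformly sampled optical data (e.g. an ELF)
E = np.array([0.1, 0.5, 2.0, 10.0, 80.0, 500.0])    # energy grid (eV)
elf = np.array([0.02, 0.15, 1.3, 0.9, 0.1, 0.004])  # sampled ELF values

# interpolate in log-log space, then map back to the linear scale
loglog = PchipInterpolator(np.log(E), np.log(elf))

def elf_interp(e):
    """Monotonicity-preserving interpolant of the ELF on the original scale."""
    return np.exp(loglog(np.log(e)))
```

    The log-log transform compresses the decades-wide energy grid into a nearly uniform mesh, which is the mesh-optimization effect the abstract describes.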

  8. SPLPKG WFCMPR WFAPPX, Wilson-Fowler Spline Generator for Computer Aided Design And Manufacturing (CAD/CAM) Systems

    International Nuclear Information System (INIS)

    Fletcher, S.K.

    2002-01-01

    1 - Description of program or function: The three programs SPLPKG, WFCMPR, and WFAPPX provide the capability for interactively generating, comparing and approximating Wilson-Fowler Splines. The Wilson-Fowler spline is widely used in Computer Aided Design and Manufacturing (CAD/CAM) systems. It is favored for many applications because it produces a smooth, low curvature fit to planar data points. Program SPLPKG generates a Wilson-Fowler spline passing through given nodes (with given end conditions) and also generates a piecewise linear approximation to that spline within a user-defined tolerance. The program may be used to generate a 'desired' spline against which to compare other Splines generated by CAD/CAM systems. It may also be used to generate an acceptable approximation to a desired spline in the event that an acceptable spline cannot be generated by the receiving CAD/CAM system. SPLPKG writes an IGES file of points evaluated on the spline and/or a file containing the spline description. Program WFCMPR computes the maximum difference between two Wilson-Fowler Splines and may be used to verify the spline recomputed by a receiving system. It compares two Wilson-Fowler Splines with common nodes and reports the maximum distance between curves (measured perpendicular to segments) and the maximum difference of their tangents (or normals), both computed along the entire length of the Splines. Program WFAPPX computes the maximum difference between a Wilson- Fowler spline and a piecewise linear curve. It may be used to accept or reject a proposed approximation to a desired Wilson-Fowler spline, even if the origin of the approximation is unknown. The maximum deviation between these two curves, and the parameter value on the spline where it occurs are reported. 2 - Restrictions on the complexity of the problem - Maxima of: 1600 evaluation points (SPLPKG), 1000 evaluation points (WFAPPX), 1000 linear curve breakpoints (WFAPPX), 100 spline Nodes

  9. Adaptive Linear and Normalized Combination of Radial Basis Function Networks for Function Approximation and Regression

    Directory of Open Access Journals (Sweden)

    Yunfeng Wu

    2014-01-01

    Full Text Available This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine component radial basis function networks (RBFNs) to implement better function approximation and regression tasks. The optimization of the fusion weights is obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform selective ensembling of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve higher accuracy (measured in terms of mean-squared error) and better fidelity (characterized by the normalized correlation coefficient) of approximation, in relation to the popular simple average, weighted average, and Bagging methods.

  10. Application of multivariate splines to discrete mathematics

    OpenAIRE

    Xu, Zhiqiang

    2005-01-01

    Using methods developed in multivariate splines, we present an explicit formula for discrete truncated powers, which are defined as the number of non-negative integer solutions of linear Diophantine equations. We further use the formula to study some classical problems in discrete mathematics as follows. First, we extend the partition function of integers in number theory. Second, we exploit the relation between the relative volume of convex polytopes and multivariate truncated powers and giv...

  11. Optimization of straight-sided spline design

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    Spline connection of shaft and hub is commonly applied when large torque capacity is needed together with the possibility of disassembly. The designs of these splines are generally controlled by different standards. In view of the common use of splines, it seems that few papers deal with splines ...

  12. Univariate Cubic L1 Interpolating Splines: Analytical Results for Linearity, Convexity and Oscillation on 5-Point Windows

    Directory of Open Access Journals (Sweden)

    Shu-Cherng Fang

    2010-07-01

    Full Text Available We analytically investigate univariate C1 continuous cubic L1 interpolating splines calculated by minimizing an L1 spline functional based on the second derivative on 5-point windows. Specifically, we link geometric properties of the data points in the windows with linearity, convexity and oscillation properties of the resulting L1 spline. These analytical results provide the basis for a computationally efficient algorithm for calculation of L1 splines on 5-point windows.

  13. An enhanced splined saddle method

    Science.gov (United States)

    Ghasemi, S. Alireza; Goedecker, Stefan

    2011-07-01

    We present modifications for the method recently developed by Granot and Baer [J. Chem. Phys. 128, 184111 (2008)], 10.1063/1.2916716. These modifications significantly enhance the efficiency and reliability of the method. In addition, we discuss some specific features of this method. These features provide important flexibilities which are crucial for a double-ended saddle point search method in order to be applicable to complex reaction mechanisms. Furthermore, it is discussed under what circumstances this method might fail to find the transition state, and remedies to avoid such situations are provided. We demonstrate the performance of the enhanced splined saddle method on several examples of increasing complexity: isomerization of ammonia, ethane and cyclopropane molecules, tautomerization of cytosine, the ring opening of cyclobutene, the Stone-Wales transformation of the C60 fullerene, and finally rolling a small NaCl cube on the NaCl(001) surface. All of these calculations are based on density functional theory. The efficiency of the method is remarkable with regard to the reduction of the total computational time.

  14. Comparative Analysis for Robust Penalized Spline Smoothing Methods

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-01-01

    Full Text Available Smoothing noisy data is commonly encountered in the engineering domain, and robust penalized regression spline models are currently perceived to be the most promising methods for coping with this issue, due to their flexibility in capturing the nonlinear trends in the data and effectively alleviating the disturbance from outliers. Against such a background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are re-elaborated starting from their origins, with their derivation processes reformulated and the corresponding algorithms reorganized under a unified framework. The performance of these two estimators is thoroughly evaluated in terms of fitting accuracy, robustness, and execution time on the MATLAB platform. Elaborate comparative experiments demonstrate that robust penalized spline smoothing methods possess the capability of resistance to the noise effect compared with the nonrobust penalized LS spline regression method. Furthermore, the M-estimator performs stably only for observations with moderate perturbation error, whereas the S-estimator behaves fairly well even for heavily contaminated observations, but consumes more execution time. These findings can serve as guidance for the selection of an appropriate approach for smoothing noisy data.
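
    The general idea of a robust penalized regression spline can be sketched compactly: build a spline basis, add a ridge penalty on the knot coefficients, and iteratively down-weight large residuals (a Huber-style M-step). This is an illustrative toy with a truncated-power basis and made-up data, not the paper's M- or S-estimators:

    ```python
    import numpy as np

    def pspline_fit(x, y, n_knots=10, lam=1.0, robust=True, n_iter=20):
        """Penalized regression spline (cubic truncated-power basis) with an
        optional Huber-type IRLS loop for robustness to outliers."""
        knots = np.linspace(x.min(), x.max(), n_knots + 2)[1:-1]
        B = np.column_stack([x**0, x, x**2, x**3] +
                            [np.clip(x - k, 0, None) ** 3 for k in knots])
        # Ridge penalty on the knot coefficients only, not the cubic trend.
        D = np.diag([0.0] * 4 + [1.0] * len(knots))
        w = np.ones_like(y)
        for _ in range(n_iter if robust else 1):
            W = np.diag(w)
            coef = np.linalg.solve(B.T @ W @ B + lam * D, B.T @ W @ y)
            r = y - B @ coef
            c = 1.345 * np.median(np.abs(r)) / 0.6745 + 1e-12  # Huber tuning
            w = np.minimum(1.0, c / (np.abs(r) + 1e-12))
        return B @ coef

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 300)
    truth = np.sin(2 * np.pi * x)
    y = truth + rng.normal(0, 0.1, x.size)
    y[::30] += 3.0                      # inject gross outliers
    fit = pspline_fit(x, y, lam=0.1)
    ```

    The IRLS weights shrink the influence of the injected outliers, so the fitted curve stays close to the underlying trend.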

  15. Contour interpolated radial basis functions with spline boundary correction for fast 3D reconstruction of the human articular cartilage from MR images

    Energy Technology Data Exchange (ETDEWEB)

    Javaid, Zarrar; Unsworth, Charles P., E-mail: c.unsworth@auckland.ac.nz [Department of Engineering Science, The University of Auckland, Auckland 1010 (New Zealand); Boocock, Mark G.; McNair, Peter J. [Health and Rehabilitation Research Center, Auckland University of Technology, Auckland 1142 (New Zealand)

    2016-03-15

    Purpose: The aim of this work is to demonstrate a new image processing technique that can provide a “near real-time” 3D reconstruction of the articular cartilage of the human knee from MR images and is user friendly. This would serve as a point-of-care 3D visualization tool which would benefit a consultant radiologist in the visualization of the human articular cartilage. Methods: The authors introduce a novel fusion of an adaptation of the contour method known as “contour interpolation (CI)” with radial basis functions (RBFs) which they describe as “CI-RBFs.” The authors also present a spline boundary correction which further enhances volume estimation of the method. A subject cohort consisting of 17 right nonpathological knees (ten female and seven male) is assessed to validate the quality of the proposed method. The authors demonstrate how the CI-RBF method dramatically reduces the number of data points required for fitting an implicit surface to the entire cartilage, thus significantly improving the speed of reconstruction over the comparable RBF reconstruction method of Carr. The authors compare the CI-RBF method volume estimation to a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Results: The authors demonstrate how the CI-RBF method significantly reduces the number of data points (p-value < 0.0001) required for fitting an implicit surface to the cartilage, by 48%, 31%, and 44% for the patellar, tibial, and femoral cartilages, respectively, thus significantly improving the speed of reconstruction (p-value < 0.0001) by 39%, 40%, and 44% for the patellar, tibial, and femoral cartilages over the comparable RBF model of Carr, providing a near real-time reconstruction of 6.49, 8.88, and 9.43 min for the patellar, tibial, and femoral cartilages, respectively. In addition, it is demonstrated how the CI-RBF method matches the volume

  16. Contour interpolated radial basis functions with spline boundary correction for fast 3D reconstruction of the human articular cartilage from MR images

    International Nuclear Information System (INIS)

    Javaid, Zarrar; Unsworth, Charles P.; Boocock, Mark G.; McNair, Peter J.

    2016-01-01


  17. Possibility of using the smoothed spline functions in approximation of average course of terrain inclinations caused by underground mining exploitation conducted at medium depth

    Science.gov (United States)

    Orwat, J.

    2018-01-01

    This paper presents a way of obtaining the average values of terrain inclinations caused by the exploitation of the 338/2 coal bed, conducted at medium depth by four longwalls. The inclinations were measured along sections of a measuring line established over the excavations, perpendicular to their runways, after the termination of subsequent exploitation stages. The average courses of the measured inclinations were calculated from the average values of measured subsidence, obtained by an average-square approximation using smooth splines, with reference to theoretical values calculated via the formulas of S. Knothe and J. Bialek using typical parameter values. Two average courses were thus obtained after the end of each exploitation period. The standard deviations between average and measured inclinations σI and the variability coefficients of random scattering of inclinations MI were calculated and compared with values appearing in the literature, and on this basis the possibility of using smooth splines to determine the average course of observed inclinations of the mining area was evaluated.

  18. Weighted cubic and biharmonic splines

    Science.gov (United States)

    Kvasov, Boris; Kim, Tae-Wan

    2017-01-01

    In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with successive over-relaxation method or finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate main features of this original approach.

  19. A Blossoming Development of Splines

    CERN Document Server

    Mann, Stephen

    2006-01-01

    In this lecture, we study Bezier and B-spline curves and surfaces, mathematical representations for free-form curves and surfaces that are common in CAD systems and are used to design aircraft and automobiles, as well as in modeling packages used by the computer animation industry. Bezier/B-splines represent polynomials and piecewise polynomials in a geometric manner using sets of control points that define the shape of the surface. The primary analysis tool used in this lecture is blossoming, which gives an elegant labeling of the control points that allows us to analyze their properties geom

  20. Symmetric, discrete fractional splines and Gabor systems

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2006-01-01

    In this paper we consider fractional splines as windows for Gabor frames. We introduce two new types of symmetric, fractional splines in addition to one found by Unser and Blu. For the finite, discrete case we present two families of splines: one is created by sampling and periodizing the continuous splines, and one is a truly finite, discrete construction. We discuss the properties of these splines and their usefulness as windows for Gabor frames and Wilson bases.

  1. PEMODELAN B-SPLINE DAN MARS PADA NILAI UJIAN MASUK TERHADAP IPK MAHASISWA JURUSAN DISAIN KOMUNIKASI VISUAL UK. PETRA SURABAYA

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2006-01-01

    Full Text Available Regression analysis is constructed for capturing the influences of independent variables on dependent ones. It can be done by looking at the relationship between those variables. This task of approximating the mean function can be done essentially in two ways. The quite often used parametric approach is to assume that the mean curve has some prespecified functional form. Alternatively, the nonparametric approach, i.e., without reference to a specific form, is used when there is no information about the form of the regression function (Haerdle, 1990). Therefore the nonparametric approach has more flexibility than the parametric one. The aim of this research is to find the best fit model that captures the relationship between admission test score and GPA. This particular data was taken from the Department of Design Communication and Visual, Petra Christian University, Surabaya for year 1999. Both approaches were used here. In the parametric approach, we use simple linear, quadratic, and cubic regression, and in the nonparametric one, we use B-Spline and Multivariate Adaptive Regression Splines (MARS). Overall, the best model was chosen based on the maximum determinant coefficient. However, for MARS, the best model was chosen based on the GCV, minimum MSE, and maximum determinant coefficient. Abstract in Bahasa Indonesia (translated): Regression analysis is used to examine the influence of independent variables on the dependent variable by first examining the pattern of the relationship between those variables. This can be done through two approaches. The most common and frequently used approach is the parametric approach, which assumes that the form of the model is predetermined. If there is no information about the form of the regression function, the nonparametric approach is used (Haerdle, 1990). Because this approach does not depend on the assumption of a particular curve form, it offers greater flexibility. The aim of this research
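
  A least-squares B-spline regression of the kind compared above can be sketched with SciPy. The data here are a hypothetical stand-in (a noiseless quadratic trend), not the admission-score/GPA data from the paper:

  ```python
  import numpy as np
  from scipy.interpolate import make_lsq_spline

  x = np.linspace(0.0, 1.0, 60)
  y = 2.0 + 1.5 * x**2            # stand-in trend for the demo
  k = 3                           # cubic B-splines
  # One interior knot at 0.5; boundary knots repeated k+1 times.
  t = np.concatenate(([x[0]] * (k + 1), [0.5], [x[-1]] * (k + 1)))
  spl = make_lsq_spline(x, y, t, k=k)
  ```

  Since a quadratic lies inside the cubic spline space, the least-squares fit reproduces this demo data essentially exactly; on real noisy data the knot placement controls the flexibility of the fit.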

  2. Numerical Methods Using B-Splines

    Science.gov (United States)

    Shariff, Karim; Merriam, Marshal (Technical Monitor)

    1997-01-01

    The seminar will discuss (1) The current range of applications for which B-spline schemes may be appropriate (2) The property of high-resolution and the relationship between B-spline and compact schemes (3) Comparison between finite-element, Hermite finite element and B-spline schemes (4) Mesh embedding using B-splines (5) A method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.

  3. Isogeometric analysis using T-splines

    KAUST Repository

    Bazilevs, Yuri

    2010-01-01

    We explore T-splines, a generalization of NURBS enabling local refinement, as a basis for isogeometric analysis. We review T-splines as a surface design methodology and then develop it for engineering analysis applications. We test T-splines on some elementary two-dimensional and three-dimensional fluid and structural analysis problems and attain good results in all cases. We summarize the current status of T-splines, their limitations, and future possibilities. © 2009 Elsevier B.V.

  4. Density Deconvolution With EPI Splines

    Science.gov (United States)

    2015-09-01

    (Only table-of-contents and figure-list fragments survive: Comparison of Deconvolution Methods; High-Fidelity and Low-Fidelity Simulation Output; Hydrofoil Concept; Notes on Computation Time; Epi-Spline Estimates; Deconvolution Method Comparison.)

  5. Shape preserving rational cubic spline for positive and convex data

    Directory of Open Access Journals (Sweden)

    Malik Zawwar Hussain

    2011-11-01

    Full Text Available In this paper, a shape preserving C2 rational cubic spline has been proposed. The shapes of positive and convex data are under discussion in the proposed spline solutions. A C2 rational cubic function with two families of free parameters has been introduced to attain C2 positive curves from positive data and C2 convex curves from convex data. Simple data dependent constraints are derived on the free parameters in the description of the rational cubic function to obtain the desired shape of the data. The rational cubic schemes have unique representations.

  6. Deriving Genomic Breeding Values for Residual Feed Intake from Covariance Functions of Random Regression Models

    DEFF Research Database (Denmark)

    Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne

    2014-01-01

    Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI...

  7. Detection of Differential Item Functioning with Nonlinear Regression: A Non-IRT Approach Accounting for Guessing

    Czech Academy of Sciences Publication Activity Database

    Drabinová, Adéla; Martinková, Patrícia

    2017-01-01

    Roč. 54, č. 4 (2017), s. 498-517 ISSN 0022-0655 R&D Projects: GA ČR GJ15-15856Y Institutional support: RVO:67985807 Keywords: differential item functioning * nonlinear regression * logistic regression * item response theory Subject RIV: AM - Education OBOR OECD: Statistics and probability Impact factor: 0.979, year: 2016

  8. Detection of Differential Item Functioning with Nonlinear Regression: A Non-IRT Approach Accounting for Guessing

    Science.gov (United States)

    Drabinová, Adéla; Martinková, Patrícia

    2017-01-01

    In this article we present a general approach not relying on item response theory models (non-IRT) to detect differential item functioning (DIF) in dichotomous items with presence of guessing. The proposed nonlinear regression (NLR) procedure for DIF detection is an extension of method based on logistic regression. As a non-IRT approach, NLR can…
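
    The logistic-regression baseline that the NLR procedure extends compares, via a likelihood-ratio test, a model of item response on the matching score against a model that adds group and group-by-score terms. A self-contained toy sketch with synthetic data (pure-numpy IRLS; not the authors' NLR code, and the simulated effect sizes are made up):

    ```python
    import numpy as np

    def logit_fit(X, y, n_iter=50):
        """Logistic regression by Newton-Raphson (IRLS); returns
        coefficients and the maximized log-likelihood."""
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            W = p * (1 - p)
            H = X.T @ (W[:, None] * X) + 1e-8 * np.eye(X.shape[1])
            beta = beta + np.linalg.solve(H, X.T @ (y - p))
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        ll = np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        return beta, ll

    def dif_lr_stat(score, group, resp):
        """LR statistic: null model (intercept + score) vs. model adding
        group and group-by-score terms; large values flag DIF."""
        ones = np.ones_like(score)
        X0 = np.column_stack([ones, score])
        X1 = np.column_stack([ones, score, group, group * score])
        _, ll0 = logit_fit(X0, resp)
        _, ll1 = logit_fit(X1, resp)
        return 2.0 * (ll1 - ll0)   # ~ chi-square, 2 df, under the null

    rng = np.random.default_rng(2)
    n = 2000
    score = rng.normal(0, 1, n)
    group = rng.integers(0, 2, n).astype(float)
    # Item with uniform DIF: the focal group finds it harder.
    p = 1.0 / (1.0 + np.exp(-(0.2 + 1.0 * score - 0.8 * group)))
    resp = (rng.random(n) < p).astype(float)
    stat = dif_lr_stat(score, group, resp)
    ```

    With a genuine group effect in the simulated item, the statistic lands far above the chi-square(2) critical value of 5.99.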

  9. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  10. Comparison of fractional splines with polynomial splines; An Application on under-five year’s child mortality data in Pakistan (1960-2012

    Directory of Open Access Journals (Sweden)

    Saira Esar Esar

    2017-06-01

    Full Text Available Cubic splines are commonly used for capturing changes in economic analysis. This is because traditional regression, including polynomial regression, fails to capture the underlying changes in the corresponding response variables. Moreover, these variables do not change monotonically, i.e. there are discontinuities in their trends over a period of time. The objective of this research is to explain the movement of under-five child mortality in Pakistan over the past few decades through a combination of statistical techniques. While cubic splines explain the movement of under-five child mortality to a large extent, we cannot deny the possibility that splines with fractional powers might better explain the underlying movement. Hence, we estimated the value of the fractional power by the nonlinear regression method and used it to develop the fractional splines. Although the fractional spline model may have the potential to improve upon the cubic spline model, it does not demonstrate a real improvement in this case, though it might with a different data set.
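
    Estimating a fractional power by nonlinear regression, as described above, amounts to a nonlinear least-squares fit. A minimal sketch (the model form and data are illustrative stand-ins, not the paper's fractional spline specification):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def frac_model(t, a, b, p):
        """Simple fractional-power trend a + b * t**p, an illustrative
        stand-in for a fractional spline piece."""
        return a + b * np.power(t, p)

    t = np.linspace(0.1, 5.0, 100)
    y = 1.0 + 2.0 * t**1.7               # noiseless data, known power 1.7
    (a, b, p), _ = curve_fit(frac_model, t, y, p0=[0.5, 1.0, 1.0])
    ```

    On noiseless data the fit recovers the generating power; on real data the estimated power would carry sampling uncertainty and is then plugged into the spline basis.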

  11. Solving Dym equation using quartic B-spline and quartic trigonometric B-spline collocation methods

    Science.gov (United States)

    Anuar, Hanis Safirah Saiful; Mafazi, Nur Hidayah; Hamid, Nur Nadiah Abd; Majid, Ahmad Abd.; Azmi, Amirah

    2017-08-01

    The nonlinear Dym equation is solved numerically using the quartic B-spline (QuBS) and quartic trigonometric B-spline (QuTBS) collocation methods. The QuBS and QuTBS are utilized as interpolating functions in the spatial dimension while the finite difference method (FDM) is applied to discretize the temporal space with the help of the theta-weighted method. The nonlinear term in the Dym equation is linearized using Taylor's expansion. Two schemes are performed on both methods, Crank-Nicolson and fully implicit. Applying the Von Neumann stability analysis, these schemes are found to be conditionally stable. Several numerical examples of different forms are discussed and compared in terms of errors with exact solutions and results from the FDM.

  12. B-spline solution of a singularly perturbed boundary value problem arising in biology

    International Nuclear Information System (INIS)

    Lin Bin; Li Kaitai; Cheng Zhengxing

    2009-01-01

    We use B-spline functions to develop a numerical method for solving a singularly perturbed boundary value problem arising in biology. We use the B-spline collocation method, which leads to a tridiagonal linear system. The accuracy of the proposed method is demonstrated by test problems. The numerical results are found to be in good agreement with the exact solution.
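
    Tridiagonal systems of the kind produced by B-spline collocation are solved in O(n) by the Thomas algorithm. A minimal sketch on a generic diagonally dominant system (the coefficients are illustrative, not those of the paper's collocation scheme):

    ```python
    import numpy as np

    def thomas_solve(a, b, c, d):
        """Solve a tridiagonal system: sub-diagonal a (len n-1), diagonal b
        (len n), super-diagonal c (len n-1), right-hand side d (len n)."""
        n = len(b)
        cp = np.empty(n - 1)
        dp = np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):                 # forward elimination
            m = b[i] - a[i - 1] * cp[i - 1]
            if i < n - 1:
                cp[i] = c[i] / m
            dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):        # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Diagonally dominant test system.
    n = 8
    a = np.full(n - 1, 1.0)
    b = np.full(n, 4.0)
    c = np.full(n - 1, 1.0)
    d = np.arange(1.0, n + 1)
    x = thomas_solve(a, b, c, d)
    ```

    The solution agrees with a dense solver while touching only the three diagonals, which is what makes collocation schemes of this type cheap.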

  13. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    Science.gov (United States)

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  14. Shape Preserving Interpolation Using C2 Rational Cubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2016-01-01

    Full Text Available This paper discusses the construction of a new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea has been extended to shape preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. The sufficient conditions for positivity are derived on one parameter γi while the other two parameters αi and βi are free parameters that can be used to change the final shape of the resulting interpolating curves. This will enable the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably our scheme is easy to use and does not require knot insertion, and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Comparisons with existing schemes have also been done in detail. From all presented numerical results the new C2 rational cubic spline gives very smooth interpolating curves compared to some established rational cubic schemes. An error analysis when the function to be interpolated is f(t) ∈ C3[t0, tn] is also investigated in detail.

  15. Analysis of Functional Data with Focus on Multinomial Regression and Multilevel Data

    DEFF Research Database (Denmark)

    Mousavi, Seyed Nourollah

    Functional data analysis (FDA) is a fast growing area in statistical research with an increasingly diverse range of applications, from economics, medicine, agriculture, chemometrics, etc. Functional regression is an area of FDA which has received the most attention both in aspects of application and methodological development. Our main …

  16. A cubic spline approximation for problems in fluid mechanics

    Science.gov (United States)

    Rubin, S. G.; Graves, R. A., Jr.

    1975-01-01

    A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.

  17. Viscous flow solutions with a cubic spline approximation

    Science.gov (United States)

    Rubin, S. G.; Graves, R. A., Jr.

    1975-01-01

    A cubic spline approximation is used for the solution of several problems in fluid mechanics. This procedure provides a high degree of accuracy even with a nonuniform mesh, and leads to a more accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several typical integration schemes are presented. For two-dimensional flows a spline-alternating-direction-implicit (SADI) method is evaluated. The spline procedure is assessed and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.

  18. Spline and spline wavelet methods with applications to signal and image processing

    CERN Document Server

    Averbuch, Amir Z; Zheludev, Valery A

    This volume provides universal methodologies accompanied by Matlab software to manipulate numerous signal and image processing applications. It is done with discrete and polynomial periodic splines. Various contributions of splines to signal and image processing are presented from a unified perspective. This presentation is based on the Zak transform and on the Spline Harmonic Analysis (SHA) methodology. SHA combines the approximation capabilities of splines with the computational efficiency of the Fast Fourier transform. SHA reduces the design of different spline types such as splines, spline wavelets (SW), wavelet frames (SWF) and wavelet packets (SWP), and their manipulation, to simple operations. Digital filters, produced by the wavelet design process, give birth to subdivision schemes. Subdivision schemes enable fast explicit computation of spline values at dyadic and triadic rational points. This is used for signal and image upsampling. In addition to the design of a diverse library of splines, SW, SWP a...

  19. Solution of higher order boundary value problems by spline methods

    Science.gov (United States)

    Chaurasia, Anju; Srivastava, P. C.; Gupta, Yogesh

    2017-10-01

    Spline solutions of boundary value problems have received much attention in recent years. They have proven to be a powerful tool due to their ease of use and quality of results. This paper surveys methods that approximate the solution of higher order BVPs using various spline functions. The purpose of this article is to thrash out the problems, as well as the conclusions, reached by the numerous authors in the related field. We critically assess many important relevant papers published in reputed journals during the last six years.

  20. Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.

    Science.gov (United States)

    Kainu, Annette; Timonen, Kirsi

    2016-07-01

    Background Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values, thus modern modelling approaches using smoothing spline functions to better depict the transition during growth and ageing have recently been introduced. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in tables. The aim of this study was to produce polynomial estimates for these spline functions to use in lieu of lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods Linear regression modelling was used to approximate the estimated values for Mspline and Sspline using similar polynomial functions as in the international GLI2012 reference values. Estimated values were compared to the original calculations in absolute values, the derived predicted mean, and individually calculated z-scores using both values. Results Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly at larger predicted volumes, with a range of -0.018 to +0.022 litres of FEV1 representing a maximum difference of ±0.4% in the predicted mean. Conclusions Polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values.
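
    The core step above, replacing a tabulated spline with a low-order polynomial, can be sketched in a few lines. The tabulated curve here is a made-up smooth function standing in for an Mspline lookup table, not the actual Finnish reference values:

    ```python
    import numpy as np

    # Hypothetical stand-in for a tabulated spline: ages and smooth
    # "Mspline" values (the real tables come from a GAMLSS fit).
    age = np.linspace(18.0, 90.0, 200)
    m_spline = 0.5 * np.exp(-((age - 40.0) / 35.0) ** 2)

    # Centre and scale the predictor, then approximate the lookup table
    # with a low-order polynomial.
    z = (age - age.mean()) / age.std()
    coef = np.polyfit(z, m_spline, deg=5)
    m_poly = np.polyval(coef, z)
    max_abs_diff = np.max(np.abs(m_poly - m_spline))
    ```

    For a smooth tabulated curve like this, a degree-5 polynomial on the centred predictor reproduces the table to well within clinical tolerance, which is what lets the polynomial stand in for the lookup table in practice.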

  1. A Factor Analytic and Regression Approach to Functional Age: Potential Effects of Race.

    Science.gov (United States)

    Colquitt, Alan L.; And Others

    Factor analysis and multiple regression are two major approaches used to look at functional age, which takes account of the extensive variation in the rate of physiological and psychological maturation throughout life. To examine the role of racial or cultural influences on the measurement of functional age, a battery of 12 tests concentrating on…

  2. A Generalized Logistic Regression Procedure to Detect Differential Item Functioning among Multiple Groups

    Science.gov (United States)

    Magis, David; Raiche, Gilles; Beland, Sebastien; Gerard, Paul

    2011-01-01

    We present an extension of the logistic regression procedure to identify dichotomous differential item functioning (DIF) in the presence of more than two groups of respondents. Starting from the usual framework of a single focal group, we propose a general approach to estimate the item response functions in each group and to test for the presence…

  3. Characterizing vaccine-associated risks using cubic smoothing splines.

    Science.gov (United States)

    Brookhart, M Alan; Walker, Alexander M; Lu, Yun; Polakowski, Laura; Li, Jie; Paeglow, Corrie; Puenpatom, Tosmai; Izurieta, Hector; Daniel, Gregory W

    2012-11-15

    Estimating risks associated with the use of childhood vaccines is challenging. The authors propose a new approach for studying short-term vaccine-related risks. The method uses a cubic smoothing spline to flexibly estimate the daily risk of an event after vaccination. The predicted incidence rates from the spline regression are then compared with the expected rates under a log-linear trend that excludes the days surrounding vaccination. The 2 models are then used to estimate the excess cumulative incidence attributable to the vaccination during the 42-day period after vaccination. Confidence intervals are obtained using a model-based bootstrap procedure. The method is applied to a study of known effects (positive controls) and expected noneffects (negative controls) of the measles, mumps, and rubella and measles, mumps, rubella, and varicella vaccines among children who are 1 year of age. The splines revealed well-resolved spikes in fever, rash, and adenopathy diagnoses, with the maximum incidence occurring between 9 and 11 days after vaccination. For the negative control outcomes, the spline model yielded a predicted incidence more consistent with the modeled day-specific risks, although there was evidence of increased risk of diagnoses of congenital malformations after vaccination, possibly because of a "provider visit effect." The proposed approach may be useful for vaccine safety surveillance.
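A schematic of the comparison on simulated daily counts: a cubic smoothing spline captures the day-specific risk, a log-linear trend is fitted excluding the days surrounding vaccination, and their difference gives the excess incidence. All rates, the exclusion window, and the smoothing level below are invented for illustration; the paper's model-based bootstrap for confidence intervals is omitted.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

# Simulated daily event counts over the 42-day post-vaccination window
# (synthetic data; a spike in risk peaks near day 10, as in the abstract).
days = np.arange(1.0, 43.0)
baseline_rate = 5.0 * np.exp(-0.01 * days)                # slowly declining trend
excess_rate = 15.0 * np.exp(-0.5 * (days - 10.0) ** 2)    # short-term excess risk
counts = rng.poisson(baseline_rate + excess_rate).astype(float)

# Flexible day-specific risk via a cubic smoothing spline.
spline = UnivariateSpline(days, counts, k=3, s=2.0 * len(days))

# Log-linear trend fitted while excluding the days surrounding vaccination.
keep = (days < 7) | (days > 14)
slope, intercept = np.polyfit(days[keep], np.log(counts[keep] + 0.5), 1)
trend = np.exp(intercept + slope * days)

# Excess cumulative incidence attributable to vaccination over the 42 days.
excess = float(np.sum(spline(days) - trend))
print(excess)
```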

  4. C2-rational cubic spline involving tension parameters

    Indian Academy of Sciences (India)

    preferred which preserves some of the characteristics of the function to be interpolated. In order to tackle such ... Shape preserving properties of the rational (cubic/quadratic) spline interpolant have been studied ... tension parameters which is used to interpolate the given monotonic data is described in. [6]. Shape preserving ...

  5. C2-rational cubic spline involving tension parameters

    Indian Academy of Sciences (India)

    In the present paper, 1-piecewise rational cubic spline function involving tension parameters is considered which produces a monotonic interpolant to a given monotonic data set. It is observed that under certain conditions the interpolant preserves the convexity property of the data set. The existence and uniqueness of a ...

  6. Towards Additive Manufacture of Functional, Spline-Based Morphometric Models of Healthy and Diseased Coronary Arteries: In Vitro Proof-of-Concept Using a Porcine Template

    Directory of Open Access Journals (Sweden)

    Rachel Jewkes

    2018-02-01

    Full Text Available The aim of this study is to assess the additive manufacture of morphometric models of healthy and diseased coronary arteries. Using a dissected porcine coronary artery, a model was developed with the use of computer-aided engineering, with splines used to design arteries in health and disease. The model was altered to demonstrate four cases of stenosis of varying severity, based on available published morphometric data. Both an Objet Eden 250 printer and a Solidscape 3Z Pro printer were used in this analysis. A wax-printed model was set into a flexible thermoplastic and was valuable for experimental testing, with helical flow patterns observed in healthy models dominating the distal LAD (left anterior descending) and left circumflex arteries. Recirculation zones were detected in all models but were visibly larger in the stenosed cases. Resin models provide useful analytical tools for understanding the spatial relationships of blood vessels, and could be applied to preoperative planning techniques, but were not suitable for physical testing. In conclusion, it is feasible to develop blood vessel models enabling experimental work; further, through additive manufacture of bio-compatible materials, there is the possibility of manufacturing customized replacement arteries.

  7. Considering a non-polynomial basis for local kernel regression problem

    Science.gov (United States)

    Silalahi, Divo Dharma; Midi, Habshah

    2017-01-01

    A commonly used solution to the local kernel nonparametric regression problem is polynomial regression. In this study, we demonstrate the estimator and its properties using maximum likelihood estimation with a non-polynomial basis, such as the B-spline, replacing the polynomial basis. This estimator allows for flexibility in the selection of a bandwidth and a knot. The best estimator was selected by finding an optimal bandwidth and knot through minimizing the generalized cross-validation function.

  8. Segmented Regression Based on Cut-off Polynomials

    Directory of Open Access Journals (Sweden)

    Miloš Kaňka

    2016-06-01

    Full Text Available In Statistika: Statistics and Economy Journal No. 4/2015 (pp. 39–58), the author's paper Segmented Regression Based on B-splines with Solved Examples was published. The use of B-spline basis functions has many advantages, the most important being the special form of the matrix of the system of normal equations, which permits quick solution of that system. The subject of this paper is to show how segmented regression can be developed mathematically in another way, one that does not require the relatively complicated theory of B-spline basis functions but is instead based on the simpler apparatus of cut-off polynomials. As his contribution to segmented regression, the author presents a detailed calculation of the elements of the matrix of the system of normal equations and elaborates a so-called polygonal method, which can be used to obtain the required values of the nodal points automatically. Major attention is paid to computing the elements of the matrix of the system of normal equations, which the author has also implemented in a computer program called TRIO.
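The cut-off (truncated) polynomial apparatus the paper builds on amounts to augmenting an ordinary polynomial design matrix with one truncated power column per nodal point. A minimal sketch with an invented knot and data:

```python
import numpy as np

def cutoff_power_basis(x, knots, degree=3):
    """Design matrix [1, x, ..., x^p, (x - k1)_+^p, ...] built from
    cut-off (truncated) power functions, one extra column per nodal point."""
    cols = [x ** d for d in range(degree + 1)]
    cols += [np.where(x > k, (x - k) ** degree, 0.0) for k in knots]
    return np.column_stack(cols)

# Piecewise-cubic test curve whose shape changes at the nodal point x = 5.
x = np.linspace(0.0, 10.0, 200)
y = 1.0 + 0.5 * x + np.where(x > 5.0, 0.3 * (x - 5.0) ** 3, 0.0)

# Ordinary least squares on the cut-off basis recovers the segmented fit.
X = cutoff_power_basis(x, knots=[5.0], degree=3)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ beta
print(float(np.max(np.abs(fit - y))))
```

Unlike the B-spline formulation, this basis produces a dense normal-equations matrix, which is the computational trade-off the author notes.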

  9. Quadrotor system identification using the multivariate multiplex b-spline

    NARCIS (Netherlands)

    Visser, T.; De Visser, C.C.; Van Kampen, E.J.

    2015-01-01

    A novel method for aircraft system identification is presented that is based on a new multivariate spline type; the multivariate multiplex B-spline. The multivariate multiplex B-spline is a generalization of the recently introduced tensor-simplex B-spline. Multivariate multiplex splines obtain

  10. IMPROVING CORRELATION FUNCTION FITTING WITH RIDGE REGRESSION: APPLICATION TO CROSS-CORRELATION RECONSTRUCTION

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Daniel J.; Newman, Jeffrey A., E-mail: djm70@pitt.edu, E-mail: janewman@pitt.edu [Department of Physics and Astronomy, University of Pittsburgh, 3941 O'Hara Street, Pittsburgh, PA 15260 (United States)

    2012-02-01

    Cross-correlation techniques provide a promising avenue for calibrating photometric redshifts and determining redshift distributions using spectroscopy which is systematically incomplete (e.g., current deep spectroscopic surveys fail to obtain secure redshifts for 30%-50% or more of the galaxies targeted). In this paper, we improve on the redshift distribution reconstruction methods from our previous work by incorporating full covariance information into our correlation function fits. Correlation function measurements are strongly covariant between angular or spatial bins, and accounting for this in fitting can yield substantial reduction in errors. However, frequently the covariance matrices used in these calculations are determined from a relatively small set (dozens rather than hundreds) of subsamples or mock catalogs, resulting in noisy covariance matrices whose inversion is ill-conditioned and numerically unstable. We present here a method of conditioning the covariance matrix known as ridge regression which results in a more well behaved inversion than other techniques common in large-scale structure studies. We demonstrate that ridge regression significantly improves the determination of correlation function parameters. We then apply these improved techniques to the problem of reconstructing redshift distributions. By incorporating full covariance information, applying ridge regression, and changing the weighting of fields in obtaining average correlation functions, we obtain reductions in the mean redshift distribution reconstruction error of as much as ~40% compared to previous methods. We provide a description of POWERFIT, an IDL code for performing power-law fits to correlation functions with ridge regression conditioning that we are making publicly available.
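A toy illustration of why a diagonal "ridge" tames the inversion of a covariance matrix estimated from few subsamples. The matrix, sample sizes, and ridge fraction below are invented, and the normalization in the actual POWERFIT implementation may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_sub = 10, 15   # few subsamples -> noisy covariance estimate

# Strongly covariant "correlation function" bins, as with adjacent angular bins.
true_cov = np.full((n_bins, n_bins), 0.9) + 0.1 * np.eye(n_bins)
draws = rng.multivariate_normal(np.zeros(n_bins), true_cov, size=n_sub)
noisy_cov = np.cov(draws, rowvar=False)

def ridge_condition(cov, frac=0.1):
    """Add a small multiple of the mean diagonal element to the diagonal
    ("ridge") so that inverting the noisy covariance is numerically stable.
    (A simplified form of the conditioning discussed in the abstract.)"""
    return cov + frac * np.mean(np.diag(cov)) * np.eye(cov.shape[0])

cond_before = np.linalg.cond(noisy_cov)
cond_after = np.linalg.cond(ridge_condition(noisy_cov))
print(cond_before, cond_after)
```

For a positive-definite matrix, adding a positive ridge strictly shrinks the ratio of the largest to the smallest eigenvalue, which is exactly what makes the inversion better behaved; the cost is a small, controllable bias in the fitted parameters.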

  11. Construction of local integro quintic splines

    Directory of Open Access Journals (Sweden)

    T. Zhanlav

    2016-06-01

    Full Text Available In this paper, we show that the integro quintic splines can locally be constructed without solving any systems of equations. The new construction does not require any additional end conditions. By virtue of these advantages the proposed algorithm is easy to implement and effective. At the same time, the local integro quintic splines possess as good approximation properties as the integro quintic splines. In this paper, we have proved that our local integro quintic spline has superconvergence properties at the knots for the first and third derivatives. The orders of convergence at the knots are six (not five) for the first derivative and four (not three) for the third derivative.

  12. Modelling Childhood Growth Using Fractional Polynomials and Linear Splines

    Science.gov (United States)

    Tilling, Kate; Macdonald-Wallis, Corrie; Lawlor, Debbie A.; Hughes, Rachael A.; Howe, Laura D.

    2014-01-01

    Background There is increasing emphasis in medical research on modelling growth across the life course and identifying factors associated with growth. Here, we demonstrate multilevel models for childhood growth either as a smooth function (using fractional polynomials) or a set of connected linear phases (using linear splines). Methods We related parental social class to height from birth to 10 years of age in 5,588 girls from the Avon Longitudinal Study of Parents and Children (ALSPAC). Multilevel fractional polynomial modelling identified the best-fitting model as being of degree 2 with powers of the square root of age, and the square root of age multiplied by the log of age. The multilevel linear spline model identified knot points at 3, 12 and 36 months of age. Results Both the fractional polynomial and linear spline models show an initially fast rate of growth, which slowed over time. Both models also showed that there was a disparity in length between manual and non-manual social class infants at birth, which decreased in magnitude until approximately 1 year of age and then increased. Conclusions Multilevel fractional polynomials give a more realistic smooth function, and linear spline models are easily interpretable. Each can be used to summarise individual growth trajectories and their relationships with individual-level exposures. PMID:25413651
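The degree-2 fractional-polynomial form reported as best fitting, with powers sqrt(age) and sqrt(age)·log(age), is linear in its coefficients and can be fitted by ordinary least squares. The trajectory and coefficients below are made up for illustration; they are not the ALSPAC estimates.

```python
import numpy as np

# Hypothetical mean height trajectory (cm) over ages 1-120 months, generated
# from the abstract's best-fitting fractional-polynomial form with invented
# coefficients: fast early growth that slows over time.
age = np.linspace(1.0, 120.0, 120)
height = 50.0 + 9.5 * np.sqrt(age) - 1.2 * np.sqrt(age) * np.log(age)

# Degree-2 fractional polynomial: design matrix with the two transformed-age
# terms, fitted by ordinary least squares.
X = np.column_stack([np.ones_like(age), np.sqrt(age), np.sqrt(age) * np.log(age)])
beta, *_ = np.linalg.lstsq(X, height, rcond=None)
print(beta)
```

A linear-spline alternative would instead use a truncated-power basis with knots (e.g. at 3, 12 and 36 months), trading the smooth curve for directly interpretable phase-specific slopes.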

  13. BFLCRM: A BAYESIAN FUNCTIONAL LINEAR COX REGRESSION MODEL FOR PREDICTING TIME TO CONVERSION TO ALZHEIMER'S DISEASE.

    Science.gov (United States)

    Lee, Eunjee; Zhu, Hongtu; Kong, Dehan; Wang, Yalin; Giovanello, Kelly Sullivan; Ibrahim, Joseph G

    2015-12-01

    The aim of this paper is to develop a Bayesian functional linear Cox regression model (BFLCRM) with both functional and scalar covariates. This new development is motivated by establishing the likelihood of conversion to Alzheimer's disease (AD) in 346 patients with mild cognitive impairment (MCI) enrolled in the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) and the early markers of conversion. These 346 MCI patients were followed over 48 months, with 161 MCI participants progressing to AD at 48 months. The functional linear Cox regression model was used to establish that functional covariates including hippocampus surface morphology and scalar covariates including brain MRI volumes, cognitive performance (ADAS-Cog), and APOE status can accurately predict time to onset of AD. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. A simulation study is performed to evaluate the finite sample performance of BFLCRM.

  14. Spline methods for conservation equations

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.

    1991-01-01

    We consider the numerical solution of physical theories, in particular hydrodynamics, which can be formulated as systems of conservation laws. To this end we briefly describe the Basis Spline and collocation methods, paying particular attention to representation theory, which provides discrete analogues of the continuum conservation and dispersion relations, and hence a rigorous understanding of errors and instabilities. On this foundation we propose an algorithm for hydrodynamic problems in which most linear and nonlinear instabilities are brought under control. Numerical examples are presented from one-dimensional relativistic hydrodynamics. 9 refs., 10 figs

  15. Comparative study of biodegradability prediction of chemicals using decision trees, functional trees, and logistic regression.

    Science.gov (United States)

    Chen, Guangchao; Li, Xuehua; Chen, Jingwen; Zhang, Ya-Nan; Peijnenburg, Willie J G M

    2014-12-01

    Biodegradation is the principal environmental dissipation process of chemicals. As such, it is a dominant factor determining the persistence and fate of organic chemicals in the environment, and is therefore of critical importance to chemical management and regulation. In the present study, the authors developed in silico methods assessing biodegradability based on a large heterogeneous set of 825 organic compounds, using the techniques of the C4.5 decision tree, the functional inner regression tree, and logistic regression. External validation was subsequently carried out by 2 independent test sets of 777 and 27 chemicals. As a result, the functional inner regression tree exhibited the best predictability with predictive accuracies of 81.5% and 81.0%, respectively, on the training set (825 chemicals) and test set I (777 chemicals). Performance of the developed models on the 2 test sets was subsequently compared with that of the Estimation Program Interface (EPI) Suite Biowin 5 and Biowin 6 models, which also showed a better predictability of the functional inner regression tree model. The model built in the present study exhibits a reasonable predictability compared with existing models while possessing a transparent algorithm. Interpretation of the mechanisms of biodegradation was also carried out based on the models developed. © 2014 SETAC.

  16. Testing for cubic smoothing splines under dependent data.

    Science.gov (United States)

    Nummi, Tapio; Pan, Jianxin; Siren, Tarja; Liu, Kun

    2011-09-01

    In most research on smoothing splines the focus has been on estimation, while inference, especially hypothesis testing, has received less attention. By defining design matrices for fixed and random effects and the structure of the covariance matrices of random errors in an appropriate way, the cubic smoothing spline admits a mixed model formulation, which places this nonparametric smoother firmly in a parametric setting. Thus nonlinear curves can be included with random effects and random coefficients. The smoothing parameter is the ratio of the random-coefficient and error variances and tests for linear regression reduce to tests for zero random-coefficient variances. We propose an exact F-test for the situation and investigate its performance in a real pine stem data set and by simulation experiments. Under certain conditions the suggested methods can also be applied when the data are dependent. © 2010, The International Biometric Society.

  17. On Solving Lq-Penalized Regressions

    Directory of Open Access Journals (Sweden)

    Tracy Zhou Wu

    2007-01-01

    Full Text Available Lq-penalized regression arises in multidimensional statistical modelling where all or part of the regression coefficients are penalized to achieve both accuracy and parsimony of statistical models. There is often substantial computational difficulty except for the quadratic penalty case. The difficulty is partly due to the nonsmoothness of the objective function inherited from the use of the absolute value. We propose a new solution method for the general Lq-penalized regression problem based on space transformation and thus efficient optimization algorithms. The new method has immediate applications in statistics, notably in penalized spline smoothing problems. In particular, the LASSO problem is shown to be polynomial time solvable. Numerical studies show promise of our approach.
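For context on the L1 special case mentioned above (the LASSO), the standard coordinate-descent solver repeatedly soft-thresholds each coefficient against its partial residual. This is the textbook algorithm, not the space-transformation method the paper proposes, and the data and penalty level below are made up:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the proximal map of the absolute value."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_sweeps=200):
    """Coordinate descent for min_b 0.5*||y - Xb||^2 + lam*||b||_1."""
    beta = np.zeros(X.shape[1])
    col_norms = np.sum(X ** 2, axis=0)
    for _ in range(n_sweeps):
        for j in range(X.shape[1]):
            # Partial residual excluding coordinate j, then a 1-D update.
            partial_resid = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ partial_resid, lam) / col_norms[j]
    return beta

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 10))
true_beta = np.array([3.0, -2.0] + [0.0] * 8)   # sparse ground truth
y = X @ true_beta + 0.1 * rng.standard_normal(100)

beta_hat = lasso_coordinate_descent(X, y, lam=8.0)
print(np.round(beta_hat, 3))
```

The nonsmoothness of the absolute value is handled here inside the one-dimensional subproblem, which is exactly the difficulty the paper's space transformation removes in the general Lq setting.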

  18. Differential item functioning (DIF) analyses of health-related quality of life instruments using logistic regression

    DEFF Research Database (Denmark)

    Scott, Neil W; Fayers, Peter M; Aaronson, Neil K

    2010-01-01

    Differential item functioning (DIF) methods can be used to determine whether different subgroups respond differently to particular items within a health-related quality of life (HRQoL) subscale, after allowing for overall subgroup differences in that scale. This article reviews issues that arise when testing for DIF in HRQoL instruments. We focus on logistic regression methods, which are often used because of their efficiency, simplicity and ease of application.

  20. Differential item functioning analysis with ordinal logistic regression techniques. DIFdetect and difwithpar.

    Science.gov (United States)

    Crane, Paul K; Gibbons, Laura E; Jolley, Lance; van Belle, Gerald

    2006-11-01

    We present an ordinal logistic regression model for identification of items with differential item functioning (DIF) and apply this model to a Mini-Mental State Examination (MMSE) dataset. We employ item response theory ability estimation in our models. Three nested ordinal logistic regression models are applied to each item. Model testing begins with examination of the statistical significance of the interaction term between ability and the group indicator, consistent with nonuniform DIF. Then we turn our attention to the coefficient of the ability term in models with and without the group term. If including the group term has a marked effect on that coefficient, we declare that it has uniform DIF. We examined DIF related to language of test administration in addition to self-reported race, Hispanic ethnicity, age, years of education, and sex. We used PARSCALE for IRT analyses and STATA for ordinal logistic regression approaches. We used an iterative technique for adjusting IRT ability estimates on the basis of DIF findings. Five items were found to have DIF related to language. These same items also had DIF related to other covariates. The ordinal logistic regression approach to DIF detection, when combined with IRT ability estimates, provides a reasonable alternative for DIF detection. There appear to be several items with significant DIF related to language of test administration in the MMSE. More attention needs to be paid to the specific criteria used to determine whether an item has DIF, not just the technique used to identify DIF.

  1. Differential item functioning (DIF) analyses of health-related quality of life instruments using logistic regression.

    Science.gov (United States)

    Scott, Neil W; Fayers, Peter M; Aaronson, Neil K; Bottomley, Andrew; de Graeff, Alexander; Groenvold, Mogens; Gundy, Chad; Koller, Michael; Petersen, Morten A; Sprangers, Mirjam A G

    2010-08-04

    Differential item functioning (DIF) methods can be used to determine whether different subgroups respond differently to particular items within a health-related quality of life (HRQoL) subscale, after allowing for overall subgroup differences in that scale. This article reviews issues that arise when testing for DIF in HRQoL instruments. We focus on logistic regression methods, which are often used because of their efficiency, simplicity and ease of application. A review of logistic regression DIF analyses in HRQoL was undertaken. Methodological articles from other fields and using other DIF methods were also included if considered relevant. There are many competing approaches for the conduct of DIF analyses and many criteria for determining what constitutes significant DIF. DIF in short scales, as commonly found in HRQL instruments, may be more difficult to interpret. Qualitative methods may aid interpretation of such DIF analyses. A number of methodological choices must be made when applying logistic regression for DIF analyses, and many of these affect the results. We provide recommendations based on reviewing the current evidence. Although the focus is on logistic regression, many of our results should be applicable to DIF analyses in general. There is a need for more empirical and theoretical work in this area.
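The logistic regression DIF approach discussed in these records can be sketched as a likelihood-ratio comparison of nested models: item response on ability alone versus ability plus group membership (uniform DIF). The Newton-Raphson fitter and all simulated quantities below are illustrative assumptions; real analyses would use an observed ability estimate or total score rather than the simulated latent ability.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via Newton-Raphson,
    returning the coefficients and the maximized log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        hess = X.T @ (X * w[:, None]) + 1e-8 * np.eye(X.shape[1])
        beta += np.linalg.solve(hess, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    loglik = float(np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))
    return beta, loglik

# Simulated item responses with uniform DIF: at equal ability, the focal
# group (group = 1) finds the item 0.8 logits harder.
rng = np.random.default_rng(3)
n = 4000
ability = rng.standard_normal(n)
group = rng.integers(0, 2, n).astype(float)
prob = 1.0 / (1.0 + np.exp(-(1.2 * ability - 0.8 * group)))
y = (rng.random(n) < prob).astype(float)

ones = np.ones(n)
_, ll0 = fit_logistic(np.column_stack([ones, ability]), y)          # ability only
_, ll1 = fit_logistic(np.column_stack([ones, ability, group]), y)   # + group term

# Likelihood-ratio statistic for uniform DIF, ~ chi-square(1) under no DIF.
lr_uniform = 2.0 * (ll1 - ll0)
print(lr_uniform)
```

Nonuniform DIF would be tested the same way by adding an ability-by-group interaction column and comparing against the uniform-DIF model.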

  2. Numerical Solutions for Convection-Diffusion Equation through Non-Polynomial Spline

    Directory of Open Access Journals (Sweden)

    Ravi Kanth A.S.V.

    2016-01-01

    Full Text Available In this paper, numerical solutions for the convection-diffusion equation via non-polynomial splines are studied. We propose an implicit method based on non-polynomial spline functions for solving the convection-diffusion equation. The method is proven to be unconditionally stable by using the Von Neumann technique. Numerical results are illustrated to demonstrate the efficiency and stability of the proposed method.

  3. Positivity Preserving Interpolation Using Rational Bicubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2015-01-01

    Full Text Available This paper discusses positivity-preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, 8 of which are free parameters. The sufficient conditions for positivity are derived on every four-boundary-curves network on the rectangular patch. Numerical comparison with existing schemes has also been done in detail. Based on Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.

  4. Data interpolation using rational cubic Ball spline with three parameters

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul

    2016-11-01

    Data interpolation is an important task for scientific visualization. This research introduces new rational cubic Ball spline scheme with three parameters. The rational cubic Ball will be used for data interpolation with or without true derivative values. Error estimation show that the proposed scheme works well and is a very good interpolant to approximate the function. All graphical examples are presented by using Mathematica software.

  5. Smoothing noisy spectroscopic data with many-knot spline method

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, M.H. [Space Exploration Laboratory, Macau University of Science and Technology, Taipa, Macau (China)], E-mail: peter_zu@163.com; Liu, L.G.; Qi, D.X.; You, Z.; Xu, A.A. [Space Exploration Laboratory, Macau University of Science and Technology, Taipa, Macau (China)

    2008-05-15

    In this paper, we present the development of a many-knot spline method derived to remove the statistical noise in the spectroscopic data. This method is an expansion of the B-spline method. Compared to the B-spline method, the many-knot spline method is significantly faster.

  6. Optimal Approximation of Biquartic Polynomials by Bicubic Splines

    Directory of Open Access Journals (Sweden)

    Kačala Viliam

    2018-01-01

    The goal of this paper is to resolve this problem. Unlike spline curves, in the case of spline surfaces it is insufficient to suppose that the grid is uniform and the spline derivatives are computed from a biquartic polynomial. We show that the biquartic polynomial coefficients have to satisfy some additional constraints to achieve optimal approximation by bicubic splines.

  7. Reference Function Based Spatiotemporal Fuzzy Logic Control Design Using Support Vector Regression Learning

    Directory of Open Access Journals (Sweden)

    Xian-Xia Zhang

    2013-01-01

    Full Text Available This paper presents a reference function based 3D FLC design methodology using support vector regression (SVR) learning. The concept of reference function is introduced to the 3D FLC for the generation of 3D membership functions (MFs), which enhances the capability of the 3D FLC to cope with more kinds of MFs. The nonlinear mathematical expression of the reference function based 3D FLC is derived, and spatial fuzzy basis functions are defined. By relating the spatial fuzzy basis functions of a 3D FLC to the kernel functions of an SVR, an equivalence relationship between the 3D FLC and the SVR is established. Therefore, a 3D FLC can be constructed using the learned results of an SVR. Furthermore, the universal approximation capability of the proposed 3D fuzzy system is proven in terms of the finite covering theorem. Finally, the proposed method is applied to a catalytic packed-bed reactor, and simulation results have verified its effectiveness.

  8. Spline interpolations besides wood model widely used in lactation

    Science.gov (United States)

    Korkmaz, Mehmet

    2017-04-01

    In this study, spline interpolations, an alternative modelling approach that passes exactly through all data points, are discussed for the lactation curve and compared with the widely used Wood model applied to lactation data. These models are the linear spline, the quadratic spline and the cubic spline. The observed and estimated values from the spline interpolations and the Wood model are given together with their error sums of squares, and the lactation curves of the spline interpolations and the Wood model are shown on the same graph so that the differences can be observed. Estimates for some intermediate values were made using both the spline interpolations and the Wood model. The spline interpolations gave more precise estimates of intermediate values; in particular, their predictions for missing or incorrect observations were very successful compared with those of the Wood model. Spline interpolations thus offer investigators new ideas and interpretations in addition to the information of the well-known classical analysis.
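The three interpolants discussed (linear, quadratic, cubic) can be sketched against a Wood-type curve a * t^b * exp(-c*t). The monthly test-day values below are synthetic, generated from invented Wood parameters, not the study's data:

```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

# Hypothetical monthly test-day milk yields (kg/day), generated from a
# Wood-type lactation curve a * t^b * exp(-c*t) with invented parameters.
t = np.arange(1.0, 11.0)
y = 20.0 * t ** 0.25 * np.exp(-0.08 * t)

# The three spline interpolants compared in the study.
linear = interp1d(t, y, kind="linear")
quadratic = interp1d(t, y, kind="quadratic")
cubic = CubicSpline(t, y)

# Every interpolant reproduces the observed records exactly, so intermediate
# days such as t = 4.5 can be estimated directly from the fitted curve.
print(float(cubic(4.5)), float(quadratic(4.5)), float(linear(4.5)))
```

The Wood model, by contrast, is fitted by (nonlinear) regression and generally does not pass through the observations, which is why the spline estimates of intermediate values track the data more closely.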

  9. Numerical solution of system of boundary value problems using B-spline with free parameter

    Science.gov (United States)

    Gupta, Yogesh

    2017-01-01

    This paper deals with a B-spline method for the solution of a system of boundary value problems. Differential equations are useful in various fields of science and engineering, and some interesting real-life problems involve more than one unknown function, resulting in systems of simultaneous differential equations. Such systems arise in many problems of mathematics, physics, engineering, etc. In the present paper, B-spline and B-spline-with-free-parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points, together with the equations of the given system and the boundary conditions, leading to a linear matrix equation.

  10. Spline Variational Theory for Composite Bolted Joints

    National Research Council Canada - National Science Library

    Iarve, E

    1997-01-01

    .... Two approaches were implemented. A conventional mesh overlay method in the crack region to satisfy the crack face boundary conditions and a novel spline basis partitioning method were compared...

  11. Cortical localization of cognitive function by regression of performance on event-related potentials.

    Science.gov (United States)

    Montgomery, R W; Montgomery, L D; Guisado, R

    1992-10-01

    This paper demonstrates a new method of mapping cortical localization of cognitive function, using electroencephalographic (EEG) data. Cross-subject regression analyses are used to identify cortical sites and post-stimulus latencies where there is a high correlation between subjects' performance and their cognitive event-related potential (ERP) amplitude. The procedure was tested using a mental arithmetic task and was found to identify essentially the same cortical regions that have been associated with such tasks on the basis of research with patients suffering localized cortical lesions. Thus, it appears to offer an inexpensive, noninvasive tool for exploring the dynamics of localization in neurologically normal subjects.

  12. Cortical localization of cognitive function by regression of performance on event-related potentials

    Science.gov (United States)

    Montgomery, R. W.; Montgomery, L. D.; Guisado, R.

    1992-01-01

    This paper demonstrates a new method of mapping cortical localization of cognitive function, using electroencephalographic data. Cross-subject regression analyses are used to identify cortical sites and post-stimulus latencies where there is a high correlation between subjects' performance and their cognitive event-related potential amplitude. The procedure was tested using a mental arithmetic task and was found to identify essentially the same cortical regions that have been associated with such tasks on the basis of research with patients suffering localized cortical lesions. Thus, it appears to offer an inexpensive, noninvasive tool for exploring the dynamics of localization in neurologically normal subjects.

  13. LINKING LUNG AIRWAY STRUCTURE TO PULMONARY FUNCTION VIA COMPOSITE BRIDGE REGRESSION.

    Science.gov (United States)

    Chen, Kun; Hoffman, Eric A; Seetharaman, Indu; Jiao, Feiran; Lin, Ching-Long; Chan, Kung-Sik

    2016-12-01

    The human lung airway is a complex inverted tree-like structure. Detailed airway measurements can be extracted from MDCT-scanned lung images, such as segmental wall thickness, airway diameter, parent-child branch angles, etc. The wealth of lung airway data provides a unique opportunity for advancing our understanding of the fundamental structure-function relationships within the lung. An important problem is to construct and identify important lung airway features in normal subjects and connect these to standardized pulmonary function test results such as FEV1%. Among other things, the problem is complicated by the fact that a particular airway feature may be an important (relevant) predictor only when it pertains to segments of certain generations. Thus, the key is an efficient, consistent method for simultaneously conducting group selection (lung airway feature types) and within-group variable selection (airway generations), i.e., bi-level selection. Here we streamline a comprehensive procedure to process the lung airway data via imputation, normalization, transformation and groupwise principal component analysis, and then adopt a new composite penalized regression approach for conducting bi-level feature selection. As a prototype of composite penalization, the proposed composite bridge regression method is shown to admit an efficient algorithm, enjoy bi-level oracle properties, and outperform several existing methods. We analyze the MDCT lung image data from a cohort of 132 subjects with normal lung function. Our results show that lung function in terms of FEV1% is promoted by having a less dense and more homogeneous lung comprising an airway whose segments enjoy more heterogeneity in wall thicknesses, larger mean diameters, lumen areas and branch angles. These data hold the potential of defining more accurately the "normal" subject population with borderline atypical lung functions that are clearly influenced by many genetic and environmental factors.

  14. NMDA Receptor Regulation Prevents Regression of Visual Cortical Function in the Absence of Mecp2

    Science.gov (United States)

    Durand, Severine; Patrizi, Annarita; Quast, Kathleen B.; Hachigian, Lea; Pavlyuk, Roman; Saxena, Alka; Carninci, Piero; Hensch, Takao K.; Fagiolini, Michela

    2012-01-01

    Brain function is shaped by postnatal experience and vulnerable to disruption of Methyl-CpG-binding protein, Mecp2, in multiple neurodevelopmental disorders. How Mecp2 contributes to the experience-dependent refinement of specific cortical circuits and their impairment remains unknown. We analyzed vision in gene-targeted mice and observed an initial normal development in the absence of Mecp2. Visual acuity then rapidly regressed after postnatal day P35–40 and cortical circuits largely fell silent by P55–60. Enhanced inhibitory gating and an excess of parvalbumin-positive, perisomatic input preceded the loss of vision. Both cortical function and inhibitory hyperconnectivity were strikingly rescued independent of Mecp2 by early sensory deprivation or genetic deletion of the excitatory NMDA receptor subunit, NR2A. Thus, vision is a sensitive biomarker of progressive cortical dysfunction and may guide novel, circuit-based therapies for Mecp2 deficiency. PMID:23259945

  15. Functional Unfold Principal Component Regression Methodology for Analysis of Industrial Batch Process Data

    DEFF Research Database (Denmark)

    Mears, Lisa; Nørregaard, Rasmus; Sin, Gürkan

    2016-01-01

    This work proposes a methodology utilizing functional unfold principal component regression (FUPCR), for application to industrial batch process data as a process modeling and optimization tool. The methodology is applied to an industrial fermentation dataset, containing 30 batches of a production process operating at Novozymes A/S. Following the FUPCR methodology, the final product concentration could be predicted with an average prediction error of 7.4%. Multiple iterations of preprocessing were applied by implementing the methodology to identify the best data handling methods for the model. It is shown that application of functional data analysis and the choice of variance scaling method have the greatest impact on the prediction accuracy. Considering the vast amount of batch process data continuously generated in industry, this methodology can potentially contribute as a tool to identify ...

  16. Solving nonlinear Benjamin-Bona-Mahony equation using cubic B-spline and cubic trigonometric B-spline collocation methods

    Science.gov (United States)

    Rahan, Nur Nadiah Mohd; Ishak, Siti Noor Shahira; Hamid, Nur Nadiah Abd; Majid, Ahmad Abd.; Azmi, Amirah

    2017-04-01

    In this research, the nonlinear Benjamin-Bona-Mahony (BBM) equation is solved numerically using the cubic B-spline (CuBS) and cubic trigonometric B-spline (CuTBS) collocation methods. The CuBS and CuTBS are utilized as interpolating functions in the spatial dimension while the standard finite difference method (FDM) is applied to discretize the temporal space. In order to solve the nonlinear problem, the BBM equation is linearized using Taylor's expansion. Applying the von-Neumann stability analysis, the proposed techniques are shown to be unconditionally stable under the Crank-Nicolson scheme. Several numerical examples are discussed and compared with exact solutions and results from the FDM.
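
    Collocation methods of this kind lean on basic properties of the cubic B-spline basis. As a minimal sketch (not the authors' scheme), the uniform cubic B-spline can be evaluated piecewise and checked for partition of unity, the property that lets the basis reproduce constants exactly:

```python
def cubic_bspline(t):
    """Uniform cubic B-spline supported on [0, 4)."""
    if t < 0.0 or t >= 4.0:
        return 0.0
    if t < 1.0:
        return t**3 / 6.0
    if t < 2.0:
        return (-3*t**3 + 12*t**2 - 12*t + 4) / 6.0
    if t < 3.0:
        return (3*t**3 - 24*t**2 + 60*t - 44) / 6.0
    return (4.0 - t)**3 / 6.0

def basis_sum(x):
    """Sum of the integer-shifted basis functions covering x."""
    return sum(cubic_bspline(x - k) for k in range(-4, 8))

# Partition of unity: the shifted cubic B-splines sum to one at any
# interior point, here checked on a grid of evaluation points.
vals = [basis_sum(0.25 * i) for i in range(1, 16)]
```

    The peak value 2/3 at the center of the support and the 1/6 values at the interior knots are what make the collocation system banded and well behaved.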

  17. Lectures on constructive approximation Fourier, spline, and wavelet methods on the real line, the sphere, and the ball

    CERN Document Server

    Michel, Volker

    2013-01-01

    Lectures on Constructive Approximation: Fourier, Spline, and Wavelet Methods on the Real Line, the Sphere, and the Ball focuses on spherical problems as they occur in the geosciences and medical imaging. It comprises the author’s lectures on classical approximation methods based on orthogonal polynomials and selected modern tools such as splines and wavelets. Methods for approximating functions on the real line are treated first, as they provide the foundations for the methods on the sphere and the ball and are useful for the analysis of time-dependent (spherical) problems. The author then examines the transfer of these spherical methods to problems on the ball, such as the modeling of the Earth’s or the brain’s interior. Specific topics covered include: * the advantages and disadvantages of Fourier, spline, and wavelet methods * theory and numerics of orthogonal polynomials on intervals, spheres, and balls * cubic splines and splines based on reproducing kernels * multiresolution analysis using wavelet...

  18. Regression analysis of the structure function for reliability evaluation of continuous-state system

    Energy Technology Data Exchange (ETDEWEB)

    Gamiz, M.L., E-mail: mgamiz@ugr.e [Departamento de Estadistica e I.O., Facultad de Ciencias, Universidad de Granada, Granada 18071 (Spain); Martinez Miranda, M.D. [Departamento de Estadistica e I.O., Facultad de Ciencias, Universidad de Granada, Granada 18071 (Spain)

    2010-02-15

    Technical systems are designed to perform an intended task with an admissible range of efficiency. According to this idea, it is permissible that the system runs among different levels of performance, in addition to complete failure and the perfect functioning one. As a consequence, reliability theory has evolved from binary-state systems to the most general case of continuous-state system, in which the state of the system changes over time through some interval on the real number line. In this context, obtaining an expression for the structure function becomes difficult, compared to the discrete case, with difficulty increasing as the number of components of the system increases. In this work, we propose a method to build a structure function for a continuum system by using multivariate nonparametric regression techniques, in which certain analytical restrictions on the variable of interest must be taken into account. Once the structure function is obtained, some reliability indices of the system are estimated. We illustrate our method via several numerical examples.
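
    The paper's idea, estimating a structure function from observed component and system performance levels by multivariate nonparametric regression, can be sketched with a Nadaraya-Watson kernel estimator. The two-component structure function phi(u, v) = u*v below is a hypothetical stand-in, and the sketch omits the analytical (e.g. monotonicity) restrictions the authors impose:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated observations: component performance levels in [0, 1]
# and the resulting continuous system performance level.
U = rng.uniform(0.0, 1.0, size=(2000, 2))
y = U[:, 0] * U[:, 1]          # hypothetical "series-like" structure function

def nw_estimate(x, U, y, h=0.1):
    """Nadaraya-Watson multivariate kernel regression estimate at x."""
    d2 = ((U - x)**2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * h**2))
    return float((w * y).sum() / w.sum())

# Estimated system performance when both components run at level 0.5.
phi_hat = nw_estimate(np.array([0.5, 0.5]), U, y)
```

    Once such an estimate of the structure function is available, reliability indices can be computed from it by plugging in component state distributions.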

  19. Logistic regression function for detection of suspicious performance during baseline evaluations using concussion vital signs.

    Science.gov (United States)

    Hill, Benjamin David; Womble, Melissa N; Rohling, Martin L

    2015-01-01

    This study utilized logistic regression to determine whether performance patterns on Concussion Vital Signs (CVS) could differentiate known groups with either genuine or feigned performance. For the embedded measure development group (n = 174), clinical patients and undergraduate students categorized as feigning obtained significantly lower scores on the overall test battery mean for the CVS, Shipley-2 composite score, and California Verbal Learning Test-Second Edition subtests than did genuinely performing individuals. The final full model of 3 predictor variables (Verbal Memory immediate hits, Verbal Memory immediate correct passes, and Stroop Test complex reaction time correct) was significant and correctly classified individuals in their known group 83% of the time (sensitivity = .65; specificity = .97) in a mixed sample of young-adult clinical cases and simulators. The CVS logistic regression function was applied to a separate undergraduate college group (n = 378) that was asked to perform genuinely and identified 5% as having possibly feigned performance indicating a low false-positive rate. The failure rate was 11% and 16% at baseline cognitive testing in samples of high school and college athletes, respectively. These findings have particular relevance given the increasing use of computerized test batteries for baseline cognitive testing and return-to-play decisions after concussion.
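
    The detection logic described, a logistic function over a few test-score predictors, with sensitivity and specificity read off known groups, can be sketched on simulated data (the two predictors and group labels below are synthetic, not CVS scores):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated scores: the "feigning" group scores lower on two predictors.
genuine = rng.normal(1.5, 1.0, size=(n, 2))
feigned = rng.normal(-1.5, 1.0, size=(n, 2))
X = np.vstack([genuine, feigned])
y = np.r_[np.zeros(n), np.ones(n)]        # 1 = feigned
Xb = np.c_[np.ones(2 * n), X]             # add intercept column

# Fit logistic regression by plain gradient descent on the log-loss.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
    w -= 0.3 * Xb.T @ (p - y) / (2 * n)

pred = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30))) > 0.5
sensitivity = pred[y == 1].mean()
specificity = 1.0 - pred[y == 0].mean()
```

    In the study's setting the function fitted this way on a development group is then applied unchanged to new samples, and the flagged proportion gives the false-positive or failure rate.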

  20. Use of multilevel logistic regression to identify the causes of differential item functioning.

    Science.gov (United States)

    Balluerka, Nekane; Gorostiaga, Arantxa; Gómez-Benito, Juana; Hidalgo, María Dolores

    2010-11-01

    Given that a key function of tests is to serve as evaluation instruments and for decision making in the fields of psychology and education, the possibility that some of their items may show differential behaviour is a major concern for psychometricians. In recent decades, important progress has been made as regards the efficacy of techniques designed to detect this differential item functioning (DIF). However, the findings are scant when it comes to explaining its causes. The present study addresses this problem from the perspective of multilevel analysis. Starting from a case study in the area of transcultural comparisons, multilevel logistic regression is used: 1) to identify the item characteristics associated with the presence of DIF; 2) to estimate the proportion of variation in the DIF coefficients that is explained by these characteristics; and 3) to evaluate alternative explanations of the DIF by comparing the explanatory power or fit of different sequential models. The comparison of these models confirmed one of the two alternatives (familiarity with the stimulus) and rejected the other (the topic area) as being a cause of differential functioning with respect to the compared groups.

  1. Longitudinal strain predicts left ventricular mass regression after aortic valve replacement for severe aortic stenosis and preserved left ventricular function.

    Science.gov (United States)

    Gelsomino, Sandro; Lucà, Fabiana; Parise, Orlando; Lorusso, Roberto; Rao, Carmelo Massimiliano; Vizzardi, Enrico; Gensini, Gian Franco; Maessen, Jos G

    2013-11-01

    We explored the influence of global longitudinal strain (GLS) measured with two-dimensional speckle-tracking echocardiography on left ventricular mass regression (LVMR) in patients with pure aortic stenosis (AS) and normal left ventricular function undergoing aortic valve replacement (AVR). The study population included 83 patients with severe AS (aortic valve area ...) ... regression (all P ...) ... regression in patients with pure AS undergoing AVR. Our findings must be confirmed by further larger studies.

  2. Development of Technology Parameter Towards Shipbuilding Productivity Predictor Using Cubic Spline Approach

    Directory of Open Access Journals (Sweden)

    Bagiyo Suwasono

    2011-05-01

    The capability of production processes tied to state-of-the-art technology allows shipbuilding to be outfitted with modern equipment, which in turn affects productivity and competitiveness. This study proposes a nonparametric regression cubic spline approach with 1 knot, 2 knots, and 3 knots. The application program Tibco Spotfire S+ showed that a cubic spline with 2 knots (4.25 and 4.50) gave the best result, with GCV = 56.21556 and R2 = 94.03%. Estimation results of the cubic spline with 2 knots: PT. Batamec shipyard = 35.61 MH/CGT, PT. Dok & Perkapalan Surabaya = 27.49 MH/CGT, PT. Karimun Sembawang Shipyard = 27.49 MH/CGT, and PT. PAL Indonesia = 19.89 MH/CGT.
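
    A cubic regression spline with two fixed knots and GCV-based model comparison, the construction this record relies on, can be sketched with a truncated-power basis. The domain, knots and coefficients below are illustrative, not the study's (which used Tibco Spotfire S+):

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated-power basis for a cubic regression spline:
    1, x, x^2, x^3, and (x - k)_+^3 for each knot k."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0.0, None)**3 for k in knots]
    return np.column_stack(cols)

def fit_gcv(x, y, knots):
    """Least-squares fit and the GCV score used to compare knot choices."""
    B = cubic_spline_basis(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    resid = y - B @ coef
    n, p = B.shape                     # hat-matrix trace equals p here
    gcv = n * (resid @ resid) / (n - p)**2
    return coef, gcv

# Sanity check: data generated exactly from the basis are recovered.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 2.0, 80))
knots = [0.75, 1.25]
true_coef = np.array([2.0, -1.0, 0.5, 0.1, 1.5, -2.0])
y = cubic_spline_basis(x, knots) @ true_coef
coef, gcv = fit_gcv(x, y, knots)
```

    In a knot search like the study's, fit_gcv would be evaluated over candidate knot sets and the set with the smallest GCV retained.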

  3. Stabilized Discretization in Spline Element Method for Solution of Two-Dimensional Navier-Stokes Problems

    Directory of Open Access Journals (Sweden)

    Neng Wan

    2014-01-01

    In view of the poor geometric adaptability of the spline element method, a geometrically exact spline method, which uses rational Bezier patches to represent the solution domain, is proposed for the two-dimensional viscous incompressible Navier-Stokes equations. Besides fewer unknowns, higher accuracy, and computational efficiency, it possesses such advantages as the exact boundary representation of isogeometric analysis and the unification of geometric and analysis models. Meanwhile, the selection of B-spline basis functions and the grid definition are studied, and a stable discretization format satisfying the inf-sup conditions is proposed. The degree of the spline functions approximating the velocity field is one order higher than that approximating the pressure field, and these functions are defined on a once-refined grid. The Dirichlet boundary conditions are imposed in weak form through the Nitsche variational principle, due to the lack of interpolation properties of the B-spline functions. Finally, the validity of the proposed method is verified with some examples.

  4. A Regression-based K nearest neighbor algorithm for gene function prediction from heterogeneous data

    Directory of Open Access Journals (Sweden)

    Ruzzo Walter L

    2006-03-01

    Abstract Background As a variety of functional genomic and proteomic techniques become available, there is an increasing need for functional analysis methodologies that integrate heterogeneous data sources. Methods In this paper, we address this issue by proposing a general framework for gene function prediction based on the k-nearest-neighbor (KNN) algorithm. The choice of KNN is motivated by its simplicity, flexibility to incorporate different data types, and adaptability to irregular feature spaces. A weakness of traditional KNN methods, especially when handling heterogeneous data, is that performance is subject to the often ad hoc choice of similarity metric. To address this weakness, we apply regression methods to infer a similarity metric as a weighted combination of a set of base similarity measures, which helps to locate the neighbors that are most likely to be in the same class as the target gene. We also suggest a novel voting scheme to generate confidence scores that estimate the accuracy of predictions. The method gracefully extends to multi-way classification problems. Results We apply this technique to gene function prediction according to three well-known Escherichia coli classification schemes suggested by biologists, using information derived from microarray and genome sequencing data. We demonstrate that our algorithm dramatically outperforms the naive KNN methods and is competitive with support vector machine (SVM) algorithms for integrating heterogeneous data. We also show that by combining different data sources, prediction accuracy can improve significantly. Conclusion Our extension of KNN with automatic feature weighting, multi-class prediction, and probabilistic inference enhances prediction accuracy significantly while remaining efficient, intuitive and flexible. This general framework can also be applied to similar classification problems involving heterogeneous datasets.
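
    The core idea, regressing a target similarity on a set of base similarity measures and using the learned combination inside a KNN vote, can be sketched with two hypothetical data sources (one informative, one pure noise) and ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 120

# Two heterogeneous "data sources": source 1 separates the classes,
# source 2 is noise (both are hypothetical stand-ins).
labels = np.r_[np.zeros(n // 2), np.ones(n // 2)].astype(int)
src1 = rng.normal(0, 1, n) + 4.0 * labels
src2 = rng.normal(0, 1, n)

def base_similarity(v):
    """Gaussian similarity matrix computed from one data source."""
    d = v[:, None] - v[None, :]
    return np.exp(-d**2)

S1, S2 = base_similarity(src1), base_similarity(src2)

# Regress the target similarity (1 = same class) on the base
# similarities to learn the weighted combination.
target = (labels[:, None] == labels[None, :]).astype(float)
A = np.column_stack([np.ones(n * n), S1.ravel(), S2.ravel()])
w, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)

# The combined metric drives a simple leave-one-out KNN vote.
S = w[1] * S1 + w[2] * S2
np.fill_diagonal(S, -np.inf)            # never vote for yourself
k = 5
correct = 0
for i in range(n):
    nbrs = np.argsort(S[i])[-k:]        # k most similar training points
    vote = labels[nbrs].mean() > 0.5
    correct += int(vote == labels[i])
accuracy = correct / n
```

    The learned weight on the informative source should dominate the weight on the noise source, which is the mechanism by which the regression step down-weights uninformative data types.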

  5. Semiparametric regression during 2003–2007

    KAUST Repository

    Ruppert, David

    2009-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.
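
    The low-rank penalized splines mentioned here can be sketched with a truncated-line basis and a ridge penalty on the knot coefficients only, a standard P-spline-style construction; the data and smoothing parameter below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = np.sort(rng.uniform(0, 2 * np.pi, n))
f_true = np.sin(x)
y = f_true + rng.normal(0, 0.3, n)

# Low-rank basis: intercept and slope plus truncated lines at a
# modest number of interior knots.
knots = np.linspace(0, 2 * np.pi, 12)[1:-1]
X = np.column_stack([np.ones(n), x] + [np.clip(x - k, 0, None) for k in knots])

# Ridge penalty on the knot coefficients only (the "penalized" part);
# the polynomial part is left unpenalized.
lam = 1.0
D = np.diag([0.0, 0.0] + [1.0] * len(knots))
beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
fit = X @ beta

rmse_fit = np.sqrt(np.mean((fit - f_true)**2))
rmse_raw = np.sqrt(np.mean((y - f_true)**2))
```

    In the mixed-model formulation the review highlights, the penalized knot coefficients are treated as random effects and lam is estimated rather than fixed.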

  6. Scripted Bodies and Spline Driven Animation

    DEFF Research Database (Denmark)

    Erleben, Kenny; Henriksen, Knud

    2002-01-01

    In this paper we will take a close look at the details and technicalities in applying spline driven animation to scripted bodies in the context of dynamic simulation. The main contributions presented in this paper are methods for computing velocities and accelerations in the time domain of the spline.

  7. Schwarz and multilevel methods for quadratic spline collocation

    Energy Technology Data Exchange (ETDEWEB)

    Christara, C.C. [Univ. of Toronto, Ontario (Canada); Smith, B. [Univ. of California, Los Angeles, CA (United States)

    1994-12-31

    Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, optimal order of convergence spline collocation methods have been developed for certain degree splines. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems, than the analysis requires, and are very competitive to finite element methods, with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.

  8. Functional regression method for whole genome eQTL epistasis analysis with sequencing data.

    Science.gov (United States)

    Xu, Kelin; Jin, Li; Xiong, Momiao

    2017-05-18

    Epistasis plays an essential role in understanding the regulation mechanisms and is an essential component of the genetic architecture of gene expression. However, interaction analysis of gene expression remains fundamentally unexplored due to great computational challenges and data availability. Due to variation in splicing, transcription start sites, polyadenylation sites, post-transcriptional RNA editing across the entire gene, and transcription rates of the cells, RNA-seq measurements generate large expression variability and collectively create the observed position-level read count curves. A single number for measuring gene expression, which is widely used for microarray-measured gene expression analysis, is highly unlikely to sufficiently account for large expression variation across the gene. Simultaneously analyzing epistatic architecture using the RNA-seq and whole genome sequencing (WGS) data poses enormous challenges. We develop a nonlinear functional regression model (FRGM) with functional responses, where the position-level read counts within a gene are taken as a function of genomic position, and functional predictors, where genotype profiles are viewed as a function of genomic position, for epistasis analysis with RNA-seq data. Instead of testing the interaction of all possible pairwise SNPs, the FRGM takes a gene as the basic unit for epistasis analysis, testing for the interaction of all possible pairs of genes and using all the accessible information to collectively test interaction between all possible pairs of SNPs within two genome regions. By large-scale simulations, we demonstrate that the proposed FRGM for epistasis analysis can achieve the correct type 1 error and has higher power to detect the interactions between genes than the existing methods. The proposed methods are applied to the RNA-seq and WGS data from the 1000 Genomes Project. The numbers of pairs of significantly interacting genes after Bonferroni correction ...

  9. Analyzing Single Molecule Localization Microscopy Data Using Cubic Splines.

    Science.gov (United States)

    Babcock, Hazen P; Zhuang, Xiaowei

    2017-04-03

    The resolution of super-resolution microscopy based on single molecule localization is in part determined by the accuracy of the localization algorithm. In most published approaches to date this localization is done by fitting an analytical function that approximates the point spread function (PSF) of the microscope. However, particularly for localization in 3D, analytical functions such as a Gaussian, which are computationally inexpensive, may not accurately capture the PSF shape leading to reduced fitting accuracy. On the other hand, analytical functions that can accurately capture the PSF shape, such as those based on pupil functions, can be computationally expensive. Here we investigate the use of cubic splines as an alternative fitting approach. We demonstrate that cubic splines can capture the shape of any PSF with high accuracy and that they can be used for fitting the PSF with only a 2-3x increase in computation time as compared to Gaussian fitting. We provide an open-source software package that measures the PSF of any microscope and uses the measured PSF to perform 3D single molecule localization microscopy analysis with reasonable accuracy and speed.

  10. The use of splines to analyze scanning tunneling microscopy data

    NARCIS (Netherlands)

    Wormeester, Herbert; Kip, Gerhardus A.M.; Sasse, A.G.B.M.; van Midden, H.J.P.

    1990-01-01

    Scanning tunneling microscopy (STM) requires a two‐dimensional (2D) image displaying technique for its interpretation. The flexibility and global approximation properties of splines, characteristic of a solid data reduction method as known from cubic spline interpolation, is called for. Splines were

  11. C2-rational cubic spline involving tension parameters

    Indian Academy of Sciences (India)

    the impact of variation of parameters r_i and t_i on the shape of the interpolant. Some remarks are given in Section 6. 2. The rational spline interpolant. Let P = {x_i}_{i=1}^n, where a = x_1 < x_2 < ... < x_n = b, be a partition of the interval [a, b], and let f_i, i = 1, ..., n, be the function values at the data points. We set h_i = x_{i+1} - x_i, Delta_i = ...

  12. LIMIT STRESS SPLINE MODELS FOR GRP COMPOSITES

    African Journals Online (AJOL)

    ES OBE

    Department of Mechanical Engineering, Anambra State. University of Science and Technology, Uli ... 12 were established. The optimization of quadratic and cubic models by gradient search optimization gave the critical strain as 0.024, .... 2.2.1 Derivation of Cubic Spline Equation. The basic assumptions to be used are: 1.

  13. Weighted thin-plate spline image denoising

    Czech Academy of Sciences Publication Activity Database

    Kašpar, Roman; Zitová, Barbara

    2003-01-01

    Roč. 36, č. 12 (2003), s. 3027-3030 ISSN 0031-3203 R&D Projects: GA ČR GP102/01/P065 Institutional research plan: CEZ:AV0Z1075907 Keywords : image denoising * thin-plate splines Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.611, year: 2003

  14. Modeling Seismic Wave Propagation Using Time-Dependent Cauchy-Navier Splines

    Science.gov (United States)

    Kammann, P.

    2005-12-01

    Our intention is the modeling of seismic wave propagation from displacement measurements by seismographs at the Earth's surface. The elastic behaviour of the Earth is usually described by the Cauchy-Navier equation. A system of fundamental solutions for the Fourier transformed Cauchy-Navier equation are the Hansen vectors L, M and N. We apply an inverse Fourier transform to obtain an orthonormal function system depending on time and space. By means of this system we construct certain splines, which are then used for interpolating the given data. Compared to polynomial interpolation, splines have the advantage that they minimize some curvature measure and are, therefore, smoother. First, we test this method on a synthetic wave function. Afterwards, we apply it to realistic earthquake data. (P. Kammann, Modelling Seismic Wave Propagation Using Time-Dependent Cauchy-Navier Splines, Diploma Thesis, Geomathematics Group, Department of Mathematics, University of Kaiserslautern, 2005)

  15. Radial basis function regression methods for predicting quantitative traits using SNP markers.

    Science.gov (United States)

    Long, Nanye; Gianola, Daniel; Rosa, Guilherme J M; Weigel, Kent A; Kranis, Andreas; González-Recio, Oscar

    2010-06-01

    A challenge when predicting total genetic values for complex quantitative traits is that an unknown number of quantitative trait loci may affect phenotypes via cryptic interactions. If markers are available, assuming that their effects on phenotypes are additive may lead to poor predictive ability. Non-parametric radial basis function (RBF) regression, which does not assume a particular form of the genotype-phenotype relationship, was investigated here by simulation and analysis of body weight and food conversion rate data in broilers. The simulation included a toy example in which an arbitrary non-linear genotype-phenotype relationship was assumed, and five different scenarios representing different broad sense heritability levels (0.1, 0.25, 0.5, 0.75 and 0.9) were created. In addition, a whole genome simulation was carried out, in which three different gene action modes (pure additive, additive+dominance and pure epistasis) were considered. In all analyses, a training set was used to fit the model and a testing set was used to evaluate predictive performance. The latter was measured by correlation and predictive mean-squared error (PMSE) on the testing data. For comparison, a linear additive model known as Bayes A was used as benchmark. Two RBF models with single nucleotide polymorphism (SNP)-specific (RBF I) and common (RBF II) weights were examined. Results indicated that, in the presence of complex genotype-phenotype relationships (i.e. non-linearity and non-additivity), RBF outperformed Bayes A in predicting total genetic values using SNP markers. Extension of Bayes A to include all additive, dominance and epistatic effects could improve its prediction accuracy. RBF I was generally better than RBF II, and was able to identify relevant SNPs in the toy example.

  16. Complex function estimation using a stochastic classification/regression framework: specific applications to image superresolution

    Science.gov (United States)

    Ni, Karl; Nguyen, Truong Q.

    2007-09-01

    A stochastic framework combining classification with nonlinear regression is proposed. Its performance is evaluated on a patch-based image superresolution problem. Assuming a multivariate Gaussian mixture model for the distribution of all image content, unsupervised probabilistic clustering via expectation maximization allows segmentation of the domain. Subsequently, for the regression component of the algorithm, a modified support vector regression provides per-class nonlinear regression while appropriately weighting the relevancy of training points during training. Relevancy is determined by the probabilistic values from clustering. Support vector machines, whose training is an established convex optimization problem, provide the foundation for additional formulations that learn the kernel matrix via semidefinite programming and quadratically constrained quadratic programming problems.
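
    A stripped-down version of this pipeline can be sketched by replacing EM clustering with 2-means and the per-class support vector regression with linear least squares; the piecewise-linear data below are synthetic:

```python
import numpy as np

# Piecewise data: two "content classes" following different linear
# laws (no noise, so the per-cluster fits should be essentially exact).
x_a = np.linspace(0.0, 1.0, 40)
x_b = np.linspace(2.0, 3.0, 40)
x = np.r_[x_a, x_b]
y = np.r_[2.0 * x_a + 1.0, -1.0 * x_b + 5.0]

# Step 1: segment the domain (2-means stands in for EM clustering).
centers = np.array([x.min(), x.max()])
for _ in range(20):
    assign = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([x[assign == j].mean() for j in range(2)])

# Step 2: per-cluster regression (least squares stands in for the
# relevancy-weighted support vector regression).
pred = np.empty_like(y)
for j in range(2):
    m = assign == j
    A = np.c_[np.ones(m.sum()), x[m]]
    coef, *_ = np.linalg.lstsq(A, y[m], rcond=None)
    pred[m] = A @ coef

max_err = np.max(np.abs(pred - y))
```

    The full framework differs in two ways that matter: cluster membership is soft (EM responsibilities), and those responsibilities weight the training points of each per-class regressor rather than partitioning them outright.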

  17. Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keller, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Errichello, Robert [GEARTECH, Houston, TX (United States); Halse, Chris [Romax Technology, Nottingham (United Kingdom)

    2013-12-01

    Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.

  18. T-Spline Based Unifying Registration Procedure for Free-Form Surface Workpieces in Intelligent CMM

    Directory of Open Access Journals (Sweden)

    Zhenhua Han

    2017-10-01

    With the development of the modern manufacturing industry, free-form surfaces are widely used in various fields, and the automatic detection of a free-form surface is an important function of future intelligent coordinate measuring machines (CMMs). To improve the intelligence of CMMs, a new visual system is designed based on the characteristics of CMMs. A unified model of the free-form surface is proposed based on T-splines. A discretization method for the T-spline surface model is proposed. Under this discretization, the position and orientation of the workpiece are recognized by point cloud registration. A high-accuracy method is proposed for evaluating the deviation between the measured point cloud and the T-spline surface model. The experimental results demonstrate that the proposed method has the potential to realize the automatic detection of different free-form surfaces and improve the intelligence of CMMs.

  19. B-spline design of digital FIR filter using evolutionary computation techniques

    Science.gov (United States)

    Swain, Manorama; Panda, Rutuparna

    2011-10-01

    In the forthcoming era, digital filters are becoming a true replacement for analog filter designs. In this paper we examine a design method for FIR filters using global search optimization techniques known as evolutionary computation, via the genetic algorithm (GA) and bacterial foraging, in which the filter design is treated as an optimization problem. An effort is made to design maximally flat filters using a generalized B-spline window. The key to our success is the fact that the bandwidth of the filter response can be modified by changing tuning parameters incorporated within the B-spline function. This is an optimization problem. A direct approach has been deployed to design B-spline-window-based FIR digital filters. Four parameters (order, width, length and tuning parameter) have been optimized using GA and EBFS. It is observed that the desired response can be obtained with lower-order FIR filters with optimal width and tuning parameters.
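
    The B-spline window itself is easy to construct: the order-p B-spline window is the p-fold convolution of a rectangular window (order 1 rectangular, order 2 triangular, and so on), and its width and order are exactly the kind of tuning parameters the abstract optimizes. A sketch of a windowed-sinc lowpass design follows; the cutoff, order and sizes are illustrative, with no GA/EBFS search:

```python
import numpy as np

def bspline_window(base_len, order):
    """Order-p B-spline window: p-fold convolution of a rectangle
    (order 1 = rectangular, order 2 = triangular, ...)."""
    w = np.ones(base_len)
    for _ in range(order - 1):
        w = np.convolve(w, np.ones(base_len))
    return w / w.max()

def windowed_lowpass(fc, base_len=21, order=3):
    """Windowed-sinc lowpass FIR using a B-spline window.
    fc is the cutoff as a fraction of the sampling rate."""
    w = bspline_window(base_len, order)
    L = len(w)
    n = np.arange(L) - (L - 1) / 2.0
    h = 2.0 * fc * np.sinc(2.0 * fc * n) * w
    return h / h.sum()              # normalize to unity gain at DC

h = windowed_lowpass(fc=0.1)

def gain(h, f):
    """Magnitude of the frequency response at normalized frequency f."""
    n = np.arange(len(h))
    return abs(np.sum(h * np.exp(-2j * np.pi * f * n)))
```

    Raising the window order trades a wider main lobe (slower transition) for lower sidelobes (better stopband attenuation), which is why the order is worth optimizing.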

  20. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.

  1. Application of semi-parametric single-index two-part regression and parametric two-part regression in estimation of the cost of functional gastrointestinal disorders.

    Science.gov (United States)

    Shojai, Mohadese; Kazemnejad, Anoshirvan; Zayeri, Farid; Vahedi, Mohsen

    2013-01-01

    For the purpose of cost modeling, this paper applies the semi-parametric single-index two-part model. Because functional gastrointestinal disorders are common causes of illness, both in the number of patients affected and in prevalence over a given time interval, this research estimates their average cost. Health care policy-makers seek realistic and accurate estimates of society's future medical costs. However, the data encountered in health studies have characteristics that complicate their analysis: the distribution of cost data is highly skewed, since a few patients incur very large costs, while the medical costs of many persons are zero over a given time interval. Medical cost data are thus often right-skewed, contain a substantial number of zeros, and may be distributed non-homogeneously. In modeling these costs with the semi-parametric single-index two-part model, parameters were estimated by the method of least squares, and the results were compared with those of the parametric two-part model. The average costs of functional gastrointestinal disorders and their standard deviations under the semi-parametric and parametric methods were $72.69±108.96 (R2=0.38) and $75.93±122.29 (R2=0.33), respectively. Based on the R2 index, the semi-parametric model was recognized as the better model. Overall, the parametric two-part regression model is simple, accessible, and easily interpreted; the single-index semi-parametric two-part model, though harder to interpret, offers considerable flexibility. The study goals should therefore be the main factor in choosing between these two models.
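
The two-part idea itself can be sketched with numpy alone: a binary part for the probability of incurring any cost, and a log-scale part for the positive costs with Duan's smearing retransformation. This is a deliberately simplified sketch (the single-index semi-parametric machinery is not reproduced); the covariate, sample sizes, and cost distribution are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
severe = rng.integers(0, 2, n)                     # illustrative binary covariate
has_cost = rng.random(n) < np.where(severe, 0.8, 0.4)
cost = np.where(has_cost,
                np.exp(rng.normal(np.where(severe, 4.5, 3.5), 0.6)), 0.0)

def two_part_mean(cost, group):
    """E[cost] = P(cost>0) * E[cost | cost>0] within one covariate group."""
    pos = cost[group] > 0
    p = pos.mean()                                 # part 1: any-cost probability
    logc = np.log(cost[group][pos])                # part 2: log-scale model
    resid = logc - logc.mean()
    # Duan's smearing retransformation back to the dollar scale
    return p * np.exp(logc.mean()) * np.exp(resid).mean()

est_severe = two_part_mean(cost, severe == 1)
est_mild = two_part_mean(cost, severe == 0)
```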

  2. Cubic B-spline calibration for 3D super-resolution measurements using astigmatic imaging.

    Science.gov (United States)

    Proppert, Sven; Wolter, Steve; Holm, Thorge; Klein, Teresa; van de Linde, Sebastian; Sauer, Markus

    2014-05-05

    In recent years three-dimensional (3D) super-resolution fluorescence imaging by single-molecule localization (localization microscopy) has gained considerable interest because of its simple implementation and high optical resolution. Astigmatic and biplane imaging are experimentally simple methods to engineer a 3D-specific point spread function (PSF), but existing evaluation methods have proven problematic in practical application. Here we introduce the use of cubic B-splines to model the relationship between axial position and PSF width in the above-mentioned approaches and compare the performance with existing methods. We show that cubic B-splines are the first method that can combine precision, accuracy and simplicity.
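
The calibration step can be sketched with scipy smoothing splines: fit the lateral widths wx(z) and wy(z) of the astigmatic PSF with cubic B-splines, then localize z by finding the axial position whose spline-predicted width pair best matches an observation. The width model, noise level, and calibration range below are illustrative assumptions, not the authors' data.

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(2)
z = np.linspace(-600, 600, 61)                    # nm, assumed calibration range
wx = 1.0 + ((z - 150) / 400) ** 2                 # illustrative astigmatic widths
wy = 1.0 + ((z + 150) / 400) ** 2
tck_x = splrep(z, wx + rng.normal(0, 0.02, z.size), k=3, s=z.size * 0.02**2)
tck_y = splrep(z, wy + rng.normal(0, 0.02, z.size), k=3, s=z.size * 0.02**2)

def localize_z(wx_obs, wy_obs, zgrid=np.linspace(-600, 600, 2401)):
    """z that best explains the observed (wx, wy) pair under both spline curves."""
    d = (splev(zgrid, tck_x) - wx_obs) ** 2 + (splev(zgrid, tck_y) - wy_obs) ** 2
    return zgrid[np.argmin(d)]

# recover a hypothetical emitter at z = 250 nm from its noiseless widths
z_hat = localize_z(1.0 + ((250 - 150) / 400) ** 2, 1.0 + ((250 + 150) / 400) ** 2)
```

Using both widths jointly makes the lookup unambiguous even though each width alone is not monotone in z.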

  3. GA Based Rational cubic B-Spline Representation for Still Image Interpolation

    Directory of Open Access Journals (Sweden)

    Samreen Abbas

    2016-12-01

    In this paper, an image interpolation scheme is designed for 2D natural images. A local-support rational cubic spline with control parameters is used as the interpolatory function and optimized with a Genetic Algorithm (GA). The GA determines appropriate values of the control parameters in the description of the rational cubic spline. Three state-of-the-art Image Quality Assessment (IQA) models, along with a traditional one, are employed to compare the proposed scheme with existing image interpolation schemes and to check the perceptual quality of the resulting images. The results show that the proposed scheme outperforms the existing ones.

  4. Groundwater head responses due to random stream stage fluctuations using basis splines

    Science.gov (United States)

    Knight, J. H.; Rassam, D. W.

    2007-06-01

    Stream-aquifer interactions are becoming increasingly important processes in water resources and riparian management. The linearized Boussinesq equation describes the transient movement of a groundwater free surface in unconfined flow. Some standard solutions are those corresponding to an input, which is a delta function impulse, or to its integral, a unit step function in the time domain. For more complicated inputs, the response can be expressed as a convolution integral, which must be evaluated numerically. When the input is a time series of measured data, a spline function or piecewise polynomial can easily be fitted to the data. Any such spline function can be expressed in terms of a finite series of basis splines with numerical coefficients. The analytical groundwater response functions corresponding to these basis splines are presented, thus giving a direct and accurate way to calculate the groundwater response for a random time series input representing the stream stage. We use the technique to estimate responses due to a random stream stage time series and show that the predicted heads compare favorably to those obtained from numerical simulations using the Modular Three-Dimensional Finite-Difference Ground-Water Flow Model (MODFLOW); we then demonstrate how to calculate residence times used for estimating riparian denitrification during bank storage.
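
A discrete analogue of the workflow can be sketched in a few lines: fit a cubic spline to measured stage data, then convolve it with the aquifer's impulse response. Note the assumption: the exponential kernel below is a linear-reservoir stand-in, whereas the paper derives exact analytic responses for each B-spline basis function of the linearized Boussinesq equation; the stage series and response time are synthetic.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Daily stage measurements (synthetic random walk), fitted with a cubic spline
rng = np.random.default_rng(3)
t_obs = np.arange(0, 30.0)                       # days
stage = np.cumsum(rng.normal(0, 0.05, t_obs.size))
spline = CubicSpline(t_obs, stage)

# Stand-in impulse response of the linearized aquifer (assumed exponential
# kernel with a 3-day response time); unit mass so head tracks stage at steady state
dt = 0.01
t = np.arange(0, 29.0, dt)
kernel = np.exp(-t / 3.0) / 3.0
head = np.convolve(spline(t), kernel)[: t.size] * dt   # causal convolution
```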

  5. Comparing Linear Discriminant Function with Logistic Regression for the Two-Group Classification Problem.

    Science.gov (United States)

    Fan, Xitao; Wang, Lin

    The Monte Carlo study compared the performance of predictive discriminant analysis (PDA) and that of logistic regression (LR) for the two-group classification problem. Prior probabilities were used for classification, but the cost of misclassification was assumed to be equal. The study used a fully crossed three-factor experimental design (with…

  6. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.
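
The linear-programming formulation of quantile regression that the algorithm builds on can be sketched directly with scipy's LP solver: minimize tau*u+ + (1-tau)*u- subject to X*beta + u+ - u- = y. This is the static (non-adaptive) fit only; the data are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Solve the tau-th quantile regression as a linear program."""
    n, p = X.shape
    # decision variables: beta (free, p), u+ (n), u- (n)
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(0, 1, x.size)
X = np.column_stack([np.ones_like(x), x])
beta_med = quantile_regression(X, y, 0.5)   # median (tau = 0.5) regression
```

The time-adaptive version in the paper warm-starts this LP from the previous solution instead of re-solving from scratch at every step.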

  7. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords: smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  9. Not EEG abnormalities but epilepsy is associated with autistic regression and mental functioning in childhood autism.

    Science.gov (United States)

    Hrdlicka, Michal; Komarek, Vladimir; Propper, Lukas; Kulisek, Robert; Zumrova, Alena; Faladova, Ludvika; Havlovicova, Marketa; Sedlacek, Zdenek; Blatny, Marek; Urbanek, Tomas

    2004-08-01

    The aim of the study was to investigate the potential association of epilepsy and EEG abnormalities with autistic regression and mental retardation. We examined a group of 77 autistic children (61 boys, 16 girls) with an average age of 9.1 +/- 5.3 years. Clinical interview, neurological examination focused on the evaluation of epilepsy, IQ testing, and 21-channel EEG (including night sleep EEG recording) were performed. Normal EEGs were observed in 44.4% of the patients, non-epileptiform abnormal EEGs in 17.5%, and abnormal EEGs with epileptiform discharges in 38.1% of the patients. Epilepsy was found in 22.1% of the subjects. A history of regression was reported in 25.8% of the patients, 54.8% of the sample had abnormal development during the first year of life, and 79.7% of the patients were mentally retarded. Autistic regression was significantly more frequent in patients with epilepsy than in non-epileptic patients (p = 0.003). Abnormal development during the first year of life was significantly associated with epileptiform EEG abnormalities (p = 0.014). Epilepsy correlated significantly with mental retardation (p = 0.001). Although the biological basis and possible causal relationships of these associations remain to be explained, they may point to different subgroups of patients with autistic spectrum disorders.

  10. Trajectory control of an articulated robot with a parallel drive arm based on splines under tension

    Science.gov (United States)

    Yi, Seung-Jong

    Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration used to place the end-effector at the desired position and orientation within the workspace, and on following the specified path, which requires a trajectory planner. In addition, consideration of the joint velocity, acceleration, and jerk trajectories is essential in trajectory planning for industrial robots to obtain smooth operation. The newly designed 6 DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed along the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness, is selected. First, the forward and inverse kinematic problems are examined. The forward kinematic equations are derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., polynomial functions of a certain degree, cubic spline functions, and cubic spline functions under tension, are compared to select the method best able to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension are selected for the new trajectory planner. This method is implemented for the 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. This approach is also compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented into the microprocessor based robot controller and

  11. Two Dimensional Complex Wavenumber Dispersion Analysis using B-Spline Finite Elements Method

    Directory of Open Access Journals (Sweden)

    Y. Mirbagheri

    2016-01-01

    Grid dispersion is one of the criteria for validating the finite element method (FEM) in simulating acoustic or elastic wave propagation. The difficulty that usually arises when using this method for wave propagation problems is rooted in the discontinuous field, which causes the magnitude and direction of the wave speed vector to vary from one element to the adjacent one. To solve this problem and improve the response accuracy, two approaches are usually suggested: changing the integration method and changing the shape functions. Finite element isogeometric analysis (IGA) is used in this research. In IGA, B-spline or non-uniform rational B-spline (NURBS) functions are used, which improve the response accuracy, especially in one-dimensional structural dynamics problems. At the boundary of two adjacent elements, the degree of continuity of the shape functions used in IGA can be higher than zero. In this research, for the first time, a two-dimensional grid dispersion analysis for wave propagation in plane strain problems using B-spline FEM is presented. Results indicate that, for the same number of degrees of freedom, the grid dispersion of B-spline FEM is about half that of the classic FEM.

  12. B-spline algebraic diagrammatic construction: Application to photoionization cross-sections and high-order harmonic generation

    Energy Technology Data Exchange (ETDEWEB)

    Ruberti, M.; Averbukh, V. [Department of Physics, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Decleva, P. [Dipartimento di Scienze Chimiche, Universita’ di Trieste, Via Giorgieri 1, I-34127 Trieste (Italy)

    2014-10-28

    We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for the Gaussian basis calculations, such as Cooper minima and high-energy tails, are found to be reproduced by the B-spline ADC in a very good agreement with the experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.

  13. Fine-Tuning Nonhomogeneous Regression for Probabilistic Precipitation Forecasts: Unanimous Predictions, Heavy Tails, and Link Functions

    DEFF Research Database (Denmark)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.

    2017-01-01

    Statistical postprocessing techniques are used to obtain automatically corrected weather forecasts. This study applies the nonhomogeneous regression framework as a state-of-the-art ensemble postprocessing technique to predict a full forecast distribution and improves its forecast performance with three statistical refinements. First of all, a novel split-type approach effectively accounts for unanimous zero precipitation predictions of the global ensemble model of the ECMWF. Additionally, the statistical model uses a censored logistic distribution to deal with the heavy tails of precipitation amounts. Finally, it is investigated which link functions are the most suitable.

  14. Higher-order numerical solutions using cubic splines

    Science.gov (United States)

    Rubin, S. G.; Khosla, P. K.

    1976-01-01

    A cubic spline collocation procedure was developed for the numerical solution of partial differential equations. This spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower-derivative terms. The final result is a numerical procedure having overall third-order accuracy on a nonuniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.
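
The baseline the authors improve upon is easy to check numerically: for a classical (clamped) cubic spline interpolant, the second derivative is only second-order accurate. The sketch below measures that convergence rate for f = sin; it does not implement the paper's reformulated third-order procedure.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def second_derivative_error(n):
    """Max error of the spline's second derivative for f = sin on [0, pi]."""
    x = np.linspace(0, np.pi, n)
    # clamped spline: exact end slopes f'(0) = 1, f'(pi) = -1
    s = CubicSpline(x, np.sin(x), bc_type=((1, 1.0), (1, -1.0)))
    xe = np.linspace(0, np.pi, 10 * n)
    return np.max(np.abs(s(xe, 2) + np.sin(xe)))   # exact f'' = -sin

e1, e2 = second_derivative_error(20), second_derivative_error(40)
order = np.log2(e1 / e2)   # observed convergence order, about 2
```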

  15. Optimal Approximation of Biquartic Polynomials by Bicubic Splines

    Science.gov (United States)

    Kačala, Viliam; Török, Csaba

    2018-02-01

    Recently an unexpected approximation property between polynomials of degree three and four was revealed within the framework of two-part approximation models in 2-norm, Chebyshev norm and Holladay seminorm. Namely, it was proved that if a two-component cubic Hermite spline's first derivative at the shared knot is computed from the first derivative of a quartic polynomial, then the spline is a clamped spline of class C2 and also the best approximant to the polynomial. Although it was known that a 2 × 2 component uniform bicubic Hermite spline is a clamped spline of class C2 if the derivatives at the shared knots are given by the first derivatives of a biquartic polynomial, the optimality of such approximation remained an open question. The goal of this paper is to resolve this problem. Unlike the spline curves, in the case of spline surfaces it is insufficient to suppose that the grid should be uniform and the spline derivatives computed from a biquartic polynomial. We show that the biquartic polynomial coefficients have to satisfy some additional constraints to achieve optimal approximation by bicubic splines.

  16. Comparison of multiple linear regression and artificial neural network in developing the objective functions of the orthopaedic screws.

    Science.gov (United States)

    Hsu, Ching-Chi; Lin, Jinn; Chao, Ching-Kong

    2011-12-01

    Optimizing the orthopaedic screws can greatly improve their biomechanical performances. However, a methodical design optimization approach requires a long time to search the best design. Thus, the surrogate objective functions of the orthopaedic screws should be accurately developed. To our knowledge, there is no study to evaluate the strengths and limitations of the surrogate methods in developing the objective functions of the orthopaedic screws. Three-dimensional finite element models for both the tibial locking screws and the spinal pedicle screws were constructed and analyzed. Then, the learning data were prepared according to the arrangement of the Taguchi orthogonal array, and the verification data were selected with use of a randomized selection. Finally, the surrogate objective functions were developed by using either the multiple linear regression or the artificial neural network. The applicability and accuracy of those surrogate methods were evaluated and discussed. The multiple linear regression method could successfully construct the objective function of the tibial locking screws, but it failed to develop the objective function of the spinal pedicle screws. The artificial neural network method showed a greater capacity of prediction in developing the objective functions for the tibial locking screws and the spinal pedicle screws than the multiple linear regression method. The artificial neural network method may be a useful option for developing the objective functions of the orthopaedic screws with a greater structural complexity. The surrogate objective functions of the orthopaedic screws could effectively decrease the time and effort required for the design optimization process. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
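
The multiple-linear-regression surrogate side of this comparison can be sketched with a quadratic response surface fitted by least squares: learning data from a small factorial design, verification data drawn at random. The "screw response" function and factor ranges below are hypothetical stand-ins for the finite element outputs, not the study's models.

```python
import numpy as np

rng = np.random.default_rng(5)

def true_response(d, p, l):
    """Hypothetical screw objective (stand-in for the FE model output)."""
    return 3.0 * d + 0.8 * d * d - 1.2 * p + 0.1 * l

# learning data from a small full factorial (Taguchi-style) design, with "FE noise"
train = np.array([(d, p, l) for d in (3, 4, 5)
                            for p in (1.0, 1.5, 2.0)
                            for l in (30, 40, 50)], dtype=float)
y = true_response(*train.T) + rng.normal(0, 0.05, len(train))

def features(X):
    """Quadratic response-surface terms for the MLR surrogate."""
    d, p, l = X.T
    return np.column_stack([np.ones(len(X)), d, p, l,
                            d*d, d*p, d*l, p*p, p*l, l*l])

coef, *_ = np.linalg.lstsq(features(train), y, rcond=None)

# verification data selected at random inside the design space
test_pts = np.column_stack([rng.uniform(3, 5, 20),
                            rng.uniform(1, 2, 20),
                            rng.uniform(30, 50, 20)])
pred = features(test_pts) @ coef
```

When the underlying response contains interactions or thresholds outside the quadratic form, this surrogate degrades, which is the failure mode the study observed for the spinal pedicle screws.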

  17. Symbolic Regression for the Estimation of Transfer Functions of Hydrological Models

    Science.gov (United States)

    Klotz, D.; Herrnegger, M.; Schulz, K.

    2017-11-01

    Current concepts for parameter regionalization of spatially distributed rainfall-runoff models rely on the a priori definition of transfer functions that globally map land surface characteristics (such as soil texture, land use, and digital elevation) into the model parameter space. However, these transfer functions are often chosen ad hoc or derived from small-scale experiments. This study proposes and tests an approach for inferring the structure and parametrization of possible transfer functions from runoff data to potentially circumvent these difficulties. The concept uses context-free grammars to generate possible propositions for transfer functions. The resulting structure can then be parametrized with classical optimization techniques. Several virtual experiments are performed to examine the potential for an appropriate estimation of transfer functions, all of them using a very simple conceptual rainfall-runoff model with data from the Austrian Mur catchment. The results suggest that a priori defined transfer functions are in general well identifiable by the method. However, the deduction process might be inhibited, e.g., by noise in the runoff observation data, often leading to transfer function estimates of lower structural complexity.
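
The grammar-based generation step can be sketched in a few lines: a context-free grammar whose productions are expanded recursively to propose candidate transfer-function structures. The grammar below is a toy assumption (x standing for a land-surface attribute, c for a constant to be calibrated), not the study's actual production rules.

```python
import random

# Toy context-free grammar proposing transfer-function structures
GRAMMAR = {
    "E": [["E", "+", "T"], ["E", "*", "T"], ["T"]],
    "T": [["x"], ["c"], ["exp(", "E", ")"], ["(", "E", ")"]],
}

def generate(symbol="E", depth=0, rng=None):
    """Expand a nonterminal by randomly chosen productions; terminals pass through."""
    rng = rng or random.Random(0)
    if symbol not in GRAMMAR:
        return symbol
    if depth < 4:
        rules = GRAMMAR[symbol]
    else:
        # force termination once the expansion gets deep
        rules = [["x"], ["c"]] if symbol == "T" else [["T"]]
    return "".join(generate(s, depth + 1, rng) for s in rng.choice(rules))

proposals = [generate(rng=random.Random(seed)) for seed in range(5)]
```

Each proposal is a syntactically valid expression whose constants c can then be fitted against runoff data with a classical optimizer.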

  18. Substituting random forest for multiple linear regression improves binding affinity prediction of scoring functions: Cyscore as a case study.

    Science.gov (United States)

    Li, Hongjian; Leung, Kwong-Sak; Wong, Man-Hon; Ballester, Pedro J

    2014-08-27

    State-of-the-art protein-ligand docking methods are generally limited by the traditionally low accuracy of their scoring functions, which are used to predict binding affinity and thus vital for discriminating between active and inactive compounds. Despite intensive research over the years, classical scoring functions have reached a plateau in their predictive performance. These assume a predetermined additive functional form for some sophisticated numerical features, and use standard multivariate linear regression (MLR) on experimental data to derive the coefficients. In this study we show that such a simple functional form is detrimental for the prediction performance of a scoring function, and replacing linear regression by machine learning techniques like random forest (RF) can improve prediction performance. We investigate the conditions of applying RF under various contexts and find that given sufficient training samples RF manages to comprehensively capture the non-linearity between structural features and measured binding affinities. Incorporating more structural features and training with more samples can both boost RF performance. In addition, we analyze the importance of structural features to binding affinity prediction using the RF variable importance tool. Lastly, we use Cyscore, a top performing empirical scoring function, as a baseline for comparison study. Machine-learning scoring functions are fundamentally different from classical scoring functions because the former circumvents the fixed functional form relating structural features with binding affinities. RF, but not MLR, can effectively exploit more structural features and more training samples, leading to higher prediction performance. The future availability of more X-ray crystal structures will further widen the performance gap between RF-based and MLR-based scoring functions. This further stresses the importance of substituting RF for MLR in scoring function development.
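
The central claim, that random forest can exploit nonlinearity that multivariate linear regression cannot, is easy to reproduce on synthetic data. The features and "affinity" function below are illustrative stand-ins, not Cyscore's descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 2000
X = rng.uniform(-1, 1, (n, 6))                 # stand-in structural features
# "binding affinity" with interactions a linear model cannot capture
y = X[:, 0] * X[:, 1] + np.sin(3 * X[:, 2]) + 0.5 * X[:, 3] + rng.normal(0, 0.1, n)

Xtr, Xte, ytr, yte = X[:1500], X[1500:], y[:1500], y[1500:]
mlr = LinearRegression().fit(Xtr, ytr)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
r2_mlr, r2_rf = mlr.score(Xte, yte), rf.score(Xte, yte)   # held-out R^2
```

Adding training samples widens the gap in favor of the forest, mirroring the paper's observation about future crystal-structure availability.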

  19. Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines

    KAUST Repository

    Barton, Michael

    2015-10-24

    We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights are, considered as a higher-dimensional point, a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as the source rule chosen.

  20. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    Science.gov (United States)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive for a large spectrum of gravity waves.
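
The repeating idea can be sketched as follows: fit several least-squares cubic splines whose sampling points are shifted against each other and average them, damping the knot-locked artificial oscillations of any single fit. This is a sketch of the general idea under assumed knot spacing and a synthetic "background plus wave" series, not the authors' exact procedure or data.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 1000)
background = 0.5 * t + np.sin(0.3 * t)
series = background + 0.3 * np.sin(6.0 * t)   # superimposed fast fluctuation

def spline_fit(shift, spacing=2.0):
    """Least-squares cubic spline with interior knots offset by `shift`."""
    knots = np.arange(0.5 + shift, 9.5, spacing)
    return LSQUnivariateSpline(t, series, knots, k=3)(t)

single = spline_fit(0.0)
# repeating-spline sketch: average fits over shifted knot positions
shifts = np.linspace(0.0, 2.0, 8, endpoint=False)
repeated = np.mean([spline_fit(s) for s in shifts], axis=0)
```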

  1. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    Directory of Open Access Journals (Sweden)

    S. Wüst

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals – the subtraction of the spline from the original time series – are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive for a large spectrum of gravity waves.

  2. Differential item functioning (DIF) analyses of health-related quality of life instruments using logistic regression

    NARCIS (Netherlands)

    Scott, Neil W.; Fayers, Peter M.; Aaronson, Neil K.; Bottomley, Andrew; de Graeff, Alexander; Groenvold, Mogens; Gundy, Chad; Koller, Michael; Petersen, Morten A.; Sprangers, Mirjam A. G.

    2010-01-01

    Differential item functioning (DIF) methods can be used to determine whether different subgroups respond differently to particular items within a health-related quality of life (HRQoL) subscale, after allowing for overall subgroup differences in that scale. This article reviews issues that arise
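
The standard logistic-regression DIF test regresses each item response on the matching variable (e.g., the subscale total) plus a group indicator; a significant group coefficient flags uniform DIF. A minimal sketch with maximum likelihood via iteratively reweighted least squares, on synthetic data with a built-in DIF effect (the effect sizes are illustrative assumptions):

```python
import numpy as np

def logistic_irls(X, y, iters=25):
    """Maximum-likelihood logistic regression via iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        # Newton step: beta += (X' W X)^-1 X' (y - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

rng = np.random.default_rng(8)
n = 4000
total = rng.normal(0, 1, n)                  # matching variable (scale score)
group = rng.integers(0, 2, n)                # subgroup indicator
# uniform DIF: one group endorses the item more at every level of `total`
logit = -0.2 + 1.0 * total + 0.8 * group
item = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), total, group])
beta = logistic_irls(X, item)                # beta[2] estimates the DIF effect
```

Adding a total-by-group interaction term to X would extend this to a test for nonuniform DIF.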

  4. Perbaikan Metode Penghitungan Debit Sungai Menggunakan Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Budi I. Setiawan

    2007-09-01

    This paper presents an improved method for measuring river discharge using cubic spline interpolation. The spline is used to describe a continuous river cross-section profile constructed from measurements of distance across the channel and river depth. With this new method, the cross-sectional area and wetted perimeter of the river are computed more easily, quickly, and accurately. Likewise, the inverse function is available via the Newton-Raphson method, which simplifies the computation of area and perimeter when the river water level is known. The new method can directly compute river discharge using the Manning formula and produces a rating curve. As an example, this paper presents a discharge measurement of the Rudeng River, Aceh. The river is about 120 m wide and 7 m deep and had a discharge of 41.3 m3/s at the time of measurement; its rating curve follows the formula Q = 0.1649 x H^2.884, where Q is the discharge (m3/s) and H is the water level above the river bed (m).
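
The pipeline, spline the surveyed cross-section, integrate for area and wetted perimeter, then apply the Manning formula, can be sketched with scipy. The cross-section coordinates, roughness, and slope below are illustrative assumptions, not the Rudeng River survey data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Surveyed cross-section: distance across the river (m) vs bed depth (m); assumed values
dist = np.array([0.0, 15.0, 30.0, 50.0, 70.0, 90.0, 105.0, 120.0])
depth = np.array([0.0, 3.0, 5.5, 7.0, 6.5, 5.0, 2.5, 0.0])
bed = CubicSpline(dist, depth)   # continuous bed profile

def area_perimeter(H, nx=4000):
    """Flow area (m2) and wetted perimeter (m) for water level H above the deepest point."""
    xs = np.linspace(dist[0], dist[-1], nx)
    dx = xs[1] - xs[0]
    local = np.clip(bed(xs) - (7.0 - H), 0.0, None)   # local water depth
    area = local.sum() * dx                           # simple Riemann sum
    wet = local > 0
    perim = (np.sqrt(1.0 + bed(xs, 1) ** 2) * wet).sum() * dx  # submerged arc length
    return area, perim

def manning_discharge(H, n_rough=0.035, S=4e-4):
    """Manning formula Q = (A/n) * (A/P)^(2/3) * sqrt(S); roughness and slope assumed."""
    A, P = area_perimeter(H)
    return A / n_rough * (A / P) ** (2.0 / 3.0) * np.sqrt(S)
```

Evaluating `manning_discharge` over a range of H values directly yields the rating curve Q(H).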

  5. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  6. Improving Correlation Function Fitting with Ridge Regression: Application to Cross-Correlation Reconstruction

    OpenAIRE

    Matthews, Daniel J.; Newman, Jeffrey A.

    2011-01-01

    Cross-correlation techniques provide a promising avenue for calibrating photometric redshifts and determining redshift distributions using spectroscopy which is systematically incomplete (e.g., current deep spectroscopic surveys fail to obtain secure redshifts for 30-50% or more of the galaxies targeted). In this paper we improve on the redshift distribution reconstruction methods presented in Matthews & Newman (2010) by incorporating full covariance information into our correlation function ...

  7. Investigation of confined hydrogen atom in spherical cavity, using B-splines basis set

    Directory of Open Access Journals (Sweden)

    M Barezi

    2011-03-01

    Studying confined quantum systems (CQS) is very important in nanotechnology. One of the basic CQS is a hydrogen atom confined in a spherical cavity. In this article, the eigenenergies and eigenfunctions of a hydrogen atom in a spherical cavity are calculated using the linear variational method. B-splines are used as basis functions, since they can easily construct trial wave functions with appropriate boundary conditions. The main characteristics of B-splines are their high localization and flexibility. Moreover, these functions are numerically stable and can handle a large volume of calculation with good accuracy. The energy levels are analyzed as a function of the cavity radius. To check the validity and efficiency of the proposed method, an extensive convergence test of the eigenenergies for different cavity sizes has been carried out.
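
The variational recipe, expand the wave function in B-splines that vanish at the cavity wall and solve the resulting generalized eigenvalue problem, can be sketched on a simpler stand-in: a particle in a 1D box rather than the confined hydrogen atom (no Coulomb term, atomic units; knot counts and grid are illustrative assumptions).

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.linalg import eigh

# cubic B-spline basis on [0, 1] with a clamped knot vector
k = 3
t = np.concatenate([[0.0] * k, np.linspace(0.0, 1.0, 12), [1.0] * k])
nb = len(t) - k - 1
x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]

def basis(i):
    """Values and first derivatives of the i-th B-spline on the fine grid."""
    b = BSpline(t, np.eye(nb)[i], k, extrapolate=False)
    return np.nan_to_num(b(x)), np.nan_to_num(b.derivative()(x))

# drop the first and last splines so every basis function is zero at the walls
vals = [basis(i) for i in range(1, nb - 1)]
Bv = np.array([v for v, _ in vals])
Bd = np.array([d for _, d in vals])

S = dx * (Bv @ Bv.T)         # overlap matrix <B_i | B_j>
H = 0.5 * dx * (Bd @ Bd.T)   # kinetic energy <B_i' | B_j'> / 2 (after integration by parts)
E = eigh(H, S, eigvals_only=True)   # variational eigenenergies, exact: n^2 pi^2 / 2
```

For the confined hydrogen atom, one adds the Coulomb potential matrix and works with the radial equation, but the B-spline machinery is identical.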

  8. BS Methods: A New Class of Spline Collocation BVMs

    Science.gov (United States)

    Mazzia, Francesca; Sestini, Alessandra; Trigiante, Donato

    2008-09-01

    BS methods are a recently introduced class of Boundary Value Methods which is based on B-splines. They can also be interpreted as spline collocation methods. For uniform meshes, the coefficients defining the k-step BS method are just the values of the (k+1)-degree uniform B-spline and B-spline derivative at its integer active knots; for general nonuniform meshes they are computed by solving local linear systems whose dimension depends on k. An important specific feature of BS methods is the possibility to associate a spline of degree k+1 and smoothness Ck to the numerical solution produced by the k-step method of this class. Such spline collocates the differential equation at the knots, shares the convergence order with the numerical solution, and can be computed with negligible additional computational cost. Here a survey on such methods is given, presenting the general definition, the convergence and stability features, and introducing the strategy for the computation of the coefficients in the B-spline basis which define the associated spline. Finally, some related numerical results are also presented.

  9. Color management with a hammer: the B-spline fitter

    Science.gov (United States)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.

  10. Robust inter-subject audiovisual decoding in functional magnetic resonance imaging using high-dimensional regression.

    Science.gov (United States)

    Raz, Gal; Svanera, Michele; Singer, Neomi; Gilam, Gadi; Cohen, Maya Bleich; Lin, Tamar; Admon, Roee; Gonen, Tal; Thaler, Avner; Granot, Roni Y; Goebel, Rainer; Benini, Sergio; Valente, Giancarlo

    2017-12-01

Major methodological advancements have been recently made in the field of neural decoding, which is concerned with the reconstruction of mental content from neuroimaging measures. However, in the absence of a large-scale examination of the validity of the decoding models across subjects and content, the extent to which these models can be generalized is not clear. This study addresses the challenge of producing generalizable decoding models, which allow the reconstruction of perceived audiovisual features from human functional magnetic resonance imaging (fMRI) data without prior training of the algorithm on the decoded content. We applied an adapted version of kernel ridge regression combined with temporal optimization on data acquired during film viewing (234 runs) to generate standardized brain models for sound loudness, speech presence, perceived motion, face-to-frame ratio, lightness, and color brightness. The prediction accuracies were tested on data collected from different subjects watching other movies mainly in another scanner. Substantial and significant (Q_FDR < 0.05) correlations between the reconstructed and the original descriptors were found for the first three features (loudness, speech, and motion) in all of the 9 test movies (R̄ = 0.62, 0.60, and 0.60, respectively), with high reproducibility of the predictors across subjects. The face ratio model produced significant correlations in 7 out of 8 movies (R̄ = 0.56). The lightness and brightness models did not show robustness (R̄ = 0.23 and R̄ = 0). Further analysis of additional data (95 runs) indicated that loudness reconstruction veridicality can consistently reveal relevant group differences in musical experience. The findings point to the validity and generalizability of our loudness, speech, motion, and face ratio models for complex cinematic stimuli (as well as for music in the case of loudness). While future research should further validate these models using controlled stimuli and explore the

  11. Free vibration of symmetric angle ply truncated conical shells under different boundary conditions using spline method

    Energy Technology Data Exchange (ETDEWEB)

Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y. [Universiti Teknologi Malaysia, Johor Bahru (Malaysia)]; Pullepu, Babuji [S R M University, Chennai (India)]

    2015-05-15

Free vibration of symmetric angle-ply laminated truncated conical shells is analyzed to determine the frequency parameter and angular frequencies under different boundary conditions, ply angles, material properties and other parameters. The governing equations of motion for the truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines, resulting in a generalized eigenvalue problem. Parametric studies have been made and discussed.

  12. LOCALLY REFINED SPLINES REPRESENTATION FOR GEOSPATIAL BIG DATA

    Directory of Open Access Journals (Sweden)

    T. Dokken

    2015-08-01

Full Text Available When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing the bathymetry of the smooth sea and ocean floor. A drastic reduction is demonstrated in the bulk of the data representation compared to the size of the input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection, as new sensor data can efficiently be compared to the compact LR B-spline representation.

  13. FUNNEL-GSEA: FUNctioNal ELastic-net regression in time-course gene set enrichment analysis.

    Science.gov (United States)

    Zhang, Yun; Topham, David J; Thakar, Juilee; Qiu, Xing

    2017-07-01

    Gene set enrichment analyses (GSEAs) are widely used in genomic research to identify underlying biological mechanisms (defined by the gene sets), such as Gene Ontology terms and molecular pathways. There are two caveats in the currently available methods: (i) they are typically designed for group comparisons or regression analyses, which do not utilize temporal information efficiently in time-series of transcriptomics measurements; and (ii) genes overlapping in multiple molecular pathways are considered multiple times in hypothesis testing. We propose an inferential framework for GSEA based on functional data analysis, which utilizes the temporal information based on functional principal component analysis, and disentangles the effects of overlapping genes by a functional extension of the elastic-net regression. Furthermore, the hypothesis testing for the gene sets is performed by an extension of Mann-Whitney U test which is based on weighted rank sums computed from correlated observations. By using both simulated datasets and a large-scale time-course gene expression data on human influenza infection, we demonstrate that our method has uniformly better receiver operating characteristic curves, and identifies more pathways relevant to immune-response to human influenza infection than the competing approaches. The methods are implemented in R package FUNNEL, freely and publicly available at: https://github.com/yunzhang813/FUNNEL-GSEA-R-Package . xing_qiu@urmc.rochester.edu or juilee_thakar@urmc.rochester.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  14. The use of regression analysis in determining reference intervals for low hematocrit and thrombocyte count in multiple electrode aggregometry and platelet function analyzer 100 testing of platelet function.

    Science.gov (United States)

    Kuiper, Gerhardus J A J M; Houben, Rik; Wetzels, Rick J H; Verhezen, Paul W M; Oerle, Rene van; Ten Cate, Hugo; Henskens, Yvonne M C; Lancé, Marcus D

    2017-11-01

Low platelet counts and hematocrit levels hinder whole blood point-of-care testing of platelet function. Thus far, no reference ranges for MEA (multiple electrode aggregometry) and PFA-100 (platelet function analyzer 100) devices exist for low ranges. Through dilution methods of volunteer whole blood, platelet function at low ranges of platelet count and hematocrit levels was assessed on MEA for four agonists and for PFA-100 in two cartridges. Using (multiple) regression analysis, 95% reference intervals were computed for these low ranges. Low platelet counts affected MEA in a positive correlation (all agonists showed r² ≥ 0.75) and PFA-100 in an inverse correlation (closure times were prolonged with lower platelet counts). Lowered hematocrit did not affect MEA testing, except for arachidonic acid activation (ASPI), which showed a weak positive correlation (r² = 0.14). Closure time on PFA-100 testing was inversely correlated with hematocrit for both cartridges. Regression analysis revealed different 95% reference intervals in comparison with originally established intervals for both MEA and PFA-100 in low platelet or hematocrit conditions. Multiple regression analysis of ASPI and both tests on the PFA-100 for combined low platelet and hematocrit conditions revealed that only PFA-100 testing should be adjusted for both thrombocytopenia and anemia. 95% reference intervals were calculated using multiple regression analysis. However, coefficients of determination of PFA-100 were poor, and some variance remained unexplained. Thus, in this pilot study using (multiple) regression analysis, we could establish reference intervals of platelet function in anemia and thrombocytopenia conditions on PFA-100 and in thrombocytopenia conditions on MEA.
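As a toy version of the regression-based reference intervals this record describes, the sketch below fits a line to synthetic aggregation-vs-platelet-count data and reports an approximate 95% interval at a given count. The data, units, and the simple ±1.96·s construction (which ignores the leverage term of a full prediction interval) are illustrative assumptions, not the paper's protocol:

```python
import numpy as np

rng = np.random.default_rng(1)
platelets = rng.uniform(10, 150, size=80)                      # x: count (1e9/L)
aggregation = 5 + 0.6 * platelets + rng.normal(0, 6, size=80)  # y: MEA units

# Ordinary least-squares fit of y = a + b*x.
X = np.column_stack([np.ones_like(platelets), platelets])
coef, *_ = np.linalg.lstsq(X, aggregation, rcond=None)
resid = aggregation - X @ coef
s = np.sqrt(resid @ resid / (len(aggregation) - 2))  # residual standard error

def reference_interval(x, z=1.96):
    """Approximate 95% reference interval at platelet count x."""
    pred = coef[0] + coef[1] * x
    return pred - z * s, pred + z * s

lo, hi = reference_interval(50.0)
```

The interval is centered on the regression line rather than on a single population mean, which is how a reference range can be made conditional on platelet count or hematocrit.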

  15. Functional connectivity of the hippocampus in temporal lobe epilepsy: feasibility of a task-regressed seed-based approach.

    Science.gov (United States)

    Kucukboyaci, Nuri Erkut; Kemmotsu, Nobuko; Cheng, Chris E; Girard, Holly M; Tecoma, Evelyn S; Iragui, Vicente J; McDonald, Carrie R

    2013-01-01

    Resting-state functional connectivity (FC) has revealed marked network dysfunction in patients with temporal lobe epilepsy (TLE) compared to healthy controls. However, the nature and the location of these changes have not been fully elucidated nor confirmed by other methodologies. We assessed the presence of hippocampal FC changes in TLE based on the low frequency residuals of task-related functional magnetic resonance imaging data after the removal of task-related activation [i.e., task-regressed functional connectivity MRI (fcMRI)]. We employed a novel, task-regressed approach to quantify hippocampal FC, and compare hippocampal FC in 17 patients with unilateral TLE (9 left) with 17 healthy controls. Our results suggest widespread FC reductions in the mesial cortex associated with the default mode network (DMN), and some local FC increases in the lateral portions of the right hemisphere. We found more pronounced FC decreases in the left hemisphere than in the right, and these FC decreases were greatest in patients with left TLE. Moreover, the FC reductions observed between the hippocampus and posterior cingulate, inferior parietal, paracentral regions are in agreement with previous resting state studies. Consistent with the existing literature, FC reductions in TLE appear widespread with prominent reductions in the medial portion of the DMN. Our data expand the literature by demonstrating that reductions in FC may be greatest in the left hemisphere and in patients with left TLE. Overall, our findings suggest that task-regressed FC is a viable alternative to resting state and that future studies may extract similar information on network connectivity from already existing datasets.

  16. Hybrid B-Spline Collocation Method for Solving the Generalized Burgers-Fisher and Burgers-Huxley Equations

    Directory of Open Access Journals (Sweden)

    Imtiaz Wasim

    2018-01-01

Full Text Available In this study, we introduce a new numerical technique for solving nonlinear generalized Burgers-Fisher and Burgers-Huxley equations using a hybrid B-spline collocation method. This technique is based on the usual finite difference scheme and the Crank-Nicolson method, which are used to discretize the time derivative and spatial derivatives, respectively. Furthermore, a hybrid B-spline function is utilized as the interpolating function in the spatial dimension. The scheme is shown to be unconditionally stable using the von Neumann (Fourier) method. Several test problems are considered to check the accuracy of the proposed scheme. The numerical results are in good agreement with known exact solutions and the existing schemes in literature.

  17. A Method of Calculating Functional Independence Measure at Discharge from Functional Independence Measure Effectiveness Predicted by Multiple Regression Analysis Has a High Degree of Predictive Accuracy.

    Science.gov (United States)

    Tokunaga, Makoto; Watanabe, Susumu; Sonoda, Shigeru

    2017-09-01

    Multiple linear regression analysis is often used to predict the outcome of stroke rehabilitation. However, the predictive accuracy may not be satisfactory. The objective of this study was to elucidate the predictive accuracy of a method of calculating motor Functional Independence Measure (mFIM) at discharge from mFIM effectiveness predicted by multiple regression analysis. The subjects were 505 patients with stroke who were hospitalized in a convalescent rehabilitation hospital. The formula "mFIM at discharge = mFIM effectiveness × (91 points - mFIM at admission) + mFIM at admission" was used. By including the predicted mFIM effectiveness obtained through multiple regression analysis in this formula, we obtained the predicted mFIM at discharge (A). We also used multiple regression analysis to directly predict mFIM at discharge (B). The correlation between the predicted and the measured values of mFIM at discharge was compared between A and B. The correlation coefficients were .916 for A and .878 for B. Calculating mFIM at discharge from mFIM effectiveness predicted by multiple regression analysis had a higher degree of predictive accuracy of mFIM at discharge than that directly predicted. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  18. Shape Designing of Engineering Images Using Rational Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Muhammad Sarfraz

    2015-01-01

Full Text Available In modern days, engineers encounter a remarkable range of different engineering problems like study of structure, structure properties, and designing of different engineering images, for example, automotive images, aerospace industrial images, architectural designs, shipbuilding, and so forth. This paper proposes an interactive curve scheme for designing engineering images. The proposed scheme furnishes object designing not just in the area of engineering, but it is equally useful for other areas including image processing (IP), Computer Graphics (CG), Computer-Aided Engineering (CAE), Computer-Aided Manufacturing (CAM), and Computer-Aided Design (CAD). As a method, a piecewise rational cubic spline interpolant, with four shape parameters, has been proposed. The method provides effective results together with the effects of derivatives and shape parameters on the shape of the curves in a local and global manner. The spline method, due to its most generalized description, recovers various existing rational spline methods and serves as an alternative to various other methods including v-splines, gamma splines, weighted splines, and beta splines.

  19. Detrending of non-stationary noise data by spline techniques

    International Nuclear Information System (INIS)

    Behringer, K.

    1989-11-01

    An off-line method for detrending non-stationary noise data has been investigated. It uses a least squares spline approximation of the noise data with equally spaced breakpoints. Subtraction of the spline approximation from the noise signal at each data point gives a residual noise signal. The method acts as a high-pass filter with very sharp frequency cutoff. The cutoff frequency is determined by the breakpoint distance. The steepness of the cutoff is controlled by the spline order. (author) 12 figs., 1 tab., 5 refs
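The least-squares spline detrending described here is straightforward to sketch with `scipy.interpolate.LSQUnivariateSpline`; the drift shape, noise level, and breakpoint count below are invented for illustration:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
t = np.linspace(0, 100, 2000)
trend = 0.05 * t + np.sin(0.02 * np.pi * t)      # slow non-stationary drift
noise = rng.normal(0, 0.3, t.size)               # the fluctuation of interest
y = trend + noise

# Least-squares cubic spline with equally spaced interior breakpoints;
# the breakpoint distance sets the effective high-pass cutoff frequency.
breaks = np.linspace(t[0], t[-1], 12)[1:-1]      # interior knots only
spline = LSQUnivariateSpline(t, y, breaks, k=3)
residual = y - spline(t)                         # detrended noise signal
```

Fewer breakpoints give a lower cutoff (a stiffer trend), more breakpoints a higher one, matching the breakpoint-distance control described in the abstract; a higher spline order sharpens the cutoff.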

  20. A Galerkin Solution for Burgers' Equation Using Cubic B-Spline Finite Elements

    OpenAIRE

    Soliman, A. A.

    2012-01-01

Numerical solutions for Burgers’ equation based on the Galerkin method using cubic B-splines as both weight and interpolation functions are set up. It is shown that this method is capable of solving Burgers’ equation accurately for values of viscosity ranging from very small to large. Three standard problems are used to validate the proposed algorithm. A linear stability analysis shows that a numerical scheme based on a Crank-Nicolson approximation in time is unconditionally stable.

  1. A Galerkin Solution for Burgers' Equation Using Cubic B-Spline Finite Elements

    Directory of Open Access Journals (Sweden)

    A. A. Soliman

    2012-01-01

Full Text Available Numerical solutions for Burgers’ equation based on the Galerkin method using cubic B-splines as both weight and interpolation functions are set up. It is shown that this method is capable of solving Burgers’ equation accurately for values of viscosity ranging from very small to large. Three standard problems are used to validate the proposed algorithm. A linear stability analysis shows that a numerical scheme based on a Crank-Nicolson approximation in time is unconditionally stable.

  2. Cubic spline reflectance estimates using the Viking lander camera multispectral data

    Science.gov (United States)

    Park, S. K.; Huck, F. O.

    1976-01-01

    A technique was formulated for constructing spectral reflectance estimates from multispectral data obtained with the Viking lander cameras. The output of each channel was expressed as a linear function of the unknown spectral reflectance producing a set of linear equations which were used to determine the coefficients in a representation of the spectral reflectance estimate as a natural cubic spline. The technique was used to produce spectral reflectance estimates for a variety of actual and hypothetical spectral reflectances.
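The key step in this record, expressing each channel output as a linear functional of a natural cubic spline and solving for the spline's node values, can be sketched as follows. The Gaussian channel sensitivities and the reflectance node values are hypothetical stand-ins for the Viking camera data:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import trapezoid

lam = np.linspace(400, 700, 301)                 # wavelength grid (nm)
nodes = np.linspace(400, 700, 6)                 # spline nodes (the unknowns)

# Hypothetical Gaussian spectral sensitivities for six camera channels.
centers = np.linspace(420, 680, 6)
S = np.exp(-0.5 * ((lam[None, :] - centers[:, None]) / 30.0) ** 2)

# Cardinal natural cubic splines: phi_j is 1 at node j and 0 at the others,
# so any natural spline on these nodes is a linear combination of them.
Phi = np.array([CubicSpline(nodes, row, bc_type='natural')(lam)
                for row in np.eye(nodes.size)])

true_nodes = np.array([0.2, 0.35, 0.5, 0.45, 0.3, 0.25])  # made-up reflectance
true_rho = true_nodes @ Phi
outputs = trapezoid(S * true_rho, lam, axis=1)   # simulated channel outputs

# Each channel output is linear in the node values: A @ rho_nodes = outputs.
A = trapezoid(S[:, None, :] * Phi[None, :, :], lam, axis=2)
rho_nodes = np.linalg.solve(A, outputs)
estimate = CubicSpline(nodes, rho_nodes, bc_type='natural')
```

Because the channel outputs are exactly linear in the node values, the solve recovers the underlying spline whenever the system is well conditioned; real reflectances that are not splines are recovered only approximately.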

  3. Towards Finding the Global Minimum of the D-Wave Objective Function for Improved Neural Network Regressions

    Science.gov (United States)

    Dorband, J. E.

    2017-12-01

The D-Wave 2X has successfully been used for regression analysis to derive carbon flux data from OCO-2 CO2 concentration using neural networks. The samples returned from the D-Wave should represent the minimum of an objective function presented to it. As accurate a minimum function value as possible is needed for this analysis. Samples from the D-Wave are near minimum, but seldom are the global minimum of the function due to quantum noise. Two methods for improving the accuracy of minimized values represented by the samples returned from the D-Wave are presented. The first method finds a new sample with a minimum value near each returned D-Wave sample. The second method uses all the returned samples to find a more global minimum sample. We present three use cases performed using the former method. In the first use case, it is demonstrated that an objective function with random qubit and coupler coefficients had an improved minimum. In the second use case, the samples corrected by the first method can improve the training of a Boltzmann machine neural network. The third use case demonstrated that using the first method can improve virtual qubit accuracy. The latter method was also applied to the first use case.
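A common classical post-processing step of the first kind is a greedy single-bit-flip descent started from each returned sample. The sketch below applies it to a random symmetric QUBO-style matrix standing in for the D-Wave objective; the problem size and coefficients are arbitrary, and this is not the paper's specific procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                                # symmetric objective matrix

def energy(x, Q):
    """Quadratic objective value x' Q x for a binary vector x."""
    return x @ Q @ x

def polish(x, Q):
    """Greedy single-bit-flip descent from a (noisy) binary sample."""
    x = x.copy()
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            trial = x.copy()
            trial[i] ^= 1                        # flip one bit
            if energy(trial, Q) < energy(x, Q):
                x, improved = trial, True
    return x

sample = rng.integers(0, 2, size=n)              # stand-in for a D-Wave sample
better = polish(sample, Q)
```

The polished sample is locally optimal, meaning no single bit flip lowers the energy further; the abstract's second method, which pools information across all returned samples, is not shown here.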

  4. Determination of regression functions for the charging and discharging processes of valve regulated lead-acid batteries

    Directory of Open Access Journals (Sweden)

    Vukić Vladimir Đ.

    2012-01-01

Full Text Available Following a deep discharge of AGM SVT 300 valve-regulated lead-acid batteries using the ten-hour discharge current, the batteries were charged using a variable current. In accordance with the obtained results, exponential and polynomial functions for the approximation of the specified processes were analyzed. The main instrument for evaluating the quality of the implemented approximations was the adjusted coefficient of determination (adjusted R²). It was observed that the battery discharge process may be successfully approximated with both an exponential and a second-order polynomial function. On all the occasions analyzed, values of the adjusted coefficient of determination were greater than 0.995. The charging process of the deeply discharged batteries was successfully approximated with an exponential function, the measured values of the adjusted coefficient of determination being nearly 0.95. Despite the high measured values of the adjusted coefficient of determination, polynomial approximations of the second and third order did not provide satisfactory results regarding the interpolation of the battery charging characteristics. A possibility for practical implementation of the obtained regression functions in uninterruptible power supply systems is described.
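An exponential regression of this kind, judged by the adjusted coefficient of determination, can be sketched with `scipy.optimize.curve_fit` on synthetic charge-voltage data. The model form, parameter values, and noise level below are invented, not the AGM SVT 300 measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 60)                       # hours on charge
v = 13.8 - 2.0 * np.exp(-t / 2.5) + rng.normal(0, 0.02, t.size)  # voltage (V)

def model(t, a, b, tau):
    """Exponential saturation: v(t) = a - b * exp(-t / tau)."""
    return a - b * np.exp(-t / tau)

popt, _ = curve_fit(model, t, v, p0=(13.0, 1.0, 1.0))
resid = v - model(t, *popt)
ss_res = resid @ resid
ss_tot = ((v - v.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
n, p = t.size, 3
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)    # adjusted R^2
```

The adjusted R² penalizes each extra fitted parameter, which is why it can rank a three-parameter exponential against second- and third-order polynomials on an equal footing, as done in the abstract.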

  5. Bayesian Nonparametric Regression Analysis of Data with Random Effects Covariates from Longitudinal Measurements

    KAUST Repository

    Ryu, Duchwan

    2010-09-28

    We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.
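The P-spline building block mentioned in this record (a B-spline basis with a difference penalty on its coefficients) reduces to a ridge-type solve in its simplest frequentist form. Below is a minimal sketch on synthetic data; the knot count, penalty order, and λ are arbitrary, and the paper's Bayesian/MCMC machinery is not reproduced:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# Cubic B-spline design matrix on equally spaced (clamped) knots.
k = 3
interior = np.linspace(0, 1, 21)
t = np.concatenate(([0.0] * k, interior, [1.0] * k))
n_basis = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(x)
                     for j in range(n_basis)])

# P-spline: ridge-type penalty on squared second differences of coefficients.
D = np.diff(np.eye(n_basis), n=2, axis=0)        # second-difference operator
lam = 1.0
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef
```

Larger λ pulls neighboring coefficients together and smooths the fit toward a line; in the Bayesian treatment the same penalty reappears as a Gaussian random-walk prior on the coefficients.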

  6. Identification of Hammerstein models with cubic spline nonlinearities.

    Science.gov (United States)

    Dempsey, Erika J; Westwick, David T

    2004-02-01

    This paper considers the use of cubic splines, instead of polynomials, to represent the static nonlinearities in block structured models. It introduces a system identification algorithm for the Hammerstein structure, a static nonlinearity followed by a linear filter, where cubic splines represent the static nonlinearity and the linear dynamics are modeled using a finite impulse response filter. The algorithm uses a separable least squares Levenberg-Marquardt optimization to identify Hammerstein cascades whose nonlinearities are modeled by either cubic splines or polynomials. These algorithms are compared in simulation, where the effects of variations in the input spectrum and distribution, and those of the measurement noise are examined. The two algorithms are used to fit Hammerstein models to stretch reflex electromyogram (EMG) data recorded from a spinal cord injured patient. The model with the cubic spline nonlinearity provides more accurate predictions of the reflex EMG than the polynomial based model, even in novel data.
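The separable structure the algorithm exploits can be seen in a stripped-down variant: with a polynomial nonlinearity (the paper's polynomial baseline) the Hammerstein output is linear in the cross-terms between polynomial powers and FIR lags, so one least-squares fit followed by a rank-one factorization recovers both blocks up to scale. The signals and model orders below are synthetic, and the spline nonlinearity and Levenberg-Marquardt refinement of the paper are not shown:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 2000
u = rng.normal(size=N)                           # white-noise input
z = u + 0.5 * u**2                               # true static nonlinearity
h_true = np.array([1.0, 0.6, 0.3, 0.1])          # true FIR dynamics
y = np.convolve(z, h_true)[:N] + 0.01 * rng.normal(size=N)

P, M = 3, 4                                      # polynomial order, FIR length
cols = []
for p in range(1, P + 1):                        # power p of the input ...
    up = u ** p
    for j in range(M):                           # ... delayed by j samples
        cols.append(np.concatenate([np.zeros(j), up[:N - j]]))
X = np.column_stack(cols)

# y is linear in the products a_p * h_j, so a single least-squares fit suffices.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
T = theta.reshape(P, M)                          # should be rank one: outer(a, h)
U, s, Vt = np.linalg.svd(T)
a = U[:, 0] * s[0]                               # polynomial coefficients (scaled)
h = Vt[0]                                        # FIR coefficients (unit norm)
```

The scale and sign split between `a` and `h` is arbitrary, the usual Hammerstein ambiguity, and is fixed in practice by a normalization convention such as unit-norm FIR coefficients.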

  7. A comparison of three sets of criteria for determining the presence of differential item functioning using ordinal logistic regression.

    Science.gov (United States)

    Crane, Paul K; Gibbons, Laura E; Ocepek-Welikson, Katja; Cook, Karon; Cella, David; Narasimhalu, Kaavya; Hays, Ron D; Teresi, Jeanne A

    2007-01-01

    Several techniques have been developed to detect differential item functioning (DIF), including ordinal logistic regression (OLR). This study compared different criteria for determining whether items have DIF using OLR. To compare and contrast findings from three different sets of criteria for detecting DIF using OLR. General distress and physical functioning items were evaluated for DIF related to five covariates: age, marital status, gender, race, and Hispanic origin. Cross-sectional study. 1,714 patients with cancer or HIV/AIDS. A total of 23 items addressing physical functioning and 15 items addressing general distress were selected from a pool of 154 items from four different health-related quality of life questionnaires. The three sets of criteria produced qualitatively and quantitatively different results. Criteria based on statistical significance alone detected DIF in almost all the items, while alternative criteria based on magnitude detected DIF in far fewer items. Accounting for DIF by using demographic-group specific item parameters had negligible effects on individual scores, except for race. Specific criteria chosen to determine whether items have DIF have an impact on the findings. Criteria based entirely on statistical significance may detect small differences that are clinically negligible.

  8. Paradigm free mapping with sparse regression automatically detects single-trial functional magnetic resonance imaging blood oxygenation level dependent responses.

    Science.gov (United States)

    Caballero Gaudes, César; Petridou, Natalia; Francis, Susan T; Dryden, Ian L; Gowland, Penny A

    2013-03-01

    The ability to detect single trial responses in functional magnetic resonance imaging (fMRI) studies is essential, particularly if investigating learning or adaptation processes or unpredictable events. We recently introduced paradigm free mapping (PFM), an analysis method that detects single trial blood oxygenation level dependent (BOLD) responses without specifying prior information on the timing of the events. PFM is based on the deconvolution of the fMRI signal using a linear hemodynamic convolution model. Our previous PFM method (Caballero-Gaudes et al., 2011: Hum Brain Mapp) used the ridge regression estimator for signal deconvolution and required a baseline signal period for statistical inference. In this work, we investigate the application of sparse regression techniques in PFM. In particular, a novel PFM approach is developed using the Dantzig selector estimator, solved via an efficient homotopy procedure, along with statistical model selection criteria. Simulation results demonstrated that, using the Bayesian information criterion to select the regularization parameter, this method obtains high detection rates of the BOLD responses, comparable with a model-based analysis, but requiring no information on the timing of the events and being robust against hemodynamic response function variability. The practical operation of this sparse PFM method was assessed with single-trial fMRI data acquired at 7T, where it automatically detected all task-related events, and was an improvement on our previous PFM method, as it does not require the definition of a baseline state and amplitude thresholding and does not compromise on specificity and sensitivity. Copyright © 2011 Wiley Periodicals, Inc.

  9. Finite nucleus Dirac mean field theory and random phase approximation using finite B splines

    International Nuclear Information System (INIS)

McNeil, J.A.; Furnstahl, R.J.; Rost, E.; Shepard, J.R. (Department of Physics, University of Maryland, College Park, Maryland 20742; Department of Physics, University of Colorado, Boulder, Colorado 80309)

    1989-01-01

We calculate the finite nucleus Dirac mean field spectrum in a Galerkin approach using finite basis splines. We review the method and present results for the relativistic σ-ω model for the closed-shell nuclei ¹⁶O and ⁴⁰Ca. We study the convergence of the method as a function of the size of the basis and the closure properties of the spectrum using an energy-weighted dipole sum rule. We apply the method to the Dirac random-phase-approximation response and present results for the isoscalar 1⁻ and 3⁻ longitudinal form factors of ¹⁶O and ⁴⁰Ca. We also use a B-spline spectral representation of the positive-energy projector to evaluate partial energy-weighted sum rules and compare with nonrelativistic sum rule results

  10. Modeling of type-2 fuzzy cubic B-spline surface for flood data problem in Malaysia

    Science.gov (United States)

    Bidin, Mohd Syafiq; Wahab, Abd. Fatah

    2017-08-01

Malaysia possesses low and sloping land areas which may cause floods. The flood phenomenon can be analyzed if the surface data of the study area can be modeled by geometric modeling. Type-2 fuzzy data for the flood data are defined using type-2 fuzzy set theory in order to handle the uncertainty of complex data. Then, a cubic B-spline surface function is used to produce a smooth surface. Three main processes are carried out to reduce the type-2 fuzzy data to crisp data: fuzzification (α-cut operation), type-reduction and defuzzification. Upon completing these processes, the Type-2 Fuzzy Cubic B-Spline Surface Model is applied to visualize the surface data of the flood areas, which involve complex uncertainty.

  11. Numerical simulation of reaction-diffusion systems by modified cubic B-spline differential quadrature method

    International Nuclear Information System (INIS)

    Mittal, R.C.; Rohila, Rajni

    2016-01-01

    In this paper, we have applied a modified cubic B-spline based differential quadrature method to obtain numerical solutions of one-dimensional reaction-diffusion systems such as the linear reaction-diffusion system, the Brusselator system, the isothermal system and the Gray-Scott system. The models represented by these systems have important applications in different areas of science and engineering. The most striking and interesting part of the work is the solution patterns obtained for the Gray-Scott model, reminiscent of patterns often seen in nature. We have used cubic B-spline functions for space discretization to get a system of ordinary differential equations. This system of ODEs is solved by the highly stable SSP-RK43 method to get the solution at the knots. The computed results are very accurate and shown to be better than those available in the literature. The method is simple to apply and gives solutions with little computational effort.
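
As a rough illustration of the space-discretization step (a plain interpolating spline, not the authors' modified B-spline differential quadrature scheme), the sketch below recovers second derivatives at the knots from sampled values; the test function and grid are arbitrary choices:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sample u(x) = sin(pi x) on a uniform grid.
x = np.linspace(0.0, 1.0, 51)
u = np.sin(np.pi * x)

# Clamped cubic spline: exact first derivatives supplied at both ends.
cs = CubicSpline(x, u, bc_type=((1, np.pi), (1, -np.pi)))

# Second derivative at the knots, as needed for the diffusion term u_xx.
u_xx = cs(x, 2)
err = float(np.max(np.abs(u_xx + np.pi**2 * np.sin(np.pi * x))))
print(f"max error in u_xx: {err:.2e}")
```

In a reaction-diffusion solver, these spline-derived derivatives would feed the right-hand side of the ODE system advanced in time by the Runge-Kutta scheme.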

  12. Numerical solution of fractional differential equations using cubic B-spline wavelet collocation method

    Science.gov (United States)

    Li, Xinxiu

    2012-10-01

    Physical processes with memory and hereditary properties can be best described by fractional differential equations due to the memory effect of fractional derivatives. For that reason, reliable and efficient techniques for the solution of fractional differential equations are needed. Our aim is to generalize the wavelet collocation method to fractional differential equations using cubic B-spline wavelets. Analytical expressions of fractional derivatives in the Caputo sense for cubic B-spline functions are presented. The main characteristic of the approach is that it converts such problems into a system of algebraic equations, which is suitable for computer programming. It not only simplifies the problem but also speeds up the computation. Numerical results demonstrate the validity and applicability of the method for solving fractional differential equations.

  13. Spline- and wavelet-based models of neural activity in response to natural visual stimulation.

    Science.gov (United States)

    Gerhard, Felipe; Szegletes, Luca

    2012-01-01

    We present a comparative study of the performance of different basis functions for the nonparametric modeling of neural activity in response to natural stimuli. Based on naturalistic video sequences, a generative model of neural activity was created using a stochastic linear-nonlinear-spiking cascade. The temporal dynamics of the spiking response is well captured with cubic splines with equidistant knot spacings. Whereas a sym4-wavelet decomposition performs competitively or only slightly worse than the spline basis, Haar wavelets (or histogram-based models) seem unsuitable for faithfully describing the temporal dynamics of the sensory neurons. This tendency was confirmed with an application to a real data set of spike trains recorded from visual cortex of the awake monkey.

  14. Decomposition of LiDAR waveforms by B-spline-based modeling

    Science.gov (United States)

    Shen, Xiang; Li, Qing-Quan; Wu, Guofeng; Zhu, Jiasong

    2017-06-01

    Waveform decomposition is a widely used technique for extracting echoes from full-waveform LiDAR data. Most previous studies recommended the Gaussian decomposition approach, which employs the Gaussian function in laser pulse modeling. As the Gaussian-shape assumption is not always satisfied for real LiDAR waveforms, other probability distributions (e.g., the lognormal distribution, the generalized normal distribution, and the Burr distribution) have also been introduced by researchers to fit sharply peaked and/or heavy-tailed pulses. However, these models cannot be used universally, because each is only suitable for processing LiDAR waveforms of particular shapes. In this paper, we present a new waveform decomposition algorithm based on the B-spline modeling technique. LiDAR waveforms are not assumed to have a priori shapes but rather are modeled by B-splines, and the shape of a received waveform is treated as a mixture of finitely many transmitted pulses after translation and scaling transformations. The performance of the new model was tested on two full-waveform data sets acquired by a Riegl LMS-Q680i laser scanner and an Optech Aquarius laser bathymeter, and compared with three classical waveform decomposition approaches: the Gaussian, generalized normal, and lognormal distribution-based models. The experimental results show that the B-spline model performed best in terms of waveform fitting accuracy, while the generalized normal model yielded the worst performance on the two test data sets. Riegl waveforms have nearly Gaussian pulse shapes and were well fitted by the Gaussian mixture model, while the B-spline-based modeling algorithm produced a slightly better result by further reducing the fitting residuals by 6.4%, largely by alleviating the adverse impact of the ringing effect. The pulse shapes of Optech waveforms, on the other hand, are noticeably right-skewed. 
The Gaussian modeling results deviated significantly from original signals, and

  15. Nonlinear bias compensation of ZiYuan-3 satellite imagery with cubic splines

    Science.gov (United States)

    Cao, Jinshan; Fu, Jianhong; Yuan, Xiuxiao; Gong, Jianya

    2017-11-01

    Like many high-resolution satellites such as the ALOS, MOMS-2P, QuickBird, and ZiYuan1-02C satellites, the ZiYuan-3 satellite suffers from different levels of attitude oscillations. As a result of such oscillations, the rational polynomial coefficients (RPCs) obtained using a terrain-independent scenario often have nonlinear biases. In the sensor orientation of ZiYuan-3 imagery based on a rational function model (RFM), these nonlinear biases cannot be effectively compensated by an affine transformation. The sensor orientation accuracy is thereby worse than expected. In order to eliminate the influence of attitude oscillations on the RFM-based sensor orientation, a feasible nonlinear bias compensation approach for ZiYuan-3 imagery with cubic splines is proposed. In this approach, no actual ground control points (GCPs) are required to determine the cubic splines. First, the RPCs are calculated using a three-dimensional virtual control grid generated based on a physical sensor model. Second, one cubic spline is used to model the residual errors of the virtual control points in the row direction and another cubic spline is used to model the residual errors in the column direction. Then, the estimated cubic splines are used to compensate the nonlinear biases in the RPCs. Finally, the affine transformation parameters are used to compensate the residual biases in the RPCs. Three ZiYuan-3 images were tested. The experimental results showed that before the nonlinear bias compensation, the residual errors of the independent check points were nonlinearly biased. Even if the number of GCPs used to determine the affine transformation parameters was increased from 4 to 16, these nonlinear biases could not be effectively compensated. After the nonlinear bias compensation with the estimated cubic splines, the influence of the attitude oscillations could be eliminated. 
The RFM-based sensor orientation accuracies of the three ZiYuan-3 images reached 0.981 pixels, 0.890 pixels, and 1
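
A minimal sketch of the residual-modeling idea, with synthetic stand-ins for the virtual control point residuals (the oscillation frequency, noise level, and knot grid below are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Hypothetical residuals of virtual control points along the normalized
# row coordinate: a wavy, oscillation-induced bias plus noise (pixels).
rng = np.random.default_rng(0)
rows = np.linspace(0.0, 1.0, 200)
bias = 0.8 * np.sin(6.0 * np.pi * rows)
residuals = bias + rng.normal(0.0, 0.05, rows.size)

# Least-squares cubic spline; the coarse interior knot grid regularizes
# the fit so the spline follows the oscillation but not the noise.
k = 3
interior = np.linspace(0.0, 1.0, 27)[1:-1]
t = np.r_[(rows[0],) * (k + 1), interior, (rows[-1],) * (k + 1)]
spl = make_lsq_spline(rows, residuals, t, k=k)

# Compensation: subtract the modeled nonlinear bias.
corrected = residuals - spl(rows)
rms = lambda e: float(np.sqrt(np.mean(e**2)))
print(f"RMS before: {rms(residuals):.3f}, after: {rms(corrected):.3f}")
```

In the paper one such spline is estimated per direction (row and column) before the final affine correction; here only one direction is shown.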

  16. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    International Nuclear Information System (INIS)

    Crecy, F. de.

    1993-01-01

    A mathematical tool based on pseudo-cubic thin-plate-type splines has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with related uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross-validation method. The residual standard deviation obtained is significantly smaller than that of a least-squares regression. An example of use is given with critical heat flux data, showing a significant decrease of the design criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs
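
The GCV-chosen smoothing parameter can be sketched in one dimension with SciPy's `make_smoothing_spline`, which selects the parameter by generalized cross validation when `lam=None` (the data below are synthetic; the paper's tool is multidimensional and thin-plate based):

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

# Noisy one-dimensional observations (synthetic).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 120)
y = np.exp(-3.0 * x) + rng.normal(0.0, 0.02, x.size)

# lam=None: the smoothing parameter is chosen by generalized cross
# validation, mirroring the paper's approach to parameter selection.
spl = make_smoothing_spline(x, y, lam=None)

resid_sd = float(np.std(y - spl(x), ddof=1))
print(f"residual standard deviation: {resid_sd:.3f}")
```

The fitted spline can then be evaluated (with its derivatives) at any point of the parameter range, which is the "predictor usable at any point" idea of the abstract.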

  17. The Dynamics of microRNA Transcriptome in Bovine Corpus Luteum during Its Formation, Function, and Regression

    Directory of Open Access Journals (Sweden)

    Rreze M. Gecaj

    2017-12-01

    The formation, function, and subsequent regression of the ovarian corpus luteum (CL) are dynamic processes that enable cyclical ovarian activity. Studies in whole ovary tissue have found microRNAs (miRNAs) to be critical for ovary function. However, relatively little is known about the role of miRNAs in the bovine CL. Utilizing small RNA next-generation sequencing, we profiled the miRNA transcriptome in the bovine CL during the entire physiological estrous cycle, sampling the CL on days d 1–2, d 3–4, and d 5–7 (early CL, eCL), d 8–12 (mid CL, mCL), d 13–16 (late CL, lCL), and d > 18 (regressed CL, rCL). We characterized patterns of miRNA abundance and identified 42 miRNAs that were consistently significantly differentially expressed (DE) in the eCL relative to their expression at each of the analyzed stages (mCL, lCL, and rCL). Of these, bta-miR-210-3p, −2898, −96, −7-5p, −183-5p, −182, and −202 showed drastic up-regulation, with a fold-change of ≥2.0 and adjusted P < 0.01 in the eCL, while bta-miR-146a was downregulated at lCL and rCL vs. the eCL. Another 24, 11, and 21 miRNAs were significantly DE only in the individual comparisons of the eCL vs. the mCL, lCL, and rCL, respectively. Irrespective of cycle stage, two miRNAs, bta-miR-21-5p and bta-miR-143, were identified as the most abundant miRNA species, and they showed opposing expression patterns. Whilst bta-miR-21-5p peaked in number of reads in the eCL and was significantly downregulated in the mCL and lCL, bta-miR-143 reached its peak in the rCL and was significantly downregulated in the eCL. MiRNAs with significant DE in at least one cycle stage (CL class) were further grouped into eight distinct clusters by the self-organizing tree algorithm (SOTA). Half of the clusters contain miRNAs with low expression, whilst the other half contain miRNAs with high expression levels during the eCL. Prediction analysis for significantly DE miRNAs resulted in target genes involved with CL formation

  18. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    International Nuclear Information System (INIS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-01-01

    A solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under the meteorological conditions of Malaysia. Drying in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Drying-rate curves generally need more smoothing than moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of the method is to approximate the data by a CS regression with continuous first and second derivatives; analytical differentiation of the spline regression then yields the instantaneous drying rate directly from the experimental data. The smoothing problem was solved by minimizing the functional of average risk. The drying kinetics were fitted with six published exponential thin-layer drying models, evaluated using the coefficient of determination (R²) and root mean square error (RMSE). The Two Term model was found to describe the drying behavior best. In addition, the CS-smoothed drying rate proved to be a good estimator for the moisture-time curves, as well as for missing moisture content data, of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
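
A hedged sketch of the smoothing-then-differentiation idea: the drying curve below is a synthetic exponential stand-in, not the paper's data, and SciPy's GCV-based smoothing spline stands in for the average-risk minimization:

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

# Synthetic drying curve: moisture (%) decaying from ~93.4% toward ~8.2%
# over 4 days (96 h), sampled hourly with noise. Numbers are invented.
rng = np.random.default_rng(7)
t = np.linspace(0.0, 96.0, 97)
M_true = 8.2 + (93.4 - 8.2) * np.exp(-t / 28.0)
M_obs = M_true + rng.normal(0.0, 0.3, t.size)

# Smooth the moisture-time curve, then differentiate analytically to
# obtain the instantaneous drying rate.
spl = make_smoothing_spline(t, M_obs)
rate = -spl.derivative()(t)   # % moisture per hour
print(f"initial rate: {rate[0]:.2f} %/h, final rate: {rate[-1]:.2f} %/h")
```

The smooth spline can also be evaluated between sampling times, which is how missing moisture readings would be estimated.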

  1. Nonlinear registration using B-spline feature approximation and image similarity

    Science.gov (United States)

    Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-07-01

    Warping methods are broadly classified into image-matching methods based on similar pixel intensity distributions and feature-matching methods using distinct anatomical features. Feature-based methods may fail to match local variations between two images, although they match features well globally. Similarity-based methods, in turn, can produce false matches corresponding to local minima of the underlying energy functions. To avoid the local-minima problem, we propose a nonlinear deformable registration method utilizing the global information of feature matching and the local information of image matching. To define the features, the gray matter and white matter of brain tissue are segmented by the Fuzzy C-Means (FCM) algorithm. A B-spline approximation technique is used for feature matching. We use a multi-resolution B-spline approximation method that modifies the multilevel B-spline interpolation method: it locally changes the resolution of the control lattice in proportion to the distance between features of the two images. Mutual information is used as the similarity measure, and the deformation fields are locally refined until the similarity is maximized. In tests on two 3D T1-weighted MRIs, this method maintained the accuracy of conventional image-matching methods without the local-minimum problem.

  2. A quadratic spline maximum entropy method for the computation of invariant densities

    Directory of Open Access Journals (Sweden)

    DING Jiu

    2015-06-01

    The numerical recovery of an invariant density of the Frobenius-Perron operator corresponding to a nonsingular transformation is carried out using quadratic spline functions. We implement a maximum entropy method to approximate the invariant density. The proposed method removes the ill-conditioning in the maximum entropy method that arises from the use of polynomials. Due to the smoothness of the functions and a good convergence rate, the accuracy of the numerical calculation increases rapidly as the number of moment functions increases. The numerical results from the proposed method are supported by theoretical analysis.

  3. A B-Spline Framework for Smooth Derivative Computation in Well Test Analysis Using Diagnostic Plots.

    Science.gov (United States)

    Tago, Josué; Hernández-Espriú, Antonio

    2018-01-01

    In the oil and gas industry, well test analysis using derivative plots has been the core technique for examining reservoir and well behavior over the last three decades. Recently, diagnostic plots have gained recognition in the field of hydrogeology; however, this tool is still underused by groundwater professionals. The foremost drawback is that the derivative function must be computed from noisy field measurements, usually with finite-difference schemes, which complicates the analysis. We propose a B-spline framework for smooth derivative computation, referred to as Constrained Quartic B-Splines with Free Knots. The approach presents the following novelties relative to methodological precedents: (1) the use of automatic equality derivative constraints, (2) a knot removal strategy, and (3) the introduction of a Boolean shape parameter that defines the number of initial knots to choose. These allow the evaluation of both simple (manually recorded drawdown measurements) and complex (transducer-measured records) datasets. Furthermore, we propose an additional shape-preserving smoothing preprocessing procedure as a simple, fast and robust method to deal with extremely noisy signals. Our framework was tested on four pumping tests by comparing the spline derivative with the Bourdet algorithm, and we found that the latter is rather noisy (even for large differentiation intervals) and that its second-derivative response is basically unreadable. In contrast, the spline first and second derivatives led to smoother responses, which are more suitable for interpretation. We conclude that the proposed framework is a welcome contribution for evaluating aquifer tests reliably using derivative diagnostic plots. © 2017, National Ground Water Association.
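
A rough sketch of the quartic-spline derivative idea, using SciPy's least-squares B-spline fit with a fixed knot grid (the paper's constrained free-knot machinery is not reproduced, and the drawdown record below is synthetic):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Synthetic late-time drawdown record: s ~ a*log10(t) + b plus noise
# (a Theis-like straight line on the semilog plot; values invented).
rng = np.random.default_rng(2)
logt = np.linspace(0.0, 3.0, 300)
s = 0.5 * logt + 0.1 + rng.normal(0.0, 0.01, logt.size)

# Quartic (k=4) least-squares B-spline on a sparse fixed knot grid.
k = 4
interior = np.linspace(0.0, 3.0, 8)[1:-1]
t = np.r_[(logt[0],) * (k + 1), interior, (logt[-1],) * (k + 1)]
spl = make_lsq_spline(logt, s, t, k=k)

# The diagnostic derivative ds/dlog10(t) comes out smooth (~0.5 here),
# unlike finite-difference (Bourdet-style) derivatives of the raw data.
deriv = spl.derivative()(logt)
print(f"mean derivative: {float(deriv.mean()):.3f}")
```

A second call to `spl.derivative(2)` would give the equally smooth second-derivative response discussed in the abstract.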

  4. Micropolar Fluids Using B-spline Divergence Conforming Spaces

    KAUST Repository

    Sarmiento, Adel

    2014-06-06

    We discretized the two-dimensional linear momentum, microrotation, energy and mass conservation equations from micropolar fluid theory with the finite element method, creating divergence-conforming spaces based on B-spline basis functions to obtain pointwise divergence-free solutions [8]. Weak boundary conditions were imposed using Nitsche's method for tangential conditions, while normal conditions were imposed strongly. With exact mass conservation provided by the divergence-free formulation, we focused on evaluating the differences between micropolar and conventional fluids, to show the advantages of using the micropolar fluid model to capture the features of complex fluids. Square and arc heat-driven cavities were solved as test cases. The parameters of the model and the Rayleigh number were varied for a better understanding of the system. The divergence-free formulation was used to guarantee an accurate solution of the flow. The formulation was implemented using the PetIGA framework as a basis, using its parallel structures to achieve high scalability. The results for the square heat-driven cavity test case are in good agreement with those reported earlier.

  5. Characterization of acid functional groups of carbon dots by nonlinear regression data fitting of potentiometric titration curves

    Science.gov (United States)

    Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.

    2016-05-01

    The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discuss the fitting of potentiometric titration curve data using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acid, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa titrated and initial concentration of HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these interesting and intriguing nanoparticles.

  6. Logistic Regression

    Science.gov (United States)

    Grégoire, G.

    2014-12-01

    Logistic regression is originally intended to explain the relationship between the probability of an event and a set of covariables. The model's coefficients can be interpreted via the odds and the odds ratio, which are presented in the introduction of the chapter. When the observations are obtained individually, we speak of binary logistic regression; when they are grouped, the regression is said to be binomial. In our presentation we mainly focus on the binary case. For statistical inference the main tool is the maximum likelihood methodology: we present the Wald, Rao, and likelihood ratio results and their use to compare nested models. The problems we intend to deal with are essentially the same as in multiple linear regression: testing a global effect, testing individual effects, selecting variables to build a model, measuring the fitness of the model, and predicting new values. The methods are demonstrated on data sets using R. Finally we briefly consider the binomial case and the situation where we are interested in several events, that is, polytomous (multinomial) logistic regression and the particular case of ordinal logistic regression.
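
The maximum-likelihood machinery of the chapter can be sketched with a hand-rolled Newton-Raphson fit of a binary logistic model (the chapter's examples use R; this Python sketch with simulated data is only illustrative):

```python
import numpy as np

# Simulate binary outcomes from a logistic model with known coefficients.
rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
true_beta = np.array([-0.5, 1.2])          # intercept, slope (arbitrary)
p = 1.0 / (1.0 + np.exp(-(true_beta[0] + true_beta[1] * x)))
y = rng.binomial(1, p)

# Newton-Raphson (equivalently, iteratively reweighted least squares).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities
    W = mu * (1.0 - mu)                      # IRLS weights
    grad = X.T @ (y - mu)                    # score vector
    hess = X.T @ (X * W[:, None])            # observed information
    beta += np.linalg.solve(hess, grad)

# exp(slope) is the odds ratio for a one-unit increase in x.
print("beta:", np.round(beta, 2), "odds ratio:", round(float(np.exp(beta[1])), 2))
```

The inverse of `hess` at convergence gives the coefficient covariance matrix used by the Wald test mentioned in the abstract.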

  7. [Multimodal medical image registration using cubic spline interpolation method].

    Science.gov (United States)

    He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-12-01

    Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which cubic spline interpolation is applied to interpolate the PET-CT image series, registration is then carried out using a mutual information algorithm, and finally an improved principal component analysis method is used for the fusion of the PET-CT multimodal images to enhance the visual effect of the PET image; satisfactory registration and fusion results are thus obtained. Cubic spline interpolation is used in the reconstruction to restore the information missing between image slices, which compensates for the shortcomings of previous registration methods, improves the accuracy of the registration, and makes the fused multimodal images more similar to the real image. The cubic spline interpolation method has been successfully applied in the development of a 3D-CRT (3D Conformal Radiation Therapy) system.
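
A minimal sketch of slice-direction cubic spline interpolation on a toy volume (shapes and the test field are arbitrary; a real PET series would replace `volume`):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Toy volume: 8 slices of 16x16 images sampled from a smooth field.
z = np.linspace(0.0, 1.0, 8)
yy, xx = np.mgrid[0:16, 0:16] / 15.0
volume = np.stack([np.sin(np.pi * (xx + yy) / 2 + 2.0 * zi) for zi in z])

# One cubic spline per pixel along the slice axis, then resampling at
# roughly twice the slice density to restore inter-slice information.
cs = CubicSpline(z, volume, axis=0)
z_fine = np.linspace(0.0, 1.0, 15)
volume_fine = cs(z_fine)
print(volume_fine.shape)
```

The densified volume can then be passed to the mutual-information registration step, which benefits from the restored inter-slice detail.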

  8. Stability of Spline-Type Systems in the Abelian Case

    Directory of Open Access Journals (Sweden)

    Darian Onchis

    2017-12-01

    In this paper, the stability of translation-invariant spaces of distributions over locally compact groups is stated as boundedness of synthesis and projection operators. At first, a characterization of the stability of spline-type spaces is given, in the standard sense of the stability for shift-invariant spaces, that is, linear independence characterizes lower boundedness of the synthesis operator in Banach spaces of distributions. The constructive nature of the proof for Theorem 2 enabled us to constructively realize the biorthogonal system of a given one. Then, inspired by the multiresolution analysis and the Lax equivalence for general discretization schemes, we approached the stability of a sequence of spline-type spaces as uniform boundedness of projection operators. Through Theorem 3, we characterize stable sequences of stable spline-type spaces.

  9. Stability of Spline-Type Systems in the Abelian Case.

    Science.gov (United States)

    Onchis, Darian; Zappalà, Simone

    2017-12-27

    In this paper, the stability of translation-invariant spaces of distributions over locally compact groups is stated as boundedness of synthesis and projection operators. At first, a characterization of the stability of spline-type spaces is given, in the standard sense of the stability for shift-invariant spaces, that is, linear independence characterizes lower boundedness of the synthesis operator in Banach spaces of distributions. The constructive nature of the proof for Theorem 2 enabled us to constructively realize the biorthogonal system of a given one. Then, inspired by the multiresolution analysis and the Lax equivalence for general discretization schemes, we approached the stability of a sequence of spline-type spaces as uniform boundedness of projection operators. Through Theorem 3, we characterize stable sequences of stable spline-type spaces.

  10. Lecture notes on ridge regression

    OpenAIRE

    van Wieringen, Wessel N.

    2015-01-01

    The linear regression model cannot be fitted to high-dimensional data, as the high dimensionality brings about empirical non-identifiability. Penalized regression overcomes this non-identifiability by augmenting the loss function with a penalty (i.e. a function of the regression coefficients). The ridge penalty is the sum of squared regression coefficients, giving rise to ridge regression. Here many aspects of ridge regression are reviewed, e.g. moments, mean squared error, its equivalence to co...
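
The ridge estimator described above has the closed form beta_hat = (X'X + lambda*I)^{-1} X'y, which remains computable even when p > n; a minimal numpy sketch with invented dimensions:

```python
import numpy as np

# p > n: ordinary least squares is non-identifiable here, but the
# ridge-penalized system is solvable (dimensions and lambda invented).
rng = np.random.default_rng(4)
n, p, lam = 50, 200, 1.0
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = 2.0                       # sparse true signal
y = X @ beta_true + rng.normal(0.0, 0.1, n)

# Closed-form ridge estimator.
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print("first five estimates:", np.round(beta_ridge[:5], 2))
```

The estimates are shrunken toward zero relative to the true coefficients, which is the bias-variance trade-off the lecture notes analyze via moments and mean squared error.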

  11. BRGLM, Interactive Linear Regression Analysis by Least Square Fit

    International Nuclear Information System (INIS)

    Ringland, J.T.; Bohrer, R.E.; Sherman, M.E.

    1985-01-01

    1 - Description of program or function: BRGLM is an interactive program written to fit general linear regression models by least squares and to provide a variety of statistical diagnostic information about the fit. Stepwise and all-subsets regression can also be carried out. There are facilities for interactive data management (e.g. setting missing-value flags, data transformations) and tools for constructing design matrices for the more commonly used models such as factorials, cubic splines, and auto-regressions. 2 - Method of solution: The least squares computations are based on the orthogonal (QR) decomposition of the design matrix obtained using the modified Gram-Schmidt algorithm. 3 - Restrictions on the complexity of the problem: The current release of BRGLM allows maxima of 1000 observations, 99 variables, and 3000 words of main memory workspace. For a problem with N observations and P variables, the number of words of main memory storage required is MAX(N*(P+6), N*P+P*P+3*N, 3*P*P+6*N). Any linear model may be fit, although the in-memory workspace will have to be increased for larger problems
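
A sketch of the solution method named in item 2, least squares via modified Gram-Schmidt QR (a textbook implementation, not BRGLM's own code):

```python
import numpy as np

def mgs_qr(A):
    """Modified Gram-Schmidt: A (n x p, full rank) -> Q (n x p), R (p x p)."""
    A = A.astype(float).copy()
    n, p = A.shape
    Q = np.empty((n, p))
    R = np.zeros((p, p))
    for j in range(p):
        v = A[:, j]
        for i in range(j):
            # Orthogonalize against the already-updated vector v (this
            # sequential update is what distinguishes MGS from classical GS).
            R[i, j] = Q[:, i] @ v
            v = v - R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 4))
beta_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ beta_true + rng.normal(0.0, 0.01, 100)

# Least squares: solve R beta = Q'y (back-substitution in practice).
Q, R = mgs_qr(X)
beta = np.linalg.solve(R, Q.T @ y)
print(np.round(beta, 2))
```

MGS is preferred over classical Gram-Schmidt here for its better numerical orthogonality on ill-conditioned design matrices.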

  12. Formation of Reflecting Surfaces Based on Spline Methods

    Science.gov (United States)

    Zamyatin, A. V.; Zamyatina, E. A.

    2017-11-01

    The article deals with the problem of generating the surfaces of reflecting barriers by spline methods. Cases of reflection to which a geometric model applies are considered. The surfaces of the reflecting barriers are formed so that they contain given points, and the rays reflected at these points hit defined points of a specified surface. The reflecting barrier surface is formed by cubic splines, which enables a comparatively simple implementation of the proposed algorithms as software applications. The algorithms developed in the article can be applied in architectural and construction design for generating reflecting surfaces in optics and acoustics, provided the geometric model of the reflection process is used correctly.

  13. Interpolation in numerical optimization. [by cubic spline generation

    Science.gov (United States)

    Hall, K. R.; Hull, D. G.

    1975-01-01

    The present work discusses the generation of the cubic-spline interpolator in numerical optimization methods that use a variable-step integrator with step-size control based on local relative truncation error. An algorithm for generating the cubic spline with successive over-relaxation is presented, which represents an improvement over that given by Ralston and Wilf (1967). Rewriting the code reduces the number of N-vectors from eight to one. The algorithm is formulated in such a way that the solution of the linear system yields the first derivatives at the nodal points. This method is as accurate as other schemes but requires the minimum amount of storage.
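
The formulation in which the linear system yields first derivatives at the nodes can be sketched for the clamped, uniformly spaced case; a direct solve stands in for the successive over-relaxation iteration of the paper:

```python
import numpy as np

def spline_node_derivatives(x, y, m0, mn):
    """First derivatives m_i of the clamped cubic spline at uniform nodes."""
    n = len(x) - 1
    h = x[1] - x[0]
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0          # clamped ends: slopes given
    b[0], b[n] = m0, mn
    for i in range(1, n):
        # C2 continuity: m_{i-1} + 4 m_i + m_{i+1} = 3 (y_{i+1} - y_{i-1}) / h
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, 4.0, 1.0
        b[i] = 3.0 * (y[i + 1] - y[i - 1]) / h
    return np.linalg.solve(A, b)

x = np.linspace(0.0, 2.0, 9)
y = x**2
m = spline_node_derivatives(x, y, m0=0.0, mn=4.0)
print(np.round(m, 6))   # a clamped cubic spline reproduces x^2 exactly
```

The tridiagonal, diagonally dominant system is exactly the kind SOR handles well; a dedicated tridiagonal solver would need only a few vectors of storage, matching the paper's concern.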

  14. Control theoretic splines optimal control, statistical, and path planning

    CERN Document Server

    Egerstedt, Magnus

    2010-01-01

    Splines, both interpolatory and smoothing, have a long and rich history that has largely been application driven. This book unifies these constructions in a comprehensive and accessible way, drawing from the latest methods and applications to show how they arise naturally in the theory of linear control systems. Magnus Egerstedt and Clyde Martin are leading innovators in the use of control theoretic splines to bring together many diverse applications within a common framework. In this book, they begin with a series of problems ranging from path planning to statistics to approximation.

  15. Estimating the input function non-invasively for FDG-PET quantification with multiple linear regression analysis: simulation and verification with in vivo data

    International Nuclear Information System (INIS)

    Fang, Yu-Hua; Kao, Tsair; Liu, Ren-Shyan; Wu, Liang-Chih

    2004-01-01

    A novel statistical method, namely Regression-Estimated Input Function (REIF), is proposed in this study for the purpose of non-invasive estimation of the input function for fluorine-18 2-fluoro-2-deoxy-d-glucose positron emission tomography (FDG-PET) quantitative analysis. We collected 44 patients who had undergone a blood sampling procedure during their FDG-PET scans. First, we generated tissue time-activity curves of the grey matter and the whole brain with a segmentation technique for every subject. Summations of different intervals of these two curves were used as a feature vector, which also included the net injection dose. Multiple linear regression analysis was then applied to find the correlation between the input function and the feature vector. After a simulation study with in vivo data, the data of 29 patients were applied to calculate the regression coefficients, which were then used to estimate the input functions of the other 15 subjects. Comparing the estimated input functions with the corresponding real input functions, the averaged error percentages of the area under the curve and the cerebral metabolic rate of glucose (CMRGlc) were 12.13±8.85 and 16.60±9.61, respectively. Regression analysis of the CMRGlc values derived from the real and estimated input functions revealed a high correlation (r=0.91). No significant difference was found between the real CMRGlc and that derived from our regression-estimated input function (Student's t test, P>0.05). The proposed REIF method demonstrated good abilities for input function and CMRGlc estimation, and represents a reliable replacement for the blood sampling procedures in FDG-PET quantification. (orig.)

  16. A cubic B-spline Galerkin approach for the numerical simulation of the GEW equation

    Directory of Open Access Journals (Sweden)

    S. Battal Gazi Karakoç

    2016-02-01

    Full Text Available The generalized equal width (GEW) wave equation is solved numerically by using a lumped Galerkin approach with cubic B-spline functions. The proposed numerical scheme is tested by applying it to two test problems: a single solitary wave and the interaction of two solitary waves. In order to determine the performance of the algorithm, the error norms L2 and L∞ and the invariants I1, I2 and I3 are calculated. For the linear stability analysis of the numerical algorithm, the von Neumann approach is used. The obtained findings show that the presented numerical scheme is preferable to some recent numerical methods.
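
    Galerkin schemes of this kind require evaluating the B-spline basis; a minimal Cox-de Boor recursion for that evaluation (an illustrative sketch under the usual half-open-interval convention, not the authors' code) is:

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th degree-k B-spline at t.

    Degree-0 splines are indicator functions of half-open knot spans;
    higher degrees blend two lower-degree splines with linear weights."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k + 1] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return left + right
```

    On a uniform knot vector the cubic (k = 3) basis functions sum to one wherever a full set of them overlaps, which is the partition-of-unity property Galerkin assembly relies on.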

  17. Gaussian quadrature rules for C 1 quintic splines with uniform knot vectors

    KAUST Repository

    Bartoň, Michael

    2017-03-21

    We provide explicit quadrature rules for spaces of C1 quintic splines with uniform knot sequences over finite domains. The quadrature nodes and weights are derived via an explicit recursion that avoids numerical solvers. Each rule is optimal, that is, requires the minimal number of nodes, for a given function space. For each of n subintervals, generically, only two nodes are required, which reduces the evaluation cost by 2/3 when compared to the classical Gaussian quadrature for polynomials over each knot span. Numerical experiments show fast convergence, as n grows, to the “two-third” quadrature rule of Hughes et al. (2010) for infinite domains.

  18. Steady-state solution of the PTC thermistor problem using a quadratic spline finite element method

    Directory of Open Access Journals (Sweden)

    Bahadir A. R.

    2002-01-01

    Full Text Available The problem of heat transfer in a Positive Temperature Coefficient (PTC) thermistor, which may form one element of an electric circuit, is solved numerically by a finite element method. The approach used is based on the Galerkin finite element method using quadratic splines as shape functions. The resulting system of ordinary differential equations is solved by the finite difference method. Comparison is made with numerical and analytical solutions, and the accuracy of the computed solutions indicates that the method is well suited for the solution of the PTC thermistor problem.

  19. Differential constraints for bounded recursive identification with multivariate splines

    NARCIS (Netherlands)

    De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2011-01-01

    The ability to perform online model identification for nonlinear systems with unknown dynamics is essential to any adaptive model-based control system. In this paper, a new differential equality constrained recursive least squares estimator for multivariate simplex splines is presented that is able

  20. Approximate Implicitization of Parametric Curves Using Cubic Algebraic Splines

    Directory of Open Access Journals (Sweden)

    Xiaolei Zhang

    2009-01-01

    Full Text Available This paper presents an algorithm to solve the approximate implicitization of planar parametric curves using cubic algebraic splines. It applies piecewise cubic algebraic curves to give a global G2 continuity approximation to planar parametric curves. Approximation error on approximate implicitization of rational curves is given. Several examples are provided to prove that the proposed method is flexible and efficient.

  1. Cubic spline approximation techniques for parameter estimation in distributed systems

    Science.gov (United States)

    Banks, H. T.; Crowley, J. M.; Kunisch, K.

    1983-01-01

    Approximation schemes employing cubic splines in the context of a linear semigroup framework are developed for both parabolic and hyperbolic second-order partial differential equation parameter estimation problems. Convergence results are established for problems with linear and nonlinear systems, and a summary of numerical experiments with the techniques proposed is given.

  2. Connecting the Dots Parametrically: An Alternative to Cubic Splines.

    Science.gov (United States)

    Hildebrand, Wilbur J.

    1990-01-01

    Discusses a method of cubic splines to determine a curve through a series of points and a second method for obtaining parametric equations for a smooth curve that passes through a sequence of points. Procedures for determining the curves and results of each of the methods are compared. (YP)

  3. Counterexamples to the B-spline Conjecture for Gabor Frames

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Nielsen, Kamilla Haahr

    2016-01-01

    The frame set conjecture for B-splines Bn, n≥2, states that the frame set is the maximal set that avoids the known obstructions. We show that any hyperbola of the form ab=r, where r is a rational number smaller than one and a and b denote the sampling and modulation rates, respectively, has infin...

  4. Kriging and thin plate splines for mapping climate variables

    NARCIS (Netherlands)

    Boer, E.P.J.; Beurs, de K.M.; Hartkamp, A.D.

    2001-01-01

    Four forms of kriging and three forms of thin plate splines are discussed in this paper to predict monthly maximum temperature and monthly mean precipitation in Jalisco State of Mexico. Results show that techniques using elevation as additional information improve the prediction results

  5. Fractal Regression

    Directory of Open Access Journals (Sweden)

    Igor K. Kochanenko

    2013-01-01

    Full Text Available Procedures for constructing a regression curve by the criterion of least fractals, i.e. the greatest probability of the sums of powers of the least deviations of measured intensities from their model values, are established. The exponent is defined as the fractal dimension of a time series. The difference between the results of the proposed method and those of the method of least squares is estimated quantitatively.

  6. Evaluation of the spline reconstruction technique for PET

    Energy Technology Data Exchange (ETDEWEB)

    Kastis, George A., E-mail: gkastis@academyofathens.gr; Kyriakopoulou, Dimitra [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Fernández, Yolanda [Centre d’Imatge Molecular Experimental (CIME), CETIR-ERESA, Barcelona 08950 (Spain); Hutton, Brian F. [Institute of Nuclear Medicine, University College London, London NW1 2BU (United Kingdom); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA (United Kingdom)

    2014-04-15

    Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point-source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an {sup 18}F-FDG injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an {sup 18}F-FDG injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real

  7. Comparison of K-Means Clustering with Linear Probability Model, Linear Discriminant Function, and Logistic Regression for Predicting Two-Group Membership.

    Science.gov (United States)

    So, Tak-Shing Harry; Peng, Chao-Ying Joanne

    This study compared the accuracy of predicting two-group membership obtained from K-means clustering with those derived from linear probability modeling, linear discriminant function, and logistic regression under various data properties. Multivariate normally distributed populations were simulated based on combinations of population proportions,…

  8. Binary Logistic Regression Analysis for Detecting Differential Item Functioning: Effectiveness of R[superscript 2] and Delta Log Odds Ratio Effect Size Measures

    Science.gov (United States)

    Hidalgo, Mª Dolores; Gómez-Benito, Juana; Zumbo, Bruno D.

    2014-01-01

    The authors analyze the effectiveness of the R[superscript 2] and delta log odds ratio effect size measures when using logistic regression analysis to detect differential item functioning (DIF) in dichotomous items. A simulation study was carried out, and the Type I error rate and power estimates under conditions in which only statistical testing…

  9. Three Statistical Testing Procedures in Logistic Regression: Their Performance in Differential Item Functioning (DIF) Investigation. Research Report. ETS RR-09-35

    Science.gov (United States)

    Paek, Insu

    2009-01-01

    Three statistical testing procedures well-known in the maximum likelihood approach are the Wald, likelihood ratio (LR), and score tests. Although well-known, the application of these three testing procedures in the logistic regression method to investigate differential item function (DIF) has not been rigorously made yet. Employing a variety of…

  10. Software Regression Verification

    Science.gov (United States)

    2013-12-11

    of recursive procedures. Acta Informatica, 45(6):403–439, 2008. [GS11] Benny Godlin and Ofer Strichman. Regression verification. Technical Report...functions. Therefore, we need to redefine m-term. – Mutual termination. If either function f or function f′ (or both) is non-deterministic, then their

  11. A novel knot selection method for the error-bounded B-spline curve fitting of sampling points in the measuring process

    International Nuclear Information System (INIS)

    Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng

    2017-01-01

    The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of the initial knot vector has been associated with the ultimately necessary number of knots. This paper provides a novel initial knot selection method to condense the knot vector required for the error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, which include the chord length (arc length) and bending degree (curvature) contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with an intensively uniform knot vector to substitute the description of the features of the sampling points. The feature integral of Gs is built as a monotone increasing function in an analytic form. Then, the initial knots are selected according to constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of knots ultimately required. (paper)
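
    The idea of placing initial knots at constant increments of a monotone feature integral (chord length plus bending) can be sketched as follows; the discrete turning-angle curvature measure and the weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def initial_knots(points, n_knots, alpha=1.0):
    """Select parameter locations for initial knots so that each interval
    carries an equal share of a feature integral combining chord length
    and bending, in the spirit of the knot-condensation idea above."""
    p = np.asarray(points, dtype=float)
    seg = np.diff(p, axis=0)
    ds = np.linalg.norm(seg, axis=1)             # chord lengths
    t = np.concatenate(([0.0], np.cumsum(ds)))   # chord-length parameter
    # turning angle at interior points as a discrete bending measure
    u = seg / ds[:, None]
    ang = np.zeros(len(p))
    ang[1:-1] = np.arccos(np.clip((u[:-1] * u[1:]).sum(axis=1), -1.0, 1.0))
    feature = ds + alpha * 0.5 * (ang[:-1] + ang[1:])   # per-segment feature
    F = np.concatenate(([0.0], np.cumsum(feature)))     # monotone feature integral
    targets = np.linspace(0.0, F[-1], n_knots)
    return np.interp(targets, F, t)              # invert F(t) by interpolation
```

    On a straight polyline the bending term vanishes and the knots degenerate to equal arc-length spacing; curvature concentrates knots in bent regions.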

  12. B-spline solver for one-electron Schrödinger equation

    Science.gov (United States)

    Romanowski, Zbigniew

    2011-11-01

    A numerical algorithm for solving the one-electron Schrödinger equation is presented. The algorithm is based on the Finite Element method, and the basis functions are tensor products of univariate B-splines. The application of cubic or higher order B-splines guarantees that the searched solution belongs to a space of continuous, once-differentiable functions, which is a desirable property in the Kohn-Sham equation context from the Density Functional Theory with pseudopotential approximation. The theoretical background of the numerical algorithm is presented, and additionally, the implementation on parallel computers with distributed memory is described. The current implementation of the algorithm uses the MPI, HYPRE and ParMETIS libraries to distribute matrices on processing units. Additionally, the LOBPCG algorithm from the HYPRE library is used to solve the algebraic generalized eigenvalue problem. The proposed algorithm works for any smooth interaction potential, where the domain of the problem is a finite subspace of the ℝ3 space. The accuracy of the algorithm is demonstrated for a selected interaction potential. In the current stage, the algorithm can be applied to solve the linearized Kohn-Sham equation for molecular systems.

  13. Fitting Cox Models with Doubly Censored Data Using Spline-Based Sieve Marginal Likelihood

    Science.gov (United States)

    Li, Zhiguo; Owzar, Kouros

    2015-01-01

    In some applications, the failure time of interest is the time from an originating event to a failure event, while both event times are interval censored. We propose fitting Cox proportional hazards models to this type of data using a spline-based sieve maximum marginal likelihood, where the time to the originating event is integrated out in the empirical likelihood function of the failure time of interest. This greatly reduces the complexity of the objective function compared with the fully semiparametric likelihood. The dependence of the time of interest on time to the originating event is induced by including the latter as a covariate in the proportional hazards model for the failure time of interest. The use of splines results in a higher rate of convergence of the estimator of the baseline hazard function compared with the usual nonparametric estimator. The computation of the estimator is facilitated by a multiple imputation approach. Asymptotic theory is established and a simulation study is conducted to assess its finite sample performance. It is also applied to analyzing a real data set on AIDS incubation time. PMID:27239090

  14. Investigation of electron and hydrogenic-donor states confined in a permeable spherical box using B-splines

    Directory of Open Access Journals (Sweden)

    T Nikbakht

    2012-12-01

    Full Text Available   Effects of quantum size and potential shape on the spectra of an electron and a hydrogenic-donor at the center of a permeable spherical cavity have been calculated, using linear variational method. B-splines have been used as basis functions. By extensive convergence tests and comparing with other results given in the literature, the validity and efficiency of the method were confirmed.

  15. Statistical modelling for precision agriculture: A case study in optimal environmental schedules for Agaricus Bisporus production via variable domain functional regression.

    Science.gov (United States)

    Panayi, Efstathios; Peters, Gareth W; Kyriakides, George

    2017-01-01

    Quantifying the effects of environmental factors over the duration of the growing process on Agaricus Bisporus (button mushroom) yields has been difficult, as common functional data analysis approaches require fixed length functional data. The data available from commercial growers, however, is of variable duration, due to commercial considerations. We employ a recently proposed regression technique termed Variable-Domain Functional Regression in order to be able to accommodate these irregular-length datasets. In this way, we are able to quantify the contribution of covariates such as temperature, humidity and water spraying volumes across the growing process, and for different lengths of growing processes. Our results indicate that optimal oxygen and temperature levels vary across the growing cycle and we propose environmental schedules for these covariates to optimise overall yields.

  16. Predicting risk for portal vein thrombosis in acute pancreatitis patients: A comparison of radial basis function artificial neural network and logistic regression models.

    Science.gov (United States)

    Fei, Yang; Hu, Jian; Gao, Kun; Tu, Jianfeng; Li, Wei-Qin; Wang, Wei

    2017-06-01

    To construct a radial basis function (RBF) artificial neural network (ANN) model to predict the incidence of acute pancreatitis (AP)-induced portal vein thrombosis (PVT). The analysis included 353 patients with AP who had been admitted between January 2011 and December 2015. An RBF ANN model and a logistic regression model were each constructed based on eleven factors relevant to AP. Statistical indexes were used to evaluate the predictive value of the two models. The sensitivity, specificity, positive predictive value, negative predictive value and accuracy of the RBF ANN model for PVT were 73.3%, 91.4%, 68.8%, 93.0% and 87.7%, respectively. There were significant differences between the RBF ANN and logistic regression models in these parameters (P < 0.05), with the RBF ANN model outperforming the logistic regression model. D-dimer, AMY, Hct and PT were important prediction factors of AP-induced PVT. Copyright © 2017 Elsevier Inc. All rights reserved.
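
    A minimal RBF-network classifier of the general kind compared in this study can be written in plain NumPy; the Gaussian width, the random choice of centers, and the ridge-regularized linear read-out below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Gaussian radial-basis features exp(-gamma * ||x - c||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_rbf_classifier(X, y, n_centers=40, gamma=1.0, ridge=1e-6, seed=0):
    """RBF network: hidden layer of Gaussian units at sampled centers,
    ridge-regularized linear output layer fit to 0/1 labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    Phi = rbf_features(X, centers, gamma)
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centers), Phi.T @ y)
    return centers, w

def predict_rbf(X, centers, w, gamma=1.0):
    return (rbf_features(X, centers, gamma) @ w > 0.5).astype(int)
```

    On data that are not linearly separable (e.g. two concentric rings) such a network succeeds where a plain logistic regression on the raw coordinates cannot, which is the kind of flexibility the comparison above exploits.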

  17. Iteratively re-weighted bi-cubic spline representation of corneal topography and its comparison to the standard methods.

    Science.gov (United States)

    Zhu, Zhongxia; Janunts, Edgar; Eppig, Timo; Sauer, Tomas; Langenbucher, Achim

    2010-01-01

    The aim of this study is to represent the corneal anterior surface by utilizing radius and height data extracted from a TMS-2N topographic system with three different mathematical approaches and to simulate the visual performance. An iteratively re-weighted bi-cubic spline method is introduced for the local representation of the corneal surface. For comparison, two standard mathematical global representation approaches are used: the general quadratic function and the higher order Taylor polynomial approach. First, these methods were applied in simulations using three corneal models. Then, two real eye examples were investigated: one eye with regular astigmatism, and one eye which had undergone refractive surgery. A ray-tracing program was developed to evaluate the imaging performance of these examples with each surface representation strategy at the best focus plane. A 6 mm pupil size was chosen for the simulation. The fitting error (deviation) of the presented methods was compared. It was found that the accuracy of the topography representation was worst using the quadratic function and best with bicubic spline. The quadratic function cannot precisely describe the irregular corneal shape. In order to achieve a sub-micron fitting precision, the Taylor polynomial's order selection behaves adaptive to the corneal shape. The bi-cubic spline shows more stable performance. Considering the visual performance, the more precise the cornea representation is, the worse the visual performance is. The re-weighted bi-cubic spline method is a reasonable and stable method for representing the anterior corneal surface in measurements using a Placido-ring-pattern-based corneal topographer. Copyright © 2010. Published by Elsevier GmbH.

  18. Multivariate Epi-splines and Evolving Function Identification Problems

    Science.gov (United States)

    2015-04-15

    that ∪_{k=1}^{N} cl R_k = B avoids wasting computational effort on uninteresting parts of IR^n. In general, the specific choice of R is guided by the...280(1):1–41, 1983. [3] O. Barndorff-Nielsen. Information and Exponential Families in Statistical Theory. Wiley, 1978. [4] G. Beer. Topologies on Closed

  19. The nuisance of nuisance regression: spectral misspecification in a common approach to resting-state fMRI preprocessing reintroduces noise and obscures functional connectivity.

    Science.gov (United States)

    Hallquist, Michael N; Hwang, Kai; Luna, Beatriz

    2013-11-15

    Recent resting-state functional connectivity fMRI (RS-fcMRI) research has demonstrated that head motion during fMRI acquisition systematically influences connectivity estimates despite bandpass filtering and nuisance regression, which are intended to reduce such nuisance variability. We provide evidence that the effects of head motion and other nuisance signals are poorly controlled when the fMRI time series are bandpass-filtered but the regressors are unfiltered, resulting in the inadvertent reintroduction of nuisance-related variation into frequencies previously suppressed by the bandpass filter, as well as suboptimal correction for noise signals in the frequencies of interest. This is important because many RS-fcMRI studies, including some focusing on motion-related artifacts, have applied this approach. In two cohorts of individuals (n=117 and 22) who completed resting-state fMRI scans, we found that the bandpass-regress approach consistently overestimated functional connectivity across the brain, typically on the order of r=.10-.35, relative to a simultaneous bandpass filtering and nuisance regression approach. Inflated correlations under the bandpass-regress approach were associated with head motion and cardiac artifacts. Furthermore, distance-related differences in the association of head motion and connectivity estimates were much weaker for the simultaneous filtering approach. We recommend that future RS-fcMRI studies ensure that the frequencies of nuisance regressors and fMRI data match prior to nuisance regression, and we advocate a simultaneous bandpass filtering and nuisance regression strategy that better controls nuisance-related variability. Copyright © 2013 Elsevier Inc. All rights reserved.
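
    The core recommendation, that nuisance regressors be passed through the same filter as the data before regression, can be demonstrated with a toy NumPy simulation; an ideal FFT low-pass stands in for the band-pass filter, and all signals are synthetic.

```python
import numpy as np

def lowpass(x, cutoff, fs):
    """Ideal FFT low-pass: zero all frequency bins above `cutoff` Hz."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[f > cutoff] = 0.0
    return np.fft.irfft(X, len(x))

def regress_out(d, r):
    """Remove the best least-squares fit of regressor r from d."""
    beta = (d @ r) / (r @ r)
    return d - beta * r

def out_of_band_power(x, cutoff, fs):
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    return (np.abs(X[f > cutoff]) ** 2).sum()

fs, n = 2.0, 1024
t = np.arange(n) / fs
rng = np.random.default_rng(0)
neural = np.sin(2 * np.pi * 0.05 * t)       # in-band signal of interest
motion = rng.normal(size=n)                 # broadband nuisance
data = lowpass(neural + motion, 0.1, fs)    # band-limited, fMRI-like data

resid_bad = regress_out(data, motion)                     # unfiltered regressor
resid_good = regress_out(data, lowpass(motion, 0.1, fs))  # matched filtering
```

    `resid_bad` regains out-of-band power on the order of beta^2 times the regressor's out-of-band energy, exactly the reintroduced noise the abstract describes, while `resid_good` stays band-limited to numerical precision.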

  20. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation.

    Science.gov (United States)

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating "unusual" populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php.

  1. Data reduction using cubic rational B-splines

    Science.gov (United States)

    Chou, Jin J.; Piegl, Les A.

    1992-01-01

    A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and if it is impossible it subdivides the data set and reconsiders the subset. After accepting the subset the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a Bezier cubic segment. The algorithm uses this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.

  2. Monotonicity preserving splines using rational cubic Timmer interpolation

    Science.gov (United States)

    Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md

    2017-08-01

    In scientific applications and Computer Aided Design (CAD), users usually need to generate a spline passing through a given set of data which preserves certain shape properties of the data, such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subjected to monotonicity constraints. The necessary and sufficient conditions for the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.

  3. High-order numerical solutions using cubic splines

    Science.gov (United States)

    Rubin, S. G.; Khosla, P. K.

    1975-01-01

    The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. Application of the technique was made to Burgers' equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions for more practical flow problems can be obtained with relatively coarse nonuniform meshes.

  4. Isotherms and thermodynamics by linear and non-linear regression analysis for the sorption of methylene blue onto activated carbon: Comparison of various error functions

    International Nuclear Information System (INIS)

    Kumar, K. Vasanth; Porkodi, K.; Rocha, F.

    2008-01-01

    A comparison of the linear and non-linear regression methods in selecting the optimum isotherm was made for the experimental equilibrium data of methylene blue sorption by activated carbon. The r 2 was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r 2 ), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two and three parameter isotherms and also to predict the optimum isotherm. For the two parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three parameter isotherms, r 2 was found to be the best error function to minimize the error distribution between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K 2 , was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm
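
    The error functions listed above have standard forms in the sorption literature; the NumPy sketch below (with n data points and p isotherm parameters) follows those common forms, which should be checked against the paper itself before reuse.

```python
import numpy as np

def isotherm_errors(q_exp, q_calc, p):
    """Common error functions for non-linear isotherm fitting.

    q_exp, q_calc: experimental and model-predicted uptakes;
    p: number of isotherm parameters."""
    q_exp = np.asarray(q_exp, dtype=float)
    q_calc = np.asarray(q_calc, dtype=float)
    n = len(q_exp)
    e = q_exp - q_calc
    ss_res = (e ** 2).sum()
    ss_tot = ((q_exp - q_exp.mean()) ** 2).sum()
    return {
        "r2": 1.0 - ss_res / ss_tot,                          # coefficient of determination
        "ERRSQ": ss_res,                                      # sum of squared errors
        "HYBRID": 100.0 / (n - p) * (e ** 2 / q_exp).sum(),   # hybrid fractional error
        "MPSD": 100.0 * np.sqrt(((e / q_exp) ** 2).sum() / (n - p)),
        "ARE": 100.0 / n * np.abs(e / q_exp).sum(),           # average relative error
        "EABS": np.abs(e).sum(),                              # sum of absolute errors
    }
```

    Because HYBRID, MPSD and ARE normalize by the experimental values, they weight low-uptake points differently than ERRSQ and EABS, which is why the various functions can favor different isotherms.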

  5. Multivariable Truncated Spline Modeling of National Examination Scores in West Lombok Regency

    Directory of Open Access Journals (Sweden)

    Nurul Fitriyani

    2017-12-01

    Full Text Available A regression model is used to analyze the relationship between a dependent variable and independent variables. If the form of the regression curve is not known, the curve can be estimated by a nonparametric regression approach. This study investigated the relationship between National Examination scores and the factors that influence them, using a multivariable truncated spline. The analysis showed that the best model was obtained using three knot points; this model produced a minimum GCV value of 44.46 and a coefficient of determination of 58.627%. The parameter tests showed that all of the factors considered significantly influenced the National Examination scores of senior high school students in West Lombok Regency in 2017. The variables were as follows: National Examination score in junior high school; school or madrasah examination score; the student's report card grades; the distance from the student's house to school; and the number of the student's siblings.
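
    A one-variable sketch of truncated-spline regression under assumed synthetic data (the study's multivariable model and actual knot locations are not reproduced): each basis function max(x − k, 0) adds a slope change at a knot, and GCV is computed in its simple ordinary-least-squares form:

```python
import numpy as np

# Truncated power basis for a linear spline with knots k_1..k_K:
# f(x) = b0 + b1*x + sum_j c_j * max(x - k_j, 0)
def truncated_design(x, knots):
    cols = [np.ones_like(x), x]
    cols += [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 80))
# Piecewise-linear truth with a kink at x = 4, plus noise
y = np.where(x < 4.0, 2.0 + 0.5 * x, 4.0 - 0.8 * (x - 4.0))
y = y + rng.normal(0.0, 0.2, x.size)

knots = (3.0, 4.0, 5.0)                    # three knot points, as in the study
X = truncated_design(x, knots)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ beta

# Generalized cross-validation for OLS: GCV = n*RSS / (n - tr(H))^2,
# where tr(H) equals the number of basis functions
n = x.size
rss = np.sum((y - yhat) ** 2)
gcv = n * rss / (n - X.shape[1]) ** 2
```

    In practice the knot locations are chosen by minimizing GCV over candidate knot sets, which is how the study arrives at its three-knot model.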

  6. Solving the nonlinear Schrödinger equation using cubic B-spline interpolation and finite difference methods

    Science.gov (United States)

    Ahmad, Azhar; Azmi, Amirah; Majid, Ahmad Abd.; Hamid, Nur Nadiah Abd

    2017-08-01

    In this paper, the Nonlinear Schrödinger (NLS) equation with Neumann boundary conditions is solved using the finite difference method (FDM) and the cubic B-spline interpolation method (CuBSIM). The first approach is based on the FDM applied to the time and space discretizations with the help of a theta-weighted method. Our main interest, however, is the second approach, in which the FDM is applied to the time discretization while a cubic B-spline is utilized as the interpolation function in the space dimension, again with the theta-weighted method. The CuBSIM is shown to be stable by von Neumann stability analysis. The proposed method is tested on a problem involving single-soliton motion of the NLS equation. The accuracy of the numerical results is measured by the Euclidean norm and the infinity norm. CuBSIM is found to produce more accurate results than the FDM.

  7. Smooth ROC curves and surfaces for markers subject to a limit of detection using monotone natural cubic splines.

    Science.gov (United States)

    Bantis, Leonidas E; Tsimikas, John V; Georgiou, Stelios D

    2013-09-01

    The use of ROC curves in evaluating a continuous or ordinal biomarker for the discrimination of two populations is commonplace. However, in many settings, marker measurements above or below a certain value cannot be obtained. In this paper, we study the construction of a smooth ROC curve (or surface in the case of three populations) when there is a lower or upper limit of detection. We propose the use of spline models that incorporate monotonicity constraints for the cumulative hazard function of the marker distribution. The proposed technique is computationally stable and simulation results showed a satisfactory performance. Other observed covariates can be also accommodated by this spline-based approach. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Numerical simulation of two dimensional sine-Gordon solitons using modified cubic B-spline differential quadrature method

    Directory of Open Access Journals (Sweden)

    H. S. Shukla

    2015-01-01

    Full Text Available In this paper, a modified cubic B-spline differential quadrature method (MCB-DQM) is employed for the numerical simulation of the two-space-dimensional nonlinear sine-Gordon equation with appropriate initial and boundary conditions. The modified cubic B-spline works as a basis function in the differential quadrature method to compute the weighting coefficients. Accordingly, the two-dimensional sine-Gordon equation is transformed into a system of second-order ordinary differential equations (ODEs). The resultant system of ODEs is solved by employing an optimal five-stage, fourth-order strong stability preserving Runge-Kutta scheme (SSP-RK54). Numerical simulation is discussed for both damped and undamped cases. Computational results are found to be in good agreement with the exact solution and other numerical results available in the literature.

  9. Cubic Splines for Trachea and Bronchial Tubes Grid Generation

    Directory of Open Access Journals (Sweden)

    Eliandro Rodrigues Cirilo

    2006-02-01

    Full Text Available Grid generation plays an important role in the development of efficient numerical techniques for solving complex flows. The present work therefore develops a method for two-dimensional block-structured grid generation for geometries such as the trachea and bronchial tubes. A set of 55 blocks completes the geometry, whose contours are defined by cubic splines. Moreover, this technique builds on earlier ones through its simplicity and efficiency in generating grids for very complex geometries.

  10. Uncertainty Quantification using Epi-Splines and Soft Information

    Science.gov (United States)

    2012-06-01

    prediction of the behavior of constructed models of phenomena in physics, biology, chemistry, ecology, engineered systems, politics, etc. ... Results ... spline framework being applied to one of the most common, yet most complex, systems known: the human body. Chapter 5 concludes the thesis by ... complex a system known to man than that of the human body. The number of variables impacting the performance of one human over another in a given

  11. Numerical simulation of Burgers' equation using cubic B-splines

    Science.gov (United States)

    Lakshmi, C.; Awasthi, Ashish

    2017-03-01

    In this paper, a numerical θ scheme is proposed for solving the nonlinear Burgers' equation. By employing the Hopf-Cole transformation, the nonlinear Burgers' equation is linearized to the linear heat equation. The resulting heat equation is then solved using cubic B-splines. The time discretization of the linear heat equation is carried out using the Crank-Nicolson scheme (θ = 1/2) as well as the backward Euler scheme (θ = 1). Accuracy in the temporal direction is improved by Richardson extrapolation, so the method possesses fourth-order accuracy in both space and time. The matrix system that arises from the cubic splines is always tridiagonal, so working with splines has the advantage of reduced computational cost and easy implementation. Stability of the schemes is discussed in detail and both are shown to be unconditionally stable. Three examples have been examined and the L2 and L∞ error norms calculated to establish the performance of the method. The numerical results obtained with this method are shown to be more accurate than the existing results of Kutluay et al. [1], Ozis et al. [2], Dag et al. [3], Salkuyeh et al. [4] and Korkmaz et al. [5].
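
    The time-stepping idea can be sketched on the heat equation that results from the Hopf-Cole transformation. The example below is a deliberate simplification: it uses plain second-order finite differences in space rather than the paper's cubic B-splines, with the Crank-Nicolson (θ = 1/2) scheme in time and homogeneous Dirichlet boundaries:

```python
import numpy as np

# Crank-Nicolson (theta = 1/2) for u_t = nu * u_xx on [0, 1], u = 0 at both ends
nu, nx, nt, T = 1.0, 51, 200, 0.1
x = np.linspace(0.0, 1.0, nx)
dx, dt = x[1] - x[0], T / nt
u = np.sin(np.pi * x)                        # initial condition with known exact solution

r = nu * dt / dx**2
main = np.full(nx - 2, 1.0 + r)              # implicit (left-hand) tridiagonal matrix
off = np.full(nx - 3, -r / 2.0)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

for _ in range(nt):
    interior = u[1:-1]
    # Explicit half: u^n + (r/2) * second difference of u^n
    rhs = interior + (r / 2.0) * (u[2:] - 2.0 * interior + u[:-2])
    u[1:-1] = np.linalg.solve(A, rhs)

exact = np.exp(-nu * np.pi**2 * T) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
```

    Replacing the second-difference operator with cubic-B-spline collocation leaves the tridiagonal structure intact, which is the computational advantage the abstract points out.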

  12. The Nuisance of Nuisance Regression: Spectral Misspecification in a Common Approach to Resting-State fMRI Preprocessing Reintroduces Noise and Obscures Functional Connectivity

    OpenAIRE

    Hallquist, Michael N.; Hwang, Kai; Luna, Beatriz

    2013-01-01

    Recent resting-state functional connectivity fMRI (RS-fcMRI) research has demonstrated that head motion during fMRI acquisition systematically influences connectivity estimates despite bandpass filtering and nuisance regression, which are intended to reduce such nuisance variability. We provide evidence that the effects of head motion and other nuisance signals are poorly controlled when the fMRI time series are bandpass-filtered but the regressors are unfiltered, resulting in the inadvertent...

  13. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  14. The estimation of time-varying risks in asset pricing modelling using B-Spline method

    Science.gov (United States)

    Nurjannah; Solimun; Rinaldo, Adji

    2017-12-01

    Asset pricing modelling has been studied extensively in the past few decades to explore the risk-return relationship. The asset pricing literature typically assumes a static risk-return relationship, but several studies have found anomalies in asset pricing modelling that capture the presence of risk instability, and dynamic models have been proposed to offer a better fit. The main problem highlighted in the dynamic-model literature is that the set of conditioning information is unobservable, so assumptions have to be made and the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can be used as an alternative for estimating risk; the flexibility of the nonparametric setting avoids the misspecification that can result from selecting a functional form. This paper investigates the estimation of time-varying asset pricing models using the B-spline, a nonparametric approach. The advantages of the spline method are its computational speed and simplicity, as well as direct control of curvature. Three popular asset pricing models are investigated, namely the CAPM (Capital Asset Pricing Model), the Fama-French 3-factor model and the Carhart 4-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk-instability anomaly. The effect is most pronounced in Carhart's 4-factor model.
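
    A sketch of a time-varying beta estimated with a cubic B-spline basis, under assumptions: synthetic returns and a single-factor (CAPM-style) model stand in for the three models studied. The exposure beta(t) is expanded in B-splines and the coefficients are recovered by least squares:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(7)
n = 300
time = np.linspace(0.0, 1.0, n)
r_m = rng.normal(0.0, 1.0, n)                      # market excess returns
beta_true = 0.8 + 0.6 * np.sin(2 * np.pi * time)   # slowly varying risk exposure
r_i = beta_true * r_m + rng.normal(0.0, 0.3, n)    # asset excess returns

# Cubic B-spline basis on an open uniform knot vector over [0, 1]
k = 3
interior = np.linspace(0.0, 1.0, 8)
knots = np.concatenate((np.repeat(0.0, k), interior, np.repeat(1.0, k)))
nbasis = len(knots) - k - 1
B = np.column_stack([
    BSpline(knots, np.eye(nbasis)[j], k)(time) for j in range(nbasis)
])

# beta(t) = sum_j c_j B_j(t): least squares on regressors B_j(t) * r_m(t)
X = B * r_m[:, None]
coef, *_ = np.linalg.lstsq(X, r_i, rcond=None)
beta_hat = B @ coef                                # fitted time-varying beta
```

    With a static-beta regression the sinusoidal variation in exposure would be averaged away; the spline expansion recovers it, which is the kind of risk instability the study documents.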

  15. A spectral/B-spline method for the Navier-Stokes equations in unbounded domains

    International Nuclear Information System (INIS)

    Dufresne, L.; Dumas, G.

    2003-01-01

    The numerical method presented in this paper aims at solving the incompressible Navier-Stokes equations in unbounded domains. The problem is formulated in cylindrical coordinates and the method is based on a Galerkin approximation scheme that makes use of vector expansions that exactly satisfy the continuity constraint. More specifically, the divergence-free basis vector functions are constructed with Fourier expansions in the θ and z directions while mapped B-splines are used in the semi-infinite radial direction. Special care has been taken to account for the particular analytical behaviors at both end points r=0 and r→∞. A modal reduction algorithm has also been implemented in the azimuthal direction, allowing for a relaxation of the CFL constraint on the timestep size and a possibly significant reduction of the number of DOF. The time marching is carried out using a mixed quasi-third order scheme. Besides the advantages of a divergence-free formulation and a quasi-spectral convergence, the local character of the B-splines allows for a great flexibility in node positioning while keeping narrow bandwidth matrices. Numerical tests show that the present method compares advantageously with other similar methodologies using purely global expansions

  16. Segmentation of ultrasound images of the carotid using RANSAC and cubic splines.

    Science.gov (United States)

    Rocha, Rui; Campilho, Aurélio; Silva, Jorge; Azevedo, Elsa; Santos, Rosa

    2011-01-01

    A new algorithm is proposed for the semi-automatic segmentation of the near-end and the far-end adventitia boundary of the common carotid artery in ultrasound images. It uses the random sample consensus method to estimate the most significant cubic splines fitting the edge map of a longitudinal section. The consensus of the geometric model (a spline) is evaluated through a new gain function, which integrates the responses to different discriminating features of the carotid boundary: the proximity of the geometric model to any edge or to valley shaped edges; the consistency between the orientation of the normal to the geometric model and the intensity gradient; and the distance to a rough estimate of the lumen boundary. A set of 50 longitudinal B-mode images of the common carotid and their manual segmentations performed by two medical experts were used to assess the performance of the method. The image set was taken from 25 different subjects, most of them having plaques of different classes (class II to class IV), sizes and shapes. The quantitative evaluation showed promising results, having detection errors similar to the ones observed in manual segmentations for 95% of the far-end boundaries and 73% of the near-end boundaries. 2011 Elsevier Ireland Ltd. All rights reserved.
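
    The RANSAC idea can be sketched with a simple stand-in: fitting a cubic polynomial (instead of the paper's cubic splines and multi-feature gain function) to synthetic edge points contaminated with outliers, scoring each candidate model by a plain inlier count:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "edge map": points along a cubic boundary plus gross outliers
x = np.linspace(0.0, 10.0, 120)
y = 0.05 * (x - 5.0) ** 3 + rng.normal(0.0, 0.1, x.size)
bad = rng.choice(x.size, 25, replace=False)
y[bad] += rng.uniform(2.0, 6.0, bad.size) * rng.choice([-1.0, 1.0], bad.size)

# RANSAC: fit to a minimal random sample, keep the model with the
# largest consensus set (inlier count stands in for the paper's gain function)
best_count, best_mask = 0, None
for _ in range(300):
    idx = rng.choice(x.size, 4, replace=False)       # minimal sample for a cubic
    coeffs = np.polyfit(x[idx], y[idx], 3)
    mask = np.abs(np.polyval(coeffs, x) - y) < 0.3   # consensus threshold
    if mask.sum() > best_count:
        best_count, best_mask = mask.sum(), mask

# Refit on the consensus set only
final = np.polyfit(x[best_mask], y[best_mask], 3)
```

    The paper replaces the plain inlier count with a gain function combining edge proximity, gradient orientation and lumen distance, but the sample-score-refit loop is the same.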

  17. Meta-analysis and meta-regression of hypothalamic-pituitary-adrenal axis activity in functional somatic disorders

    NARCIS (Netherlands)

    Tak, Lineke M.; Cleare, Anthony J.; Ormel, Johan; Manoharan, Andiappan; Kok, Iris C.; Wessely, Simon; Rosmalen, Judith G. M.

    Dysfunction of the hypothalamic-pituitary-adrenal (HPA) axis is the most investigated biological risk marker in functional somatic disorders (FSDs), such as chronic fatigue syndrome (CFS), fibromyalgia (FM), and irritable bowel syndrome (IBS). Our aim was to assess whether there is an association

  18. Determination of regression functions for the charging and discharging processes of valve regulated lead-acid batteries

    OpenAIRE

    Vukić, Vladimir Đ.

    2012-01-01

    Following a deep discharge of AGM SVT 300 valve-regulated lead-acid batteries using the ten-hour discharge current, the batteries were charged using a variable current. In accordance with the obtained results, exponential and polynomial functions for the approximation of the specified processes were analyzed. The main instrument for evaluating the quality of the implemented approximations was the adjusted coefficient of determination R². It was perceived that the battery discharge process migh...

  19. Obtaining and projecting mortality tables using spline curves

    Directory of Open Access Journals (Sweden)

    Alejandro MINA-VALDÉS

    2011-01-01

    Full Text Available One of the tools of numerical analysis is the use of nth-order polynomials to interpolate between n + 1 points, and there are cases in which these polynomial functions can lead to erroneous results. An alternative is to apply lower-order polynomials to subsets of the data; these connected polynomials are called spline functions. This article presents the tools that numerical analysis provides as the technical instruments needed to carry out the existing mathematical procedures by means of algorithms that allow their simulation or calculation, in particular piecewise spline functions and interpolation with them. Spline curves are fitted to the series of survivors lx of an abridged Mexican mortality table in order to disaggregate it into single ages, respecting the concavities that the Mexican experience shows as an effect of mortality at the earliest ages and those that follow. Spline curves are also used in simulations that yield future scenarios of the survivor series lx, giving rise to projections of Mexican mortality for the years 2010-2050; these generate complete mortality tables for men and women for that period, highlighting the differences between sexes and ages in survival probabilities and the gains in life expectancy.
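
    A minimal sketch of the disaggregation step, with invented l(x) values and SciPy's shape-preserving PCHIP spline standing in for the spline fitting described in the article; monotone interpolation keeps the survivor curve non-increasing across single ages:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Abridged life-table survivors l(x) at standard ages (synthetic values)
age = np.array([0, 1, 5, 10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
lx = np.array([100000, 98500, 98100, 97900, 97400, 96500,
               95000, 92000, 86000, 74000, 52000], dtype=float)

# A shape-preserving (monotone) cubic spline avoids the oscillations that a
# single high-order interpolating polynomial would introduce between the nodes
spline = PchipInterpolator(age, lx)

single_ages = np.arange(0, 81)          # disaggregate to single years of age
lx_single = spline(single_ages)
```

    Preserving monotonicity matters here because any interpolated increase in l(x) would imply resurrections in the life table; the concavity at the youngest ages is likewise respected by the shape-preserving fit.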

  20. The impact of age on the postoperative response of the diastolic function and left ventricular mass regression after surgical or transcatheter aortic valve replacement for severe aortic stenosis.

    Science.gov (United States)

    Nakamura, Teruya; Toda, Koichi; Kuratani, Toru; Miyagawa, Shigeru; Yoshikawa, Yasushi; Fukushima, Satsuki; Saito, Shunsuke; Sawa, Yoshiki

    2017-06-01

    We examined the impact of advanced age on left ventricular mass regression and on the change in diastolic function after aortic valve replacement in patients with aortic stenosis. The study included 129 patients who underwent either surgical or transcatheter aortic valve replacement and 1-year postoperative echocardiography. Patient characteristics and echocardiographic findings were compared between younger patients (group Y) and older patients (group O). Left ventricular mass regression was significantly greater (p = 0.02) and diastolic dysfunction was less prevalent (p = 0.02) in group Y than in group O. The change in E/e' was significantly correlated with left ventricular mass regression in group Y (p = 0.02), but not in group O (p = 0.21). Patients in group O were less susceptible to improvements in myocardial remodeling and diastolic function than those in group Y. The altered physiological response to aortic valve replacement might help to determine the appropriate timing of surgery in elderly patients.

  1. Full-turn symplectic map from a generator in a Fourier-spline basis

    International Nuclear Information System (INIS)

    Berg, J.S.; Warnock, R.L.; Ruth, R.D.; Forest, E.

    1993-04-01

    Given an arbitrary symplectic tracking code, one can construct a full-turn symplectic map that approximates the result of the code to high accuracy. The map is defined implicitly by a mixed-variable generating function. The implicit definition is no great drawback in practice, thanks to an efficient use of Newton's method to solve for the explicit map at each iteration. The generator is represented by a Fourier series in angle variables, with coefficients given as B-spline functions of action variables. It is constructed by using results of single-turn tracking from many initial conditions. The method has been applied to a realistic model of the SSC in three degrees of freedom. Orbits can be mapped symplectically for 10⁷ turns on an IBM RS6000 model 320 workstation, in a run of about one day.

  2. A numerical solution of the linear Boltzmann equation using cubic B-splines.

    Science.gov (United States)

    Khurana, Saheba; Thachuk, Mark

    2012-03-07

    A numerical method using cubic B-splines is presented for solving the linear Boltzmann equation. The collision kernel for the system is chosen as the Wigner-Wilkins kernel. A total of three different representations for the distribution function are presented. Eigenvalues and eigenfunctions of the collision matrix are obtained for various mass ratios and compared with known values. Distribution functions, along with first and second moments, are evaluated for different mass and temperature ratios. Overall it is shown that the method is accurate and well behaved. In particular, moments can be predicted with very few points if the representation is chosen well. This method produces sparse matrices, can be easily generalized to higher dimensions, and can be cast into efficient parallel algorithms. © 2012 American Institute of Physics

  3. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  4. Thin-plate spline analysis of mandibular growth.

    Science.gov (United States)

    Franchi, L; Baccetti, T; McNamara, J A

    2001-04-01

    The analysis of mandibular growth changes around the pubertal spurt in humans has several important implications for the diagnosis and orthopedic correction of skeletal disharmonies. The purpose of this study was to evaluate mandibular shape and size growth changes around the pubertal spurt in a longitudinal sample of subjects with normal occlusion by means of an appropriate morphometric technique (thin-plate spline analysis). Ten mandibular landmarks were identified on lateral cephalograms of 29 subjects at 6 different developmental phases. The 6 phases corresponded to 6 different maturational stages in cervical vertebrae during accelerative and decelerative phases of the pubertal growth curve of the mandible. Differences in shape between average mandibular configurations at the 6 developmental stages were visualized by means of thin-plate spline analysis and subjected to permutation test. Centroid size was used as the measure of the geometric size of each mandibular specimen. Differences in size at the 6 developmental phases were tested statistically. The results of graphical analysis indicated a statistically significant change in mandibular shape only for the growth interval from stage 3 to stage 4 in cervical vertebral maturation. Significant increases in centroid size were found at all developmental phases, with evidence of a prepubertal minimum and of a pubertal maximum. The existence of a pubertal peak in human mandibular growth, therefore, is confirmed by thin-plate spline analysis. Significant morphological changes in the mandible during the growth interval from stage 3 to stage 4 in cervical vertebral maturation may be described as an upward-forward direction of condylar growth determining an overall "shrinkage" of the mandibular configuration along the measurement of total mandibular length. This biological mechanism is particularly efficient in compensating for major increments in mandibular size at the adolescent spurt.
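
    A small sketch of a thin-plate spline warp between two landmark configurations, using SciPy's RBFInterpolator with invented landmark coordinates rather than the study's cephalometric data; centroid size is computed as in geometric morphometrics:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Ten landmarks on a "reference" shape and their positions after growth
ref = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2],
                [1, 2], [0, 2], [0, 1], [1, 1], [1.5, 0.5]], dtype=float)
tgt = ref.copy()
tgt[:, 1] *= 1.2                 # a vertical stretch standing in for growth
tgt[8] += [0.1, -0.05]           # plus a local distortion at one landmark

# Thin-plate spline interpolant mapping reference landmarks onto targets
# (smoothing=0 by default, so landmarks are matched exactly)
warp = RBFInterpolator(ref, tgt, kernel="thin_plate_spline")

warped_grid = warp(np.array([[0.5, 0.5], [1.5, 1.5]]))   # deform nearby points
landmark_err = np.max(np.abs(warp(ref) - tgt))

def centroid_size(P):
    """Square root of summed squared distances of landmarks from their centroid."""
    return float(np.sqrt(((P - P.mean(axis=0)) ** 2).sum()))
```

    Shape change is read off the smooth deformation of the space between landmarks, while size change is tracked separately through centroid size, mirroring the shape/size split in the study.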

  5. Constrained robust estimation of magnetotelluric impedance functions based on a bounded-influence regression M-estimator and the Hilbert transform

    Directory of Open Access Journals (Sweden)

    D. Sutarno

    2008-03-01

    Full Text Available Robust impedance estimation procedures are now in standard use in magnetotelluric (MT measurements and research. These always yield impedance estimates which are better than the conventional least square (LS estimation because the 'real' MT data almost never satisfy the statistical assumptions of Gaussian distribution upon which normal spectral analysis is based. The robust estimation procedures are commonly based on M-estimators that have the ability to reduce the influence of unusual data (outliers in the response (electric field variables, but are often not sensitive to exceptional predictors (magnetic field data, which are termed leverage points.

    This paper proposes an alternative procedure for making reliably robust estimates of MT impedance functions, which simultaneously provide protection from the influence of outliers in both response and input variables. The means for accomplishing this is based on the bounded-influence regression M-estimation and the Hilbert Transform operating on the causal MT impedance functions. In the resulting regression estimates, outlier contamination is removed and the self consistency between the real and imaginary parts of the impedance estimates is guaranteed. Using synthetic and real MT data, it is shown that the method can produce improved MT impedance functions even under conditions of severe noise contamination.

  6. Achieving high data reduction with integral cubic B-splines

    Science.gov (United States)

    Chou, Jin J.

    1993-01-01

    During geometry processing, tangent directions at the data points are frequently readily available from the computation process that generates the points. It is desirable to utilize this information to improve the accuracy of curve fitting and to improve data reduction. This paper presents a curve fitting method which utilizes both position and tangent direction data. The method produces G¹ non-rational B-spline curves. In the examples, the method demonstrates very good data reduction rates while maintaining high accuracy in both position and tangent direction.
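
    The position-plus-tangent fitting idea can be sketched with SciPy's cubic Hermite spline, which interpolates both values and derivatives at the data points (a C¹, hence G¹, curve). The curve and sample counts below are assumptions for illustration, not the paper's data:

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Sample a curve sparsely, but keep the tangent (derivative) at each sample
x = np.linspace(0.0, 2.0 * np.pi, 7)        # only 7 points after data reduction
y, dy = np.sin(x), np.cos(x)

# Matches position and tangent at every data point
curve = CubicHermiteSpline(x, y, dy)

xx = np.linspace(0.0, 2.0 * np.pi, 200)
pos_err = np.max(np.abs(curve(xx) - np.sin(xx)))      # position accuracy
tan_err = np.max(np.abs(curve(xx, 1) - np.cos(xx)))   # tangent accuracy
```

    Because tangents pin down the curve between samples, far fewer points are needed than for position-only interpolation at comparable accuracy, which is the data-reduction effect the abstract reports.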

  7. Gravity Aided Navigation Precise Algorithm with Gauss Spline Interpolation

    Directory of Open Access Journals (Sweden)

    WEN Chaobin

    2015-01-01

    Full Text Available The problem of gravity compensation in the error equation should be thoroughly solved before gravity aided navigation with high precision can be studied. An algorithm for constructing a gravity aided navigation model is proposed, based on approximating the local grid gravity anomaly field with 2D Gauss spline interpolation. The gravity disturbance vector, the standard gravity value error and the Eotvos effect are all compensated in this precise model. The experimental results show that positioning accuracy is roughly doubled, attitude and velocity accuracy improve by a factor of one to two, and the positional error is maintained within 100-200 m.
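
    A sketch of interpolating a gridded gravity anomaly field, with a bicubic tensor-product spline standing in for the paper's 2D Gauss spline interpolation, and a synthetic anomaly field in place of real survey data:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Gridded gravity anomaly field (synthetic, in mGal) on a lat/lon grid
lat = np.linspace(0.0, 1.0, 21)
lon = np.linspace(0.0, 1.0, 21)
LAT, LON = np.meshgrid(lat, lon, indexing="ij")
anomaly = 30.0 * np.sin(2 * np.pi * LAT) * np.cos(2 * np.pi * LON)

# Bicubic spline surface approximating the local anomaly field
surface = RectBivariateSpline(lat, lon, anomaly, kx=3, ky=3)

# Continuous anomaly value and its horizontal gradient at an arbitrary position,
# as a navigation filter would query them along the vehicle trajectory
g = surface(0.37, 0.58)[0, 0]
dg_dlat = surface(0.37, 0.58, dx=1)[0, 0]
```

    A smooth, differentiable representation of the anomaly field is what lets the gravity disturbance terms be evaluated continuously inside the navigation error model.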

  8. Identification of candidate categories of the International Classification of Functioning Disability and Health (ICF for a Generic ICF Core Set based on regression modelling

    Directory of Open Access Journals (Sweden)

    Üstün Bedirhan T

    2006-07-01

    Full Text Available Abstract. Background: The International Classification of Functioning, Disability and Health (ICF) is the framework developed by WHO to describe functioning and disability at both the individual and population levels. While condition-specific ICF Core Sets are useful, a Generic ICF Core Set is needed to describe and compare problems in functioning across health conditions. Methods: The aims of the multi-centre, cross-sectional study presented here were (a) to propose a method to select ICF categories when a large amount of ICF-based data has to be handled, and (b) to identify candidate ICF categories for a Generic ICF Core Set by examining their explanatory power in relation to item one of the SF-36. The data were collected from 1039 patients using the ICF checklist, the SF-36 and a Comorbidity Questionnaire. ICF categories to be entered in an initial regression model were selected following systematic steps in accordance with the ICF structure. Based on the initial regression model, additional models were designed by systematically substituting the included ICF categories with ICF categories with which they were highly correlated. Results: Fourteen different regression models were estimated. The variance accounted for by these models ranged from 22.27% to 24.0%. The ICF category that explained the highest amount of variance in all models was sensation of pain. In total, thirteen candidate ICF categories for a Generic ICF Core Set were proposed. Conclusion: The selection strategy based on the ICF structure and the examination of the best possible alternative models does not provide a final answer as to which ICF categories must be included, but it leads to a selection of suitable candidates that needs further consideration and comparison with the results of other selection strategies in developing a Generic ICF Core Set.

  9. Bounded Gaussian process regression

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan

    2013-01-01

    We extend the Gaussian process (GP) framework for bounded regression by introducing two bounded likelihood functions that model the noise on the dependent variable explicitly. This is fundamentally different from the implicit noise assumption in the previously suggested warped GP framework. We...

  10. Effectiveness of Combining Statistical Tests and Effect Sizes When Using Logistic Discriminant Function Regression to Detect Differential Item Functioning for Polytomous Items

    Science.gov (United States)

    Gómez-Benito, Juana; Hidalgo, Maria Dolores; Zumbo, Bruno D.

    2013-01-01

    The objective of this article was to find an optimal decision rule for identifying polytomous items with large or moderate amounts of differential functioning. The effectiveness of combining statistical tests with effect size measures was assessed using logistic discriminant function analysis and two effect size measures: R² and…

  11. Physics-based elastic image registration using splines and including landmark localization uncertainties.

    Science.gov (United States)

    Wörz, Stefan; Rohr, Karl

    2006-01-01

    We introduce an elastic registration approach which is based on a physical deformation model and uses Gaussian elastic body splines (GEBS). We formulate an extended energy functional related to the Navier equation under Gaussian forces which also includes landmark localization uncertainties. These uncertainties are characterized by weight matrices representing anisotropic errors. Since the approach is based on a physical deformation model, cross-effects in elastic deformations can be taken into account. Moreover, we have a free parameter to control the locality of the transformation for improved registration of local geometric image differences. We demonstrate the applicability of our scheme based on 3D CT images from the Truth Cube experiment, 2D MR images of the brain, as well as 2D gel electrophoresis images. It turns out that the new scheme achieves more accurate results compared to previous approaches.

  12. COLLINARUS: collection of image-derived non-linear attributes for registration using splines

    Science.gov (United States)

    Chappelow, Jonathan; Bloch, B. Nicolas; Rofsky, Neil; Genega, Elizabeth; Lenkinski, Robert; DeWolf, William; Viswanath, Satish; Madabhushi, Anant

    2009-02-01

    We present a new method for fully automatic non-rigid registration of multimodal imagery, including structural and functional data, that utilizes multiple textural feature images to drive an automated spline based non-linear image registration procedure. Multimodal image registration is significantly more complicated than registration of images from the same modality or protocol on account of difficulty in quantifying similarity between different structural and functional information, and also due to possible physical deformations resulting from the data acquisition process. The COFEMI technique for feature ensemble selection and combination has been previously demonstrated to improve rigid registration performance over intensity-based MI for images of dissimilar modalities with visible intensity artifacts. Hence, we present here the natural extension of feature ensembles for driving automated non-rigid image registration in our new technique termed Collection of Image-derived Non-linear Attributes for Registration Using Splines (COLLINARUS). Qualitative and quantitative evaluation of the COLLINARUS scheme is performed on several sets of real multimodal prostate images and synthetic multiprotocol brain images. Multimodal (histology and MRI) prostate image registration is performed for 6 clinical data sets comprising a total of 21 groups of in vivo structural (T2-w) MRI, functional dynamic contrast enhanced (DCE) MRI, and ex vivo WMH images with cancer present. Our method determines a non-linear transformation to align WMH with the high resolution in vivo T2-w MRI, followed by mapping of the histopathologic cancer extent onto the T2-w MRI. The cancer extent is then mapped from T2-w MRI onto DCE-MRI using the combined non-rigid and affine transformations determined by the registration. Evaluation of prostate registration is performed by comparison with the 3 time point (3TP) representation of functional DCE data, which provides an independent estimate of cancer

  13. Modeling and testing treated tumor growth using cubic smoothing splines.

    Science.gov (United States)

    Kong, Maiying; Yan, Jun

    2011-07-01

    Human tumor xenograft models are often used in preclinical study to evaluate the therapeutic efficacy of a certain compound or a combination of certain compounds. In a typical human tumor xenograft model, human carcinoma cells are implanted to subjects such as severe combined immunodeficient (SCID) mice. Treatment with test compounds is initiated after a tumor nodule has appeared, and continued for a certain time period. Tumor volumes are measured over the duration of the experiment. It is well known that untreated tumor growth may follow certain patterns, which can be described by certain mathematical models. However, the growth patterns of treated tumors with multiple treatment episodes are quite complex, and the usage of parametric models is limited. We propose using cubic smoothing splines to describe tumor growth for each treatment group and for each subject, respectively. The proposed smoothing splines are quite flexible in modeling different growth patterns. In addition, using this procedure, we can obtain tumor growth and growth rate over time for each treatment group and for each subject, and examine whether tumor growth follows a certain growth pattern. To examine the overall treatment effect and group differences, scaled chi-squared test statistics based on the fitted group-level growth curves are proposed. A case study is provided to illustrate the application of this method, and simulations are carried out to examine the performance of the scaled chi-squared tests. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
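
    The group-level fitting step described above can be sketched with an off-the-shelf cubic smoothing spline; the synthetic measurements, the smoothing-factor heuristic, and all variable names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical tumor-volume measurements (days vs. mm^3) for one treatment group
days = np.linspace(0, 28, 15)
rng = np.random.default_rng(42)
volume = 100 * np.exp(0.08 * days) * (1 + 0.02 * rng.normal(size=days.size))

# Cubic smoothing spline; the smoothing factor s trades fidelity for smoothness
# (here set near the expected residual sum of squares of the 2% noise)
spl = UnivariateSpline(days, volume, k=3, s=days.size * (0.02 * volume.mean())**2)

growth = spl(days)                     # fitted group-level growth curve
growth_rate = spl.derivative()(days)   # instantaneous growth rate over time
```

The derivative of the fitted spline gives the growth rate directly, which is the quantity the abstract says the procedure makes available per group and per subject.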

  14. Momentum analysis by using a quintic spline model for the track

    CERN Document Server

    Wind, H

    1974-01-01

    A method is described to determine the momentum of a particle when the (inhomogeneous) analysing magnetic field and the position of at least three points on the track are known. The model of the field is essentially a cubic spline and that of the track a quintic spline. (8 refs).

  15. A Multidimensional Spline Based Global Nonlinear Aerodynamic Model for the Cessna Citation II

    NARCIS (Netherlands)

    De Visser, C.C.; Mulder, J.A.

    2010-01-01

    A new method is proposed for the identification of global nonlinear models of aircraft non-dimensional force and moment coefficients. The method is based on a recent type of multivariate spline, the multivariate simplex spline, which can accurately approximate very large, scattered nonlinear

  16. B-LUT: Fast and low memory B-spline image interpolation.

    Science.gov (United States)

    Sarrut, David; Vandemeulebroucke, Jef

    2010-08-01

    We propose a fast alternative to B-splines in image processing based on an approximate calculation using precomputed B-spline weights. During B-spline indirect transformation, these weights are efficiently retrieved in a nearest-neighbor fashion from a look-up table, greatly reducing overall computation time. Depending on the application, calculating a B-spline using a look-up table, called B-LUT, will result in an exact or approximate B-spline calculation. In the case of the latter, the obtained accuracy can be controlled by the user. The method is applicable to a wide range of B-spline applications and has very low memory requirements compared to other proposed accelerations. The performance of the proposed B-LUTs was compared to conventional B-splines as implemented in the popular ITK toolkit for the general case of image intensity interpolation. Experiments illustrated that highly accurate B-spline approximation can be obtained while computation time is reduced by a factor of 5-6. The B-LUT source code, compatible with the ITK toolkit, has been made freely available to the community. 2009 Elsevier Ireland Ltd. All rights reserved.
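
    A minimal NumPy sketch of the look-up-table idea in 1-D (the bin count, the clamped boundary handling, and all names are assumptions for illustration; the authors' ITK-compatible implementation is separate, and a full B-spline interpolator would also prefilter the signal into spline coefficients, which is skipped here):

```python
import numpy as np

def bspline3(x):
    """Cubic B-spline kernel."""
    ax = np.abs(x)
    return np.where(ax < 1, 2/3 - ax**2 + ax**3 / 2,
                    np.where(ax < 2, (2 - ax)**3 / 6, 0.0))

def build_lut(n_bins=1000):
    """Precompute the 4 tap weights for each quantized fractional offset."""
    t = (np.arange(n_bins) + 0.5) / n_bins        # bin centers in [0, 1)
    taps = np.array([-1, 0, 1, 2])
    return bspline3(t[:, None] - taps[None, :])   # shape (n_bins, 4)

def interp_blut(coef, x, lut):
    """Approximate sum_k coef[k] * B3(x - k) via nearest-neighbor LUT weights."""
    n_bins = lut.shape[0]
    k = np.floor(x).astype(int)
    frac = x - k
    w = lut[np.minimum((frac * n_bins).astype(int), n_bins - 1)]
    idx = np.clip(k[:, None] + np.array([-1, 0, 1, 2]), 0, len(coef) - 1)
    return (w * coef[idx]).sum(axis=1)

lut = build_lut(2000)
coef = np.ones(32)                    # constant coefficients -> constant signal
x = np.linspace(2.0, 29.0, 500)
vals = interp_blut(coef, x, lut)      # ~1.0 everywhere (partition of unity)
```

Because the four kernel weights sum to one at every offset, quantizing the offset only perturbs the result by the weight error, which shrinks as the table grows — this is the user-controllable accuracy the abstract mentions.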

  17. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
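
    The complex-number representation can be sketched with ordinary least squares on complex arrays; the data, coefficients, and interpretation below are invented for illustration (the paper's full model, with its real-isomorphism transformation and test statistics, is richer than this):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300

# Independent vector variable encoded as a complex number (e.g. east + i*north)
x1 = rng.normal(size=n) + 1j * rng.normal(size=n)

# True vector (complex) coefficients, plus small complex noise
b0, b1 = 1.0 - 2.0j, 0.5 + 1.5j
y = b0 + b1 * x1 + 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Complex least squares: the solver minimizes ||X b - y|| over complex b,
# so the recovered coefficients are themselves vectors (complex numbers)
X = np.column_stack([np.ones(n), x1])
coef, res, rank, sv = np.linalg.lstsq(X, y, rcond=None)
```

Multiplying by a complex coefficient rotates and scales the predictor vector, which is exactly the "coefficients as vectors" property the abstract contrasts with scalar-coefficient multivariate regression.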

  18. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by-product we obtain … functionals. The software presented here is implemented in the riskRegression package.

  19. Air Pollution and Lung Function in Dutch Children: A Comparison of Exposure Estimates and Associations Based on Land Use Regression and Dispersion Exposure Modeling Approaches.

    Science.gov (United States)

    Wang, Meng; Gehring, Ulrike; Hoek, Gerard; Keuken, Menno; Jonkers, Sander; Beelen, Rob; Eeftens, Marloes; Postma, Dirkje S; Brunekreef, Bert

    2015-08-01

    There is limited knowledge about the extent to which estimates of air pollution effects on health are affected by the choice of a specific exposure model. We aimed to evaluate the correlation between long-term air pollution exposure estimates using two commonly used exposure modeling techniques [dispersion and land use regression (LUR) models] and, in addition, to compare the estimates of the association between long-term exposure to air pollution and lung function in children using these exposure modeling techniques. We used data of 1,058 participants of a Dutch birth cohort study with measured forced expiratory volume in 1 sec (FEV1), forced vital capacity (FVC), and peak expiratory flow (PEF) measurements at 8 years of age. For each child, annual average outdoor air pollution exposure [nitrogen dioxide (NO2), mass concentration of particulate matter with diameters ≤ 2.5 and ≤ 10 μm (PM2.5, PM10), and PM2.5 soot] was estimated for the current addresses of the participants by a dispersion and a LUR model. Associations between exposures to air pollution and lung function parameters were estimated using linear regression analysis with confounder adjustment. Correlations between LUR- and dispersion-modeled pollution concentrations were high for NO2, PM2.5, and PM2.5 soot (R = 0.86-0.90) but low for PM10 (R = 0.57). Associations with lung function were similar for air pollutant exposures estimated using LUR and dispersion modeling, except for associations of PM2.5 with FEV1 and FVC, which were stronger but less precise for exposures based on LUR compared with the dispersion model. Predictions from LUR and dispersion models correlated very well for PM2.5, NO2, and PM2.5 soot but not for PM10. Health effect estimates did not depend on the type of model used to estimate exposure in a population of Dutch children.

  20. Research on Non-Uniform Rational B-spline Surface and Agent for Numerically Controlled Layout Design

    Directory of Open Access Journals (Sweden)

    Zhigang XU

    2014-07-01

    Full Text Available Research on integrated NC conceptual layout design (I-NCC) is concerned with a broad area of interests. The key issues of an I-NCC system are associated with NURBS and agents. Firstly, formulas for the derivatives and normal vectors of non-rational B-splines and NURBS are proved based on de BOOR's recursive formula. Compared with existing approaches targeting the non-rational B-spline basis functions, these equations are directly targeted at the control points, so the algorithms and programs for NURBS curves and surfaces can also be applied to the derivatives and normals, and calculation performance is improved. A simplified equation is also proved in this paper. Secondly, the NC conceptual configuration design is transformed into a 3D cuboid layout problem by the introduction of three typical modules, a translation module, a rotation module and a base module, based on the analysis of the normal unit vector of the workpiece surface (in NURBS format). The 3D cuboid layout problem is viewed as a generalization of the quadratic assignment problem and therefore belongs to the class of NP-hard problems. Apart from the complexity and variety of 3D layout optimization algorithms, this paper introduces an agent-oriented cooperative design system. Agent models and the corresponding design management systems are put forward to deal with creative NC layout design. Though the key theoretical issues are now applied to NC system design, there should be more industrial applications because of the prevalent proliferation of NURBS and agents.

  1. Magnetotelluric (MT) data smoothing based on B-Spline algorithm and qualitative spectral analysis

    Science.gov (United States)

    Handyarso, Accep; Grandis, Hendra

    2017-07-01

    Data processing is one of the essential steps to obtain the optimum response function of the Earth's subsurface. MT data processing is based on the Fast Fourier Transform (FFT) algorithm, which converts the time series data into its frequency domain counterpart. The FFT combined with statistical algorithms constitutes the Robust Processing algorithm, which is widely implemented in MT data processing software. Robust Processing has three variants, i.e. No Weight (NW), Rho Variance (RV), and Ordinary Coherency (OC). The RV and OC options allow for denoising the data, but in many cases Robust Processing still results in a not so smooth sounding curve due to strong noise presence during measurement, such that Crosspower (XPR) analysis must be conducted in the data processing. The XPR analysis is a very time consuming step within the data processing. The combination of a B-Spline algorithm and Qualitative Spectral Analysis in the frequency domain can be advantageous as an alternative to these steps. The technique starts from the best coherency obtained in the Robust Processing results. In the Qualitative Spectral Analysis one can determine which parts of the data, by frequency, are more or less reliable; the next step then invokes the B-Spline algorithm for data smoothing. This algorithm selects the best fit of the data trend in the frequency domain. The smooth apparent resistivity and phase sounding curves can be considered more appropriate representations of the subsurface. This algorithm has been applied to real MT data from several surveys and gives satisfactory results.
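
    The smoothing step can be sketched with a cubic smoothing B-spline fitted to a synthetic sounding curve in the log-frequency domain; the curve shape, noise level, and smoothing factor below are assumptions, not the authors' data or parameter choices:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Synthetic sounding curve: log10 apparent resistivity vs. log10 frequency
logf = np.linspace(-3, 3, 40)                 # 1 mHz .. 1 kHz
rng = np.random.default_rng(3)
true_rho = 2.0 + 0.8 * np.tanh(logf)          # smooth two-layer-like trend
logrho = true_rho + 0.05 * rng.normal(size=logf.size)

# Cubic smoothing B-spline; s sets the permitted misfit (~ n * sigma^2)
tck = splrep(logf, logrho, k=3, s=logf.size * 0.05**2)
smooth = splev(logf, tck)                     # smoothed sounding curve
```

In practice the per-frequency coherency from Robust Processing could weight the fit (the `w` argument of `splrep`), so less reliable frequency bands pull the spline less.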

  2. A Quadratic Spline based Interface (QUASI) reconstruction algorithm for accurate tracking of two-phase flows

    Science.gov (United States)

    Diwakar, S. V.; Das, Sarit K.; Sundararajan, T.

    2009-12-01

    A new Quadratic Spline based Interface (QUASI) reconstruction algorithm is presented which provides an accurate and continuous representation of the interface in a multiphase domain and facilitates the direct estimation of local interfacial curvature. The fluid interface in each of the mixed cells is represented by piecewise parabolic curves and an initial discontinuous PLIC approximation of the interface is progressively converted into a smooth quadratic spline made of these parabolic curves. The conversion is achieved by a sequence of predictor-corrector operations enforcing function (C0) and derivative (C1) continuity at the cell boundaries using simple analytical expressions for the continuity requirements. The efficacy and accuracy of the current algorithm has been demonstrated using standard test cases involving reconstruction of known static interface shapes and dynamically evolving interfaces in prescribed flow situations. These benchmark studies illustrate that the present algorithm performs excellently as compared to the other interface reconstruction methods available in literature. Quadratic rate of error reduction with respect to grid size has been observed in all the cases with curved interface shapes; only in situations where the interface geometry is primarily flat, the rate of convergence becomes linear with the mesh size. The flow algorithm implemented in the current work is designed to accurately balance the pressure gradients with the surface tension force at any location. As a consequence, it is able to minimize spurious flow currents arising from imperfect normal stress balance at the interface. This has been demonstrated through the standard test problem of an inviscid droplet placed in a quiescent medium. Finally, the direct curvature estimation ability of the current algorithm is illustrated through the coupled multiphase flow problem of a deformable air bubble rising through a column of water.
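
    The C0/C1 continuity conditions for piecewise parabolas have a simple closed form; a minimal 1-D sketch (the actual algorithm works on PLIC segments in mixed cells, and the starting slope here is an assumed input):

```python
import numpy as np

def quadratic_spline(xs, ys, d0):
    """Piecewise parabolas p_i(x) = a + b (x - x_i) + c (x - x_i)^2 with
    C0 continuity of values at cell faces and C1 continuity of slopes.
    d0 is the prescribed slope at the left boundary."""
    coeffs = []
    d = d0
    for i in range(len(xs) - 1):
        h = xs[i + 1] - xs[i]
        a, b = ys[i], d
        c = (ys[i + 1] - ys[i] - d * h) / h**2   # enforce C0 at the right face
        coeffs.append((a, b, c))
        d = b + 2 * c * h                        # enforce C1: carry slope over
    return coeffs

def eval_spline(coeffs, xs, x):
    i = int(np.clip(np.searchsorted(xs, x, side='right') - 1, 0, len(coeffs) - 1))
    a, b, c = coeffs[i]
    t = x - xs[i]
    return a + b * t + c * t * t

# A single global parabola is reproduced exactly by the construction
xs = np.linspace(0.0, 4.0, 9)
f = lambda x: 1.0 + 2.0 * x - 0.5 * x**2
coeffs = quadratic_spline(xs, f(xs), d0=2.0)     # f'(0) = 2
```

Each parabola is pinned by the value and slope at its left face plus the value at its right face, so continuity propagates cell by cell, mirroring the predictor-corrector sweep described above.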

  3. Galerkin method for unsplit 3-D Dirac equation using atomically/kinetically balanced B-spline basis

    Energy Technology Data Exchange (ETDEWEB)

    Fillion-Gourdeau, F., E-mail: filliong@CRM.UMontreal.ca [Université du Québec, INRS – Énergie, Matériaux et Télécommunications, Varennes, J3X 1S2 (Canada); Centre de Recherches Mathématiques, Université de Montréal, Montréal, H3T 1J4 (Canada); Lorin, E., E-mail: elorin@math.carleton.ca [School of Mathematics and Statistics, Carleton University, Ottawa, K1S 5B6 (Canada); Centre de Recherches Mathématiques, Université de Montréal, Montréal, H3T 1J4 (Canada); Bandrauk, A.D., E-mail: andre.bandrauk@usherbrooke.ca [Laboratoire de Chimie Théorique, Faculté des Sciences, Université de Sherbrooke, Sherbrooke, J1K 2R1 (Canada); Centre de Recherches Mathématiques, Université de Montréal, Montréal, H3T 1J4 (Canada)

    2016-02-15

    A Galerkin method is developed to solve the time-dependent Dirac equation in prolate spheroidal coordinates for an electron–molecular two-center system. The initial state is evaluated from a variational principle using a kinetic/atomic balanced basis, which allows for an efficient and accurate determination of the Dirac spectrum and eigenfunctions. B-spline basis functions are used to obtain high accuracy. This numerical method is used to compute the energy spectrum of the two-center problem and then the evolution of eigenstate wavefunctions in an external electromagnetic field.

  4. Galerkin method for unsplit 3-D Dirac equation using atomically/kinetically balanced B-spline basis

    International Nuclear Information System (INIS)

    Fillion-Gourdeau, F.; Lorin, E.; Bandrauk, A.D.

    2016-01-01

    A Galerkin method is developed to solve the time-dependent Dirac equation in prolate spheroidal coordinates for an electron–molecular two-center system. The initial state is evaluated from a variational principle using a kinetic/atomic balanced basis, which allows for an efficient and accurate determination of the Dirac spectrum and eigenfunctions. B-spline basis functions are used to obtain high accuracy. This numerical method is used to compute the energy spectrum of the two-center problem and then the evolution of eigenstate wavefunctions in an external electromagnetic field.

  5. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    Science.gov (United States)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous equation regression model and a fusion of parametric and nonparametric models. The regression model comprises several models, and each model has two components, parametric and nonparametric. The model used has a linear function as the parametric component and a truncated polynomial spline as the nonparametric component. The model can handle both linear and nonlinear relationships between the response and the sets of predictor variables. The aim of this paper is to demonstrate the application of the regression model to modeling the effect of regional socio-economic variables on the use of information technology. More specifically, the response variables are the percentage of households with access to the internet and the percentage of households with a personal computer. The predictor variables are the percentage of literate people, the percentage of electrification, and the percentage of economic growth. Based on identification of the relationship between response and predictor variables, economic growth is treated as a nonparametric predictor and the others as parametric predictors. The result shows that multiresponse semiparametric regression can be applied well, as indicated by the high coefficient of determination, 90 percent.
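
    For a single response, the linear-plus-truncated-spline design can be sketched as an ordinary least-squares fit; the data, knot locations, and coefficient values below are invented stand-ins for the socio-economic variables (the paper's multiresponse estimation is more involved):

```python
import numpy as np

def semiparam_design(x_par, x_nonpar, knots, degree=1):
    """Columns: intercept, linear parametric predictors, then a truncated
    polynomial spline of the given degree in the nonparametric predictor."""
    cols = [np.ones(len(x_nonpar))]
    cols += [x_par[:, j] for j in range(x_par.shape[1])]
    cols += [x_nonpar**d for d in range(1, degree + 1)]
    cols += [np.where(x_nonpar > k, (x_nonpar - k)**degree, 0.0) for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(5)
n = 400
x_par = rng.normal(size=(n, 2))       # stand-ins for literacy, electrification
x_np = rng.uniform(0, 10, size=n)     # stand-in for economic growth
truth = 1.0 + 0.5 * x_par[:, 0] - 0.3 * x_par[:, 1] + np.where(x_np > 5, x_np - 5, 0.0)
y = truth + 0.1 * rng.normal(size=n)

X = semiparam_design(x_par, x_np, knots=[2.5, 5.0, 7.5], degree=1)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.sum((y - X @ beta)**2) / np.sum((y - y.mean())**2)
```

The truncated-power columns let the fit change slope at each knot, which is how the spline part absorbs nonlinearity in the nonparametric predictor while the parametric coefficients stay interpretable.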

  6. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2016-12-01

    Full Text Available In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.

  7. The modeling of quadratic B-spline surfaces for the tomographic reconstruction in the FCC-type riser

    International Nuclear Information System (INIS)

    Vasconcelos, Geovane Vitor; Dantas, Carlos Costa; Melo, Silvio de Barros; Pires, Renan Ferraz

    2009-01-01

    The 3D tomography reconstruction has been a profitable alternative in the analysis of the FCC (Fluid Catalytic Cracking) type riser, for appropriately keeping track of the sectional catalyst concentration distribution in the process of oil refining. The method of tomography reconstruction proposed by M. Azzi and colleagues (1991) uses a relatively small number of trajectories (from 3 to 5) and projections (from 5 to 7) of gamma rays, a desirable feature in industrial process tomography. Compared to more popular methods, such as the FBP (Filtered Back Projection), which demands a much higher number of gamma ray projections, the method by Azzi et al. is more appropriate for the industrial process, where the physical limitations and the cost of the process require more economical arrangements. The use of few projections and trajectories facilitates the diagnosis of the flow dynamical process. This article proposes an improvement in the basis functions introduced by Azzi et al., through the use of quadratic B-spline functions. The use of B-spline functions makes possible a smoother surface reconstruction of the density distribution, since the functions are continuous and smooth. This work describes how the modeling can be done. (author)

  8. lordif: An R Package for Detecting Differential Item Functioning Using Iterative Hybrid Ordinal Logistic Regression/Item Response Theory and Monte Carlo Simulations.

    Science.gov (United States)

    Choi, Seung W; Gibbons, Laura E; Crane, Paul K

    2011-03-01

    Logistic regression provides a flexible framework for detecting various types of differential item functioning (DIF). Previous efforts extended the framework by using item response theory (IRT) based trait scores, and by employing an iterative process using group-specific item parameters to account for DIF in the trait scores, analogous to purification approaches used in other DIF detection frameworks. The current investigation advances the technique by developing a computational platform integrating both statistical and IRT procedures into a single program. Furthermore, a Monte Carlo simulation approach was incorporated to derive empirical criteria for various DIF statistics and effect size measures. For purposes of illustration, the procedure was applied to data from a questionnaire of anxiety symptoms for detecting DIF associated with age from the Patient-Reported Outcomes Measurement Information System.
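
    The core DIF test can be sketched as a likelihood-ratio comparison of nested logistic models; the trait scores, group effect, and Newton-Raphson fitter below are simplified illustrations (lordif uses IRT-based trait scores, ordinal models, and purification, none of which are reproduced here):

```python
import numpy as np

def fit_logistic(X, y, iters=30):
    """Newton-Raphson maximum-likelihood fit; returns (coef, log-likelihood)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1 - p)
        b = b + np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ b))
    ll = np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return b, ll

rng = np.random.default_rng(11)
n = 1000
theta = rng.normal(size=n)                 # trait score (proxy for IRT score)
group = rng.integers(0, 2, size=n)         # reference vs. focal group
eta = 0.4 + 1.2 * theta + 1.0 * group      # injected uniform DIF via group shift
resp = (rng.uniform(size=n) < 1 / (1 + np.exp(-eta))).astype(float)

X_red = np.column_stack([np.ones(n), theta])           # no-DIF model
X_full = np.column_stack([np.ones(n), theta, group])   # adds a group effect
_, ll_red = fit_logistic(X_red, resp)
_, ll_full = fit_logistic(X_full, resp)
lrt = 2 * (ll_full - ll_red)   # ~ chi-square(1) under no DIF; large here
```

Comparing `lrt` against a chi-square critical value (or, as the abstract describes, against Monte Carlo-derived empirical criteria) flags the item as exhibiting DIF.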

  9. Regional Densification of a Global VTEC Model Based on B-Spline Representations

    Science.gov (United States)

    Erdogan, Eren; Schmidt, Michael; Dettmering, Denise; Goss, Andreas; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Mrotzek, Niclas

    2017-04-01

    both directions. The spectral resolution of both model parts is defined by the number of B-spline basis functions introduced for longitude and latitude directions related to appropriate coordinate systems. Furthermore, the TLVM has to be developed under the postulation that the global model part will be computed continuously in near real-time (NRT) and routinely predicted into the future by an algorithm based on deterministic and statistical forecast models. Thus, the additional regional densification model part, which will be computed also in NRT, but possibly only for a specified time duration, must be estimated independently from the global one. For that purpose a data separation procedure has to be developed in order to estimate the unknown series coefficients of both model parts independently. This procedure must also consider additional technique-dependent unknowns such as the Differential Code Biases (DCBs) within GNSS and intersystem biases. In this contribution we will present the concept to set up the TLVM including the data combination and the Kalman filtering procedure; first numerical results will be presented.

  10. TPSLVM: a dimensionality reduction algorithm based on thin plate splines.

    Science.gov (United States)

    Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming

    2014-10-01

    Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.
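
    A minimal sketch of the thin-plate-spline machinery that underlies such models — here just interpolation with the TPS radial kernel, not the TPSLVM latent variable model itself; the point set and function are invented:

```python
import numpy as np

def tps_kernel(r):
    """Thin plate spline radial basis U(r) = r^2 log r, with U(0) = 0."""
    with np.errstate(divide='ignore', invalid='ignore'):
        u = r**2 * np.log(r)
    return np.where(r > 0, u, 0.0)

def tps_fit(P, y):
    """Solve the standard TPS interpolation system [[K, Q], [Q^T, 0]]."""
    n = len(P)
    K = tps_kernel(np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1))
    Q = np.hstack([np.ones((n, 1)), P])
    A = np.block([[K, Q], [Q.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(A, np.concatenate([y, np.zeros(3)]))
    return sol[:n], sol[n:]              # kernel weights, affine part

def tps_eval(P, w, a, Pnew):
    K = tps_kernel(np.linalg.norm(Pnew[:, None, :] - P[None, :, :], axis=-1))
    return K @ w + a[0] + Pnew @ a[1:]

rng = np.random.default_rng(2)
P = rng.uniform(-1, 1, size=(25, 2))     # 2-D sites (e.g. latent coordinates)
y = np.sin(P[:, 0]) * np.cos(P[:, 1])
w, a = tps_fit(P, y)
```

The side conditions enforced by the `Q` block make the solution the minimum-bending-energy surface; the affine part is what gives thin plate splines their robustness to shift and rotation noted in the abstract.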

  11. Intensity-based hierarchical elastic registration using approximating splines.

    Science.gov (United States)

    Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C

    2014-01-01

    We introduce a new hierarchical approach for elastic medical image registration using approximating splines. In order to obtain the dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in the form of analytical solutions of the Navier equation, it can cope very well with the local as well as global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed GEBS approximating model is integrated into the elastic hierarchical image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The approximating GEBS registration scheme incorporates anisotropic landmark errors as well as rotation information. The anisotropic landmark localization uncertainties can be estimated directly from the image data, and in this case, they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is transposed into an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model is applied to register 161 image pairs from a digital mammogram database. The obtained results are very encouraging: the proposed approach significantly improved all registrations in terms of mean-square error relative to the approximating TPS with rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999). The average error per breast tissue pixel was less than 2.23 pixels compared to 2.46 pixels for Rueckert's method. The proposed hierarchical elastic image registration approach incorporates the GEBS

  12. Meshing Force of Misaligned Spline Coupling and the Influence on Rotor System

    Directory of Open Access Journals (Sweden)

    Guang Zhao

    2008-01-01

    Full Text Available The meshing force of a misaligned spline coupling is derived, the dynamic equation of a rotor-spline coupling system is established based on finite element analysis, and the influence of the meshing force on the rotor-spline coupling system is simulated by a numerical integration method. According to the theoretical analysis, the meshing force of a spline coupling is related to the coupling parameters, misalignment, transmitted torque, static misalignment, dynamic vibration displacement, and so on. The meshing force increases nonlinearly with increasing spline thickness and static misalignment or with decreasing alignment meshing distance (AMD). The stiffness of the coupling is related to the dynamic vibration displacement and the static misalignment, and is not a constant. The dynamic behavior of the rotor-spline coupling system reveals the following: 1X-rotating speed is the main response frequency of the system when there is no misalignment, while 2X-rotating speed appears when misalignment is present. Moreover, when misalignment increases, vibration of the system becomes intricate, the shaft orbit departs from the origin, and the magnitudes of all frequencies increase. The research results can provide important criteria for both the optimization design of spline couplings and the troubleshooting of rotor systems.

  13. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  14. Higher-order numerical solutions using cubic splines. [for partial differential equations

    Science.gov (United States)

    Rubin, S. G.; Khosla, P. K.

    1975-01-01

    A cubic spline collocation procedure has recently been developed for the numerical solution of partial differential equations. In the present paper, this spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a non-uniform mesh and overall fourth-order accuracy for a uniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, will be presented for several model problems.
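
    The fourth-order behavior on a uniform mesh can be checked numerically with an off-the-shelf clamped cubic spline interpolant (a SciPy-based sketch with a test function of our choosing, not the paper's collocation procedure):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def max_err(n):
    """Max interpolation error of a clamped cubic spline for sin on [0, pi]."""
    x = np.linspace(0, np.pi, n)
    # clamped spline: exact first derivatives cos(0)=1, cos(pi)=-1 at the ends
    cs = CubicSpline(x, np.sin(x), bc_type=((1, 1.0), (1, -1.0)))
    xf = np.linspace(0, np.pi, 2001)
    return np.max(np.abs(cs(xf) - np.sin(xf)))

e1, e2 = max_err(17), max_err(33)     # halve the uniform mesh spacing
order = np.log2(e1 / e2)              # observed convergence order, ~4
```

Halving the mesh spacing should shrink the error by roughly a factor of 16, i.e. an observed order near 4, consistent with the uniform-mesh claim above.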

  15. Television images identification in the vision system basis on the mathematical apparatus of cubic normalized B-splines

    Directory of Open Access Journals (Sweden)

    Krutov Vladimir

    2017-01-01

    Full Text Available The solution of the television image identification task is used in industry when creating autonomous robots and technical vision systems. A similar problem also arises in the development of image analysis systems that must function under various interfering factors, in complex observation conditions that complicate the registration process, and when a priori information, for example about the type of background noise, is absent. One of the most important operators is the contour selection operator. Methods and algorithms for processing information from image sensors must take into account the different character of the noise associated with image and signal registration. The task of isolating contours, which is in fact the digital differentiation of two-dimensional signals registered against background noise of varying character, is far from trivial, because such a task is ill-posed. In modern information systems, numerical differentiation methods or masks are usually used to isolate contours. This paper considers a new method of differentiating measurement results against a noise background using the modern mathematical apparatus of cubic smoothing B-splines. A new high-precision method of digital differentiation of signals using splines is proposed for the first time: the values of the derivatives are calculated with high accuracy without using standard numerical differentiation procedures. In effect, a method has been developed for calculating the image gradient module using spline differentiation. As experimental studies and computational experiments have shown, the method has higher noise immunity than algorithms based on standard differentiation procedures using masks.
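
    The noise-immunity argument can be sketched in 1-D: differentiate a noisy signal once with a smoothing spline and once with plain finite differences. The signal, noise level, and smoothing-factor heuristic are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(9)
x = np.linspace(0, 2 * np.pi, 400)
sigma = 0.05
y = np.sin(x) + sigma * rng.normal(size=x.size)   # noisy "scan line"

# Smoothing cubic spline; s ~ n * sigma^2 targets the expected misfit
spl = UnivariateSpline(x, y, k=3, s=x.size * sigma**2)
dy = spl.derivative()(x)          # noise-suppressed derivative

# Naive finite differences amplify the noise instead
dy_naive = np.gradient(y, x)
```

Because differentiation is ill-posed, the raw difference quotient amplifies noise by roughly 1/h, while the smoothing spline regularizes first and differentiates the smooth fit; in 2-D the same idea applied along rows and columns yields the gradient module used for contour selection.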

  16. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin

    2017-01-19

    In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13] (H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100

  17. Nonparametric functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yang, Jie; Wu, Rongling; Casella, George

    2009-03-01

    Functional mapping is a useful tool for mapping quantitative trait loci (QTL) that control dynamic traits. It incorporates mathematical aspects of biological processes into the mixture model-based likelihood setting for QTL mapping, thus increasing the power of QTL detection and the precision of parameter estimation. However, in many situations there is no obvious functional form and, in such cases, this strategy will not be optimal. Here we propose to use nonparametric function estimation, typically implemented with B-splines, to estimate the underlying functional form of phenotypic trajectories, and then construct a nonparametric test to find evidence of existing QTL. Using the representation of a nonparametric regression as a mixed model, the final test statistic is a likelihood ratio test. We consider two types of genetic maps: dense maps and general maps, and the power of nonparametric functional mapping is investigated through simulation studies and demonstrated by examples.

  18. Numerical treatment of Hunter Saxton equation using cubic trigonometric B-spline collocation method

    Science.gov (United States)

    Hashmi, M. S.; Awais, Muhammad; Waheed, Ammarah; Ali, Qutab

    2017-09-01

    In this article, the authors propose a computational model based on the cubic trigonometric B-spline collocation method to solve the Hunter Saxton equation. This nonlinear second order partial differential equation arises in the modeling of nematic liquid crystals and describes some aspects of orientation waves. The problem is decomposed into a system of linear equations using the cubic trigonometric B-spline collocation method with quasilinearization. To show the efficiency of the proposed method, two numerical examples have been tested for different values of t. The results are described using error tables and graphs and compared with results existing in the literature. It is evident that the results are in good agreement with the analytical solution and better than those of Arbabi, Nazari, and Davishi, Optik 127, 5255-5258 (2016). For the current problem, it is also observed that the cubic trigonometric B-spline gives better results than the cubic B-spline.

  19. Cubic B-spline solution for two-point boundary value problem with AOR iterative method

    Science.gov (United States)

    Suardi, M. N.; Radzuan, N. Z. F. M.; Sulaiman, J.

    2017-09-01

    In this study, the cubic B-spline approximation equation is derived using the cubic B-spline discretization scheme to solve two-point boundary value problems. A system of cubic B-spline approximation equations is then generated from this spline approximation equation in order to obtain the numerical solutions. To do this, the Accelerated Over-Relaxation (AOR) iterative method is used to solve the generated linear system. For the purpose of comparison, the GS iterative method is designated as the control method against which the SOR and AOR iterative methods are compared. Two example problems are considered to examine the efficiency of the proposed iterative methods via three measures: number of iterations, computational time, and maximum absolute error. From the numerical results obtained with these iterative methods, it can be concluded that the AOR iterative method is slightly more efficient than the SOR iterative method.
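For readers unfamiliar with AOR, the scheme generalizes SOR with a second relaxation parameter r (choosing r = omega recovers SOR). Below is a minimal sketch on a tridiagonal model system standing in for the B-spline collocation matrix of the paper; the matrix, right-hand side, and relaxation parameters are assumptions made for illustration.

```python
import numpy as np

def aor_solve(A, b, omega=1.2, r=1.1, tol=1e-10, max_iter=10000):
    """Accelerated Over-Relaxation: split A = D - L - U and iterate
    (D - rL) x_{k+1} = [(1-omega)D + (omega-r)L + omega U] x_k + omega b.
    Choosing r = omega recovers SOR."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)       # strictly lower part (negated by convention)
    U = -np.triu(A, 1)        # strictly upper part (negated by convention)
    M = D - r * L
    N = (1.0 - omega) * D + (omega - r) * L + omega * U
    x = np.zeros_like(b, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = np.linalg.solve(M, N @ x + omega * b)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# Tridiagonal model system standing in for the B-spline collocation matrix.
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_aor, it_aor = aor_solve(A, b)                    # AOR
x_sor, it_sor = aor_solve(A, b, omega=1.2, r=1.2)  # r = omega: plain SOR
```

Comparing `it_aor` and `it_sor` for different (omega, r) pairs mirrors the paper's iteration-count comparison.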

  20. Acoustic Emission Signatures of Fatigue Damage in Idealized Bevel Gear Spline for Localized Sensing

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    2017-06-01

    Full Text Available In many rotating machinery applications, such as helicopters, the splines of an externally-splined steel shaft that emerges from the gearbox engage with the reverse geometry of an internally splined driven shaft for the delivery of power. The splined section of the shaft is a critical and non-redundant element which is prone to cracking due to complex loading conditions. Thus, early detection of flaws is required to prevent catastrophic failures. The acoustic emission (AE) method is a direct way of detecting such active flaws, but its application to detect flaws in a splined shaft in a gearbox is difficult due to the interference of background noise and uncertainty about the effects of the wave propagation path on the received AE signature. Here, to model how AE may detect fault propagation in a hollow cylindrical splined shaft, the splined section is essentially unrolled into a metal plate of the same thickness as the cylinder wall. Spline ridges are cut into this plate, a through-notch is cut perpendicular to the spline to model fatigue crack initiation, and tensile cyclic loading is applied parallel to the spline to propagate the crack. In this paper, a new piezoelectric sensor array is introduced with the purpose of placing the sensors within the gearbox to minimize the wave propagation path. The fatigue crack growth of a notched and flattened gearbox spline component is monitored using the new piezoelectric sensor array and conventional sensors in a laboratory environment with the purpose of developing source models and testing the new sensor performance. The AE data are collected continuously, along with readings from strain gauges strategically positioned on the structure. A significant amount of continuous emission due to the plastic deformation accompanying the crack growth is observed. The frequency spectra of continuous emissions and burst emissions are compared to understand the differences between plastic deformation and sudden crack jumps. The

  1. Evaluation of hepatic function by the blood retention of sup 99m Tc-DTPA-galactosyl-human serum albumin using biexponential regression analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ha-Kawa, Sang Kil; Kojima, Michimasa; Tanaka, Yoshimasa; Kitagawa, Shin-ichi; Kubota, Yoshitsugu; Inoue, Kyoichi (Kansai Medical School, Moriguchi, Osaka (Japan))

    1991-10-01

    Three normal volunteers and 19 patients with chronic liver disease underwent a new RI examination using {sup 99m}Tc-diethylenetriaminepentaacetic acid-galactosyl-human serum albumin ({sup 99m}Tc-GSA). Blood retention rates of {sup 99m}Tc-GSA (%ID) were determined using the heart regression curve and an extrapolation technique following a single intravenous injection of the ligand (1 mg/185 MBq). The %ID in the blood 60 min after the injection showed significant correlations with serum albumin (p<0.001), retention rate of ICG at 15 min (p<0.001), plasma disappearance rate of ICG (p<0.001), prothrombin time (p<0.01), hepaplastin test (p<0.05), and total bilirubin (p<0.005). Increased %ID values were observed in accordance with the severity of the liver disease. Our method for estimating the blood retention of {sup 99m}Tc-GSA is considered useful in evaluating hepatic function. (author).

  2. Representation of a human head with bi-cubic B-splines technique based on the laser scanning technique in 3D surface anthropometry.

    Science.gov (United States)

    Zhang, B; Molenbroek, J F M

    2004-09-01

    Three-dimensional (3D) anthropometry based on the laser scanning technique provides not only one-dimensional measurements, calculated in accordance with landmarks that are pre-located manually on the human body surface, but also the 3D shape information between the landmarks. This new technique, used in recent ergonomic research, has brought new challenges to resolving an application problem that was generally avoided by anthropometric experts in their research. The current research problem concentrates on how to extend one-dimensional measurements (1D landmarks) into three-dimensional measurements (3D land-surfaces). The main purpose of this paper is to test whether B-spline functions can be used to fit 3D scanned human heads and, in further study, to develop a computer-aided ergonomic design tool (CAED). The result shows that B-spline surfaces can effectively reconstruct 3D human heads scanned with the laser scanning technique.

  3. Solving the nonlinear Schrödinger equation using cubic B-spline interpolation and finite difference methods on dual solitons

    Science.gov (United States)

    Ahmad, Azhar; Azmi, Amirah; Majid, Ahmad Abd.; Hamid, Nur Nadiah Abd

    2017-04-01

    In this paper, Nonlinear Schrödinger (NLS) equation with Neumann boundary conditions is solved using cubic B-spline interpolation method (CuBSIM) and finite difference method (FDM). Firstly, FDM is applied on the time discretization and cubic B-spline is utilized as an interpolation function in the space dimension with the help of theta-weighted method. The second approach is based on the FDM applied on the time and space discretization with the help of theta-weighted method. The CuBSIM is shown to be stable by using von Neumann stability analysis. The proposed method is tested on the interaction of the dual solitons of the NLS equation. The accuracy of the numerical results is measured by the Euclidean-norm and infinity-norm. CuBSIM is found to produce more accurate results than the FDM.

  4. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62 and 54 % increase in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.

  5. An algorithm based on a new DQM with modified extended cubic B-splines for numerical study of two dimensional hyperbolic telegraph equation

    Directory of Open Access Journals (Sweden)

    Brajesh Kumar Singh

    2018-03-01

    Full Text Available In this paper, a new approach, the “modified extended cubic B-spline differential quadrature (mECDQ) method”, has been developed for the numerical computation of the two dimensional hyperbolic telegraph equation. The mECDQ method is a DQM based on modified extended cubic B-spline functions as new basis functions. The mECDQ method reduces the hyperbolic telegraph equation to an amenable system of ordinary differential equations (ODEs) in time. The resulting system of ODEs has been solved by adopting an optimal five stage fourth-order strong stability preserving Runge-Kutta (SSP-RK54) scheme. The stability of the method is also studied by computing the eigenvalues of the coefficient matrices. It is shown that the mECDQ method produces a stable solution for the telegraph equation. The accuracy of the method is illustrated by computing the errors between analytical and numerical solutions, measured in terms of the L2, L∞ and average error norms for each problem. A comparison of mECDQ solutions with the results of other numerical methods has been carried out for various space sizes and time step sizes, which shows that the mECDQ solutions converge very fast in comparison with the various existing schemes. Keywords: Differential quadrature method, Hyperbolic telegraph equation, Modified extended cubic B-splines, mECDQ method, Thomas algorithm

  6. Ridge Regression: A Panacea?

    Science.gov (United States)

    Walton, Joseph M.; And Others

    1978-01-01

    Ridge regression is an approach to the problem of large standard errors of regression estimates of intercorrelated regressors. The effect of ridge regression on the estimated squared multiple correlation coefficient is discussed and illustrated. (JKS)
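The ridge estimator itself is one line of linear algebra: it augments the normal equations with lam times the identity, which shrinks the coefficients and tames the large standard errors caused by intercorrelated regressors. A minimal sketch with two nearly collinear regressors; all data are simulated for illustration.

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimate (X'X + lam*I)^{-1} X'y: shrinks coefficients and
    stabilizes the solve when the regressors are intercorrelated."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(1)
n = 100
z = rng.standard_normal(n)
X = np.column_stack([z, z + 0.01 * rng.standard_normal(n)])  # nearly collinear pair
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.standard_normal(n)

beta_ols = ridge(X, y, 0.0)    # lam = 0 recovers ordinary least squares
beta_ridge = ridge(X, y, 1.0)  # a little shrinkage stabilizes the estimate
```

The shrinkage shows up as a smaller coefficient norm for the ridge fit, while well-determined combinations of the coefficients (here their sum) are left essentially unchanged.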

  7. Two EXCEL macros for tracing deviated boreholes using cubic splines and calculation of formation depth and thickness

    Science.gov (United States)

    Ozkaya, Sait Ismail

    1995-08-01

    Two short EXCEL function macros are presented for calculation of borehole deviation, true vertical thickness, and true stratigraphic thickness. The function macros can be used as regular EXCEL functions. The calling formula, arguments, and their type are described and application is demonstrated on an example data set. The borehole bearing and drift between any two observation points are estimated by fitting a cubic spline curve to three adjacent observation points at a time. The macro can cope with horizontal wells. The macro expects dip; dip direction at formation tops; and x, y, and z components of the distance from point P 1 to point P 2 where P 1 and P 2 are the intersections of the borehole with the top and bottom of a formation, respectively. The macro returns true stratigraphic thickness of formations. Coordinates of points P 1 and P 2 are obtained from the results returned by the macro.
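Outside EXCEL, the same idea, fitting a cubic spline through adjacent survey stations and reading bearing and drift off its derivative, can be sketched with SciPy's `CubicSpline`; the deviation survey below is hypothetical, not from the paper's example data set.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical deviation survey: measured depth (m) -> (x, y, z) offsets (m).
md = np.array([0.0, 100.0, 200.0, 300.0])
xyz = np.array([
    [0.0,  0.0,   0.0],
    [5.0,  2.0,  99.0],
    [15.0, 6.0, 196.0],
    [30.0, 12.0, 290.0],
])

path = CubicSpline(md, xyz)   # componentwise cubic spline through the stations
tangent = path(150.0, 1)      # first derivative: borehole direction at MD 150 m

# Drift (angle from vertical) and bearing (azimuth) from the tangent vector.
drift = np.degrees(np.arccos(tangent[2] / np.linalg.norm(tangent)))
bearing = np.degrees(np.arctan2(tangent[1], tangent[0]))
```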

  8. Implementation of D-Spline-Based Incremental Performance Parameter Estimation Method with ppOpen-AT

    Directory of Open Access Journals (Sweden)

    Teruo Tanaka

    2014-01-01

    Full Text Available In automatic performance tuning (AT), a primary aim is to optimize performance parameters that are suitable for certain computational environments in ordinary mathematical libraries. For AT, an important issue is to reduce the estimation time required for optimizing performance parameters. To reduce the estimation time, we previously proposed the Incremental Performance Parameter Estimation method (IPPE method). This method estimates optimal performance parameters by inserting suitable sampling points that are based on computational results for a fitting function. As the fitting function, we introduced d-Spline, which is highly adaptable and requires little estimation time. In this paper, we report the implementation of the IPPE method with ppOpen-AT, which is a scripting language (set of directives) with features that reduce the workload of the developers of mathematical libraries that have AT features. To confirm the effectiveness of the IPPE method for the runtime phase AT, we applied the method to sparse matrix–vector multiplication (SpMV), in which the block size of the sparse matrix structure blocked compressed row storage (BCRS) was used for the performance parameter. The results from the experiment show that the cost was negligibly small for AT using the IPPE method in the runtime phase. Moreover, using the obtained optimal value, the execution time for the mathematical library SpMV was reduced by 44% on comparing the compressed row storage and BCRS (block size 8).

  9. FC LSEI WNNLS, Least-Square Fitting Algorithms Using B Splines

    International Nuclear Information System (INIS)

    Hanson, R.J.; Haskell, K.H.

    1989-01-01

    1 - Description of problem or function: FC allows a user to fit discrete data, in a weighted least-squares sense, using piece-wise polynomial functions represented by B-Splines on a given set of knots. In addition to the least-squares fitting of the data, equality, inequality, and periodic constraints at a discrete, user-specified set of points can be imposed on the fitted curve or its derivatives. The subprograms LSEI and WNNLS solve the linearly-constrained least-squares problem. LSEI solves the class of problem with general inequality constraints, and, if requested, obtains a covariance matrix of the solution parameters. WNNLS solves the class of problem with non-negativity constraints. It is anticipated that most users will find LSEI suitable for their needs; however, users with inequalities that are single bounds on variables may wish to use WNNLS. 2 - Method of solution: The discrete data are fit by a linear combination of piece-wise polynomial curves, which leads to a linear least-squares system of algebraic equations. Additional information is expressed as a discrete set of linear inequality and equality constraints on the fitted curve, which leads to a linearly-constrained least-squares system of algebraic equations. The solution of this system is the main computational problem solved.
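A present-day equivalent of the unconstrained core of this fit (weighted least-squares B-spline fitting on a given knot set, without FC's constraint handling) is available as `scipy.interpolate.make_lsq_spline`; the data and knot choices below are assumptions made for illustration.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)     # noisy discrete data

# Cubic (k=3) B-spline fit: interior knots plus (k+1)-fold boundary knots.
k = 3
interior = np.linspace(0.0, 2.0 * np.pi, 9)[1:-1]
t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]
spl = make_lsq_spline(x, y, t, k=k)

rms = float(np.sqrt(np.mean((y - spl(x)) ** 2)))      # residual of the LSQ fit
```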

  10. Testing the Performance of Cubic Splines and Nelson-Siegel Model for Estimating the Zero-coupon Yield Curve

    Directory of Open Access Journals (Sweden)

    Lorenčič Eva

    2016-06-01

    Full Text Available Understanding the relationship between interest rates and term to maturity of securities is a prerequisite for developing financial theory and evaluating whether it holds up in the real world; therefore, such an understanding lies at the heart of monetary and financial economics. Accurately fitting the term structure of interest rates is the backbone of a smoothly functioning financial market, which is why the testing of various models for estimating and predicting the term structure of interest rates is an important topic in finance that has received considerable attention for many decades. In this paper, we empirically contrast the performance of cubic splines and the Nelson-Siegel model by estimating the zero-coupon yields of Austrian government bonds. The main conclusion that can be drawn from the results of the calculations is that the Nelson-Siegel model outperforms cubic splines at the short end of the yield curve (up to 2 years), whereas for medium-term maturities (2 to 10 years) the fitting performance of both models is comparable.
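The Nelson-Siegel yield function is compact enough to state directly. A sketch with hypothetical parameters (not estimates from the Austrian bond data); note that for a fixed decay parameter lam the model is linear in the betas, so the curve can be fit by ordinary least squares.

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel zero-coupon yield: level (beta0), slope (beta1) and
    curvature (beta2) components with decay parameter lam."""
    x = tau / lam
    slope = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

# Hypothetical parameters: 4% long rate, inverted short end, mid-curve hump.
maturities = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0])
observed = nelson_siegel(maturities, 0.04, -0.02, 0.015, 1.5)

# For a fixed lam the model is linear in the betas, so ordinary least
# squares recovers them from observed yields.
x = maturities / 1.5
design = np.column_stack([
    np.ones_like(x),
    (1.0 - np.exp(-x)) / x,
    (1.0 - np.exp(-x)) / x - np.exp(-x),
])
betas, *_ = np.linalg.lstsq(design, observed, rcond=None)
```

At the short end the yield tends to beta0 + beta1 and at the long end to beta0, which is what makes the model easy to interpret relative to a spline fit.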

  11. Extension of an iterative hybrid ordinal logistic regression/item response theory approach to detect and account for differential item functioning in longitudinal data.

    Science.gov (United States)

    Mukherjee, Shubhabrata; Gibbons, Laura E; Kristjansson, Elizabeth; Crane, Paul K

    2013-04-01

    Many constructs are measured using multi-item data collection instruments. Differential item functioning (DIF) occurs when construct-irrelevant covariates interfere with the relationship between construct levels and item responses. DIF assessment is an active area of research, and several techniques are available to identify and account for DIF in cross-sectional settings. Many studies include data collected from individuals over time; yet appropriate methods for identifying and accounting for items with DIF in these settings are not widely available. We present an approach to this problem and apply it to longitudinal Modified Mini-Mental State Examination (3MS) data from English speakers in the Canadian Study of Health and Aging. We analyzed 3MS items for DIF with respect to sex, birth cohort and education. First, we focused on cross-sectional data from a subset of Canadian Study of Health and Aging participants who had complete data at all three data collection periods. We performed cross-sectional DIF analyses at each time point using an iterative hybrid ordinal logistic regression/item response theory (OLR/IRT) framework. We found that item-level findings differed at the three time points. We then developed and applied an approach to detecting and accounting for DIF using longitudinal data in which covariation within individuals over time is accounted for by clustering on person. We applied this approach to data for the "entire" dataset of English speaking participants including people who later dropped out or died. Accounting for longitudinal DIF modestly attenuated differences between groups defined by educational attainment. We conclude with a discussion of further directions for this line of research.

  12. Development of temporally refined land-use regression models predicting daily household-level air pollution in a panel study of lung function among asthmatic children.

    Science.gov (United States)

    Johnson, Markey; Macneill, Morgan; Grgicak-Mannion, Alice; Nethery, Elizabeth; Xu, Xiaohong; Dales, Robert; Rasmussen, Pat; Wheeler, Amanda

    2013-01-01

    Regulatory monitoring data and land-use regression (LUR) models have been widely used for estimating individual exposure to ambient air pollution in epidemiologic studies. However, LUR models lack fine-scale temporal resolution for predicting acute exposure and regulatory monitoring provides daily concentrations, but fails to capture spatial variability within urban areas. This study coupled LUR models with continuous regulatory monitoring to predict daily ambient nitrogen dioxide (NO(2)) and particulate matter (PM(2.5)) at 50 homes in Windsor, Ontario. We compared predicted versus measured daily outdoor concentrations for 5 days in winter and 5 days in summer at each home. We also examined the implications of using modeled versus measured daily pollutant concentrations to predict daily lung function among asthmatic children living in those homes. Mixed effect analysis suggested that temporally refined LUR models explained a greater proportion of the spatial and temporal variance in daily household-level outdoor NO(2) measurements compared with daily concentrations based on regulatory monitoring. Temporally refined LUR models captured 40% (summer) and 10% (winter) more of the spatial variance compared with regulatory monitoring data. Ambient PM(2.5) showed little spatial variation; therefore, daily PM(2.5) models were similar to regulatory monitoring data in the proportion of variance explained. Furthermore, effect estimates for forced expiratory volume in 1 s (FEV(1)) and peak expiratory flow (PEF) based on modeled pollutant concentrations were consistent with effects based on household-level measurements for NO(2) and PM(2.5). These results suggest that LUR modeling can be combined with continuous regulatory monitoring data to predict daily household-level exposure to ambient air pollution. Temporally refined LUR models provided a modest improvement in estimating daily household-level NO(2) compared with regulatory monitoring data alone, suggesting that this

  13. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...

  14. A Bézier-Spline-based Model for the Simulation of Hysteresis in Variably Saturated Soil

    Science.gov (United States)

    Cremer, Clemens; Peche, Aaron; Thiele, Luisa-Bianca; Graf, Thomas; Neuweiler, Insa

    2017-04-01

    Most transient variably saturated flow models neglect hysteresis in the p_c-S-relationship (Beven, 2012). Such models tend to inadequately represent matrix potential and saturation distribution. Thereby, when simulating flow and transport processes, fluid and solute fluxes might be overestimated (Russo et al., 1989). In this study, we present a simple, computationally efficient and easily applicable model that enables an adequate description of hysteresis in the p_c-S-relationship for variably saturated flow. This model can be seen as an extension of the existing play-type model (Beliaev and Hassanizadeh, 2001), where scanning curves are simplified as vertical lines between the main imbibition and main drainage curves. In our model, we use continuous linear and Bézier-Spline-based functions. We show the successful validation of the model by numerically reproducing a physical experiment by Gillham, Klute and Heermann (1976) describing primary drainage and imbibition in a vertical soil column. With a deviation of 3%, the simple Bézier-Spline-based model performs significantly better than the play-type approach, which deviates by 30% from the experimental results. Finally, we discuss the realization of physical experiments in order to extend the model to secondary scanning curves and to determine scanning curve steepness. Literature: Beven, K.J. (2012). Rainfall-Runoff-Modelling: The Primer. John Wiley and Sons. Russo, D., Jury, W. A., & Butters, G. L. (1989). Numerical analysis of solute transport during transient irrigation: 1. The effect of hysteresis and profile heterogeneity. Water Resources Research, 25(10), 2109-2118. https://doi.org/10.1029/WR025i010p02109. Beliaev, A.Y. & Hassanizadeh, S.M. (2001). A Theoretical Model of Hysteresis and Dynamic Effects in the Capillary Relation for Two-phase Flow in Porous Media. Transport in Porous Media 43: 487. doi:10.1023/A:1010736108256. Gillham, R., Klute, A., & Heermann, D. (1976). Hydraulic properties of a porous
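A scanning curve built from Bézier control points can be evaluated with de Casteljau's algorithm. The sketch below is generic, not the authors' parameterization; the control points in (S, p_c) space are hypothetical.

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Hypothetical scanning curve between main drainage and main imbibition:
# control points in (saturation S, capillary pressure p_c) space.
ctrl = [(0.25, 9.0), (0.45, 5.5), (0.65, 4.0), (0.90, 1.2)]
start = de_casteljau(ctrl, 0.0)   # equals the first control point
mid = de_casteljau(ctrl, 0.5)
end = de_casteljau(ctrl, 1.0)     # equals the last control point
```

Because the curve interpolates its end control points, the scanning curve can be anchored exactly on the main drainage and main imbibition curves while the interior control points shape its steepness.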

  15. Modeling positional effects of regulatory sequences with spline transformations increases prediction accuracy of deep neural networks.

    Science.gov (United States)

    Avsec, Žiga; Barekatain, Mohammadamin; Cheng, Jun; Gagneur, Julien

    2017-11-16

    Regulatory sequences are not solely defined by their nucleic acid sequence but also by their relative distances to genomic landmarks such as transcription start site, exon boundaries, or polyadenylation site. Deep learning has become the approach of choice for modeling regulatory sequences because of its strength to learn complex sequence features. However, modeling relative distances to genomic landmarks in deep neural networks has not been addressed. Here we developed spline transformation, a neural network module based on splines to flexibly and robustly model distances. Modeling distances to various genomic landmarks with spline transformations significantly increased state-of-the-art prediction accuracy of in vivo RNA-binding protein binding sites for 120 out of 123 proteins. We also developed a deep neural network for human splice branchpoint based on spline transformations that outperformed the current best, already distance-based, machine learning model. Compared to piecewise linear transformation, as obtained by composition of rectified linear units, spline transformation yields higher prediction accuracy as well as faster and more robust training. As spline transformation can be applied to further quantities beyond distances, such as methylation or conservation, we foresee it as a versatile component in the genomics deep learning toolbox. Spline transformation is implemented as a Keras layer in the CONCISE python package: https://github.com/gagneurlab/concise. Analysis code is available at goo.gl/3yMY5w. avsec@in.tum.de; gagneur@in.tum.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
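The building block of such a spline transformation is a B-spline basis expansion of a scalar covariate (here, a distance), whose few coefficients the network then learns. A generic sketch with SciPy, not the CONCISE implementation; the knot placement and log-scaling of distances are assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, knots, k=3):
    """Evaluate all clamped B-spline basis functions at x.
    Returns one row per point and one column per basis function."""
    t = np.r_[[knots[0]] * k, knots, [knots[-1]] * k]
    n_bases = len(t) - k - 1
    B = np.empty((len(x), n_bases))
    for j in range(n_bases):
        coeffs = np.zeros(n_bases)
        coeffs[j] = 1.0
        B[:, j] = BSpline(t, coeffs, k)(x)
    return B

# Hypothetical distances (bp) to a genomic landmark, log-scaled and expanded
# into a smooth low-dimensional basis, as a spline transformation layer would.
distances = np.array([10.0, 100.0, 1000.0, 5000.0, 20000.0])
x = np.log10(distances)
B = bspline_basis(x, np.linspace(x.min(), x.max(), 5))
```

A linear layer on top of `B` then yields a smooth learned function of distance, which is the robustness advantage over stacking rectified linear units.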

  16. Reconstruction of 4-D dynamic SPECT images from inconsistent projections using a Spline initialized FADS algorithm (SIFADS).

    Science.gov (United States)

    Abdalah, Mahmoud; Boutchko, Rostyslav; Mitra, Debasis; Gullberg, Grant T

    2015-01-01

    In this paper, we propose and validate an algorithm for extracting voxel-by-voxel time activity curves directly from the inconsistent projections that arise in dynamic cardiac SPECT. The algorithm was derived from the factor analysis of dynamic structures (FADS) approach and imposes prior information by applying several regularization functions with adaptively changing relative weighting. The anatomical information of the imaged subject was used to apply the proposed regularization functions adaptively in the spatial domain. The algorithm's performance is validated by reconstructing dynamic datasets simulated using the NCAT phantom with a range of different input tissue time-activity curves. The results are compared to the spline-based and FADS methods. The validated algorithm is then applied to reconstruct pre-clinical cardiac SPECT data from canine and murine subjects. Images generated from both simulated and experimentally acquired data confirm the ability of the new algorithm to solve the inverse problem of dynamic SPECT with slow gantry rotation.

  17. Digital elevation model production from scanned topographic contour maps via thin plate spline interpolation

    International Nuclear Information System (INIS)

    Soycan, Arzu; Soycan, Metin

    2009-01-01

    GIS (Geographical Information System) is one of the most striking innovations for mapping applications supplied to users by developing computer and software technology. GIS is a very effective tool which can visually combine geographical and non-geographical data, recording these to allow interpretation and analysis. A DEM (Digital Elevation Model) is an inalienable component of GIS. An existing TM (Topographic Map) can be used as the main data source for generating a DEM by a manual digitizing or vectorization process applied to the contour polylines. The aim of this study is to examine DEM accuracies obtained from TMs, depending on the number of sampling points and the grid size. For these purposes, the contours of several 1/1000 scaled scanned topographical maps were vectorized. Different DEMs of the relevant area were created using several datasets with different numbers of sampling points. We focused on DEM creation from contour lines using gridding with RBF (Radial Basis Function) interpolation techniques, namely TPS (Thin Plate Spline) as the surface fitting model. The solution algorithm and a short review of the mathematical model of TPS interpolation are given. In the test study, results of the application and the obtained accuracies are presented and discussed. The initial object of this research is to discuss the requirement of DEM in GIS, urban planning, surveying engineering and other applications with high accuracy (a few decimeters). (author)
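TPS gridding of scattered contour points is available in SciPy as RBF interpolation with the thin-plate kernel. A minimal sketch on synthetic elevations; the surface, sample layout, and grid are assumptions made for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
# Scattered samples standing in for digitized contour vertices: (x, y) and z.
pts = rng.uniform(0.0, 1.0, size=(200, 2))
z = np.sin(2.0 * np.pi * pts[:, 0]) * np.cos(2.0 * np.pi * pts[:, 1])

# Thin plate spline surface through the samples (smoothing=0 interpolates exactly).
tps = RBFInterpolator(pts, z, kernel='thin_plate_spline')

# Evaluate on a regular grid to produce the DEM raster.
gx, gy = np.meshgrid(np.linspace(0.1, 0.9, 20), np.linspace(0.1, 0.9, 20))
dem = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
```

Varying the number of sample points and the grid spacing in such a sketch reproduces the accuracy trade-off the study examines.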

  18. Spline Approximation-Based Optimization of Multi-component Disperse Reinforced Composites

    Directory of Open Access Journals (Sweden)

    Yu. I. Dimitrienko

    2015-01-01

    Full Text Available The paper suggests an algorithm for solving problems of optimal design of multicomponent disperse-reinforced composite materials whose properties are defined by filler concentrations and are independent of filler shape. It formulates the problem of conditional optimization of a composite with restrictions on its effective parameters (the elasticity modulus, tension and compression strengths, and heat-conductivity coefficient) with the composite density minimized. The effective characteristics of a composite were computed by finite-element solution of the auxiliary local problems of elasticity and heat-conductivity theory that appear when the asymptotic averaging method is applied. The algorithm suggested to solve the optimization problem includes the following main stages: 1) finding a set of solutions of the direct problem to calculate the effective characteristics; 2) constructing the curves of effective characteristics versus filler concentrations by means of approximating functions, for which a thin plate spline with smoothing is proposed; 3) constructing the set of points that satisfy the restrictions and obtaining the boundary of this set, resulting in a contour that can be parameterized; 4) finding the global density minimum over the contour through psi-transformation. A numerical example of solving the optimization problem is given for a disperse-reinforced composite with two types of fillers, both hollow microspheres: glass and phenolic. It is shown that the suggested algorithm finds optimal filler concentrations efficiently enough.

  19. A spectral/B-spline method for the Navier-Stokes equations in unbounded domains

    CERN Document Server

    Dufresne, L

    2003-01-01

    The numerical method presented in this paper aims at solving the incompressible Navier-Stokes equations in unbounded domains. The problem is formulated in cylindrical coordinates and the method is based on a Galerkin approximation scheme that makes use of vector expansions that exactly satisfy the continuity constraint. More specifically, the divergence-free basis vector functions are constructed with Fourier expansions in the theta and z directions while mapped B-splines are used in the semi-infinite radial direction. Special care has been taken to account for the particular analytical behaviors at both end points r=0 and r-> infinity. A modal reduction algorithm has also been implemented in the azimuthal direction, allowing for a relaxation of the CFL constraint on the timestep size and a possibly significant reduction of the number of DOF. The time marching is carried out using a mixed quasi-third order scheme. Besides the advantages of a divergence-free formulation and a quasi-spectral convergence, the lo...

  20. N-dimensional non uniform rational B-splines for metamodeling

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Cameron J [Los Alamos National Laboratory; Crawford, Richard H [UT - AUSTIN

    2008-01-01

    Non Uniform Rational B-splines (NURBs) have unique properties that make them attractive for engineering metamodeling applications. NURBs are known to accurately model many different continuous curve and surface topologies in 1- and 2-variate spaces. However, engineering metamodels of the design space often require hypervariate representations of multidimensional outputs. In essence, design space metamodels are hyperdimensional constructs with a dimensionality determined by their input and output variables. To use NURBs as the basis for a metamodel in a hyperdimensional space, traditional geometric fitting techniques must be adapted to hypervariate and hyperdimensional spaces composed of both continuous and discontinuous variable types. In this paper, we describe the necessary adaptations for the development of a NURBs-based metamodel called a Hyperdimensional Performance Model or HyPerModel. HyPerModels are capable of accurately and reliably modeling nonlinear hyperdimensional objects defined by both continuous and discontinuous variables of a wide variety of topologies, such as those that define typical engineering design spaces. We demonstrate this ability by successfully generating accurate HyPerModels of 10 trial functions, laying the foundation for future work with N-dimensional NURBs in design space applications.

  1. Average course approximation of measured subsidence and inclinations of mining area by smooth splines

    Directory of Open Access Journals (Sweden)

    Justyna Orwat

    2017-01-01

    Full Text Available The article presents the results of determining the average course of subsidence measured at the points of measuring line no. 1 of the “Budryk” Hard Coal Mine, set approximately perpendicular to the face run of four consecutively mined longwalls in coal bed 338/2. Smooth splines were used to approximate the average course of measured subsidence after subsequent exploitation stages. The minimisation of the sum of squared differences between the average and forecasted subsidence, calculated using J. Bialek's formula, was used as the criterion for selecting the smoothing parameter values of the approximating function. The parameter values of this formula were chosen to match the forecasted subsidence with the measured subsidence. The average values of inclinations were calculated on the basis of the approximated values of observed subsidence. It has been shown that, in this way, the average values of extreme measured inclinations can be obtained in almost the same way as extreme observed inclinations, without dividing the whole profile of the subsidence basin into parts. The obtained values of the variability coefficients of random scattering for subsidence and inclinations are smaller than the values reported in the literature.

  2. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    OpenAIRE

    He, Shanshan; Ou, Daojiang; Yan, Changya; Lee, Chen-Han

    2015-01-01

    Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortages such as numerical...

  3. An evaluation of prefiltered B-spline reconstruction for quasi-interpolation on the Body-Centered Cubic lattice.

    Science.gov (United States)

    Csébfalvi, Balázs

    2010-01-01

    In this paper, we demonstrate that quasi-interpolation of orders two and four can be efficiently implemented on the Body-Centered Cubic (BCC) lattice by using tensor-product B-splines combined with appropriate discrete prefilters. Unlike the nonseparable box-spline reconstruction previously proposed for the BCC lattice, the prefiltered B-spline reconstruction can utilize the fast trilinear texture-fetching capability of the recent graphics cards. Therefore, it can be applied for rendering BCC-sampled volumetric data interactively. Furthermore, we show that a separable B-spline filter can suppress the postaliasing effect much more isotropically than a nonseparable box-spline filter of the same approximation power. Although prefilters that make the B-splines interpolating on the BCC lattice do not exist, we demonstrate that quasi-interpolating prefiltered linear and cubic B-spline reconstructions can still provide similar or higher image quality than the interpolating linear box-spline and prefiltered quintic box-spline reconstructions, respectively.

  4. Practical box splines for reconstruction on the body centered cubic lattice.

    Science.gov (United States)

    Entezari, Alireza; Van De Ville, Dimitri; Möeller, Torsten

    2008-01-01

    We introduce a family of box splines for efficient, accurate and smooth reconstruction of volumetric data sampled on the Body Centered Cubic (BCC) lattice, which is the favorable volumetric sampling pattern due to its optimal spectral sphere packing property. First, we construct a box spline based on the four principal directions of the BCC lattice that allows for a linear C(0) reconstruction. Then, the design is extended for higher degrees of continuity. We derive the explicit piecewise polynomial representation of the C(0) and C(2) box splines that are useful for practical reconstruction applications. We further demonstrate that approximation in the shift-invariant space---generated by BCC-lattice shifts of these box splines---is twice as efficient as using the tensor-product B-spline solutions on the Cartesian lattice (with comparable smoothness and approximation order, and with the same sampling density). Practical evidence is provided demonstrating that not only is the BCC lattice generally a more accurate sampling pattern, but it also allows for extremely efficient reconstructions that outperform tensor-product Cartesian reconstructions.
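Several of the records above build on tensor-product B-splines. A minimal sketch of the Cox-de Boor recursion that underlies any such basis (pure Python; the uniform knot vector is illustrative, not taken from either paper):

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value at t of the i-th B-spline of order k (degree k-1)."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + k - 1] - knots[i]
    if denom > 0:
        left = (t - knots[i]) / denom * bspline_basis(i, k - 1, t, knots)
    right = 0.0
    denom = knots[i + k] - knots[i + 1]
    if denom > 0:
        right = (knots[i + k] - t) / denom * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Uniform knots; on an interior span the cubic (order-4) B-splines sum to 1
knots = list(range(8))          # 0..7
t = 3.5
total = sum(bspline_basis(i, 4, t, knots) for i in range(4))
print(round(total, 12))  # -> 1.0 (partition of unity)
```

A tensor-product basis in 2-D or 3-D is just the product of such 1-D values per axis, which is what makes the separable reconstructions in the record above cheap to evaluate.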

  5. LD-Spline: Mapping SNPs on genotyping platforms to genomic regions using patterns of linkage disequilibrium

    Directory of Open Access Journals (Sweden)

    Bush William S

    2009-12-01

    Full Text Available Abstract Background Gene-centric analysis tools for genome-wide association study data are being developed both to annotate single locus statistics and to prioritize or group single nucleotide polymorphisms (SNPs) prior to analysis. These approaches require knowledge about the relationships between SNPs on a genotyping platform and genes in the human genome. SNPs in the genome can represent broader genomic regions via linkage disequilibrium (LD), and population-specific patterns of LD can be exploited to generate a data-driven map of SNPs to genes. Methods In this study, we implemented LD-Spline, a database routine that defines the genomic boundaries a particular SNP represents using linkage disequilibrium statistics from the International HapMap Project. We compared the LD-Spline haplotype block partitioning approach to that of the four gamete rule and the Gabriel et al. approach using simulated data; in addition, we processed two commonly used genome-wide association study platforms. Results We illustrate that LD-Spline performs comparably to the four-gamete rule and the Gabriel et al. approach; however, as a SNP-centric approach LD-Spline has the added benefit of systematically identifying a genomic boundary for each SNP, where the global block partitioning approaches may falter due to sampling variation in LD statistics. Conclusion LD-Spline is an integrated database routine that quickly and effectively defines the genomic region marked by a SNP using linkage disequilibrium, with a SNP-centric block definition algorithm.

  6. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: "This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable." -Journal of the American Statistical Association Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

  7. Nonparametric modal regression

    OpenAIRE

    Chen, Yen-Chi; Genovese, Christopher R.; Tibshirani, Ryan J.; Wasserman, Larry

    2016-01-01

    Modal regression estimates the local modes of the distribution of $Y$ given $X=x$, instead of the mean, as in the usual regression sense, and can hence reveal important structure missed by usual regression methods. We study a simple nonparametric method for modal regression, based on a kernel density estimate (KDE) of the joint distribution of $Y$ and $X$. We derive asymptotic error bounds for this method, and propose techniques for constructing confidence sets and prediction sets. The latter...

  8. Flexible survival regression modelling

    DEFF Research Database (Denmark)

    Cortese, Giuliana; Scheike, Thomas H; Martinussen, Torben

    2009-01-01

    Regression analysis of survival data, and more generally event history data, is typically based on Cox's regression model. We here review some recent methodology, focusing on the limitations of Cox's regression model. The key limitation is that the model is not well suited to represent time-varyi...

  9. Speckle tracking echocardiography derived 2-dimensional myocardial strain predicts left ventricular function and mass regression in aortic stenosis patients undergoing aortic valve replacement.

    Science.gov (United States)

    Staron, Adam; Bansal, Manish; Kalakoti, Piyush; Nakabo, Ayumi; Gasior, Zbigniew; Pysz, Piotr; Wita, Krystian; Jasinski, Marek; Sengupta, Partho P

    2013-04-01

    Regression of left ventricular (LV) mass in severe aortic stenosis (AS) following aortic valve replacement (AVR) reduces the potential risk of sudden death and congestive heart failure associated with LV hypertrophy. We investigated whether abnormalities of resting LV deformation in severe AS can predict the lack of regression of LV mass following AVR. Two-dimensional speckle tracking echocardiography (STE) was performed in a total of 100 subjects including 60 consecutive patients with severe AS having normal LV ejection fraction (EF > 50 %) and 40 controls. STE was performed preoperatively and at 4 months following AVR, including longitudinal strain assessed from the apical 4-chamber and 2-chamber views and the circumferential and rotational mechanics measured from the apical short axis view. In comparison with controls, the patients with AS showed a significantly lower LV longitudinal (p regression (>10 %) following AVR. In conclusion, STE can quantify the burden of myocardial dysfunction in patients with severe AS despite the presence of normal LV ejection fraction. Furthermore, resting abnormalities in circumferential strain at the LV apex are related to a hemodynamic milieu associated with the lack of LV mass regression during short-term follow-up after AVR.

  10. BSR: B-spline atomic R-matrix codes

    Science.gov (United States)

    Zatsarinny, Oleg

    2006-02-01

    BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian. New version program summaryTitle of program: BSR Catalogue identifier: ADWY Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWY Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC Operating systems under which the new version has been tested: UNIX, Windows XP Programming language used: FORTRAN 95 Memory required to execute with typical data: Typically 256-512 Mwords. Since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem No. of bits in a word: 8 No. of processors used: 1 Has the code been vectorized or parallelized?: no No. of lines in distributed program, including test data, etc.: 69 943 No. of bytes in distributed program, including test data, etc.: 746 450 Peripherals used: scratch disk store; permanent disk store Distribution format: tar.gz Nature of physical problem: This program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian. Method of solution: The R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. 
Norrington, Comput

  11. Bayesian semiparametric regression in the presence of conditionally heteroscedastic measurement and regression errors.

    Science.gov (United States)

    Sarkar, Abhra; Mallick, Bani K; Carroll, Raymond J

    2014-12-01

    We consider the problem of robust estimation of the regression relationship between a response and a covariate based on sample in which precise measurements on the covariate are not available but error-prone surrogates for the unobserved covariate are available for each sampled unit. Existing methods often make restrictive and unrealistic assumptions about the density of the covariate and the densities of the regression and the measurement errors, for example, normality and, for the latter two, also homoscedasticity and thus independence from the covariate. In this article we describe Bayesian semiparametric methodology based on mixtures of B-splines and mixtures induced by Dirichlet processes that relaxes these restrictive assumptions. In particular, our models for the aforementioned densities adapt to asymmetry, heavy tails and multimodality. The models for the densities of regression and measurement errors also accommodate conditional heteroscedasticity. In simulation experiments, our method vastly outperforms existing methods. We apply our method to data from nutritional epidemiology. © 2014, The International Biometric Society.

  12. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    Science.gov (United States)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

    Growing competition in sales between companies in Indonesia means that every company must have proper planning in order to win the competition with other companies. One of the things that can be done to design such a plan is to forecast car sales for the next few periods, so that the inventory of cars to be sold is proportionate to the number of cars needed. To obtain a correct forecast, one of the methods that can be used is Adaptive Spline Threshold Autoregression (ASTAR). Therefore, this discussion focuses on the use of the ASTAR method in forecasting the volume of car sales at PT. Srikandi Diamond Motors using time series data. In this research, forecasting with the ASTAR method produced approximately correct values.

  13. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression, which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression.
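The principle the record describes rests on the check ("pinball") loss: over constant predictors, the loss for level τ is minimized at the sample τ-quantile. A tiny illustration with hypothetical data:

```python
def pinball_loss(tau, y, q):
    """Mean check loss rho_tau(y - q); minimized over constants q at the tau-quantile."""
    return sum((tau * (v - q)) if v >= q else ((tau - 1.0) * (v - q)) for v in y) / len(y)

y = [1.0, 2.0, 4.0, 7.0, 11.0]
tau = 0.5
# Scan candidate constants; the minimizer is the sample median (4)
best = min((pinball_loss(tau, y, q), q) for q in range(13))
print(best[1])  # -> 4
```

Full quantile regression replaces the constant q with a linear predictor x'β and minimizes the same loss over β, typically by linear programming.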

  14. Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Dong [KyungPook National Univ., Taegu (Korea, Republic of)

    1996-12-31

    In this talk we discuss the finite element and finite difference technique for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points one obtains the matrix A_N (h = 1/N).

  15. Natural spline interpolation and exponential parameterization for length estimation of curves

    Science.gov (United States)

    Kozera, R.; Wilkołazka, M.

    2017-07-01

    This paper tackles the problem of estimating the length of a regular parameterized curve γ from an ordered sample of interpolation points in arbitrary Euclidean space by a natural spline. The corresponding tabular parameters are not given and are approximated by the so-called exponential parameterization (depending on λ ∈ [0, 1]). The respective convergence orders α(λ) for estimating the length of γ are established for curves sampled more-or-less uniformly. The numerical experiments confirm a slow convergence order α(λ) = 2 for all λ ∈ [0, 1) and a cubic order α(1) = 3 once the natural spline is used.
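The natural cubic spline used here (second derivative zero at the end knots) reduces to a small tridiagonal solve for the knot second derivatives. A hedged sketch, not the authors' implementation; knots and samples are illustrative:

```python
def natural_cubic_spline(x, y):
    """Return second derivatives M at the knots (natural end conditions M0 = Mn = 0)."""
    n = len(x)
    M = [0.0] * n
    if n < 3:
        return M
    a = [0.0] * n; b = [0.0] * n; c = [0.0] * n; d = [0.0] * n
    for i in range(1, n - 1):
        h0, h1 = x[i] - x[i - 1], x[i + 1] - x[i]
        a[i], b[i], c[i] = h0, 2.0 * (h0 + h1), h1
        d[i] = 6.0 * ((y[i + 1] - y[i]) / h1 - (y[i] - y[i - 1]) / h0)
    # Thomas algorithm on the interior rows
    for i in range(2, n - 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    for i in range(n - 2, 0, -1):
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]
    return M

def spline_eval(x, y, M, t):
    i = max(0, min(len(x) - 2, next(j for j in range(len(x) - 1) if t <= x[j + 1])))
    h = x[i + 1] - x[i]
    A, B = (x[i + 1] - t) / h, (t - x[i]) / h
    return (A * y[i] + B * y[i + 1]
            + ((A**3 - A) * M[i] + (B**3 - B) * M[i + 1]) * h * h / 6.0)

x = [0.0, 1.0, 2.0, 3.0]
y = [v * v for v in x]                  # samples of t^2 (hypothetical)
M = natural_cubic_spline(x, y)
print(round(spline_eval(x, y, M, 2.0), 6))  # -> 4.0, passes through the knot
```

Length estimation as in the record would then sum the chord lengths of a dense polyline sampled from the fitted spline.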

  16. Spatial and temporal interpolation of satellite-based aerosol optical depth measurements over North America using B-splines

    Science.gov (United States)

    Pfister, Nicolas; O'Neill, Norman T.; Aube, Martin; Nguyen, Minh-Nghia; Bechamp-Laganiere, Xavier; Besnier, Albert; Corriveau, Louis; Gasse, Geremie; Levert, Etienne; Plante, Danick

    2005-08-01

    Satellite-based measurements of aerosol optical depth (AOD) over land are obtained from an inversion procedure applied to dense dark vegetation pixels of remotely sensed images. The limited number of pixels over which the inversion procedure can be applied leaves many areas with little or no AOD data. Moreover, satellite coverage by sensors such as MODIS yields only daily images of a given region with four sequential overpasses required to straddle mid-latitude North America. Ground based AOD data from AERONET sun photometers are available on a more continuous basis but only at approximately fifty locations throughout North America. The object of this work is to produce a complete and coherent mapping of AOD over North America with a spatial resolution of 0.1 degree and a frequency of three hours by interpolating MODIS satellite-based data together with available AERONET ground based measurements. Before being interpolated, the MODIS AOD data extracted from different passes are synchronized to the mapping time using analyzed wind fields from the Global Multiscale Model (Meteorological Service of Canada). This approach amounts to a trajectory type of simplified atmospheric dynamics correction method. The spatial interpolation is performed using a weighted least squares method applied to bicubic B-spline functions defined on a rectangular grid. The least squares method enables one to weight the data accordingly to the measurement errors while the B-splines properties of local support and C2 continuity offer a good approximation of AOD behaviour viewed as a function of time and space.
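The record fits bicubic B-spline coefficients by weighted least squares, weighting data by measurement error. A hedged 1-D miniature of the same normal-equations step, (XᵀWX)β = XᵀWy, with made-up weights and a straight-line design instead of a B-spline basis:

```python
import numpy as np

def weighted_lsq(X, y, w):
    """Solve the weighted normal equations (X^T W X) beta = X^T W y."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Straight-line design; down-weight one outlying measurement (hypothetical data)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 5.0, 7.0, 100.0])     # last point is bad
X = np.column_stack([np.ones_like(t), t])
w = np.array([1.0, 1.0, 1.0, 1.0, 1e-9])      # error-based weights (assumed)
beta = weighted_lsq(X, y, w)
print(np.round(beta, 6))  # -> [1. 2.]: intercept 1, slope 2
```

In the full method, the columns of X would be tensor-product bicubic B-spline basis values at the observation locations, and the weights would come from the AOD retrieval errors.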

  17. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    Science.gov (United States)

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…

  18. A method for the selection of a functional form for a thermodynamic equation of state using weighted linear least squares stepwise regression

    Science.gov (United States)

    Jacobsen, R. T.; Stewart, R. B.; Crain, R. W., Jr.; Rose, G. L.; Myers, A. F.

    1976-01-01

    A method was developed for establishing a rational choice of the terms to be included in an equation of state with a large number of adjustable coefficients. The methods presented were developed for use in the determination of an equation of state for oxygen and nitrogen. However, a general application of the methods is possible in studies involving the determination of an optimum polynomial equation for fitting a large number of data points. The data considered in the least squares problem are experimental thermodynamic pressure-density-temperature data. Attention is given to a description of stepwise multiple regression and the use of stepwise regression in the determination of an equation of state for oxygen and nitrogen.
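A hedged sketch of forward stepwise selection in the spirit of the abstract (greedy residual-sum-of-squares reduction); the synthetic data are an assumption, not the oxygen/nitrogen measurements:

```python
import numpy as np

def forward_stepwise(X, y, n_terms):
    """Greedy forward selection: repeatedly add the column that most lowers the RSS."""
    def rss(cols):
        beta = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
        r = y - X[:, cols] @ beta
        return float(r @ r)
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(n_terms):
        j = min(remaining, key=lambda c: rss(chosen + [c]))
        chosen.append(j)
        remaining.remove(j)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))                    # candidate terms of the equation of state
y = 3.0 * X[:, 2] + 0.01 * rng.normal(size=60)  # only column 2 matters
print(forward_stepwise(X, y, 1))  # -> [2]
```

A weighted version, as in the paper, would scale each row of X and y by the corresponding data weight before the least squares solves.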

  19. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, written in the Xlisp-Stat language and called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava

  20. Alternative Methods of Regression

    CERN Document Server

    Birkes, David

    2011-01-01

    Of related interest. Nonlinear Regression Analysis and its Applications Douglas M. Bates and Donald G. Watts "...an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models...highly recommend[ed]...for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s

  1. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...
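For contrast with SPARROW, plain k-NNR predicts by averaging the regressands of the k nearest regressors. A minimal 1-D sketch with hypothetical data:

```python
def knn_regress(train_x, train_y, x, k):
    """Average the regressands of the k nearest regressors (1-D, absolute distance)."""
    order = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    nearest = order[:k]
    return sum(train_y[i] for i in nearest) / k

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 4.0, 9.0, 16.0]        # samples of y = x^2
print(knn_regress(xs, ys, 2.1, 2))     # averages y at x=2 and x=3 -> 6.5
```

SPARROW differs in that the neighbors and their weights come from a sparse approximation of the query point, so the effective k varies per query.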

  2. Dimension Reduction Regression in R

    Directory of Open Access Journals (Sweden)

    Sanford Weisberg

    2002-01-01

    Full Text Available Regression is the study of the dependence of a response variable y on a collection of p predictors collected in x. In dimension reduction regression, we seek to find a few linear combinations β1x, ..., βdx, such that all the information about the regression is contained in these linear combinations. If d is very small, perhaps one or two, then the regression problem can be summarized using simple graphics; for example, for d=1, the plot of y versus β1x contains all the regression information. When d=2, a 3D plot contains all the information. Several methods for estimating d and relevant functions of β1, ..., βd have been suggested in the literature. In this paper, we describe an R package for three important dimension reduction methods: sliced inverse regression or sir, sliced average variance estimates, or save, and principal Hessian directions, or phd. The package is very general and flexible, and can be easily extended to include other methods of dimension reduction. It includes tests and estimates of the dimension d, estimates of the relevant information including β1, ..., βd, and some useful graphical summaries as well.

  3. Cubic smoothing splines background correction in on-line liquid chromatography-Fourier transform infrared spectrometry.

    Science.gov (United States)

    Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel

    2010-10-22

    A background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry (LC-FTIR) is proposed. The developed approach applies univariate background correction to each variable (i.e. each wave number) individually. Spectra measured in the region before and after each peak cluster are used as knots to model the variation of the eluent absorption intensity with time using cubic smoothing splines (CSS) functions. The new approach has been successfully tested on simulated as well as on real data sets obtained from injections of standard mixtures of polyethylene glycols with four different molecular weights in methanol:water, 2-propanol:water and ethanol:water gradients ranging from 30 to 90, 10 to 25 and from 10 to 40% (v/v) of organic modifier, respectively. Calibration lines showed high linearity with coefficients of determination higher than 0.98 and limits of detection between 0.4 and 1.4, 0.9 and 1.8, and 1.1 and 2.7 mg mL⁻¹ in methanol:water, 2-propanol:water and ethanol:water, respectively. Furthermore the method performance has been compared with a univariate background correction approach based on the use of a reference spectra matrix (UBC-RSM) to discuss the potential as well as pitfalls and drawbacks of the proposed approach. This method works without previous variable selection and provides minimal user-interaction, thus increasing drastically the feasibility of on-line coupling of gradient LC-FTIR. Copyright © 2010 Elsevier B.V. All rights reserved.

  4. Approximation and geometric modeling with simplex B-splines associated with irregular triangles

    NARCIS (Netherlands)

    Auerbach, S.; Gmelig Meyling, R.H.J.; Neamtu, M.; Neamtu, M.; Schaeben, H.

    1991-01-01

    Bivariate quadratic simplicial B-splines defined by their corresponding set of knots derived from a (suboptimal) constrained Delaunay triangulation of the domain are employed to obtain a C1-smooth surface. The generation of triangle vertices is adjusted to the areal distribution of the data in the

  5. Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines

    Science.gov (United States)

    Tan, Yunhao; Hua, Jing; Qin, Hong

    2009-01-01

    In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines which can represent with accuracy geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior because it can unify the geometric and material properties in the simulation. The visualization can be directly computed from the object’s geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation without interpolation or resampling. We have applied the framework for biomechanic simulation of brain deformations, such as brain shifting during the surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and the real biomechanic experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636

  6. A modified linear algebraic approach to electron scattering using cubic splines

    International Nuclear Information System (INIS)

    Kinney, R.A.

    1986-01-01

    A modified linear algebraic approach to the solution of the Schrödinger equation for low-energy electron scattering is presented. The method uses a piecewise cubic-spline approximation of the wavefunction. Results in the static-potential and the static-exchange approximations for e⁻ + H s-wave scattering are compared with unmodified linear algebraic and variational linear algebraic methods. (author)

  7. Quadratic vs cubic spline-wavelets for image representations and compression

    NARCIS (Netherlands)

    P.C. Marais; E.H. Blake; A.A.M. Kuijk (Fons)

    1997-01-01

    textabstractThe Wavelet Transform generates a sparse multi-scale signal representation which may be readily compressed. To implement such a scheme in hardware, one must have a computationally cheap method of computing the necessary transform data. The use of semi-orthogonal quadratic spline wavelets

  8. Quadratic vs cubic spline-wavelets for image representation and compression

    NARCIS (Netherlands)

    P.C. Marais; E.H. Blake; A.A.M. Kuijk (Fons)

    1994-01-01

The Wavelet Transform generates a sparse multi-scale signal representation which may be readily compressed. To implement such a scheme in hardware, one must have a computationally cheap method of computing the necessary transform data. The use of semi-orthogonal quadratic spline wavelets

  9. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    Science.gov (United States)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of Three-Dimensional Euclidean Space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
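The control-point evaluation behind curves like these can be sketched with De Casteljau's algorithm, the standard way to evaluate a Bézier curve at a parameter value. The cubic control points below are hypothetical and not taken from the paper:

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t via De Casteljau's algorithm."""
    pts = [tuple(p) for p in control_points]
    # Repeatedly interpolate between adjacent points until one point remains.
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A cubic Bezier defined by four made-up waypoint-derived control points.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(ctrl, 0.0))   # start point: (0.0, 0.0)
print(de_casteljau(ctrl, 1.0))   # end point:   (4.0, 0.0)
print(de_casteljau(ctrl, 0.5))   # point on the curve at t = 0.5: (2.0, 1.5)
```

The curve interpolates only the first and last control points; the interior points shape the curvature, which is how vehicle-dynamics limits can be respected without forcing the path through every waypoint.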

  10. Fractional and complex pseudo-splines and the construction of Parseval frames

    DEFF Research Database (Denmark)

    Massopust, Peter; Forster, Brigitte; Christensen, Ole

    2017-01-01

in complex transform techniques for signal and image analyses. We also show that, in analogy to the integer case, the generalized pseudo-splines lead to constructions of Parseval wavelet frames via the unitary extension principle. The regularity and approximation order of this new class of generalized...

  11. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    Science.gov (United States)

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-01

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
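The B-spline basis underlying such orbital representations is defined by the Cox-de Boor recursion; a minimal, generic illustration follows (this is not the authors' production code, and the knot vector is an arbitrary uniform example):

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline of order k at t."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Cubic (order 4) B-splines on a uniform knot vector; in the interior
# interval [3, 4] the four overlapping basis functions sum to one.
knots = [0, 1, 2, 3, 4, 5, 6, 7]
t = 3.5
total = sum(bspline_basis(i, 4, t, knots) for i in range(4))
print(total)  # partition of unity: ~1.0
```

The compact support visible here (each basis function is nonzero over only k knot spans) is what makes splined orbitals computationally efficient while driving the memory cost the record discusses.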

  12. B-Spline Approximations of the Gaussian, their Gabor Frame Properties, and Approximately Dual Frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Kim, Hong Oh; Kim, Rae Young

    2017-01-01

of a very simple form, leading to “almost perfect reconstruction” within any desired error tolerance whenever the product ab is sufficiently small. In contrast, the known (exact) dual windows have a very complicated form. A similar analysis is sketched with the scaled B-splines replaced by certain...

  13. Integration by cell algorithm for Slater integrals in a spline basis

    International Nuclear Information System (INIS)

    Qiu, Y.; Fischer, C.F.

    1999-01-01

An algorithm for evaluating Slater integrals in a B-spline basis is introduced. Based on the piecewise property of the B-splines, the algorithm divides the two-dimensional (r₁, r₂) region into a number of rectangular cells according to the chosen grid and implements the two-dimensional integration over each individual cell using Gaussian quadrature. Over the off-diagonal cells, the integrands are separable so that each two-dimensional cell-integral is reduced to a product of two one-dimensional integrals. Furthermore, the scaling invariance of the B-splines in the logarithmic region of the chosen grid is fully exploited such that only some of the cell integrations need to be implemented. The values of given Slater integrals are obtained by assembling the cell integrals. This algorithm significantly improves the efficiency and accuracy of the traditional method that relies on the solution of differential equations and renders the B-spline method more effective when applied to multi-electron atomic systems.
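The key reduction described here — a separable integrand over an off-diagonal cell turning a 2-D cell integral into a product of two 1-D Gaussian quadratures — can be shown in miniature. The integrand below is a toy polynomial, not a Slater integrand:

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre rule on [a, b] (exact for cubics)."""
    m, h = 0.5 * (a + b), 0.5 * (b - a)
    x = 1.0 / math.sqrt(3.0)
    return h * (f(m - h * x) + f(m + h * x))

# Over an off-diagonal cell the integrand r1**2 * r2 is separable, so the
# 2-D cell integral reduces to a product of two 1-D integrals.
cell_r1, cell_r2 = (0.0, 1.0), (1.0, 2.0)
I = gauss2(lambda r: r * r, *cell_r1) * gauss2(lambda r: r, *cell_r2)
print(I)  # (1/3) * (3/2) = 0.5, exact since the rule handles degree <= 3
```

Assembling such cell integrals over the full (r₁, r₂) grid, with the diagonal cells handled by genuine 2-D quadrature, is the structure of the algorithm the record outlines.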

  14. Validating the Multidimensional Spline Based Global Aerodynamic Model for the Cessna Citation II

    NARCIS (Netherlands)

    De Visser, C.C.; Mulder, J.A.

    2011-01-01

    The validation of aerodynamic models created using flight test data is a time consuming and often costly process. In this paper a new method for the validation of global nonlinear aerodynamic models based on multivariate simplex splines is presented. This new method uses the unique properties of the

  15. Application of Cubic Box Spline Wavelets in the Analysis of Signal Singularities

    Directory of Open Access Journals (Sweden)

    Rakowski Waldemar

    2015-12-01

Full Text Available In the subject literature, wavelets such as the Mexican hat (the second derivative of a Gaussian) or the quadratic box spline are commonly used for the task of singularity detection. The disadvantage of the Mexican hat, however, is its unlimited support; the disadvantage of the quadratic box spline is a phase shift introduced by the wavelet, making it difficult to locate singular points. The paper deals with the construction and properties of wavelets in the form of cubic box splines which have compact and short support and which do not introduce a phase shift. The digital filters associated with cubic box wavelets that are applied in implementing the discrete dyadic wavelet transform are defined. The filters and the algorithme à trous of the discrete dyadic wavelet transform are used in detecting signal singularities and in calculating the measures of signal singularities in the form of a Lipschitz exponent. The article presents examples illustrating the use of cubic box spline wavelets in the analysis of signal singularities.
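The algorithme à trous the record applies can be illustrated generically. The B3-spline smoothing kernel used below is the common choice in the à trous literature and stands in for the paper's cubic box spline filters (an assumption for illustration, not the authors' exact filter bank):

```python
def a_trous(signal, levels):
    """A trous ("with holes") dyadic wavelet decomposition, B3-spline kernel."""
    h = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]
    c, details = list(signal), []
    for j in range(levels):
        step = 2 ** j                      # holes: the kernel dilates by 2^j
        smooth = []
        for i in range(len(c)):
            acc = 0.0
            for k, hk in enumerate(h):
                idx = i + (k - 2) * step
                idx = min(max(idx, 0), len(c) - 1)   # clamp at the borders
                acc += hk * c[idx]
            smooth.append(acc)
        # Detail (wavelet) coefficients = difference of successive smoothings.
        details.append([a - b for a, b in zip(c, smooth)])
        c = smooth
    return details, c

sig = [0.0] * 8 + [1.0] + [0.0] * 8      # an isolated singularity
d, approx = a_trous(sig, 3)
peak = max(abs(v) for v in d[0])
print(peak)  # level-1 detail coefficients peak exactly at the singularity
```

Tracking how such detail maxima decay across levels is what yields the Lipschitz-exponent estimates the article computes.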

  16. Impact of Missing Data on the Detection of Differential Item Functioning: The Case of Mantel-Haenszel and Logistic Regression Analysis

    Science.gov (United States)

    Robitzsch, Alexander; Rupp, Andre A.

    2009-01-01

    This article describes the results of a simulation study to investigate the impact of missing data on the detection of differential item functioning (DIF). Specifically, it investigates how four methods for dealing with missing data (listwise deletion, zero imputation, two-way imputation, response function imputation) interact with two methods of…

  17. Ridge regression revisited

    NARCIS (Netherlands)

    P.M.C. de Boer (Paul); C.M. Hafner (Christian)

    2005-01-01

We argue in this paper that general ridge (GR) regression implies no major complication compared with simple ridge regression. We introduce a generalization of an explicit GR estimator derived by Hemmerle and by Teekens and de Boer and show that this estimator, which is more

  18. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few of these methods have been applied in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness characteristic. Experiments on background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) over polynomial fitting, Lorentz fitting and model-free methods after background correction. These background correction methods all acquire larger SBR values than that acquired before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273 respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776. Moreover, the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz
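The core idea — interpolate a smooth spline through baseline anchor points chosen away from emission lines, then subtract it — can be sketched with a textbook natural cubic spline. The anchor points and intensities below are invented for illustration, not LIBS data:

```python
def natural_cubic_spline(xs, ys):
    """Interpolating natural cubic spline through the points (xs, ys)."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    alpha = [0.0] + [3 * ((ys[i + 1] - ys[i]) / h[i]
                          - (ys[i] - ys[i - 1]) / h[i - 1]) for i in range(1, n)]
    # Tridiagonal solve for the second-derivative coefficients (natural BCs).
    mu, z = [0.0], [0.0]
    for i in range(1, n):
        li = 2 * (xs[i + 1] - xs[i - 1]) - h[i - 1] * mu[i - 1]
        mu.append(h[i] / li)
        z.append((alpha[i] - h[i - 1] * z[i - 1]) / li)
    c = [0.0] * (n + 1)
    b, d = [0.0] * n, [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (ys[j + 1] - ys[j]) / h[j] - h[j] * (c[j + 1] + 2 * c[j]) / 3
        d[j] = (c[j + 1] - c[j]) / (3 * h[j])

    def evaluate(x):
        j = next((k for k in range(n) if x < xs[k + 1]), n - 1)
        dx = x - xs[j]
        return ys[j] + b[j] * dx + c[j] * dx ** 2 + d[j] * dx ** 3

    return evaluate

# Hypothetical baseline anchor points picked away from emission lines; the
# spline through them estimates the continuous background, which is subtracted.
anchors_x = [0.0, 2.0, 5.0, 8.0, 10.0]
anchors_y = [1.0, 1.4, 2.0, 1.6, 1.1]
background = natural_cubic_spline(anchors_x, anchors_y)
corrected_peak = 5.0 - background(4.0)   # raw intensity 5.0 at an emission line
print(corrected_peak)  # peak height after subtracting the spline background
```

The signal-to-background ratio the record reports is then simply the peak height relative to the estimated background level.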

  19. Differential expression and function of breast regression protein 39 (BRP-39) in murine models of subacute cigarette smoke exposure and allergic airway inflammation

    OpenAIRE

    Coyle Anthony J; Jordana Manel; Bauer Carla MT; Botelho Fernando M; Nikota Jake K; Humbles Alison A; Stampfli Martin R

    2011-01-01

    Abstract Background While the presence of the chitinase-like molecule YKL40 has been reported in COPD and asthma, its relevance to inflammatory processes elicited by cigarette smoke and common environmental allergens, such as house dust mite (HDM), is not well understood. The objective of the current study was to assess expression and function of BRP-39, the murine equivalent of YKL40 in a murine model of cigarette smoke-induced inflammation and contrast expression and function to a model of ...

  20. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." - International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  1. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  2. Multiple Linear Regression

    Science.gov (United States)

    Grégoire, G.

    2014-12-01

This chapter deals with multiple linear regression. That is, we investigate the situation where the mean of a variable depends linearly on a set of covariables. The noise is supposed to be Gaussian. We develop the least squares method to obtain the parameter estimators and estimates of their precisions. This leads to the design of confidence intervals, prediction intervals, global tests, individual tests and, more generally, tests of submodels defined by linear constraints. Methods for model choice and variable selection, measures of the quality of the fit, residual analysis and diagnostic methods are presented. Finally, identification of departures from the model's assumptions and the way to deal with these problems are addressed. A real data set is used to illustrate the methodology with the software R. Note that this chapter is intended to serve as a guide for other regression methods, like logistic regression or AFT models and Cox regression.
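The least squares estimator at the heart of the chapter can be shown in a few lines via the normal equations (X'X)β = X'y; the toy design matrix and noiseless response below are made up so the fitted coefficients are exactly recoverable:

```python
def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X) beta = X'y."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
           for i in range(p)]
    Xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    # Gaussian elimination with partial pivoting on the augmented system.
    A = [row[:] + [rhs] for row, rhs in zip(XtX, Xty)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):
        beta[i] = (A[i][p] - sum(A[i][j] * beta[j]
                                 for j in range(i + 1, p))) / A[i][i]
    return beta

# y = 1 + 2*x1 + 3*x2 generated exactly; the intercept is a constant column.
X = [[1, x1, x2] for x1 in range(4) for x2 in range(4)]
y = [1 + 2 * row[1] + 3 * row[2] for row in X]
beta = fit_ols(X, y)
print(beta)  # ~[1.0, 2.0, 3.0]
```

With noisy data, the same β feeds the confidence intervals, prediction intervals, and submodel tests the chapter develops.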

  3. Glyph: Symbolic Regression Tools

    OpenAIRE

    Quade, Markus; Gout, Julien; Abel, Markus

    2018-01-01

We present Glyph - a Python package for genetic programming based symbolic regression. Glyph is designed for use both with numerical simulations and with real-world experiments. For experimentalists, glyph-remote provides a separation of tasks: a ZeroMQ interface splits the genetic programming optimization task from the evaluation of an experimental (or numerical) run. Glyph can be accessed at http://github.com/ambrosys/glyph . Domain experts are able to employ symbolic regression in their ex...

  4. Pansharpening via sparse regression

    Science.gov (United States)

    Tang, Songze; Xiao, Liang; Liu, Pengfei; Huang, Lili; Zhou, Nan; Xu, Yang

    2017-09-01

Pansharpening is an effective way to enhance the spatial resolution of a multispectral (MS) image by fusing it with a provided panchromatic image. Instead of restricting the coding coefficients of low-resolution (LR) and high-resolution (HR) images to be equal, we propose a pansharpening approach via sparse regression in which the relationship between the sparse coefficients of HR and LR MS images is modeled by ridge regression and elastic-net regression, while simultaneously learning the corresponding dictionaries. The compact dictionaries are learned from patch pairs sampled from the high- and low-resolution images, which can greatly characterize the structural information of the LR MS and HR MS images. Then, taking the complex relationship between the coding coefficients of LR MS and HR MS images into account, ridge regression is used to characterize the relationship of intra-patches, and elastic-net regression is employed to describe the relationship of inter-patches. Thus, the HR MS image can be almost identically reconstructed by multiplying the HR dictionary and the calculated sparse coefficient vector with the learned regression relationship. The simulated and real experimental results illustrate that the proposed method outperforms several well-known methods, both quantitatively and perceptually.

  5. A comparison of discriminant logistic regression and Item Response Theory Likelihood-Ratio Tests for Differential Item Functioning (IRTLRDIF) in polytomous short tests.

    Science.gov (United States)

    Hidalgo, María D; López-Martínez, María D; Gómez-Benito, Juana; Guilera, Georgina

    2016-01-01

Short scales are typically used in the social, behavioural and health sciences. This is relevant since test length can influence whether items showing DIF are correctly flagged. This paper compares the relative effectiveness of discriminant logistic regression (DLR) and IRTLRDIF for detecting DIF in polytomous short tests. A simulation study was designed. Test length, sample size, DIF amount and number of item response categories were manipulated. Type I error and power were evaluated. IRTLRDIF and DLR yielded Type I error rates close to the nominal level in no-DIF conditions. Under DIF conditions, Type I error rates were affected by test length, DIF amount, degree of test contamination, sample size and number of item response categories. DLR showed a higher Type I error rate than did IRTLRDIF. Power rates were affected by DIF amount and sample size, but not by test length. DLR achieved higher power rates than did IRTLRDIF in very short tests, although the high Type I error rate involved means that this result cannot be taken into account. Test length had an important impact on the Type I error rate. IRTLRDIF and DLR showed a low power rate in short tests and with small sample sizes.

  6. Isotonic Regression under Lipschitz Constraint

    Science.gov (United States)

    Wilbur, W.J.

    2018-01-01

    The pool adjacent violators (PAV) algorithm is an efficient technique for the class of isotonic regression problems with complete ordering. The algorithm yields a stepwise isotonic estimate which approximates the function and assigns maximum likelihood to the data. However, if one has reasons to believe that the data were generated by a continuous function, a smoother estimate may provide a better approximation to that function. In this paper, we consider the formulation which assumes that the data were generated by a continuous monotonic function obeying the Lipschitz condition. We propose a new algorithm, the Lipschitz pool adjacent violators (LPAV) algorithm, which approximates that function; we prove the convergence of the algorithm and examine its complexity. PMID:29456266
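The PAV algorithm the record starts from is compact enough to state in full; here is a minimal pure-Python version of the classical (unconstrained) PAV, without the Lipschitz constraint the proposed LPAV adds:

```python
def pav(y, w=None):
    """Pool Adjacent Violators: least-squares monotone (non-decreasing) fit."""
    w = w or [1.0] * len(y)
    vals, wts, sizes = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); sizes.append(1)
        # Merge adjacent blocks while the monotonicity constraint is violated;
        # a merged block takes the weighted mean of its members.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            wtot = wts[-2] + wts[-1]
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / wtot
            wts[-2] = wtot
            sizes[-2] += sizes[-1]
            vals.pop(); wts.pop(); sizes.pop()
    out = []
    for v, s in zip(vals, sizes):
        out.extend([v] * s)
    return out

print(pav([1, 3, 2, 4]))  # [1, 2.5, 2.5, 4] -> the stepwise isotonic estimate
```

The stepwise blocks visible in the output are exactly the feature the record's LPAV smooths by additionally bounding the slope between adjacent fitted values.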

  7. The new age of sudomotor function testing: a sensitive and specific biomarker for diagnosis, estimation of severity, monitoring progression and regression in response to intervention

    Directory of Open Access Journals (Sweden)

    Aaron eVinik

    2015-06-01

Full Text Available Sudorimetry technology has evolved dramatically, as a rapid, non-invasive, robust, and accurate biomarker for small fibers that can easily be integrated into clinical practice. Though skin biopsy with quantitation of intraepidermal nerve fiber density (IENFD) is still currently recognized as the gold standard in the evaluation, sudorimetry may yield diagnostic information not only on autonomic dysfunction, but also enhance the assessment of the small somatosensory nerves, disease detection, progression, and response to therapy. Sudoscan is based on different electrochemical principles (reverse iontophoresis and chronoamperometry) to measure sudomotor function than prior technologies, affording it a much more practical and precise performance profile for routine clinical use with potential as a research tool. Small nerve fiber dysfunction has been found to occur early in metabolic syndrome and diabetes and may also be the only neurological manifestation in small fiber neuropathies, beneath the detection limits of traditional nerve function tests. Test results are robust, accomplished within minutes, require little technical training and no calculations, since established norms have been provided for the effects of age, gender and ethnicity. Sudomotor testing has been greatly under-utilized in the past, restricted to specialist centers capable of handling the technically demanding, invasive biopsies for quantitation of IENF and expensive technology. Yet evaluation of autonomic and somatic nerve function has been shown to be the single best estimate of cardiovascular risk. Evaluation of sweating has the appeal of quantifiable non-invasive determination of the integrity of the peripheral autonomic nervous system, and can now be accomplished with the Sudoscan™ tool rapidly at point-of-care clinics, allowing intervention for morbid complications prior to permanent structural nerve damage. We review here sudomotor function testing technology; the

  8. An Adaptive B-Spline Method for Low-order Image Reconstruction Problems - Final Report - 09/24/1997 - 09/24/2000

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael

    2000-04-11

A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data arising e.g. from noise can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-Splines are used to approximate the object. If a ''proper'' collection of B-Splines is chosen such that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, in which the intuition is that a relatively dense set of fine-scale basis elements should cluster near regions of high curvature, while a sparse collection of basis vectors is required to adequately represent the object over spatially smooth areas. The pruning part is a greedy

  9. Impact of renal function and demographic/anthropomorphic variables on peak thyrotropin after recombinant human thyrotropin stimulation: a stepwise forward multiple-regression analysis.

    Science.gov (United States)

    Hautzel, Hubertus; Pisar, Elisabeth; Lindner, David; Schott, Matthias; Grandt, Rüdiger; Müller, Hans-Wilhelm

    2013-06-01

When applying the recommended standard doses of recombinant human thyrotropin (rhTSH) in the diagnostic/therapeutic management of patients with differentiated thyroid cancer (DTC), the resulting peak TSH levels vary extensively. Previous studies applying multivariate statistics identified patient-inherent variables influencing the rhTSH/peak TSH relation. However, those results were inconclusive and partly conflicting. Notably, no independent role of renal function was substantiated, despite the fact that the kidneys are known to play a prominent role in TSH clearance from blood. Therefore, the study's aim was to investigate the impact of renal function on the peak TSH concentration after the standard administration of rhTSH used in the management of thyroid cancer. The second objective was to calculate a ranking regarding the effect sizes of the selected variables on the peak TSH. There were 286 patients with DTC included in the study. Univariate and multivariate analyses were performed, testing the correlation of serum creatinine and glomerular filtration rate (GFR) as surrogate parameters of renal function, age, sex, weight, height, and body surface area (BSA) with the peak TSH level. In six additional patients, the subsequent TSH pharmacokinetics after the TSH peak were measured and qualitatively compared. By univariate analyses, TSH correlated negatively with BSA, GFR, weight, and height, and positively with age, female sex, and serum creatinine (prenal function as the two most influential independent variables, followed by age, sex, and height. The pharmacokinetic datasets indicated that these identified parameters also influence the TSH decline over time. Identifying those patients with a favorable combination of parameters predicting a high-peak TSH is the first step toward an individualization of rhTSH dosing. Additionally, the subsequent TSH decrease over time needs to be taken into account. A complete understanding of the interrelation of the identified

  10. The use of Bayesian nonlinear regression techniques for the modelling of the retention behaviour of volatile components of Artemisia species.

    Science.gov (United States)

    Jalali-Heravi, M; Mani-Varnosfaderani, A; Taherinia, D; Mahmoodi, M M

    2012-07-01

The main aim of this work was to assess the ability of Bayesian multivariate adaptive regression splines (BMARS) and Bayesian radial basis function (BRBF) techniques to model the gas chromatographic retention indices of volatile components of Artemisia species. A diverse set of molecular descriptors was calculated and used as the descriptor pool for modelling the retention indices. The ability of the BMARS and BRBF techniques was explored for the selection of the most relevant descriptors and proper basis functions for modelling. The results revealed that the BRBF technique is more reproducible than BMARS for modelling the retention indices and can be used as a method for variable selection and modelling in quantitative structure-property relationship (QSPR) studies. It is also concluded that the Markov chain Monte Carlo (MCMC) search engine, implemented in the BRBF algorithm, is a suitable method for selecting the most important features from a vast number of them. The values of correlation between the calculated retention indices and the experimental ones for the training and prediction sets (0.935 and 0.902, respectively) revealed the prediction power of the BRBF model in estimating the retention index of volatile components of Artemisia species.
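A radial basis function regression can be sketched without the Bayesian/MCMC machinery the paper uses; the least-squares Gaussian-RBF fit below is a simplified stand-in, and the target function, centers, width, and ridge term are arbitrary illustrative choices:

```python
import numpy as np

def rbf_fit(x, y, centers, width, lam=1e-8):
    """Least-squares fit of a Gaussian radial basis function expansion."""
    Phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    # Ridge-regularized normal equations for the basis weights.
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ y)
    return lambda t: np.exp(-((t[:, None] - centers[None, :]) / width) ** 2) @ w

# Fit a smooth toy target with 10 Gaussian basis functions.
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x)
model = rbf_fit(x, y, centers=np.linspace(0, 1, 10), width=0.15)
err = np.max(np.abs(model(x) - y))
print(err)  # small residual on the training grid
```

A Bayesian treatment such as BRBF replaces this single least-squares solve with posterior sampling over the basis functions and weights, which is what enables the descriptor selection the record describes.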

  11. Time-varying Markov regression random-effect model with Bayesian estimation procedures: Application to dynamics of functional recovery in patients with stroke.

    Science.gov (United States)

    Pan, Shin-Liang; Chen, Hsiu-Hsi

    2010-09-01

The rates of functional recovery after stroke tend to decrease with time. Time-varying Markov processes (TVMP) may be more biologically plausible than time-invariant Markov processes for modeling such data. However, analysis of such stochastic processes, particularly tackling reversible transitions and the incorporation of random effects into models, can be analytically intractable. We make use of ordinary differential equations to solve continuous-time TVMP with reversible transitions. The proportional hazard form was used to assess the effects of an individual's covariates on multi-state transitions, with the incorporation of random effects that capture the residual variation after being explained by measured covariates under the concept of the generalized linear model. We further built up a Bayesian directed acyclic graphical model to obtain the full joint posterior distribution. Markov chain Monte Carlo (MCMC) with Gibbs sampling was applied to estimate parameters based on posterior marginal distributions with multiple integrands. The proposed method was illustrated with empirical data from a study on functional recovery after stroke. Copyright 2010 Elsevier Inc. All rights reserved.

  12. Minimax Regression Quantiles

    DEFF Research Database (Denmark)

    Bache, Stefan Holst

A new and alternative quantile regression estimator is developed and it is shown that the estimator is root n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear- and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice but whether it has theoretical justification is still an open question.
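For context, the usual quantile regression estimator to which the minimax estimator is asymptotically equivalent minimizes the pinball (check) loss. A location-model sketch, with the candidate minimizers restricted to the data points for simplicity (a deliberate simplification, not the paper's algorithm):

```python
def check_loss(u, tau):
    """Pinball (check) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def location_quantile(ys, tau):
    """Minimize the empirical check loss over candidate locations (data points)."""
    return min(ys, key=lambda q: sum(check_loss(y - q, tau) for y in ys))

data = [1, 2, 3, 4, 100]
print(location_quantile(data, 0.5))   # median-type fit: 3, robust to the outlier
print(location_quantile(data, 0.9))   # upper quantile pulled toward large values
```

In full quantile regression the location q is replaced by a (linear or nonlinear) function of covariates, and the same loss is minimized over its parameters.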

  13. Practical Session: Logistic Regression

    Science.gov (United States)

    Clausel, M.; Grégoire, G.

    2014-12-01

    An exercise is proposed to illustrate the logistic regression. One investigates the different risk factors in the apparition of coronary heart disease. It has been proposed in Chapter 5 of the book of D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science Business Media, LLC (2010) and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.
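The model exercised in this session can be sketched end-to-end with a plain gradient-descent fit; the toy risk-factor data below are invented for illustration and are not the evans.txt data:

```python
import math

def fit_logistic(X, y, lr=0.1, iters=5000):
    """Logistic regression fitted by gradient ascent on the log-likelihood."""
    w = [0.0] * len(X[0])
    for _ in range(iters):
        for xi, yi in zip(X, y):
            # Predicted probability under the current weights.
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            # Per-sample likelihood gradient step.
            w = [wj + lr * (yi - p) * xj / len(X) for wj, xj in zip(w, xi)]
    return w

# Toy data: intercept column plus one centered, scaled risk factor.
X = [[1.0, x] for x in (-2, -1, -0.5, 0.5, 1, 2)]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)
p = 1.0 / (1.0 + math.exp(-(w[0] + w[1] * 2.0)))
print(p)  # predicted risk at covariate value 2.0, close to 1
```

In the actual exercise the covariate vector would hold the coronary-heart-disease risk factors, and R's `glm(..., family = binomial)` would replace this hand-rolled fit.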

  14. Differential expression and function of breast regression protein 39 (BRP-39 in murine models of subacute cigarette smoke exposure and allergic airway inflammation

    Directory of Open Access Journals (Sweden)

    Coyle Anthony J

    2011-04-01

Full Text Available Abstract Background While the presence of the chitinase-like molecule YKL40 has been reported in COPD and asthma, its relevance to inflammatory processes elicited by cigarette smoke and common environmental allergens, such as house dust mite (HDM), is not well understood. The objective of the current study was to assess expression and function of BRP-39, the murine equivalent of YKL40, in a murine model of cigarette smoke-induced inflammation and contrast expression and function to a model of HDM-induced allergic airway inflammation. Methods CD1, C57BL/6, and BALB/c mice were room air- or cigarette smoke-exposed for 4 days in a whole-body exposure system. In separate experiments, BALB/c mice were challenged with HDM extract once a day for 10 days. BRP-39 was assessed by ELISA and immunohistochemistry. IL-13, IL-1R1, IL-18, and BRP-39 knock out (KO) mice were utilized to assess the mechanism and relevance of BRP-39 in cigarette smoke- and HDM-induced airway inflammation. Results Cigarette smoke exposure elicited a robust induction of BRP-39 but not the catalytically active chitinase, AMCase, in lung epithelial cells and alveolar macrophages of all mouse strains tested. Both BRP-39 and AMCase were increased in lung tissue after HDM exposure. Examining smoke-exposed IL-1R1, IL-18, and IL-13 deficient mice, BRP-39 induction was found to be IL-1 and not IL-18 or IL-13 dependent, while induction of BRP-39 by HDM was independent of IL-1 and IL-13. Despite the importance of BRP-39 in cellular inflammation in HDM-induced airway inflammation, BRP-39 was found to be redundant for cigarette smoke-induced airway inflammation and the adjuvant properties of cigarette smoke. Conclusions These data highlight the contrast between the importance of BRP-39 in HDM- and cigarette smoke-induced inflammation. While functionally important in HDM-induced inflammation, BRP-39 is a biomarker of cigarette smoke induced inflammation which is the byproduct of an IL-1

  15. A Novel Method for Gearbox Fault Detection Based on Biorthogonal B-spline Wavelet

    Directory of Open Access Journals (Sweden)

    Guangbin ZHANG

    2011-10-01

Full Text Available Localized defects of a gearbox tend to result in periodic impulses in the vibration signal, which contain important information for system dynamics analysis. So parameter identification of impulses provides an effective approach for gearbox fault diagnosis. The biorthogonal B-spline wavelet has the properties of compact support, high vanishing moments and symmetry, which are suitable for signal de-noising, fast calculation, and reconstruction. Thus, a novel time-frequency distribution method is presented for gear fault diagnosis using the biorthogonal B-spline wavelet. A simulation study concerning a singularity signal shows that this wavelet is effective in identifying the fault feature with the coefficients map and coefficients line. Furthermore, an integrated approach consisting of wavelet decomposition, Hilbert transform and power spectral density is used in applications. The results indicate that this method can extract the gearbox fault characteristics and diagnose the fault patterns effectively.

  16. Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.

    Science.gov (United States)

    Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco

    2015-04-20

    Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which exploits the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced soft-tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information to mammographic investigations.

  17. High Accuracy Spline Explicit Group (SEG) Approximation for Two Dimensional Elliptic Boundary Value Problems.

    Directory of Open Access Journals (Sweden)

    Joan Goh

    Full Text Available Over the last few decades, cubic splines have been widely used to approximate differential equations due to their ability to produce highly accurate solutions. In this paper, the numerical solution of a two-dimensional elliptic partial differential equation is treated by a specific cubic spline approximation in the x-direction and finite differences in the y-direction. A four-point explicit group (EG) iterative scheme with an acceleration tool is then applied to the obtained system. The formulation and implementation of the method for solving physical problems are presented in detail. The computational complexity is also discussed, and comparative results are tabulated to illustrate the efficiency of the proposed method.
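    As a point of reference for schemes like the one above, the classical five-point finite-difference discretization of a 2-D elliptic (Laplace) problem can be solved by simple Jacobi sweeps; the paper's SEG method accelerates exactly this kind of iteration by grouping points and using spline approximation in one direction. A minimal numpy sketch of the plain five-point baseline (not the authors' scheme):

    ```python
    import numpy as np

    # Laplace equation u_xx + u_yy = 0 on the unit square with u = x*y on the
    # boundary; u(x, y) = x*y is harmonic and satisfies the five-point
    # stencil exactly, so it is also the exact discrete solution.
    n = 21
    x = np.linspace(0.0, 1.0, n)
    exact = np.outer(x, x)          # exact[i, j] = x_i * y_j

    u = exact.copy()
    u[1:-1, 1:-1] = 0.0             # keep boundary values, zero the interior

    for _ in range(2000):           # Jacobi sweeps (RHS evaluated before assignment)
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2])

    err = np.abs(u - exact).max()
    ```

    An explicit group scheme instead updates small blocks of points per step and, with spline approximation in one direction, reaches higher accuracy per grid point than this point-wise baseline.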

  18. Selected Aspects of Wear Affecting Keyed Joints and Spline Connections During Operation of Aircrafts

    Directory of Open Access Journals (Sweden)

    Gębura Andrzej

    2014-12-01

    Full Text Available The paper deals with selected deficiencies of spline connections, such as angular or parallel misalignment (eccentricity) and excessive play. It is emphasized how important these deficiencies are for smooth operation of the entire driving unit. The aim of the study is to provide a reference list of such deficiencies, including visual symptoms of wear, specification of mechanical measurements for mating surfaces, a mathematical description of waveforms for the dynamic variability of motion in such connections, and visualizations of the connection behaviour acquired with the use of the FAM-C and FDM-A methods. Attention is paid to hazards to flight safety when excessively worn spline connections are operated for long periods of time.

  19. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    Science.gov (United States)

    Jarosch, H. S.

    1982-01-01

    A method based on constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  20. Clustering Time-Series Gene Expression Data Using Smoothing Spline Derivatives

    Directory of Open Access Journals (Sweden)

    Martin PGP

    2007-01-01

    Full Text Available Microarray data acquired during time-course experiments allow the temporal variations in gene expression to be monitored. An original postprandial fasting experiment was conducted in the mouse and the expression of 200 genes was monitored with a dedicated macroarray at 11 time points between 0 and 72 hours of fasting. The aim of this study was to provide a relevant clustering of gene expression temporal profiles. This was achieved by focusing on the shapes of the curves rather than on the absolute level of expression. Specifically, we combined spline smoothing and first derivative computation with hierarchical and partitioning clustering. A heuristic approach was proposed to tune the spline smoothing parameter using both statistical and biological considerations. Clusters are illustrated a posteriori through principal component analysis and heatmap visualization. Most results were found to be in agreement with the literature on the effects of fasting on the mouse liver and provide promising directions for future biological investigations.
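    The pipeline described above (spline smoothing, then clustering on first derivatives so that curve shape rather than expression level drives the grouping) can be sketched with SciPy. The profiles, noise level, and smoothing parameter below are illustrative stand-ins, not the study's data:

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.cluster.hierarchy import linkage, fcluster

    t = np.linspace(0.0, 72.0, 11)   # 11 time points over 72 h, as in the study
    rng = np.random.default_rng(0)

    # Two synthetic "shape families": rising and falling profiles plus noise.
    rising = [np.tanh((t - 24.0) / 12.0) + 0.01 * rng.standard_normal(t.size)
              for _ in range(5)]
    falling = [-np.tanh((t - 24.0) / 12.0) + 0.01 * rng.standard_normal(t.size)
               for _ in range(5)]
    profiles = np.array(rising + falling)

    # Smooth each profile, then cluster on the spline's first derivative;
    # s is set near the expected residual sum of squares of the noise.
    derivs = np.array([UnivariateSpline(t, y, k=3, s=t.size * 0.01**2)
                       .derivative()(t) for y in profiles])
    labels = fcluster(linkage(derivs, method="ward"), t=2, criterion="maxclust")
    ```

    Clustering on derivatives groups the five rising and five falling curves together regardless of their absolute levels.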

  1. Clustering Time-Series Gene Expression Data Using Smoothing Spline Derivatives

    Directory of Open Access Journals (Sweden)

    S. Déjean

    2007-06-01

    Full Text Available Microarray data acquired during time-course experiments allow the temporal variations in gene expression to be monitored. An original postprandial fasting experiment was conducted in the mouse and the expression of 200 genes was monitored with a dedicated macroarray at 11 time points between 0 and 72 hours of fasting. The aim of this study was to provide a relevant clustering of gene expression temporal profiles. This was achieved by focusing on the shapes of the curves rather than on the absolute level of expression. Specifically, we combined spline smoothing and first derivative computation with hierarchical and partitioning clustering. A heuristic approach was proposed to tune the spline smoothing parameter using both statistical and biological considerations. Clusters are illustrated a posteriori through principal component analysis and heatmap visualization. Most results were found to be in agreement with the literature on the effects of fasting on the mouse liver and provide promising directions for future biological investigations.

  2. Analysis of crustal structure of Venus utilizing residual Line-of-Sight (LOS) gravity acceleration and surface topography data. A trial of global modeling of Venus gravity field using harmonic spline method

    Science.gov (United States)

    Fang, Ming; Bowin, Carl

    1992-01-01

    To construct Venus' gravity disturbance field (or gravity anomaly) with the spacecraft-observer line-of-sight (LOS) acceleration perturbation data, both a global and a local approach can be used. The global approach, e.g., spherical harmonic coefficients, and the local approach, e.g., the integral operator method, based on geodetic techniques are generally not the same, so that they must be used separately for mapping long-wavelength features and short-wavelength features. Harmonic spline, as an interpolation and extrapolation technique, is intrinsically suited to both global and local mapping of a potential field. Theoretically, it preserves the information of the potential field up to the bound given by the sampling theorem, regardless of whether the mapping is global or local, and is not affected by truncation errors. The improvement of harmonic spline methodology for global mapping is reported. New basis functions, a singular value decomposition (SVD) based modification to Parker & Shure's numerical procedure, and preliminary results are presented.

  3. Multiple linear regression analysis

    Science.gov (United States)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.

  4. Mechanisms of neuroblastoma regression

    Science.gov (United States)

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  5. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an

  6. Cyclic reduction and FACR methods for piecewise hermite bicubic orthogonal spline collocation

    Science.gov (United States)

    Bialecki, Bernard

    1994-09-01

    Cyclic reduction and Fourier analysis-cyclic reduction (FACR) methods are presented for the solution of the linear systems which arise when orthogonal spline collocation with piecewise Hermite bicubics is applied to boundary value problems for certain separable partial differential equations on a rectangle. On an N×N uniform partition, the cyclic reduction and Fourier analysis-cyclic reduction methods require O(N² log₂ N) and O(N² log₂ log₂ N) arithmetic operations, respectively.

  7. Explicit Gaussian quadrature rules for C^1 cubic splines with symmetrically stretched knot sequence

    KAUST Repository

    Ait-Haddou, Rachid

    2015-06-19

    We provide explicit expressions for quadrature rules on the space of C^1 cubic splines with non-uniform, symmetrically stretched knot sequences. The quadrature nodes and weights are derived via an explicit recursion that avoids the use of any numerical solver, and the rule is optimal, that is, it requires a minimal number of nodes. Numerical experiments validating the theoretical results and the error estimates of the quadrature rules are also presented.
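    The benchmark such rules improve on is per-interval Gauss-Legendre quadrature: two nodes per knot interval already integrate any cubic spline exactly, and the paper's optimal rules need even fewer nodes globally. A quick check of the two-nodes-per-interval fact, using SciPy's CubicSpline on an arbitrarily chosen non-uniform knot sequence:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    knots = np.array([0.0, 0.7, 1.5, 2.3, 3.0])    # non-uniform knot sequence
    spl = CubicSpline(knots, np.sin(knots))         # some C^2 cubic spline

    nodes, weights = np.polynomial.legendre.leggauss(2)  # exact for degree <= 3
    total = 0.0
    for a, b in zip(knots[:-1], knots[1:]):
        x = 0.5 * (b - a) * nodes + 0.5 * (a + b)   # map nodes from [-1, 1] to [a, b]
        total += 0.5 * (b - a) * (weights @ spl(x))

    exact = spl.integrate(knots[0], knots[-1])      # spline's own exact integral
    ```

    The two results agree to machine precision because each spline piece is a polynomial of degree at most three.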

  8. A fourth order spline collocation approach for a business cycle model

    Science.gov (United States)

    Sayfy, A.; Khoury, S.; Ibdah, H.

    2013-10-01

    A collocation approach, based on fourth-order cubic B-splines, is presented for the numerical solution of a Kaleckian business cycle model formulated by a nonlinear delay differential equation. The equation is approximated and the nonlinearity is handled by employing an iterative scheme arising from Newton's method. It is shown that the model exhibits a conditionally dynamically stable cycle. The fourth-order rate of convergence of the scheme is verified numerically for different special cases.

  9. Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method

    Science.gov (United States)

    Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.

    1997-01-01

    A numerical method based on b-spline polynomials was developed to study incompressible flows in cylindrical geometries. A b-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because b-splines have compact support, it is also possible to remove b-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic, so-called time evolving flow, greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near wall region are physical; regions of high radial velocity were detected and appear to be related to high speed streaks in the boundary layer. Budgets of Reynolds stress transport equations showed close similarity with those of channel flow. However contrary to channel flow, the log layer of pipe flow is not homogeneous for the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.

  10. Discrete quintic spline for boundary value problem in plate deflation theory

    Science.gov (United States)

    Wong, Patricia J. Y.

    2017-07-01

    We propose a numerical scheme for a fourth-order boundary value problem arising from plate deflation theory. The scheme involves a discrete quintic spline, and it is of order 4 if a parameter takes a specific value, else it is of order 2. We also present a well known numerical example to illustrate the efficiency of our method as well as to compare with other numerical methods proposed in the literature.

  11. Nonlinear Multivariate Spline-Based Control Allocation for High-Performance Aircraft

    OpenAIRE

    Tol, H.J.; De Visser, C.C.; Van Kampen, E.; Chu, Q.P.

    2014-01-01

    High performance flight control systems based on the nonlinear dynamic inversion (NDI) principle require highly accurate models of aircraft aerodynamics. In general, the accuracy of the internal model determines to what degree the system nonlinearities can be canceled; the more accurate the model, the better the cancellation, and with that, the higher the performance of the controller. In this paper a new control system is presented that combines NDI with multivariate simplex spline based control allocation.

  12. A splitting algorithm for the wavelet transform of cubic splines on a nonuniform grid

    Science.gov (United States)

    Sulaimanov, Z. M.; Shumilov, B. M.

    2017-10-01

    For cubic splines with nonuniform nodes, splitting with respect to the even and odd nodes is used to obtain a wavelet expansion algorithm in the form of the solution to a three-diagonal system of linear algebraic equations for the coefficients. Computations by hand are used to investigate the application of this algorithm for numerical differentiation. The results are illustrated by solving a prediction problem.

  13. Parametric and Nonparametric Empirical Regression Models: Case Study of Copper Bromide Laser Generation

    Directory of Open Access Journals (Sweden)

    S. G. Gocheva-Ilieva

    2010-01-01

    Full Text Available In order to model the output laser power of a copper bromide laser with wavelengths of 510.6 and 578.2 nm we have applied two regression techniques—multiple linear regression and multivariate adaptive regression splines. The models have been constructed on the basis of PCA factors for historical data. The influence of first- and second-order interactions between predictors has been taken into account. The models are easily interpreted and have good prediction power, which is established from the results of their validation. The comparison of the derived models shows that those based on multivariate adaptive regression splines have an advantage over the others. The obtained results allow for the clarification of relationships between laser generation and the observed laser input variables, for better determining their influence on laser generation, in order to improve the experimental setup and laser production technology. They can be useful for evaluation of known experiments as well as for prediction of future experiments. The developed modeling methodology is also applicable for a wide range of similar laser devices—metal vapor lasers and gas lasers.
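    At its core, multivariate adaptive regression splines builds a linear model on hinge functions max(0, x − k); fitting such a basis by least squares shows why it outperforms a plain linear fit on data with a slope change. A toy single-variable sketch (the knot location is assumed known here, whereas MARS searches for it adaptively):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 200)
    # Piecewise-linear truth: slope 1 before x = 5, slope 3 after, plus noise.
    y = np.where(x < 5.0, x, 5.0 + 3.0 * (x - 5.0)) + 0.1 * rng.standard_normal(x.size)

    # Hinge-function design matrix: [1, x, max(0, x - 5)].
    X_hinge = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - 5.0)])
    coef, *_ = np.linalg.lstsq(X_hinge, y, rcond=None)
    mse_hinge = np.mean((X_hinge @ coef - y) ** 2)

    # Plain linear fit for comparison.
    X_lin = np.column_stack([np.ones_like(x), x])
    coef_lin, *_ = np.linalg.lstsq(X_lin, y, rcond=None)
    mse_lin = np.mean((X_lin @ coef_lin - y) ** 2)
    ```

    The hinge coefficient estimates the slope change (3 − 1 = 2) directly, which is what makes such models easy to interpret.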

  14. On developing B-spline registration algorithms for multi-core processors.

    Science.gov (United States)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-11-07

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  15. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  16. Producing The New Regressive Left

    DEFF Research Database (Denmark)

    Crone, Christine

    This thesis is the first comprehensive research work conducted on the Beirut based TV station, an important representative of the post-2011 generation of Arab satellite news media. The launch of al-Mayadeen in June 2012 was closely linked to the political developments across the Arab world...... members, this thesis investigates a growing political trend and ideological discourse in the Arab world that I have called The New Regressive Left. On the premise that a media outlet can function as a forum for ideology production, the thesis argues that an analysis of this material can help to trace...... the contexture of The New Regressive Left. If the first part of the thesis lays out the theoretical approach and draws the contextual framework, through an exploration of the surrounding Arab media-and ideoscapes, the second part is an analytical investigation of the discourse that permeates the programmes aired...

  17. Multispectral colormapping using penalized least square regression

    DEFF Research Database (Denmark)

    Dissing, Bjørn Skovlund; Carstensen, Jens Michael; Larsen, Rasmus

    2010-01-01

    -XYZ color matching functions. The target of the regression is a well known color chart, and the models are validated using leave one out cross validation in order to maintain best possible generalization ability. The authors compare the method with a direct linear regression and see...

  18. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable selection.

  19. On the equivalence of spherical splines with least-squares collocation and Stokes's formula for regional geoid computation

    Science.gov (United States)

    Ophaug, Vegard; Gerlach, Christian

    2017-11-01

    This work is an investigation of three methods for regional geoid computation: Stokes's formula, least-squares collocation (LSC), and spherical radial basis functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223-232, 1995. doi: 10.1007/BF00806734) that the equivalence of Stokes's formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has been introduced also in the SK. Ultimately, we present numerical examples comparing Stokes's formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All three methods agree at the millimeter level.

  20. A comparison of three methods of assessing differential item functioning (DIF) in the Hospital Anxiety Depression Scale: ordinal logistic regression, Rasch analysis and the Mantel chi-square procedure.

    Science.gov (United States)

    Cameron, Isobel M; Scott, Neil W; Adler, Mats; Reid, Ian C

    2014-12-01

    It is important for clinical practice and research that measurement scales of well-being and quality of life exhibit only minimal differential item functioning (DIF). DIF occurs where different groups of people endorse items in a scale to different extents after being matched by the intended scale attribute. We investigate the equivalence or otherwise of common methods of assessing DIF. Three methods of measuring age- and sex-related DIF (ordinal logistic regression, Rasch analysis and the Mantel χ² procedure) were applied to Hospital Anxiety Depression Scale (HADS) data pertaining to a sample of 1,068 patients consulting primary care practitioners. Three items were flagged by all three approaches as having either age- or sex-related DIF with a consistent direction of effect; a further three items identified did not meet stricter criteria for important DIF using at least one method. When applying strict criteria for significant DIF, ordinal logistic regression was slightly less sensitive. Ordinal logistic regression, Rasch analysis and contingency table methods yielded consistent results when identifying DIF in the HADS depression and HADS anxiety scales. Regardless of the methods applied, investigators should use a combination of statistical significance, magnitude of the DIF effect and investigator judgement when interpreting the results.
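    Of the three approaches, the Mantel procedure is the most compact to state: respondents are stratified by total score, and a chi-square statistic is pooled over each stratum's 2×2 (group × endorsement) table. A numpy sketch with made-up counts (illustrative only, not HADS data):

    ```python
    import numpy as np

    def mantel_haenszel_chi2(tables):
        """Mantel-Haenszel chi-square (no continuity correction) over K 2x2 tables."""
        t = np.asarray(tables, dtype=float)
        a, b = t[:, 0, 0], t[:, 0, 1]   # group 1: endorsed / not endorsed
        c, d = t[:, 1, 0], t[:, 1, 1]   # group 2: endorsed / not endorsed
        n = a + b + c + d
        e = (a + b) * (a + c) / n                                      # E[a] per stratum
        v = (a + b) * (c + d) * (a + c) * (b + d) / (n**2 * (n - 1))   # Var[a]
        return (a.sum() - e.sum()) ** 2 / v.sum()

    # Item showing DIF: group 1 endorses more within every score stratum.
    chi2_dif = mantel_haenszel_chi2([[[30, 10], [20, 20]],
                                     [[25, 5], [15, 15]]])
    # Item without DIF: identical endorsement rates within each stratum.
    chi2_null = mantel_haenszel_chi2([[[20, 20], [20, 20]],
                                      [[10, 10], [10, 10]]])
    ```

    Comparing the statistic against the χ² critical value (3.84 at the 5% level, 1 df) flags the first item and clears the second.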

  1. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favored approaches is based on the neighbourhood of the region. Unfortunately, there are some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to accommodate the issue. In this paper, we will focus on spatial modeling with GPR for binomial data with a logit link function. The performance of the model will be investigated. We will discuss inference: how to estimate the parameters and hyper-parameters, and how to predict as well. Furthermore, simulation studies will be explained in the last section.
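    The machinery underneath such a model is ordinary GP regression: a covariance kernel over inputs and a posterior mean of the form K_* K⁻¹ y. The binomial/logit-link version in the paper requires approximate inference on top of this; the numpy sketch below shows only the Gaussian-likelihood core on synthetic 1-D data:

    ```python
    import numpy as np

    def rbf_kernel(a, b, length=1.0, amp=1.0):
        """Squared-exponential covariance between two 1-D input vectors."""
        diff = a[:, None] - b[None, :]
        return amp**2 * np.exp(-0.5 * (diff / length) ** 2)

    x_train = np.linspace(0.0, 5.0, 25)
    y_train = np.sin(x_train)

    # Posterior mean at test inputs: K_* K^{-1} y (jitter keeps K well conditioned).
    K = rbf_kernel(x_train, x_train) + 1e-6 * np.eye(x_train.size)
    alpha = np.linalg.solve(K, y_train)

    x_test = np.array([1.3, 2.7, 4.1])
    mu = rbf_kernel(x_test, x_train) @ alpha
    ```

    The kernel length-scale plays the role the neighbourhood structure plays in lattice-based spatial models, which is what makes prediction at unobserved locations straightforward.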

  2. Subset selection in regression

    CERN Document Server

    Miller, Alan

    2002-01-01

    Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the second edition: a separate chapter on Bayesian methods; complete revision of the chapter on estimation; a major example from the field of near-infrared spectroscopy; more emphasis on cross-validation; greater focus on bootstrapping; stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible; software available on the Internet for implementing many of the algorithms presented; and more examples. Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...

  3. Ridge Regression Signal Processing

    Science.gov (United States)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
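    Ridge regression's role under poor geometry is easiest to see in its batch closed form: adding λI to the normal equations stabilizes nearly collinear designs and always shrinks the coefficient norm relative to least squares. A numpy sketch on synthetic data (the article's recursive, EKF-style variant builds on this same normal-equation core):

    ```python
    import numpy as np

    def ridge(X, y, lam):
        """Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y."""
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    rng = np.random.default_rng(2)
    n = 100
    x1 = rng.standard_normal(n)
    x2 = x1 + 0.01 * rng.standard_normal(n)      # nearly collinear regressor
    X = np.column_stack([x1, x2])
    y = x1 + x2 + 0.1 * rng.standard_normal(n)   # true coefficients (1, 1)

    b_ols = ridge(X, y, 0.0)    # ordinary least squares (lam = 0)
    b_ridge = ridge(X, y, 1.0)  # ridge pulls the collinear pair together
    ```

    With collinear columns the OLS coefficients can split arbitrarily between x1 and x2, while ridge keeps them nearly equal and close to the truth, which is exactly the stabilization wanted under poor satellite geometry.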

  4. Regression in organizational leadership.

    Science.gov (United States)

    Kernberg, O F

    1979-02-01

    The choice of good leaders is a major task for all organizations. Information regarding the prospective administrator's personality should complement questions regarding his previous experience, his general conceptual skills, his technical knowledge, and the specific skills in the area for which he is being selected. The growing psychoanalytic knowledge about the crucial importance of internal, in contrast to external, object relations, and about the mutual relationships of regression in individuals and in groups, constitutes an important practical tool for the selection of leaders.

  5. Classification and regression trees

    CERN Document Server

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

  6. Better Autologistic Regression

    Directory of Open Access Journals (Sweden)

    Mark A. Wolters

    2017-11-01

    Full Text Available Autologistic regression is an important probability model for dichotomous random variables observed along with covariate information. It has been used in various fields for analyzing binary data possessing spatial or network structure. The model can be viewed as an extension of the autologistic model (also known as the Ising model, quadratic exponential binary distribution, or Boltzmann machine) to include covariates. It can also be viewed as an extension of logistic regression to handle responses that are not independent. Not all authors use exactly the same form of the autologistic regression model. Variations of the model differ in two respects. First, the variable coding—the two numbers used to represent the two possible states of the variables—might differ. Common coding choices are (zero, one) and (minus one, plus one). Second, the model might appear in either of two algebraic forms: a standard form, or a recently proposed centered form. Little attention has been paid to the effect of these differences, and the literature shows ambiguity about their importance. It is shown here that changes to either coding or centering in fact produce distinct, non-nested probability models. Theoretical results, numerical studies, and analysis of an ecological data set all show that the differences among the models can be large and practically significant. Understanding the nature of the differences and making appropriate modeling choices can lead to significantly improved autologistic regression analyses. The results strongly suggest that the standard model with plus/minus coding, which we call the symmetric autologistic model, is the most natural choice among the autologistic variants.
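    The claim that coding changes produce genuinely different models can be checked by brute-force enumeration on a tiny two-variable autologistic model: holding the parameters fixed and switching the coding from (0, 1) to (−1, +1) changes the implied probabilities. A short sketch (the parameter values are arbitrary):

    ```python
    import numpy as np
    from itertools import product

    def marginal_high(coding, alpha=0.5, lam=1.0):
        """P(z1 = high) in a two-node autologistic model, by enumeration.

        Unnormalized mass of a state: exp(alpha*(z1 + z2) + lam*z1*z2).
        """
        lo, hi = coding
        states = list(product([lo, hi], repeat=2))
        w = np.array([np.exp(alpha * (z1 + z2) + lam * z1 * z2)
                      for z1, z2 in states])
        p = w / w.sum()
        return sum(pi for (z1, _), pi in zip(states, p) if z1 == hi)

    p_01 = marginal_high((0.0, 1.0))     # zero/one coding
    p_pm = marginal_high((-1.0, 1.0))    # minus-one/plus-one coding
    ```

    With these parameters the two codings give P(z1 = high) of about 0.77 versus 0.85: the same parameter values define distinct probability models, which is why the coding choice matters.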

  7. Aid and growth regressions

    DEFF Research Database (Denmark)

    Hansen, Henrik; Tarp, Finn

    2001-01-01

    . There are, however, decreasing returns to aid, and the estimated effectiveness of aid is highly sensitive to the choice of estimator and the set of control variables. When investment and human capital are controlled for, no positive effect of aid is found. Yet, aid continues to impact on growth via...... investment. We conclude by stressing the need for more theoretical work before this kind of cross-country regressions are used for policy purposes....

  8. Logistic regression models

    CERN Document Server

    Hilbe, Joseph M

    2009-01-01

    This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect-great clarity.The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted so that the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … .-Annette J. Dobson, Biometric...

  9. Ridge Regression: A Regression Procedure for Analyzing correlated Independent Variables

    Science.gov (United States)

    Rakow, Ernest A.

    1978-01-01

    Ridge regression is a technique used to ameliorate the problem of highly correlated independent variables in multiple regression analysis. This paper explains the fundamentals of ridge regression and illustrates its use. (JKS)
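The closed form behind ridge regression can be sketched in a few lines. This is a generic illustration (not from the paper): the estimator (X'X + kI)^{-1}X'y with ridge constant k, applied to two nearly collinear predictors, where ordinary least squares (k = 0) is unstable.

```python
import numpy as np

# Illustrative data: x2 is almost a copy of x1, so X'X is near-singular.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)      # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

def ridge(X, y, k):
    # closed-form ridge estimator: (X'X + k I)^{-1} X'y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)     # ordinary least squares: erratic coefficients
b_ridge = ridge(X, y, 10.0)  # the ridge constant k shrinks and stabilises them
print(b_ols, b_ridge)
```
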

  10. Link functions and Matérn kernel in the estimation of reflectance spectra from RGB responses.

    Science.gov (United States)

    Heikkinen, Ville; Mirhashemi, Arash; Alho, Juha

    2013-11-01

We evaluate three link functions (square root, logit, and copula) and the Matérn kernel in the kernel-based estimation of reflectance spectra of the Munsell Matte collection in the 400-700 nm region. We estimate reflectance spectra from RGB camera responses, both real and simulated, and show that a combination of a link function and a kernel regression model with a Matérn kernel decreases spectral errors when compared to a Gaussian mixture model or kernel regression with the Gaussian kernel. The Matérn kernel produces performance similar to the thin plate spline model, but does not require a parametric polynomial part in the model.
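The combination of a link function with Matérn-kernel regression can be sketched as follows. This is a simplified stand-in for the paper's setup (a 1-D toy "spectrum", hand-picked length scale and regularisation): kernel ridge regression with a Matérn 3/2 kernel is fitted to the square root of the response, and the prediction is squared, which keeps estimates non-negative, the role the link plays for reflectances in [0, 1].

```python
import numpy as np

# Matern 3/2 kernel: k(a, b) = (1 + sqrt(3) d) exp(-sqrt(3) d), d = |a-b|/length.
def matern32(a, b, length=0.5):
    d = np.abs(a[:, None] - b[None, :]) / length
    return (1.0 + np.sqrt(3.0) * d) * np.exp(-np.sqrt(3.0) * d)

x = np.linspace(0.0, 1.0, 40)                    # normalised wavelength axis
y = 0.5 + 0.4 * np.sin(2 * np.pi * x)            # smooth positive toy "spectrum"

# Fit in link (square-root) space, then back-transform by squaring.
K = matern32(x, x)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(x)), np.sqrt(y))
y_hat = (K @ alpha) ** 2
print(float(np.max(np.abs(y_hat - y))))          # small in-sample error
```
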

  11. On Weighted Support Vector Regression

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2014-01-01

    We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF‐weights). This procedure directly...... the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate...
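The idea of weighting the slack variables in the objective function can be sketched with a toy linear SVR. Everything below is illustrative and not the paper's formulation: the loss form, the omission of an intercept, the parameter values, and the plain subgradient solver are all assumptions. Per-observation weights v_i scale the epsilon-insensitive slack terms, so heavily weighted observations (e.g. recent ones in a time-ordered sample) dominate the fit.

```python
import numpy as np

# Minimise 0.5*||w||^2 + (C/n) * sum_i v_i * max(0, |y_i - w.x_i| - eps)
# by subgradient descent (no intercept, for simplicity).
def weighted_svr(X, y, v, C=100.0, eps=0.01, lr=1e-3, iters=4000):
    n = X.shape[0]
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        r = y - X @ w
        g = np.where(np.abs(r) > eps, -np.sign(r), 0.0) * v   # slack subgradients
        w -= lr * (w + C * (X.T @ g) / n)
    return w

rng = np.random.default_rng(5)
X = rng.uniform(0.0, 1.0, size=(100, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=100)
v = np.linspace(0.2, 1.0, 100)          # e.g. upweight later observations
w = weighted_svr(X, y, v)
print(float(w[0]))                      # roughly recovers the slope of 2
```
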

  12. Correlation studies for B-spline modeled F2 Chapman parameters obtained from FORMOSAT-3/COSMIC data

    Directory of Open Access Journals (Sweden)

    M. Limberger

    2014-12-01

The determination of ionospheric key quantities such as the maximum electron density of the F2 layer (NmF2), the corresponding F2 peak height (hmF2) and the F2 scale height (HF2) is of high relevance in 4-D ionosphere modeling to provide information on the vertical structure of the electron density (Ne). The Ne distribution with respect to height can, for instance, be modeled by the commonly accepted F2 Chapman layer. An adequate and observation-driven description of the vertical Ne variation can be obtained from electron density profiles (EDPs) derived from ionospheric radio occultation measurements between GPS and low Earth orbiter (LEO) satellites. For these purposes, the six FORMOSAT-3/COSMIC (F3/C) satellites provide an excellent opportunity to collect EDPs that cover most of the ionospheric region, in particular the F2 layer. For the contents of this paper, F3/C EDPs have been exploited to determine NmF2, hmF2 and HF2 within a regional modeling approach. As mathematical base functions, endpoint-interpolating polynomial B-splines are considered to model the key parameters with respect to longitude, latitude and time. The description of deterministic processes and the verification of this modeling approach have been published previously in Limberger et al. (2013), whereas this paper should be considered an extension dealing with related correlation studies, a topic to which less attention has been paid in the literature. Relations between the B-spline series coefficients for specific key parameters, as well as dependencies between the three F2 Chapman key parameters, are the main focus. Dependencies are interpreted from the post-derived correlation matrices resulting from (1) a simulated scenario without data gaps, taking dense, homogeneously distributed profiles into account, and (2) two real data scenarios on 1 July 2008 and 1 July 2012 including sparsely, inhomogeneously distributed F3/C EDPs. Moderate correlations between hmF2 and HF2 as
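The F2 Chapman layer the record refers to has a simple closed form. The sketch below evaluates the standard alpha-Chapman profile; the key-parameter values are illustrative, not fitted F3/C values.

```python
import numpy as np

# Alpha-Chapman layer for the vertical electron density profile:
#   Ne(h) = NmF2 * exp(0.5 * (1 - z - exp(-z))),  z = (h - hmF2) / HF2,
# which peaks at h = hmF2 with value NmF2.
def chapman_ne(h_km, NmF2, hmF2, HF2):
    z = (h_km - hmF2) / HF2
    return NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

h = np.linspace(100.0, 1000.0, 901)                     # 1 km grid
ne = chapman_ne(h, NmF2=1e12, hmF2=300.0, HF2=60.0)     # illustrative values
print(h[np.argmax(ne)], ne.max())                       # peak at hmF2, value NmF2
```
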

  13. Computing options for multiple-trait test-day random regression models while accounting for heat tolerance.

    Science.gov (United States)

    Aguilar, I; Tsuruta, S; Misztal, I

    2010-06-01

    Data included 90,242,799 test day records from first, second and third parities of 5,402,484 Holstein cows and 9,326,754 animals in the pedigree. Additionally, daily temperature humidity indexes (THI) from 202 weather stations were available. The fixed effects included herd test day, age at calving, milking frequency and days in milk classes (DIM). Random effects were additive genetic, permanent environment and herd-year and were fit as random regressions. Covariates included linear splines with four knots at 5, 50, 200 and 305 DIM and a function of THI. Mixed model equations were solved using an iteration on data program with a preconditioned conjugate gradient algorithm. Preconditioners used were diagonal (D), block diagonal due to traits (BT) and block diagonal due to traits and correlated effects (BTCORR). One run included BT with a 'diagonalized' model in which the random effects were reparameterized for diagonal (co)variance matrices among traits (BTDIAG). Memory requirements were 8.7 Gb for D, 10.4 Gb for BT and BTDIAG, and 24.3 Gb for BTCORR. Computing times (rounds) were 14 days (952) for D, 10.7 days (706) for BT, 7.7 days (494) for BTDIAG and 4.6 days (289) for BTCORR. The convergence pattern was strongly influenced by the choice of fixed effects. When sufficient memory is available, the option BTCORR is the fastest and simplest to implement; the next efficient method, BTDIAG, requires additional steps for diagonalization and back-diagonalization.
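The solver family compared in the record can be illustrated with its simplest member. The sketch below is a generic preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner, corresponding to option D; the block options differ only in how the preconditioner M approximates the coefficient matrix. The test matrix is a small random stand-in, not mixed-model equations.

```python
import numpy as np

# Preconditioned conjugate gradient for SPD systems A x = b,
# with a diagonal preconditioner applied as z = r / diag(A).
def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, it

rng = np.random.default_rng(2)
Q = rng.normal(size=(50, 50))
A = Q @ Q.T + 50 * np.eye(50)            # symmetric positive definite test matrix
b = rng.normal(size=50)
x, iters = pcg(A, b, 1.0 / np.diag(A))
print(iters, float(np.linalg.norm(A @ x - b)))
```

A better preconditioner (the block options in the record) reduces the iteration count further, at the price of more memory per round.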

  14. Regression to Causality

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    Humans are fundamentally primed for making causal attributions based on correlations. This implies that researchers must be careful to present their results in a manner that inhibits unwarranted causal attribution. In this paper, we present the results of an experiment that suggests regression...... more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity...

  15. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate...... regression by minimising a cross-validation estimate of the generalisation error. This allows to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  16. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
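The adaptive-metric idea in the two records above can be sketched with a Nadaraya-Watson estimator whose Gaussian kernel carries a separate inverse bandwidth per input dimension. This is an assumed form for illustration, not the authors' algorithm: in their method the per-dimension scales are adapted by minimising a cross-validation estimate of the generalisation error, whereas here they are simply set by hand to show the effect of downweighting an irrelevant input.

```python
import numpy as np

# Nadaraya-Watson regression with a diagonal input metric: each dimension d
# is scaled by its own inverse bandwidth inv_bw[d] before the Gaussian kernel.
def nw_predict(Xtr, ytr, Xte, inv_bw):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) * inv_bw) ** 2
    w = np.exp(-0.5 * d2.sum(axis=2))
    return (w @ ytr) / w.sum(axis=1)

rng = np.random.default_rng(3)
Xtr = rng.uniform(-1, 1, size=(200, 2))
ytr = np.sin(3 * Xtr[:, 0])                  # only dimension 0 is relevant
Xte = rng.uniform(-1, 1, size=(50, 2))
yte = np.sin(3 * Xte[:, 0])

err_iso = np.mean((nw_predict(Xtr, ytr, Xte, np.array([5.0, 5.0])) - yte) ** 2)
err_adapt = np.mean((nw_predict(Xtr, ytr, Xte, np.array([5.0, 0.0])) - yte) ** 2)
# Zeroing the irrelevant dimension's scale typically reduces the error.
print(err_iso, err_adapt)
```
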

  17. Improved Leg Tracking Considering Gait Phase and Spline-Based Interpolation during Turning Motion in Walk Tests

    Directory of Open Access Journals (Sweden)

    Ayanori Yorozu

    2015-09-01

Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate elderly individual ambulatory ability in many clinical institutions or local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate the motor function in walk tests, such as the TUG. The system tracks both legs and measures the trajectory of both legs. However, both legs might be close to each other, and one leg might be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time that a leg is hidden from the LRS is longer than that during straight walking and the moving direction rapidly changes. These situations are likely to lead to false tracking and deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association considering gait phase and a Catmull–Rom spline-based interpolation during the occlusion are proposed. From the experimental results with young people, we confirm that the proposed methods can reduce the chances of false tracking. In addition, we verify the measurement accuracy of the leg trajectory compared to a three-dimensional motion analysis system (VICON).
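The interpolation scheme named in the record has a compact closed form. The sketch below implements standard uniform Catmull–Rom interpolation between four consecutive points, as one might use to fill in leg positions during an occlusion; the sample points are made up.

```python
import numpy as np

# Uniform Catmull-Rom segment: given four consecutive points p0..p3,
# interpolate between p1 and p2 for t in [0, 1].  The curve passes through
# p1 (t=0) and p2 (t=1) with C1 continuity across segments.
def catmull_rom(p0, p1, p2, p3, t):
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Four observed 2-D leg positions (illustrative values).
p = [np.array([0.0, 0.0]), np.array([1.0, 0.5]),
     np.array([2.0, 0.4]), np.array([3.0, 0.9])]
print(catmull_rom(*p, 0.0))   # passes through p1
print(catmull_rom(*p, 1.0))   # passes through p2
```
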

  18. An adaptive multi-spline refinement algorithm in simulation based sailboat trajectory optimization using onboard multi-core computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2016-06-01

A new dynamic programming based parallel algorithm adapted to on-board heterogeneous computers for simulation based trajectory optimization is studied in the context of “high-performance sailing”. The algorithm uses a new discrete space of continuously differentiable functions called the multi-splines as its search space representation. A basic version of the algorithm is presented in detail (pseudo-code, time and space complexity, search space auto-adaptation properties). Possible extensions of the basic algorithm are also described. The presented experimental results show that contemporary heterogeneous on-board computers can be effectively used for solving simulation based trajectory optimization problems. These computers can be considered micro high performance computing (HPC) platforms: they offer high performance while remaining energy and cost efficient. The simulation based approach can potentially give highly accurate results since the mathematical model that the simulator is built upon may be as complex as required. The approach described is applicable to many trajectory optimization problems due to its black-box represented performance measure and use of OpenCL.
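The dynamic-programming skeleton behind such simulation-based trajectory optimization can be sketched on a stage graph. This is a generic illustration, not the paper's multi-spline algorithm: states are (stage, node) pairs, and the edge cost is a placeholder for the simulator-evaluated cost of a trajectory segment. Because each node of a stage is relaxed independently, the inner loop parallelizes naturally, which is what makes heterogeneous on-board hardware attractive.

```python
import numpy as np

def edge_cost(stage, i, j):
    # Placeholder for the simulator: cost of moving from node i to node j.
    return abs(i - j) + 0.1 * (j + 1)

n_stages, n_nodes = 6, 5
cost = np.full((n_stages, n_nodes), np.inf)
cost[0, :] = 0.0                       # any start node is free
for s in range(1, n_stages):
    for j in range(n_nodes):           # each j is independent -> parallelizable
        cost[s, j] = min(cost[s - 1, i] + edge_cost(s, i, j)
                         for i in range(n_nodes))
print(cost[-1].min())                  # best achievable cost to the final stage
```
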

  19. Improved Leg Tracking Considering Gait Phase and Spline-Based Interpolation during Turning Motion in Walk Tests

    Science.gov (United States)

    Yorozu, Ayanori; Moriguchi, Toshiki; Takahashi, Masaki

    2015-01-01

    Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate elderly individual ambulatory ability in many clinical institutions or local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate the motor function in walk tests, such as the TUG. The system tracks both legs and measures the trajectory of both legs. However, both legs might be close to each other, and one leg might be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time that a leg is hidden from the LRS is longer than that during straight walking and the moving direction rapidly changes. These situations are likely to lead to false tracking and deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association considering gait phase and a Catmull–Rom spline-based interpolation during the occlusion are proposed. From the experimental results with young people, we confirm that the proposed methods can reduce the chances of false tracking. In addition, we verify the measurement accuracy of the leg trajectory compared to a three-dimensional motion analysis system (VICON). PMID:26404302

  20. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider discretization methods of three different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
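The two-step idea can be sketched on a toy problem. Everything below is illustrative rather than the paper's method: the ODE x' = -theta*x stands in for the HIV model, the penalized spline is a simple truncated-power basis with a ridge penalty on the knot terms, and the parameter is then estimated from the recommended trapezoidal discretization by least squares.

```python
import numpy as np

# Step 0: noisy observations of a state governed by x' = -theta * x.
rng = np.random.default_rng(4)
theta_true = 0.5
t = np.linspace(0.0, 5.0, 101)
x_obs = np.exp(-theta_true * t) + 0.01 * rng.normal(size=t.size)

# Step 1: penalized regression spline (cubic truncated-power basis on a
# normalised time axis; ridge penalty on the knot coefficients only).
u = t / t[-1]
knots = np.linspace(0.1, 0.9, 9)
B = np.column_stack([u ** d for d in range(4)]
                    + [np.maximum(u - k, 0.0) ** 3 for k in knots])
P = np.diag([0.0] * 4 + [1.0] * len(knots))
coef = np.linalg.solve(B.T @ B + 1e-3 * P, B.T @ x_obs)
x_hat = B @ coef                                    # smoothed state

# Step 2: trapezoidal estimating equation,
#   x_{i+1} - x_i = -theta * (h/2) * (x_i + x_{i+1}),
# solved for theta by 1-D least squares.
h = t[1] - t[0]
dx = np.diff(x_hat)
z = -(h / 2.0) * (x_hat[:-1] + x_hat[1:])
theta_hat = float((z @ dx) / (z @ z))
print(theta_hat)                                    # close to theta_true
```
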