WorldWideScience

Sample records for regression splines models

  1. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models and accounts for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines with linear piecewise splines, with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using linear mixed-effect models with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts (p < 0.001) and slopes (p < 0.001) of the subject-specific growth trajectories. Serial correlation was adequately modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed-effect model (AIC 19,352 vs. 19…
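
    As a concrete illustration of the modeling strategy described above, the sketch below fits a linear mixed-effect model with a cubic regression spline for age in Python. It is not the authors' published code; the file name and column names (child_id, age, height) are hypothetical, and the paper's continuous AR(1) residual term is omitted because statsmodels' MixedLM does not support it directly.

      # Minimal sketch: cubic spline fixed effects plus child-specific
      # random intercepts and slopes (assumed column names).
      import pandas as pd
      import statsmodels.formula.api as smf

      growth = pd.read_csv("growth.csv")  # hypothetical long-format data

      model = smf.mixedlm(
          "height ~ bs(age, df=5, degree=3)",  # cubic B-spline basis for age
          data=growth,
          groups=growth["child_id"],           # random effects grouped by child
          re_formula="~age",                   # random intercept and slope
      )
      fit = model.fit()
      print(fit.summary())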

  2. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    Science.gov (United States)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes, and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices. The MARS model is a nonparametric, adaptive regression method that suits problems with high dimensionality and many variables. The semi-parametric technique used here is smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock prices with both approaches. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock prices with the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.

  3. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.

  4. Statistical analysis of sediment toxicity by additive monotone regression splines

    NARCIS (Netherlands)

    Boer, de W.J.; Besten, den P.J.; Braak, ter C.J.F.

    2002-01-01

    Modeling nonlinearity and thresholds in dose-effect relations is a major challenge, particularly in noisy data sets. Here we show the utility of nonlinear regression with additive monotone regression splines. These splines lead almost automatically to the estimation of thresholds. We applied this

  5. Bayesian Analysis for Penalized Spline Regression Using WinBUGS

    Directory of Open Access Journals (Sweden)

    Ciprian M. Crainiceanu

    2005-09-01

Penalized splines can be viewed as BLUPs in a mixed model framework, which allows the use of mixed model software for smoothing. Thus, software originally developed for Bayesian analysis of mixed models can be used for penalized spline regression. Bayesian inference for nonparametric models enjoys the flexibility of nonparametric models and the exact inference provided by the Bayesian inferential machinery. This paper provides a simple, yet comprehensive, set of programs for the implementation of nonparametric Bayesian analysis in WinBUGS. Good mixing properties of the MCMC chains are obtained by using low-rank thin-plate splines, while simulation times per iteration are reduced by employing WinBUGS-specific computational tricks.
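
    The mixed-model representation this paper exploits can be sketched without WinBUGS: with a truncated-line basis, the spline coefficients play the role of random effects, and their BLUPs coincide with a ridge-penalized least-squares fit. The snippet below is a frequentist analogue under that view; the basis, knot placement, and smoothing parameter are illustrative choices, not the paper's.

      # Penalized spline as a ridge problem: penalize only the knot coefficients.
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.sort(rng.uniform(0, 1, 200))
      y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)

      knots = np.quantile(x, np.linspace(0.05, 0.95, 15))
      X = np.column_stack([np.ones_like(x), x])            # unpenalized part
      Z = np.maximum(x[:, None] - knots[None, :], 0.0)     # truncated lines
      C = np.hstack([X, Z])

      lam = 1.0                                            # smoothing parameter
      D = np.diag([0.0, 0.0] + [1.0] * len(knots))         # penalty on Z only
      beta = np.linalg.solve(C.T @ C + lam * D, C.T @ y)
      yhat = C @ beta                                      # smoothed fit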

  6. Preference learning with evolutionary Multivariate Adaptive Regression Spline model

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll

    2015-01-01

This paper introduces a novel approach for pairwise preference learning that combines an evolutionary method with Multivariate Adaptive Regression Splines (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing … for function approximation as well as being relatively easy to interpret. MARS models are evolved based on their efficiency in learning pairwise data. The method is tested on two datasets that collectively provide pairwise preference data of five cognitive states expressed by users. The method is analysed …

  7. Piecewise linear regression splines with hyperbolic covariates

    International Nuclear Information System (INIS)

    Cologne, John B.; Sposto, Richard

    1992-09-01

Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response models of Griffiths and Miller, and of Watts and Bacon, to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
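
    A hedged sketch of this idea: replace the hinge term of a piecewise linear spline with a hyperbola whose curvature parameter is estimated by nonlinear least squares. The knot location, starting values, and synthetic data below are illustrative assumptions, not values from the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def hyperbolic_hinge(x, k, gamma):
          # Smooth approximation of max(0, x - k); tends to the hinge as gamma -> 0.
          return 0.5 * ((x - k) + np.sqrt((x - k) ** 2 + gamma ** 2))

      def two_phase(x, b0, b1, b2, gamma, k=1950.0):       # knot k assumed known
          return b0 + b1 * x + b2 * hyperbolic_hinge(x, k, gamma)

      rng = np.random.default_rng(1)
      x = np.linspace(1900, 1990, 90)                      # e.g., year of birth
      y = two_phase(x, 16.0, -0.02, -0.05, 5.0) + rng.normal(0, 0.2, x.size)

      # With p0 of length 4, curve_fit estimates b0, b1, b2 and gamma;
      # gamma measures the curvature between the adjoining linear segments.
      popt, pcov = curve_fit(two_phase, x, y, p0=[16.0, 0.0, 0.0, 1.0])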

  8. Marginal longitudinal semiparametric regression via penalized splines

    KAUST Repository

    Al Kadiri, M.

    2010-08-01

We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been made, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.

  9. Marginal longitudinal semiparametric regression via penalized splines

    KAUST Repository

    Al Kadiri, M.; Carroll, R.J.; Wand, M.P.

    2010-01-01

We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been made, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.

  10. SPLINE LINEAR REGRESSION USED FOR EVALUATING FINANCIAL ASSETS

    Directory of Open Access Journals (Sweden)

    Liviu GEAMBAŞU

    2010-12-01

One of the most important preoccupations of financial market participants was, and still is, determining more precisely the trend of financial asset prices. Many scientific papers have addressed this problem, and many mathematical and statistical models have been developed to better determine the trend of financial asset prices. While simple linear models were until recently widely used because they are easy to apply, the financial crisis that hit the world economy starting in 2008 highlighted the need to adapt mathematical models to the variation of the economy. A model that is simple to use yet adapted to the realities of economic life is spline linear regression. This type of regression preserves the continuity of the regression function but splits the studied data into intervals with homogeneous characteristics. The characteristics of each interval are highlighted, as is the evolution of the market across all intervals, resulting in reduced standard errors. The first objective of the article is a theoretical presentation of spline linear regression, with reference to national and international scientific papers on the subject. The second objective is to apply the theoretical model to data from the Bucharest Stock Exchange.

  11. Estimation of Covariance Matrix on Bi-Response Longitudinal Data Analysis with Penalized Spline Regression

    Science.gov (United States)

    Islamiyati, A.; Fatmawati; Chamidah, N.

    2018-03-01

In longitudinal data with bi-response, correlation arises in the measurements both between subjects and between the responses. This induces autocorrelation of the errors, which can be handled with a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. The penalized spline involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error value than the model without a covariance matrix.
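
    The weighting idea can be sketched compactly: given an estimated error covariance matrix, the penalized spline criterion is solved by generalized least squares with the penalty added. The function below illustrates that single step only; the paper's bi-response estimator and its covariance estimation procedure are more elaborate.

      import numpy as np

      def weighted_penalized_spline(x, y, Sigma, knots, lam):
          # Truncated-line basis with intercept and linear term.
          Z = np.maximum(x[:, None] - knots[None, :], 0.0)
          C = np.hstack([np.ones((x.size, 1)), x[:, None], Z])
          D = np.diag([0.0, 0.0] + [1.0] * len(knots))   # penalize knot terms
          W = np.linalg.inv(Sigma)                       # precision from covariance
          beta = np.linalg.solve(C.T @ W @ C + lam * D, C.T @ W @ y)
          return C @ beta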

  12. The Norwegian Healthier Goats program--modeling lactation curves using a multilevel cubic spline regression model.

    Science.gov (United States)

    Nagel-Alne, G E; Krontveit, R; Bohlin, J; Valle, P S; Skjerve, E; Sølverød, L S

    2014-07-01

In 2001, the Norwegian Goat Health Service initiated the Healthier Goats program (HG), with the aim of eradicating caprine arthritis encephalitis, caseous lymphadenitis, and Johne's disease (caprine paratuberculosis) in Norwegian goat herds. The aim of the present study was to explore how control and eradication of the above-mentioned diseases by enrolling in HG affected milk yield by comparison with herds not enrolled in HG. Lactation curves were modeled using a multilevel cubic spline regression model where farm, goat, and lactation were included as random effect parameters. The data material contained 135,446 registrations of daily milk yield from 28,829 lactations in 43 herds. The multilevel cubic spline regression model was applied to 4 categories of data: enrolled early, control early, enrolled late, and control late. For enrolled herds, the early and late notations refer to the situation before and after enrolling in HG; for nonenrolled herds (controls), they refer to development over time, independent of HG. Total milk yield increased in the enrolled herds after eradication: the total milk yields in the fourth lactation were 634.2 and 873.3 kg in enrolled early and enrolled late herds, respectively, and 613.2 and 701.4 kg in the control early and control late herds, respectively. Day of peak yield differed between enrolled and control herds. The day of peak yield came on d 6 of lactation for the control early category for parities 2, 3, and 4, indicating an inability of the goats to further increase their milk yield from the initial level. For enrolled herds, on the other hand, peak yield came between d 49 and 56, indicating a gradual increase in milk yield after kidding. Our results indicate that enrollment in the HG disease eradication program improved the milk yield of dairy goats considerably, and that the multilevel cubic spline regression was a suitable model for exploring effects of disease control and eradication on milk yield.

  13. Examination of influential observations in penalized spline regression

    Science.gov (United States)

    Türkan, Semra

    2013-10-01

In parametric or nonparametric regression models, the results of regression analysis are affected by anomalous observations in the data set. Thus, detection of these observations is one of the major steps in regression analysis. Such observations are detected by well-known influence measures, one of which is Peña's statistic. In this study, Peña's approach is formulated for penalized spline regression in terms of ordinary residuals and leverages. Real and artificial data are used to illustrate the effectiveness of Peña's statistic relative to Cook's distance in detecting influential observations. The results of the study clearly reveal that the proposed measure is superior to Cook's distance at detecting these observations in large data sets.

  14. PM10 modeling in the Oviedo urban area (Northern Spain) by using multivariate adaptive regression splines

    Science.gov (United States)

    Nieto, Paulino José García; Antón, Juan Carlos Álvarez; Vilán, José Antonio Vilán; García-Gonzalo, Esperanza

    2014-10-01

The aim of this research work is to build a regression model of particulate matter up to 10 micrometers in size (PM10) by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (Northern Spain) at local scale. MARS is a nonparametric regression algorithm that can approximate the relationship between inputs and outputs and express that relationship mathematically. In this sense, hazardous air pollutants or toxic air contaminants refer to any substance that may cause or contribute to an increase in mortality or serious illness, or that may pose a present or potential hazard to human health. To accomplish the objective of this study, experimental data on nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3) and dust (PM10) were collected over 3 years (2006-2008) and used to create a highly nonlinear model of PM10 in the Oviedo urban nucleus based on the MARS technique. One main objective of this model is to obtain a preliminary estimate of the dependence of the PM10 pollutant on the other pollutants in the Oviedo urban area at local scale. A second aim is to determine the factors with the greatest bearing on air quality, with a view to proposing health and lifestyle improvements. The United States National Ambient Air Quality Standards (NAAQS) establish the limit values of the main pollutants in the atmosphere in order to protect the health of healthy people. Firstly, this MARS regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of …
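
    The MARS building block relied on here is the hinge (truncated linear) function pair with a data-driven knot. The toy search below conveys only that idea; real MARS implementations, presumably including the one used for the PM10 model, add terms greedily in pairs, allow interactions, and prune by generalized cross-validation.

      import numpy as np

      def best_single_knot(x, y):
          # Exhaustive search for the knot whose hinge pair best fits y.
          best_t, best_rss = None, np.inf
          for t in np.unique(x):
              X = np.column_stack([np.ones_like(x),
                                   np.maximum(x - t, 0.0),   # right hinge
                                   np.maximum(t - x, 0.0)])  # left hinge
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)
              rss = np.sum((y - X @ beta) ** 2)
              if rss < best_rss:
                  best_t, best_rss = t, rss
          return best_t, best_rss

      rng = np.random.default_rng(3)
      x = rng.uniform(-2, 2, 200)
      y = np.where(x > 0.5, 3 * (x - 0.5), 0.0) + rng.normal(0, 0.2, 200)
      knot, rss = best_single_knot(x, y)                    # knot found near 0.5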

  15. Non-stationary hydrologic frequency analysis using B-spline quantile regression

    Science.gov (United States)

    Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.

    2017-11-01

Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, the assumption of stationarity, which is a prerequisite for traditional frequency analysis, may no longer hold, and the results of conventional analyses become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows modeling data in the presence of non-stationarity and/or dependence on covariates with linear and non-linear dependence. A Markov chain Monte Carlo (MCMC) algorithm was used to estimate quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used to select the best model; that is, for each quantile we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharge with high annual non-exceedance probabilities.
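
    A minimal frequentist sketch of B-spline quantile regression (the study's Bayesian MCMC estimation and BIC-based selection of degree and knots are not reproduced): fit a high quantile of a response as a smooth function of a covariate, here with synthetic stand-ins for streamflow and a climate index.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from patsy import dmatrix

      rng = np.random.default_rng(2)
      ci = rng.normal(size=300)                             # climate index
      flow = 100 + 20 * np.tanh(ci) + rng.gamma(2.0, 10.0, 300)
      df = pd.DataFrame({"flow": flow, "climate_index": ci})

      # Cubic B-spline design matrix (degree and df are illustrative choices).
      X = dmatrix("bs(climate_index, df=5, degree=3)", df,
                  return_type="dataframe")
      fit = sm.QuantReg(df["flow"], X).fit(q=0.95)          # 95th percentile curve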

  16. Boosted regression trees, multivariate adaptive regression splines and their two-step combinations with multiple linear regression or partial least squares to predict blood-brain barrier passage: a case study.

    Science.gov (United States)

    Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y

    2008-02-18

The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structurally and pharmacologically diverse drugs, originating from a structured knowledge-based database. Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that combining a linear with a non-linear modeling technique results in improved properties compared to the individual linear and non-linear models and that, when such a combination is appropriate, combinations using MARS as the non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches.

  17. ESTIMATION OF GENETIC PARAMETERS IN TROPICARNE CATTLE WITH RANDOM REGRESSION MODELS USING B-SPLINES

    Directory of Open Access Journals (Sweden)

    Joel Domínguez Viveros

    2015-04-01

The objectives were to estimate variance components and direct (h²) and maternal (m²) heritability for growth in Tropicarne cattle, based on a random regression model using B-splines for random effects modeling. Information from 12,890 monthly weighings of 1,787 calves, from birth to 24 months old, was analyzed. The pedigree included 2,504 animals. The random effects model included genetic and permanent environmental effects (direct and maternal) of cubic order, and residuals. The fixed effects included contemporary groups (year-season of weighing), sex and the covariate age of the cow (linear and quadratic). The B-splines were defined on four knots across the growth period analyzed. Analyses were performed with the software Wombat. The phenotypic and residual variances behaved similarly: from 7 to 12 months of age the trend was negative; from birth to 6 months and from 13 to 18 months the trend was positive; after 19 months they remained constant. The m² estimates were low and near zero, averaging 0.06 over an interval of 0.04 to 0.11; the h² estimates were also close to zero, averaging 0.10 over an interval of 0.03 to 0.23.

  18. Evaluation of Logistic Regression and Multivariate Adaptive Regression Spline Models for Groundwater Potential Mapping Using R and GIS

    Directory of Open Access Journals (Sweden)

    Soyoung Park

    2017-07-01

This study mapped and analyzed groundwater potential using two different models, logistic regression (LR) and multivariate adaptive regression splines (MARS), and compared the results. A spatial database was constructed for groundwater well data and groundwater influence factors. Groundwater well data with a high potential yield of ≥70 m³/d were extracted, and 859 locations (70%) were used for model training, whereas the other 365 locations (30%) were used for model validation. We analyzed 16 groundwater influence factors including altitude, slope degree, slope aspect, plan curvature, profile curvature, topographic wetness index, stream power index, sediment transport index, distance from drainage, drainage density, lithology, distance from fault, fault density, distance from lineament, lineament density, and land cover. Groundwater potential maps (GPMs) were constructed using the LR and MARS models and tested using a receiver operating characteristic curve. Based on this analysis, the area under the curve (AUC) for the success rate curve of GPMs created using the MARS and LR models was 0.867 and 0.838, and the AUC for the prediction rate curve was 0.836 and 0.801, respectively. This implies that the MARS model is useful and effective for groundwater potential analysis in the study area.

  19. APPLICATION OF THE WEIGHTED SPLINE ESTIMATOR

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2001-01-01

We consider the nonparametric regression model Zj = X(tj) + ej, j = 1, 2, …, n, where X(tj) is the regression curve and the random errors ej are independently normally distributed with zero mean and variance σ²/bj, bj > 0. The estimate of X is obtained by minimizing a weighted penalized least squares criterion; the solution of this optimization is a weighted natural polynomial spline. We then give an application of the weighted spline estimator in nonparametric regression. Keywords: weighted spline, nonparametric regression, penalized least squares.
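
    A small numpy illustration of the weighting in the model above: with Var(ej) = σ²/bj, weighted least squares on a spline basis uses the bj as weights. The knots and basis below are illustrative choices, and the penalty term of the full weighted penalized criterion is omitted.

      import numpy as np

      rng = np.random.default_rng(4)
      t = np.sort(rng.uniform(0, 1, 150))
      b = rng.uniform(0.5, 2.0, 150)                  # known precision weights b_j
      z = np.sin(3 * t) + rng.normal(0, 1, 150) / np.sqrt(b)

      knots = np.linspace(0.1, 0.9, 8)
      B = np.hstack([np.ones((t.size, 1)), t[:, None],
                     np.maximum(t[:, None] - knots[None, :], 0.0)])
      W = np.diag(b)                                  # weight matrix from the b_j
      coef = np.linalg.solve(B.T @ W @ B, B.T @ W @ z)
      xhat = B @ coef                                 # estimated regression curve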

  20. Using Spline Regression in Semi-Parametric Stochastic Frontier Analysis: An Application to Polish Dairy Farms

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

… of specifying an unsuitable functional form and thus model misspecification and biased parameter estimates. Given these problems of the DEA and the SFA, Fan, Li and Weersink (1996) proposed a semi-parametric stochastic frontier model that estimates the production function (frontier) by non…, Kumbhakar et al. (2007), and Henningsen and Kumbhakar (2009). The aim of this paper, and its main contribution to the existing literature, is the estimation of semi-parametric stochastic frontier models using a different non-parametric estimation technique: spline regression (Ma et al. 2011). We apply … efficiency of Polish dairy farms, which contributes to insight into this dynamic process. Furthermore, we compare and evaluate the results of this spline-based semi-parametric stochastic frontier model with the results of other semi-parametric stochastic frontier models and of traditional parametric stochastic …

  1. Median regression spline modeling of longitudinal FEV1 measurements in cystic fibrosis (CF) and chronic obstructive pulmonary disease (COPD) patients.

    Science.gov (United States)

    Conrad, Douglas J; Bailey, Barbara A; Hardie, Jon A; Bakke, Per S; Eagan, Tomas M L; Aarli, Bernt B

    2017-01-01

Clinical phenotyping, therapeutic investigations, as well as genomic, airway secretion, metabolomic and metagenomic investigations can benefit from robust, nonlinear modeling of FEV1 in individual subjects. We demonstrate the utility of measuring FEV1 dynamics in representative cystic fibrosis (CF) and chronic obstructive pulmonary disease (COPD) populations. Individual FEV1 data from CF and COPD subjects were modeled by estimating median regression splines and their predicted first and second derivatives. Classes were created from variables that capture the dynamics of these curves in both cohorts. Nine FEV1 dynamic variables were identified from the splines and their predicted derivatives in individuals with CF (n = 177) and COPD (n = 374). Three FEV1 dynamic classes (stable, intermediate and hypervariable) were generated and described using these variables from both cohorts. In the CF cohort, the hypervariable (HV) class was associated with a clinically unstable, female-dominated phenotype, while stable FEV1 class (S) individuals were highly associated with the male-dominated, milder clinical phenotype. In the COPD cohort, associations were found between the FEV1 dynamic classes and the COPD GOLD grades, exacerbation frequency, and symptoms. Nonlinear modeling of FEV1 with splines provides new insights and is useful in characterizing CF and COPD clinical phenotypes.

  2. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age

    Directory of Open Access Journals (Sweden)

    Marko Wilke

    2018-02-01

This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php. Keywords: MRI template creation, Multivariate adaptive regression splines, DARTEL, Structural MRI

  3. Geometric and computer-aided spline hob modeling

    Science.gov (United States)

Brailov, I. G.; Myasoedova, T. M.; Panchuk, K. L.; Krysova, I. V.; Rogoza, Yu. A.

    2018-03-01

The paper considers the acquisition of a geometric model of a spline hob. The objective of the research is the development of a mathematical model of a spline hob for spline shaft machining. The structure of the spline hob is described taking into consideration the motion, in the parameters of the machine tool system, for cutting edge positioning and orientation. A computer-aided study is performed with the use of CAD and on the basis of 3D modeling methods. Vector representation of cutting edge geometry is accepted as the principal method of developing the spline hob mathematical model. The paper defines the correlations described by parametric vector functions representing helical cutting edges designed for spline shaft machining, with consideration for helical movement in two dimensions. An application for acquiring the 3D model of the spline hob is developed on the basis of AutoLISP for the AutoCAD environment. The application provides the opportunity to use the acquired model for milling process imitation. An example of evaluation, analytical representation and computer modeling of the proposed geometric model is reviewed. In the mentioned example, a calculation of key spline hob parameters assuring the capability of hobbing a spline shaft of standard design is performed. The polygonal and solid spline hob 3D models are acquired by the use of imitational computer modeling.

  4. A Spline-Based Lack-Of-Fit Test for Independent Variable Effect in Poisson Regression.

    Science.gov (United States)

    Li, Chin-Shang; Tu, Wanzhu

    2007-05-01

In regression analysis of count data, independent variables are often modeled by their linear effects under the assumption of log-linearity. In reality, the validity of such an assumption is rarely tested, and its use is at times unjustifiable. A lack-of-fit test is proposed for the adequacy of a postulated functional form of an independent variable within the framework of semiparametric Poisson regression models based on penalized splines. It offers added flexibility in accommodating the potentially non-loglinear effect of the independent variable. A likelihood ratio test is constructed for the adequacy of the postulated parametric form, for example log-linearity, of the independent variable effect. Simulations indicate that the proposed test performs well and that a misspecified parametric model has much reduced power. An example is given.
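
    The idea can be sketched with an unpenalized, fixed-df spline alternative, for which a standard likelihood ratio test applies; the paper's actual test is built on penalized splines, so treat this as an approximation of its spirit only.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf
      from scipy import stats

      rng = np.random.default_rng(5)
      x = rng.uniform(0, 3, 400)
      y = rng.poisson(np.exp(0.3 + 0.5 * np.sqrt(x)))   # truly non-loglinear in x
      df = pd.DataFrame({"x": x, "y": y})

      m0 = smf.glm("y ~ x", df, family=sm.families.Poisson()).fit()
      m1 = smf.glm("y ~ bs(x, df=5)", df, family=sm.families.Poisson()).fit()

      lr = 2 * (m1.llf - m0.llf)                        # likelihood ratio statistic
      pval = stats.chi2.sf(lr, df=m1.df_model - m0.df_model)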

  5. Modelling daily dissolved oxygen concentration using least square support vector machine, multivariate adaptive regression splines and M5 model tree

    Science.gov (United States)

    Heddam, Salim; Kisi, Ozgur

    2018-04-01

In the present study, three types of artificial intelligence techniques, least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5T), are applied for modeling daily dissolved oxygen (DO) concentration using several water quality variables as inputs. The DO concentration and water quality data from three stations operated by the United States Geological Survey (USGS) were used for developing the three models. The selected water quality data consisted of daily measurements of water temperature (TE, °C), pH (std. unit), specific conductance (SC, μS/cm) and discharge (DI, cfs), which were used as inputs to the LSSVM, MARS and M5T models. The three models were applied to each station separately and compared to each other. According to the results obtained, it was found that: (i) the DO concentration could be successfully estimated using the three models and (ii) the best model among all others differs from one station to another.

  6. Multidimensional splines for modeling FET nonlinearities

    Energy Technology Data Exchange (ETDEWEB)

    Barby, J A

    1986-01-01

Circuit simulators like SPICE and timing simulators like MOTIS are used extensively for critical path verification of integrated circuits. MOSFET model evaluation dominates the run time of these simulators. Changes in technology result in costly updates, since modifications require reprogramming of the functions and their derivatives. The computational cost of MOSFET models can be reduced by using multidimensional polynomial splines. Since simulators based on the Newton-Raphson algorithm require the function and first derivative, quadratic splines are sufficient for this purpose. The cost of updating the MOSFET model due to technology changes is greatly reduced, since splines are derived from a set of points. Crucial for the convergence speed of simulators is the fact that MOSFET characteristic equations are monotonic; this must be maintained by any simulation model. The splines the author designed do maintain monotonicity.

  7. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age.

    Science.gov (United States)

    Wilke, Marko

    2018-02-01

    This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php.

  8. Modelling lecturer performance index of private university in Tulungagung by using survival analysis with multivariate adaptive regression spline

    Science.gov (United States)

    Hasyim, M.; Prastyo, D. D.

    2018-03-01

Survival analysis models the relationship between independent variables and survival time as the dependent variable. In practice, not all survival data can be recorded completely, for various reasons; such data are called censored. Moreover, several models for survival analysis require distributional assumptions. A nonparametric approach to survival analysis relaxes these assumptions. In this research, the nonparametric approach employed is multivariate adaptive regression splines (MARS). The study aims to measure the performance of lecturers at a private university in Tulungagung. The survival time in this study is the duration a lecturer needs to obtain the professional certificate. The results show that research activity is a significant factor, along with developing course materials, publishing in international or national journals, and participating in research collaborations.

  9. Efectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation

    Directory of Open Access Journals (Sweden)

    Ahmad Bilfarsah

    2005-04-01

The Additive Spline Partial Least Squares (ASPLS) method is a generalization of the Partial Least Squares (PLS) method. The ASPLS method can accommodate nonlinearity and multicollinearity among predictor variables. In principle, the ASPLS approach is characterized by two ideas: the first is to use parametric transformations of predictors by spline functions; the second is to make the ASPLS components mutually uncorrelated, to preserve properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fisheries economics application, specifically tuna production.

  10. Point based interactive image segmentation using multiquadrics splines

    Science.gov (United States)

    Meena, Sachin; Duraisamy, Prakash; Palniappan, Kannappan; Seetharaman, Guna

    2017-05-01

Multiquadrics (MQ) are radial basis spline functions that can provide an efficient interpolation of data points located in a high-dimensional space. MQ were developed by Hardy to approximate geographical surfaces and for terrain modelling. In this paper we frame the task of interactive image segmentation as semi-supervised interpolation, where an interpolating function learned from user-provided seed points is used to predict the labels of unlabeled pixels, and the spline function used in the semi-supervised interpolation is MQ. This semi-supervised interpolation framework has a nice closed-form solution, which, along with the fact that MQ is a radial basis spline function, leads to a very fast interactive image segmentation process. Quantitative and qualitative results on standard datasets show that MQ outperforms other regression-based methods (GEBS, ridge regression and logistic regression) and popular methods like graph cut, random walk and random forest.
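
    SciPy ships a multiquadric kernel in its RBF interpolator, which makes the core interpolation step easy to sketch; the snippet below stands in for the label interpolation only (seed coordinates and labels are synthetic, and the paper's full segmentation pipeline and features are not reproduced).

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(6)
      seeds = rng.uniform(0, 1, (30, 2))              # user-labeled pixel coords
      labels = (seeds[:, 0] + seeds[:, 1] > 1).astype(float)

      interp = RBFInterpolator(seeds, labels, kernel="multiquadric", epsilon=1.0)

      gx, gy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      soft_labels = interp(grid).reshape(64, 64)      # threshold at 0.5 to segment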

  11. Limit Stress Spline Models for GRP Composites

    African Journals Online (AJOL)

Spline functions were established on the assumption of three intervals and the fitting of quadratic and cubic splines to critical stress-strain response data. Quadratic … of data points. The spline model is therefore recommended, as it evaluates the function on subintervals, eliminating the error associated with wide-range interpolation.

  12. Forecasting the daily power output of a grid-connected photovoltaic system based on multivariate adaptive regression splines

    International Nuclear Information System (INIS)

    Li, Yanting; He, Yong; Su, Yan; Shu, Lianjie

    2016-01-01

Highlights: • Suggests a nonparametric model based on MARS for output power prediction. • Compares the MARS model with a wide variety of prediction models. • Shows that the MARS model provides an overall good performance in both the training and testing stages. Abstract: Both linear and nonlinear models have been proposed for forecasting the power output of photovoltaic systems. Linear models are simple to implement but less flexible. Due to the stochastic nature of the power output of PV systems, nonlinear models tend to provide better forecasts than linear models. Motivated by this, this paper suggests a fairly simple nonlinear regression model known as multivariate adaptive regression splines (MARS) as an alternative for forecasting solar power output. The MARS model is a data-driven modeling approach without any assumption about the relationship between the power output and predictors. It maintains the simplicity of the classical multiple linear regression (MLR) model while possessing the capability of handling nonlinearity. It is simpler in format than other nonlinear models such as ANN, k-nearest neighbors (KNN), classification and regression trees (CART), and support vector machines (SVM). The MARS model was applied to the daily output of a grid-connected 2.1 kW PV system to provide the 1-day-ahead mean daily forecast of the power output. Comparisons with a wide variety of forecast models show that the MARS model is able to provide reliable forecast performance.

  13. A New Predictive Model Based on the ABC Optimized Multivariate Adaptive Regression Splines Approach for Predicting the Remaining Useful Life in Aircraft Engines

    Directory of Open Access Journals (Sweden)

    Paulino José García Nieto

    2016-05-01

Remaining useful life (RUL) estimation is considered one of the most central points in prognostics and health management (PHM). The present paper describes a nonlinear hybrid ABC–MARS-based model for the prediction of the remaining useful life of aircraft engines. Indeed, it is well known that an accurate RUL estimation allows failure prevention in a more controllable way, so that effective maintenance can be carried out in appropriate time to correct impending faults. The proposed hybrid model combines multivariate adaptive regression splines (MARS), which have been successfully adopted for regression problems, with the artificial bee colony (ABC) technique. This optimization technique involves parameter setting in the MARS training procedure, which significantly influences the regression accuracy. However, its use in reliability applications has not yet been widely explored. Bearing this in mind, remaining useful life values have been successfully predicted here by using the hybrid ABC–MARS-based model from the remaining measured parameters (input variables) for aircraft engines. A correlation coefficient equal to 0.92 was obtained when this hybrid ABC–MARS-based model was applied to experimental data. The agreement of this model with experimental data confirmed its good performance. The main advantage of this predictive model is that it does not require information about the previous operation states of the aircraft engine.

  14. Comparative Analysis for Robust Penalized Spline Smoothing Methods

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-01-01

Smoothing noisy data is commonly encountered in the engineering domain, and robust penalized regression spline models are currently perceived to be the most promising methods for coping with this issue, due to their flexibility in capturing nonlinear trends in the data and effectively alleviating the disturbance from outliers. Against such a background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are re-elaborated starting from their origins, with their derivation processes reformulated and the corresponding algorithms reorganized under a unified framework. The performance of these two estimators is thoroughly evaluated in terms of fitting accuracy, robustness, and execution time on the MATLAB platform. Elaborate comparative experiments demonstrate that robust penalized spline smoothing methods are resistant to noise compared with the nonrobust penalized LS spline regression method. Furthermore, the M-estimator performs stably only for observations with moderate perturbation error, whereas the S-estimator behaves fairly well even for heavily contaminated observations, but consumes more execution time. These findings can serve as guidance for selecting an appropriate approach for smoothing noisy data.
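
    The M-type mechanism compared in the paper can be sketched as iteratively reweighted penalized least squares with Huber weights; the tuning constant, scale estimate, and basis below are common textbook choices rather than the paper's exact setup.

      import numpy as np

      def huber_weights(r, c=1.345):
          a = np.abs(r)
          w = np.ones_like(a)
          w[a > c] = c / a[a > c]
          return w

      def robust_pspline(x, y, knots, lam, iters=20):
          Z = np.maximum(x[:, None] - knots[None, :], 0.0)
          C = np.hstack([np.ones((x.size, 1)), x[:, None], Z])
          D = np.diag([0.0, 0.0] + [1.0] * len(knots))
          w = np.ones(x.size)
          for _ in range(iters):
              CW = C * w[:, None]                     # row-weighted design
              beta = np.linalg.solve(CW.T @ C + lam * D, CW.T @ y)
              r = y - C @ beta
              s = np.median(np.abs(r)) / 0.6745 + 1e-12   # MAD scale estimate
              w = huber_weights(r / s)                # downweight large residuals
          return C @ beta

      rng = np.random.default_rng(8)
      x = np.sort(rng.uniform(0, 1, 200))
      y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
      y[::25] += 3.0                                  # inject gross outliers
      fit = robust_pspline(x, y, np.linspace(0.05, 0.95, 12), lam=1.0)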

  15. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution

    Science.gov (United States)

    Kisi, Ozgur; Parmar, Kulwinder Singh

    2016-03-01

This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin, Delhi, on the Yamuna River in India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy and performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). Relative to the LSSVM and M5Tree models, the MARS model decreased the average root mean square error (RMSE) by 1.47% and 19.1%, respectively. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used in estimating monthly river water pollution levels by using the AMM, TKN and WT parameters as inputs.

  16. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  17. Subpixel Snow Cover Mapping from MODIS Data by Nonparametric Regression Splines

    Science.gov (United States)

    Akyurek, Z.; Kuter, S.; Weber, G. W.

    2016-12-01

Spatial extent of snow cover is often considered one of the key parameters in climatological, hydrological and ecological modeling due to its energy storage, high reflectance in the visible and NIR regions of the electromagnetic spectrum, significant heat capacity and insulating properties. A significant challenge in snow mapping by remote sensing (RS) is the trade-off between the temporal and spatial resolution of satellite imagery. In order to tackle this issue, machine learning-based subpixel snow mapping methods from low or moderate resolution images, like Artificial Neural Networks (ANNs), have been proposed. Multivariate Adaptive Regression Splines (MARS) is a nonparametric regression tool that can build flexible models for high dimensional and complex nonlinear data. Although MARS is not often employed in RS, it has various successful implementations such as estimation of vertical total electron content in the ionosphere, atmospheric correction and classification of satellite images. This study is the first attempt in RS to evaluate the applicability of MARS for subpixel snow cover mapping from MODIS data. A total of 16 MODIS-Landsat ETM+ image pairs taken over the European Alps between March 2000 and April 2003 were used in the study. MODIS top-of-atmosphere reflectance, NDSI, NDVI and land cover classes were used as predictor variables. Cloud-covered, cloud shadow, water and bad-quality pixels were excluded from further analysis by a spatial mask. MARS models were trained and validated using reference fractional snow cover (FSC) maps generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also developed. The obtained MARS and ANN models were compared on independent test areas. The MARS model performed better than the ANN model, with an average RMSE of 0.1288 over the independent test areas, whereas the average RMSE of the ANN model …

  18. Cubic-spline interpolation to estimate effects of inbreeding on milk yield in first lactation Holstein cows

    Directory of Open Access Journals (Sweden)

    Makram J. Geha

    2011-01-01

Milk yield records (305-d, 2X, actual milk yield) of 123,639 registered first-lactation Holstein cows were used to compare linear regression (y = β0 + β1X + e), quadratic regression (y = β0 + β1X + β2X² + e), cubic regression (y = β0 + β1X + β2X² + β3X³ + e) and fixed factor models with cubic-spline interpolation models for estimating the effects of inbreeding on milk yield. Ten animal models, all with herd-year-season of calving as a fixed effect, were compared using the corrected Akaike information criterion (AICc). The cubic-spline interpolation model with seven knots had the lowest AICc, whereas for all models labeled as "traditional," the AICc was higher than for the best model. Results from fitting inbreeding using a cubic spline with seven knots were compared to results from fitting inbreeding as a linear covariate or as a fixed factor with seven levels. Estimates of inbreeding effects were not significantly different between the cubic-spline model and the fixed factor model, but were significantly different from the linear regression model. Milk yield decreased significantly at inbreeding levels greater than 9%. Variance component estimates were similar for the three models. Ranking of the top 100 sires with daughter records remained unaffected by the model used.
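
    The model comparison above can be echoed in a few lines: fit inbreeding as a linear covariate and as a cubic regression spline, then compare AIC. The data below are synthetic (the dairy records are not public), and the 7-df natural cubic spline is only a stand-in for the paper's seven-knot cubic-spline interpolation model.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(7)
      F = rng.uniform(0, 0.25, 500)                   # inbreeding coefficient
      milk = 9000 - 12000 * np.maximum(F - 0.09, 0.0) + rng.normal(0, 400, 500)
      df = pd.DataFrame({"F": F, "milk": milk})       # decline beyond 9% inbreeding

      m_lin = smf.ols("milk ~ F", df).fit()
      m_spl = smf.ols("milk ~ cr(F, df=7)", df).fit() # natural cubic spline basis
      print(m_lin.aic, m_spl.aic)                     # lower AIC is preferred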

  19. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    Science.gov (United States)

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model intended only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, the variance explained and the complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms, and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models.

  20. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  1. SPLINE REGRESSION MODELING (Case Study: Herpindo Jaya, Ngaliyan Branch)

    Directory of Open Access Journals (Sweden)

    I MADE BUDIANTARA PUTRA

    2015-06-01

Regression analysis is a method of data analysis for describing the relationship between response variables and predictor variables. There are two approaches to estimating the regression function: parametric and nonparametric. The parametric approach is used when the relationship between the predictor variables and the response variables is known, i.e., when the shape of the regression curve is known. The nonparametric approach is used when the form of the relationship between the response and predictor variables is unknown, i.e., when there is no information about the form of the regression function. The aims of this study are to determine the best nonparametric spline regression model, with optimal knot points, on data relating product quality, price, and advertising to purchasing decisions for Yamaha motorcycles, and to compare it with multiple linear regression based on the coefficient of determination (R²) and mean squared error (MSE). The optimal knots are defined by two knot points. The result of this analysis is that, for these data, multiple linear regression is better than spline regression.

  2. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case, where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions, and its computational complexity is generally O(n³). We achieve more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full-basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  3. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping; Huang, Jianhua Z.; Zhang, Nan

    2015-01-01

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  4. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2016-12-01

    Full Text Available In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.

  5. Estimating trajectories of energy intake through childhood and adolescence using linear-spline multilevel models.

    Science.gov (United States)

    Anderson, Emma L; Tilling, Kate; Fraser, Abigail; Macdonald-Wallis, Corrie; Emmett, Pauline; Cribb, Victoria; Northstone, Kate; Lawlor, Debbie A; Howe, Laura D

    2013-07-01

    Methods for the assessment of changes in dietary intake across the life course are underdeveloped. We demonstrate the use of linear-spline multilevel models to summarize energy-intake trajectories through childhood and adolescence and their application as exposures, outcomes, or mediators. The Avon Longitudinal Study of Parents and Children assessed children's dietary intake several times between ages 3 and 13 years, using both food frequency questionnaires (FFQs) and 3-day food diaries. We estimated energy-intake trajectories for 12,032 children using linear-spline multilevel models. We then assessed the associations of these trajectories with maternal body mass index (BMI), and later offspring BMI, and also their role in mediating the relation between maternal and offspring BMIs. Models estimated average and individual energy intake at 3 years, and linear changes in energy intake from age 3 to 7 years and from age 7 to 13 years. By including the exposure (in this example, maternal BMI) in the multilevel model, we were able to estimate the average energy-intake trajectories across levels of the exposure. When energy-intake trajectories are the exposure for a later outcome (in this case offspring BMI) or a mediator (between maternal and offspring BMI), results were similar, whether using a two-step process (exporting individual-level intercepts and slopes from multilevel models and using these in linear regression/path analysis), or a single-step process (multivariate multilevel models). Trajectories were similar when FFQs and food diaries were assessed either separately, or when combined into one model. Linear-spline multilevel models provide useful summaries of trajectories of dietary intake that can be used as an exposure, outcome, or mediator.
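
    A minimal sketch of a linear-spline multilevel (mixed-effects) model with one knot, using statsmodels' MixedLM on simulated child-level data; the variable names and the knot at age 7 are illustrative assumptions, not the ALSPAC specification.

      # Sketch: piecewise linear (linear-spline) mixed model for growth trajectories,
      # with a single knot at age 7 and child-specific random intercepts and slopes.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n_children, n_visits = 200, 6
      child_id = np.repeat(np.arange(n_children), n_visits)
      age = np.tile(np.linspace(3, 13, n_visits), n_children)
      s1 = np.minimum(age, 7.0)                 # slope from age 3 to 7
      s2 = np.clip(age - 7.0, 0, None)          # additional slope after age 7
      child_int = rng.normal(0, 50, n_children)[child_id]
      energy = 900 + 80 * s1 + 40 * s2 + child_int + rng.normal(0, 60, age.size)

      df = pd.DataFrame({"energy": energy, "s1": s1, "s2": s2, "child": child_id})
      X = sm.add_constant(df[["s1", "s2"]])     # fixed effects: intercept + two slopes
      Z = sm.add_constant(df[["s1"]])           # random intercept + random pre-7 slope
      model = sm.MixedLM(df["energy"], X, groups=df["child"], exog_re=Z)
      print(model.fit().summary())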

  6. Modeling terminal ballistics using blending-type spline surfaces

    Science.gov (United States)

    Pedersen, Aleksander; Bratlie, Jostein; Dalmo, Rune

    2014-12-01

    We explore using GERBS, a blending-type spline construction, to represent deformable thin-plates and model terminal ballistics. Strategies to construct geometry for different scenarios of terminal ballistics are proposed.

  7. Using Multivariate Adaptive Regression Spline and Artificial Neural Network to Simulate Urbanization in Mumbai, India

    Science.gov (United States)

    Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.

    2015-12-01

    Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure, or their structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS) and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.
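
    A hedged sketch of the ROC-based comparison: two classifiers are trained on synthetic urbanization drivers and compared by the area under the ROC curve. A logistic regression stands in for MARS here, since a MARS implementation is not assumed to be available; the data and drivers are invented for illustration.

      # Sketch: comparing two urban-growth classifiers by ROC AUC (synthetic data).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      X = rng.normal(size=(2000, 5))            # e.g. distances to roads, water, ...
      p = 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 1] ** 2 + 0.5 * X[:, 2])))
      y = rng.binomial(1, p)                    # 1 = cell became urban

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      models = {
          "logistic (stand-in for MARS)": LogisticRegression(max_iter=1000),
          "neural network (ANN)": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
      }
      for name, m in models.items():
          m.fit(X_tr, y_tr)
          auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
          print(f"{name}: AUC = {auc:.3f}")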

  8. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    Science.gov (United States)

    Michael S. Balshi; A. David McGuire; Paul Duffy; Mike Flannigan; John Walsh; Jerry Melillo

    2009-01-01

    We developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was...

  9. Spline-procedures

    International Nuclear Information System (INIS)

    Schmidt, R.

    1976-12-01

    This report contains a short introduction to spline functions as well as a complete description of the spline procedures presently available in the HMI-library. These include polynomial splines (using either B-splines or one-sided basis representations) and natural splines, as well as their application to interpolation, quasi-interpolation, L2-, and Tchebycheff approximation. Special procedures are included for the case of cubic splines. Complete test examples with input and output are provided for each of the procedures. (orig.) [de
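
    As a small illustration of the cubic-spline case mentioned above, the following sketch interpolates tabulated points with a natural cubic spline using SciPy (not the HMI-library procedures themselves).

      # Sketch: natural cubic-spline interpolation of tabulated points (illustrative data).
      import numpy as np
      from scipy.interpolate import CubicSpline

      x = np.linspace(0, 2 * np.pi, 9)
      y = np.sin(x)

      cs = CubicSpline(x, y, bc_type="natural")   # natural cubic spline
      xx = np.linspace(0, 2 * np.pi, 200)
      print("max interpolation error:", np.max(np.abs(cs(xx) - np.sin(xx))))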

  10. A Note on Penalized Regression Spline Estimation in the Secondary Analysis of Case-Control Data

    KAUST Repository

    Gazioglu, Suzan

    2013-05-25

    Primary analysis of case-control studies focuses on the relationship between disease (D) and a set of covariates of interest (Y, X). A secondary application of the case-control study, often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated due to the case-control sampling, and to avoid the biased sampling that arises from the design, it is typical to use the control data only. In this paper, we develop penalized regression spline methodology that uses all the data, and improves precision of estimation compared to using only the controls. A simulation study and an empirical example are used to illustrate the methodology.

  11. A Note on Penalized Regression Spline Estimation in the Secondary Analysis of Case-Control Data

    KAUST Repository

    Gazioglu, Suzan; Wei, Jiawei; Jennings, Elizabeth M.; Carroll, Raymond J.

    2013-01-01

    Primary analysis of case-control studies focuses on the relationship between disease (D) and a set of covariates of interest (Y, X). A secondary application of the case-control study, often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated due to the case-control sampling, and to avoid the biased sampling that arises from the design, it is typical to use the control data only. In this paper, we develop penalized regression spline methodology that uses all the data, and improves precision of estimation compared to using only the controls. A simulation study and an empirical example are used to illustrate the methodology.
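
    A generic sketch of a penalized regression spline, fitting a truncated-power basis with a ridge-type penalty on the knot coefficients; this illustrates the class of estimator, not the case-control methodology developed in the paper, and uses simulated data.

      # Sketch: penalized regression spline via a truncated-power basis with a
      # quadratic penalty on the knot coefficients (illustrative data).
      import numpy as np

      rng = np.random.default_rng(3)
      x = np.sort(rng.uniform(0, 1, 300))
      y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)

      knots = np.linspace(0.05, 0.95, 20)
      X = np.column_stack([np.ones_like(x), x] + [np.clip(x - k, 0, None) for k in knots])

      lam = 1.0
      D = np.diag([0.0, 0.0] + [1.0] * len(knots))   # penalize only the knot coefficients
      beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
      fitted = X @ beta
      print("residual SD:", np.std(y - fitted))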

  12. Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year

    Science.gov (United States)

    Kamaruddin, Halim Shukri; Ismail, Noriszura

    2014-06-01

    Nonparametric regression uses the data to derive the best coefficients of a model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are used for one-dimensional mortality data. Modelling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, where the accuracy of models and forecasts is the main concern of the industry. The original idea of P-splines is extended here to two-dimensional mortality data. The data, supplied by the Department of Statistics Malaysia, are indexed by age of death and year of death. This extension constructs the best-fitted surface and provides sensible predictions of the underlying mortality rates for the Malaysian case.
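
    A one-dimensional P-spline sketch in the spirit of Eilers and Marx: a rich cubic B-spline basis combined with a second-order difference penalty on the coefficients. The data and tuning parameter are illustrative, not the Malaysian mortality tables.

      # Sketch: 1D P-spline = B-spline basis + difference penalty (illustrative data).
      import numpy as np
      from scipy.interpolate import BSpline

      rng = np.random.default_rng(4)
      x = np.linspace(0, 1, 200)
      y = np.exp(-3 * x) * np.cos(8 * x) + rng.normal(0, 0.05, x.size)

      k = 3                                           # cubic B-splines
      inner = np.linspace(0, 1, 20)
      t = np.concatenate([[0] * k, inner, [1] * k])   # clamped knot vector
      n_basis = len(t) - k - 1
      B = np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(x) for j in range(n_basis)])

      D = np.diff(np.eye(n_basis), n=2, axis=0)       # second-order difference matrix
      lam = 5.0
      alpha = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
      smooth = B @ alpha
      edf = np.trace(np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ B))
      print("effective dimension of the fit:", edf)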

  13. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62 and 54 % increase in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.

  14. PEMODELAN B-SPLINE DAN MARS PADA NILAI UJIAN MASUK TERHADAP IPK MAHASISWA JURUSAN DISAIN KOMUNIKASI VISUAL UK. PETRA SURABAYA

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2006-01-01

    Full Text Available Regression analysis is used to capture the influence of independent variables on dependent ones by examining the relationship between those variables. The task of approximating the mean function can be done essentially in two ways. The quite often used parametric approach is to assume that the mean curve has some prespecified functional form. Alternatively, a nonparametric approach, i.e., one without reference to a specific form, is used when there is no information about the form of the regression function (Haerdle, 1990). The nonparametric approach therefore has more flexibility than the parametric one. The aim of this research is to find the best-fitting model that captures the relationship between the admission test score and the GPA. The data were taken from the Department of Design Communication and Visual, Petra Christian University, Surabaya, for the year 1999. Both approaches were used here. In the parametric approach, we use simple linear, quadratic and cubic regression, and in the nonparametric one, we use B-splines and Multivariate Adaptive Regression Splines (MARS). Overall, the best model was chosen based on the maximum coefficient of determination. For MARS, however, the best model was chosen based on GCV, minimum MSE and maximum coefficient of determination.

  15. 4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties

    Science.gov (United States)

    Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.

    2018-05-01

    4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements for five of the eight combinations of the four kinetics parameters for which parametric maps were created with the bias and noise measures used to analyse them, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral model-based 4D reconstruction produced the best results for 2/8. 2C3K model-based 4D reconstruction generated

  16. Semiparametric regression during 2003–2007

    KAUST Repository

    Ruppert, David; Wand, M.P.; Carroll, Raymond J.

    2009-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.

  17. Color management with a hammer: the B-spline fitter

    Science.gov (United States)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.

  18. SPLINE, Spline Interpolation Function

    International Nuclear Information System (INIS)

    Allouard, Y.

    1977-01-01

    1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to the (2Q-1) order. The program consists of the following two subprograms: ASPLERQ. Transport of relations method for the spline functions of interpolation. SPLQ. Spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10

  19. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.

  20. Joint surface modeling with thin-plate splines.

    Science.gov (United States)

    Boyd, S K; Ronsky, J L; Lichti, D D; Salkauskas, K; Chapman, M A; Salkauskas, D

    1999-10-01

    Mathematical joint surface models based on experimentally determined data points can be used to investigate joint characteristics such as curvature, congruency, cartilage thickness, and joint contact areas, as well as to provide geometric information well suited for finite element analysis. Commonly, surface modeling methods are based on B-splines, which involve tensor products. These methods have had success; however, they are limited due to the complex organizational aspect of working with surface patches, and modeling unordered, scattered experimental data points. An alternative method for mathematical joint surface modeling is presented based on the thin-plate spline (TPS). It has the advantage that it does not involve surface patches, and can model scattered data points without experimental data preparation. An analytical surface was developed and modeled with the TPS to quantify its interpolating and smoothing characteristics. Some limitations of the TPS include discontinuity of curvature at exactly the experimental surface data points, and numerical problems dealing with data sets in excess of 2000 points. However, suggestions for overcoming these limitations are presented. Testing the TPS with real experimental data, the patellofemoral joint of a cat was measured with multistation digital photogrammetry and modeled using the TPS to determine cartilage thicknesses and surface curvature. The cartilage thickness distribution ranged from 100 to 550 microns on the patella, and 100 to 300 microns on the femur. It was found that the TPS was an effective tool for modeling joint surfaces because no preparation of the experimental data points was necessary, and the resulting unique function representing the entire surface does not involve surface patches. A detailed algorithm is presented for implementation of the TPS.
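
    A thin-plate-spline surface fit to scattered points can be sketched with SciPy's RBFInterpolator; the smoothing parameter plays the role of the noise filter discussed above. The points below are synthetic, not the photogrammetric joint data.

      # Sketch: smoothing thin-plate-spline surface fit to scattered 3D points.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(5)
      xy = rng.uniform(-1, 1, size=(500, 2))          # scattered (x, y) locations
      z = np.sin(2 * xy[:, 0]) * np.cos(2 * xy[:, 1]) + rng.normal(0, 0.02, 500)

      # smoothing > 0 turns exact interpolation into a smoothing fit, damping noise
      tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1e-3)

      grid = np.stack(np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50)), axis=-1)
      surface = tps(grid.reshape(-1, 2)).reshape(50, 50)
      print("fitted surface grid shape:", surface.shape)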

  1. Interpolating cubic splines

    CERN Document Server

    Knott, Gary D

    2000-01-01

    A spline is a thin flexible strip composed of a material such as bamboo or steel that can be bent to pass through or near given points in the plane, or in 3-space, in a smooth manner. Mechanical engineers and drafting specialists find such (physical) splines useful in designing and in drawing plans for a wide variety of objects, such as for hulls of boats or for the bodies of automobiles where smooth curves need to be specified. These days, physical splines are largely replaced by computer software that can compute the desired curves (with appropriate encouragement). The same mathematical ideas used for computing "spline" curves can be extended to allow us to compute "spline" surfaces. The application of these mathematical ideas is rather widespread. Spline functions are central to computer graphics disciplines. Spline curves and surfaces are used in computer graphics renderings for both real and imaginary objects. Computer-aided-design (CAD) systems depend on algorithms for computing spline func...

  2. Fingerprint Matching by Thin-plate Spline Modelling of Elastic Deformations

    NARCIS (Netherlands)

    Bazen, A.M.; Gerez, Sabih H.

    2003-01-01

    This paper presents a novel minutiae matching method that describes elastic distortions in fingerprints by means of a thin-plate spline model, which is estimated using a local and a global matching stage. After registration of the fingerprints according to the estimated model, the number of matching

  3. LOCALLY REFINED SPLINES REPRESENTATION FOR GEOSPATIAL BIG DATA

    Directory of Open Access Journals (Sweden)

    T. Dokken

    2015-08-01

    Full Text Available When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-Splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU fp7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing bathymetry of the smooth sea and ocean floor. A drastic reduction is demonstrated in the bulk of the data representation compared to the size of input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection as new sensor data can efficiently be compared to the compact LR B-spline representation.

  4. Spline and spline wavelet methods with applications to signal and image processing

    CERN Document Server

    Averbuch, Amir Z; Zheludev, Valery A

    This volume provides universal methodologies accompanied by Matlab software to manipulate numerous signal and image processing applications. It is done with discrete and polynomial periodic splines. Various contributions of splines to signal and image processing from a unified perspective are presented. This presentation is based on Zak transform and on Spline Harmonic Analysis (SHA) methodology. SHA combines approximation capabilities of splines with the computational efficiency of the Fast Fourier transform. SHA reduces the design of different spline types such as splines, spline wavelets (SW), wavelet frames (SWF) and wavelet packets (SWP) and their manipulations by simple operations. Digital filters, produced by wavelets design process, give birth to subdivision schemes. Subdivision schemes enable to perform fast explicit computation of splines' values at dyadic and triadic rational points. This is used for signals and images upsampling. In addition to the design of a diverse library of splines, SW, SWP a...

  5. Regression analysis of informative current status data with the additive hazards model.

    Science.gov (United States)

    Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo

    2015-04-01

    This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models if the censoring is noninformative, and also there exists a large literature on parametric analysis of informative current status data in the context of tumorgenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented and in the method, the copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.

  6. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
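
    The matrix-algebra idea behind such comparisons can be sketched outside SPSS: the difference between two fitted values is c'beta, where c is the difference of the two predictor rows, and its standard error is sqrt(c' Cov(beta) c). The sketch below uses ordinary least squares on invented data; it is not the !OLScomp macro itself.

      # Sketch: standard error and CI for the difference between two fitted values.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      n = 100
      x = rng.normal(size=n)
      X = np.column_stack([np.ones(n), x, x ** 2])     # model with a polynomial term
      y = 1 + 0.5 * x - 0.3 * x ** 2 + rng.normal(0, 1, n)

      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      resid = y - X @ beta
      dof = n - X.shape[1]
      sigma2 = resid @ resid / dof
      cov_beta = sigma2 * np.linalg.inv(X.T @ X)

      x1 = np.array([1.0, 2.0, 4.0])                   # predictor row for x = 2
      x2 = np.array([1.0, 1.0, 1.0])                   # predictor row for x = 1
      c = x1 - x2
      diff = c @ beta
      se = np.sqrt(c @ cov_beta @ c)
      tcrit = stats.t.ppf(0.975, df=dof)
      pval = 2 * stats.t.sf(abs(diff / se), df=dof)
      print(f"difference = {diff:.3f}, SE = {se:.3f}, "
            f"95% CI = ({diff - tcrit * se:.3f}, {diff + tcrit * se:.3f}), p = {pval:.3f}")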

  7. Interpolating Spline Curve-Based Perceptual Encryption for 3D Printing Models

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the development of 3D printing technology, 3D printing has recently been applied to many areas of life including healthcare and the automotive industry. Due to the benefit of 3D printing, 3D printing models are often attacked by hackers and distributed without agreement from the original providers. Furthermore, certain special models and anti-weapon models in 3D printing must be protected against unauthorized users. Therefore, in order to prevent attacks and illegal copying and to ensure that all access is authorized, 3D printing models should be encrypted before being transmitted and stored. A novel perceptual encryption algorithm for 3D printing models for secure storage and transmission is presented in this paper. A facet of the 3D printing model is extracted to interpolate a spline curve of degree 2 in three-dimensional space that is determined by three control points, the curvature coefficients of degree 2, and an interpolating vector. The three control points, the curvature coefficients, and the interpolating vector of the spline curve of degree 2 are encrypted by a secret key. The encrypted features of the spline curve are then used to obtain the encrypted 3D printing model by inverse interpolation and geometric distortion. The results of experiments and evaluations prove that the entire 3D triangle model is altered and deformed after the perceptual encryption process. The proposed algorithm is responsive to the various formats of 3D printing models. The results of the perceptual encryption process are superior to those of previous methods. The proposed algorithm also provides a better method and more security than previous methods.

  8. Integration of association statistics over genomic regions using Bayesian adaptive regression splines

    Directory of Open Access Journals (Sweden)

    Zhang Xiaohua

    2003-11-01

    Full Text Available Abstract In the search for genetic determinants of complex disease, two approaches to association analysis are most often employed, testing single loci or testing a small group of loci jointly via haplotypes for their relationship to disease status. It is still debatable which of these approaches is more favourable, and under what conditions. The former has the advantage of simplicity but suffers severely when alleles at the tested loci are not in linkage disequilibrium (LD) with liability alleles; the latter should capture more of the signal encoded in LD, but is far from simple. The complexity of haplotype analysis could be especially troublesome for association scans over large genomic regions, which, in fact, is becoming the standard design. For these reasons, the authors have been evaluating statistical methods that bridge the gap between single-locus and haplotype-based tests. In this article, they present one such method, which uses non-parametric regression techniques embodied by Bayesian adaptive regression splines (BARS). For a set of markers falling within a common genomic region and a corresponding set of single-locus association statistics, the BARS procedure integrates these results into a single test by examining the class of smooth curves consistent with the data. The non-parametric BARS procedure generally finds no signal when no liability allele exists in the tested region (i.e., it achieves the specified size of the test) and it is sensitive enough to pick up signals when a liability allele is present. The BARS procedure provides a robust and potentially powerful alternative to classical tests of association, diminishes the multiple testing problem inherent in those tests and can be applied to a wide range of data types, including genotype frequencies estimated from pooled samples.

  9. Gamma Splines and Wavelets

    Directory of Open Access Journals (Sweden)

    Hannu Olkkonen

    2013-01-01

    Full Text Available In this work we introduce a new family of splines, termed gamma splines, for continuous signal approximation and multiresolution analysis. The gamma splines arise from repeated convolution of the exponential function with itself. We study the properties of the discrete gamma splines in signal interpolation and approximation. We prove that the gamma splines obey the two-scale equation based on the polyphase decomposition, which allows us to introduce the shift-invariant gamma spline wavelet transform for tree-structured subscale analysis of asymmetric signal waveforms and for systems with asymmetric impulse response. In particular, we consider applications in biomedical signal analysis (EEG, ECG, and EMG). Finally, we discuss the suitability of gamma spline signal processing in an embedded VLSI environment.

  10. A free-knot spline modeling framework for piecewise linear logistic regression in complex samples with body mass index and mortality as an example

    Directory of Open Access Journals (Sweden)

    Scott W. Keith

    2014-09-01

    Full Text Available This paper details the design, evaluation, and implementation of a framework for detecting and modeling nonlinearity between a binary outcome and a continuous predictor variable adjusted for covariates in complex samples. The framework provides familiar-looking parameterizations of output in terms of linear slope coefficients and odds ratios. Estimation methods focus on maximum likelihood optimization of piecewise linear free-knot splines formulated as B-splines. Correctly specifying the optimal number and positions of the knots improves the model, but is marked by computational intensity and numerical instability. Our inference methods utilize both parametric and nonparametric bootstrapping. Unlike other nonlinear modeling packages, this framework is designed to incorporate multistage survey sample designs common to nationally representative datasets. We illustrate the approach and evaluate its performance in specifying the correct number of knots under various conditions with an example using body mass index (BMI; kg/m²) and the complex multi-stage sampling design from the Third National Health and Nutrition Examination Survey to simulate binary mortality outcomes data having realistic nonlinear sample-weighted risk associations with BMI. BMI and mortality data provide a particularly apt example and area of application since BMI is commonly recorded in large health surveys with complex designs, often categorized for modeling, and nonlinearly related to mortality. When complex sample design considerations were ignored, our method was generally similar to or more accurate than two common model selection procedures, Schwarz’s Bayesian Information Criterion (BIC) and Akaike’s Information Criterion (AIC), in terms of selecting the correct number of knots. Our approach provided accurate knot selections when complex sampling weights were incorporated, while AIC and BIC were not effective under these conditions.
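
    A stripped-down sketch of piecewise linear logistic regression with a single fixed knot in BMI; it ignores the survey weights, free-knot optimization and bootstrap inference that the framework above provides, and the data are simulated.

      # Sketch: piecewise linear logistic regression with one fixed knot (synthetic data).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)
      bmi = rng.uniform(16, 45, 5000)
      # U-shaped (nonlinear) true risk: higher at both low and high BMI
      logit = -3 + 0.12 * np.abs(bmi - 25)
      died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      knot = 25.0
      X = np.column_stack([bmi, np.clip(bmi - knot, 0, None)])   # slope change at the knot
      fit = LogisticRegression(max_iter=1000).fit(X, died)

      b1, b2 = fit.coef_[0]
      print(f"log-odds slope below knot: {b1:.3f}, above knot: {b1 + b2:.3f}")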

  11. On Solving Lq-Penalized Regressions

    Directory of Open Access Journals (Sweden)

    Tracy Zhou Wu

    2007-01-01

    Full Text Available Lq-penalized regression arises in multidimensional statistical modelling where all or part of the regression coefficients are penalized to achieve both accuracy and parsimony of statistical models. There is often substantial computational difficulty except for the quadratic penalty case. The difficulty is partly due to the nonsmoothness of the objective function inherited from the use of the absolute value. We propose a new solution method for the general Lq-penalized regression problem based on space transformation and thus efficient optimization algorithms. The new method has immediate applications in statistics, notably in penalized spline smoothing problems. In particular, the LASSO problem is shown to be polynomial time solvable. Numerical studies show promise of our approach.
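
    The q = 1 case (the LASSO) can be illustrated directly; the sketch below uses scikit-learn's coordinate-descent solver rather than the space-transformation method proposed in the paper, and the data are simulated.

      # Sketch: L1-penalized (LASSO) regression recovering a sparse coefficient vector.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(8)
      n, p = 200, 50
      X = rng.normal(size=(n, p))
      beta_true = np.zeros(p)
      beta_true[:5] = [3, -2, 1.5, 1, 4]               # only 5 active predictors
      y = X @ beta_true + rng.normal(0, 1, n)

      lasso = Lasso(alpha=0.1).fit(X, y)
      print("indices of non-zero coefficients:", np.flatnonzero(lasso.coef_))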

  12. Linear spline multilevel models for summarising childhood growth trajectories: A guide to their application using examples from five birth cohorts.

    Science.gov (United States)

    Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio Jd; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A

    2016-10-01

    Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. © The Author(s) 2013.

  13. Multivariate Adaptative Regression Splines (MARS, una alternativa para el análisis de series de tiempo

    Directory of Open Access Journals (Sweden)

    Jairo Vanegas

    2017-05-01

    Full Text Available Multivariate Adaptive Regression Splines (MARS) is a nonparametric modelling method that extends the linear model by incorporating nonlinearities and interactions between variables. It is a flexible tool that automates the construction of prediction models, selecting relevant variables, transforming the predictor variables, handling missing values and preventing overfitting by means of a self-test. It also allows predictions that take into account structural factors that might influence the response variable, thereby generating hypothetical models. The final result can be used to identify relevant cut-off points in data series. MARS is little used in the health field, so it is proposed here as an additional tool for the evaluation of relevant public health indicators. For demonstration purposes, series of under-five mortality data from Costa Rica for the period 1978-2008 were used.

  14. Curve fitting and modeling with splines using statistical variable selection techniques

    Science.gov (United States)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  15. Microscopic Model of Automobile Lane-changing Virtual Desire Trajectory by Spline Curves

    Directory of Open Access Journals (Sweden)

    Yulong Pei

    2010-05-01

    Full Text Available With the development of microscopic traffic simulation models, they have increasingly become an important tool for transport system analysis and management, assisting the traffic engineer in investigating and evaluating the performance of transport network systems. A lane-changing model is a vital component of any traffic simulation model, which can improve road capacity and reduce vehicle delay so as to reduce the likelihood of congestion. This paper therefore addresses the virtual desire trajectory, a vital part of investigating the lane-changing behaviour, which is divided into four phases. Based on the boundary conditions, β-spline curves and the corresponding reverse algorithm are introduced first. The relation between the velocity and the length of the lane change is then constructed, constrained by the curvature, steering velocity and driving behaviour. The virtual desire trajectory curves are then presented in Matlab, and the error analysis results show that the proposed description model has higher precision in reconstructing the automobile lane-changing process, compared with the surveyed result. KEY WORDS: traffic simulation, lane-changing model, virtual desire trajectory, β-spline curves, driving behaviour

  16. Designing interactively with elastic splines

    DEFF Research Database (Denmark)

    Brander, David; Bærentzen, Jakob Andreas; Fisker, Ann-Sofie

    2018-01-01

    We present an algorithm for designing interactively with C1 elastic splines. The idea is to design the elastic spline using a C1 cubic polynomial spline where each polynomial segment is so close to satisfying the Euler-Lagrange equation for elastic curves that the visual difference becomes negligible. Using a database of cubic Bézier curves we are able to interactively modify the cubic spline such that it remains visually close to an elastic spline.

  17. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    Science.gov (United States)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous-equation regression model and a fusion of parametric and nonparametric models. The regression model comprises several equations, each with two components, one parametric and one nonparametric. The model used here has a linear function as the parametric component and a truncated polynomial spline as the nonparametric component. The model can handle both linear and nonlinear relationships between the response and the set of predictor variables. The aim of this paper is to demonstrate the application of this regression model to the effect of regional socio-economic variables on the use of information technology. More specifically, the response variables are the percentage of households with internet access and the percentage of households with a personal computer, while the predictor variables are the percentage of literate people, the percentage of electrification and the percentage of economic growth. Based on identification of the relationships between response and predictor variables, economic growth is treated as the nonparametric predictor and the others as parametric predictors. The result shows that multiresponse semiparametric regression can be applied well, as indicated by the high coefficient of determination of 90 percent.

  18. Spline techniques for magnetic fields

    International Nuclear Information System (INIS)

    Aspinall, J.G.

    1984-01-01

    This report is an overview of B-spline techniques, oriented toward magnetic field computation. These techniques form a powerful mathematical approximating method for many physics and engineering calculations. In section 1, the concept of a polynomial spline is introduced. Section 2 shows how a particular spline with well chosen properties, the B-spline, can be used to build any spline. In section 3, the description of how to solve a simple spline approximation problem is completed, and some practical examples of using splines are shown. All these sections deal exclusively in scalar functions of one variable for simplicity. Section 4 is partly digression. Techniques that are not B-spline techniques, but are closely related, are covered. These methods are not needed for what follows, until the last section on errors. Sections 5, 6, and 7 form a second group which work toward the final goal of using B-splines to approximate a magnetic field. Section 5 demonstrates how to approximate a scalar function of many variables. The necessary mathematics is completed in section 6, where the problems of approximating a vector function in general, and a magnetic field in particular, are examined. Finally some algorithms and data organization are shown in section 7. Section 8 deals with error analysis

  19. Spline approximation, Part 1: Basic methodology

    Science.gov (United States)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.

  20. Spline Interpolation of Image

    OpenAIRE

    I. Kuba; J. Zavacky; J. Mihalik

    1995-01-01

    This paper presents the use of B-spline functions in various digital signal processing applications. The theory of one-dimensional B-spline interpolation is briefly reviewed, followed by its extension to two dimensions. After presenting one- and two-dimensional spline interpolation, algorithms for image interpolation and resolution increase are proposed. Finally, experimental results of computer simulations are presented.
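
    A minimal sketch of spline-based resolution increase for an image: scipy.ndimage.zoom with order=3 performs cubic B-spline interpolation. The image is synthetic; this illustrates only the underlying operation, not the authors' algorithms.

      # Sketch: 4x resolution increase of an image with cubic B-spline interpolation.
      import numpy as np
      from scipy import ndimage

      # A small synthetic 64x64 "image"
      yy, xx = np.mgrid[0:64, 0:64]
      img = np.sin(xx / 6.0) * np.cos(yy / 9.0)

      upscaled = ndimage.zoom(img, zoom=4, order=3)   # order=3 -> cubic B-spline
      print(img.shape, "->", upscaled.shape)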

  1. Multivariate Epi-splines and Evolving Function Identification Problems

    Science.gov (United States)

    2015-04-15

    ... such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the ... previous study [30] dealt with compact intervals of ℝ. Splines are intimately tied to optimization problems through their variational theory pioneered ... approximation. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, financial curve construction ...

  2. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

    In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, based on obtained results Cubist algorithm may be advised for Landsat based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal. It gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of individual predictions. Heterogeneous model ensembles performed for individual time points assessments at least as well as the best individual models. In case of imperviousness change assessment the ensembles always outperformed single model approaches. It means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
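
    The ensembling idea can be sketched by averaging predictions from a few heterogeneous non-linear regressors; the models, data and unweighted averaging below are illustrative assumptions, not the ensemble construction used in the study.

      # Sketch: heterogeneous regression ensemble by simple prediction averaging.
      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.neighbors import KNeighborsRegressor
      from sklearn.svm import SVR
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error

      X, y = make_regression(n_samples=1000, n_features=8, noise=10.0, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      models = [RandomForestRegressor(n_estimators=200, random_state=0),
                KNeighborsRegressor(n_neighbors=10),
                SVR(kernel="rbf", C=10.0)]
      preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in models])
      ensemble = preds.mean(axis=1)                    # unweighted model averaging

      for m, p in zip(models, preds.T):
          print(type(m).__name__, "RMSE:", np.sqrt(mean_squared_error(y_te, p)))
      print("Ensemble RMSE:", np.sqrt(mean_squared_error(y_te, ensemble)))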

  3. Spline Truncated Multivariabel pada Permodelan Nilai Ujian Nasional di Kabupaten Lombok Barat

    Directory of Open Access Journals (Sweden)

    Nurul Fitriyani

    2017-12-01

    Full Text Available A regression model is used to analyze the relationship between a dependent variable and independent variables. If the form of the regression curve is not known, the regression curve can be estimated with a nonparametric regression approach. This study aimed to investigate the relationship between the National Examination score and the factors that influence it. The statistical analysis used was multivariable truncated spline regression. The analysis showed that the best model is obtained using three knot points. This model produced a minimum GCV value of 44.46 and a coefficient of determination of 58.627%. The parameter tests showed that all factors used significantly influence the National Examination score for senior high school students in West Lombok Regency in 2017. The variables were as follows: National Examination score in junior high school; school or madrasah examination score; the student's report card value; the student's distance from home to school; and the number of the student's siblings.

  4. Assessment of susceptibility to earth-flow landslide using logistic regression and multivariate adaptive regression splines: A case of the Belice River basin (western Sicily, Italy)

    Science.gov (United States)

    Conoscenti, Christian; Ciaccio, Marilena; Caraballo-Arias, Nathalie Almaru; Gómez-Gutiérrez, Álvaro; Rotigliano, Edoardo; Agnesi, Valerio

    2015-08-01

    In this paper, terrain susceptibility to earth-flow occurrence was evaluated by using geographic information systems (GIS) and two statistical methods: logistic regression (LR) and multivariate adaptive regression splines (MARS). LR has been already demonstrated to provide reliable predictions of earth-flow occurrence, whereas MARS, as far as we know, has never been used to generate earth-flow susceptibility models. The experiment was carried out in a basin of western Sicily (Italy), which extends for 51 km² and is severely affected by earth-flows. In total, we mapped 1376 earth-flows, covering an area of 4.59 km². To explore the effect of pre-failure topography on earth-flow spatial distribution, we performed a reconstruction of topography before the landslide occurrence. This was achieved by preparing a digital terrain model (DTM) where altitude of areas hosting landslides was interpolated from the adjacent undisturbed land surface by using the algorithm topo-to-raster. This DTM was exploited to extract 15 morphological and hydrological variables that, in addition to outcropping lithology, were employed as explanatory variables of earth-flow spatial distribution. The predictive skill of the earth-flow susceptibility models and the robustness of the procedure were tested by preparing five datasets, each including a different subset of landslides and stable areas. The accuracy of the predictive models was evaluated by drawing receiver operating characteristic (ROC) curves and by calculating the area under the ROC curve (AUC). The results demonstrate that the overall accuracy of LR and MARS earth-flow susceptibility models is from excellent to outstanding. However, AUC values of the validation datasets attest to a higher predictive power of MARS-models (AUC between 0.881 and 0.912) with respect to LR-models (AUC between 0.823 and 0.870). The adopted procedure proved to be resistant to overfitting and stable when changes of the learning and validation samples are

  5. Modeling the dispersion of atmospheric pollution using cubic splines and chapeau functions

    Energy Technology Data Exchange (ETDEWEB)

    Pepper, D W; Kern, C D; Long, P E

    1979-01-01

    Two methods that can be used to solve complex, three-dimensional, advection-diffusion transport equations are investigated. A quasi-Lagrangian cubic spline method and a chapeau function method are compared in advecting a passive scalar. The methods are simple to use, computationally fast, and reasonably accurate. Little numerical dissipation is manifested by the schemes. In simple advection tests with equal mesh spacing, the chapeau function method maintains slightly more accurate peak values than the cubic spline method. In tests with unequal mesh spacing, the cubic spline method has less noise, but slightly more damping than the standard chapeau method has. Both cubic splines and chapeau functions can be used to solve the three-dimensional problem of gaseous emissions dispersion without excessive programing complexity or storage requirements. (10 diagrams, 39 references, 2 tables)

  6. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    Energy Technology Data Exchange (ETDEWEB)

    M Ali, M. K., E-mail: majidkhankhan@ymail.com, E-mail: eutoco@gmail.com; Ruslan, M. H., E-mail: majidkhankhan@ymail.com, E-mail: eutoco@gmail.com [Solar Energy Research Institute (SERI), Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia); Muthuvalu, M. S., E-mail: sudaram-@yahoo.com, E-mail: jumat@ums.edu.my; Wong, J., E-mail: sudaram-@yahoo.com, E-mail: jumat@ums.edu.my [Unit Penyelidikan Rumpai Laut (UPRL), Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia); Sulaiman, J., E-mail: ysuhaimi@ums.edu.my, E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md., E-mail: ysuhaimi@ums.edu.my, E-mail: hafidzruslan@eng.ukm.my [Program Matematik dengan Ekonomi, Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia)

    2014-06-19

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the drying rate plots need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method consists of an approximation of the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the determination of the instantaneous drying rate. The method of minimization of the functional of average risk was used successfully to solve the problem, allowing the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, compared using the coefficient of determination (R²) and the root mean square error (RMSE). The two-term model was found to describe the drying behavior best. Besides that, the drying rate smoothed using the CS proves to be an effective estimator for moisture-time curves, as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
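    A rough Python sketch of the workflow, on synthetic moisture data: smooth the moisture-time curve with a cubic smoothing spline, differentiate it analytically to obtain the instantaneous drying rate, and fit a two-term thin-layer model scored by R² and RMSE. All numbers are placeholders, not the experiment's data.

    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.optimize import curve_fit

    t = np.linspace(0.0, 96.0, 40)                         # hours over 4 days (synthetic)
    M = 0.082 + (0.934 - 0.082) * np.exp(-0.05 * t)        # moisture content stand-in
    M += np.random.default_rng(1).normal(0.0, 0.005, t.size)

    spl = UnivariateSpline(t, M, k=3, s=t.size * 0.005**2) # cubic smoothing spline
    rate = -spl.derivative()(t)                            # instantaneous drying rate

    def two_term(t, a, k1, b, k2):                         # two-term thin-layer model
        return a * np.exp(-k1 * t) + b * np.exp(-k2 * t)

    p, _ = curve_fit(two_term, t, M, p0=(0.5, 0.05, 0.5, 0.01), maxfev=10000)
    resid = M - two_term(t, *p)
    rmse = np.sqrt(np.mean(resid**2))
    r2 = 1.0 - np.sum(resid**2) / np.sum((M - M.mean())**2)
    print(f"R^2 = {r2:.4f}, RMSE = {rmse:.4f}")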

  7. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    International Nuclear Information System (INIS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-01-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the drying rate plots need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method consists of an approximation of the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the determination of the instantaneous drying rate. The method of minimization of the functional of average risk was used successfully to solve the problem, allowing the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, compared using the coefficient of determination (R²) and the root mean square error (RMSE). The two-term model was found to describe the drying behavior best. Besides that, the drying rate smoothed using the CS proves to be an effective estimator for moisture-time curves, as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  8. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    Science.gov (United States)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the drying rate plots need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method consists of an approximation of the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the determination of the instantaneous drying rate. The method of minimization of the functional of average risk was used successfully to solve the problem, allowing the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, compared using the coefficient of determination (R²) and the root mean square error (RMSE). The two-term model was found to describe the drying behavior best. Besides that, the drying rate smoothed using the CS proves to be an effective estimator for moisture-time curves, as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  9. Isogeometric analysis using T-splines

    KAUST Repository

    Bazilevs, Yuri

    2010-01-01

    We explore T-splines, a generalization of NURBS enabling local refinement, as a basis for isogeometric analysis. We review T-splines as a surface design methodology and then develop it for engineering analysis applications. We test T-splines on some elementary two-dimensional and three-dimensional fluid and structural analysis problems and attain good results in all cases. We summarize the current status of T-splines, their limitations, and future possibilities. © 2009 Elsevier B.V.

  10. Regional Densification of a Global VTEC Model Based on B-Spline Representations

    Science.gov (United States)

    Erdogan, Eren; Schmidt, Michael; Dettmering, Denise; Goss, Andreas; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Mrotzek, Niclas

    2017-04-01

    The project OPTIMAP is a joint initiative of the Bundeswehr GeoInformation Centre (BGIC), the German Space Situational Awareness Centre (GSSAC), the German Geodetic Research Institute of the Technical University Munich (DGFI-TUM) and the Institute for Astrophysics at the University of Göttingen (IAG). The main goal of the project is the development of an operational tool for ionospheric mapping and prediction (OPTIMAP). Two key features of the project are the combination of different satellite observation techniques (GNSS, satellite altimetry, radio occultations and DORIS) and the regional densification as a remedy against problems encountered with the inhomogeneous data distribution. Since the data from space-geoscientific missions which can be used for modeling ionospheric parameters, such as the Vertical Total Electron Content (VTEC) or the electron density, are distributed rather unevenly over the globe at different altitudes, appropriate modeling approaches have to be developed to handle this inhomogeneity. Our approach is based on a two-level strategy. To be more specific, in the first level we compute a global VTEC model with a moderate regional and spectral resolution, which is complemented in the second level by a regional model in a densification area. The latter is a region characterized by a dense data distribution, used to obtain a high spatial and spectral resolution VTEC product. Additionally, the global representation serves as a background model for the regional one to avoid edge effects at the boundaries of the densification area. The presented approach based on a global and a regional model part, i.e. the consideration of a regional densification, is called the Two-Level VTEC Model (TLVM). The global VTEC model part is based on a series expansion in terms of polynomial B-splines in the latitude direction and trigonometric B-splines in the longitude direction. The additional regional model part is set up by a series expansion in terms of polynomial B-splines for both latitude and longitude.
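    A hedged sketch of a tensor-product B-spline VTEC surface. For simplicity it uses polynomial B-splines in both directions (the project uses trigonometric B-splines in longitude), and the coefficient grid is random, standing in for estimated values; a regional densification level would add a second, denser expansion on top of this background.

    import numpy as np
    from scipy.interpolate import BSpline

    def basis_matrix(knots, degree, x):
        """All B-spline basis functions of one level, evaluated at points x."""
        n = len(knots) - degree - 1
        eye = np.eye(n)
        return np.column_stack([BSpline(knots, eye[i], degree)(x) for i in range(n)])

    deg = 2
    kn_lat = np.concatenate(([-90.0] * deg, np.linspace(-90.0, 90.0, 8), [90.0] * deg))
    kn_lon = np.concatenate(([-180.0] * deg, np.linspace(-180.0, 180.0, 12), [180.0] * deg))

    lat = np.linspace(-85.0, 85.0, 50)
    lon = np.linspace(-175.0, 175.0, 80)
    n_lat, n_lon = len(kn_lat) - deg - 1, len(kn_lon) - deg - 1
    C = np.random.default_rng(2).uniform(0.0, 40.0, (n_lat, n_lon))  # placeholder TECU coefficients

    VTEC = basis_matrix(kn_lat, deg, lat) @ C @ basis_matrix(kn_lon, deg, lon).T
    print(VTEC.shape)  # (50, 80) background grid; a regional level would add a denser expansion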

  11. Adaptive B-spline volume representation of measured BRDF data for photorealistic rendering

    Directory of Open Access Journals (Sweden)

    Hyungjun Park

    2015-01-01

    Measured bidirectional reflectance distribution function (BRDF) data have been used to represent the complex interaction between lights and surface materials for photorealistic rendering. However, their massive size makes it hard to adopt them in practical rendering applications. In this paper, we propose an adaptive method for B-spline volume representation of measured BRDF data. It basically performs approximate B-spline volume lofting, which decomposes the problem into three sub-problems of multiple B-spline curve fitting along the u-, v-, and w-parametric directions. In particular, it makes efficient use of knots in the multiple B-spline curve fitting and thereby accomplishes adaptive knot placement along each parametric direction of the resulting B-spline volume. The proposed method is quite useful for achieving efficient data reduction while smoothing out noise and preserving the overall features of the BRDF data. By applying the B-spline volume models of real materials for rendering, we show that they are effective in preserving the features of material appearance and are suitable for representing BRDF data.

  12. A fourth order spline collocation approach for a business cycle model

    Science.gov (United States)

    Sayfy, A.; Khoury, S.; Ibdah, H.

    2013-10-01

    A collocation approach based on fourth-order cubic B-splines is presented for the numerical solution of a Kaleckian business cycle model formulated by a nonlinear delay differential equation. The equation is approximated and the nonlinearity is handled by employing an iterative scheme arising from Newton's method. It is shown that the model exhibits a conditionally stable dynamical cycle. The fourth-order rate of convergence of the scheme is verified numerically for different special cases.

  13. Automatic Shape Control of Triangular B-Splines of Arbitrary Topology

    Institute of Scientific and Technical Information of China (English)

    Ying He; Xian-Feng Gu; Hong Qin

    2006-01-01

    Triangular B-splines are powerful and flexible in modeling a broader class of geometric objects defined over arbitrary, non-rectangular domains. Despite their great potential and advantages in theory, practical techniques and computational tools with triangular B-splines are less developed. This is mainly because users have to handle a large number of irregularly distributed control points over an arbitrary triangulation. In this paper, an automatic and efficient method is proposed to generate visually pleasing, high-quality triangular B-splines of arbitrary topology. The experimental results on several real datasets show that triangular B-splines are powerful and effective in both theory and practice.

  14. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline functions theory. The origin of the book was an effort to show that spline theory parallels Hilbertian Kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise functions type. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  15. Comparative Performance of Complex-Valued B-Spline and Polynomial Models Applied to Iterative Frequency-Domain Decision Feedback Equalization of Hammerstein Channels.

    Science.gov (United States)

    Chen, Sheng; Hong, Xia; Khalaf, Emad F; Alsaadi, Fuad E; Harris, Chris J

    2017-12-01

    The complex-valued (CV) B-spline neural network approach offers a highly effective means for identifying and inverting practical Hammerstein systems. Compared with its conventional CV polynomial-based counterpart, a CV B-spline neural network has superior performance in identifying and inverting CV Hammerstein systems, while imposing a similar complexity. This paper reviews the optimality of the CV B-spline neural network approach. Advantages of the B-spline neural network approach compared with the polynomial-based modeling approach are extensively discussed, and the effectiveness of the CV neural network-based approach is demonstrated in a real-world application. More specifically, we evaluate the comparative performance of the CV B-spline and polynomial-based approaches for the nonlinear iterative frequency-domain decision feedback equalization (NIFDDFE) of single-carrier Hammerstein channels. Our results confirm the superior performance of the CV B-spline-based NIFDDFE over its CV polynomial-based counterpart.

  16. Symmetric, discrete fractional splines and Gabor systems

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2006-01-01

    In this paper we consider fractional splines as windows for Gabor frames. We introduce two new types of symmetric, fractional splines in addition to one found by Unser and Blu. For the finite, discrete case we present two families of splines: one is created by sampling and periodizing the continuous splines, and one is a truly finite, discrete construction. We discuss the properties of these splines and their usefulness as windows for Gabor frames and Wilson bases.

  17. Deconvolution using thin-plate splines

    International Nuclear Information System (INIS)

    Toussaint, Udo v.; Gori, Silvio

    2007-01-01

    The ubiquitous problem of estimating 2-dimensional profile information from a set of line-integrated measurements is tackled with Bayesian probability theory by exploiting prior information about local smoothness. For this purpose thin-plate splines (the 2-D minimal-curvature analogue of cubic splines in 1-D) are employed. The optimal number of support points required for inversion of 2-D tomographic problems is determined using model comparison. Properties of this approach are discussed and the question of suitable priors is addressed. Finally, we illustrate the properties of this approach with 2-D inversion results using data from line-integrated measurements from fusion experiments.
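    A minimal sketch of thin-plate-spline smoothing of scattered 2-D samples, here via SciPy's RBFInterpolator rather than the paper's Bayesian machinery; the data are synthetic.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(3)
    pts = rng.uniform(-1.0, 1.0, (200, 2))                 # measurement locations
    vals = np.exp(-4.0 * (pts**2).sum(axis=1))             # smooth 2-D profile
    vals += rng.normal(0.0, 0.02, 200)                     # measurement noise

    tps = RBFInterpolator(pts, vals, kernel='thin_plate_spline', smoothing=1e-2)

    grid = np.mgrid[-1:1:50j, -1:1:50j].reshape(2, -1).T   # evaluation grid
    profile = tps(grid).reshape(50, 50)
    print(profile.max())                                   # near 1 at the origin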

  18. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    Science.gov (United States)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data, and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
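    A small sketch of the equidistant-sampling-point setup: a cubic least-squares spline with equidistant interior knots separates a synthetic background from a superimposed wave, and the residuals estimate the fluctuations. The repeating-spline refinement itself is not reproduced here.

    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    t = np.linspace(0.0, 10.0, 500)
    background = 2.0 + 0.3 * t - 0.02 * t**2
    series = background + 0.1 * np.sin(2.0 * np.pi * t)    # superimposed wave

    knots = np.linspace(t[0], t[-1], 8)[1:-1]              # equidistant interior knots
    spl = LSQUnivariateSpline(t, series, knots, k=3)

    residuals = series - spl(t)                            # fluctuation estimate
    print(np.abs(spl(t) - background).max())               # background approximation error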

  19. Bayesian Nonparametric Regression Analysis of Data with Random Effects Covariates from Longitudinal Measurements

    KAUST Repository

    Ryu, Duchwan

    2010-09-28

    We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.

  20. Interpolation of natural cubic spline

    Directory of Open Access Journals (Sweden)

    Arun Kumar

    1992-01-01

    From the result in [1] it follows that there is a unique quadratic spline which bounds the same area as that of the function. The matching of the area for the cubic spline does not follow from the corresponding result proved in [2]. We obtain cubic splines which preserve the area of the function.

  1. A simultaneous confidence band for sparse longitudinal regression

    KAUST Repository

    Ma, Shujie; Yang, Lijian; Carroll, Raymond J.

    2012-01-01

    Functional data analysis has received considerable recent attention and a number of successful applications have been reported. In this paper, asymptotically simultaneous confidence bands are obtained for the mean function of the functional regression model, using piecewise constant spline estimation. Simulation experiments corroborate the asymptotic theory. The confidence band procedure is illustrated by analyzing CD4 cell counts of HIV infected patients.

  2. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation.

    Science.gov (United States)

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating "unusual" populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php.

  3. Study of cyanotoxins presence from experimental cyanobacteria concentrations using a new data mining methodology based on multivariate adaptive regression splines in Trasona reservoir (Northern Spain).

    Science.gov (United States)

    Garcia Nieto, P J; Sánchez Lasheras, F; de Cos Juez, F J; Alonso Fernández, J R

    2011-11-15

    There is an increasing need to describe cyanobacteria blooms since some cyanobacteria produce toxins, termed cyanotoxins. These can be toxic and dangerous to humans as well as other animals and life in general. It must be remarked that cyanobacteria reproduce explosively under certain conditions. This results in algae blooms, which can become harmful to other species if the cyanobacteria involved produce cyanotoxins. In this research work, the evolution of cyanotoxins in the Trasona reservoir (Principality of Asturias, Northern Spain) was studied successfully using a data mining methodology based on the multivariate adaptive regression splines (MARS) technique. The results of the present study are two-fold: on the one hand, the importance of the different kinds of cyanobacteria for the presence of cyanotoxins in the reservoir is established through the MARS model, and on the other hand, a predictive model able to forecast the possible presence of cyanotoxins in the short term was obtained. The agreement of the MARS model with the experimental data confirmed its good performance. Finally, conclusions of this innovative research are presented. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Image edges detection through B-Spline filters

    International Nuclear Information System (INIS)

    Mastropiero, D.G.

    1997-01-01

    B-Spline signal processing was used to detect the edges of a digital image. This technique is based upon processing the image in the Spline transform domain, instead of in the space domain (classical processing). The transformation to the Spline transform domain means finding the real coefficients that make it possible to interpolate the grey levels of the original image with a B-Spline polynomial. There exist basically two methods of carrying out this interpolation, which gives rise to two different Spline transforms: an exact interpolation of the grey values (direct Spline transform), and an approximated interpolation (smoothing Spline transform). The latter results in a higher smoothness of the grey distribution function defined by the Spline transform coefficients, and is carried out with the aim of obtaining an edge detection algorithm with higher immunity to noise. Finally, the transformed image was processed in order to detect the edges of the original image (the gradient method was used), and the results of the three methods (classical, direct Spline transform and smoothing Spline transform) were compared. As expected, the smoothing Spline transform technique produced a detection algorithm more immune to external noise. On the other hand, the direct Spline transform technique emphasizes the edges more, even more than the classical method. As far as computing time is concerned, the classical method is clearly the fastest one, and may be applied whenever the presence of noise is not important and edges with high detail are not required in the final image. (author). 9 refs., 17 figs., 1 tab
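    A rough analogue of the smoothing-spline-then-gradient pipeline, using SciPy's smoothing bivariate spline on a synthetic noisy step image; the smoothing factor s plays the role of the approximated interpolation, and all parameters are invented.

    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    n = 64
    img = np.zeros((n, n))
    img[:, n // 2:] = 1.0                                      # vertical step edge
    img += np.random.default_rng(4).normal(0.0, 0.1, (n, n))   # noise

    rows = cols = np.arange(n, dtype=float)
    spl = RectBivariateSpline(rows, cols, img, s=n * n * 0.01) # smoothing spline fit

    gx = spl(rows, cols, dx=1)                                 # derivative along rows
    gy = spl(rows, cols, dy=1)                                 # derivative along columns
    edges = np.hypot(gx, gy)                                   # gradient magnitude
    print(edges.argmax() % n)                                  # strongest response near column n/2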

  5. Straight-sided Spline Optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    and the subject of improving the design. The present paper concentrates on the optimization of splines and the predictions of stress concentrations, which are determined by finite element analysis (FEA). Using design modifications that do not change the spline load carrying capacity, it is shown that large...

  6. P-Splines Using Derivative Information

    KAUST Repository

    Calderon, Christopher P.

    2010-01-01

    Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information about complex biomolecular systems. We demonstrate how a collection of Penalized-splines (P-splines) can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale. P-splines assist in "semiparametrically" estimating nonlinear SDEs in situations where a time-dependent external force is applied to a single-molecule system. The P-splines introduced simultaneously use function and derivative scatterplot information to refine curve estimates. We refer to the approach as the PuDI (P-splines using Derivative Information) method. It is shown how generalized least squares ideas fit seamlessly into the PuDI method. Applications demonstrating how utilizing uncertainty information/approximations along with generalized least squares techniques improve PuDI fits are presented. Although the primary application here is in estimating nonlinear SDEs, the PuDI method is applicable to situations where both unbiased function and derivative estimates are available.
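    A bare-bones P-spline fit in the Eilers-Marx style that the approach above builds on: a B-spline design matrix plus a second-order difference penalty, solved as ridge-type least squares. The derivative-information refinement (PuDI) is not shown; the data are synthetic.

    import numpy as np
    from scipy.interpolate import BSpline

    def bspline_basis(x, n_bases=20, degree=3):
        """Open-uniform (clamped) B-spline design matrix on the range of x."""
        inner = np.linspace(x.min(), x.max(), n_bases - degree + 1)
        knots = np.concatenate(([x.min()] * degree, inner, [x.max()] * degree))
        eye = np.eye(n_bases)
        return np.column_stack([BSpline(knots, eye[i], degree)(x) for i in range(n_bases)])

    x = np.linspace(0.0, 1.0, 400)
    y = np.sin(6.0 * x) + np.random.default_rng(5).normal(0.0, 0.2, x.size)

    B = bspline_basis(x)
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)           # second-order difference penalty
    lam = 1.0                                              # smoothing parameter
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    fit = B @ coef
    print(np.mean((fit - np.sin(6.0 * x))**2))             # error against the true curve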

  7. Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.

    Science.gov (United States)

    Kainu, Annette; Timonen, Kirsi

    2016-07-01

    Background: Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; thus modern modelling approaches using smoothing spline functions to better depict the transition during growth and ageing have recently been introduced. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in tables. The aim of this study was to produce polynomial estimates for these spline functions to use in lieu of lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods: Linear regression modelling was used to approximate the estimated values for Mspline and Sspline using similar polynomial functions as in the international GLI2012 reference values. Estimated values were compared to the original calculations in absolute values, the derived predicted mean, and individually calculated z-scores using both values. Results: Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly in larger predicted volumes, with a range of -0.018 to +0.022 litres of FEV1 representing ±0.4% maximum difference in the predicted mean. Conclusions: Polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values.

  8. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  9. On Characterization of Quadratic Splines

    DEFF Research Database (Denmark)

    Chen, B. T.; Madsen, Kaj; Zhang, Shuzhong

    2005-01-01

    that the representation can be refined in a neighborhood of a non-degenerate point and a set of non-degenerate minimizers. Based on these characterizations, many existing algorithms for specific convex quadratic splines are also finitely convergent for a general convex quadratic spline. Finally, we study the relationship between the convexity of a quadratic spline function and the monotonicity of the corresponding LCP problem. It is shown that, although both conditions lead to easy solvability of the problem, they are different in general.

  10. B-splines and Faddeev equations

    International Nuclear Information System (INIS)

    Huizing, A.J.

    1990-01-01

    Two numerical methods for solving the three-body equations describing relativistic pion-deuteron scattering have been investigated. For separable two-body interactions these equations form a set of coupled one-dimensional integral equations. They are plagued by singularities which occur in the kernel of the integral equations as well as in the solution. The methods to solve these equations differ in the way they treat the singularities. First the Fuda-Stuivenberg method is discussed. The basic idea of this method is a one-time iteration of the set of integral equations to treat the logarithmic singularities. In the second method, the spline method, the unknown solution is approximated by splines. Cubic splines have been used with cubic B-splines as basis. If the solution is approximated by a linear combination of basis functions, an integral equation can be transformed into a set of linear equations for the expansion coefficients. This set of linear equations is solved by standard means. Splines are determined by points called knots. A proper choice of splines to approach the solution stands for a proper choice of the knots. The solution of the three-body scattering equations has a square-root behaviour at a certain point. Hence it was investigated how the knots should be chosen to approximate the square-root function by cubic B-splines in an optimal way. Before applying this method to solve numerically the three-body equations describing pion-deuteron scattering, an analytically solvable example was constructed with a singularity structure of both kernel and solution comparable to those of the three-body equations. The accuracy of the numerical solution was determined to a large extent by the accuracy of the approximation of the square-root part. The results for a pion laboratory energy of 47.4 MeV agree very well with those from the literature. In a complete calculation for 47.7 MeV the spline method turned out to be a factor of a thousand faster than the Fuda-Stuivenberg method.

  11. The estimation of time-varying risks in asset pricing modelling using B-Spline method

    Science.gov (United States)

    Nurjannah; Solimun; Rinaldo, Adji

    2017-12-01

    Asset pricing modelling has been extensively studied in the past few decades to explore the risk-return relationship. The asset pricing literature has typically assumed a static risk-return relationship. However, several studies have found anomalies in asset pricing modelling that capture the presence of risk instability, and dynamic models have been proposed to offer a better fit. The main problem highlighted in the dynamic model literature is that the set of conditioning information is unobservable and therefore some assumptions have to be made; hence, the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can be used as an alternative for estimating risk. The flexibility of the nonparametric setting avoids the problem of misspecification derived from selecting a functional form. This paper investigates the estimation of time-varying asset pricing models using the B-Spline, as one nonparametric approach. The advantages of the spline method are its computational speed and simplicity, as well as the clarity of controlling curvature directly. Three popular asset pricing models are investigated, namely the CAPM (Capital Asset Pricing Model), the Fama-French 3-factor model and the Carhart 4-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk instability anomaly. The effect is more pronounced in Carhart's 4-factor model.
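    A sketch of one ingredient of such models: a time-varying CAPM beta expanded in B-splines, beta_t = sum_k c_k B_k(t), estimated by ordinary least squares from synthetic excess returns. Knot layout and return dynamics are invented for illustration.

    import numpy as np
    from scipy.interpolate import BSpline

    T, degree, n_bases = 600, 3, 8
    t = np.linspace(0.0, 1.0, T)
    inner = np.linspace(0.0, 1.0, n_bases - degree + 1)
    knots = np.concatenate(([0.0] * degree, inner, [1.0] * degree))
    eye = np.eye(n_bases)
    Bt = np.column_stack([BSpline(knots, eye[k], degree)(t) for k in range(n_bases)])

    rng = np.random.default_rng(6)
    mkt = rng.normal(0.0, 0.01, T)                         # market excess return (synthetic)
    beta_true = 0.8 + 0.6 * np.sin(2.0 * np.pi * t)        # slowly varying risk
    asset = beta_true * mkt + rng.normal(0.0, 0.005, T)    # asset excess return

    X = Bt * mkt[:, None]                                  # regressors B_k(t) * mkt_t
    coef, *_ = np.linalg.lstsq(X, asset, rcond=None)
    beta_hat = Bt @ coef                                   # estimated beta path
    print(np.abs(beta_hat - beta_true).mean())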

  12. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Crecy, F de

    1994-12-31

    A mathematical tool, using pseudo-cubic thin-plate type Spline, has been developed for analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with related uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross validation method. The residual standard deviation obtained is significantly smaller than that of a least square regression. An example of use is given with critical heat flux data, showing a significant decrease of the conception criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs.

  13. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    International Nuclear Information System (INIS)

    Crecy, F. de.

    1993-01-01

    A mathematical tool, using pseudo-cubic thin-plate type Spline, has been developed for analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with related uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross validation method. The residual standard deviation obtained is significantly smaller than that of a least square regression. An example of use is given with critical heat flux data, showing a significant decrease of the conception criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs

  14. Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines

    KAUST Repository

    Barton, Michael

    2015-10-24

    We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights, considered as a higher-dimensional point, are a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces, where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as of the source rule chosen.

  15. Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines

    KAUST Repository

    Barton, Michael; Calo, Victor M.

    2015-01-01

    We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights, considered as a higher-dimensional point, are a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces, where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as of the source rule chosen.

  16. Schwarz and multilevel methods for quadratic spline collocation

    Energy Technology Data Exchange (ETDEWEB)

    Christara, C.C. [Univ. of Toronto, Ontario (Canada); Smith, B. [Univ. of California, Los Angeles, CA (United States)

    1994-12-31

    Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, optimal order of convergence spline collocation methods have been developed for certain degree splines. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems, than the analysis requires, and are very competitive to finite element methods, with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.

  17. B-spline Collocation with Domain Decomposition Method

    International Nuclear Information System (INIS)

    Hidayat, M I P; Parman, S; Ariwahjoedi, B

    2013-01-01

    A global B-spline collocation method has been previously developed and successfully implemented by the present authors for solving elliptic partial differential equations in arbitrary complex domains. However, the global B-spline approximation, which is simply reduced to a Bezier approximation of any degree p with C0 continuity, has led to the use of B-spline bases of high order in order to achieve high accuracy. The need for B-spline bases of high order in the global method would be more prominent in domains of large dimension. For the increased number of collocation points, it may also lead to an ill-conditioning problem. In this study, the overlapping domain decomposition of the multiplicative Schwarz algorithm is combined with the global method. Our objective is two-fold: to improve the accuracy with the combination technique, and to investigate the influence of the combination technique on the required B-spline basis orders with respect to the obtained accuracy. It was shown that the combination method produced higher accuracy with B-spline bases of much lower order than needed in the initial method. Hence, the approximation stability of the B-spline collocation method was also increased.

  18. Applications of the spline filter for areal filtration

    International Nuclear Information System (INIS)

    Tong, Mingsi; Zhang, Hao; Ott, Daniel; Chu, Wei; Song, John

    2015-01-01

    This paper proposes a general-use isotropic areal spline filter. This new areal spline filter can achieve isotropy by approximating the transmission characteristic of the Gaussian filter. It can also eliminate the effect of void areas using a weighting factor, and resolve end-effect issues by applying new boundary conditions, which replace the first-order finite difference in the traditional spline formulation. These improvements make the spline filter widely applicable to 3D surfaces and extend the applications of the spline filter in areal filtration. (technical note)

  19. Construction of local integro quintic splines

    Directory of Open Access Journals (Sweden)

    T. Zhanlav

    2016-06-01

    In this paper, we show that integro quintic splines can be constructed locally without solving any systems of equations. The new construction does not require any additional end conditions. By virtue of these advantages the proposed algorithm is easy to implement and effective. At the same time, the local integro quintic splines possess as good approximation properties as the integro quintic splines. In this paper, we have proved that our local integro quintic spline has superconvergence properties at the knots for the first and third derivatives. The orders of convergence at the knots are six (not five) for the first derivative and four (not three) for the third derivative.

  20. Acoustic Emission Signatures of Fatigue Damage in Idealized Bevel Gear Spline for Localized Sensing

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    2017-06-01

    In many rotating machinery applications, such as helicopters, the splines of an externally splined steel shaft that emerges from the gearbox engage with the reverse geometry of an internally splined driven shaft for the delivery of power. The splined section of the shaft is a critical and non-redundant element which is prone to cracking due to complex loading conditions, so early detection of flaws is required to prevent catastrophic failures. The acoustic emission (AE) method is a direct way of detecting such active flaws, but its application to detecting flaws in a splined shaft in a gearbox is difficult due to the interference of background noise and uncertainty about the effects of the wave propagation path on the received AE signature. Here, to model how AE may detect fault propagation in a hollow cylindrical splined shaft, the splined section is essentially unrolled into a metal plate of the same thickness as the cylinder wall. Spline ridges are cut into this plate, a through-notch is cut perpendicular to the spline to model fatigue crack initiation, and tensile cyclic loading is applied parallel to the spline to propagate the crack. In this paper, a new piezoelectric sensor array is introduced with the purpose of placing the sensors within the gearbox to minimize the wave propagation path. The fatigue crack growth of a notched and flattened gearbox spline component is monitored using the new piezoelectric sensor array and conventional sensors in a laboratory environment, with the purpose of developing source models and testing the new sensor performance. The AE data are collected continuously, together with data from strain gauges strategically positioned on the structure. A significant amount of continuous emission due to the plastic deformation accompanying the crack growth is observed. The frequency spectra of continuous emissions and burst emissions are compared to understand the differences between plastic deformation and sudden crack jumps.

  1. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study gives attention to indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often have some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional one in terms of bias and root mean square error (RMSE).
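    A small sketch of the traditional regression correlation coefficient for a Poisson GLM: the sample correlation between Y and the fitted E(Y|X). It uses statsmodels on synthetic data with two correlated predictors; the proposed modification is not reproduced here.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    x1 = rng.normal(size=500)
    x2 = 0.8 * x1 + 0.6 * rng.normal(size=500)             # correlated predictors
    mu = np.exp(0.2 + 0.5 * x1 - 0.3 * x2)
    y = rng.poisson(mu)

    X = sm.add_constant(np.column_stack([x1, x2]))
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

    r = np.corrcoef(y, fit.fittedvalues)[0, 1]             # corr(Y, E(Y|X))
    print(f"regression correlation coefficient: {r:.3f}")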

  2. Nonlinear bias compensation of ZiYuan-3 satellite imagery with cubic splines

    Science.gov (United States)

    Cao, Jinshan; Fu, Jianhong; Yuan, Xiuxiao; Gong, Jianya

    2017-11-01

    Like many high-resolution satellites such as the ALOS, MOMS-2P, QuickBird, and ZiYuan1-02C satellites, the ZiYuan-3 satellite suffers from different levels of attitude oscillations. As a result of such oscillations, the rational polynomial coefficients (RPCs) obtained using a terrain-independent scenario often have nonlinear biases. In the sensor orientation of ZiYuan-3 imagery based on a rational function model (RFM), these nonlinear biases cannot be effectively compensated by an affine transformation. The sensor orientation accuracy is thereby worse than expected. In order to eliminate the influence of attitude oscillations on the RFM-based sensor orientation, a feasible nonlinear bias compensation approach for ZiYuan-3 imagery with cubic splines is proposed. In this approach, no actual ground control points (GCPs) are required to determine the cubic splines. First, the RPCs are calculated using a three-dimensional virtual control grid generated based on a physical sensor model. Second, one cubic spline is used to model the residual errors of the virtual control points in the row direction and another cubic spline is used to model the residual errors in the column direction. Then, the estimated cubic splines are used to compensate the nonlinear biases in the RPCs. Finally, the affine transformation parameters are used to compensate the residual biases in the RPCs. Three ZiYuan-3 images were tested. The experimental results showed that before the nonlinear bias compensation, the residual errors of the independent check points were nonlinearly biased. Even if the number of GCPs used to determine the affine transformation parameters was increased from 4 to 16, these nonlinear biases could not be effectively compensated. After the nonlinear bias compensation with the estimated cubic splines, the influence of the attitude oscillations could be eliminated. The RFM-based sensor orientation accuracies of the three ZiYuan-3 images reached 0.981 pixels, 0.890 pixels, and 1
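    A hedged sketch of the compensation idea: model the row-direction residuals of virtual control points with a smoothing cubic spline and subtract the estimated bias from projected coordinates. The oscillation signal here is invented; the paper applies one spline per image direction and a final affine step.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(8)
    row = np.sort(rng.uniform(0.0, 24000.0, 300))          # image row coordinate
    bias = 0.8 * np.sin(row / 2500.0)                      # oscillation-induced bias (invented)
    res_row = bias + rng.normal(0.0, 0.1, row.size)        # control-point residuals

    spl_row = UnivariateSpline(row, res_row, k=3, s=row.size * 0.1**2)

    projected = np.array([5000.0, 12000.0, 20000.0])       # RPC-projected rows
    corrected = projected - spl_row(projected)             # nonlinear bias removed
    print(corrected)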

  3. Recursive B-spline approximation using the Kalman filter

    Directory of Open Access Journals (Sweden)

    Jens Jauch

    2017-02-01

    This paper proposes a novel recursive B-spline approximation (RBA) algorithm which approximates an unbounded number of data points with a B-spline function and achieves lower computational effort compared with previous algorithms. Conventional recursive algorithms based on the Kalman filter (KF) restrict the approximation to a bounded and predefined interval. Conversely, RBA includes a novel shift operation that makes it possible to shift the estimated B-spline coefficients in the state vector of a KF. This allows the interval in which the B-spline function can approximate data points to be adapted at run-time.
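    A minimal sketch of recursive B-spline coefficient estimation with a Kalman filter, without the paper's shift operation: the state is the coefficient vector and each sample contributes one measurement row of basis values. Knots and noise levels are invented.

    import numpy as np
    from scipy.interpolate import BSpline

    degree, n_bases = 3, 10
    inner = np.linspace(0.0, 1.0, n_bases - degree + 1)
    knots = np.concatenate(([0.0] * degree, inner, [1.0] * degree))
    eye = np.eye(n_bases)

    def basis(x):                                          # measurement row H at x
        return np.array([BSpline(knots, eye[i], degree)(x) for i in range(n_bases)])

    c = np.zeros(n_bases)                                  # state: spline coefficients
    P = np.eye(n_bases) * 1e3                              # state covariance
    R = 0.05**2                                            # measurement noise variance

    rng = np.random.default_rng(9)
    for _ in range(500):                                   # streaming data points
        x = rng.uniform(0.0, 1.0)
        y = np.sin(4.0 * x) + rng.normal(0.0, 0.05)
        h = basis(x)
        K = P @ h / (h @ P @ h + R)                        # Kalman gain
        c = c + K * (y - h @ c)                            # measurement update
        P = P - np.outer(K, h @ P)

    print(abs(basis(0.5) @ c - np.sin(2.0)))               # fit error at x = 0.5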

  4. Function approximation with polynomial regression splines

    International Nuclear Information System (INIS)

    Urbanski, P.

    1996-01-01

    Principles of polynomial regression splines, as well as algorithms and programs for their computation, are presented. The programs, prepared using the software package MATLAB, are generally intended for the approximation of X-ray spectra and can be applied in the multivariate calibration of radiometric gauges. (author)

  5. Optimization of straight-sided spline design

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    and the subject of improving the design. The present paper concentrates on the optimization of splines and the predictions of stress concentrations, which are determined by finite element analysis (FEA). Using different design modifications that do not change the spline load carrying capacity, it is shown...

  6. Data assimilation using Bayesian filters and B-spline geological models

    KAUST Repository

    Duan, Lian

    2011-04-01

    This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.

  7. Data assimilation using Bayesian filters and B-spline geological models

    International Nuclear Information System (INIS)

    Duan Lian; Farmer, Chris; Hoteit, Ibrahim; Luo Xiaodong; Moroz, Irene

    2011-01-01

    This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.

  8. An isogeometric boundary element method for electromagnetic scattering with compatible B-spline discretizations

    Science.gov (United States)

    Simpson, R. N.; Liu, Z.; Vázquez, R.; Evans, J. A.

    2018-06-01

    We outline the construction of compatible B-splines on 3D surfaces that satisfy the continuity requirements for electromagnetic scattering analysis with the boundary element method (method of moments). Our approach makes use of Non-Uniform Rational B-splines to represent model geometry and compatible B-splines to approximate the surface current, and adopts the isogeometric concept in which the basis for analysis is taken directly from CAD (geometry) data. The approach allows for high-order approximations and crucially provides a direct link with CAD data structures that allows for efficient design workflows. After outlining the construction of div- and curl-conforming B-splines defined over 3D surfaces we describe their use with the electric and magnetic field integral equations using a Galerkin formulation. We use Bézier extraction to accelerate the computation of NURBS and B-spline terms and employ H-matrices to provide accelerated computations and memory reduction for the dense matrices that result from the boundary integral discretization. The method is verified using the well known Mie scattering problem posed over a perfectly electrically conducting sphere and the classic NASA almond problem. Finally, we demonstrate the ability of the approach to handle models with complex geometry directly from CAD without mesh generation.

  9. Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keller, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Errichello, Robert [GEARTECH, Houston, TX (United States); Halse, Chris [Romax Technology, Nottingham (United Kingdom)

    2013-12-01

    Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.

  10. Differential constraints for bounded recursive identification with multivariate splines

    NARCIS (Netherlands)

    De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2011-01-01

    The ability to perform online model identification for nonlinear systems with unknown dynamics is essential to any adaptive model-based control system. In this paper, a new differential equality constrained recursive least squares estimator for multivariate simplex splines is presented that is able

  11. Comparison of parametric, orthogonal, and spline functions to model individual lactation curves for milk yield in Canadian Holsteins

    Directory of Open Access Journals (Sweden)

    Corrado Dimauro

    2010-11-01

Full Text Available Test day records for milk yield of 57,390 first lactation Canadian Holsteins were analyzed with a linear model that included the fixed effects of herd-test date and days in milk (DIM) interval nested within age and calving season. Residuals from this model were analyzed as a new variable and fitted with a five-parameter model, fourth-order Legendre polynomials, and linear, quadratic and cubic spline models with three knots. The fit of the models was rather poor, with about 30-40% of the curves showing an adjusted R-square lower than 0.20 across all models. Results underline a great difficulty in modelling individual deviations around the mean curve for milk yield. However, the Ali and Schaeffer five-parameter model and the fourth-order Legendre polynomials were able to detect two basic shapes of individual deviations around the mean curve. Quadratic and, especially, cubic spline functions had better fitting performance but poor predictive ability due to their great flexibility, which results in abrupt changes of the estimated curve when data are missing. Parametric and orthogonal polynomials seem to be robust and dependable from this standpoint.
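
As a pocket illustration of the model classes compared here, the sketch below fits one animal's residual curve with a fourth-order Legendre polynomial and with a cubic spline with three knots, scoring both by adjusted R-squared; the residuals are simulated stand-ins, not the Canadian Holstein records.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
dim = np.linspace(5.0, 305.0, 11)             # days in milk of the test days
resid = 0.6 * np.sin(dim / 70.0) + rng.normal(0.0, 0.3, dim.size)

# fourth-order Legendre polynomial on DIM rescaled to [-1, 1]
u = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0
leg_fit = legendre.legval(u, legendre.legfit(u, resid, deg=4))

# cubic spline with three interior knots at the DIM quartiles
knots = np.quantile(dim, [0.25, 0.50, 0.75])
spl_fit = LSQUnivariateSpline(dim, resid, t=knots, k=3)(dim)

def adjusted_r2(y, yhat, n_par):
    """Adjusted R-squared with n_par fitted parameters."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    n = y.size
    return 1.0 - (ss_res / (n - n_par)) / (ss_tot / (n - 1))

print(adjusted_r2(resid, leg_fit, 5))   # Legendre: 5 coefficients
print(adjusted_r2(resid, spl_fit, 7))   # cubic spline: 3 knots + 4 = 7 coefficients
```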

  12. A smoothing spline that approximates Laplace transform functions only known on measurements on the real axis

    International Nuclear Information System (INIS)

    D’Amore, L; Campagna, R; Murli, A; Galletti, A; Marcellino, L

    2012-01-01

The scientific and application-oriented interest in the Laplace transform and its inversion is attested by more than 1000 publications in the last century. Most of the inversion algorithms available in the literature assume that the Laplace transform function is available everywhere. Unfortunately, this assumption is not fulfilled in the applications of the Laplace transform. Very often, one only has a finite set of data and one wants to recover an estimate of the inverse Laplace function from it. We propose a model fitted to the data. More precisely, given a finite set of measurements on the real axis, arising from an unknown Laplace transform function, we construct a dth degree generalized polynomial smoothing spline, where d = 2m − 1, such that internally to the data interval it is a dth degree polynomial complete smoothing spline minimizing a regularization functional, and outside the data interval it mimics the Laplace transform asymptotic behavior, i.e. it is a rational or an exponential function (the end behavior model), and at the boundaries of the data set it joins the end behavior model with regularity up to order m − 1. We analyze in detail the generalized polynomial smoothing spline of degree d = 3. This choice was motivated by the (ill)conditioning of the numerical computation, which strongly depends on the degree of the complete spline. We prove existence and uniqueness of this spline. We derive the approximation error and give a priori and computable bounds for it on the whole real axis. In this way, the generalized polynomial smoothing spline may be used in any real inversion algorithm to compute an approximation of the inverse Laplace function. Experimental results concerning Laplace transform approximation, numerical inversion of the generalized polynomial smoothing spline and comparisons with the exponential smoothing spline conclude the work. (paper)

  13. Stabilized Discretization in Spline Element Method for Solution of Two-Dimensional Navier-Stokes Problems

    Directory of Open Access Journals (Sweden)

    Neng Wan

    2014-01-01

Full Text Available To address the poor geometric adaptability of the spline element method, a geometrically exact spline method, which uses rational Bezier patches to represent the solution domain, is proposed for the two-dimensional viscous incompressible Navier-Stokes equations. Besides fewer unknowns, higher accuracy, and computational efficiency, it possesses the advantages of isogeometric analysis: accurate representation of the object boundary and the unification of geometric and analysis models. Meanwhile, the selection of the B-spline basis functions and the grid definition is studied, and a stable discretization scheme satisfying the inf-sup conditions is proposed. The degree of the spline functions approximating the velocity field is one order higher than that approximating the pressure field, and the velocity functions are defined on a once-refined grid. The Dirichlet boundary conditions are imposed in weak form through the Nitsche variational principle, due to the lack of interpolation properties of the B-spline functions. Finally, the validity of the proposed method is verified with some examples.

  14. Supremum Norm Posterior Contraction and Credible Sets for Nonparametric Multivariate Regression

    NARCIS (Netherlands)

    Yoo, W.W.; Ghosal, S

    2016-01-01

    In the setting of nonparametric multivariate regression with unknown error variance, we study asymptotic properties of a Bayesian method for estimating a regression function f and its mixed partial derivatives. We use a random series of tensor product of B-splines with normal basis coefficients as a

  15. Inflation Rate Modelling in Indonesia

    Directory of Open Access Journals (Sweden)

    Rezzy Eko Caraka

    2016-10-01

Full Text Available The purposes of this research were to analyse: (i) modelling the inflation rate in Indonesia with parametric regression; (ii) modelling the inflation rate in Indonesia using multivariable spline non-parametric regression; (iii) determining the best model of the inflation rate in Indonesia; and (iv) explaining the relationship between the parametric and the multivariable spline non-parametric regression models of inflation. Based on the analysis using the two methods mentioned, the coefficient of determination (R²) was 65.1% in the parametric regression, while the non-parametric one reached 99.39%. The money supply (money stock), crude oil prices, and the rupiah exchange rate against the dollar are significant for the rate of inflation. The stability of inflation is essential to support sustainable economic development and improve people's welfare. In conclusion, unstable inflation will complicate the planning of business activities, both in production and investment activities as well as in the pricing of goods and services produced. DOI: 10.15408/etk.v15i2.3260

  16. Positivity Preserving Interpolation Using Rational Bicubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2015-01-01

Full Text Available This paper discusses positivity preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, 8 of which are free parameters. The sufficient conditions for positivity are derived on every four-boundary-curve network on the rectangular patch. Numerical comparison with existing schemes has also been done in detail. Based on the Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.

  17. Functional data analysis of generalized regression quantiles

    KAUST Repository

Guo, Mengmeng; Zhou, Lan; Huang, Jianhua Z.; Härdle, Wolfgang Karl

    2013-01-01

    Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. © 2013 Springer Science+Business Media New York.

  19. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    Science.gov (United States)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2018-03-01

In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degrees of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
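
The computational core of such an estimator is iteratively reweighted least squares with t-based weights. The sketch below shows that kernel for a plain (uncorrelated) regression model with the degrees of freedom held fixed; the AR noise model and the estimation of the degrees of freedom, which the paper also derives, are omitted.

```python
import numpy as np

def irls_t_regression(A, y, nu=4.0, n_iter=50):
    """IRLS for y = A b + e with iid scaled t_nu errors (EM-type weights).
    A simplified sketch: nu is held fixed, and no AR structure is modeled."""
    n = A.shape[0]
    b = np.linalg.lstsq(A, y, rcond=None)[0]       # OLS starting values
    sigma2 = np.mean((y - A @ b) ** 2)
    for _ in range(n_iter):
        r = y - A @ b
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)    # E-step: expected precisions
        sw = np.sqrt(w)
        b = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]  # weighted LS
        sigma2 = np.sum(w * (y - A @ b) ** 2) / n  # M-step: scale update
    return b, sigma2
```

Outlying observations receive small weights w and are automatically downweighted, which is what makes the scheme adaptively robust.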

  20. Creating high-resolution digital elevation model using thin plate spline interpolation and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Pohjola, J.; Turunen, J.; Lipping, T.

    2009-07-01

In this report, the creation of a digital elevation model of the Olkiluoto area incorporating a large area of seabed is described. The modeled area covers 960 square kilometers and the apparent resolution of the created elevation model was specified to be 2.5 x 2.5 meters. Various elevation data like contour lines and irregular elevation measurements were used as source data in the process. The precision and reliability of the available source data varied largely. A digital elevation model (DEM) comprises a representation of the elevation of the surface of the earth in a particular area in digital format. The DEM is an essential component of geographic information systems designed for the analysis and visualization of location-related data. A DEM is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods, thin plate spline interpolation was found to be best suited for the creation of the elevation model. The thin plate spline method gave the smallest error in a test where a certain number of points was removed from the data, and the resulting model looked most natural. In addition to the elevation data, the confidence interval at each point of the new model was required. The Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure, and from these distributions 1 000 (20 000 in the first version) values were drawn for each data point. Each point of the newly created DEM thus had as many realizations. The resulting high resolution DEM will be used in modeling the effects of land uplift and the evolution of the landscape over a time range of 10 000 years from the present. This time range comes from the requirements set for the spent nuclear fuel repository site. (orig.)
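
A minimal version of the described workflow, with synthetic scattered elevations standing in for the Olkiluoto source data and scipy's thin-plate-spline radial basis interpolator standing in for the software actually used, might look as follows (with far fewer Monte Carlo draws than the report's 1 000 to 20 000).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(42)
# hypothetical scattered elevation data: (x, y) in metres, elevation z,
# and an assumed per-point measurement standard deviation
xy = rng.uniform(0.0, 1000.0, size=(200, 2))
z = 20.0 * np.sin(xy[:, 0] / 300.0) + 0.01 * xy[:, 1] + rng.normal(0, 0.5, 200)
sigma = np.full(200, 0.5)

grid = np.stack(np.meshgrid(np.linspace(0, 1000, 50),
                            np.linspace(0, 1000, 50)), axis=-1).reshape(-1, 2)

realizations = []
for _ in range(100):                          # illustrative number of draws
    z_draw = z + rng.normal(0.0, sigma)       # perturb each source point
    tps = RBFInterpolator(xy, z_draw, kernel='thin_plate_spline', smoothing=1.0)
    realizations.append(tps(grid))
realizations = np.array(realizations)

dem_mean = realizations.mean(axis=0)                          # gridded DEM
ci_low, ci_high = np.percentile(realizations, [2.5, 97.5], axis=0)  # 95% CI
```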

  1. Higher order multipoles and splines in plasma simulations

    International Nuclear Information System (INIS)

    Okuda, H.; Cheng, C.Z.

    1978-01-01

    The reduction of spatial grid effects in plasma simulations has been studied numerically using higher order multipole expansions and the spline method in one dimension. It is found that, while keeping the higher order moments such as quadrupole and octopole moments substantially reduces the grid effects, quadratic and cubic splines in general have better stability properties for numerical plasma simulations when the Debye length is much smaller than the grid size. In particular the spline method may be useful in three-dimensional simulations for plasma confinement where the grid size in the axial direction is much greater than the Debye length. (Auth.)
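
For reference, the quadratic-spline shape function used in such schemes (equivalent to triangular-shaped-cloud weighting) spreads each particle's charge over three grid points; the following 1-D periodic-grid deposition routine is a generic particle-in-cell sketch, not the authors' code.

```python
import numpy as np

def deposit_charge_quadratic(x, q, dx, n_grid):
    """1-D charge deposition with the quadratic B-spline (TSC) shape
    function; the smoother weighting is what reduces grid effects."""
    rho = np.zeros(n_grid)
    for xp, qp in zip(x, q):
        i = int(round(xp / dx))                 # nearest grid point
        d = xp / dx - i                         # offset in [-0.5, 0.5]
        w = (0.5 * (0.5 - d) ** 2,              # weight for grid point i-1
             0.75 - d ** 2,                     # weight for grid point i
             0.5 * (0.5 + d) ** 2)              # weight for grid point i+1
        for k, wk in zip((i - 1, i, i + 1), w): # weights sum to 1 exactly
            rho[k % n_grid] += qp * wk / dx     # periodic boundaries
    return rho
```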

  2. Higher-order multipoles and splines in plasma simulations

    International Nuclear Information System (INIS)

    Okuda, H.; Cheng, C.Z.

    1977-12-01

The reduction of spatial grid effects in plasma simulations has been studied numerically using higher order multipole expansions and the spline method in one dimension. It is found that, while keeping the higher order moments such as quadrupole and octopole moments substantially reduces the grid effects, quadratic and cubic splines in general have better stability properties for numerical plasma simulations when the Debye length is much smaller than the grid size. In particular, the spline method may be useful in three-dimensional simulations for plasma confinement where the grid size in the axial direction is much greater than the Debye length

  3. Bisphenol-A exposures and behavioural aberrations: median and linear spline and meta-regression analyses of 12 toxicity studies in rodents.

    Science.gov (United States)

    Peluso, Marco E M; Munnia, Armelle; Ceppi, Marcello

    2014-11-05

Exposures to bisphenol-A, a weak estrogenic chemical largely used for the production of plastic containers, can affect rodent behaviour. Thus, we examined the relationships between bisphenol-A and anxiety-like behaviour, spatial skills, and aggressiveness in 12 toxicity studies of rodent offspring from females orally exposed to bisphenol-A, while pregnant and/or lactating, by median and linear spline analyses. Subsequently, meta-regression analysis was applied to quantify the behavioural changes. U-shaped, inverted U-shaped and J-shaped dose-response curves were found to describe the relationships between bisphenol-A and the behavioural outcomes. The occurrence of anxiogenic-like effects and spatial skill changes displayed U-shaped and inverted U-shaped curves, respectively, providing examples of effects that are observed at low doses. Conversely, a J-shaped dose-response relationship was observed for aggressiveness. When the proportion of rodents expressing certain traits or the time they took to manifest an attitude was analysed, the meta-regression indicated a borderline significant increment of anxiogenic-like effects at low doses regardless of sex (β = -0.8%, 95% C.I. -1.7/0.1, P = 0.076, at ≤120 μg bisphenol-A), whereas only bisphenol-A males exhibited a significant inhibition of spatial skills (β = 0.7%, 95% C.I. 0.2/1.2, P = 0.004, at ≤100 μg/day). A significant increment of aggressiveness was observed in both sexes (β = 67.9, C.I. 3.4/172.5, P = 0.038, at >4.0 μg). Bisphenol-A treatments also significantly abrogated spatial learning and ability in males. Overall, low doses of bisphenol-A, e.g. ≤120 μg/day, were associated with behavioural aberrations in offspring. Copyright © 2014. Published by Elsevier Ireland Ltd.

  4. USING SPLINE FUNCTIONS FOR THE SUBSTANTIATION OF TAX POLICIES BY LOCAL AUTHORITIES

    Directory of Open Access Journals (Sweden)

    Otgon Cristian

    2011-07-01

Full Text Available The paper approaches innovative financial instruments for the management of public resources. Among these innovative tools are polynomial spline functions, used for budgetary sizing in the substantiation of fiscal and budgetary policies. Using polynomial spline functions involves several steps: establishing the nodes, calculating the specific coefficients of the spline functions, and determining the approximation errors. The paper also extrapolates a series of property tax data using first-order polynomial spline functions. For the spline implementation two data series were used, one referring to property tax as the resultative variable and the second referring to building tax, resulting in a correlation indicator R = 0.95. Moreover, the spline functions are easy to compute and, owing to the small approximation errors, have great predictive power, much better than the ordinary least squares method. The research proceeded in several methodological steps, namely observation, construction of the data series, and processing of the data with spline functions. The data form a daily series gathered from the budget account, referring to building tax and property tax. The added value of this paper lies in the possibility of avoiding deficits by using spline functions as innovative instruments in public finance; the original contribution is the averaging of the splines resulting from the data series. The results lead to the conclusion that polynomial spline functions are recommended for the elaboration of fiscal and budgetary policies, due to the relatively small errors obtained in the extrapolation of economic processes and phenomena. Future research directions include the study of second- and third-order polynomial spline functions.

  5. Thin-plate spline quadrature of geodetic integrals

    Science.gov (United States)

    Vangysen, Herman

    1989-01-01

Thin-plate spline functions (known for their flexibility and fidelity in representing experimental data) are especially well-suited for the numerical integration of geodetic integrals in the area where the integration is most sensitive to the data, i.e., in the immediate vicinity of the evaluation point. Spline quadrature rules are derived for the contribution of a circular innermost zone to Stokes' formula, to the formulae of Vening Meinesz, and to the recursively evaluated operator L(n) in the analytical continuation solution of Molodensky's problem. These rules are exact for interpolating thin-plate splines. In cases where the integration data are distributed irregularly, a system of linear equations needs to be solved for the quadrature coefficients. Formulae are given for the terms appearing in these equations. In case the data are regularly distributed, the coefficients may be determined once and for all. Examples are given of some fixed-point rules. With such rules, successive evaluation, within a circular disk, of the terms in Molodensky's series becomes relatively easy. The spline quadrature technique presented complements other techniques such as ring integration for intermediate integration zones.

  6. Quasi interpolation with Voronoi splines.

    Science.gov (United States)

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore, this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE

  7. T-Spline Based Unifying Registration Procedure for Free-Form Surface Workpieces in Intelligent CMM

    Directory of Open Access Journals (Sweden)

    Zhenhua Han

    2017-10-01

Full Text Available With the development of the modern manufacturing industry, free-form surfaces are widely used in various fields, and the automatic detection of a free-form surface is an important function of future intelligent three-coordinate measuring machines (CMMs). To improve the intelligence of CMMs, a new visual system is designed based on the characteristics of CMMs. A unified model of the free-form surface is proposed based on T-splines. A discretization method for the T-spline surface model is proposed; based on this discretization, the position and orientation of the workpiece are recognized by point cloud registration. A high-accuracy method is proposed for evaluating the deviation between the measured point cloud and the T-spline surface. The experimental results demonstrate that the proposed method has the potential to realize the automatic detection of different free-form surfaces and improve the intelligence of CMMs.

  8. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    Directory of Open Access Journals (Sweden)

    Van Than Dung

Full Text Available B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there have been demands, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated by using various numerical experimental data, with and without simulated noise, which were generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth curves to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
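
A much-simplified stand-in for the two-step idea, using scipy, inserts a knot at the worst-fit data point until a tolerance is met; the paper's bisection pass and its nonlinear optimization of knot locations and continuity levels are replaced here by this greedy rule, so treat it only as a sketch.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_adaptive_knots(x, y, tol=0.05, k=3, max_knots=30):
    """Greedy knot insertion for least-squares B-spline fitting.
    Assumes dense, sorted data so the Schoenberg-Whitney conditions hold."""
    knots = []
    spl = LSQUnivariateSpline(x, y, t=[], k=k)      # start with one cubic piece
    for _ in range(max_knots):
        resid = y - spl(x)
        if np.sqrt(np.mean(resid ** 2)) < tol:
            break
        cand = float(x[np.argmax(np.abs(resid))])   # knot at worst-fit point
        cand = min(max(cand, x[1]), x[-2])          # keep it strictly interior
        if cand in knots:
            break                                   # no progress; stop
        knots = sorted(knots + [cand])
        spl = LSQUnivariateSpline(x, y, t=knots, k=k)
    return spl, knots

x = np.linspace(0.0, 1.0, 400)
y = np.where(x < 0.5, np.sin(8 * x), 0.2) \
    + 0.01 * np.random.default_rng(1).normal(size=x.size)
spl, knots = fit_adaptive_knots(x, y)   # knots cluster near the discontinuity
```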

  9. A Bézier-Spline-based Model for the Simulation of Hysteresis in Variably Saturated Soil

    Science.gov (United States)

    Cremer, Clemens; Peche, Aaron; Thiele, Luisa-Bianca; Graf, Thomas; Neuweiler, Insa

    2017-04-01

Most transient variably saturated flow models neglect hysteresis in the p_c-S-relationship (Beven, 2012). Such models tend to inadequately represent matrix potential and saturation distribution. Thereby, when simulating flow and transport processes, fluid and solute fluxes might be overestimated (Russo et al., 1989). In this study, we present a simple, computationally efficient and easily applicable model that adequately describes hysteresis in the p_c-S-relationship for variably saturated flow. This model can be seen as an extension of the existing play-type model (Beliaev and Hassanizadeh, 2001), in which scanning curves are simplified as vertical lines between the main imbibition and main drainage curves. In our model, we use continuous linear and Bézier-spline-based functions. We show the successful validation of the model by numerically reproducing a physical experiment by Gillham, Klute and Heermann (1976) describing primary drainage and imbibition in a vertical soil column. With a deviation of 3%, the simple Bézier-spline-based model performs significantly better than the play-type approach, which deviates by 30% from the experimental results. Finally, we discuss the realization of physical experiments in order to extend the model to secondary scanning curves and to determine scanning curve steepness. Literature: Beven, K.J. (2012). Rainfall-Runoff-Modelling: The Primer. John Wiley and Sons. Russo, D., Jury, W. A., & Butters, G. L. (1989). Numerical analysis of solute transport during transient irrigation: 1. The effect of hysteresis and profile heterogeneity. Water Resources Research, 25(10), 2109-2118. https://doi.org/10.1029/WR025i010p02109. Beliaev, A.Y. & Hassanizadeh, S.M. (2001). A Theoretical Model of Hysteresis and Dynamic Effects in the Capillary Relation for Two-phase Flow in Porous Media. Transport in Porous Media 43: 487. doi:10.1023/A:1010736108256. Gillham, R., Klute, A., & Heermann, D. (1976). Hydraulic properties of a porous

  10. BRGLM, Interactive Linear Regression Analysis by Least Square Fit

    International Nuclear Information System (INIS)

    Ringland, J.T.; Bohrer, R.E.; Sherman, M.E.

    1985-01-01

1 - Description of program or function: BRGLM is an interactive program written to fit general linear regression models by least squares and to provide a variety of statistical diagnostic information about the fit. Stepwise and all-subsets regression can be carried out also. There are facilities for interactive data management (e.g. setting missing value flags, data transformations) and tools for constructing design matrices for the more commonly-used models such as factorials, cubic splines, and auto-regressions. 2 - Method of solution: The least squares computations are based on the orthogonal (QR) decomposition of the design matrix obtained using the modified Gram-Schmidt algorithm. 3 - Restrictions on the complexity of the problem: The current release of BRGLM allows maxima of 1000 observations, 99 variables, and 3000 words of main memory workspace. For a problem with N observations and P variables, the number of words of main memory storage required is MAX(N*(P+6), N*P+P*P+3*N, 3*P*P+6*N). Any linear model may be fit, although the in-memory workspace will have to be increased for larger problems
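
The numerical core named in item 2 can be sketched in a few lines; this is a modern illustration of the same scheme, not BRGLM's own code.

```python
import numpy as np

def mgs_qr(A):
    """QR decomposition by the modified Gram-Schmidt algorithm."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.empty((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for j in range(k + 1, n):            # orthogonalize remaining columns
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R

def least_squares_fit(X, y):
    """Solve min ||y - X b|| via the QR decomposition: R b = Q^T y."""
    Q, R = mgs_qr(X)
    return np.linalg.solve(R, Q.T @ y)

X = np.column_stack([np.ones(5), np.arange(5.0)])   # intercept + slope design
y = np.array([1.0, 2.1, 2.9, 4.2, 5.1])
beta = least_squares_fit(X, y)
```

Working from the QR decomposition rather than the normal equations is the standard way to keep the fit numerically stable for ill-conditioned design matrices.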

  11. Detrending of non-stationary noise data by spline techniques

    International Nuclear Information System (INIS)

    Behringer, K.

    1989-11-01

    An off-line method for detrending non-stationary noise data has been investigated. It uses a least squares spline approximation of the noise data with equally spaced breakpoints. Subtraction of the spline approximation from the noise signal at each data point gives a residual noise signal. The method acts as a high-pass filter with very sharp frequency cutoff. The cutoff frequency is determined by the breakpoint distance. The steepness of the cutoff is controlled by the spline order. (author) 12 figs., 1 tab., 5 refs
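
In scipy terms, the described detrending scheme is a thin wrapper around a least-squares spline with equally spaced interior knots; the breakpoint distance below is an illustrative choice, not a value from the report.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def spline_detrend(t, x, breakpoint_distance, order=3):
    """Subtract a least-squares spline with equally spaced breakpoints;
    acts as a sharp high-pass filter whose cutoff frequency is set by the
    breakpoint distance and whose steepness is set by the spline order."""
    knots = np.arange(t[0] + breakpoint_distance, t[-1], breakpoint_distance)
    trend = LSQUnivariateSpline(t, x, t=knots, k=order)
    return x - trend(t)

# usage: remove a slow 0.01 Hz drift from a 100 s record sampled at 100 Hz
t = np.linspace(0.0, 100.0, 10_000)
x = np.sin(2 * np.pi * 0.01 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)
residual = spline_detrend(t, x, breakpoint_distance=10.0)
```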

  12. Approximation and geometric modeling with simplex B-splines associated with irregular triangles

    NARCIS (Netherlands)

    Auerbach, S.; Gmelig Meyling, R.H.J.; Neamtu, M.; Neamtu, M.; Schaeben, H.

    1991-01-01

    Bivariate quadratic simplical B-splines defined by their corresponding set of knots derived from a (suboptimal) constrained Delaunay triangulation of the domain are employed to obtain a C1-smooth surface. The generation of triangle vertices is adjusted to the areal distribution of the data in the

  13. On using smoothing spline and residual correction to fuse rain gauge observations and remote sensing data

    Science.gov (United States)

    Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei

    2014-01-01

Partial thin-plate smoothing spline model is used to construct the trend surface. Correction of the spline estimated trend surface is often necessary in practice. Cressman weight is modified and applied in residual correction. The modified Cressman weight performs better than Cressman weight. A method for estimating the error covariance matrix of gridded field is provided.

  14. Illumination estimation via thin-plate spline interpolation.

    Science.gov (United States)

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians clustering is applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
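
With scipy's radial basis interpolator, the interpolation step reads as below; the six-dimensional random 'features' and chromaticities are stand-ins for the paper's image thumbnails and measurements, and the k-medians pruning step is omitted.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# stand-in training set: feature vectors summarizing image thumbnails,
# paired with measured illuminant chromaticities (r, g)
features = rng.uniform(0.0, 1.0, size=(50, 6))
chromaticities = rng.uniform(0.2, 0.5, size=(50, 2))

# thin-plate spline interpolation over the nonuniformly sampled input space
tps = RBFInterpolator(features, chromaticities, kernel='thin_plate_spline')

new_features = rng.uniform(0.0, 1.0, size=(1, 6))  # features of a new image
estimated_rg = tps(new_features)                   # estimated illuminant chromaticity
```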

  15. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    Science.gov (United States)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and the Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
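
As an illustration of the information-criterion route (the VC-dimension based method is harder to compress into a few lines), the sketch below scores least-squares B-spline curve fits by AIC over a range of control-point counts; the uniform knot placement and the Gaussian-likelihood form of AIC are simplifying assumptions.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def aic_best_spline(x, y, k=3, max_ctrl=30):
    """Score least-squares B-spline fits by AIC over control-point counts.
    Assumes sorted, dense data and uniform interior knots."""
    n = x.size
    best = (np.inf, None, None)
    for n_ctrl in range(k + 1, max_ctrl + 1):
        n_interior = n_ctrl - k - 1               # control points = knots + k + 1
        interior = np.linspace(x[0], x[-1], n_interior + 2)[1:-1]
        spl = LSQUnivariateSpline(x, y, t=interior, k=k)
        rss = np.sum((y - spl(x)) ** 2)
        aic = n * np.log(rss / n) + 2 * n_ctrl    # Gaussian-likelihood AIC
        if aic < best[0]:
            best = (aic, n_ctrl, spl)
    return best

x = np.linspace(0.0, 1.0, 500)
y = np.exp(-3 * x) * np.sin(12 * x) \
    + 0.02 * np.random.default_rng(2).normal(size=x.size)
aic, n_ctrl, spl = aic_best_spline(x, y)          # AIC-optimal complexity
```

Swapping the penalty 2*n_ctrl for n_ctrl*log(n) turns the same loop into a BIC-based selection, which illustrates how the two criteria differ only in how strongly they punish complexity.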

  16. A comparison of tripolar concentric ring electrode and spline Laplacians on a four-layer concentric spherical model.

    Science.gov (United States)

    Liu, Xiang; Makeyev, Oleksandr; Besio, Walter

    2011-01-01

We have simulated a four-layer concentric spherical head model. We calculated the spline and tripolar Laplacian estimates and compared them to the analytical Laplacian on the spherical surface. In the simulations we used five different dipole groups and two electrode configurations. The comparison shows that the tripolar Laplacian has a higher correlation coefficient with the analytical Laplacian in the electrode configurations tested (19 electrodes at standard 10/20 locations, and 64 electrodes).

  17. About some properties of bivariate splines with shape parameters

    Science.gov (United States)

    Caliò, F.; Marchetti, E.

    2017-07-01

The paper presents and proves geometrical properties of a particular family of bivariate spline functions, built and algorithmically implemented in previous papers. The properties typical of this family of splines have an impact on the field of computer graphics, in particular on reverse engineering.

  18. Meshing Force of Misaligned Spline Coupling and the Influence on Rotor System

    Directory of Open Access Journals (Sweden)

    Guang Zhao

    2008-01-01

Full Text Available The meshing force of a misaligned spline coupling is derived, the dynamic equation of the rotor-spline coupling system is established based on finite element analysis, and the influence of the meshing force on the rotor-spline coupling system is simulated by a numerical integration method. According to the theoretical analysis, the meshing force of the spline coupling is related to the coupling parameters, misalignment, transmitted torque, static misalignment, dynamic vibration displacement, and so on. The meshing force increases nonlinearly with increasing spline thickness and static misalignment or with decreasing alignment meshing distance (AMD). The stiffness of the coupling is related to the dynamic vibration displacement and the static misalignment, and is not constant. The dynamic behavior of the rotor-spline coupling system reveals the following: the 1X rotating speed is the main response frequency of the system when there is no misalignment, while the 2X rotating speed appears when misalignment is present. Moreover, when misalignment increases, the vibration of the system becomes intricate; the shaft orbit departs from the origin, and the magnitudes of all frequencies increase. The results can provide important criteria for both the optimization design of spline couplings and the troubleshooting of rotor systems.

  19. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  20. Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.

    Science.gov (United States)

    Oliveira, Francisco P M; Tavares, João Manuel R S

    2013-03-01

This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) that was acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, an outstanding accuracy was achieved with the cubic B-splines, significantly better than that of the other transformation models tested. When aligning real image sequences with unknown transformations involved, the alignment based on cubic B-splines also achieved superior results compared with our previous methodology. The effect of the temporal alignment on the dynamic center of pressure (COP) displacement was also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences. These findings also show that the temporal alignment can increase the consistency of the COP displacement on related acquired plantar pressure image sequences.

  1. Modeling susceptibility difference artifacts produced by metallic implants in magnetic resonance imaging with point-based thin-plate spline image registration.

    Science.gov (United States)

    Pauchard, Y; Smith, M; Mintchev, M

    2004-01-01

Magnetic resonance imaging (MRI) suffers from geometric distortions arising from various sources. One such source is the non-linearities associated with the presence of metallic implants, which can profoundly distort the obtained images. These non-linearities result in pixel shifts and intensity changes in the vicinity of the implant, often precluding any meaningful assessment of the entire image. This paper presents a method for correcting these distortions based on non-rigid image registration techniques. Two images from a modelled three-dimensional (3D) grid phantom were subjected to point-based thin-plate spline registration. The reference image (without distortions) was obtained from a grid model including a spherical implant, and the corresponding test image containing the distortions was obtained using a previously reported technique for spatial modelling of magnetic susceptibility artifacts. After identifying the non-recoverable area in the distorted image, the calculated spline model was able to quantitatively account for the distortions, thus facilitating their compensation. Upon completion of the compensation procedure, the non-recoverable area was removed from the reference image and the latter was compared to the compensated image. A quantitative assessment of the goodness of the proposed compensation technique is presented.

  2. Correcting bias in the rational polynomial coefficients of satellite imagery using thin-plate smoothing splines

    Science.gov (United States)

    Shen, Xiang; Liu, Bin; Li, Qing-Quan

    2017-03-01

    The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates

  3. Applying Emax model and bivariate thin plate splines to assess drug interactions.

    Science.gov (United States)

    Kong, Maiying; Lee, J Jack

    2010-01-01

    We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95 per cent point-wise confidence interval as well as its 95 per cent simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies.
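
For orientation, here is a minimal sketch of the single-agent building blocks: an Emax curve fitted by nonlinear least squares, and its inverse, from which the Loewe additive surface is defined by d1/D1(E) + d2/D2(E) = 1. The dose-effect numbers are invented stand-ins, and the bivariate thin plate spline step is not shown.

```python
import numpy as np
from scipy.optimize import curve_fit

def emax(dose, e0, emax_max, ed50, hill):
    """Emax (sigmoid) dose-effect model: E(d) = E0 + Emax d^h/(ED50^h + d^h)."""
    return e0 + emax_max * dose**hill / (ed50**hill + dose**hill)

# hypothetical single-agent dose-effect data, not the case-study values
dose = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
effect = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.78, 0.85])

params, _ = curve_fit(emax, dose, effect, p0=[0.0, 1.0, 2.0, 1.0],
                      bounds=([-1.0, 0.0, 1e-6, 0.1], [1.0, 2.0, 100.0, 10.0]))

def inverse_emax(e, e0, emax_max, ed50, hill):
    """Dose D(e) producing effect e; for two agents, the Loewe additive
    combinations satisfy d1/D1(e) + d2/D2(e) = 1."""
    frac = (e - e0) / (emax_max - (e - e0))
    return ed50 * frac ** (1.0 / hill)
```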

  4. Efficient GPU-based texture interpolation using uniform B-splines

    NARCIS (Netherlands)

    Ruijters, D.; Haar Romenij, ter B.M.; Suetens, P.

    2008-01-01

    This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and

  5. Regression modeling of ground-water flow

    Science.gov (United States)

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  6. Geometric modelling of channel present in reservoir petroleum using Bezier splines; Modelagem da geometria de paleocanais presentes em reservatorios petroliferos usando splines de Bezier

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Carlos Eduardo S. [Universidade Federal de Campina Grande, PB (Brazil). Programa de Recursos Humanos 25 da ANP]. E-mail: carlos@dme.ufcg.edu.br; Silva, Rosana M. da [Universidade Federal de Campina Grande, PB (Brazil). Dept. de Matematica e Estatistica]. E-mail: rosana@dme.ufcg.edu.br

    2004-07-01

This work presents an implementation of a synthetic model of a channel found in oil reservoirs. The generation of these models is one of the steps in the characterization and simulation of equally probable three-dimensional geological scenarios. The implemented model was obtained by fitting geometric modeling techniques for curves and surfaces to the geological parameters (width, thickness, sinuosity and preferential direction) that define the shape to be modeled. The sinuosity parameter is related to the wavelength and the local amplitude of the channel; the preferential direction parameter indicates the direction of the flow and the declivity of the channel. The modeling technique used to represent the surface of the channel is sweeping, which consists in translating a curve along a guide curve. The guide curve, in our implementation, was generated by interpolating points obtained from sampled or simulated values of the sinuosity parameter, using the cubic Bezier spline technique. A semi-ellipse, determined by the width and thickness parameters and representing a transversal section of the channel, is swept along the guide curve, generating the channel surface. (author)
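
A compact sketch of the described sweep, with invented channel parameters: a cubic Bezier guide curve, and a semi-elliptical cross-section translated along it in the direction normal to the centreline.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier guide curve at parameter values t."""
    t = t[:, None]
    return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
            + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

width, thickness = 40.0, 8.0                      # hypothetical parameters
ctrl = [np.array([0.0, 0.0]), np.array([100.0, 80.0]),
        np.array([200.0, -80.0]), np.array([300.0, 0.0])]

t = np.linspace(0.0, 1.0, 100)
guide = cubic_bezier(*ctrl, t)                    # channel centreline (map view)
tangent = np.gradient(guide, axis=0)
normal = np.column_stack([-tangent[:, 1], tangent[:, 0]])
normal /= np.linalg.norm(normal, axis=1, keepdims=True)

s = np.linspace(0.0, np.pi, 21)                   # cross-section parameter
offset = 0.5 * width * np.cos(s)                  # across-channel position
depth = -thickness * np.sin(s)                    # semi-elliptical base

# sweep: translate the semi-ellipse along the guide curve
X = guide[:, None, 0] + offset[None, :] * normal[:, None, 0]
Y = guide[:, None, 1] + offset[None, :] * normal[:, None, 1]
Z = np.broadcast_to(depth, X.shape)               # channel surface grid
```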

  7. The modeling of quadratic B-splines surfaces for the tomographic reconstruction in the FCC- type-riser

    International Nuclear Information System (INIS)

    Vasconcelos, Geovane Vitor; Dantas, Carlos Costa; Melo, Silvio de Barros; Pires, Renan Ferraz

    2009-01-01

The 3D tomography reconstruction has been a profitable alternative in the analysis of the FCC-type riser (Fluid Catalytic Cracking), for appropriately keeping track of the sectional catalyst concentration distribution in the process of oil refining. The method of tomographic reconstruction proposed by M. Azzi and colleagues (1991) uses a relatively small number of trajectories (from 3 to 5) and projections (from 5 to 7) of gamma rays, a desirable feature in industrial process tomography. Compared to more popular methods, such as FBP (Filtered Back Projection), which demands a much higher number of gamma ray projections, the method by Azzi et al. is more appropriate for the industrial process, where the physical limitations and the cost of the process require more economical arrangements. The use of few projections and trajectories facilitates the diagnosis of the flow dynamical process. This article proposes an improvement in the basis functions introduced by Azzi et al., through the use of quadratic B-spline functions. The use of B-spline functions makes possible a smoother surface reconstruction of the density distribution, since the functions are continuous and smooth. This work describes how the modeling can be done. (author)

  8. Considering a non-polynomial basis for local kernel regression problem

    Science.gov (United States)

    Silalahi, Divo Dharma; Midi, Habshah

    2017-01-01

A commonly used solution for the local kernel nonparametric regression problem is polynomial regression. In this study, we demonstrate the estimator and its properties using the maximum likelihood estimator for a non-polynomial basis, such as B-splines, replacing the polynomial basis. This estimator allows for flexibility in the selection of a bandwidth and a knot. The best estimator was selected by finding an optimal bandwidth and knot through minimizing the generalized cross-validation function.
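
A small stand-in for the selection step, using scipy's least-squares B-spline fit and choosing the number of interior knots by minimizing generalized cross-validation (GCV); the local-kernel weighting and the simultaneous bandwidth search described above are simplified away here.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def gcv_bspline_fit(x, y, k=3, max_interior=15):
    """Select the number of interior knots of a least-squares B-spline
    regression by minimizing GCV; x must be sorted in increasing order."""
    n = x.size
    best = (np.inf, None)
    for m in range(1, max_interior + 1):
        interior = np.linspace(x[0], x[-1], m + 2)[1:-1]
        t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]  # clamped knots
        spl = make_lsq_spline(x, y, t, k)
        rss = np.sum((y - spl(x)) ** 2)
        p = len(t) - k - 1                  # effective number of parameters
        gcv = n * rss / (n - p) ** 2        # GCV score
        if gcv < best[0]:
            best = (gcv, spl)
    return best

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 1.0, 300))
y = np.sin(6 * x) + 0.2 * rng.normal(size=x.size)
score, spline = gcv_bspline_fit(x, y)       # GCV-optimal spline fit
```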

  9. SPLPKG WFCMPR WFAPPX, Wilson-Fowler Spline Generator for Computer Aided Design And Manufacturing (CAD/CAM) Systems

    International Nuclear Information System (INIS)

    Fletcher, S.K.

    2002-01-01

1 - Description of program or function: The three programs SPLPKG, WFCMPR, and WFAPPX provide the capability for interactively generating, comparing and approximating Wilson-Fowler splines. The Wilson-Fowler spline is widely used in Computer Aided Design and Manufacturing (CAD/CAM) systems. It is favored for many applications because it produces a smooth, low curvature fit to planar data points. Program SPLPKG generates a Wilson-Fowler spline passing through given nodes (with given end conditions) and also generates a piecewise linear approximation to that spline within a user-defined tolerance. The program may be used to generate a 'desired' spline against which to compare other splines generated by CAD/CAM systems. It may also be used to generate an acceptable approximation to a desired spline in the event that an acceptable spline cannot be generated by the receiving CAD/CAM system. SPLPKG writes an IGES file of points evaluated on the spline and/or a file containing the spline description. Program WFCMPR computes the maximum difference between two Wilson-Fowler splines and may be used to verify the spline recomputed by a receiving system. It compares two Wilson-Fowler splines with common nodes and reports the maximum distance between curves (measured perpendicular to segments) and the maximum difference of their tangents (or normals), both computed along the entire length of the splines. Program WFAPPX computes the maximum difference between a Wilson-Fowler spline and a piecewise linear curve. It may be used to accept or reject a proposed approximation to a desired Wilson-Fowler spline, even if the origin of the approximation is unknown. The maximum deviation between these two curves, and the parameter value on the spline where it occurs, are reported. 2 - Restrictions on the complexity of the problem - Maxima of: 1600 evaluation points (SPLPKG), 1000 evaluation points (WFAPPX), 1000 linear curve breakpoints (WFAPPX), 100 spline nodes

  10. Regression Models for Market-Shares

    DEFF Research Database (Denmark)

    Birch, Kristina; Olsen, Jørgen Kai; Tjur, Tue

    2005-01-01

On the background of a data set of weekly sales and prices for three brands of coffee, this paper discusses various regression models and their relation to the multiplicative competitive-interaction model (the MCI model, see Cooper 1988, 1993) for market-shares. Emphasis is put on the interpretation of the parameters in relation to models for the total sales based on discrete choice models. Key words and phrases: MCI model, discrete choice model, market-shares, price elasticity, regression model.

  11. Exponential B-splines and the partition of unity property

    DEFF Research Database (Denmark)

    Christensen, Ole; Massopust, Peter

    2012-01-01

    We provide an explicit formula for a large class of exponential B-splines. Also, we characterize the cases where the integer-translates of an exponential B-spline form a partition of unity up to a multiplicative constant. As an application of this result we construct explicitly given pairs of dual...

  12. Shape Preserving Interpolation Using C2 Rational Cubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2016-01-01

Full Text Available This paper discusses the construction of a new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea has been extended to shape preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. The sufficient conditions for positivity are derived on one parameter γi while the other two parameters αi and βi are free parameters that can be used to change the final shape of the resulting interpolating curves. This enables the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably, our scheme is easy to use and does not require knot insertion, and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Comparisons with existing schemes have also been done in detail. From all presented numerical results, the new C2 rational cubic spline gives very smooth interpolating curves compared to some established rational cubic schemes. An error analysis when the function to be interpolated is f(t) ∈ C^3[t_0, t_n] is also investigated in detail.

  13. Motion characteristic between die and workpiece in spline rolling process with round dies

    Directory of Open Access Journals (Sweden)

    Da-Wei Zhang

    2016-06-01

Full Text Available In the spline rolling process with round dies, additional kinematic compensation is an essential mechanism for improving the division of teeth and pitch accuracy as well as surface quality. The motion characteristic between the die and workpiece under varied center distance in the spline rolling process was investigated. Mathematical models of the instantaneous center of rotation, transmission ratio, and centrodes in the rolling process were established. The models were used to analyze the rolling process of an involute spline with circular dedendum, and the results indicated that (1) with the reduction in the center distance, the instantaneous center moves toward the workpiece, and the transmission ratio increases at first and then decreases; (2) the variations in the instantaneous center and transmission ratio are discontinuous, presenting an interruption when the involute flank begins to be formed; (3) the change in transmission ratio at the forming stage of the workpiece with the involute flank can be neglected; and (4) the centrode of the workpiece is an Archimedes line whose polar radius reduces, and the centrode of the rolling die is similar to an Archimedes line when the workpiece is with the involute flank.

  14. A new class of interpolatory $L$-splines with adjoint end conditions

    OpenAIRE

    Bejancu, Aurelian; Al-Sahli, Reyouf S.

    2014-01-01

    A thin plate spline surface for interpolation of smooth transfinite data prescribed along concentric circles was recently proposed by Bejancu, using Kounchev's polyspline method. The construction of the new `Beppo Levi polyspline' surface reduces, via separation of variables, to that of a countable family of univariate $L$-splines, indexed by the frequency integer $k$. This paper establishes the existence, uniqueness and variational properties of the `Beppo Levi $L$-spline' schemes correspond...

  15. Spline models of contemporary, 2030, 2060, and 2090 climates for Mexico and their use in understanding climate-change impacts on the vegetation

    Science.gov (United States)

    Cuauhtemoc Saenz-Romero; Gerald E. Rehfeldt; Nicholas L. Crookston; Pierre Duval; Remi St-Amant; Jean Beaulieu; Bryce A. Richardson

    2010-01-01

    Spatial climate models were developed for Mexico and its periphery (southern USA, Cuba, Belize and Guatemala) for monthly normals (1961-1990) of average, maximum and minimum temperature and precipitation using thin plate smoothing splines of ANUSPLIN software on ca. 3,800 observations. The fit of the model was generally good: the signal was considerably less than one-...

  16. Nonlinear Multivariate Spline-Based Control Allocation for High-Performance Aircraft

    OpenAIRE

    Tol, H.J.; De Visser, C.C.; Van Kampen, E.; Chu, Q.P.

    2014-01-01

    High performance flight control systems based on the nonlinear dynamic inversion (NDI) principle require highly accurate models of aircraft aerodynamics. In general, the accuracy of the internal model determines to what degree the system nonlinearities can be canceled; the more accurate the model, the better the cancellation, and with that, the higher the performance of the controller. In this paper a new control system is presented that combines NDI with multivariate simplex spline based con...

  17. Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases

    Science.gov (United States)

    Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre

    2011-12-01

    Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and versus the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.

  18. A Seemingly Unrelated Poisson Regression Model

    OpenAIRE

    King, Gary

    1989-01-01

    This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

  19. P-Splines Using Derivative Information

    KAUST Repository

    Calderon, Christopher P.; Martinez, Josue G.; Carroll, Raymond J.; Sorensen, Danny C.

    2010-01-01

    in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between

  20. B-spline tight frame based force matching method

    Science.gov (United States)

    Yang, Jianbin; Zhu, Guanhua; Tong, Dudu; Lu, Lanyuan; Shen, Zuowei

    2018-06-01

    In molecular dynamics simulations, compared with popular all-atom force field approaches, coarse-grained (CG) methods are frequently used for rapid investigations of long time- and length-scale processes in many important biological and soft matter studies. The typical task in coarse-graining is to derive interaction force functions between different CG site types in terms of their distance, bond angle or dihedral angle. In this paper, an ℓ1-regularized least squares model is applied to form the force functions, which makes additional use of the B-spline wavelet frame transform in order to preserve the important features of the force functions. The B-spline tight frame system has a simple explicit expression which is useful for representing our force functions. Moreover, the redundancy of the system offers more resilience to the effects of noise and is useful in the case of lossy data. Numerical results for molecular systems involving pairwise non-bonded, three- and four-body bonded interactions are obtained to demonstrate the effectiveness of our approach.
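    As a rough illustration of the ℓ1-regularized least-squares step, the sketch below fits a sparse spline coefficient vector to noisy pairwise forces using an ordinary cubic B-spline design matrix; the paper's B-spline tight wavelet frame and its redundancy are not reproduced here, and the Lennard-Jones-style test data are made up (assumes SciPy ≥ 1.8 for BSpline.design_matrix):

```python
import numpy as np
from scipy.interpolate import BSpline
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy force-matching data: sorted pair distances r and noisy reference forces
r = np.sort(rng.uniform(0.9, 2.5, 400))
F = 24.0 * (2.0 / r**13 - 1.0 / r**7) + 0.05 * rng.standard_normal(r.size)

# Cubic B-spline design matrix on a clamped uniform knot vector
k = 3
interior = np.linspace(0.9, 2.5, 20)
knots = np.concatenate(([0.9] * k, interior, [2.5] * k))
B = BSpline.design_matrix(r, knots, k).toarray()

# l1-regularized least squares for sparse spline coefficients
lasso = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100000).fit(B, F)
F_fit = B @ lasso.coef_       # fitted force function at the sampled distances
```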

  1. Panel Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    González, Andrés; Terasvirta, Timo; Dijk, Dick van

    We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...

  2. Modeling vector nonlinear time series using POLYMARS

    NARCIS (Netherlands)

    de Gooijer, J.G.; Ray, B.K.

    2003-01-01

    A modified multivariate adaptive regression splines method for modeling vector nonlinear time series is investigated. The method results in models that can capture certain types of vector self-exciting threshold autoregressive behavior, as well as provide good predictions for more general vector

  3. Splines and their reciprocal-bases in volume-integral equations

    International Nuclear Information System (INIS)

    Sabbagh, H.A.

    1993-01-01

    The authors briefly outline the use of higher-order splines and their reciprocal-bases in discretizing the volume-integral equations of electromagnetics. The discretization is carried out by means of the method of moments, in which the expansion functions are the higher-order splines, and the testing functions are the corresponding reciprocal-basis functions. These functions satisfy an orthogonality condition with respect to the spline expansion functions. Thus, the method is not Galerkin, but the structure of the resulting equations is quite regular, nevertheless. The theory is applied to the volume-integral equations for the unknown current density, or unknown electric field, within a scattering body, and to the equations for eddy-current nondestructive evaluation. Numerical techniques for computing the matrix elements are also given

  4. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
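    To make the two-step recipe concrete, here is a minimal sketch for the scalar ODE dx/dt = -θx: the state is first smoothed with a penalized smoothing spline, and the smoothed values are then plugged into the trapezoidal discretization, leaving θ as a simple regression slope (illustrative data; assumes SciPy ≥ 1.10 for make_smoothing_spline):

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(1)
theta_true = 0.7

# Noisy observations of the state x(t) = exp(-theta * t)
t = np.linspace(0.0, 5.0, 60)
x_obs = np.exp(-theta_true * t) + 0.02 * rng.standard_normal(t.size)

# Step 1: penalized-spline estimate of the state (lam=None -> GCV choice)
xhat = make_smoothing_spline(t, x_obs)(t)

# Step 2: trapezoidal rule, x[i+1] - x[i] = -(h/2) * theta * (x[i] + x[i+1]),
# turns parameter estimation into a no-intercept least-squares regression
h = np.diff(t)
y = np.diff(xhat)                       # response
z = -0.5 * h * (xhat[:-1] + xhat[1:])   # regressor
theta_hat = (z @ y) / (z @ z)           # least-squares slope
print(theta_hat)                        # should be close to 0.7
```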

  5. Interpretation of commonly used statistical regression models.

    Science.gov (United States)

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.

  6. Fast compact algorithms and software for spline smoothing

    CERN Document Server

    Weinert, Howard L

    2012-01-01

    Fast Compact Algorithms and Software for Spline Smoothing investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.
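    The book's implementations are in MATLAB; as a rough modern equivalent, SciPy (≥ 1.10) also provides a cubic smoothing spline whose penalty parameter is chosen by minimizing the generalized cross-validation score when lam is omitted. A minimal sketch:

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + 0.3 * rng.standard_normal(x.size)

# lam=None lets the routine pick the smoothing parameter by GCV
spl = make_smoothing_spline(x, y, lam=None)
y_smooth = spl(x)
```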

  7. Local Adaptive Calibration of the GLASS Surface Incident Shortwave Radiation Product Using Smoothing Spline

    Science.gov (United States)

    Zhang, X.; Liang, S.; Wang, G.

    2015-12-01

    Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: surface direct measurements are accurate but provide sparse spatial coverage, whereas other global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally calibration, but transforming their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation was also considered in the proposed method. The point-based reconstructed surface ISR was used as the response variable, and the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables, to train the thin plate spline model. We evaluated the performance of the approach using cross-validation at both daily and monthly time scales over China. We also evaluated the estimated ISR based on the thin-plate spline method using independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicated that the thin plate smoothing spline method can be effectively used for calibrating satellite-derived ISR products using ground measurements to achieve better accuracy.

  8. Regression trees for predicting mortality in patients with cardiovascular disease: What improvement is achieved by using ensemble-based methods?

    Science.gov (United States)

    Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V

    2012-01-01

    In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
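    The comparison reported here can be reproduced in spirit with scikit-learn; note that sklearn's SplineTransformer produces B-spline rather than restricted cubic spline bases, and the data below are synthetic, so this is only a sketch of the experimental design:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

X, y = make_classification(n_samples=4000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Logistic regression on spline-expanded continuous predictors
lr_spline = make_pipeline(
    SplineTransformer(degree=3, n_knots=5),
    LogisticRegression(max_iter=5000),
).fit(X_tr, y_tr)

# Ensemble of classification trees as the competing method
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

for name, m in [("spline logistic", lr_spline), ("random forest", rf)]:
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```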

  9. Final report on Production Test No. 105-245-P -- Effectiveness of cadmium coated splines

    Energy Technology Data Exchange (ETDEWEB)

    Carson, A.B.

    1949-05-19

    This report discussed cadmium coated splines which have been developed to supplement the regular control rod systems under emergency shutdown conditions from higher power levels. The objective of this test was to determine the effectiveness of one such spline placed in a tube in the central zone of a pile, and of two splines in the same tube. In addition, the process control group of the P Division asked that probable spline requirements for safe operation at various power levels be estimated, and the details included in this report. The results of the test indicated a reactivity value of 10.5 ± 1.0 ih for a single spline, and 19.0 ± 1.0 ih for two splines in tube 1674-B under the loading conditions of 4-27-49, the date of the test. The temperature rise of the cooling water for this tube under these conditions was found to be 37.2°C for 275 MW operation.

  10. Space cutter compensation method for five-axis nonuniform rational basis spline machining

    Directory of Open Access Journals (Sweden)

    Yanyu Ding

    2015-07-01

    In view of the good machining performance of traditional three-axis nonuniform rational basis spline interpolation and the space cutter compensation issue in multi-axis machining, this article presents a triple nonuniform rational basis spline five-axis interpolation method, which uses three nonuniform rational basis spline curves to describe the cutter center location, cutter axis vector, and cutter contact point trajectory, respectively. The relative position of the cutter and workpiece is calculated under the workpiece coordinate system, and the cutter machining trajectory can be described precisely and smoothly using this method. The three nonuniform rational basis spline curves are transformed into a 12-dimensional Bézier curve to carry out discretization during the discrete process. With the cutter contact point trajectory as the precision control condition, the discretization is fast. For different cutters and corners, a complete description method of the space cutter compensation vector is presented in this article. Finally, the five-axis nonuniform rational basis spline machining method is further verified on a two-turntable five-axis machine.

  11. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Modeling Fetal Weight for Gestational Age: A Comparison of a Flexible Multi-level Spline-based Model with Other Approaches

    Science.gov (United States)

    Villandré, Luc; Hutcheon, Jennifer A; Perez Trejo, Maria Esther; Abenhaim, Haim; Jacobsen, Geir; Platt, Robert W

    2011-01-01

    We present a model for longitudinal measures of fetal weight as a function of gestational age. We use a linear mixed model, with a Box-Cox transformation of fetal weight values, and restricted cubic splines, in order to flexibly but parsimoniously model median fetal weight. We systematically compare our model to other proposed approaches. All proposed methods are shown to yield similar median estimates, as evidenced by overlapping pointwise confidence bands, except after 40 completed weeks, where our method seems to produce estimates more consistent with observed data. Sex-based stratification affects the estimates of the random effects variance-covariance structure, without significantly changing sex-specific fitted median values. We illustrate the benefits of including sex-gestational age interaction terms in the model over stratification. The comparison leads to the conclusion that the selection of a model for fetal weight for gestational age can be based on the specific goals and configuration of a given study without affecting the precision or value of median estimates for most gestational ages of interest. PMID:21931571

  13. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction among all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.
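    As a point of reference, the ordinary Poisson regression baseline mentioned in the abstract fits in a few lines with statsmodels; the EM estimation of the mixture components is beyond this sketch, and the covariates and data are synthetic:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1000
X = rng.standard_normal((n, 2))                  # two standardized risk factors
rate = np.exp(0.2 + 0.5 * X[:, 0] - 0.3 * X[:, 1])
y = rng.poisson(rate)                            # observed event counts

# Ordinary (one-component) Poisson GLM with a log link
glm = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(glm.params)    # mixture models would be compared to this fit via BIC
```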

  14. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction among all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611

  15. B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy

    Directory of Open Access Journals (Sweden)

    Shilpa Dilipkumar

    2015-03-01

    An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) techniques with quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several imaging modalities of fluorescence microscopy like selective plane illumination microscopy, localization microscopy and STED.

  16. About the Modeling of Radio Source Time Series as Linear Splines

    Science.gov (United States)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2016-12-01

    Many of the time series of radio sources observed in geodetic VLBI show variations, caused mainly by changes in source structure. However, until now it has been common practice to consider source positions as invariant, or to exclude known misbehaving sources from the datum conditions. This may lead to a degradation of the estimated parameters, as unmodeled apparent source position variations can propagate to the other parameters through the least squares adjustment. In this paper we will introduce an automated algorithm capable of parameterizing the radio source coordinates as linear splines.

  17. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    Science.gov (United States)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

    A greater need for sophisticated autonomous piloting systems has risen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of three-dimensional Euclidean space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
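    For orientation, once the control points are generated, each cubic Bézier segment is evaluated by repeated linear interpolation (de Casteljau's algorithm); a minimal sketch with illustrative 3-D control points:

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve with control points ctrl (n x dim) at t in [0, 1]."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]  # one round of lerps
    return pts[0]

# Four 3-D control points shaping one smooth segment between two waypoints
ctrl = [(0.0, 0.0, 10.0), (5.0, 0.0, 12.0), (10.0, 5.0, 12.0), (15.0, 5.0, 10.0)]
curve = np.array([de_casteljau(ctrl, t) for t in np.linspace(0.0, 1.0, 50)])
```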

  18. Integration by cell algorithm for Slater integrals in a spline basis

    International Nuclear Information System (INIS)

    Qiu, Y.; Fischer, C.F.

    1999-01-01

    An algorithm for evaluating Slater integrals in a B-spline basis is introduced. Based on the piecewise property of the B-splines, the algorithm divides the two-dimensional (r1, r2) region into a number of rectangular cells according to the chosen grid and implements the two-dimensional integration over each individual cell using Gaussian quadrature. Over the off-diagonal cells, the integrands are separable so that each two-dimensional cell-integral is reduced to a product of two one-dimensional integrals. Furthermore, the scaling invariance of the B-splines in the logarithmic region of the chosen grid is fully exploited such that only some of the cell integrations need to be implemented. The values of given Slater integrals are obtained by assembling the cell integrals. This algorithm significantly improves the efficiency and accuracy of the traditional method that relies on the solution of differential equations, and renders the B-spline method more effective when applied to multi-electron atomic systems.

  19. Numerical Solutions for Convection-Diffusion Equation through Non-Polynomial Spline

    Directory of Open Access Journals (Sweden)

    Ravi Kanth A.S.V.

    2016-01-01

    In this paper, numerical solutions for the convection-diffusion equation via non-polynomial splines are studied. We propose an implicit method based on non-polynomial spline functions for solving the convection-diffusion equation. The method is proven to be unconditionally stable by using the Von Neumann technique. Numerical results are illustrated to demonstrate the efficiency and stability of the proposed method.

  20. Mixture of Regression Models with Single-Index

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2016-01-01

    In this article, we propose a class of semiparametric mixture regression models with single-index. We argue that many recently proposed semiparametric/nonparametric mixture regression models can be considered special cases of the proposed model. However, unlike existing semiparametric mixture regression models, the new proposed model can easily incorporate multivariate predictors into the nonparametric components. Backfitting estimates and the corresponding algorithms have been proposed for...

  1. Tomographic reconstruction with B-splines surfaces

    International Nuclear Information System (INIS)

    Oliveira, Eric F.; Dantas, Carlos C.; Melo, Silvio B.; Mota, Icaro V.; Lira, Mailson

    2011-01-01

    Algebraic reconstruction techniques, when applied to a limited number of data, usually suffer from noise caused by the correction process or by inconsistencies in the data coming from the stochastic process of radioactive emission and oscillation of the equipment. Post-processing of the reconstructed image with the application of filters can be done to mitigate the presence of noise. In general these processes also attenuate the discontinuities present in the edges that distinguish objects or artifacts, causing excessive blurring in the reconstructed image. This paper proposes a built-in noise reduction that at the same time ensures an adequate smoothness level in the reconstructed surface, representing the unknowns as linear combinations of elements of a piecewise polynomial basis, i.e. a B-spline basis. For that, the algebraic technique ART is modified to accommodate first, second and third degree bases, ensuring C0, C1 and C2 smoothness levels, respectively. For comparison, three methodologies are applied: ART, ART post-processed with regular B-spline filters (ART*) and the proposed method with the built-in B-spline filter (BsART). Simulations with input data produced from common mathematical phantoms were conducted. For the phantoms used, the BsART method consistently presented the smallest errors among the three methods. This study has shown the superiority of the change made to embed the filter in the ART when compared to the post-filtered ART. (author)
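    For reference, the baseline ART update modified by the paper is a Kaczmarz-type sweep over the projection system Ax = b; a minimal sketch without the B-spline re-parameterization of the unknowns (matrix and data are synthetic):

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=0.5, x0=None):
    """Kaczmarz-type ART: cyclically project x onto each hyperplane a_i.x = b_i."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float)
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norm2[i] > 0.0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

# Tiny consistent test system standing in for a projection geometry
rng = np.random.default_rng(3)
A = rng.random((40, 25))
x_true = rng.random(25)
x_rec = art(A, A @ x_true, n_sweeps=200)   # converges toward x_true
```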

  2. [Multimodal medical image registration using cubic spline interpolation method].

    Science.gov (United States)

    He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-12-01

    Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which the cubic spline interpolation method is applied to realize the interpolation of the PET-CT image series, then registration is carried out by using a mutual information algorithm, and finally the improved principal component analysis method is used for the fusion of the PET-CT multimodal images to enhance the visual effect of the PET image; thus satisfactory registration and fusion results are obtained. The cubic spline interpolation method is used for reconstruction to restore the missing information between image slices, which can compensate for the shortcomings of previous registration methods, improve the accuracy of the registration, and make the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.
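    The inter-slice interpolation step can be prototyped directly with SciPy's cubic spline by interpolating every pixel along the slice axis at once; shapes, spacings and data below are illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic image stack: 8 slices of 64 x 64 pixels at positions z_in
z_in = np.arange(8) * 4.0                  # e.g. 4 mm slice spacing
stack = np.random.default_rng(4).random((8, 64, 64))

# Fit a cubic spline along the slice axis for every pixel at once (axis=0)
cs = CubicSpline(z_in, stack, axis=0)

# Resample to 1 mm spacing, restoring values between acquired slices
z_out = np.arange(0.0, z_in[-1] + 1e-9, 1.0)
stack_fine = cs(z_out)                     # shape (29, 64, 64)
```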

  3. An Additive-Multiplicative Cox-Aalen Regression Model

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; Cox regression; survival analysis; time-varying effects.

  4. [From clinical judgment to linear regression model].

    Science.gov (United States)

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bX, where "a" is the intercept or regression constant and is equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "X" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
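    The quantities described here (the intercept a, the slope b and the coefficient of determination R²) take only a few lines to compute; a minimal sketch with made-up data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])

b, a = np.polyfit(x, y, 1)          # slope (regression coefficient), intercept
y_hat = a + b * x                   # regression line Y = a + bX
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Y = {a:.2f} + {b:.2f}X, R^2 = {r2:.3f}")
```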

  5. Regression models of reactor diagnostic signals

    International Nuclear Information System (INIS)

    Vavrin, J.

    1989-01-01

    The application of an autoregression model, as the simplest regression model of diagnostic signals, is described for the experimental analysis of diagnostic systems, for in-service monitoring of normal and anomalous conditions, and for their diagnostics. The method of diagnostics using a regression-type diagnostic data base and regression spectral diagnostics is described. The diagnostics of neutron noise signals from anomalous modes in the experimental fuel assembly of a reactor is described. (author)

  6. Forecasting with Dynamic Regression Models

    CERN Document Server

    Pankratz, Alan

    2012-01-01

    Single equation regression models, one of the most widely used tools in statistical forecasting, are examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed lag responses of output to input series, and the autocorrelation patterns of regression disturbances. It also includes six case studies.

  7. Adaptive estimation of multivariate functions using conditionally Gaussian tensor-product spline priors

    NARCIS (Netherlands)

    Jonge, de R.; Zanten, van J.H.

    2012-01-01

    We investigate posterior contraction rates for priors on multivariate functions that are constructed using tensor-product B-spline expansions. We prove that using a hierarchical prior with an appropriate prior distribution on the partition size and Gaussian prior weights on the B-spline

  8. Series-NonUniform Rational B-Spline (S-NURBS) model: a geometrical interpolation framework for chaotic data.

    Science.gov (United States)

    Shao, Chenxi; Liu, Qingqing; Wang, Tingting; Yin, Peifeng; Wang, Binghong

    2013-09-01

    Time series are widely exploited to study the innate character of complex chaotic systems. Existing chaotic models are weak in modeling accuracy because they adopt either an error-minimization strategy or an acceptable error to end the modeling process. Instead, interpolation can be very useful for solving differential equations with a small modeling error, but it is also very difficult to deal with arbitrary-dimensional series. In this paper, geometric theory is considered to reduce the modeling error, and a high-precision framework called the Series-NonUniform Rational B-Spline (S-NURBS) model is developed to deal with arbitrary-dimensional series. The capability of the interpolation framework is proved in the validation part. Besides, we verify its reliability by interpolating the Musa dataset. The main improvement of the proposed framework is that we are able to reduce the interpolation error by properly adjusting the weights series step by step if more information is given. Meanwhile, these experiments also demonstrate that studying physical systems from a geometric perspective is feasible.

  9. On convexity and Schoenberg's variation diminishing splines

    International Nuclear Information System (INIS)

    Feng, Yuyu; Kozak, J.

    1992-11-01

    In the paper we characterize a convex function by the monotonicity of a particular variation diminishing spline sequence. The result extends the property known for the Bernstein polynomial sequence. (author). 4 refs

  10. Categorical regression dose-response modeling

    Science.gov (United States)

    The goal of this training is to provide participants with training on the use of the U.S. EPA's Categorical Regression software (CatReg) and its application to risk assessment. Categorical regression fits mathematical models to toxicity data that have been assigned ord...

  11. Mixed-effects regression models in linguistics

    CERN Document Server

    Heylen, Kris; Geeraerts, Dirk

    2018-01-01

    When data consist of grouped observations or clusters, and there is a risk that measurements within the same group are not independent, group-specific random effects can be added to a regression model in order to account for such within-group associations. Regression models that contain such group-specific random effects are called mixed-effects regression models, or simply mixed models. Mixed models are a versatile tool that can handle both balanced and unbalanced datasets and that can also be applied when several layers of grouping are present in the data; these layers can either be nested or crossed.  In linguistics, as in many other fields, the use of mixed models has gained ground rapidly over the last decade. This methodological evolution enables us to build more sophisticated and arguably more realistic models, but, due to its technical complexity, also introduces new challenges. This volume brings together a number of promising new evolutions in the use of mixed models in linguistics, but also addres...

  12. Moderation analysis using a two-level regression model.

    Science.gov (United States)

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
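    The MMR baseline discussed above (the criterion regressed on a predictor, a moderator and their product, estimated by least squares) is a one-liner in statsmodels; the paper's two-level NML estimator itself is not reproduced in this sketch, and the data are synthetic:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 500
df = pd.DataFrame({"x": rng.standard_normal(n), "m": rng.standard_normal(n)})
# True moderation: the effect of x on y grows with the moderator m
df["y"] = 1.0 + (0.5 + 0.3 * df["m"]) * df["x"] + rng.standard_normal(n)

# Moderated multiple regression: 'x * m' expands to x + m + x:m
fit = smf.ols("y ~ x * m", data=df).fit()
print(fit.params)   # the x:m coefficient estimates the moderation effect
```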

  13. Scripted Bodies and Spline Driven Animation

    DEFF Research Database (Denmark)

    Erleben, Kenny; Henriksen, Knud

    2002-01-01

    In this paper we will take a close look at the details and technicalities in applying spline driven animation to scripted bodies in the context of dynamic simulation. The main contributions presented in this paper are methods for computing velocities and accelerations in the time domain...

  14. Numerical simulation of reaction-diffusion systems by modified cubic B-spline differential quadrature method

    International Nuclear Information System (INIS)

    Mittal, R.C.; Rohila, Rajni

    2016-01-01

    In this paper, we have applied a modified cubic B-spline based differential quadrature method to obtain numerical solutions of one-dimensional reaction-diffusion systems such as the linear reaction-diffusion system, the Brusselator system, the isothermal system and the Gray-Scott system. The models represented by these systems have important applications in different areas of science and engineering. The most striking and interesting part of the work is the solution patterns obtained for the Gray-Scott model, reminiscent of which are often seen in nature. We have used cubic B-spline functions for space discretization to get a system of ordinary differential equations. This system of ODEs is solved by the highly stable SSP-RK43 method to get the solution at the knots. The computed results are very accurate and shown to be better than those available in the literature. The method is easy and simple to apply and gives solutions with less computational effort.

  15. Numerical solution of system of boundary value problems using B-spline with free parameter

    Science.gov (United States)

    Gupta, Yogesh

    2017-01-01

    This paper deals with a B-spline solution method for a system of boundary value problems. Differential equations are useful in various fields of science and engineering. Some interesting real-life problems involve more than one unknown function; these result in systems of simultaneous differential equations. Such systems have been applied to many problems in mathematics, physics, engineering, etc. In the present paper, B-spline and B-spline-with-free-parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points, together with the equations of the given system and boundary conditions, resulting in a linear matrix equation.

  16. Multinomial Response Models, for Modeling and Determining Important Factors in Different Contraceptive Methods in Women

    Directory of Open Access Journals (Sweden)

    E Haji Nejad

    2001-06-01

    Different aspects of multinomial statistical modeling and its classifications have been studied so far. In this type of problem, Y is a qualitative random variable with T possible states, which are considered as classifications. The goal is prediction of Y based on a random vector X ∈ ℝ^m. Many methods for analyzing these problems have been considered. One modern and general method of classification is Classification and Regression Trees (CART). Another method is recursive partitioning techniques, which have a strong relationship with nonparametric regression. Classical discriminant analysis is a standard method for analyzing these types of data. Flexible discriminant analysis is a combination of nonparametric regression and discriminant analysis, and classification using splines includes least-squares regression and additive cubic splines. Neural networks are an advanced statistical method for analyzing these types of data. In this paper, the properties of multinomial logistic regression were investigated, and this method was used for modeling the factors affecting the selection of contraceptive methods in Ghom province for married women aged 15-49. The response variable has a tetranomial distribution; the levels of this variable are: nothing, pills, traditional methods, and a collection of other contraceptive methods. The significant independent variables were: place, woman's age, education, history of pregnancy, and family size. Menstruation age and age at marriage were not statistically significant.

  17. The Asymptotic Behavior of Particle Size Distribution Undergoing Brownian Coagulation Based on the Spline-Based Method and TEMOM Model

    Directory of Open Access Journals (Sweden)

    Qing He

    2018-01-01

    In this paper, the particle size distribution is reconstructed using finite moments based on a converted spline-based method, in which the size of the linear system of equations to be solved is reduced from 4m × 4m to (m + 3) × (m + 3) for (m + 1) nodes by using cubic splines, compared to the original method. The results are first verified by comparison with the reference. Then, coupled with the Taylor-series expansion moment method, the evolution of the particle size distribution undergoing Brownian coagulation and its asymptotic behavior are investigated.

  18. Development of quadrilateral spline thin plate elements using the B-net method

    Science.gov (United States)

    Chen, Juan; Li, Chong-Jun

    2013-08-01

    The quadrilateral discrete Kirchhoff thin plate bending element DKQ is based on the isoparametric element Q8; however, the accuracy of isoparametric quadrilateral elements drops significantly under mesh distortion. In a previous work, we constructed an 8-node quadrilateral spline element L8 using triangular area coordinates and the B-net method, which is insensitive to mesh distortion and possesses second-order completeness in the Cartesian coordinates. In this paper, a thin plate spline element is developed based on the spline element L8 and the refined technique. Numerical examples show that the present element indeed possesses higher accuracy than the DKQ element for distorted meshes.

  19. Landmark-based elastic registration using approximating thin-plate splines.

    Science.gov (United States)

    Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H

    2001-06-01

    We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and makes it possible to take landmark localization errors into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
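    An approximating (rather than interpolating) thin-plate warp can be prototyped with SciPy's RBFInterpolator, whose scalar smoothing parameter plays the role of an isotropic landmark-error weight; the anisotropic case of the paper is not covered by this API, and the landmark coordinates below are made up:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding 2-D landmarks in two images (coordinates are illustrative)
src = np.array([[10.0, 10.0], [90.0, 12.0], [50.0, 50.0],
                [15.0, 85.0], [88.0, 90.0]])
dst = src + np.array([[2.0, -1.0], [-1.5, 2.0], [0.5, 0.5],
                      [1.0, 1.5], [-2.0, 1.0]])

# smoothing > 0 relaxes exact interpolation to approximation,
# tolerating isotropic landmark localization errors
warp = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=1.0)

# Apply the warp to arbitrary image points
pts = np.array([[30.0, 40.0], [60.0, 70.0]])
warped = warp(pts)
```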

  20. 3D craniofacial registration using thin-plate spline transform and cylindrical surface projection.

    Science.gov (United States)

    Chen, Yucong; Zhao, Junli; Deng, Qingqiong; Duan, Fuqing

    2017-01-01

    Craniofacial registration is used to establish the point-to-point correspondence in a unified coordinate system among human craniofacial models. It is the foundation of craniofacial reconstruction and other craniofacial statistical analysis research. In this paper, a non-rigid 3D craniofacial registration method using thin-plate spline transform and cylindrical surface projection is proposed. First, the gradient descent optimization is utilized to improve a cylindrical surface fitting (CSF) for the reference craniofacial model. Second, the thin-plate spline transform (TPST) is applied to deform a target craniofacial model to the reference model. Finally, the cylindrical surface projection (CSP) is used to derive the point correspondence between the reference and deformed target models. To accelerate the procedure, the iterative closest point (ICP) algorithm is used to obtain a rough correspondence, which can provide a possible intersection area of the CSP. Finally, the inverse TPST is used to map the obtained corresponding points from the deformed target craniofacial model to the original model, and it can be realized directly by the correspondence between the original target model and the deformed target model. Three types of registration, namely, reflexive, involutive and transitive registration, are carried out to verify the effectiveness of the proposed craniofacial registration algorithm. Comparison with the methods in the literature shows that the proposed method is more accurate.

  1. 3D craniofacial registration using thin-plate spline transform and cylindrical surface projection.

    Directory of Open Access Journals (Sweden)

    Yucong Chen

    Craniofacial registration is used to establish the point-to-point correspondence in a unified coordinate system among human craniofacial models. It is the foundation of craniofacial reconstruction and other craniofacial statistical analysis research. In this paper, a non-rigid 3D craniofacial registration method using thin-plate spline transform and cylindrical surface projection is proposed. First, the gradient descent optimization is utilized to improve a cylindrical surface fitting (CSF) for the reference craniofacial model. Second, the thin-plate spline transform (TPST) is applied to deform a target craniofacial model to the reference model. Finally, the cylindrical surface projection (CSP) is used to derive the point correspondence between the reference and deformed target models. To accelerate the procedure, the iterative closest point (ICP) algorithm is used to obtain a rough correspondence, which can provide a possible intersection area of the CSP. Finally, the inverse TPST is used to map the obtained corresponding points from the deformed target craniofacial model to the original model, and it can be realized directly by the correspondence between the original target model and the deformed target model. Three types of registration, namely, reflexive, involutive and transitive registration, are carried out to verify the effectiveness of the proposed craniofacial registration algorithm. Comparison with the methods in the literature shows that the proposed method is more accurate.

  2. TU-AB-202-07: A Novel Method for Registration of Mid-Treatment PET/CT Images Under Conditions of Tumor Regression for Patients with Locally Advanced Lung Cancers

    Energy Technology Data Exchange (ETDEWEB)

    Sharifi, Hoda [Department of Radiation Oncology, Henry Ford Health System, Detroit, MI (United States); Department of Physics, Oakland University, Rochester, MI (United States); Zhang, Hong; Jin, Jian-Yyue; Kong, Feng-Ming [Department of Radiation Oncology, GRU Cancer Center, Augusta GA (United States); Chetty, Indrin J [Department of Radiation Oncology, Henry Ford Health System, Detroit, MI (United States); Zhong, Hualiang

    2016-06-15

    Purpose: In PET-guided adaptive radiotherapy (RT), changes in the metabolic activity at individual voxels cannot be derived until the during-treatment CT images are appropriately registered to pre-treatment CT images. However, deformable image registration (DIR) usually does not preserve tumor volume. This may induce errors when comparing to the target. The aim of this study was to develop a DIR-integrated mechanical modeling technique to track radiation-induced metabolic changes on PET images. Methods: Three patients with non-small cell lung cancer (NSCLC) were treated with adaptive radiotherapy under RTOG 1106. Two PET/CT image sets were acquired 2 weeks before RT and 18 fractions after the start of treatment. DIR was performed to register the during-RT CT to the pre-RT CT using a B-spline algorithm, and the resultant displacements in the region of the tumor were remodeled using a hybrid finite element method (FEM). Gross tumor volume (GTV) was delineated on the during-RT PET/CT image sets and deformed using the 3D deformation vector fields generated by the CT-based registrations. Metabolic tumor volume (MTV) was calculated using the pre- and during-RT image sets. The quality of the PET mapping was evaluated based on the constancy of the mapped MTV and landmark comparison. Results: The B-spline-based registrations changed MTVs by 7.3%, 4.6% and −5.9% for the 3 patients, and the corresponding changes for the hybrid FEM method were −2.9%, 1% and 6.3%, respectively. Landmark comparisons were used to evaluate the rigid, B-spline, and hybrid FEM registrations, with mean errors of 10.1 ± 1.6 mm, 4.4 ± 0.4 mm, and 3.6 ± 0.4 mm for the three patients. The hybrid FEM method outperforms the B-spline-only registration for patients with tumor regression. Conclusion: The hybrid FEM modeling technique improves the B-spline registrations in tumor regions. This technique may help compare metabolic activities between two PET/CT images with regressing tumors. The author gratefully

  3. Sparse modeling of spatial environmental variables associated with asthma.

    Science.gov (United States)

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from the sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured the spatial variation of asthma. Four sparse principal components identified via model selection consisted of food-at-home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter-occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. The MIDAS Touch: Mixed Data Sampling Regression Models

    OpenAIRE

    Ghysels, Eric; Santa-Clara, Pedro; Valkanov, Rossen

    2004-01-01

    We introduce Mixed Data Sampling (henceforth MIDAS) regression models. The regressions involve time series data sampled at different frequencies. Technically speaking, MIDAS models specify conditional expectations as a distributed lag of regressors recorded at some higher sampling frequencies. We examine the asymptotic properties of MIDAS regression estimation and compare it with traditional distributed lag models. MIDAS regressions have wide applicability in macroeconomics and finance.

  5. MHD stability analysis using higher order spline functions

    Energy Technology Data Exchange (ETDEWEB)

    Ida, Akihiro [Department of Energy Engineering and Science, Graduate School of Engineering, Nagoya University, Nagoya, Aichi (Japan); Todoroki, Jiro; Sanuki, Heiji

    1999-04-01

    The eigenvalue problem of the linearized magnetohydrodynamic (MHD) equation is formulated by using higher order spline functions as the base functions of a Ritz-Galerkin approximation. When the displacement vector normal to the magnetic surface (in the magnetic surface) is interpolated by B-spline functions of degree p1 (degree p2), which are continuously c1-times (c2-times) differentiable on neighboring finite elements, the sufficient conditions for a good approximation are given by p1 ≥ p2 + 1, c1 ≤ c2 + 1 (c1 ≥ 1, p2 ≥ c2 ≥ 0). The influence of the numerical integration upon the convergence of calculated eigenvalues is discussed. (author)

  6. Splines and polynomial tools for flatness-based constrained motion planning

    Science.gov (United States)

    Suryawan, Fajar; De Doná, José; Seron, María

    2012-08-01

    This article addresses the problem of trajectory planning for flat systems with constraints. Flat systems have the useful property that the input and the state can be completely characterised by the so-called flat output. We propose a spline parametrisation for the flat output, the performance output, the states and the inputs. Using this parametrisation the problem of constrained trajectory planning can be cast into a simple quadratic programming problem. An important result is that the B-spline parametrisation used gives exact results for constrained linear continuous-time systems. The result is exact in the sense that the constrained signal can be made arbitrarily close to the boundary without having intersampling issues (as one would have in sampled-data systems). Simulation examples are presented, involving the generation of rest-to-rest trajectories. In addition, an experimental result of the method is also presented, where two methods to generate trajectories for a magnetic-levitation (maglev) system in the presence of constraints are compared and each method's performance is discussed. The first method uses the nonlinear model of the plant, which turns out to belong to the class of flat systems. The second method uses a linearised version of the plant model around an operating point. In every case, a continuous-time description is used. The experimental results on a real maglev system reported here show that, in most scenarios, the nonlinear and linearised models produce almost indistinguishable trajectories.
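
    The quadratic-programming formulation can be sketched in a few lines: by the convex hull property of B-splines, bounding the coefficients bounds the whole curve, so constrained planning reduces to a QP in the coefficients. scipy's B-spline design matrix and cvxpy are assumptions of this sketch, not the authors' implementation.

    ```python
    import numpy as np
    import cvxpy as cp
    from scipy.interpolate import BSpline

    k, n_coef = 3, 12                               # cubic spline, 12 coefficients
    t = np.concatenate([np.zeros(k), np.linspace(0, 1, n_coef - k + 1), np.ones(k)])
    tau = np.linspace(0, 1, 200)
    B = BSpline.design_matrix(tau, t, k).toarray()  # basis evaluated on a grid

    D = np.diff(np.eye(n_coef), n=2, axis=0)        # 2nd differences: smoothness
    c = cp.Variable(n_coef)
    constraints = [B[0] @ c == 0.0,                 # rest-to-rest endpoints
                   B[-1] @ c == 1.0,
                   c >= -0.1, c <= 1.1]             # bounds the whole trajectory
    cp.Problem(cp.Minimize(cp.sum_squares(D @ c)), constraints).solve()
    trajectory = B @ c.value
    ```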

  7. Gradient-based optimization with B-splines on sparse grids for solving forward-dynamics simulations of three-dimensional, continuum-mechanical musculoskeletal system models.

    Science.gov (United States)

    Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O

    2018-05-01

    Investigating the interplay between muscular activity and motion is the basis for improving our understanding of healthy and diseased musculoskeletal systems. To be able to analyze musculoskeletal systems, computational models are used. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are essentially not considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles. Copyright © 2018 John Wiley & Sons, Ltd.

  8. Correlation studies for B-spline modeled F2 Chapman parameters obtained from FORMOSAT-3/COSMIC data

    Directory of Open Access Journals (Sweden)

    M. Limberger

    2014-12-01

    Full Text Available The determination of ionospheric key quantities such as the maximum electron density of the F2 layer NmF2, the corresponding F2 peak height hmF2 and the F2 scale height HF2 is of high relevance in 4-D ionosphere modeling to provide information on the vertical structure of the electron density (Ne). The Ne distribution with respect to height can, for instance, be modeled by the commonly accepted F2 Chapman layer. An adequate and observation-driven description of the vertical Ne variation can be obtained from electron density profiles (EDPs) derived by ionospheric radio occultation measurements between GPS and low Earth orbiter (LEO) satellites. For these purposes, the six FORMOSAT-3/COSMIC (F3/C) satellites provide an excellent opportunity to collect EDPs that cover most of the ionospheric region, in particular the F2 layer. For the contents of this paper, F3/C EDPs have been exploited to determine NmF2, hmF2 and HF2 within a regional modeling approach. As mathematical base functions, endpoint-interpolating polynomial B-splines are considered to model the key parameters with respect to longitude, latitude and time. The description of deterministic processes and the verification of this modeling approach have been published previously in Limberger et al. (2013), whereas this paper should be considered as an extension dealing with related correlation studies, a topic to which less attention has been paid in the literature. Relations between the B-spline series coefficients regarding specific key parameters, as well as dependencies between the three F2 Chapman key parameters, are the main focus. Dependencies are interpreted from the post-derived correlation matrices as a result of (1) a simulated scenario without data gaps, taking dense, homogeneously distributed profiles into account, and (2) two real data scenarios on 1 July 2008 and 1 July 2012 including sparsely, inhomogeneously distributed F3/C EDPs. Moderate correlations between hmF2 and HF2 as

  9. Application of Cubic Box Spline Wavelets in the Analysis of Signal Singularities

    Directory of Open Access Journals (Sweden)

    Rakowski Waldemar

    2015-12-01

    Full Text Available In the subject literature, wavelets such as the Mexican hat (the second derivative of a Gaussian or the quadratic box spline are commonly used for the task of singularity detection. The disadvantage of the Mexican hat, however, is its unlimited support; the disadvantage of the quadratic box spline is a phase shift introduced by the wavelet, making it difficult to locate singular points. The paper deals with the construction and properties of wavelets in the form of cubic box splines which have compact and short support and which do not introduce a phase shift. The digital filters associated with cubic box wavelets that are applied in implementing the discrete dyadic wavelet transform are defined. The filters and the algorithme à trous of the discrete dyadic wavelet transform are used in detecting signal singularities and in calculating the measures of signal singularities in the form of a Lipschitz exponent. The article presents examples illustrating the use of cubic box spline wavelets in the analysis of signal singularities.

  10. Introduction to the use of regression models in epidemiology.

    Science.gov (United States)

    Bender, Ralf

    2009-01-01

    Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs, so they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed depending on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrative examples from cancer research.
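
    The four model families named in the chapter, fitted on toy data for concreteness. statsmodels covers the first three; the lifelines package for Cox regression is an assumption of this sketch, as are all variable names.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2)
    n = 500
    df = pd.DataFrame({"exposure": rng.normal(size=n),
                       "age": rng.uniform(30, 70, size=n)})
    X = sm.add_constant(df)

    # Linear regression: continuous outcome.
    y_cont = 1 + 0.5 * df.exposure + 0.02 * df.age + rng.normal(size=n)
    print(sm.OLS(y_cont, X).fit().params)

    # Logistic regression: binary outcome.
    p = 1 / (1 + np.exp(2 - 0.8 * df.exposure))
    print(sm.Logit(rng.binomial(1, p), X).fit(disp=0).params)

    # Poisson regression: counts and rates.
    mu = np.exp(0.2 + 0.3 * df.exposure)
    print(sm.GLM(rng.poisson(mu), X, family=sm.families.Poisson()).fit().params)

    # Cox regression: time-to-event outcome.
    surv = df.copy()
    surv["T"] = rng.exponential(scale=np.exp(-0.5 * df.exposure))
    surv["E"] = rng.binomial(1, 0.8, size=n)        # 1 = event observed
    print(CoxPHFitter().fit(surv, duration_col="T", event_col="E").params_)
    ```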

  11. Splines and variational methods

    CERN Document Server

    Prenter, P M

    2008-01-01

    One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimension

  12. On-line quantile regression in the RKHS (Reproducing Kernel Hilbert Space) for operational probabilistic forecasting of wind power

    International Nuclear Information System (INIS)

    Gallego-Castillo, Cristobal; Bessa, Ricardo; Cavalcante, Laura; Lopez-Garcia, Oscar

    2016-01-01

    Wind power probabilistic forecasts are being used as input in several decision-making problems, such as stochastic unit commitment, operating reserve setting and electricity market bidding. This work introduces a new on-line quantile regression model based on the Reproducing Kernel Hilbert Space (RKHS) framework. Its application to the field of wind power forecasting involves a discussion on the choice of the bias term of the quantile models, and the consideration of the operational framework in order to mimic real conditions. Benchmarking against linear and spline quantile regression models was performed for a real case study over an 18-month period. Model parameter selection was based on k-fold cross-validation. Results showed a noticeable improvement in terms of calibration, a key criterion for the wind power industry. Modest improvements in terms of Continuous Ranked Probability Score (CRPS) were also observed for prediction horizons between 6 and 20 h ahead. - Highlights: • New on-line quantile regression model based on the Reproducing Kernel Hilbert Space. • First application to operational probabilistic wind power forecasting. • Modest improvements of CRPS for prediction horizons between 6 and 20 h ahead. • Noticeable improvements in terms of calibration due to on-line learning.
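
    A toy online RKHS quantile regression in the spirit of the paper: stochastic subgradient steps on the pinball loss with a Gaussian kernel (NORMA-style updates). The bias-term treatment and operational details discussed by the authors are omitted, and all constants are invented.

    ```python
    import numpy as np

    def k(a, b, gamma=20.0):
        return np.exp(-gamma * (a - b) ** 2)      # Gaussian kernel on scalars

    tau, eta, lam = 0.9, 0.1, 1e-3                # quantile, step size, ridge
    centers, alphas = [], []

    def f(x):                                     # current RKHS expansion
        return sum(a * k(c, x) for a, c in zip(alphas, centers))

    rng = np.random.default_rng(3)
    for _ in range(500):                          # simulated data stream
        x = rng.uniform()
        y = np.sin(2 * np.pi * x) + 0.2 * rng.normal()
        sub = tau - float(y - f(x) < 0)           # pinball-loss subgradient
        alphas = [a * (1 - eta * lam) for a in alphas]   # regularizer decay
        alphas.append(eta * sub)
        centers.append(x)

    print(f(0.25))    # estimated conditional 0.9-quantile at x = 0.25
    ```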

  13. Real estate value prediction using multivariate regression models

    Science.gov (United States)

    Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav

    2017-11-01

    The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on many factors; hence it is a prime field in which to apply machine learning concepts to optimize and predict prices with high accuracy. In this paper, we therefore present various important features for predicting housing prices with good accuracy. We describe regression models using various features to lower the residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (applying various powers of the features) is used to achieve a better model fit. Since these models are susceptible to overfitting, ridge regression is used to reduce it. This paper thus points to the best application of regression models, in addition to other techniques, to optimize the result.
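
    The overfitting/ridge point can be made concrete with a short scikit-learn sketch (synthetic data, invented constants): a degree-9 polynomial fit with and without ridge shrinkage, compared by cross-validated R².

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 3, size=(60, 1))
    y = 2 + X.ravel() ** 2 - X.ravel() + rng.normal(scale=2.0, size=60)

    for model in (LinearRegression(), Ridge(alpha=10.0)):
        pipe = make_pipeline(PolynomialFeatures(degree=9), StandardScaler(), model)
        score = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()
        print(type(model).__name__, round(score, 3))   # ridge degrades less
    ```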

  14. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia

    2018-04-10

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite the fact that this leads to a proper posterior for the regression coefficients, the resulting posterior variance is affected by an unidentifiable parameter, hence any inferential procedure beside point estimation is unreliable. We propose a model-based approach for quantile regression that considers quantiles of the generating distribution directly, and thus allows for a proper uncertainty quantification. We then create a link between quantile regression and generalised linear models by mapping the quantiles to the parameter of the response variable, and we exploit it to fit the model with R-INLA. We extend it also to the case of discrete responses, where there is no 1-to-1 relationship between quantiles and the distribution's parameter, by introducing continuous generalisations of the most common discrete variables (Poisson, Binomial and Negative Binomial) to be exploited in the fitting.

  15. Data approximation using a blending type spline construction

    International Nuclear Information System (INIS)

    Dalmo, Rune; Bratlie, Jostein

    2014-01-01

    Generalized expo-rational B-splines (GERBS) is a blending-type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences

  16. Finite nucleus Dirac mean field theory and random phase approximation using finite B splines

    International Nuclear Information System (INIS)

    McNeil, J.A.; Furnstahl, R.J.; Rost, E.; Shepard, J.R.; Department of Physics, University of Maryland, College Park, Maryland 20742; Department of Physics, University of Colorado, Boulder, Colorado 80309)

    1989-01-01

    We calculate the finite nucleus Dirac mean field spectrum in a Galerkin approach using finite basis splines. We review the method and present results for the relativistic σ-ω model for the closed-shell nuclei 16O and 40Ca. We study the convergence of the method as a function of the size of the basis and the closure properties of the spectrum using an energy-weighted dipole sum rule. We apply the method to the Dirac random-phase-approximation response and present results for the isoscalar 1^- and 3^- longitudinal form factors of 16O and 40Ca. We also use a B-spline spectral representation of the positive-energy projector to evaluate partial energy-weighted sum rules and compare with nonrelativistic sum rule results

  17. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

  18. Intensity-based hierarchical elastic registration using approximating splines.

    Science.gov (United States)

    Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C

    2014-01-01

    We introduce a new hierarchical approach for elastic medical image registration using approximating splines. In order to obtain the dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in the form of analytical solutions of the Navier equation, it can cope very well with both the local and global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed GEBS approximating model is integrated into the elastic hierarchical image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The approximating GEBS registration scheme incorporates anisotropic landmark errors as well as rotation information. The anisotropic landmark localization uncertainties can be estimated directly from the image data, and in this case, they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is transposed into an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model is applied to register 161 image pairs from a digital mammogram database. The obtained results are very encouraging: the proposed approach significantly improved all registrations in terms of mean-square error relative to approximating TPS with rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999). The average error per breast tissue pixel was less than 2.23 pixels compared to 2.46 pixels for Rueckert's method. The proposed hierarchical elastic image registration approach incorporates the GEBS

  19. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
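
    A worked numerical sketch of the recipe for logistic regression, with invented numbers: the two equivalent groups differ in the linear predictor by delta = slope × 2 × sd(x), here placed symmetrically around the overall rate (an approximation to the authors' event-count matching). statsmodels supplies standard two-sample power tools.

    ```python
    import numpy as np
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    beta, sd_x, p_bar = 0.4, 1.2, 0.3    # slope, sd of covariate, overall rate
    delta = beta * 2 * sd_x              # two-sample difference in log-odds

    lp = np.log(p_bar / (1 - p_bar))
    p1 = 1 / (1 + np.exp(-(lp - delta / 2)))
    p2 = 1 / (1 + np.exp(-(lp + delta / 2)))

    es = abs(proportion_effectsize(p1, p2))
    n1 = NormalIndPower().solve_power(effect_size=es, alpha=0.05,
                                      power=0.8, ratio=1.0)
    print(round(p1, 3), round(p2, 3), int(np.ceil(2 * n1)))  # total sample size
    ```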

  20. Spline-based automatic path generation of welding robot

    Institute of Scientific and Technical Information of China (English)

    Niu Xuejuan; Li Liangyu

    2007-01-01

    This paper presents a flexible method for the representation of a welded seam based on spline interpolation. In this method, the tool path of a welding robot can be generated automatically from a 3D CAD model. This technique has been implemented and demonstrated in the FANUC Arc Welding Robot Workstation. Based on the method, a software system was developed using the VBA of SolidWorks 2006. It offers an interface between SolidWorks and ROBOGUIDE, the off-line programming software of the FANUC robot, combining the strong modeling function of the former and the simulating function of the latter. It also has the capability of communicating with an on-line robot. Experimental results have shown its high accuracy and strong reliability. This method will improve the intelligence and the flexibility of the welding robot workstation.

  1. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  2. On the accurate fast evaluation of finite Fourier integrals using cubic splines

    International Nuclear Information System (INIS)

    Morishima, N.

    1993-01-01

    Finite Fourier integrals based on a cubic-spline fit to equidistant data are shown to be evaluated quickly and accurately. Good performance, especially in computational speed, is achieved by optimization of the spline fit and the internal use of the fast Fourier transform (FFT) algorithm for complex data. The present procedure provides high accuracy with much shorter CPU time than a trapezoidal FFT. (author)

  3. Regression Models For Multivariate Count Data.

    Science.gov (United States)

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2017-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.

  4. Nonlinear Analysis for the Crack Control of SMA Smart Concrete Beam Based on a Bidirectional B-Spline QR Method

    Directory of Open Access Journals (Sweden)

    Yan Li

    2018-01-01

    Full Text Available A bidirectional B-spline QR method (BB-sQRM) for the study of crack control in reinforced concrete (RC) beams embedded with shape memory alloy (SMA) wires is presented. In the proposed method, the discretization is performed with a set of spline nodes in two directions of the plane model, and structural displacement fields are constructed by the linear combination of the products of cubic B-spline interpolation functions. To derive the elastoplastic stiffness equation of the RC beam, an explicit form is utilized to express the elastoplastic constitutive law of concrete materials. The proposed model is compared with an ANSYS model in several numerical examples. The results not only show that the solutions given by the BB-sQRM are very close to those given by the finite element method (FEM) but also prove the high efficiency and low computational cost of the BB-sQRM. Meanwhile, five parameters, namely depth-span ratio, thickness of concrete cover, reinforcement ratio, prestrain, and eccentricity of SMA wires, are investigated to learn their effects on crack control. The results show that the depth-span ratio of RC beams and the prestrain and eccentricity of SMA wires have a significant influence on the control performance of beam cracks.

  5. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords: smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  7. Analytic regularization of uniform cubic B-spline deformation fields.

    Science.gov (United States)

    Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C

    2012-01-01

    Image registration is inherently ill-posed and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is the penalty of thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61 and 1371 times faster than a numerical central differencing solution.

  8. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. An empirical analysis of the US house price index data is illustrated for the proposed methodology.
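
    For orientation, a compact EM for the classical two-component mixture of linear regressions (numpy only, synthetic data); the paper's nonparametric version replaces these global fits with kernel-weighted local ones.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 400
    x = rng.uniform(-2, 2, size=n)
    z = rng.binomial(1, 0.5, size=n)                  # latent labels
    y = np.where(z == 1, 1 + 2 * x, -1 - x) + 0.3 * rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])

    pi = 0.5
    beta = np.array([[0.5, 1.0], [-0.5, -0.5]])
    sigma = np.array([1.0, 1.0])
    for _ in range(100):
        # E-step: responsibilities for component 0.
        def dens(j):
            r = y - X @ beta[j]
            return np.exp(-0.5 * (r / sigma[j]) ** 2) / sigma[j]
        w = pi * dens(0) / (pi * dens(0) + (1 - pi) * dens(1))
        # M-step: weighted least squares per component.
        for j, wj in enumerate([w, 1 - w]):
            beta[j] = np.linalg.solve(X.T @ (wj[:, None] * X), X.T @ (wj * y))
            sigma[j] = np.sqrt(np.sum(wj * (y - X @ beta[j]) ** 2) / wj.sum())
        pi = w.mean()
    print(pi, beta, sigma)      # recovers the two lines up to label order
    ```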

  9. B-spline solution of a singularly perturbed boundary value problem arising in biology

    International Nuclear Information System (INIS)

    Lin Bin; Li Kaitai; Cheng Zhengxing

    2009-01-01

    We use B-spline functions to develop a numerical method for solving a singularly perturbed boundary value problem arising in biology. We use the B-spline collocation method, which leads to a tridiagonal linear system. The accuracy of the proposed method is demonstrated on test problems. The numerical results are found to be in good agreement with the exact solution.

  10. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favored approaches is based on the neighbourhood of the regions. Unfortunately, it has some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to accommodate the issue. In this paper, we focus on spatial modeling with GPR for binomial data with a logit link function. The performance of the model is investigated. We discuss the inference, namely how to estimate the parameters and hyper-parameters, and how to predict as well. Furthermore, simulation studies are explained in the last section.
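
    A hedged stand-in using scikit-learn: GaussianProcessClassifier places a GP prior on the latent surface and uses a Laplace approximation with a logistic link, which mirrors the binomial-logit GPR setting described here (synthetic coordinates, invented kernel settings).

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(6)
    coords = rng.uniform(0, 1, size=(300, 2))          # spatial locations
    logit = 3 * np.sin(4 * coords[:, 0]) - 2 * coords[:, 1]
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.2))
    gpc.fit(coords, y)
    new_sites = np.array([[0.5, 0.5], [0.9, 0.1]])
    print(gpc.predict_proba(new_sites))                # predictions at new sites
    ```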

  11. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    Science.gov (United States)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  12. An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry

    Science.gov (United States)

    2015-12-01

    ARL-SR-0347, December 2015. US Army Research Laboratory special report: an investigation into conversion from non-uniform rational B-spline boundary representation geometry to constructive solid geometry.

  13. Regression and regression analysis time series prediction modeling on climate data of quetta, pakistan

    International Nuclear Information System (INIS)

    Jafri, Y.Z.; Kamal, L.

    2007-01-01

    Various statistical techniques were applied to five years of data (1998-2002) on average humidity, rainfall, and maximum and minimum temperatures. Relationships for regression analysis time series (RATS) were developed to determine the overall trend of these climate parameters, on the basis of which forecast models can be corrected and modified. We computed the coefficient of determination as a measure of goodness of fit for our polynomial regression analysis time series (PRATS). Correlations for multiple linear regression (MLR) and multiple linear regression analysis time series (MLRATS) were also developed for deciphering the interdependence of weather parameters. Spearman's rank correlation and the Goldfeld-Quandt test were used to check the uniformity or non-uniformity of variances in our fit to polynomial regression (PR). The Breusch-Pagan test was applied to MLR and MLRATS, respectively, which yielded homoscedasticity. We also employed Bartlett's test for homogeneity of variances on five years of rainfall and humidity data, which showed that the variances in the rainfall data were not homogeneous while those in the humidity data were homogeneous. Our results on regression and regression analysis time series show the best fit for prediction modeling on climatic data of Quetta, Pakistan. (author)

  14. Robust mislabel logistic regression without modeling mislabel probabilities.

    Science.gov (United States)

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.

  15. Mixed Frequency Data Sampling Regression Models: The R Package midasr

    Directory of Open Access Journals (Sweden)

    Eric Ghysels

    2016-08-01

    Full Text Available When modeling economic relationships it is increasingly common to encounter data sampled at different frequencies. We introduce the R package midasr which enables the estimation of regression models with variables sampled at different frequencies within a MIDAS regression framework put forward in work by Ghysels, Santa-Clara, and Valkanov (2002). In this article we define a general autoregressive MIDAS regression model with multiple variables of different frequencies and show how it can be specified using the familiar R formula interface and estimated using various optimization methods chosen by the researcher. We discuss how to check the validity of the estimated model both in terms of numerical convergence and statistical adequacy of a chosen regression specification, how to perform model selection based on an information criterion, how to assess the forecasting accuracy of the MIDAS regression model and how to obtain a forecast aggregation of different MIDAS regression models. We illustrate the capabilities of the package with a simulated MIDAS regression model and give two empirical examples of the application of MIDAS regression.

  16. B-Spline Approximations of the Gaussian, their Gabor Frame Properties, and Approximately Dual Frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Kim, Hong Oh; Kim, Rae Young

    2017-01-01

    We prove that Gabor systems generated by certain scaled B-splines can be considered as perturbations of the Gabor systems generated by the Gaussian, with a deviation within an arbitrary small tolerance whenever the order N of the B-spline is sufficiently large. As a consequence we show that for a...

  17. TPSLVM: a dimensionality reduction algorithm based on thin plate splines.

    Science.gov (United States)

    Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming

    2014-10-01

    Dimensionality reduction (DR) has been considered one of the most significant tools for data analysis. One type of DR algorithm is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named the thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), the proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. TPSLVM is also robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM), as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.

  18. Impact of multicollinearity on small sample hydrologic regression models

    Science.gov (United States)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
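
    The four strategies compared in the study can be sketched with standard tools (synthetic collinear data, not the authors' streamflow sets): VIF screening via statsmodels, PCR as a PCA-regression pipeline, and PLS from scikit-learn.

    ```python
    import numpy as np
    from statsmodels.stats.outliers_influence import variance_inflation_factor
    from sklearn.linear_model import LinearRegression
    from sklearn.decomposition import PCA
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(7)
    n = 40
    x1 = rng.normal(size=n)
    X = np.column_stack([x1, x1 + 0.05 * rng.normal(size=n),   # near-duplicates
                         rng.normal(size=n)])
    y = 1 + x1 + 0.5 * X[:, 2] + 0.5 * rng.normal(size=n)

    Xc = np.column_stack([np.ones(n), X])                      # add intercept
    vifs = [variance_inflation_factor(Xc, i) for i in range(1, Xc.shape[1])]
    print("VIFs:", np.round(vifs, 1))                          # large = collinear

    ols = LinearRegression().fit(X, y)
    pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)
    pls = PLSRegression(n_components=2).fit(X, y)
    print(ols.coef_, pcr.score(X, y), pls.score(X, y))
    ```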

  19. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculusRegression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a

  20. Testing homogeneity in Weibull-regression models.

    Science.gov (United States)

    Bolfarine, Heleno; Valença, Dione M

    2005-10-01

    In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model, and in the uncensored situation a closed form is obtained for the test statistic. A simulation study is used to compare the power of the tests. The proposed tests are applied to real data sets with censored data.

  1. Implementation of exterior complex scaling in B-splines to solve atomic and molecular collision problems

    International Nuclear Information System (INIS)

    McCurdy, C William; MartIn, Fernando

    2004-01-01

    B-spline methods are now well established as widely applicable tools for the evaluation of atomic and molecular continuum states. The mathematical technique of exterior complex scaling has been shown, in a variety of other implementations, to be a powerful method with which to solve atomic and molecular scattering problems, because it allows the correct imposition of continuum boundary conditions without their explicit analytic application. In this paper, an implementation of exterior complex scaling in B-splines is described that can bring the well-developed technology of B-splines to bear on new problems, including multiple ionization and breakup problems, in a straightforward way. The approach is demonstrated for examples involving the continuum motion of nuclei in diatomic molecules as well as electronic continua. For problems involving electrons, a method based on Poisson's equation is presented for computing two-electron integrals over B-splines under exterior complex scaling

  2. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia; Rue, Haavard

    2018-01-01

    Quantile regression is a class of methods voted to the modelling of conditional quantiles. In a Bayesian framework quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite

  3. Detection of epistatic effects with logic regression and a classical linear regression model.

    Science.gov (United States)

    Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata

    2014-02-01

    To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes which cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, the Cockerham approach is often not capable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though a larger number of models has to be considered with the logic regression approach (requiring more stringent multiple testing correction), the efficient representation of higher-order logic interactions in logic regression models leads to a significant increase in power to detect such interactions compared to the Cockerham approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and real data analysis.

  4. Thin-plate spline analysis of mandibular growth.

    Science.gov (United States)

    Franchi, L; Baccetti, T; McNamara, J A

    2001-04-01

    The analysis of mandibular growth changes around the pubertal spurt in humans has several important implications for the diagnosis and orthopedic correction of skeletal disharmonies. The purpose of this study was to evaluate mandibular shape and size growth changes around the pubertal spurt in a longitudinal sample of subjects with normal occlusion by means of an appropriate morphometric technique (thin-plate spline analysis). Ten mandibular landmarks were identified on lateral cephalograms of 29 subjects at 6 different developmental phases. The 6 phases corresponded to 6 different maturational stages in cervical vertebrae during accelerative and decelerative phases of the pubertal growth curve of the mandible. Differences in shape between average mandibular configurations at the 6 developmental stages were visualized by means of thin-plate spline analysis and subjected to permutation test. Centroid size was used as the measure of the geometric size of each mandibular specimen. Differences in size at the 6 developmental phases were tested statistically. The results of graphical analysis indicated a statistically significant change in mandibular shape only for the growth interval from stage 3 to stage 4 in cervical vertebral maturation. Significant increases in centroid size were found at all developmental phases, with evidence of a prepubertal minimum and of a pubertal maximum. The existence of a pubertal peak in human mandibular growth, therefore, is confirmed by thin-plate spline analysis. Significant morphological changes in the mandible during the growth interval from stage 3 to stage 4 in cervical vertebral maturation may be described as an upward-forward direction of condylar growth determining an overall "shrinkage" of the mandibular configuration along the measurement of total mandibular length. This biological mechanism is particularly efficient in compensating for major increments in mandibular size at the adolescent spurt.
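
    The thin-plate spline machinery itself is compact: one TPS per coordinate, interpolating a source landmark configuration onto a target one. scipy's RBFInterpolator with the thin_plate_spline kernel is used here, and the landmark values are made up, not cephalometric data.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
    dst = src + np.array([[0, 0], [0, 0], [0.1, 0.15], [0, 0], [0.05, 0.1]])

    # Exact TPS interpolation of the landmarks (smoothing=0 by default).
    warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")

    grid = np.stack(np.meshgrid(np.linspace(0, 1, 5),
                                np.linspace(0, 1, 5)), axis=-1).reshape(-1, 2)
    print(warp(grid))     # where a regular grid lands under the deformation
    ```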

  5. Prostate multimodality image registration based on B-splines and quadrature local energy.

    Science.gov (United States)

    Mitra, Jhimli; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C; Meriaudeau, Fabrice

    2012-05-01

    Needle biopsy of the prostate is guided by Transrectal Ultrasound (TRUS) imaging. The TRUS images do not provide proper spatial localization of malignant tissues due to the poor sensitivity of TRUS to visualize early malignancy. Magnetic Resonance Imaging (MRI) has been shown to be sensitive for the detection of early stage malignancy, and therefore, a novel 2D deformable registration method that overlays pre-biopsy MRI onto TRUS images has been proposed. The registration method involves B-spline deformations with Normalized Mutual Information (NMI) as the similarity measure, computed from the texture images obtained from the amplitude responses of the directional quadrature filter pairs. Registration accuracy of the proposed method is evaluated by computing the Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD) values for the prostate mid-gland slices of 20 patients, and the Target Registration Error (TRE) for the 18 patients in whom homologous structures are visible in both the TRUS and transformed MR images. The proposed method and B-splines using NMI computed from intensities provide average TRE values of 2.64 ± 1.37 and 4.43 ± 2.77 mm respectively. Our method shows statistically significant improvement in TRE when compared with B-splines using NMI computed from intensities (Student's t test, p = 0.02). The proposed method shows a 1.18 times improvement over thin-plate spline registration, with an average TRE of 3.11 ± 2.18 mm. The mean DSC and mean 95% HD values obtained with the proposed method of B-splines with NMI computed from texture are 0.943 ± 0.039 and 4.75 ± 2.40 mm respectively. The texture energy computed from the quadrature filter pairs provides better registration accuracy for multimodal images than raw intensities. Low TRE values of the proposed registration method add to the feasibility of it being used during TRUS-guided biopsy.
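
    The NMI similarity at the heart of the method reduces to marginal and joint histogram entropies; a minimal version follows (made-up intensity arrays, Studholme's definition, which ranges from 1 for independent images to 2 for identical ones).

    ```python
    import numpy as np

    def nmi(a, b, bins=32):
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))
        return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

    rng = np.random.default_rng(8)
    img = rng.uniform(size=(64, 64))
    print(nmi(img, img))                            # identical: close to 2
    print(nmi(img, rng.uniform(size=(64, 64))))     # unrelated: close to 1
    ```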

  6. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    OpenAIRE

    He, Shanshan; Ou, Daojiang; Yan, Changya; Lee, Chen-Han

    2015-01-01

    Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortages such as numerical...

  7. AN APPLICATION OF FUNCTIONAL MULTIVARIATE REGRESSION MODEL TO MULTICLASS CLASSIFICATION

    OpenAIRE

    Krzyśko, Mirosław; Smaga, Łukasz

    2017-01-01

    In this paper, the scalar response functional multivariate regression model is considered. By using the basis function representation of functional predictors and regression coefficients, this model is rewritten as a multivariate regression model. This representation of the functional multivariate regression model is used for multiclass classification of multivariate functional data. Computational experiments performed on real labelled data sets demonstrate the effectiveness of the proposed ...

  8. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent and dependent variables. In a logistic regression model the dependent variable is categorical, and the model is used to calculate odds on these categories; when the dependent variable has ordered levels, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed to determine population values based on a sample. The purpose of this research is parameter estimation of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The results give a local GWOLR model for each village and the probabilities of the dengue fever patient count categories.
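
    For reference, a plain (non-geographic) ordinal logistic fit with statsmodels on synthetic stand-in data; GWOLR would refit such a model at each location with kernel weights on nearby villages, which is not shown here.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(9)
    n = 144                                     # one row per village
    X = pd.DataFrame({"density": rng.normal(size=n),
                      "rainfall": rng.normal(size=n)})
    latent = 0.8 * X.density + 0.4 * X.rainfall + rng.logistic(size=n)
    y = pd.cut(latent, bins=[-np.inf, -1, 1, np.inf],
               labels=["low", "medium", "high"], ordered=True)

    res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=0)
    print(res.summary())
    ```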

  9. MRI non-uniformity correction through interleaved bias estimation and B-spline deformation with a template.

    Science.gov (United States)

    Fletcher, E; Carmichael, O; Decarli, C

    2012-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.

  11. Spline methods for conservation equations

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.

    1991-01-01

    We consider the numerical solution of physical theories, in particular hydrodynamics, which can be formulated as systems of conservation laws. To this end we briefly describe the Basis Spline and collocation methods, paying particular attention to representation theory, which provides discrete analogues of the continuum conservation and dispersion relations, and hence a rigorous understanding of errors and instabilities. On this foundation we propose an algorithm for hydrodynamic problems in which most linear and nonlinear instabilities are brought under control. Numerical examples are presented from one-dimensional relativistic hydrodynamics. 9 refs., 10 figs

  12. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    Science.gov (United States)

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which will influence the registration results. Localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, an automatic method to extract the landmarks. Combining these two steps, we propose an automatic, exact and robust registration method and obtain satisfactory registration results.

  13. Alternative regression models to assess increase in childhood BMI.

    Science.gov (United States)

    Beyerlein, Andreas; Fahrmeir, Ludwig; Mansmann, Ulrich; Toschke, André M

    2008-09-08

    Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Different regression approaches to predict childhood BMI by goodness-of-fit measures and means of interpretation were compared including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 children participating in the school entry health examination in Bavaria, Germany, from 2001 to 2002. TV watching, meal frequency, breastfeeding, smoking in pregnancy, maternal obesity, parental social class and weight gain in the first 2 years of life were considered as risk factors for obesity. GAMLSS showed a much better fit regarding the estimation of risk factors effects on transformed and untransformed BMI data than common GLMs with respect to the generalized Akaike information criterion. In comparison with GAMLSS, quantile regression allowed for additional interpretation of prespecified distribution quantiles, such as quantiles referring to overweight or obesity. The variables TV watching, maternal BMI and weight gain in the first 2 years were directly, and meal frequency was inversely significantly associated with body composition in any model type examined. In contrast, smoking in pregnancy was not directly, and breastfeeding and parental social class were not inversely significantly associated with body composition in GLM models, but in GAMLSS and partly in quantile regression models. Risk factor specific BMI percentile curves could be estimated from GAMLSS and quantile regression models. GAMLSS and quantile regression seem to be more appropriate than common GLMs for risk factor modeling of BMI data.
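
    Quantile regression on a prespecified upper quantile is a one-liner with statsmodels; the variable names and the skewed toy data below are invented, not the Bavarian cohort.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(10)
    n = 1000
    df = pd.DataFrame({"tv_hours": rng.uniform(0, 4, size=n),
                       "maternal_bmi": rng.normal(25, 4, size=n)})
    skew = rng.gamma(shape=2.0, scale=1.0, size=n)     # right-skewed, BMI-like
    df["bmi"] = 14 + 0.4 * df.tv_hours + 0.1 * df.maternal_bmi + skew

    mod = smf.quantreg("bmi ~ tv_hours + maternal_bmi", df)
    for q in (0.5, 0.9, 0.97):                         # median ... obesity tail
        print(q, round(mod.fit(q=q).params["tv_hours"], 3))
    ```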

  14. A History of Regression and Related Model-Fitting in the Earth Sciences (1636?-2000)

    International Nuclear Information System (INIS)

    Howarth, Richard J.

    2001-01-01

    roots in meeting the evident need for improved estimators in spatial interpolation. Technical advances in regression analysis during the 1970s embraced the development of regression diagnostics and consequent attention to outliers; the recognition of problems caused by correlated predictors, and the subsequent introduction of ridge regression to overcome them; and techniques for fitting errors-in-variables and mixture models. Improvements in computational power have enabled ever more computer-intensive methods to be applied. These include algorithms which are robust in the presence of outliers, for example Rousseeuw's 1984 Least Median of Squares; nonparametric smoothing methods, such as kernel-functions, splines and Cleveland's 1979 LOcally WEighted Scatterplot Smoother (LOWESS); and the Classification and Regression Tree (CART) technique of Breiman and others in 1984. Despite a continuing improvement in the rate of technology-transfer from the statistical to the earth-science community, despite an abrupt drop to a time-lag of about 10 years following the introduction of digital computers, these more recent developments are only just beginning to penetrate beyond the research community of earth scientists. Examples of applications to problem-solving in the earth sciences are given

  15. Alternative regression models to assess increase in childhood BMI

    OpenAIRE

    Beyerlein, Andreas; Fahrmeir, Ludwig; Mansmann, Ulrich; Toschke, André M

    2008-01-01

    Abstract Background Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Methods Different regression approaches to predict childhood BMI by goodness-of-fit measures and means of interpretation were compared including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 childre...

  16. PySpline: A Modern, Cross-Platform Program for the Processing of Raw Averaged XAS Edge and EXAFS Data

    International Nuclear Information System (INIS)

    Tenderholt, Adam; Hedman, Britt; Hodgson, Keith O.

    2007-01-01

    PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k³-weighted EXAFS data.

  17. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, which is based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression model can then be inverted to infer the intrinsic state associated with a superficial state observed from a power plant. The diagnosis method proposed herein consists of three processes: 1) simulations of degradation conditions to obtain measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state, that is, a component degradation mode, using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs including various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse matrix for the given superficial states. The method suggested in this paper was validated using the turbine cycle model for an operating power plant.
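
    A minimal numerical sketch of this forward/inverse pairing under a linear model y ≈ Ax, where x collects intrinsic degradation states and y the measured (superficial) states. The matrix and noise levels below are illustrative stand-ins, not plant data.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, size=(50, 3))          # simulated degradation inputs
        A_true = np.array([[2.0, 0.5, 0.1],
                           [0.3, 1.5, 0.2],
                           [0.1, 0.2, 1.1],
                           [0.7, 0.4, 0.9]])
        Y = X @ A_true.T + rng.normal(0, 0.01, (50, 4))  # "what-if" simulation outputs

        # 1) fit the linear regression Y = X B by least squares (B estimates A^T)
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)

        # 2) inverse what-if: recover the intrinsic state from observed outputs
        y_obs = Y[0]
        x_recovered = np.linalg.pinv(B.T) @ y_obs    # B.T maps x -> y
        print(x_recovered, X[0])                     # should nearly match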

  18. Digital elevation model production from scanned topographic contour maps via thin plate spline interpolation

    International Nuclear Information System (INIS)

    Soycan, Arzu; Soycan, Metin

    2009-01-01

    GIS (Geographical Information System) is one of the most striking innovations supplied to users by developing computer and software technology for mapping applications. GIS is a very effective tool which can visually combine geographical and non-geographical data and record them to allow interpretation and analysis. The DEM (Digital Elevation Model) is an inalienable component of GIS. An existing TM (Topographic Map) can be used as the main data source for generating a DEM by a manual digitizing or vectorization process for the contour polylines. The aim of this study is to examine DEM accuracies obtained from TMs, depending on the number of sampling points and the grid size. For these purposes, the contours of several 1/1000 scaled scanned topographical maps were vectorized. Different DEMs of the relevant area were created using several datasets with different numbers of sampling points. We focused on DEM creation from contour lines using gridding with RBF (Radial Basis Function) interpolation techniques, namely TPS (Thin Plate Spline) as the surface fitting model. The solution algorithm and a short review of the mathematical model of TPS interpolation are given. In the test study, results of the application and the obtained accuracies are presented and discussed. The initial object of this research is to discuss the requirement of high-accuracy DEMs (a few decimeters) in GIS, urban planning, surveying engineering and other applications. (author)
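
    A sketch of the gridding step with a thin plate spline RBF, in the spirit of the paper. It uses SciPy's RBFInterpolator (SciPy >= 1.7); the scattered contour vertices below are synthetic stand-ins for digitized contour points.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(9)
        xy = rng.uniform(0, 1000, (800, 2))                        # digitized contour vertices (synthetic)
        z = 100 + 0.05 * xy[:, 0] + 10 * np.sin(xy[:, 1] / 150.0)  # stand-in elevations

        # neighbors=64 keeps the solve tractable for large point sets
        tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", neighbors=64)

        # Evaluate on a regular grid to obtain the DEM
        gx, gy = np.meshgrid(np.linspace(0, 1000, 200), np.linspace(0, 1000, 200))
        dem = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)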

  19. Influence of smoothing of X-ray spectra on parameters of calibration model

    International Nuclear Information System (INIS)

    Antoniak, W.; Urbanski, P.; Kowalska, E.

    1998-01-01

    Parameters of the calibration model before and after smoothing of X-ray spectra have been investigated. The calibration model was calculated using a multivariate procedure, namely partial least squares regression (PLS). The investigations were performed on six sets of various standards used for calibration of instruments based on the X-ray fluorescence principle. Three smoothing methods were compared: regression splines, Savitzky-Golay filtering and the Discrete Fourier Transform. The calculations were performed using the software package MATLAB and some home-made programs. (author)

  20. Spline smoothing of histograms by linear programming

    Science.gov (United States)

    Bennett, J. O.

    1972-01-01

    An algorithm is presented for constructing an approximating function to the frequency distribution underlying a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean-space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.
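
    A sketch of this linear-programming idea: fit a nonnegative, unit-area B-spline to histogram heights by minimizing the sum of absolute deviations, which is expressible as an LP. This is a reconstruction under stated assumptions, not the paper's 1972 algorithm; it uses SciPy's HiGHS-based linprog and BSpline.design_matrix (SciPy >= 1.8), and the sample is synthetic.

        import numpy as np
        from scipy.interpolate import BSpline
        from scipy.optimize import linprog

        rng = np.random.default_rng(1)
        sample = rng.normal(size=500)
        heights, edges = np.histogram(sample, bins=20, density=True)
        x = 0.5 * (edges[:-1] + edges[1:])              # bin centers

        k = 3                                           # cubic B-splines
        t = np.r_[[edges[0]] * k, np.linspace(edges[0], edges[-1], 8), [edges[-1]] * k]
        B = BSpline.design_matrix(x, t, k).toarray()    # bins x coefficients
        n_coef, n_bins = B.shape[1], len(x)

        # Variables: spline coefficients c >= 0 and slacks e >= |B c - heights|;
        # minimize sum(e) subject to the unit-area equality constraint.
        area = (t[k + 1:] - t[:-k - 1]) / (k + 1)       # integral of each basis function
        obj = np.r_[np.zeros(n_coef), np.ones(n_bins)]
        A_ub = np.block([[B, -np.eye(n_bins)], [-B, -np.eye(n_bins)]])
        b_ub = np.r_[heights, -heights]
        A_eq = np.r_[area, np.zeros(n_bins)][None, :]
        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (n_coef + n_bins))
        density = BSpline(t, res.x[:n_coef], k)         # nonnegative, area ~ 1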

  1. Random regression models for detection of gene by environment interaction

    Directory of Open Access Journals (Sweden)

    Meuwissen Theo HE

    2007-02-01

    Full Text Available Abstract Two random regression models, in which the effect of a putative QTL was regressed on an environmental gradient, are described. The first model estimates the correlation between the intercept and slope of the random regression, while the other restricts this correlation to 1 or -1, which is expected under a bi-allelic QTL model. The random regression models were compared to a model assuming no gene by environment interactions. The comparison considered the models' ability to detect QTL, to position them accurately and to detect possible QTL by environment interactions. A simulation study based on a granddaughter design was conducted, and QTL effects were assigned either independently of the environment or as a linear function of a simulated environmental gradient. It was concluded that the random regression models were suitable for detection of QTL effects, in the presence and absence of interactions with environmental gradients. Fixing the correlation between intercept and slope of the random regression had a positive effect on power when the QTL effects re-ranked between environments.

  2. C2-rational cubic spline involving tension parameters

    Indian Academy of Sciences (India)

    preferred which preserves some of the characteristics of the function to be interpolated. In order to tackle such ... Shape preserving properties of the rational (cubic/quadratic) spline interpolant have been studied ... tension parameters which is used to interpolate the given monotonic data is described in. [6]. Shape preserving ...

  3. Alternative regression models to assess increase in childhood BMI

    Directory of Open Access Journals (Sweden)

    Mansmann Ulrich

    2008-09-01

    Full Text Available Abstract Background Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Methods Different regression approaches to predict childhood BMI by goodness-of-fit measures and means of interpretation were compared including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 children participating in the school entry health examination in Bavaria, Germany, from 2001 to 2002. TV watching, meal frequency, breastfeeding, smoking in pregnancy, maternal obesity, parental social class and weight gain in the first 2 years of life were considered as risk factors for obesity. Results GAMLSS showed a much better fit regarding the estimation of risk factor effects on transformed and untransformed BMI data than common GLMs with respect to the generalized Akaike information criterion. In comparison with GAMLSS, quantile regression allowed for additional interpretation of prespecified distribution quantiles, such as quantiles referring to overweight or obesity. The variables TV watching, maternal BMI and weight gain in the first 2 years were directly, and meal frequency was inversely, significantly associated with body composition in every model type examined. In contrast, smoking in pregnancy was not directly, and breastfeeding and parental social class were not inversely, significantly associated with body composition in the GLMs, but they were in GAMLSS and partly in quantile regression models. Risk factor specific BMI percentile curves could be estimated from GAMLSS and quantile regression models. Conclusion GAMLSS and quantile regression seem to be more appropriate than common GLMs for risk factor modeling of BMI data.

  4. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  5. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    Science.gov (United States)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  6. Wavelet regression model in forecasting crude oil price

    Science.gov (United States)

    Hamid, Mohd Helmie; Shabri, Ani

    2017-05-01

    This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series was used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedasticity (GARCH) models using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, the WMLR model appears to perform better than the other forecasting techniques tested in this study.

  7. Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data

    Science.gov (United States)

    Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej

    2016-04-01

    GNSS-derived regional ionosphere models are widely used in precise positioning and in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is the closed-form solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between the data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolution - 0.2x0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared with earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals to calibrated carrier phase geometry-free observational arcs at selected test stations. The results show very good performance of the UWM-rt1 model. The post-fit residuals obtained with the UWM maps are lower by one order of magnitude than with the IGS maps. The accuracy of UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be translated directly to the user positioning domain.
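
    The variational trade-off described here, misfit against integrated squared second derivatives, corresponds to the smoothing parameter of a thin plate spline fit. A tiny sketch with SciPy's RBFInterpolator on synthetic TEC values (the coordinates and the smoothing weight are illustrative, not UWM-rt1 settings):

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(2)
        lon, lat = rng.uniform(0, 30, 300), rng.uniform(40, 60, 300)
        tec = 10 + 0.3 * lat + rng.normal(0, 0.5, 300)       # synthetic TEC, TECU

        # smoothing > 0 trades fidelity to the data for surface smoothness
        tps = RBFInterpolator(np.column_stack([lon, lat]), tec,
                              kernel="thin_plate_spline", smoothing=1e-2)
        glon, glat = np.meshgrid(np.arange(0, 30, 0.2), np.arange(40, 60, 0.2))
        tec_map = tps(np.column_stack([glon.ravel(), glat.ravel()])).reshape(glon.shape)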

  8. Spline function fit for multi-sets of correlative data

    International Nuclear Information System (INIS)

    Liu Tingjin; Zhou Hongmo

    1992-01-01

    A spline fit method for multiple sets of correlative data is developed, and the properties of correlative data fits are investigated. The 23Na(n,2n) cross-section data are fitted in the cases with and without correlation.

  9. A fractional spline collocation-Galerkin method for the time-fractional diffusion equation

    Directory of Open Access Journals (Sweden)

    Pezza L.

    2018-03-01

    Full Text Available The aim of this paper is to numerically solve a diffusion differential problem having a time derivative of fractional order. To this end, we propose a collocation-Galerkin method that uses fractional splines as approximating functions. The main advantage is that the derivatives of integer and fractional order of the fractional splines can be expressed in a closed form that involves just the generalized finite difference operator. This allows us to construct an accurate and efficient numerical method. Several numerical tests showing the effectiveness of the proposed method are presented.

  10. Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Dong [KyungPook National Univ., Taegu (Korea, Republic of)]

    1996-12-31

    In this talk we discuss finite element and finite difference techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).

  11. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  12. Spatial stochastic regression modelling of urban land use

    International Nuclear Information System (INIS)

    Arshad, S H M; Jaafar, J; Abiden, M Z Z; Latif, Z A; Rasam, A R A

    2014-01-01

    Urbanization is very closely linked to industrialization, commercialization and overall economic growth and development. It brings innumerable benefits in the quantity and quality of the urban environment and lifestyle, but on the other hand contributes to unbounded development, urban sprawl, overcrowding and a decreasing standard of living. Regulation and observation of urban development activities is crucial. Understanding the urban systems that promote urban growth is also essential for policy making, formulating development strategies and preparing development plans. This study compares two different stochastic regression modeling techniques for spatial structure models of urban growth in the same study area. Both techniques utilize the same datasets, and their results are analyzed. The work starts by producing urban growth models using two stochastic regression techniques, namely Ordinary Least Squares (OLS) and Geographically Weighted Regression (GWR). Comparing the two techniques, GWR turns out to be the more significant stochastic regression model: it gives a smaller AICc (corrected Akaike Information Criterion) value and its output is more spatially explainable.

  13. Adaptive regression modeling of biomarkers of potential harm in a population of U.S. adult cigarette smokers and nonsmokers

    Directory of Open Access Journals (Sweden)

    Mendes Paul E

    2010-03-01

    Full Text Available Abstract Background This article describes the data mining analysis of a clinical exposure study of 3585 adult smokers and 1077 nonsmokers. The analysis focused on developing models for four biomarkers of potential harm (BOPH): white blood cell count (WBC), 24 h urine 8-epi-prostaglandin F2α (EPI8), 24 h urine 11-dehydro-thromboxane B2 (DEH11), and high-density lipoprotein cholesterol (HDL). Methods Random Forest was used for initial variable selection and Multivariate Adaptive Regression Splines were used for developing the final statistical models. Results The analysis resulted in the generation of models that predict each of the BOPH as a function of selected variables from the smokers and nonsmokers. The statistically significant variables in the models were: platelet count, hemoglobin, C-reactive protein, triglycerides, race and biomarkers of exposure to cigarette smoke for WBC (R-squared = 0.29); creatinine clearance, liver enzymes, weight, vitamin use and biomarkers of exposure for EPI8 (R-squared = 0.41); creatinine clearance, urine creatinine excretion, liver enzymes, use of non-steroidal anti-inflammatory drugs, vitamins and biomarkers of exposure for DEH11 (R-squared = 0.29); and triglycerides, weight, age, sex, alcohol consumption and biomarkers of exposure for HDL (R-squared = 0.39). Conclusions Levels of WBC, EPI8, DEH11 and HDL were statistically associated with biomarkers of exposure to cigarette smoking and with demographic and lifestyle factors. All of the predictors together explain 29%-41% of the variability in the BOPH.
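
    A sketch of the two-stage pipeline described here: Random Forest importances to screen variables, then a MARS fit on the retained set. MARS is not in scikit-learn, so the sketch assumes the third-party py-earth package is available; the data are synthetic placeholders.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from pyearth import Earth   # assumed: pip install sklearn-contrib-py-earth

        rng = np.random.default_rng(3)
        X = rng.normal(size=(500, 20))                     # stand-in predictors
        y = X[:, 0] ** 2 + X[:, 3] + rng.normal(0, 0.1, 500)

        # Stage 1: Random Forest screening by feature importance
        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
        keep = np.argsort(rf.feature_importances_)[::-1][:5]

        # Stage 2: MARS on the retained variables
        mars = Earth(max_degree=2).fit(X[:, keep], y)
        print(mars.summary())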

  14. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    Science.gov (United States)

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-01

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% are possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  15. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models is introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)

  16. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the questions of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures and evaluating the performance of the prediction model. Development of the logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and second, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
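
    A minimal sketch of the workflow the paper walks through: fit a logistic growth model, report odds ratios, and evaluate it with a classification table and ROC/AUC. The file and ratio names (high_growth, liquidity, leverage, profitability) are hypothetical placeholders.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from sklearn.metrics import confusion_matrix, roc_auc_score

        df = pd.read_csv("sme_financials.csv")       # hypothetical data set
        fit = smf.logit("high_growth ~ liquidity + leverage + profitability",
                        df).fit(disp=0)

        print(np.exp(fit.params))                    # odds ratios as effect measures
        prob = fit.predict(df)
        pred = (prob >= 0.5).astype(int)             # classification table at cutoff 0.5
        print(confusion_matrix(df["high_growth"], pred))
        print(roc_auc_score(df["high_growth"], prob))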

  17. The basis spline method and associated techniques

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.

    1989-01-01

    We outline the Basis Spline and Collocation methods for the solution of Partial Differential Equations. Particular attention is paid to the theory of errors, and the handling of non-self-adjoint problems which are generated by the collocation method. We discuss applications to Poisson's equation, the Dirac equation, and the calculation of bound and continuum states of atomic and nuclear systems. 12 refs., 6 figs

  18. Regression Models for Repairable Systems

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2015-01-01

    Roč. 17, č. 4 (2015), s. 963-972 ISSN 1387-5841 Institutional support: RVO:67985556 Keywords : Reliability analysis * Repair models * Regression Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.782, year: 2015 http://library.utia.cas.cz/separaty/2015/SI/novak-0450902.pdf

  19. Iteratively re-weighted bi-cubic spline representation of corneal topography and its comparison to the standard methods.

    Science.gov (United States)

    Zhu, Zhongxia; Janunts, Edgar; Eppig, Timo; Sauer, Tomas; Langenbucher, Achim

    2010-01-01

    The aim of this study is to represent the corneal anterior surface, using radius and height data extracted from a TMS-2N topographic system, with three different mathematical approaches, and to simulate the visual performance. An iteratively re-weighted bi-cubic spline method is introduced for the local representation of the corneal surface. For comparison, two standard global representation approaches are used: the general quadratic function and the higher-order Taylor polynomial approach. First, these methods were applied in simulations using three corneal models. Then, two real eyes were investigated: one with regular astigmatism and one which had undergone refractive surgery. A ray-tracing program was developed to evaluate the imaging performance of these examples with each surface representation strategy at the best focus plane. A 6 mm pupil size was chosen for the simulation. The fitting errors (deviations) of the presented methods were compared. The accuracy of the topography representation was worst with the quadratic function, which cannot precisely describe the irregular corneal shape, and best with the bi-cubic spline. To achieve sub-micron fitting precision, the order selection of the Taylor polynomial must adapt to the corneal shape, whereas the bi-cubic spline shows more stable performance. Considering the visual performance, the more precise the corneal representation, the worse the simulated visual performance. The re-weighted bi-cubic spline method is a reasonable and stable method for representing the anterior corneal surface in measurements using a Placido-ring-pattern-based corneal topographer. Copyright © 2010. Published by Elsevier GmbH.

  20. 2-Dimensional B-Spline Algorithms with Applications to Ray Tracing in Media of Spatially-Varying Refractive Index

    Science.gov (United States)

    2007-08-01

    In this approach, photon trajectories in a multi-layer biological tissue model are computed using a solution of the Eikonal equation (ray-tracing methods) rather than linear trajectories, which allows the radiative transport solution to be coupled into heat transfer and damage models. Subject terms: B-Splines, Ray-Tracing, Eikonal Equation.

  1. A Bayesian-optimized spline representation of the electrocardiogram

    International Nuclear Information System (INIS)

    Guilak, F G; McNames, J

    2013-01-01

    We introduce an implementation of a novel spline framework for parametrically representing electrocardiogram (ECG) waveforms. This implementation enables a flexible means to study ECG structure in large databases. Our algorithm allows researchers to identify key points in the waveform and optimally locate them in long-term recordings with minimal manual effort, thereby permitting analysis of trends in the points themselves or in metrics derived from their locations. In the work described here we estimate the location of a number of commonly-used characteristic points of the ECG signal, defined as the onsets, peaks, and offsets of the P, QRS, T, and R′ waves. The algorithm applies Bayesian optimization to a linear spline representation of the ECG waveform. The location of the knots—which are the endpoints of the piecewise linear segments used in the spline representation of the signal—serve as the estimate of the waveform’s characteristic points. We obtained prior information of knot times, amplitudes, and curvature from a large manually-annotated training dataset and used the priors to optimize a Bayesian figure of merit based on estimated knot locations. In cases where morphologies vary or are subject to noise, the algorithm relies more heavily on the estimated priors for its estimate of knot locations. We compared optimized knot locations from our algorithm to two sets of manual annotations on a prospective test data set comprising 200 beats from 20 subjects not in the training set. Mean errors of characteristic point locations were less than four milliseconds, and standard deviations of errors compared favorably against reference values. This framework can easily be adapted to include additional points of interest in the ECG signal or for other biomedical detection problems on quasi-periodic signals. (paper)

  2. Geographically weighted regression model on poverty indicator

    Science.gov (United States)

    Slamet, I.; Nugroho, N. F. T. A.; Muslich

    2017-12-01

    In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java, with a Gaussian kernel as the weighting function. GWR uses the diagonal matrix of Gaussian kernel weights in the regression model; the kernel weights handle spatial effects in the data so that a model can be obtained for each location. The purpose of this paper is to model poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city of the province. Based on the research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for poverty percentage data in Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and the number of BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. The coefficient of determination R2 is 68.64%. The districts fall into two categories, influenced by different sets of significant factors.
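
    A compact numpy sketch of the GWR idea: one weighted least-squares fit per location, with Gaussian kernel weights decaying in distance. The coordinates, covariates and bandwidth are synthetic and illustrative; real analyses would use a dedicated package (e.g. mgwr) with data-driven bandwidth selection.

        import numpy as np

        rng = np.random.default_rng(4)
        m = 35                                            # number of regencies/cities
        coords = rng.uniform(0, 100, (m, 2))              # synthetic centroids
        X = np.column_stack([np.ones(m), rng.normal(size=(m, 2))])
        y = X @ np.array([5.0, 2.0, -1.0]) + 0.02 * coords[:, 0] + rng.normal(0, 0.1, m)

        def gwr_coefs(i, bandwidth=30.0):
            d = np.linalg.norm(coords - coords[i], axis=1)
            w = np.exp(-0.5 * (d / bandwidth) ** 2)       # Gaussian kernel weights
            XtW = X.T * w                                 # weight each observation
            return np.linalg.solve(XtW @ X, XtW @ y)

        local_betas = np.array([gwr_coefs(i) for i in range(m)])
        print(local_betas[:3])                            # location-specific coefficients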

  3. Computer simulation comparison of tripolar, bipolar, and spline Laplacian electrocardiogram estimators.

    Science.gov (United States)

    Chen, T; Besio, W; Dai, W

    2009-01-01

    The performance of tripolar and bipolar concentric as well as spline Laplacian electrocardiograms (LECGs) and body surface Laplacian mappings (BSLMs) for localizing and imaging cardiac electrical activation was compared in a computer simulation study. In the simulation, a simplified eccentric heart-torso sphere-cylinder homogeneous volume conductor model was developed. Multiple dipoles with different orientations were used to simulate the underlying cardiac electrical activity. Results show that the tripolar concentric ring electrodes produce the most accurate LECG and BSLM estimation among the three estimators, with the best spatial resolution.

  4. Splines under tension for gridding three-dimensional data

    International Nuclear Information System (INIS)

    Brand, H.R.; Frazer, J.W.

    1982-01-01

    By use of the splines-under-tension concept, a simple algorithm has been developed for the three-dimensional representation of nonuniformly spaced data. The representations provide useful information to the experimentalist when he is attempting to understand the results obtained in a self-adaptive experiment. The shortcomings of the algorithm are discussed as well as the advantages

  5. Effects of Tightening Torque on Dynamic Characteristics of Low Pressure Rotors Connected by a Spline Coupling

    Institute of Scientific and Technical Information of China (English)

    Chen Xi; Liao Mingfu; Li Quankun

    2017-01-01

    A rotor dynamic model is built up for investigating the effects of tightening torque on the dynamic characteristics of low pressure rotors connected by a spline coupling. The experimental rotor system is established using a fluted disk and a speed sensor which is applied in an actual aero engine for speed measurement. Through simulation and experiments, the effects of tightening torque on the dynamic characteristics of the rotor system connected by a spline coupling, including critical speeds, vibration modes and unbalance responses, are analyzed. The results show that when the tightening torque is increased, the first two critical speeds and the amplitudes of the unbalance response gradually increase to varying degrees, while the vibration modes are essentially unchanged. In addition, changing the axial and circumferential positions of the mass unbalance can lead to various amplitudes of the unbalance response and even various rates of change.

  6. Effects of early activator treatment in patients with class II malocclusion evaluated by thin-plate spline analysis.

    Science.gov (United States)

    Lux, C J; Rübel, J; Starke, J; Conradt, C; Stellzig, P A; Komposch, P G

    2001-04-01

    The aim of the present longitudinal cephalometric study was to evaluate the dentofacial shape changes induced by activator treatment between 9.5 and 11.5 years in male Class II patients. For a rigorous morphometric analysis, a thin-plate spline analysis was performed to assess and visualize dental and skeletal craniofacial changes. Twenty male patients with a skeletal Class II malrelationship and increased overjet who had been treated at the University of Heidelberg with a modified Andresen-Häupl-type activator were compared with a control group of 15 untreated male subjects of the Belfast Growth Study. The shape changes for each group were visualized on thin-plate splines, with one spline comprising all 13 landmarks to show all the craniofacial shape changes, including skeletal and dento-alveolar reactions, and a second spline based on 7 landmarks to visualize only the skeletal changes. In the activator group, the grid deformation of the total spline pointed to a strong activator-induced reduction of the overjet that was caused both by a tipping of the incisors and by a moderation of sagittal discrepancies, particularly a slight advancement of the mandible. In contrast, in the control group only slight localized shape changes could be detected. In both the 7- and 13-landmark configurations, the shape changes between the groups differed significantly. The thin-plate spline analysis turned out to be a useful morphometric supplement to conventional cephalometrics because the complex patterns of shape change could be suggestively visualized.

  7. SPLINE-FUNCTIONS IN THE TASK OF THE FLOW AIRFOIL PROFILE

    Directory of Open Access Journals (Sweden)

    Mikhail Lopatjuk

    2013-12-01

    Full Text Available The method and the algorithm for solving the streamlining problem are presented. The Neumann boundary problem is reduced to the solution of integral equations with the given boundary conditions using cubic spline functions.

  8. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    Full Text Available In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. Then, this special MaxEnt process regression model, i.e., the GPR model, is generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model in terms of the sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated through results in simulations that attest to its good finite-sample performance.

  9. On concurvity in nonlinear and nonparametric regression models

    Directory of Open Access Journals (Sweden)

    Sonia Amodio

    2014-12-01

    Full Text Available When data are affected by multicollinearity in the linear regression framework, concurvity will be present when fitting a generalized additive model (GAM). The term concurvity describes nonlinear dependencies among the predictor variables. Just as collinearity results in inflated variance of the estimated regression coefficients in the linear regression model, the presence of concurvity leads to instability of the estimated coefficients in GAMs. Even though the backfitting algorithm will always converge to a solution, in the case of concurvity the final solution of the backfitting procedure in fitting a GAM is influenced by the starting functions. While exact concurvity is highly unlikely, approximate concurvity, the analogue of multicollinearity, is of practical concern as it can lead to upwardly biased estimates of the parameters and to underestimation of their standard errors, increasing the risk of committing type I errors. We compare the existing approaches to detect concurvity, pointing out their advantages and drawbacks, using simulated and real data sets. As a result, this paper provides a general criterion to detect concurvity in nonlinear and nonparametric regression models.

  10. Quintic hyperbolic nonpolynomial spline and finite difference method for nonlinear second order differential equations and its application

    Directory of Open Access Journals (Sweden)

    Navnit Jha

    2014-04-01

    Full Text Available An efficient numerical method based on a quintic nonpolynomial spline basis and high-order finite difference approximations is presented. The scheme uses a spline space containing hyperbolic and polynomial functions as its basis. With the help of the spline functions, we derive consistency conditions and high-order discretizations of the differential equation with a significant first-order derivative. The error analysis of the new method is discussed briefly. The method is analyzed for its efficiency on physical problems, and its order and accuracy are assessed in terms of maximum errors and root mean square errors.

  11. Semiparametric Mixtures of Regressions with Single-index for Model Based Clustering

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2017-01-01

    In this article, we propose two classes of semiparametric mixture regression models with single-index for model based clustering. Unlike many semiparametric/nonparametric mixture regression models that can only be applied to low dimensional predictors, the new semiparametric models can easily incorporate high dimensional predictors into the nonparametric components. The proposed models are very general, and many of the recently proposed semiparametric/nonparametric mixture regression models a...

  12. Short-term electricity prices forecasting based on support vector regression and Auto-regressive integrated moving average modeling

    International Nuclear Information System (INIS)

    Che Jinxing; Wang Jianzhou

    2010-01-01

    In this paper, we present the use of different mathematical models to forecast electricity prices under deregulated power markets. A successful electricity price prediction tool can help both power producers and consumers plan their bidding strategies. Inspired by the fact that the support vector regression (SVR) model, with its ε-insensitive loss function, admits residuals within the boundary values of the ε-tube, we propose a hybrid model that combines SVR and autoregressive integrated moving average (ARIMA) models to take advantage of their unique strengths in nonlinear and linear modeling, which we call SVRARIMA. A nonlinear analysis of the time series indicates the suitability of nonlinear modeling, so SVR is applied to capture the nonlinear patterns, while ARIMA models have been successfully applied to the residual regression estimation problems. The experimental results demonstrate that the proposed model outperforms existing neural-network approaches, traditional ARIMA models and other hybrid models in terms of root mean square error and mean absolute percentage error.
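
    A minimal sketch of one way to wire such a hybrid: SVR on lagged prices for the nonlinear part, ARIMA on the SVR residuals for the linear part. The series, lag order, SVR settings and ARIMA order below are illustrative choices, not the paper's exact SVRARIMA construction.

        import numpy as np
        from sklearn.svm import SVR
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(5)
        price = np.cumsum(rng.normal(0, 1, 400)) + 50      # synthetic price series

        lags = 3                                           # lagged-price features
        X = np.column_stack([price[i:len(price) - lags + i] for i in range(lags)])
        y = price[lags:]

        svr = SVR(C=10.0, epsilon=0.1).fit(X, y)           # nonlinear component
        resid = y - svr.predict(X)

        arima = ARIMA(resid, order=(1, 0, 1)).fit()        # linear component on residuals
        hybrid_fit = svr.predict(X) + arima.fittedvalues
        print(np.sqrt(np.mean((y - hybrid_fit) ** 2)))     # in-sample RMSE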

  13. Application of multivariate splines to discrete mathematics

    OpenAIRE

    Xu, Zhiqiang

    2005-01-01

    Using methods developed in multivariate splines, we present an explicit formula for discrete truncated powers, which are defined as the number of non-negative integer solutions of linear Diophantine equations. We further use the formula to study some classical problems in discrete mathematics as follows. First, we extend the partition function of integers in number theory. Second, we exploit the relation between the relative volume of convex polytopes and multivariate truncated powers and giv...

  14. Modeling oil production based on symbolic regression

    International Nuclear Information System (INIS)

    Yang, Guangfei; Li, Xianneng; Wang, Jianliang; Lian, Lian; Ma, Tieju

    2015-01-01

    Numerous models have been proposed to forecast the future trends of oil production and almost all of them are based on some predefined assumptions with various uncertainties. In this study, we propose a novel data-driven approach that uses symbolic regression to model oil production. We validate our approach on both synthetic and real data, and the results prove that symbolic regression could effectively identify the true models beneath the oil production data and also make reliable predictions. Symbolic regression indicates that world oil production will peak in 2021, which broadly agrees with other techniques used by researchers. Our results also show that the rate of decline after the peak is almost half the rate of increase before the peak, and it takes nearly 12 years to drop 4% from the peak. These predictions are more optimistic than those in several other reports, and the smoother decline will provide the world, especially the developing countries, with more time to orchestrate mitigation plans. -- Highlights: •A data-driven approach has been shown to be effective at modeling the oil production. •The Hubbert model could be discovered automatically from data. •The peak of world oil production is predicted to appear in 2021. •The decline rate after peak is half of the increase rate before peak. •Oil production projected to decline 4% post-peak

  15. ShapeSelectForest: a new r package for modeling landsat time series

    Science.gov (United States)

    Mary Meyer; Xiyue Liao; Gretchen Moisen; Elizabeth Freeman

    2015-01-01

    We present a new R package called ShapeSelectForest recently posted to the Comprehensive R Archival Network. The package was developed to fit nonparametric shape-restricted regression splines to time series of Landsat imagery for the purpose of modeling, mapping, and monitoring annual forest disturbance dynamics over nearly three decades. For each pixel and spectral...

  16. Using cubic splines in the mathematical modeling of the population evolution of Pirapora/MG

    Directory of Open Access Journals (Sweden)

    José Sérgio Domingues

    2014-08-01

    Full Text Available The objective of this work is to obtain a mathematical model for the population evolution of the city of Pirapora/MG, based only on census and population count data from the Brazilian Institute of Geography and Statistics (IBGE). For this, cubic spline interpolation is used, since linear and polynomial interpolation techniques, and also the logistic model, do not fit this population well. The analyzed data are not equidistant, so a sample of years separated by a step h of 10 years is used. The values initially discarded, together with the population estimates for this municipality published by the João Pinheiro Foundation, served to validate the constructed model and to estimate the percentage prediction differences, which did not exceed 2.21%. Assuming that the pattern of population evolution from 2000 to 2010 holds until 2020, the city's population is estimated for 2011 to 2020, with a mean percentage difference of only 0.49%. It is concluded that the model fits the data very well and that population estimates for any year between 1970 and 2020 are reliable. Moreover, the model gives a practical view of an application of this technique to population modeling and can therefore also be used for didactic purposes. Keywords: Cubic splines. Interpolation. Mathematical modeling. Population evolution. Pirapora.
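
    A sketch of the interpolation scheme just described: a cubic spline through decennial points, evaluated between census years. The population figures below are hypothetical placeholders, not IBGE's actual counts for Pirapora.

        import numpy as np
        from scipy.interpolate import CubicSpline

        years = np.array([1970, 1980, 1990, 2000, 2010])        # step h = 10 years
        pop = np.array([20000, 32000, 44000, 50000, 53000])     # hypothetical counts

        cs = CubicSpline(years, pop)
        for yr in (1995, 2005, 2015):
            print(yr, int(cs(yr)))   # 2015 extrapolates beyond the data; treat with caution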

  17. Comparison of four mathematical models for the calculation of radioimmunoassay data of LH, FSH and GH

    International Nuclear Information System (INIS)

    Geier, T.; Rohde, W.

    1981-01-01

    Weighted linear logit-log regression, point-to-point logit-log interpolation, smoothing spline approximation and the four-parameter logistic function calculated by non-linear regression have been compared. The data for comparison were obtained from two different pool sera for each of the LH, FSH and GH RIAs, and from the basal serum LH values of two populations of children. The Wilcoxon matched-pairs signed-rank test was used for comparison: for GH there is no significant difference between the methods, for FSH the weighted linear logit-log regression and spline approximation appeared to be equivalent, but for LH no unequivocal assertion can be made. There is no significant difference between the mathematical models for determination of hormone concentration within one assay run of a population, as exemplified for LH. In addition, pool sera data were subjected to an analysis of variance, and comparison of the results revealed that the different models did not lead to different statements about assay performance. The point-to-point logit-log interpolation is proposed as the simplest curvilinear approximation for assays which cannot be linearized by the logit-log transformation. (author)
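
    For concreteness, a sketch of the four-parameter logistic calibration fit via nonlinear least squares. The dose-response numbers are illustrative stand-ins for RIA calibrator data.

        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(x, a, d, c, b):
            # a: response at zero dose, d: response at infinite dose,
            # c: dose at the inflection point (ED50), b: slope factor
            return d + (a - d) / (1.0 + (x / c) ** b)

        dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
        counts = np.array([9500, 9100, 7800, 5200, 2600, 1200, 700])  # illustrative

        params, _ = curve_fit(four_pl, dose, counts, p0=(9500, 600, 3.0, 1.0))
        print(dict(zip(["a", "d", "c", "b"], params)))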

  18. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model using an example taken from a management study.

  19. Model building strategy for logistic regression: purposeful selection.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-03-01

    Logistic regression is one of the most commonly used models to account for confounders in the medical literature. This article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable has a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment of the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effects on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
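
    Although the article works in R, the same likelihood ratio check translates directly; a sketch in Python with statsmodels (the data file and variable names are hypothetical):

        import pandas as pd
        import statsmodels.formula.api as smf
        from scipy import stats

        df = pd.read_csv("cohort.csv")                   # hypothetical data
        full = smf.logit("outcome ~ age + sex + exposure", df).fit(disp=0)
        reduced = smf.logit("outcome ~ age + sex", df).fit(disp=0)

        lr = 2 * (full.llf - reduced.llf)                # likelihood ratio statistic
        p = stats.chi2.sf(lr, df=full.df_model - reduced.df_model)
        print(lr, p)   # small p: deleting 'exposure' significantly worsens fit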

  20. The APT model as reduced-rank regression

    NARCIS (Netherlands)

    Bekker, P.A.; Dobbelstein, P.; Wansbeek, T.J.

    Integrating the two steps of an arbitrage pricing theory (APT) model leads to a reduced-rank regression (RRR) model. So the results on RRR can be used to estimate APT models, making estimation very simple. We give a succinct derivation of estimation of RRR, derive the asymptotic variance of RRR

  1. Study on signal processing in Eddy current testing for defects in spline gear

    International Nuclear Information System (INIS)

    Lee, Jae Ho; Park, Tae Sug; Park, Ik Keun

    2016-01-01

    Eddy current testing (ECT) is commonly applied for the inspection of automated production lines of metallic products, because it offers high inspection speed at a reasonable price. When ECT is applied to the inspection of a metallic object with an uneven surface, such as the spline gear of a spline shaft, it is difficult to distinguish the signal generated by a defect from the signal obtained from the sensor, because the relatively large surface signals have similar frequency distributions. To facilitate the detection of defect signals from the spline gear, the implementation of high-order filters is essential, so that fault signals can be distinguished from the surrounding noise and, at the same time, the pass-band of the filter can be adjusted according to the status of each production line and the object to be inspected. We examine the infinite impulse response (IIR) filters available for implementing such an advanced filter for ECT, and attempt to detect the flaw signals through optimization of the system design parameters for detecting the signals at the system level.
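
    A sketch of the adjustable band-pass idea with an IIR (Butterworth) filter in SciPy. The sampling rate and pass-band edges are illustrative assumptions, not the inspection system's actual settings.

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 100_000.0                            # sampling rate, Hz (assumed)
        low, high = 2_000.0, 8_000.0              # adjustable pass-band, Hz (assumed)
        b, a = butter(N=4, Wn=[low / (fs / 2), high / (fs / 2)], btype="bandpass")

        t = np.arange(0, 0.01, 1 / fs)
        raw = np.sin(2 * np.pi * 500 * t) + 0.3 * np.sin(2 * np.pi * 5_000 * t)
        flaw_band = filtfilt(b, a, raw)           # zero-phase filtering preserves timing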

  2. Influence diagnostics in meta-regression model.

    Science.gov (United States)

    Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

    2017-09-01

    This paper studies influence diagnostics in the meta-regression model, including case deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficients and the heterogeneity variance, and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analyses based on the case-weights, response, covariate, and within-variance perturbation schemes are explored. We introduce a method that simultaneously perturbs responses, covariates, and within-variances to obtain a local influence measure, which has the advantage of being able to compare the influence magnitudes of influential studies across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Logistic Regression Modeling of Diminishing Manufacturing Sources for Integrated Circuits

    National Research Council Canada - National Science Library

    Gravier, Michael

    1999-01-01

    .... The research identified logistic regression as a powerful tool for analysis of DMSMS and further developed twenty models attempting to identify the "best" way to model and predict DMSMS using logistic regression...

  4. A thin-plate spline analysis of the face and tongue in obstructive sleep apnea patients.

    Science.gov (United States)

    Pae, E K; Lowe, A A; Fleetham, J A

    1997-12-01

    The shape characteristics of the face and tongue in obstructive sleep apnea (OSA) patients were investigated using thin-plate (TP) splines. A relatively new analytic tool, the TP spline method provides a means of size normalization and image analysis. When shape is the main concern, the varying sizes of a biologic structure may be a source of statistical noise; more seriously, a strong size effect could mask underlying, actual attributes of the disease. A set of size-normalized data in the form of coordinates was generated from cephalograms of 80 male subjects. The TP spline method visualized the differences in the shape of the face and tongue between OSA patients and nonapneic subjects, and those between the upright and supine body positions. In accordance with OSA severity, the hyoid bone and the submental region were positioned more inferiorly and the fourth vertebra was relocated more posteriorly with respect to the mandible. This caused a fanlike configuration of the lower part of the face and neck in the sagittal plane in both upright and supine body positions. TP splines revealed tongue deformations caused by a body position change. Overall, the new morphometric tool adopted here was found to be viable in the analysis of morphologic changes.

  5. Analysis of dental caries using generalized linear and count regression models

    Directory of Open Access Journals (Sweden)

    Javali M. Phil

    2013-11-01

    Full Text Available Generalized linear models (GLMs) are generalizations of linear regression models which allow fitting regression models to response data in all the sciences, especially the medical and dental sciences, that follow a general exponential family. They are a flexible and widely used class of models that can accommodate diverse response variables. Count data are frequently characterized by overdispersion and excess zeros. Zero-inflated count models provide a parsimonious yet powerful way to model this type of situation. Such models assume that the data are a mixture of two separate data generation processes: one generates only zeros, and the other is either a Poisson or a negative binomial data-generating process. Zero-inflated count regression models such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) regression models have been used to handle dental caries count data with many zeros. We present an evaluation framework for the suitability of applying the GLM, Poisson, NB, ZIP and ZINB models to a dental caries data set where the count data may exhibit evidence of many zeros and overdispersion. Estimation of the model parameters using the method of maximum likelihood is provided. Based on the Vuong test statistic and the goodness-of-fit measures for the dental caries data, the NB and ZINB regression models perform better than the other count regression models.
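
    A sketch comparing a plain Poisson fit with a zero-inflated Poisson fit by AIC, using statsmodels. The simulated counts stand in for caries scores and are not the study's data.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedPoisson

        rng = np.random.default_rng(6)
        n = 400
        x = rng.normal(size=n)
        X = sm.add_constant(x)
        mu = np.exp(0.3 + 0.5 * x)
        counts = rng.poisson(mu) * (rng.uniform(size=n) > 0.4)  # 40% structural zeros

        poisson = sm.Poisson(counts, X).fit(disp=0)
        zip_fit = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1))).fit(disp=0)
        print(poisson.aic, zip_fit.aic)   # ZIP should handle the excess zeros better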

  6. Backfitting in Smoothing Spline Anova, with Application to Historical Global Temperature Data

    Science.gov (United States)

    Luo, Zhen

    In the attempt to estimate the temperature history of the earth using the surface observations, various biases can exist. An important source of bias is the incompleteness of sampling over both time and space. There have been a few methods proposed to deal with this problem. Although they can correct some biases resulting from incomplete sampling, they have ignored some other significant biases. In this dissertation, a smoothing spline ANOVA approach, which is a multivariate function estimation method, is proposed to deal simultaneously with various biases resulting from incomplete sampling. Besides that, an advantage of this method is that we can get various components of the estimated temperature history with a limited amount of information stored. This method can also be used for detecting erroneous observations in the database. The method is illustrated through an example of modeling winter surface air temperature as a function of year and location. Extensions to more complicated models are discussed. The linear system associated with the smoothing spline ANOVA estimates is too large to be solved by full matrix decomposition methods. A computational procedure combining the backfitting (Gauss-Seidel) algorithm and the iterative imputation algorithm is proposed. This procedure takes advantage of the tensor product structure in the data to make the computation feasible in an environment of limited memory. Various related issues are discussed, e.g., the computation of confidence intervals and the techniques to speed up the convergence of the backfitting algorithm such as collapsing and successive over-relaxation.
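
    As a hedged, self-contained sketch of the backfitting (Gauss-Seidel) idea for additive spline models (far simpler than the dissertation's SS-ANOVA setting), the fragment below alternates univariate smoothing-spline fits to partial residuals for a two-component additive model; the data and smoothing parameters are invented.

```python
# Backfitting sketch for y = f1(x1) + f2(x2) + noise, using scipy's
# UnivariateSpline as the per-component smoother. Illustrative only.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
n = 300
x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x1) + (x2 - 0.5) ** 2 + rng.normal(0, 0.1, n)

f1, f2 = np.zeros(n), np.zeros(n)
alpha = y.mean()
for _ in range(20):                          # backfitting (Gauss-Seidel) iterations
    r1 = y - alpha - f2                      # partial residuals for component 1
    o1 = np.argsort(x1)                      # smoother needs increasing abscissae
    s1 = UnivariateSpline(x1[o1], r1[o1], s=n * 0.01)
    f1 = s1(x1) - s1(x1).mean()              # center each component
    r2 = y - alpha - f1                      # partial residuals for component 2
    o2 = np.argsort(x2)
    s2 = UnivariateSpline(x2[o2], r2[o2], s=n * 0.01)
    f2 = s2(x2) - s2(x2).mean()

print("residual variance:", np.var(y - alpha - f1 - f2))
```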

  7. AIRLINE ACTIVITY FORECASTING BY REGRESSION MODELS

    Directory of Open Access Journals (Sweden)

    Н. Білак

    2012-04-01

    Full Text Available Linear and nonlinear regression models are proposed that take into account the trend equation and seasonality indices for analyzing and reconstructing the volume of passenger traffic over past periods and predicting it for future years, together with an algorithm for forming these models based on statistical analysis over the years. The resulting model is a first step toward the synthesis of more complex models, which will enable forecasting of the airline's passenger traffic (income level) with the highest accuracy and timeliness.

  8. [Application of detecting and taking overdispersion into account in Poisson regression model].

    Science.gov (United States)

    Bouche, G; Lepage, B; Migeot, V; Ingrand, P

    2009-08-01

    Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of the variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact with an illustration based on the data of a study investigating the relationship between use of the Internet to seek health information and number of primary care consultations. Three methods, overdispersed Poisson, a robust estimator, and negative binomial regression, were performed to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested overdispersion in the Poisson regression model using the ratio of the sum of squared Pearson residuals to the number of degrees of freedom (chi(2)/df). We then fitted the three models and compared the parameter estimates to those given by the Poisson regression model. Variance of the number of primary care consultations (Var[Y]=21.03) was greater than the mean (E[Y]=5.93) and the chi(2)/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. Interpretation of estimates from two variables (using the Internet to seek health information and single parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (use of a robust estimator) and 0.45 and 0.13 (negative binomial) respectively. Different methods exist to solve the problem of underestimating variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems to be particularly accurate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages.
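
    A hedged Python sketch of the workflow described above, on simulated consultation counts rather than the study's data: check the Pearson chi2/df ratio from a Poisson GLM, rescale the standard errors (overdispersed/quasi-Poisson style), and refit with a negative binomial family.

```python
# Overdispersion check and remedies; data, covariate and dispersion are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
internet_use = rng.integers(0, 2, n)
X = sm.add_constant(internet_use.astype(float))
mu = np.exp(1.4 + 0.2 * internet_use)
consultations = rng.negative_binomial(2, 2.0 / (2.0 + mu))    # overdispersed counts

poisson_fit = sm.GLM(consultations, X, family=sm.families.Poisson()).fit()
dispersion = poisson_fit.pearson_chi2 / poisson_fit.df_resid
print(f"Pearson chi2/df = {dispersion:.2f}")                   # > 1 signals overdispersion

# quasi-Poisson style fix: rescale standard errors by the dispersion estimate
print("rescaled SEs:", poisson_fit.bse * np.sqrt(dispersion))

# negative binomial alternative
nb_fit = sm.GLM(consultations, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print("NB params:", nb_fit.params, "NB SEs:", nb_fit.bse)
```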

  9. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high

  10. Application of multivariate adaptive regression spline-assisted objective function on optimization of heat transfer rate around a cylinder

    Energy Technology Data Exchange (ETDEWEB)

    Dey, Prasenjit; Das, Ajoy K. [Mechanical Engineering Department, National Institute of Technology, Agartala (India)

    2016-12-15

    The present study aims to predict the heat transfer characteristics around a square cylinder with different corner radii using multivariate adaptive regression splines (MARS). Further, the MARS-generated objective function is optimized by particle swarm optimization. The data for the prediction are taken from the recently published article by the present authors [P. Dey, A. Sarkar, A.K. Das, Development of GEP and ANN model to predict the unsteady forced convection over a cylinder, Neural Comput. Appl. (2015)]. Further, the MARS model is compared with artificial neural network and gene expression programming. It has been found that the MARS model is very efficient in predicting the heat transfer characteristics. It has also been found that MARS is more efficient than artificial neural network and gene expression programming in predicting the forced convection data, and also particle swarm optimization can efficiently optimize the heat transfer rate.
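
    The particle swarm step can be sketched as below; the quadratic "surrogate" stands in for the MARS-generated objective (fitting an actual MARS model is omitted here), so the function, bounds and PSO constants are purely illustrative.

```python
# A small particle swarm optimization (PSO) loop minimizing a surrogate objective.
import numpy as np

def surrogate(x):
    """Hypothetical smooth stand-in for a MARS-fitted objective (to be minimized)."""
    return (x[..., 0] - 0.3) ** 2 + 2.0 * (x[..., 1] - 0.7) ** 2

rng = np.random.default_rng(4)
n_particles, n_iter, dim = 30, 100, 2
lo, hi = np.zeros(dim), np.ones(dim)

pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), surrogate(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)                   # keep particles in bounds
    val = surrogate(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print("optimum found near:", gbest)
```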

  11. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    Science.gov (United States)

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea in their infants by using a Bayesian log-binomial regression model in OpenBUGS software. The results showed that caregivers' recognition of an infant's risk signs of diarrhea was associated significantly with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point and interval estimation of the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea, and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child age in months, based on model 2), between the Bayesian log-binomial regression model and the conventional log-binomial regression model. The results showed that all three Bayesian log-binomial regression models converged and the estimated PRs were 1.130 (95%CI: 1.005-1.265), 1.128 (95%CI: 1.001-1.264) and 1.132 (95%CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged and their PRs were 1.130 (95%CI: 1.055-1.206) and 1.126 (95%CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95%CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those from the conventional log-binomial regression model, but they showed good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with less misconvergence and has more advantages in application compared with the conventional log-binomial regression model.
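
    For orientation, a hedged frequentist counterpart of the model above can be fitted in a few lines with statsmodels (a GLM with binomial family and log link); this is not the Bayesian OpenBUGS fit from the study, and the simulated data with a true PR of 1.13 are only for illustration.

```python
# Log-binomial regression for a prevalence ratio; data are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
recognises_signs = rng.integers(0, 2, n)
p = np.clip(0.55 * 1.13 ** recognises_signs, 0, 1)        # true PR of 1.13
care_seeking = rng.binomial(1, p)

X = sm.add_constant(recognises_signs.astype(float))
fit = sm.GLM(care_seeking, X,
             family=sm.families.Binomial(link=sm.families.links.Log())).fit()

pr = np.exp(fit.params[1])                                # exponentiated slope = PR
ci = np.exp(fit.conf_int()[1])
print(f"PR = {pr:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```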

  12. Geographically Weighted Logistic Regression Applied to Credit Scoring Models

    Directory of Open Access Journals (Sweden)

    Pedro Henrique Melo Albuquerque

    Full Text Available Abstract This study used real data from a Brazilian financial institution on transactions involving Consumer Direct Credit (CDC, granted to clients residing in the Distrito Federal (DF, to construct credit scoring models via Logistic Regression and Geographically Weighted Logistic Regression (GWLR techniques. The aims were: to verify whether the factors that influence credit risk differ according to the borrower’s geographic location; to compare the set of models estimated via GWLR with the global model estimated via Logistic Regression, in terms of predictive power and financial losses for the institution; and to verify the viability of using the GWLR technique to develop credit scoring models. The metrics used to compare the models developed via the two techniques were the AICc informational criterion, the accuracy of the models, the percentage of false positives, the sum of the value of false positive debt, and the expected monetary value of portfolio default compared with the monetary value of defaults observed. The models estimated for each region in the DF were distinct in their variables and coefficients (parameters, with it being concluded that credit risk was influenced differently in each region in the study. The Logistic Regression and GWLR methodologies presented very close results, in terms of predictive power and financial losses for the institution, and the study demonstrated viability in using the GWLR technique to develop credit scoring models for the target population in the study.

  13. Modeling maximum daily temperature using a varying coefficient regression model

    Science.gov (United States)

    Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith

    2014-01-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

  14. A New Image Processing Procedure Integrating PCI-RPC and ArcGIS-Spline Tools to Improve the Orthorectification Accuracy of High-Resolution Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Hongying Zhang

    2016-10-01

    Full Text Available Given the low accuracy of the traditional remote sensing image processing software when orthorectifying satellite images that cover mountainous areas, and in order to make a full use of mutually compatible and complementary characteristics of the remote sensing image processing software PCI-RPC (Rational Polynomial Coefficients and ArcGIS-Spline, this study puts forward a new operational and effective image processing procedure to improve the accuracy of image orthorectification. The new procedure first processes raw image data into an orthorectified image using PCI with RPC model (PCI-RPC, and then the orthorectified image is further processed using ArcGIS with the Spline tool (ArcGIS-Spline. We used the high-resolution CBERS-02C satellite images (HR1 and HR2 scenes with a pixel size of 2 m acquired from Yangyuan County in Hebei Province of China to test the procedure. In this study, when separately using PCI-RPC and ArcGIS-Spline tools directly to process the HR1/HR2 raw images, the orthorectification accuracies (root mean square errors, RMSEs for HR1/HR2 images were 2.94 m/2.81 m and 4.65 m/4.41 m, respectively. However, when using our newly proposed procedure, the corresponding RMSEs could be reduced to 1.10 m/1.07 m. The experimental results demonstrated that the new image processing procedure which integrates PCI-RPC and ArcGIS-Spline tools could significantly improve image orthorectification accuracy. Therefore, in terms of practice, the new procedure has the potential to use existing software products to easily improve image orthorectification accuracy.

  15. Structured Additive Regression Models: An R Interface to BayesX

    Directory of Open Access Journals (Sweden)

    Nikolaus Umlauf

    2015-02-01

    Full Text Available Structured additive regression (STAR models provide a flexible framework for model- ing possible nonlinear effects of covariates: They contain the well established frameworks of generalized linear models and generalized additive models as special cases but also allow a wider class of effects, e.g., for geographical or spatio-temporal data, allowing for specification of complex and realistic models. BayesX is standalone software package providing software for fitting general class of STAR models. Based on a comprehensive open-source regression toolbox written in C++, BayesX uses Bayesian inference for estimating STAR models based on Markov chain Monte Carlo simulation techniques, a mixed model representation of STAR models, or stepwise regression techniques combining penalized least squares estimation with model selection. BayesX not only covers models for responses from univariate exponential families, but also models from less-standard regression situations such as models for multi-categorical responses with either ordered or unordered categories, continuous time survival data, or continuous time multi-state models. This paper presents a new fully interactive R interface to BayesX: the R package R2BayesX. With the new package, STAR models can be conveniently specified using Rs formula language (with some extended terms, fitted using the BayesX binary, represented in R with objects of suitable classes, and finally printed/summarized/plotted. This makes BayesX much more accessible to users familiar with R and adds extensive graphics capabilities for visualizing fitted STAR models. Furthermore, R2BayesX complements the already impressive capabilities for semiparametric regression in R by a comprehensive toolbox comprising in particular more complex response types and alternative inferential procedures such as simulation-based Bayesian inference.

  16. Counterexamples to the B-spline Conjecture for Gabor Frames

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Nielsen, Kamilla Haahr

    2016-01-01

    The frame set conjecture for B-splines Bn, n≥2, states that the frame set is the maximal set that avoids the known obstructions. We show that any hyperbola of the form ab=r, where r is a rational number smaller than one and a and b denote the sampling and modulation rates, respectively, has infin...

  17. Multiple regression and beyond an introduction to multiple regression and structural equation modeling

    CERN Document Server

    Keith, Timothy Z

    2014-01-01

    Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. Covers both MR and SEM, while explaining their relevance to one another Also includes path analysis, confirmatory factor analysis, and latent growth modeling Figures and tables throughout provide examples and illustrate key concepts and techniques For additional resources, please visit: http://tzkeith.com/.

  18. Time series regression model for infectious disease and weather.

    Science.gov (United States)

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
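
    A hedged sketch of the modified regression described above, on simulated weekly data: a negative binomial GLM of case counts on lagged temperature, the logarithm of lagged counts (as a proxy for contagion and autocorrelation), and seasonal harmonics. The lag choices, dispersion parameter and data are invented.

```python
# Time series regression sketch for infectious disease counts and weather.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
weeks = np.arange(260)
temp = 15 + 10 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 2, weeks.size)
cases = rng.poisson(20 + 10 * np.sin(2 * np.pi * (weeks - 4) / 52)) + 1

y = cases[1:]
X = np.column_stack([
    np.ones(weeks.size - 1),
    temp[:-1],                                   # lagged weather exposure
    np.log(cases[:-1]),                          # lagged log counts (contagion proxy)
    np.sin(2 * np.pi * weeks[1:] / 52),          # seasonal harmonics
    np.cos(2 * np.pi * weeks[1:] / 52),
])
fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(fit.summary().tables[1])
```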

  19. Linear regression crash prediction models : issues and proposed solutions.

    Science.gov (United States)

    2010-05-01

    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions; namely error structure normality ...

  20. Identification of Influential Points in a Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Jan Grosz

    2011-03-01

    Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods of detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. This paper briefly describes theoretical aspects of several robust methods as well. Robust statistics is a powerful tool to increase the reliability and accuracy of statistical modelling and data analysis. A simulation model of the simple linear regression is presented.
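
    The standard diagnostics mentioned above can be computed as in the hedged sketch below (leverage, externally studentized residuals and Cook's distance via statsmodels); the cut-offs and the deliberately planted influential point are illustrative choices, not prescriptions from the article.

```python
# Outlier and leverage diagnostics for a simple linear regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.5, 30)
x[0], y[0] = 20.0, 0.0                          # one deliberately influential point

X = sm.add_constant(x)
results = sm.OLS(y, X).fit()
infl = results.get_influence()

leverage = infl.hat_matrix_diag                 # hat values
student = infl.resid_studentized_external       # externally studentized residuals
cooks_d, _ = infl.cooks_distance                # Cook's distance

flag = (leverage > 2 * X.shape[1] / len(y)) | (np.abs(student) > 2) | (cooks_d > 4 / len(y))
print("flagged observations:", np.where(flag)[0])
```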

  1. Curvelet-domain multiple matching method combined with cubic B-spline function

    Science.gov (United States)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Since the large number of surface-related multiples present in marine data can seriously influence the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination methods are based on data-driven theory. However, the elimination effect is often unsatisfactory owing to amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, a small number of unknowns are selected as the basis points of the matching coefficient; second, the cubic B-spline function is applied to these basis points to reconstruct the matching array; third, a constrained solving equation is built based on the relationships between the predicted multiples, the matching coefficients, and the actual data; finally, the BFGS algorithm is used to iterate and realize a fast-solving, sparsely constrained multiple matching algorithm. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1 norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
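
    A schematic, hedged Python sketch of the matching idea (not the authors' implementation): the matching coefficient is parameterized by values at a few basis points, interpolated to all samples with a cubic spline, and those few values are optimized with L-BFGS-B under an L1 misfit; the data, basis-point count and optimizer settings are invented.

```python
# Spline-parameterized matching coefficient optimized with (L-)BFGS.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n = 500
t = np.linspace(0.0, 1.0, n)
predicted_multiple = np.sin(40 * t) * np.exp(-2 * t)
true_coeff = 1.0 + 0.4 * t                         # slowly varying amplitude error
data = true_coeff * predicted_multiple + 0.01 * rng.normal(size=n)

basis_t = np.linspace(0.0, 1.0, 8)                 # a small number of basis points

def misfit(c):
    coeff = CubicSpline(basis_t, c)(t)             # reconstruct the matching array
    return np.sum(np.abs(data - coeff * predicted_multiple))   # L1 misfit

res = minimize(misfit, x0=np.ones(basis_t.size), method="L-BFGS-B")
print("estimated coefficients at basis points:", np.round(res.x, 2))
```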

  2. An Adaptive B-Spline Method for Low-order Image Reconstruction Problems - Final Report - 09/24/1997 - 09/24/2000

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael

    2000-04-11

    A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data arising e.g. from noise can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-Splines are used to approximate the object. If a "proper" collection of B-Splines is chosen such that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, the intuition being that a relatively dense set of fine-scale basis elements should cluster near regions of high curvature, while a sparse collection of basis vectors is required to adequately represent the object over spatially smooth areas. The pruning part is a greedy

  3. The art of regression modeling in road safety

    CERN Document Server

    Hauer, Ezra

    2015-01-01

    This unique book explains how to fashion useful regression models from commonly available data to erect models essential for evidence-based road safety management and research. Composed from techniques and best practices presented over many years of lectures and workshops, The Art of Regression Modeling in Road Safety illustrates that fruitful modeling cannot be done without substantive knowledge about the modeled phenomenon. Class-tested in courses and workshops across North America, the book is ideal for professionals, researchers, university professors, and graduate students with an interest in, or responsibilities related to, road safety. This book also: · Presents for the first time a powerful analytical tool for road safety researchers and practitioners · Includes problems and solutions in each chapter as well as data and spreadsheets for running models and PowerPoint presentation slides · Features pedagogy well-suited for graduate courses and workshops including problems, solutions, and PowerPoint p...

  4. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

    Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.

  5. Robust geographically weighted regression of modeling the Air Polluter Standard Index (APSI)

    Science.gov (United States)

    Warsito, Budi; Yasin, Hasbi; Ispriyanti, Dwi; Hoyyi, Abdul

    2018-05-01

    The Geographically Weighted Regression (GWR) model has been widely applied to many practical fields for exploring spatial heterogeneity of a regression model. However, this method is inherently not robust to outliers. Outliers commonly exist in data sets and may lead to a distorted estimate of the underlying regression model. One solution to handle outliers in the regression model is to use robust models, so this model is called Robust Geographically Weighted Regression (RGWR). This research aims to aid the government in the policy-making process related to air pollution mitigation by developing a standard index model for air pollution (Air Polluter Standard Index - APSI) based on the RGWR approach. In this research, we also consider seven variables that are directly related to the air pollution level: the traffic velocity, the population density, the business center aspect, the air humidity, the wind velocity, the air temperature, and the area size of the urban forest. The best model is determined by the smallest AIC value. There are significant differences between Regression and RGWR in this case, but basic GWR using the Gaussian kernel is the best model for modeling APSI because it has the smallest AIC.
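
    A compact, hedged sketch of the basic GWR building block used above: a separate weighted least-squares fit at every location with Gaussian kernel weights. The coordinates, bandwidth and single covariate are invented, and the robust reweighting of RGWR is omitted.

```python
# Basic GWR: local weighted least squares with a Gaussian spatial kernel.
import numpy as np

rng = np.random.default_rng(9)
n = 100
coords = rng.uniform(0, 10, (n, 2))
road_density = rng.uniform(0, 1, n)
# spatially varying coefficient on road density, plus noise
apsi = 50 + (2 + 0.5 * coords[:, 0]) * road_density + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), road_density])
bandwidth = 2.0
local_beta = np.empty((n, 2))
for i in range(n):
    d2 = np.sum((coords - coords[i]) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))               # Gaussian kernel weights
    W = np.diag(w)
    local_beta[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ apsi)

print("range of local road-density coefficients:",
      local_beta[:, 1].min(), local_beta[:, 1].max())
```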

  6. Vibration Analysis of Rectangular Plates with One or More Guided Edges via Bicubic B-Spline Method

    Directory of Open Access Journals (Sweden)

    W.J. Si

    2005-01-01

    Full Text Available A simple and accurate method is proposed for the vibration analysis of rectangular plates with one or more guided edges, in which bicubic B-spline interpolation in combination with a new type of basis cubic B-spline functions is used to approximate the plate deflection. This type of basis cubic B-spline functions can satisfy simply supported, clamped, free, and guided edge conditions with easy numerical manipulation. The frequency characteristic equation is formulated based on classical thin plate theory by performing Hamilton's principle. The present solutions are verified with the analytical ones. Fast convergence, high accuracy and computational efficiency have been demonstrated from the comparisons. Frequency parameters for 13 cases of rectangular plates with at least one guided edge, which are possible by approximate or numerical methods only, are presented. These results are new in literature.

  7. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause...... models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data....

  8. Mandibular transformations in prepubertal patients following treatment for craniofacial microsomia: thin-plate spline analysis.

    Science.gov (United States)

    Hay, A D; Singh, G D

    2000-01-01

    To analyze correction of mandibular deformity using an inverted L osteotomy and autogenous bone graft in patients exhibiting unilateral craniofacial microsomia (CFM), thin-plate spline analysis was undertaken. Preoperative, early postoperative, and approximately 3.5-year postoperative posteroanterior cephalographs of 15 children (age 10+/-3 years) with CFM were scanned, and eight homologous mandibular landmarks digitized. Average mandibular geometries, scaled to an equivalent size, were generated using Procrustes superimposition. Results indicated that the mean pre- and postoperative mandibular configurations differed statistically. Thin-plate spline analysis indicated that the total spline (Cartesian transformation grid) of the pre- to early postoperative configuration showed mandibular body elongation on the treated side and inferior symphyseal displacement. The affine component of the total spline revealed a clockwise rotation of the preoperative configuration, whereas the nonaffine component was responsible for ramus, body, and symphyseal displacements. The transformation grid for the early and late postoperative comparison showed bilateral ramus elongation. A superior symphyseal displacement contrasted with its earlier inferior displacement; the affine component had translocated the symphyseal landmarks towards the midline. The nonaffine component demonstrated bilateral ramus lengthening, and partial warps suggested that these elongations were slightly greater on the nontreated side. The affine component of the pre- and late postoperative comparison also demonstrated a clockwise rotation. The nonaffine component produced the bilateral ramus elongations, with the nontreated side ramus lengthening slightly more than the treated side. It is concluded that an inverted L osteotomy improves mandibular morphology significantly in CFM patients and permits continued bilateral ramus growth. Copyright 2000 Wiley-Liss, Inc.

  9. Micropolar Fluids Using B-spline Divergence Conforming Spaces

    KAUST Repository

    Sarmiento, Adel

    2014-06-06

    We discretized the two-dimensional linear momentum, microrotation, energy and mass conservation equations from micropolar fluids theory, with the finite element method, creating divergence conforming spaces based on B-spline basis functions to obtain pointwise divergence free solutions [8]. Weak boundary conditions were imposed using Nitsche's method for tangential conditions, while normal conditions were imposed strongly. Once the exact mass conservation was provided by the divergence free formulation, we focused on evaluating the differences between micropolar fluids and conventional fluids, to show the advantages of using the micropolar fluid model to capture the features of complex fluids. Square and arc heat-driven cavities were solved as test cases. A variation of the model parameters, along with variation of the Rayleigh number, was performed for a better understanding of the system. The divergence free formulation was used to guarantee an accurate solution of the flow. This formulation was implemented using the framework PetIGA as a basis, using its parallel structures to achieve high scalability. The results of the square heat driven cavity test case are in good agreement with those reported earlier.

  10. An investigation of temporal regularization techniques for dynamic PET reconstructions using temporal splines

    International Nuclear Information System (INIS)

    Verhaeghe, Jeroen; D'Asseler, Yves; Vandenberghe, Stefaan; Staelens, Steven; Lemahieu, Ignace

    2007-01-01

    The use of a temporal B-spline basis for the reconstruction of dynamic positron emission tomography data was investigated. Maximum likelihood (ML) reconstructions using an expectation maximization framework and maximum A-posteriori (MAP) reconstructions using the generalized expectation maximization framework were evaluated. Different parameters of the B-spline basis, such as order, number of basis functions and knot placement, were investigated in a reconstruction task using simulated dynamic list-mode data. We found that a higher order basis reduced both the bias and variance. Using a higher number of basis functions in the modeling of the time activity curves (TACs) allowed the algorithm to model faster changes of the TACs; however, the TACs became noisier. We have compared ML, Gaussian postsmoothed ML and MAP reconstructions. The noise level in the ML reconstructions was controlled by varying the number of basis functions. The MAP algorithm penalized the integrated squared curvature of the reconstructed TAC. The postsmoothed ML was always outperformed in terms of bias and variance properties by the MAP and ML reconstructions. A simple adaptive knot placing strategy was also developed and evaluated. It is based on an arc length redistribution scheme during the reconstruction. The free knot reconstruction allowed a more accurate reconstruction while reducing the noise level especially for fast changing TACs such as blood input functions. Limiting the number of temporal basis functions combined with the adaptive knot placing strategy is in this case advantageous for regularization purposes when compared to the other regularization techniques.
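
    The temporal basis itself is easy to reproduce in a hedged sketch: the fragment below builds a cubic B-spline basis with scipy and fits its coefficients to a noisy time activity curve by least squares; the knot layout, order and TAC are illustrative, and the tomographic reconstruction itself is not attempted.

```python
# Representing a time activity curve (TAC) with a temporal B-spline basis.
import numpy as np
from scipy.interpolate import BSpline

t = np.linspace(0, 60, 120)                                  # minutes
tac = 5 * t * np.exp(-t / 8) + np.random.default_rng(10).normal(0, 0.3, t.size)

k = 3                                                        # cubic basis
interior = np.linspace(0, 60, 8)[1:-1]                       # interior knots
knots = np.concatenate([[0] * (k + 1), interior, [60] * (k + 1)])
n_basis = len(knots) - k - 1

# design matrix: each column is one B-spline basis function evaluated on t
B = np.column_stack([
    BSpline(knots, np.eye(n_basis)[j], k)(t) for j in range(n_basis)
])
coef, *_ = np.linalg.lstsq(B, tac, rcond=None)               # least-squares TAC fit
print("fitted TAC RMSE:", np.sqrt(np.mean((B @ coef - tac) ** 2)))
```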

  11. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics, and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  12. Modelling fourier regression for time series data- a case study: modelling inflation in foods sector in Indonesia

    Science.gov (United States)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

    Regression analysis is used to model the relationship between response variables and predictor variables. The parametric approach to regression is very strict with its assumptions, whereas the nonparametric regression model does not require such model assumptions. Time series data are observations of a variable recorded over time, so if time series data are to be modeled by regression, the response and predictor variables must be determined first. The response variable in a time series is the value at time t (yt), while the predictor variables are the significant lags. In nonparametric regression modeling, one developing approach is the Fourier series approach. One of the advantages of the nonparametric regression approach using Fourier series is its ability to handle data with a trigonometric pattern. Modeling with the Fourier series requires the parameter K, and the number of terms K can be determined using the Generalized Cross Validation method. In inflation modeling for the transportation, communication and financial services sector, the Fourier series yields an optimal K of 120 parameters with an R-square of 99%, whereas multiple linear regression yields an R-square of 90%.
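
    A hedged sketch of the Fourier-series regression with GCV selection of K, on synthetic data rather than the inflation series: build the harmonic design matrix for each candidate K, compute the GCV score from the hat matrix, and keep the K with the smallest score.

```python
# Fourier-basis regression with Generalized Cross Validation for K.
import numpy as np

rng = np.random.default_rng(11)
n = 200
t = np.linspace(0, 1, n, endpoint=False)
y = 2 + np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t) + rng.normal(0, 0.3, n)

def fourier_design(t, K):
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    return np.column_stack(cols)

best = None
for K in range(1, 21):
    X = fourier_design(t, K)
    H = X @ np.linalg.solve(X.T @ X, X.T)            # hat matrix
    resid = y - H @ y
    gcv = n * np.sum(resid ** 2) / (n - np.trace(H)) ** 2
    if best is None or gcv < best[0]:
        best = (gcv, K)

print("K chosen by GCV:", best[1])
```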

  13. Bayesian Inference of a Multivariate Regression Model

    Directory of Open Access Journals (Sweden)

    Marick S. Sinay

    2014-01-01

    Full Text Available We explore Bayesian inference of a multivariate linear regression model with use of a flexible prior for the covariance structure. The commonly adopted Bayesian setup involves a conjugate prior: a multivariate normal distribution for the regression coefficients and an inverse Wishart specification for the covariance matrix. Here we depart from this approach and propose a novel Bayesian estimator for the covariance. A multivariate normal prior for the unique elements of the matrix logarithm of the covariance matrix is considered. Such structure allows for a richer class of prior distributions for the covariance, with respect to strength of beliefs in prior location hyperparameters, as well as the added ability to model potential correlation amongst the covariance structure. The posterior moments of all relevant parameters of interest are calculated based upon numerical results via a Markov chain Monte Carlo procedure. The Metropolis-Hastings-within-Gibbs algorithm is invoked to account for the construction of a proposal density that closely matches the shape of the target posterior distribution. As an application of the proposed technique, we investigate a multiple regression based upon the 1980 High School and Beyond Survey.

  14. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Full Text Available Recently, the regularized coding-based classification methods (e.g. SRC and CRC) show a great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has advantages of CRC, but also takes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (weight matrix of image pixels) to enhance the classification performance. GRR uses the generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: basic general regression and representation classifier (B-GRR) and robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of proposed methods over state-of-the-art algorithms.

  15. PARAMETRIC AND NON PARAMETRIC (MARS: MULTIVARIATE ADAPTIVE REGRESSION SPLINES) LOGISTIC REGRESSIONS FOR PREDICTION OF A DICHOTOMOUS RESPONSE VARIABLE WITH AN EXAMPLE FOR PRESENCE/ABSENCE OF AMPHIBIANS

    Science.gov (United States)

    The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...

  16. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  17. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.

    Science.gov (United States)

    Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko

    2016-03-01

    In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Topology optimization based on spline-based meshfree method using topological derivatives

    International Nuclear Information System (INIS)

    Hur, Junyoung; Youn, Sung-Kie; Kang, Pilseong

    2017-01-01

    Spline-based meshfree method (SBMFM) originates from isogeometric analysis (IGA), which integrates design and analysis through Non-uniform rational B-spline (NURBS) basis functions. SBMFM utilizes the trimming technique of CAD systems by representing the domain using NURBS curves. In this work, an explicit boundary topology optimization using SBMFM is presented with an effective boundary update scheme. There have been similar works on this subject; however, unlike the previous works, where a semi-analytic method for calculating design sensitivities is employed, the design update is done by using topological derivatives. In this research, the topological derivative is used to derive the sensitivity of boundary curves and for the creation of new holes. Based on the values of topological derivatives, the shape of boundary curves is updated. Also, the topological change is achieved by insertion and removal of the inner holes. The presented approach is validated through several compliance minimization problems.

  19. Topology optimization based on spline-based meshfree method using topological derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Junyoung; Youn, Sung-Kie [KAIST, Daejeon (Korea, Republic of); Kang, Pilseong [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)

    2017-05-15

    Spline-based meshfree method (SBMFM) originates from isogeometric analysis (IGA), which integrates design and analysis through Non-uniform rational B-spline (NURBS) basis functions. SBMFM utilizes the trimming technique of CAD systems by representing the domain using NURBS curves. In this work, an explicit boundary topology optimization using SBMFM is presented with an effective boundary update scheme. There have been similar works on this subject; however, unlike the previous works, where a semi-analytic method for calculating design sensitivities is employed, the design update is done by using topological derivatives. In this research, the topological derivative is used to derive the sensitivity of boundary curves and for the creation of new holes. Based on the values of topological derivatives, the shape of boundary curves is updated. Also, the topological change is achieved by insertion and removal of the inner holes. The presented approach is validated through several compliance minimization problems.

  20. A Novel Approach of Cardiac Segmentation In CT Image Based On Spline Interpolation

    International Nuclear Information System (INIS)

    Gao Yuan; Ma Pengcheng

    2011-01-01

    Organ segmentation in CT images is the basis of organ model reconstruction, so precisely detecting and extracting the organ boundary is key for reconstruction. In CT images the cardiac region is often adjacent to the surrounding tissues and the gray gradient between them is slight, which makes classical segmentation methods difficult to apply. We propose a novel algorithm for cardiac segmentation in CT images in this paper, which combines gray-gradient methods with B-spline interpolation. This algorithm can accurately detect the cardiac boundaries and at the same time remains fast because of its automatic processing.
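
    As a hedged illustration of the spline-interpolation step (the gray-gradient edge detection is not reproduced), the sketch below smooths a noisy, roughly circular set of boundary points with a closed periodic cubic B-spline using scipy's splprep/splev; the synthetic points and smoothing factor are invented.

```python
# Smoothing a rough organ boundary with a periodic cubic B-spline.
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(12)
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
r = 50 + 5 * np.sin(3 * theta) + rng.normal(0, 1.5, theta.size)   # noisy edge points
x, y = 128 + r * np.cos(theta), 128 + r * np.sin(theta)

# close the contour (first point repeated) and fit a periodic smoothing spline
x, y = np.append(x, x[0]), np.append(y, y[0])
tck, _ = splprep([x, y], s=len(x) * 2.0, per=True)

u_fine = np.linspace(0, 1, 400)
bx, by = splev(u_fine, tck)                                        # smoothed boundary
print("smoothed boundary has", len(bx), "samples")
```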

  1. Application of Multivariate Adaptive Regression Splines to Sheet Metal Bending Process for Springback Compensation

    Directory of Open Access Journals (Sweden)

    Dilan Rasim Aşkın

    2016-01-01

    Full Text Available An intelligent regression technique is applied for sheet metal bending processes to improve bending performance. This study is a part of another extensive study, automated sheet bending assistance for press brakes. Data related to material properties of sheet metal is collected in an online manner and fed to an intelligent system for determining the most accurate punch displacement without any offline iteration or calibration. The overall system aims to reduce the production time while increasing the performance of press brakes.

  2. Logistic regression models

    CERN Document Server

    Hilbe, Joseph M

    2009-01-01

    This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect-great clarity.The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted so that the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … .-Annette J. Dobson, Biometric...

  3. A comparison of six metamodeling techniques applied to building performance simulations

    DEFF Research Database (Denmark)

    Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen Enersen

    2018-01-01

    Highlights •Linear regression (OLS), support vector regression (SVR), regression splines (MARS). •Random forest (RF), Gaussian processes (GPR), neural network (NN). •Accuracy, time, interpretability, ease-of-use, model selection, and robustness. •13 problems modelled for 9 training set sizes...

  4. Linking Simple Economic Theory Models and the Cointegrated Vector AutoRegressive Model

    DEFF Research Database (Denmark)

    Møller, Niels Framroze

    This paper attempts to clarify the connection between simple economic theory models and the approach of the Cointegrated Vector Auto-Regressive model (CVAR). By considering (stylized) examples of simple static equilibrium models, it is illustrated in detail how the theoretical model and its stru... Further fundamental extensions and advances to more sophisticated theory models, such as those related to dynamics and expectations (in the structural relations), are left for future papers... It is also demonstrated how other controversial hypotheses, such as Rational Expectations, can be formulated directly as restrictions on the CVAR parameters. A simple example of a "Neoclassical synthetic" AS-AD model is also formulated. Finally, the partial-general equilibrium distinction is related to the CVAR as well...

  5. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  6. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  7. The Chaotic Prediction for Aero-Engine Performance Parameters Based on Nonlinear PLS Regression

    Directory of Open Access Journals (Sweden)

    Chunxiao Zhang

    2012-01-01

    Full Text Available The prediction of aero-engine performance parameters is very important for aero-engine condition monitoring and fault diagnosis. In this paper, the chaotic phase space of the engine exhaust temperature (EGT) time series, which comes from actual air-borne ACARS data, is reconstructed by selecting suitable nearby points. Partial least squares (PLS) based on the cubic spline function or the kernel function transformation is adopted to obtain a chaotic predictive function of the EGT series. The experimental results indicate that the proposed PLS chaotic prediction algorithm based on the biweight kernel function transformation has a significant advantage in overcoming multicollinearity among the independent variables and improving the stability of the regression model. Our predictive NMSE is 16.5 percent less than that of the traditional linear least squares (OLS) method and 10.38 percent less than that of the linear PLS approach. At the same time, the forecast error is less than that of the nonlinear PLS algorithm through bootstrap test screening.

  8. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, would cause misleading statistical inferences and analysis. Therefore, our goal is to examine the relationship of the outcome variable and the unobserved exposure variable given the observed mismeasured surrogate by applying the Bayesian formulation to the EIV model. We shall extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model which is the Poisson regression model. We shall then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.

  9. Combined visualization for noise mapping of industrial facilities based on ray-tracing and thin plate splines

    Science.gov (United States)

    Ovsiannikov, Mikhail; Ovsiannikov, Sergei

    2017-01-01

    The paper presents a combined approach to noise mapping and visualization of the sound pollution of industrial facilities using a forward ray-tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for range computation of sanitary zones based on the ray-tracing algorithm. Computation of sound pressure levels within the clustered zones is based on two-dimensional spline interpolation of data measured on the perimeter and inside each zone.
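
    The zone-interior interpolation step can be sketched in a few lines of R. This is a minimal illustration assuming the fields package for thin-plate splines; the coordinates and measured sound levels below are invented placeholders, not the paper's data.

    ```r
    library(fields)

    set.seed(1)
    xy  <- cbind(runif(40, 0, 100), runif(40, 0, 100))   # measurement points (m)
    spl <- 65 + 10 * sin(xy[, 1] / 20) + rnorm(40)       # measured levels (dB)

    fit  <- Tps(xy, spl)                                 # thin-plate spline surface
    grid <- as.matrix(expand.grid(x = seq(0, 100, 5), y = seq(0, 100, 5)))
    pred <- predict(fit, grid)                           # interpolated noise map
    ```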

  10. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression.

    Science.gov (United States)

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-04-08

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicates that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities and road construction are clustered only in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale.
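
    A hedged sketch of a GWR fit of this kind in R, assuming the spgwr package; the fire-count data frame, its covariates, and the Gaussian working model are illustrative stand-ins for the study's actual data and specification.

    ```r
    library(spgwr)

    set.seed(2)
    df <- data.frame(fires = rpois(100, 3),              # hypothetical fire counts
                     road_density = runif(100),
                     enterprises  = runif(100),
                     x = runif(100), y = runif(100))
    xy <- cbind(df$x, df$y)

    bw  <- gwr.sel(fires ~ road_density + enterprises, data = df, coords = xy)
    fit <- gwr(fires ~ road_density + enterprises, data = df, coords = xy,
               bandwidth = bw)                           # Gaussian working model
    fit$SDF   # local coefficients, one set per location
    ```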

  11. Direction of Effects in Multiple Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
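
    The core diagnostic can be sketched directly in base R: fit both candidate directions and compare the third central moments (skewness) of their residuals. The simulated variables below are illustrative.

    ```r
    set.seed(3)
    x <- rchisq(500, df = 3)              # skewed "cause"
    y <- 0.6 * x + rnorm(500)             # outcome with normal error

    skew <- function(r) mean((r - mean(r))^3) / sd(r)^3
    r_xy <- resid(lm(y ~ x))              # candidate direction x -> y
    r_yx <- resid(lm(x ~ y))              # reversed direction y -> x

    # Under the stated conditions, the mis-specified direction leaves more
    # skewness in its residuals; compare the third central moments.
    c(xy = skew(r_xy), yx = skew(r_yx))
    ```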

  12. The core spline method for solution of quantum-mechanical systems of differential equations for bound states

    International Nuclear Information System (INIS)

    Aleksandrov, L.; Drenska, M.; Karadzhov, D.

    1986-01-01

    A generalization of the core spline method is given for the solution of the general bound state problem for a system of M linear differential equations with coefficients depending on the spectral parameter. The recursion scheme for the construction of basic splines is described. The wave functions are expressed as linear combinations of basic splines, which are approximate partial solutions of the system. The spectral parameter (the eigenvalue) is determined from the condition for the existence of a nontrivial solution of an (M×M) linear algebraic system at the last collocation point. The nontrivial solutions of this system determine (M - 1) coefficients of the linear spans expressing the wave functions. The last unknown coefficient is determined from a boundary (or normalization) condition for the system. The computational aspects of the method are discussed, in particular its concrete algorithmic realization used in the RODSOL program. The numerical solution of the Dirac system for the bound states of a hydrogen atom is given as an example

  13. Improvement of the Cubic Spline Function Sets for a Synthesis of the Axial Power Distribution of a Core Protection System

    International Nuclear Information System (INIS)

    Koo, Bon-Seung; Lee, Chung-Chan; Zee, Sung-Quun

    2006-01-01

    The online digital core protection system (SCOPS) for a system-integrated modular reactor is being developed as part of a plant protection system at KAERI. SCOPS calculates the minimum CHFR and maximum LPD based on several online-measured system parameters, including 3-level ex-core detector signals. In conventional ABB-CE digital power plants, the cubic spline synthesis technique has been used in the CPC for online calculation of the core axial power distributions from ex-core detector signals once every second. In the CPC, pre-determined cubic spline function sets are used depending on the characteristics of the ex-core detector responses. But this method shows a non-negligible power distribution error for extremely skewed axial shapes because of the restrictive function sets. Therefore, this paper describes the cubic spline method for the synthesis of an axial power distribution and generates several new cubic spline function sets for application to the core protection system, especially for the severely distorted power shapes found in this reactor type

  14. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design.

    Science.gov (United States)

    Meaney, Christopher; Moineddin, Rahim

    2014-01-24

    In biomedical research, response variables are often encountered which have bounded support on the open unit interval--(0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to that of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
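
    A minimal sketch of the two-sample comparison in R, assuming the betareg package for the beta and variable-dispersion beta fits; the parameter values below are illustrative, not those of the simulation study.

    ```r
    library(betareg)

    set.seed(4)
    grp <- rep(0:1, each = 25)                      # small samples, N0 = N1 = 25
    mu  <- ifelse(grp == 0, 0.3, 0.5); phi <- 30
    y   <- rbeta(50, mu * phi, (1 - mu) * phi)      # proportions in (0, 1)

    fit_lm <- lm(y ~ grp)                           # linear regression
    fit_b  <- betareg(y ~ grp)                      # constant-dispersion beta
    fit_vd <- betareg(y ~ grp | grp)                # variable-dispersion beta
    coef(fit_b)   # group effect on the logit-mean scale
    ```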

  15. B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation

    Directory of Open Access Journals (Sweden)

    Frederic Precioso

    2002-06-01

    Full Text Available This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contour is a powerful technique for segmentation. However, most of these methods are implemented using level sets. Although level-set methods provide accurate segmentation, they suffer from a large computational cost. We propose to use a regular B-spline parametric method to provide a fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points, 2^j, depending on the level of the desired detail. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We also introduce a length penalty, which improves both smoothness and accuracy. Finally, we show some experiments on real video sequences.

  16. Tutorial on Using Regression Models with Count Outcomes Using R

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2016-02-01

    Full Text Available Education researchers often study count variables, such as times a student reached a goal, discipline referrals, and absences. Most researchers that study these variables use typical regression methods (i.e., ordinary least squares), either with or without transforming the count variables. In either case, using typical regression for count data can produce parameter estimates that are biased, thus diminishing any inferences made from such data. As count-variable regression models are seldom taught in training programs, we present a tutorial to help educational researchers use such methods in their own research. We demonstrate analyzing and interpreting count data using Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial regression models. The count regression methods are introduced through an example using the number of times students skipped class. The data for this example are freely available, and the R syntax used to run the example analyses is included in the Appendix.
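
    The four model families named in the tutorial can be sketched in R as follows, assuming the MASS and pscl packages; the class-skipping data are simulated stand-ins for the freely available example data.

    ```r
    library(MASS)   # glm.nb
    library(pscl)   # zeroinfl

    set.seed(5)
    d <- data.frame(gpa = rnorm(200, 3, 0.5))
    d$skips <- rpois(200, exp(1 - 0.5 * d$gpa))        # simulated skipped classes

    m_pois <- glm(skips ~ gpa, data = d, family = poisson)
    m_nb   <- glm.nb(skips ~ gpa, data = d)
    m_zip  <- zeroinfl(skips ~ gpa, data = d)                   # zero-inflated Poisson
    m_zinb <- zeroinfl(skips ~ gpa, data = d, dist = "negbin")  # zero-inflated NB
    exp(coef(m_pois))   # Poisson coefficients as rate ratios
    ```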

  17. Statistical approach for selection of regression model during validation of bioanalytical method

    Directory of Open Access Journals (Sweden)

    Natalija Nakov

    2014-06-01

    Full Text Available The selection of an adequate regression model is the basis for obtaining accurate and reproducible results during bioanalytical method validation. Given the wide concentration range frequently present in bioanalytical assays, heteroscedasticity of the data may be expected. Several weighted linear and quadratic regression models were evaluated during the selection of the adequate curve fit using nonparametric statistical tests: the one-sample rank test and the Wilcoxon signed rank test for two independent groups of samples. The results obtained with the one-sample rank test could not give statistical justification for the selection of linear vs. quadratic regression models because only slight differences between the errors (presented through the relative residuals) were obtained. Estimation of the significance of the differences in the relative residuals was achieved using the Wilcoxon signed rank test, where the linear and quadratic regression models were treated as two independent groups. The application of this simple nonparametric statistical test provides statistical confirmation of the choice of an adequate regression model.
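
    A base-R sketch of the comparison: fit weighted linear and quadratic calibration models, form relative residuals, and compare the two groups with a Wilcoxon test. The concentrations, the 1/x^2 weighting, and the error model are illustrative assumptions.

    ```r
    set.seed(6)
    conc <- rep(c(1, 5, 10, 50, 100, 500), each = 3)            # calibration levels
    resp <- 0.02 * conc + rnorm(length(conc), sd = 0.01 * conc) # heteroscedastic

    w     <- 1 / conc^2                                  # 1/x^2 weighting
    fit_l <- lm(resp ~ conc, weights = w)                # weighted linear
    fit_q <- lm(resp ~ conc + I(conc^2), weights = w)    # weighted quadratic

    rr_l <- abs(resid(fit_l)) / fitted(fit_l) * 100      # relative residuals (%)
    rr_q <- abs(resid(fit_q)) / fitted(fit_q) * 100
    wilcox.test(rr_l, rr_q)           # treat the two models as independent groups
    ```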

  18. Linear regression models for quantitative assessment of left ...

    African Journals Online (AJOL)

    Changes in left ventricular structures and function have been reported in cardiomyopathies. No prediction models have been established in this environment. This study established regression models for prediction of left ventricular structures in normal subjects. A sample of normal subjects was drawn from a large urban ...

  19. Modelling time course gene expression data with finite mixtures of linear additive models.

    Science.gov (United States)

    Grün, Bettina; Scharl, Theresa; Leisch, Friedrich

    2012-01-15

    A model class of finite mixtures of linear additive models is presented. The component-specific parameters in the regression models are estimated using regularized likelihood methods. The advantages of the regularization are that (i) the pre-specified maximum degrees of freedom for the splines is less crucial than for unregularized estimation and that (ii) for each component individually a suitable degree of freedom is selected in an automatic way. The performance is evaluated in a simulation study with artificial data as well as on a yeast cell cycle dataset of gene expression levels over time. The latest release version of the R package flexmix is available from CRAN (http://cran.r-project.org/).
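
    A minimal sketch of such a mixture of regressions using the flexmix package named in the abstract; the two-component data and the B-spline basis (standing in for the regularized additive components) are illustrative.

    ```r
    library(flexmix)
    library(splines)

    set.seed(7)
    d <- data.frame(t = rep(seq(0, 1, length.out = 50), 2))
    d$y <- c(sin(2 * pi * d$t[1:50]), cos(2 * pi * d$t[51:100])) +
           rnorm(100, sd = 0.2)

    # Two-component mixture; each component is a linear model in a B-spline
    # basis of time (a stand-in for the regularized additive fit).
    fit <- flexmix(y ~ bs(t, df = 5), data = d, k = 2)
    table(clusters(fit))   # sizes of the recovered components
    ```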

  20. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...

  1. Poisson regression for modeling count and frequency outcomes in trauma research.

    Science.gov (United States)

    Gagnon, David R; Doron-LaMarca, Susan; Bell, Margret; O'Farrell, Timothy J; Taft, Casey T

    2008-10-01

    The authors describe how the Poisson regression method for analyzing count or frequency outcome variables can be applied in trauma studies. The outcome of interest in trauma research may represent a count of the number of incidents of behavior occurring in a given time interval, such as acts of physical aggression or substance abuse. Traditional linear regression approaches assume a normally distributed outcome variable with equal variances over the range of predictor variables, and may not be optimal for modeling count outcomes. An application of Poisson regression is presented using data from a study of intimate partner aggression among male patients in an alcohol treatment program and their female partners. Results of Poisson regression and linear regression models are compared.

  2. Full-turn symplectic map from a generator in a Fourier-spline basis

    International Nuclear Information System (INIS)

    Berg, J.S.; Warnock, R.L.; Ruth, R.D.; Forest, E.

    1993-04-01

    Given an arbitrary symplectic tracking code, one can construct a full-turn symplectic map that approximates the result of the code to high accuracy. The map is defined implicitly by a mixed-variable generating function. The implicit definition is no great drawback in practice, thanks to an efficient use of Newton's method to solve for the explicit map at each iteration. The generator is represented by a Fourier series in angle variables, with coefficients given as B-spline functions of action variables. It is constructed by using results of single-turn tracking from many initial conditions. The method has been applied to a realistic model of the SSC in three degrees of freedom. Orbits can be mapped symplectically for 10^7 turns on an IBM RS6000 model 320 workstation, in a run of about one day

  3. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    Directory of Open Access Journals (Sweden)

    Minh Vu Trieu

    2017-03-01

    Full Text Available This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R2) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.

  4. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    Science.gov (United States)

    Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno

    2017-03-01

    This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R2) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.

  5. Deep ensemble learning of sparse regression models for brain disease diagnosis.

    Science.gov (United States)

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2017-04-01

    Recent studies on brain imaging analysis witnessed the core roles of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Of various machine-learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data but with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have been making great successes by outperforming the state-of-the-art performances in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with different values of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set; thereby they have different powers to predict the response values, i.e., clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which thus we call 'Deep Ensemble Sparse Regression Network.' To the best of our knowledge, this is the first work that combines sparse regression models with a deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared with the previous studies on the ADNI cohort in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
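
    The first stage, multiple sparse regressors indexed by a regularization control parameter, can be sketched with the glmnet package; the feature matrix, response, and lambda grid below are illustrative, and the deep-network stage is omitted.

    ```r
    library(glmnet)

    set.seed(8)
    X <- matrix(rnorm(100 * 50), 100, 50)          # imaging-style features
    y <- as.numeric(X[, 1:5] %*% rep(1, 5)) + rnorm(100)   # clinical score

    lambdas <- c(0.5, 0.2, 0.1, 0.05)              # regularization control values
    fits <- lapply(lambdas, function(l) glmnet(X, y, alpha = 1, lambda = l))

    # Stacked predictions act as the target-level representations that the
    # paper feeds into a deep network (that stage is omitted here).
    reps <- sapply(fits, function(f) as.numeric(predict(f, newx = X)))
    dim(reps)   # 100 subjects x 4 representations
    ```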

  6. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
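
    A base-R sketch of the assessment step: develop a logistic model on one data set and score it on another with the Brier score. The split and data are illustrative placeholders for the study's development and external sets.

    ```r
    set.seed(9)
    n <- 400
    d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
    d$y <- rbinom(n, 1, plogis(-0.5 + d$x1 - 0.5 * d$x2))

    dev <- d[1:200, ]; val <- d[201:400, ]             # development / validation
    fit <- glm(y ~ x1 + x2, data = dev, family = binomial)

    p <- predict(fit, newdata = val, type = "response")
    mean((p - val$y)^2)                                # Brier score: lower is better
    ```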

  7. Application of thin-plate spline transformations to finite element models, or, how to turn a bog turtle into a spotted turtle to analyze both.

    Science.gov (United States)

    Stayton, C Tristan

    2009-05-01

    Finite element (FE) models are popular tools that allow biologists to analyze the biomechanical behavior of complex anatomical structures. However, the expense and time required to create models from specimens has prevented comparative studies from involving large numbers of species. A new method is presented for transforming existing FE models using geometric morphometric methods. Homologous landmark coordinates are digitized on the FE model and on a target specimen into which the FE model is being transformed. These coordinates are used to create a thin-plate spline function and coefficients, which are then applied to every node in the FE model. This function smoothly interpolates the location of points between landmarks, transforming the geometry of the original model to match the target. This new FE model is then used as input in FE analyses. This procedure is demonstrated with turtle shells: a Glyptemys muhlenbergii model is transformed into Clemmys guttata and Actinemys marmorata models. Models are loaded and the resulting stresses are compared. The validity of the models is tested by crushing actual turtle shells in a materials testing machine and comparing those results to predictions from FE models. General guidelines, cautions, and possibilities for this procedure are also presented.

  8. A novel knot selection method for the error-bounded B-spline curve fitting of sampling points in the measuring process

    International Nuclear Information System (INIS)

    Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng

    2017-01-01

    The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of the initial knot vector is associated with the ultimate necessary number of knots. This paper provides a novel initial knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, namely the chord length (arc length) and bending degree (curvature), contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with a dense uniform knot vector to substitute for the description of the features of the sampling points. The feature integral of Gs is built as a monotone increasing function in analytic form. Then, the initial knots are selected according to constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of knots ultimately required. (paper)
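
    A simplified base-R sketch of the idea: accumulate a chord-length-plus-bending feature integral over the sampled points, place interior knots at equal increments of it, and fit a least-squares B-spline. This is a loose analogue of the paper's method, not its implementation.

    ```r
    library(splines)

    set.seed(10)
    t <- seq(0, 1, length.out = 200)
    y <- sin(6 * t) + 0.3 * sin(25 * t) + rnorm(200, sd = 0.01)   # sampled points

    dchord <- c(0, sqrt(diff(t)^2 + diff(y)^2))        # local chord length
    dbend  <- c(0, abs(diff(y, differences = 2)), 0)   # crude local bending
    feat   <- cumsum(dchord / sum(dchord) + dbend / sum(dbend))   # feature integral

    k       <- 8                                        # number of interior knots
    targets <- seq(0, max(feat), length.out = k + 2)[2:(k + 1)]
    knots   <- t[findInterval(targets, feat)]           # equal feature increments
    fit     <- lm(y ~ bs(t, knots = knots))             # least-squares B-spline
    ```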

  9. A generalized right truncated bivariate Poisson regression model with applications to health data.

    Science.gov (United States)

    Islam, M Ataharul; Chowdhury, Rafiqul I

    2017-01-01

    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or underdispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.

  10. Selected Aspects of Wear Affecting Keyed Joints and Spline Connections During Operation of Aircrafts

    Directory of Open Access Journals (Sweden)

    Gębura Andrzej

    2014-12-01

    Full Text Available The paper deals with selected deficiencies of spline connections, such as angular or parallel misalignment (eccentricity) and excessive play. It is emphasized how important these deficiencies are for the smooth operation of entire driving units. The aim of the study is to provide a kind of reference list of such deficiencies, with visual symptoms of wear, specifications of mechanical measurements for mating surfaces, mathematical descriptions of the waveforms of the dynamic variability of motion in such connections, and visualizations of the connection behaviour acquired with the use of the FAM-C and FDM-A methods. Attention is paid to hazards to flight safety when excessively worn spline connections are operated for long periods of time

  11. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

    Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse-response (FIR) filter structure, and the correction method for the interpolation compensation filter coefficients is deduced. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective in attenuating the spurious spurs and improving the dynamic performance of the system.
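
    The underlying idea, resampling the time-skewed channel onto the ideal grid with a cubic spline, can be sketched in base R; the paper instead realizes the interpolation as an FIR compensation filter. The signal frequency and timing error below are illustrative.

    ```r
    fs  <- 1e3; n <- 0:199
    dt  <- 0.15 / fs                          # sample-time error of channel B
    sig <- function(tt) sin(2 * pi * 37 * tt)

    tA <- n / fs                              # ideal grid (channel A)
    tB <- tA + dt                             # skewed grid (channel B)
    yB <- sig(tB)

    yB_corr <- splinefun(tB, yB, method = "natural")(tA)  # back onto ideal grid
    max(abs(yB_corr - sig(tA)))               # residual error after correction
    ```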

  12. Trajectory control of an articulated robot with a parallel drive arm based on splines under tension

    Science.gov (United States)

    Yi, Seung-Jong

    Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration to place the end-effector at the desired position and orientation within the workspace and on following the specified path which requires the trajectory planner. In addition, the consideration of joint velocity, acceleration, and jerk trajectories are essential for trajectory planning of industrial robots to obtain smooth operation. The newly designed 6 DOF articulated robot with a parallel drive arm mechanism which permits the joint actuators to be placed in the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness is selected. First, the forward kinematic and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., certain degree polynomial functions, cubic spline functions, and cubic spline functions under tension, are compared to select the best possible method to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension is the method selected for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. Also, this approach is compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented into the microprocessor based robot controller and

  13. Investigation of confined hydrogen atom in spherical cavity, using B-splines basis set

    Directory of Open Access Journals (Sweden)

    M Barezi

    2011-03-01

    Full Text Available Studying confined quantum systems (CQS) is very important in nanotechnology. One of the basic CQS is a hydrogen atom confined in a spherical cavity. In this article, the eigenenergies and eigenfunctions of a hydrogen atom in a spherical cavity are calculated using the linear variational method. B-splines are used as basis functions, from which trial wave functions with the appropriate boundary conditions can easily be constructed. The main characteristics of B-splines are their high localization and flexibility. Besides, these functions are numerically stable and can handle a large volume of computation with good accuracy. The energy levels as a function of cavity radius are analyzed. To check the validity and efficiency of the proposed method, an extensive convergence test of the eigenenergies for different cavity sizes has been carried out.

  14. Linear regression metamodeling as a tool to summarize and present simulation model results.

    Science.gov (United States)

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
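
    A base-R sketch of the metamodeling step: regress a simulated outcome from PSA draws on standardized inputs, so the intercept estimates the base-case outcome and the slopes rank parameter influence. The toy model below is not the cancer cure model.

    ```r
    set.seed(12)
    n  <- 10000
    p1 <- rnorm(n, 0.7, 0.05)                  # e.g., cure probability
    p2 <- rnorm(n, 20000, 2000)                # e.g., treatment cost
    nb <- 50000 * p1 - p2 + rnorm(n, sd = 500) # net benefit from the model

    z1 <- scale(p1); z2 <- scale(p2)           # standardized inputs
    meta <- lm(nb ~ z1 + z2)
    coef(meta)  # intercept ~ expected outcome; slopes ~ sensitivity measures
    ```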

  15. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain in competition with competitors. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models by combining four different data mining techniques for churn prediction, which are backpropagation artificial neural networks (ANN, self-organizing maps (SOM, alpha-cut fuzzy c-means (α-FCM, and Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of the models aims to cluster data in two churner and nonchurner groups and also filter out unrepresentative data or outliers. Then, the clustered data as the outputs are used to assign customers to churner and nonchurner groups by the second technique. Finally, the correctly classified data are used to create Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Types I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model significantly performs better than the two other hierarchical models.

  16. LINEAR REGRESSION MODEL ESTIMATION FOR RIGHT CENSORED DATA

    Directory of Open Access Journals (Sweden)

    Ersin Yılmaz

    2016-05-01

    Full Text Available In this study, we first define right-censored data. Briefly, a right-censored observation arises when a value above a certain threshold is recorded only as exceeding that threshold, which may be caused, for example, by the limits of a measuring device. A linear regression model is then estimated for a response variable obtained from right-censored data. Because of the censoring, Kaplan-Meier weights are used in the estimation of the model; with these weights, the regression estimator is consistent and unbiased. Semiparametric regression is also available as an alternative method and gives useful results for censored data. This study may be particularly relevant for health research, where censored data commonly arise in medical studies.
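
    A hedged sketch of Kaplan-Meier-weighted least squares (Stute-type weights) in R, assuming the survival package; the data-generating values are illustrative.

    ```r
    library(survival)

    set.seed(13)
    n <- 150
    x  <- runif(n); yt <- 1 + 2 * x + rnorm(n)      # latent response
    cc <- rnorm(n, mean = 3)                        # censoring times
    y  <- pmin(yt, cc); delta <- as.numeric(yt <= cc)

    km   <- survfit(Surv(y, delta) ~ 1)             # KM estimate of the y-distribution
    jump <- -diff(c(1, km$surv))                    # KM jump at each time point
    w    <- jump[match(y, km$time)] * delta         # weight: jump at uncensored obs

    fit <- lm(y ~ x, weights = w)                   # KM-weighted least squares
    ```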

  17. Developing and testing a global-scale regression model to quantify mean annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark A. J.; Hendriks, A. Jan; Beusen, Arthur H. W.; Clavreul, Julie; King, Henry; Schipper, Aafke M.

    2017-01-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF based on a dataset unprecedented in size, using observations of discharge and catchment characteristics from 1885 catchments worldwide, measuring between 2 and 10^6 km2. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area and catchment averaged mean annual precipitation and air temperature, slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error (RMSE) values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement (d) was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally to estimate MAF at any point of the river network, thus providing a feasible alternative to spatially explicit process-based global hydrological models.
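
    The general form of such a model can be sketched in base R with a log-log regression on synthetic catchment data; the coefficients and data below are placeholders, not the study's estimates.

    ```r
    set.seed(14)
    n     <- 500
    area  <- 10^runif(n, 0.3, 6)                   # catchment area (km^2)
    prec  <- runif(n, 300, 2500)                   # mean annual precipitation (mm)
    temp  <- runif(n, -5, 25); slope <- runif(n, 0, 30); elev <- runif(n, 0, 3000)
    maf   <- 1e-6 * area * prec * exp(rnorm(n, sd = 0.3))   # synthetic MAF (m^3/s)

    fit <- lm(log(maf) ~ log(area) + log(prec) + temp + slope + elev)
    summary(fit)$r.squared   # variance explained on the log scale
    ```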

  18. Obtaining and projecting mortality tables using spline curves

    Directory of Open Access Journals (Sweden)

    Alejandro MINA-VALDÉS

    2011-01-01

    Full Text Available One of the tools of numerical analysis is the use of nth-order polynomials to interpolate between n + 1 points, though there are cases where these polynomial functions can lead to erroneous results. An alternative is to apply lower-order polynomials to subsets of the data. These connected polynomials are called spline functions. This article presents the tools that numerical analysis provides as the technical instruments needed to carry out existing mathematical procedures on the basis of algorithms that allow their simulation or calculation, in particular piecewise spline functions and interpolation with them. Spline curves are fitted to the survivor series lx of an abridged Mexican mortality table in order to disaggregate it by single ages, while respecting the concavities that the Mexican experience shows as an effect of mortality at the earliest ages and thereafter. Spline curves are also used in simulations that produce future scenarios of the survivor series lx, which give rise to projections of Mexican mortality for the years 2010-2050. These generate complete mortality tables for men and women for that period, highlighting the differences between sexes and ages in the survival probabilities and the gains in life expectancy.
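
    The central interpolation step, disaggregating the abridged survivor column lx to single ages, can be sketched in base R with a monotone spline; the abridged values below are illustrative, not the Mexican tables.

    ```r
    age <- c(0, 1, 5, 10, 20, 30, 40, 50, 60, 70, 80)      # abridged ages
    lx  <- c(100000, 98500, 98200, 98000, 97400, 96500,
             95000, 92000, 86000, 74000, 52000)            # survivors l(x)

    f  <- splinefun(age, lx, method = "hyman")    # monotonicity-preserving spline
    x1 <- 0:80
    lx_single <- f(x1)                            # l(x) at single ages
    qx <- 1 - lx_single[-1] / lx_single[-length(lx_single)]  # death probabilities
    ```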

  19. Ethnicity and skeletal Class III morphology: a pubertal growth analysis using thin-plate spline analysis.

    Science.gov (United States)

    Alkhamrah, B; Terada, K; Yamaki, M; Ali, I M; Hanada, K

    2001-01-01

    A longitudinal retrospective study using thin-plate spline analysis was used to investigate skeletal Class III etiology in Japanese female adolescents. Headfilms of 40 subjects were chosen from the archives of the Orthodontic department at Niigata University Dental Hospital, and were traced at IIIB and IVA Hellman dental ages. Twenty-eight homologous landmarks, representing hard and soft tissue, were digitized. These were used to reproduce a consensus for the profilogram, craniomaxillary complex, mandible, and soft tissue for each age and skeletal group. Generalized least-squares analysis revealed a significant shape difference between age-matched groups. Total spline and partial warps (PW) 3 and 2 showed a maxillary retrusion at stage IIIB opposite an acute cranial base at stage IVA. Mandibular total spline and PW 4 and 5 showed changes affecting most landmarks and their spatial interrelationship, especially a stretch along the articulare-pogonion axis. In the soft tissue analysis, PW 8 showed large and local changes which paralleled the underlying hard tissue components. Allometry of the mandible and anisotropy of the cranial base, the maxilla, and the mandible asserted the complexity of craniofacial growth and the difficulty of predicting its outcome.

  20. DESIGN AND IMPLEMENTATION OF A WAVELET-BASED MULTIRESOLUTION B-SPLINE CURVE EDITING PROGRAM

    Directory of Open Access Journals (Sweden)

    Nanik Suciati

    2002-07-01

    Full Text Available This research constructs a multiresolution representation, based on a wavelet basis, for cubic B-spline curves that interpolate their endpoints. The multiresolution representation is used to support several types of curve editing: smoothing the curve at a continuous range of resolution levels to remove unwanted detail, editing the overall shape of the curve while preserving its details, changing the details of the curve without affecting its overall shape, and editing one particular part of the curve through direct manipulation of its control points. To test the ability of the multiresolution representation to support these four types of curve manipulation, a curve editing program was written in Visual C++ on a 133 MHz Pentium computer with 16 MB of memory, running Windows 95, using the Microsoft Developer Studio 97 environment and the Microsoft Foundation Class library. Tests of the program show that the multiresolution representation supports the editing types mentioned above very well. The multiresolution representation requires no extra storage beyond that used for the control points. In tests with hundreds of control points, the algorithm ran fast enough to sustain interactive communication between the user and the program. Keywords: B-Spline, Wavelet, Multiresolution

  1. A land use regression model for ambient ultrafine particles in Montreal, Canada: A comparison of linear regression and a machine learning approach.

    Science.gov (United States)

    Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne

    2016-04-01

    Existing evidence suggests that ambient ultrafine particles (UFPs) (<0.1 µm) may have adverse effects on human health. We developed a land use regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development, including standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R(2)=0.58 vs. 0.55) or a cross-validation procedure (R(2)=0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
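
    A hedged sketch of the two modelling approaches in R, assuming the KRLS package for kernel-based regularized least squares; the covariates and UFP response are simulated stand-ins for the mobile-monitoring data.

    ```r
    library(KRLS)

    set.seed(16)
    X <- cbind(pop_density = runif(200), traffic_nox = runif(200),
               park_space  = runif(200))
    ufp <- 20000 + 8000 * X[, "traffic_nox"]^2 - 5000 * X[, "park_space"] +
           rnorm(200, sd = 1500)

    fit_lm   <- lm(ufp ~ X)            # standard multivariable linear regression
    fit_krls <- krls(X = X, y = ufp)   # kernel method learns the functional form
    # Out-of-sample R2 on held-out data would be the fair comparison here.
    ```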

  2. Hybrid B-Spline Collocation Method for Solving the Generalized Burgers-Fisher and Burgers-Huxley Equations

    Directory of Open Access Journals (Sweden)

    Imtiaz Wasim

    2018-01-01

    Full Text Available In this study, we introduce a new numerical technique for solving the nonlinear generalized Burgers-Fisher and Burgers-Huxley equations using a hybrid B-spline collocation method. This technique is based on the usual finite difference scheme and the Crank-Nicolson method, which are used to discretize the time derivative and spatial derivatives, respectively. Furthermore, a hybrid B-spline function is utilized as the interpolating function in the spatial dimension. The scheme is shown to be unconditionally stable using the von Neumann (Fourier) method. Several test problems are considered to check the accuracy of the proposed scheme. The numerical results are in good agreement with known exact solutions and existing schemes in the literature.

  3. Adaptive regression for modeling nonlinear relationships

    CERN Document Server

    Knafl, George J

    2016-01-01

    This book presents methods for investigating whether relationships are linear or nonlinear and for adaptively fitting appropriate models when they are nonlinear. Data analysts will learn how to incorporate nonlinearity in one or more predictor variables into regression models for different types of outcome variables. Such nonlinear dependence is often not considered in applied research, yet nonlinear relationships are common and so need to be addressed. A standard linear analysis can produce misleading conclusions, while a nonlinear analysis can provide novel insights into data, not otherwise possible. A variety of examples of the benefits of modeling nonlinear relationships are presented throughout the book. Methods are covered using what are called fractional polynomials based on real-valued power transformations of primary predictor variables combined with model selection based on likelihood cross-validation. The book covers how to formulate and conduct such adaptive fractional polynomial modeling in the s...

  4. Confidence bands for inverse regression models

    International Nuclear Information System (INIS)

    Birke, Melanie; Bissantz, Nicolai; Holzmann, Hajo

    2010-01-01

    We construct uniform confidence bands for the regression function in inverse, homoscedastic regression models with convolution-type operators. Here, the convolution is between two non-periodic functions on the whole real line rather than between two periodic functions on a compact interval, since the former situation arguably arises more often in applications. First, following Bickel and Rosenblatt (1973 Ann. Stat. 1 1071–95) we construct asymptotic confidence bands which are based on strong approximations and on a limit theorem for the supremum of a stationary Gaussian process. Further, we propose bootstrap confidence bands based on the residual bootstrap and prove consistency of the bootstrap procedure. A simulation study shows that the bootstrap confidence bands perform reasonably well for moderate sample sizes. Finally, we apply our method to data from a gel electrophoresis experiment with genetically engineered neuronal receptor subunits incubated with rat brain extract

  5. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    Directory of Open Access Journals (Sweden)

    Shanshan He

    2015-10-01

    Full Text Available Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical instability, lack of a chord error constraint, and lack of assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA) is an efficient method for data fitting that solves the numerical instability problem. However, it does not consider chord errors and needs more work to ensure ironclad results for commercial applications. In this paper, we use the LSPIA method incorporating an energy term (ELSPIA) to avoid the numerical instability, and lower chord errors by using a stretching energy term. We implement several algorithm improvements, including (1) an improved technique for initial control point determination over the Dominant Point Method, (2) an algorithm that updates foot point parameters as needed, (3) analysis of the degrees of freedom of control points to insert new control points only when needed, and (4) chord error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented for verification of quality and efficiency. Comparisons with other known solutions are included to evaluate the worthiness of the proposed solution.

  6. Electricity consumption forecasting in Italy using linear regression models

    Energy Technology Data Exchange (ETDEWEB)

    Bianco, Vincenzo; Manca, Oronzio; Nardini, Sergio [DIAM, Seconda Universita degli Studi di Napoli, Via Roma 29, 81031 Aversa (CE) (Italy)

    2009-09-15

    The influence of economic and demographic variables on the annual electricity consumption in Italy has been investigated with the intention to develop a long-term consumption forecasting model. The time period considered for the historical data is from 1970 to 2007. Different regression models were developed, using historical electricity consumption, gross domestic product (GDP), gross domestic product per capita (GDP per capita) and population. A first part of the paper considers the estimation of GDP, price and GDP per capita elasticities of domestic and non-domestic electricity consumption. The domestic and non-domestic short run price elasticities are found to be both approximately equal to -0.06, while long run elasticities are equal to -0.24 and -0.09, respectively. On the contrary, the elasticities of GDP and GDP per capita present higher values. In the second part of the paper, different regression models, based on co-integrated or stationary data, are presented. Different statistical tests are employed to check the validity of the proposed models. A comparison with national forecasts, based on complex econometric models, such as Markal-Time, was performed, showing that the developed regressions are congruent with the official projections, with deviations of ±1% for the best case and ±11% for the worst. These deviations are to be considered acceptable in relation to the time span taken into account. (author)

  7. Electricity consumption forecasting in Italy using linear regression models

    International Nuclear Information System (INIS)

    Bianco, Vincenzo; Manca, Oronzio; Nardini, Sergio

    2009-01-01

    The influence of economic and demographic variables on the annual electricity consumption in Italy has been investigated with the intention to develop a long-term consumption forecasting model. The time period considered for the historical data is from 1970 to 2007. Different regression models were developed, using historical electricity consumption, gross domestic product (GDP), gross domestic product per capita (GDP per capita) and population. A first part of the paper considers the estimation of GDP, price and GDP per capita elasticities of domestic and non-domestic electricity consumption. The domestic and non-domestic short run price elasticities are found to be both approximately equal to -0.06, while long run elasticities are equal to -0.24 and -0.09, respectively. On the contrary, the elasticities of GDP and GDP per capita present higher values. In the second part of the paper, different regression models, based on co-integrated or stationary data, are presented. Different statistical tests are employed to check the validity of the proposed models. A comparison with national forecasts, based on complex econometric models, such as Markal-Time, was performed, showing that the developed regressions are congruent with the official projections, with deviations of ±1% for the best case and ±11% for the worst. These deviations are to be considered acceptable in relation to the time span taken into account. (author)
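
    How such elasticities are read off a log-log regression can be sketched in base R; the synthetic series below are placeholders for the 1970-2007 historical data.

    ```r
    set.seed(17)
    gdp   <- cumprod(c(1000, 1 + rnorm(37, 0.02, 0.01)))   # 38 years, 1970-2007
    price <- cumprod(c(100,  1 + rnorm(37, 0.01, 0.02)))
    cons  <- exp(0.5 * log(gdp) - 0.24 * log(price) + rnorm(38, sd = 0.02))

    fit <- lm(log(cons) ~ log(gdp) + log(price))
    coef(fit)[["log(price)"]]   # slope read directly as the price elasticity
    ```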

  8. Mutual information based CT registration of the lung at exhale and inhale breathing states using thin-plate splines

    International Nuclear Information System (INIS)

    Coselmon, Martha M.; Balter, James M.; McShan, Daniel L.; Kessler, Marc L.

    2004-01-01

    The advent of dynamic radiotherapy modeling and treatment techniques requires an infrastructure to weigh the merits of various interventions (breath holding, gating, tracking). The creation of treatment planning models that account for motion and deformation can allow the relative worth of such techniques to be evaluated. In order to develop a treatment planning model of a moving and deforming organ such as the lung, registration tools that account for deformation are required. We tested the accuracy of a mutual information based image registration tool using thin-plate splines driven by the selection of control points and iterative alignment according to a simplex algorithm. Eleven patients each had sequential CT scans at breath-held normal inhale and exhale states. The exhale right lung was segmented from CT and served as the reference model. For each patient, thirty control points were used to align the inhale CT right lung to the exhale CT right lung. Alignment accuracy (the standard deviation of the difference between the actual and predicted inhale positions) was determined from locations of vascular and bronchial bifurcations, and found to be 1.7, 3.1, and 3.6 mm along the RL, AP, and IS directions. The alignment accuracy was significantly different from the amount of measured movement during breathing only in the AP and IS directions. The accuracy of alignment including thin-plate splines was more accurate than using affine transformations and the same iteration and scoring methodology. This technique shows promise for the future development of dynamic models of the lung for use in four-dimensional (4-D) treatment planning

  9. Modelling infant mortality rate in Central Java, Indonesia using generalized Poisson regression method

    Science.gov (United States)

    Prahutama, Alan; Sudarno

    2018-05-01

    The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the given geographical area during the same year. This problem needs to be addressed because it is an important element of a country's economic development. A high infant mortality rate will disrupt the stability of a country as it relates to the sustainability of the population in the country. One regression model that can be used to analyze the relationship between a discrete dependent variable Y and an independent variable X is the Poisson regression model. Regression models recently used for discrete dependent variables include, among others, Poisson regression, negative binomial regression and generalized Poisson regression. In this research, generalized Poisson regression modelling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable with the greatest influence on the infant mortality rate is average breastfeeding (X9).

  10. Augmented Beta rectangular regression models: A Bayesian perspective.

    Science.gov (United States)

    Wang, Jue; Luo, Sheng

    2016-01-01

    Mixed effects Beta regression models based on Beta distributions have been widely used to analyze longitudinal percentage or proportional data ranging between zero and one. However, Beta distributions are not flexible to extreme outliers or excessive events around tail areas, and they do not account for the presence of the boundary values zeros and ones because these values are not in the support of the Beta distributions. To address these issues, we propose a mixed effects model using Beta rectangular distribution and augment it with the probabilities of zero and one. We conduct extensive simulation studies to assess the performance of mixed effects models based on both the Beta and Beta rectangular distributions under various scenarios. The simulation studies suggest that the regression models based on Beta rectangular distributions improve the accuracy of parameter estimates in the presence of outliers and heavy tails. The proposed models are applied to the motivating Neuroprotection Exploratory Trials in Parkinson's Disease (PD) Long-term Study-1 (LS-1 study, n = 1741), developed by The National Institute of Neurological Disorders and Stroke Exploratory Trials in Parkinson's Disease (NINDS NET-PD) network. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Bayesian semiparametric regression models to characterize molecular evolution

    Directory of Open Access Journals (Sweden)

    Datta Saheli

    2012-10-01

    Full Text Available Abstract Background Statistical models and methods that associate changes in the physicochemical properties of amino acids with natural selection at the molecular level typically do not take into account the correlations between such properties. We propose a Bayesian hierarchical regression model with a generalization of the Dirichlet process prior on the distribution of the regression coefficients that describes the relationship between the changes in amino acid distances and natural selection in protein-coding DNA sequence alignments. Results The Bayesian semiparametric approach is illustrated with simulated data and the abalone lysin sperm data. Our method identifies groups of properties which, for this particular dataset, have a similar effect on evolution. The model also provides nonparametric site-specific estimates for the strength of conservation of these properties. Conclusions The model described here is distinguished by its ability to handle a large number of amino acid properties simultaneously, while taking into account that such data can be correlated. The multi-level clustering ability of the model allows for appealing interpretations of the results in terms of properties that are roughly equivalent from the standpoint of molecular evolution.

  12. Regression analysis of a chemical reaction fouling model

    International Nuclear Information System (INIS)

    Vasak, F.; Epstein, N.

    1996-01-01

    A previously reported mathematical model for the initial chemical reaction fouling of a heated tube is critically examined in the light of the experimental data for which it was developed. A regression analysis of the model with respect to that data shows that the reference point upon which the two adjustable parameters of the model were originally based was well chosen, albeit fortuitously. (author). 3 refs., 2 tabs., 2 figs

  13. Correlation-regression model for physico-chemical quality of ...

    African Journals Online (AJOL)

    abusaad

    areas, suggesting that groundwater quality in urban areas is closely related with land use ... the ground water, with correlation and regression model is also presented. ... WHO (World Health Organization) (1985). Health hazards from nitrates.

  14. Sequential bayes estimation algorithm with cubic splines on uniform meshes

    International Nuclear Information System (INIS)

    Hossfeld, F.; Mika, K.; Plesser-Walk, E.

    1975-11-01

    After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and the flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de

  15. Spline Collocation Method for Nonlinear Multi-Term Fractional Differential Equation

    OpenAIRE

    Choe, Hui-Chol; Kang, Yong-Suk

    2013-01-01

    We study an approximation method to solve nonlinear multi-term fractional differential equations with initial conditions or boundary conditions. First, we transform the nonlinear multi-term fractional differential equations with initial conditions and boundary conditions to nonlinear fractional integral equations and consider the relations between them. We present a Spline Collocation Method and prove the existence, uniqueness and convergence of the approximate solution, as well as error estimation...

  16. Multiple linear regression and regression with time series error models in forecasting PM10 concentrations in Peninsular Malaysia.

    Science.gov (United States)

    Ng, Kar Yong; Awang, Norhashidah

    2018-01-06

    Frequent haze occurrences in Malaysia have made the management of PM10 (particulate matter with aerodynamic diameter less than 10 μm) pollution a critical task. This requires knowledge of the factors associated with PM10 variation and good forecasts of PM10 concentrations. Hence, this paper demonstrates the prediction of 1-day-ahead daily average PM10 concentrations based on predictor variables including meteorological parameters and gaseous pollutants. Three different models were built: a multiple linear regression (MLR) model with lagged predictor variables (MLR1), an MLR model with lagged predictor variables and PM10 concentrations (MLR2) and a regression with time series error (RTSE) model. The findings revealed that humidity, temperature, wind speed, wind direction, carbon monoxide and ozone were the main factors explaining the PM10 variation in Peninsular Malaysia. Comparison among the three models showed that the MLR2 model was on a par with the RTSE model in terms of forecasting accuracy, while the MLR1 model was the worst.
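
    A minimal sketch of the MLR2-style setup (lagged predictors plus lagged PM10) is given below using pandas and statsmodels; the file name and column names are placeholders, not the actual monitoring data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical daily air-quality data frame with columns 'pm10', 'temp',
# 'humidity' and 'wind'; the real study uses more predictors.
df = pd.read_csv('air_quality_daily.csv')          # placeholder file name

# One-day lags of the predictors and of PM10 itself (an MLR2-style setup).
for col in ['pm10', 'temp', 'humidity', 'wind']:
    df[f'{col}_lag1'] = df[col].shift(1)
df = df.dropna()

model = smf.ols('pm10 ~ pm10_lag1 + temp_lag1 + humidity_lag1 + wind_lag1', data=df)
print(model.fit().summary())
```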

  17. Multivariate Frequency-Severity Regression Models in Insurance

    Directory of Open Access Journals (Sweden)

    Edward W. Frees

    2016-02-01

    Full Text Available In insurance and related industries including healthcare, it is common to have several outcome measures that the analyst wishes to understand using explanatory variables. For example, in automobile insurance, an accident may result in payments for damage to one’s own vehicle, damage to another party’s vehicle, or personal injury. It is also common to be interested in the frequency of accidents in addition to the severity of the claim amounts. This paper synthesizes and extends the literature on multivariate frequency-severity regression modeling with a focus on insurance industry applications. Regression models for understanding the distribution of each outcome continue to be developed yet there now exists a solid body of literature for the marginal outcomes. This paper contributes to this body of literature by focusing on the use of a copula for modeling the dependence among these outcomes; a major advantage of this tool is that it preserves the body of work established for marginal models. We illustrate this approach using data from the Wisconsin Local Government Property Insurance Fund. This fund offers insurance protection for (i) property; (ii) motor vehicle; and (iii) contractors’ equipment claims. In addition to several claim types and frequency-severity components, outcomes can be further categorized by time and space, requiring complex dependency modeling. We find significant dependencies for these data; specifically, we find that dependencies among lines are stronger than the dependencies between the frequency and average severity within each line.

  18. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based...
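
    A short illustration of the idea with statsmodels' QuantReg is sketched below: the same regressor can have different effects at different conditional quantiles. The data are simulated and purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical wage data: quantile regression of log wage on years of schooling.
rng = np.random.default_rng(1)
edu = rng.integers(8, 21, 500)
logwage = 1.5 + 0.08 * edu + rng.normal(0, 0.2 + 0.01 * edu)  # heteroscedastic noise
df = pd.DataFrame({'logwage': logwage, 'edu': edu})

# The schooling effect can differ across the conditional wage distribution.
for q in (0.1, 0.5, 0.9):
    res = smf.quantreg('logwage ~ edu', df).fit(q=q)
    print(q, res.params['edu'])
```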

  19. Towards Additive Manufacture of Functional, Spline-Based Morphometric Models of Healthy and Diseased Coronary Arteries: In Vitro Proof-of-Concept Using a Porcine Template

    Directory of Open Access Journals (Sweden)

    Rachel Jewkes

    2018-02-01

    Full Text Available The aim of this study is to assess the additive manufacture of morphometric models of healthy and diseased coronary arteries. Using a dissected porcine coronary artery, a model was developed with the use of computer aided engineering, with splines used to design arteries in health and disease. The model was altered to demonstrate four cases of stenosis displaying varying severity, based on published morphometric data available. Both an Objet Eden 250 printer and a Solidscape 3Z Pro printer were used in this analysis. A wax printed model was set into a flexible thermoplastic and was valuable for experimental testing with helical flow patterns observed in healthy models, dominating the distal LAD (left anterior descending) and left circumflex arteries. Recirculation zones were detected in all models, but were visibly larger in the stenosed cases. Resin models provide useful analytical tools for understanding the spatial relationships of blood vessels, and could be applied to preoperative planning techniques, but were not suitable for physical testing. In conclusion, it is feasible to develop blood vessel models enabling experimental work; further, through additive manufacture of bio-compatible materials, there is the possibility of manufacturing customized replacement arteries.

  20. Towards Additive Manufacture of Functional, Spline-Based Morphometric Models of Healthy and Diseased Coronary Arteries: In Vitro Proof-of-Concept Using a Porcine Template.

    Science.gov (United States)

    Jewkes, Rachel; Burton, Hanna E; Espino, Daniel M

    2018-02-02

    The aim of this study is to assess the additive manufacture of morphometric models of healthy and diseased coronary arteries. Using a dissected porcine coronary artery, a model was developed with the use of computer aided engineering, with splines used to design arteries in health and disease. The model was altered to demonstrate four cases of stenosis displaying varying severity, based on published morphometric data available. Both an Objet Eden 250 printer and a Solidscape 3Z Pro printer were used in this analysis. A wax printed model was set into a flexible thermoplastic and was valuable for experimental testing with helical flow patterns observed in healthy models, dominating the distal LAD (left anterior descending) and left circumflex arteries. Recirculation zones were detected in all models, but were visibly larger in the stenosed cases. Resin models provide useful analytical tools for understanding the spatial relationships of blood vessels, and could be applied to preoperative planning techniques, but were not suitable for physical testing. In conclusion, it is feasible to develop blood vessel models enabling experimental work; further, through additive manufacture of bio-compatible materials, there is the possibility of manufacturing customized replacement arteries.

  1. Application of random regression models to the genetic evaluation ...

    African Journals Online (AJOL)

    The model included fixed regression on AM (range from 30 to 138 mo) and the effect of herd-measurement date concatenation. Random parts of the model were RRM coefficients for additive and permanent environmental effects, while residual effects were modelled to account for heterogeneity of variance by AY. Estimates ...

  2. Optimization and parallelization of B-spline based orbital evaluations in QMC on multi/many-core shared memory processors

    OpenAIRE

    Mathuriya, Amrita; Luo, Ye; Benali, Anouar; Shulenburger, Luke; Kim, Jeongnim

    2016-01-01

    B-spline based orbital representations are widely used in Quantum Monte Carlo (QMC) simulations of solids, historically taking as much as 50% of the total run time. Random accesses to a large four-dimensional array make it challenging to efficiently utilize caches and wide vector units of modern CPUs. We present node-level optimizations of B-spline evaluations on multi/many-core shared memory processors. To increase SIMD efficiency and bandwidth utilization, we first apply data layout transformations...

  3. Analysis of petroleum contaminated soils by spectral modeling and pure response profile recovery of n-hexane

    International Nuclear Information System (INIS)

    Chakraborty, Somsubhra; Weindorf, David C.; Li, Bin; Ali, Md. Nasim; Majumdar, K.; Ray, D.P.

    2014-01-01

    This pilot study compared penalized spline regression (PSR) and random forest (RF) regression using visible and near-infrared diffuse reflectance spectroscopy (VisNIR DRS) derived spectra of 164 petroleum contaminated soils after two different spectral pretreatments [first derivative (FD) and standard normal variate (SNV) followed by detrending] for rapid quantification of soil petroleum contamination. Additionally, a new analytical approach was proposed for the recovery of the pure spectral and concentration profiles of n-hexane present in the unresolved mixture of petroleum contaminated soils using multivariate curve resolution alternating least squares (MCR-ALS). The PSR model using FD spectra (r² = 0.87, RMSE = 0.580 log₁₀ mg kg⁻¹, and residual prediction deviation = 2.78) outperformed all other models tested. Quantitative results obtained by MCR-ALS for n-hexane in presence of interferences (r² = 0.65 and RMSE 0.261 log₁₀ mg kg⁻¹) were comparable to those obtained using FD (PSR) model. Furthermore, MCR-ALS was able to recover pure spectra of n-hexane. - Highlights: • We predicted soil petroleum contamination with VisNIR DRS spectra. • We examined 2 spectral pretreatments and 2 multivariate models. • MCR-ALS was used for compositional and spectral resolution of n-hexane. • Penalized spline regression performed best for quantifying soil TPH. • MCR-ALS was promising for resolution of complex soil–petroleum mixture. - Use of VisNIR DRS for rapid quantification of soil TPH and resolution of complex soil petroleum mixtures
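
    For readers unfamiliar with penalized spline regression, the sketch below is a minimal hand-rolled P-spline: a cubic B-spline basis combined with a second-order difference penalty on the coefficients. It uses toy one-dimensional data, not the VisNIR spectra, and the smoothing parameter is not tuned.

```python
import numpy as np
from scipy.interpolate import BSpline

# Toy one-dimensional data standing in for a predictor/response pair.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 200))
y = np.sin(x) + 0.1 * x + rng.normal(0.0, 0.2, x.size)

# Cubic B-spline basis on a clamped, equally spaced knot vector.
k = 3
interior = np.linspace(x[0], x[-1], 20)[1:-1]
t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]
n_coef = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(n_coef)[i], k)(x) for i in range(n_coef)])

# Penalized least squares with a second-order difference penalty (P-spline).
lam = 1.0                                    # smoothing parameter, would normally be tuned
D = np.diff(np.eye(n_coef), n=2, axis=0)
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fitted = B @ coef
```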

  4. A graph-based method for fitting planar B-spline curves with intersections

    Directory of Open Access Journals (Sweden)

    Pengbo Bo

    2016-01-01

    Full Text Available The problem of fitting B-spline curves to planar point clouds is studied in this paper. A novel method is proposed to deal with the most challenging case where multiple intersecting curves or curves with self-intersection are necessary for shape representation. A method based on Delaunay triangulation of data points is developed to identify connected components, and it is also capable of removing outliers. A skeleton representation is utilized to represent the topological structure which is further used to create a weighted graph for deciding the merging of curve segments. Unlike existing approaches which utilize local shape information near intersections, our method considers shape characteristics of curve segments in a larger scope and is thus capable of giving more satisfactory results. By fitting each group of data points with a B-spline curve, we solve the problems of curve structure reconstruction from point clouds, as well as the vectorization of simple line-drawing images through reconstruction of the drawn lines.

  5. Multitask Quantile Regression under the Transnormal Model.

    Science.gov (United States)

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2016-01-01

    We consider estimating multi-task quantile regression under the transnormal model, with focus on high-dimensional setting. We derive a surprisingly simple closed-form solution through rank-based covariance regularization. In particular, we propose the rank-based ℓ 1 penalization with positive definite constraints for estimating sparse covariance matrices, and the rank-based banded Cholesky decomposition regularization for estimating banded precision matrices. By taking advantage of alternating direction method of multipliers, nearest correlation matrix projection is introduced that inherits sampling properties of the unprojected one. Our work combines strengths of quantile regression and rank-based covariance regularization to simultaneously deal with nonlinearity and nonnormality for high-dimensional regression. Furthermore, the proposed method strikes a good balance between robustness and efficiency, achieves the "oracle"-like convergence rate, and provides the provable prediction interval under the high-dimensional setting. The finite-sample performance of the proposed method is also examined. The performance of our proposed rank-based method is demonstrated in a real application to analyze the protein mass spectroscopy data.

  6. Local and Global Path Generation for Autonomous Vehicles Using Splines

    Directory of Open Access Journals (Sweden)

    Randerson Lemos

    2016-05-01

    Full Text Available Abstract Context: Before autonomous vehicles become a reality in everyday situations, outstanding issues regarding the techniques of autonomous mobility must be solved. Hence, relevant aspects of path planning for terrestrial vehicles are shown. Method: The path planning technique presented uses splines to generate the global route. For this goal, waypoints obtained from online map services are used. With the global route parametrized in the arc-length, candidate local paths are computed and the optimal one is selected by cost functions. Results: Different routes are used to show that the number and distribution of waypoints are highly correlated to a satisfactory arc-length parameterization of the global route, which is essential to the proper behavior of the path planning technique. Conclusions: The cubic splines approach to the path planning problem successfully generates the global and local paths. Nevertheless, the use of raw data from the online map services proved to be unfeasible due to inconsistencies in the data. Hence, a preprocessing stage of the raw data is proposed to guarantee the good behavior and robustness of the technique.
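
    A minimal sketch of the spline part of such a pipeline is given below: waypoints are interpolated with SciPy's CubicSpline, parametrized by cumulative chord length as an approximation of arc length. The waypoints are hypothetical, and the candidate-path generation and cost functions are not reproduced.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical waypoints (e.g. from an online map service), in metres.
wp = np.array([[0.0, 0.0], [25.0, 10.0], [60.0, 12.0], [90.0, 40.0]])

# Approximate arc-length parameter via cumulative chord length.
d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(wp, axis=0), axis=1))]

path = CubicSpline(d, wp, axis=0)          # one spline per coordinate, parametrised by s
s = np.linspace(0.0, d[-1], 200)
xy = path(s)                               # densely sampled global route

d1 = path(s, 1)                            # first derivative -> tangent vectors
heading = np.arctan2(d1[:, 1], d1[:, 0])   # orientation along the route
```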

  7. A modified linear algebraic approach to electron scattering using cubic splines

    International Nuclear Information System (INIS)

    Kinney, R.A.

    1986-01-01

    A modified linear algebraic approach to the solution of the Schrödinger equation for low-energy electron scattering is presented. The method uses a piecewise cubic-spline approximation of the wavefunction. Results in the static-potential and the static-exchange approximations for e⁻ + H s-wave scattering are compared with unmodified linear algebraic and variational linear algebraic methods. (author)

  8. Approximating prediction uncertainty for random forest regression models

    Science.gov (United States)

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forest are increasingly used for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
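
    One common heuristic for approximating random forest prediction uncertainty, not necessarily the approach taken in the study above, is the spread of the individual tree predictions, as sketched below with scikit-learn.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=6, noise=10.0, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# Spread of the individual tree predictions as a rough proxy for prediction uncertainty.
tree_preds = np.stack([tree.predict(X[:5]) for tree in rf.estimators_])
mean = tree_preds.mean(axis=0)
std = tree_preds.std(axis=0)   # larger std -> less agreement among trees
print(mean, std)
```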

  9. Profile-driven regression for modeling and runtime optimization of mobile networks

    DEFF Research Database (Denmark)

    McClary, Dan; Syrotiuk, Violet; Kulahci, Murat

    2010-01-01

    Computer networks often display nonlinear behavior when examined over a wide range of operating conditions. There are few strategies available for modeling such behavior and optimizing such systems as they run. Profile-driven regression is developed and applied to modeling and runtime optimization...... of throughput in a mobile ad hoc network, a self-organizing collection of mobile wireless nodes without any fixed infrastructure. The intermediate models generated in profile-driven regression are used to fit an overall model of throughput, and are also used to optimize controllable factors at runtime. Unlike...

  10. Accounting for measurement error in log regression models with applications to accelerated testing.

    Science.gov (United States)

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
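
    As a generic illustration of the Iteratively Re-weighted Least Squares idea mentioned above (the paper's specific weighting scheme for the measurement-error log model is not reproduced), the sketch below alternates a weighted least-squares solve with a residual-based weight update, using Huber-style weights purely as an example.

```python
import numpy as np

def irls(X, y, n_iter=25, delta=1.0):
    """Generic IRLS: alternate a weighted least-squares solve with a
    residual-based weight update (Huber-style weights, purely illustrative)."""
    w = np.ones_like(y, dtype=float)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)     # weighted least-squares step
        r = y - X @ beta
        w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
    return beta

# Toy usage: intercept plus one regressor.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=50)
print(irls(X, y))
```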

  11. Accounting for measurement error in log regression models with applications to accelerated testing.

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    Full Text Available In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  12. OPLS statistical model versus linear regression to assess sonographic predictors of stroke prognosis.

    Science.gov (United States)

    Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi

    2012-01-01

    The objective of the present study was to assess the comparative applicability of the orthogonal projections to latent structures (OPLS) statistical model vs traditional linear regression in order to investigate the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation in the first week of admission and again six months later. All data were primarily analyzed using simple linear regression and later considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single vessel involvement as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.

  13. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    Science.gov (United States)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model that combines a multiple linear regression model with the fuzzy c-means method. This research involved the relationship between 20 topsoil variates, analyzed prior to planting, and paddy yields at standard fertilizer rates. The data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model, with a lower mean square error.
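
    A minimal cluster-then-regress sketch in the same spirit is given below, using scikit-learn's hard k-means as a stand-in for fuzzy c-means and fitting one linear regression per cluster; the data are simulated and unrelated to the MARDI trials.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Toy soil covariates X and paddy yield y (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = 2.0 + X @ rng.normal(size=5) + rng.normal(scale=0.5, size=300)

# Step 1: cluster the yields (hard k-means as a stand-in for fuzzy c-means).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(y.reshape(-1, 1))

# Step 2: fit a separate multiple linear regression within each cluster.
models = {c: LinearRegression().fit(X[labels == c], y[labels == c]) for c in (0, 1)}
for c, m in models.items():
    print(c, m.score(X[labels == c], y[labels == c]))
```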

  14. Vector splines on the sphere with application to the estimation of vorticity and divergence from discrete, noisy data

    Science.gov (United States)

    Wahba, G.

    1982-01-01

    Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.

  15. Direct modeling of regression effects for transition probabilities in the progressive illness-death model

    DEFF Research Database (Denmark)

    Azarang, Leyla; Scheike, Thomas; de Uña-Álvarez, Jacobo

    2017-01-01

    In this work, we present direct regression analysis for the transition probabilities in the possibly non-Markov progressive illness–death model. The method is based on binomial regression, where the response is the indicator of the occupancy for the given state along time. Randomly weighted score...

  16. Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data

    OpenAIRE

    Garc?a Nieto, Paulino Jos?; Garc?a-Gonzalo, Esperanza; Ord??ez Gal?n, Celestino; Bernardo S?nchez, Antonio

    2016-01-01

    Milling cutters are important cutting tools used in milling machines to perform milling operations, which are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model was based on the optimization tool termed artificial bee colony (ABC) in combination with multivariate adaptive regression splines (MARS) technique. This optimization mechanism i...
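
    Assuming the third-party py-earth package as a MARS implementation, a minimal fit is sketched below on simulated stand-in data; the ABC hyperparameter optimization and the actual milling-run measurements are not reproduced.

```python
import numpy as np
from pyearth import Earth   # third-party MARS implementation (py-earth package)

# Toy stand-in for the milling experiment: cutting parameters -> tool wear.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))          # e.g. speed, feed, depth of cut (scaled)
y = np.maximum(0, X[:, 0] - 0.4) * 2.0 + X[:, 1] ** 2 + rng.normal(0, 0.05, 200)

mars = Earth(max_degree=2)              # allow hinge-function interactions
mars.fit(X, y)
print(mars.summary())                   # selected basis functions and fit statistics
```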

  17. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    The Poisson distribution is a discrete distribution for count data with a single parameter that defines both the mean and the variance. Poisson regression assumes that the mean and variance are the same (equidispersion). Nonetheless, some count data do not satisfy this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, in turn, to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution. When there is over-dispersion, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a global model for all locations. On the other hand, each location has different geographic, social, cultural and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function for each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimates of the GWBPIGR model are obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing of the GWBPIGR model uses the Maximum Likelihood Ratio Test (MLRT) method.

  18. Thin-plate spline analysis of the effects of face mask treatment in children with maxillary retrognathism.

    Science.gov (United States)

    Chang, Jenny Zwei-Chieng; Liu, Pao-Hsin; Chen, Yi-Jane; Yao, Jane Chung-Chen; Chang, Hong-Po; Chang, Chih-Han; Chang, Frank Hsin-Fu

    2006-02-01

    Face mask therapy is indicated for growing patients who suffer from maxillary retrognathia. Most previous studies used conventional cephalometric analysis to evaluate the effects of face mask treatment. Cephalometric analysis has been shown to be insufficient for complex craniofacial configurations. The purpose of this study was to investigate changes in the craniofacial structure of children with maxillary retrognathism following face mask treatment by means of thin-plate spline analysis. Thirty children with skeletal Class III malocclusions who had been treated with face masks were compared with a group of 30 untreated gender-matched, age-matched, observation period-matched, and craniofacial configuration-matched subjects. Average geometries, scaled to an equivalent size, were generated by means of Procrustes analysis. Thin-plate spline analysis was then performed for localization of the shape changes. Face mask treatment induced a forward displacement of the maxilla, a counterclockwise rotation of the palatal plane, a horizontal compression of the anterior border of the symphysis and the condylar region, and a downward deformation of the menton. The cranial base exhibited a counterclockwise deformation as a whole. We conclude that thin-plate spline analysis is a valuable supplement to conventional cephalometric analysis.

  19. Alignment of large image series using cubic B-splines tessellation: application to transmission electron microscopy data.

    Science.gov (United States)

    Dauguet, Julien; Bock, Davi; Reid, R Clay; Warfield, Simon K

    2007-01-01

    3D reconstruction from serial 2D microscopy images depends on non-linear alignment of serial sections. For some structures, such as the neuronal circuitry of the brain, very large images at very high resolution are necessary to permit reconstruction. These very large images prevent the direct use of classical registration methods. We propose in this work a method to deal with the non-linear alignment of arbitrarily large 2D images using the finite support properties of cubic B-splines. After initial affine alignment, each large image is split into a grid of smaller overlapping sub-images, which are individually registered using cubic B-splines transformations. Inside the overlapping regions between neighboring sub-images, the coefficients of the knots controlling the B-splines deformations are blended, to create a virtual large grid of knots for the whole image. The sub-images are resampled individually, using the new coefficients, and assembled together into a final large aligned image. We evaluated the method on a series of large transmission electron microscopy images and our results indicate significant improvements compared to both manual and affine alignment.

  20. Can We Use Regression Modeling to Quantify Mean Annual Streamflow at a Global-Scale?

    Science.gov (United States)

    Barbarossa, V.; Huijbregts, M. A. J.; Hendriks, J. A.; Beusen, A.; Clavreul, J.; King, H.; Schipper, A.

    2016-12-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF using observations of discharge and catchment characteristics from 1,885 catchments worldwide, ranging from 2 to 10⁶ km² in size. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB [van Beek et al., 2011] by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area, mean annual precipitation and air temperature, average slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error values were lower (0.29 - 0.38 compared to 0.49 - 0.57) and the modified index of agreement was higher (0.80 - 0.83 compared to 0.72 - 0.75). Our regression model can be applied globally at any point of the river network, provided that the input parameters are within the range of values employed in the calibration of the model. The performance is reduced for water scarce regions and further research should focus on improving such an aspect for regression-based global hydrological models.

  1. Modeling Governance KB with CATPCA to Overcome Multicollinearity in the Logistic Regression

    Science.gov (United States)

    Khikmah, L.; Wijayanto, H.; Syafitri, U. D.

    2017-04-01

    A problem often encountered in logistic regression modeling is multicollinearity. Multicollinearity between explanatory variables results in biased parameter estimates and can also lead to classification errors. Stepwise regression is commonly used to overcome multicollinearity. Another method, which keeps all variables in the prediction, is Principal Component Analysis (PCA). However, classical PCA is only suitable for numeric data; when the data are categorical, one way to solve the problem is Categorical Principal Component Analysis (CATPCA). The data used in this research were part of the Indonesia Demographic and Health Survey (IDHS) 2012. This research focuses on the characteristics of women using contraceptive methods. Classification results were evaluated using Area Under Curve (AUC) values; the higher the AUC value, the better. Based on AUC values, the classification of the contraceptive method using the stepwise method (58.66%) is better than the logistic regression model (57.39%) and CATPCA (57.39%). Evaluation of the logistic regression results using sensitivity shows the opposite, where the CATPCA method (99.79%) is better than the logistic regression method (92.43%) and stepwise (92.05%). Because this study focuses on the major class (using a contraceptive method), the selected model is CATPCA, as it raises the accuracy for the major class.

  2. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  3. Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.

    Science.gov (United States)

    Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco

    2015-04-20

    Differential phase contrast imaging using grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize the phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which adopts the properties of B-spline and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase image with enhanced and higher soft tissue contrast compared to conventional absorption-based approach, which can potentially provide useful information to mammographic investigations.

  4. [Non-rigid medical image registration based on mutual information and thin-plate spline].

    Science.gov (United States)

    Cao, Guo-gang; Luo, Li-min

    2009-01-01

    To get precise and complete details, the comparison of different images is needed in medical diagnosis and computer assisted treatment. Image registration is the basis of such comparison, but regular rigid registration does not satisfy clinical requirements. A non-rigid medical image registration method based on mutual information and thin-plate splines is presented. First, the two images are registered globally based on mutual information; second, the reference image and the globally registered image are divided into blocks, which are then registered; next, the thin-plate spline transformation is obtained from the shifts of the block centers; finally, the transformation is applied to the globally registered image. The results show that the method is more precise than global rigid registration based on mutual information, and, by obtaining the control points of the thin-plate transformation automatically, it reduces the complexity of selecting control points and better satisfies clinical requirements.

  5. Regression to Causality : Regression-style presentation influences causal attribution

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    Our experiment implies that scholars using regression models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally.

  6. Robust Bayesian linear regression with application to an analysis of the CODATA values for the Planck constant

    Science.gov (United States)

    Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens

    2018-02-01

    Weighted least-squares estimation is commonly applied in metrology to fit models to measurements that are accompanied with quoted uncertainties. The weights are chosen in dependence on the quoted uncertainties. However, when data and model are inconsistent in view of the quoted uncertainties, this procedure does not yield adequate results. When it can be assumed that all uncertainties ought to be rescaled by a common factor, weighted least-squares estimation may still be used, provided that a simple correction of the uncertainty obtained for the estimated model is applied. We show that these uncertainties and credible intervals are robust, as they do not rely on the assumption of a Gaussian distribution of the data. Hence, common software for weighted least-squares estimation may still safely be employed in such a case, followed by a simple modification of the uncertainties obtained by that software. We also provide means of checking the assumptions of such an approach. The Bayesian regression procedure is applied to analyze the CODATA values for the Planck constant published over the past decades in terms of three different models: a constant model, a straight line model and a spline model. Our results indicate that the CODATA values may not have yet stabilized.
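
    A minimal numeric sketch of weighted least-squares for a constant model with a common rescaling of the quoted uncertainties (a Birge-ratio-style correction) is given below; the values are illustrative and are not the actual CODATA adjustments.

```python
import numpy as np

# Hypothetical measured values x_i with quoted standard uncertainties u_i.
x = np.array([6.6260702e-34, 6.6260693e-34, 6.6260701e-34, 6.62607015e-34])
u = np.array([2.9e-41, 2.0e-41, 8.1e-42, 1.3e-42])

w = 1.0 / u**2
xbar = np.sum(w * x) / np.sum(w)          # weighted least-squares estimate (constant model)
u_xbar = np.sqrt(1.0 / np.sum(w))

# If data and model are inconsistent, rescale the uncertainty by a common factor.
chi2 = np.sum(w * (x - xbar) ** 2)
birge = np.sqrt(chi2 / (x.size - 1))      # Birge ratio
u_corrected = u_xbar * max(birge, 1.0)    # simple correction of the quoted uncertainty
print(xbar, u_corrected)
```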

  7. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
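
    As a rough scikit-learn analogue of the linear versus non-linear contrast described above (with an RBF kernel ridge standing in for RKHS-style regression on markers), see the sketch below; the marker matrix and phenotype are simulated, not the CIMMYT DArT data.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score

# Toy binary marker matrix and a phenotype with a non-additive component.
rng = np.random.default_rng(0)
M = rng.integers(0, 2, size=(300, 200)).astype(float)
y = M[:, :10].sum(axis=1) + 2.0 * M[:, 0] * M[:, 1] + rng.normal(0, 1.0, 300)

linear = BayesianRidge()
kernel = KernelRidge(kernel='rbf', alpha=1.0, gamma=0.01)

for name, est in [('linear', linear), ('rbf kernel', kernel)]:
    r2 = cross_val_score(est, M, y, cv=5, scoring='r2')
    print(name, r2.mean())
```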

  8. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite better performance of disease risk score methods than logistic regression and propensity score models in small events per coefficient settings, bias and coverage still deviated from nominal.
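
    A minimal sketch of the propensity-score and stabilized inverse-probability-weighting ingredients is given below with scikit-learn; the simulated confounders and treatment are illustrative only and do not follow the simulation design of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated confounders and a binary treatment assignment (illustrative only).
rng = np.random.default_rng(0)
Xc = rng.normal(size=(2000, 4))
p_treat = 1 / (1 + np.exp(-(Xc @ np.array([0.5, -0.3, 0.2, 0.1]))))
treat = rng.binomial(1, p_treat)

# Propensity score model and stabilized inverse probability weights.
ps = LogisticRegression(max_iter=1000).fit(Xc, treat).predict_proba(Xc)[:, 1]
p_marginal = treat.mean()
sw = np.where(treat == 1, p_marginal / ps, (1 - p_marginal) / (1 - ps))
print(sw.mean())   # stabilized weights should average close to 1
```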

  9. Regression Model to Predict Global Solar Irradiance in Malaysia

    Directory of Open Access Journals (Sweden)

    Hairuniza Ahmed Kutty

    2015-01-01

    Full Text Available A novel regression model is developed to estimate the monthly global solar irradiance in Malaysia. The model is developed based on different available meteorological parameters, including temperature, cloud cover, rain precipitate, relative humidity, wind speed, pressure, and gust speed, by implementing regression analysis. This paper reports on the details of the analysis of the effect of each prediction parameter to identify the parameters that are relevant to estimating global solar irradiance. In addition, the proposed model is compared in terms of the root mean square error (RMSE), mean bias error (MBE), and the coefficient of determination (R²) with other models available from literature studies. Seven models based on single parameters (PM1 to PM7) and five multiple-parameter models (PM7 to PM12) are proposed. The new models perform well, with RMSE ranging from 0.429% to 1.774%, R² ranging from 0.942 to 0.992, and MBE ranging from −0.1571% to 0.6025%. In general, cloud cover significantly affects the estimation of global solar irradiance. However, cloud cover in Malaysia lacks sufficient influence when included into multiple-parameter models although it performs fairly well in single-parameter prediction models.

  10. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power per kilogram of a certain radioactive isotope with the polynomial regression method, the paper first demonstrates the broad usage of the polynomial function and deduces its parameters with the ordinary least squares estimate. Then the significance test method for the polynomial regression function is derived, considering the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and the significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)
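
    A minimal sketch of a polynomial regression fit with an overall significance test of the regression function (the F test reported by statsmodels OLS) is shown below on simulated decay-heat-style data; the numbers are purely illustrative.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical decay-heat data: power per kilogram versus time.
t = np.linspace(0, 10, 30)
p = 5.0 - 0.8 * t + 0.05 * t**2 + np.random.default_rng(1).normal(0, 0.1, t.size)

X = sm.add_constant(np.column_stack([t, t**2]))   # quadratic polynomial regression
res = sm.OLS(p, X).fit()
print(res.params)                  # least-squares estimates of the polynomial coefficients
print(res.fvalue, res.f_pvalue)    # overall significance test of the regression function
```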

  11. Logistic regression for risk factor modelling in stuttering research.

    Science.gov (United States)

    Reed, Phil; Wu, Yaqiong

    2013-06-01

    To outline the uses of logistic regression and other statistical methods for risk factor analysis in the context of research on stuttering. The principles underlying the application of a logistic regression are illustrated, and the types of questions to which such a technique has been applied in the stuttering field are outlined. The assumptions and limitations of the technique are discussed with respect to existing stuttering research, and with respect to formulating appropriate research strategies to accommodate these considerations. Finally, some alternatives to the approach are briefly discussed. The way the statistical procedures are employed is demonstrated with some hypothetical data. Research into several practical issues concerning stuttering could benefit if risk factor modelling were used. Important examples are early diagnosis, prognosis (whether a child will recover or persist) and assessment of treatment outcome. After reading this article you will: (a) Summarize the situations in which logistic regression can be applied to a range of issues about stuttering; (b) Follow the steps in performing a logistic regression analysis; (c) Describe the assumptions of the logistic regression technique and the precautions that need to be checked when it is employed; (d) Be able to summarize its advantages over other techniques like estimation of group differences and simple regression. Copyright © 2012 Elsevier Inc. All rights reserved.
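
    As a small illustration of how such a risk-factor analysis is typically set up, the sketch below fits a logistic regression with statsmodels and reports odds ratios; the recovery/persistence data and the two candidate risk factors are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: recovery (1) vs persistence (0) with two candidate risk factors.
rng = np.random.default_rng(0)
n = 400
age_onset = rng.normal(3.5, 1.0, n)
family_history = rng.binomial(1, 0.3, n)
logit = 1.0 - 0.4 * age_onset - 0.8 * family_history
recovered = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({'recovered': recovered, 'age_onset': age_onset,
                   'family_history': family_history})

res = smf.logit('recovered ~ age_onset + family_history', data=df).fit(disp=False)
print(np.exp(res.params))   # odds ratios for each risk factor
```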

  12. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  13. Explicit Gaussian quadrature rules for C^1 cubic splines with symmetrically stretched knot sequence

    KAUST Repository

    Ait-Haddou, Rachid; Barton, Michael; Calo, Victor M.

    2015-01-01

    We provide explicit expressions for quadrature rules on the space of C^1 cubic splines with non-uniform, symmetrically stretched knot sequences. The quadrature nodes and weights are derived via an explicit recursion that avoids an intervention

  14. Advanced statistics: linear regression, part I: simple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.

  15. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.

  16. Crime Modeling using Spatial Regression Approach

    Science.gov (United States)

    Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.

    2018-01-01

    Acts of criminality in Indonesia increase in both variety and quantity every year, including murder, rape, assault, vandalism, theft, fraud, fencing, and other cases that make people feel unsafe. The risk of society being exposed to crime is measured by the number of cases reported to the police; the more reports made to the police, the higher the crime in the region. In this research, criminality in South Sulawesi, Indonesia is modeled with the risk of society being exposed to crime as the dependent variable. Modelling is done with an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor people, GDP per capita, unemployment and the human development index (HDI). The spatial regression analysis shows that there are no spatial dependencies, in either the lag or the errors, in South Sulawesi.

  17. Quiet Clean Short-haul Experimental Engine (QCSEE). Ball spline pitch change mechanism design report

    Science.gov (United States)

    1978-01-01

    Detailed design parameters are presented for a variable-pitch change mechanism. The mechanism is a mechanical system containing a ball screw/spline driving two counteracting master bevel gears meshing with pinion gears attached to each of the 18 fan blades.

  18. Determining factors influencing survival of breast cancer by fuzzy logistic regression model.

    Science.gov (United States)

    Nikbakht, Roya; Bahrampour, Abbas

    2017-01-01

    Fuzzy logistic regression can be used to identify influential factors for a disease. This study explores the factors that actually predict survival in breast cancer patients. We used breast cancer data collected by the cancer registry of Kerman University of Medical Sciences during the period 2000-2007. Variables such as morphology, grade, age, and treatments (surgery, radiotherapy, and chemotherapy) were entered into the fuzzy logistic regression model. Model performance was assessed in terms of the mean degree of membership (MDM). The results showed that almost 41% of patients were in the neoplasm and malignant group, and more than two-thirds of them were still alive after 5 years of follow-up. Based on the fuzzy logistic model, the most important factors influencing survival were chemotherapy, morphology, and radiotherapy, in that order. Furthermore, the MDM criterion indicates that the fuzzy logistic regression fits the data well (MDM = 0.86). The model showed that chemotherapy is more important than radiotherapy for survival of patients with breast cancer. Another capability of this model is the calculation of possibilistic odds of survival in cancer patients. The results of this study can be applied in clinical research, and since few studies have applied fuzzy logistic models, we recommend using this model in various research areas.

  19. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  20. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  1. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.

  2. Using the classical linear regression model in analysis of the dependences of conveyor belt life

    Directory of Open Access Journals (Sweden)

    Miriam Andrejiová

    2013-12-01

    Full Text Available The paper deals with the classical linear regression model of the dependence of conveyor belt life on some selected parameters: thickness of paint layer, width and length of the belt, conveyor speed and quantity of transported material. The first part of the article is about regression model design, point and interval estimation of parameters, verification of statistical significance of the model, and about the parameters of the proposed regression model. The second part of the article deals with identification of influential and extreme values that can have an impact on estimation of regression model parameters. The third part focuses on assumptions of the classical regression model, i.e. on verification of independence assumptions, normality and homoscedasticity of residuals.
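
    The assumption checks mentioned in the third part of the article (independence, normality and homoscedasticity of residuals) can be scripted directly. The sketch below is a minimal Python illustration on synthetic stand-in data, not the authors' conveyor-belt dataset; the parameter values and checks chosen are illustrative assumptions.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Synthetic stand-in: y = belt life, columns = intercept plus two operating parameters.
        X = np.column_stack([np.ones(60), rng.uniform(5, 15, 60), rng.uniform(800, 1600, 60)])
        y = X @ np.array([200.0, 12.0, -0.05]) + rng.normal(0, 8, 60)

        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        fitted = X @ beta

        # Normality of residuals (Shapiro-Wilk test).
        print("Shapiro-Wilk p-value:", stats.shapiro(resid).pvalue)

        # Homoscedasticity: crude check via correlation of |residuals| with fitted values.
        print("corr(|resid|, fitted):", np.corrcoef(np.abs(resid), fitted)[0, 1])

        # Independence: Durbin-Watson-style statistic from lag-1 residual differences.
        dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
        print("Durbin-Watson statistic:", dw)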

  3. Modeling the frequency of opposing left-turn conflicts at signalized intersections using generalized linear regression models.

    Science.gov (United States)

    Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei

    2014-01-01

    The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. The opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of the different models was compared. The frequency of traffic conflicts follows a negative binomial distribution. The linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions; accordingly, the effects of conflicting traffic volumes on conflict frequency vary across traffic conditions. The occurrences of traffic conflicts at signalized intersections can be modeled using generalized linear regression models. The use of conflict predictive models has the potential to expand the uses of surrogate safety measures in safety estimation and evaluation.
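
    A negative binomial model of the kind described can be fitted, for example, with statsmodels; the sketch below uses simulated overdispersed conflict counts and invented volume ranges as stand-ins for the field data, so the coefficients have no connection to the study's results.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        # Hypothetical per-cycle opposing left-turn conflict counts and conflicting volumes.
        left_turn_vol = rng.uniform(50, 400, 120)
        opposing_vol = rng.uniform(200, 1200, 120)
        mu = np.exp(-4.0 + 0.004 * left_turn_vol + 0.002 * opposing_vol)
        conflicts = rng.poisson(mu * rng.gamma(2.0, 0.5, 120))   # gamma mixing -> overdispersion

        X = sm.add_constant(np.column_stack([left_turn_vol, opposing_vol]))
        nb_model = sm.GLM(conflicts, X, family=sm.families.NegativeBinomial()).fit()
        print(nb_model.summary())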

  4. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed whenever one is interested in the relationship between a response variable and covariates. The response may depend on the covariates through some unknown function. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, the augmented estimating method is often employed to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work because the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model that incorporates the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
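
    For the monotone-regression building block itself (the PAVA fit, before any auxiliary-information adjustment), a minimal illustration with scikit-learn's isotonic regression might look as follows; the data are synthetic and the example does not reproduce the empirical-likelihood calibration proposed in the paper.

        import numpy as np
        from sklearn.isotonic import IsotonicRegression

        rng = np.random.default_rng(3)
        x = np.sort(rng.uniform(0, 10, 80))
        y = np.log1p(x) + rng.normal(0, 0.15, 80)   # monotone trend plus noise

        # PAVA-based fit: a monotonically increasing step function.
        iso = IsotonicRegression(increasing=True)
        y_fit = iso.fit_transform(x, y)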

  5. Fuzzy topological digital space and digital fuzzy spline of electroencephalography during epileptic seizures

    Science.gov (United States)

    Shah, Mazlina Muzafar; Wahab, Abdul Fatah

    2017-08-01

    Epilepsy occurs when there is a temporary electrical disturbance in a group of brain cells (neurons). The recording of electrical signals from the human brain, collected from the scalp, is called electroencephalography (EEG). When the EEG is treated in digital format and in fuzzy form, it becomes fuzzy digital space data. The purpose of this research is to identify the regions (curves and surfaces) in fuzzy digital space affected by an epileptic seizure inside the patient's brain. The main focus of this research is to generalize the fuzzy topological digital space, its definition, basic operations and properties, using digital fuzzy sets and their operations. Within fuzzy digital space, the theory of the digital fuzzy spline can be introduced to replace the grid data used previously and obtain better results. As a result, the EEG record can be treated as a fuzzy topological digital space, and this type of data can be used to interpolate the digital fuzzy spline.

  6. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Full Text Available Market segmentation is one of the key concepts of modern marketing. Its main goal is to create groups (segments) of customers with similar characteristics, needs, wishes and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and thereby gain a short- or long-term competitive advantage on the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and CHAID analysis. The logistic regression model was used to select, from the initial pool of eleven variables, those that are statistically significant for explaining the dependent variable. The selected variables were then included in the CHAID procedure, which generated the predictive market segmentation model. The model results are presented for a concrete empirical example in the following form: summary model results, CHAID tree, gain chart, index chart, and risk and classification tables.
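
    The variable-selection step described above (retaining predictors that are statistically significant in a logistic regression before passing them to CHAID) can be sketched as follows; the customer data, the number of variables and the 0.05 threshold are illustrative assumptions, and the CHAID step itself is not shown.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(11)
        # Hypothetical customer attributes; only the first two actually drive the purchase decision.
        X = rng.normal(size=(500, 11))
        logit_p = 0.8 * X[:, 0] - 1.2 * X[:, 1]
        purchased = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

        fit = sm.Logit(purchased, sm.add_constant(X)).fit(disp=False)
        significant = np.where(fit.pvalues[1:] < 0.05)[0]   # variables retained for the CHAID step
        print("statistically significant predictors:", significant)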

  7. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  8. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.

    Science.gov (United States)

    Madarang, Krish J; Kang, Joo-Hyon

    2014-06-01

    Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. In order to quantify and manage the impacts of stormwater runoff on the environment, predictive models and mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables, such as pollutant loads and concentrations. However, it has been a controversial issue among many studies whether to consider ADD as an important variable in predicting stormwater discharge characteristics. In this study, we examined the accuracy of general linear regression models in predicting discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was simulated for 55 storm events, and the results for total suspended solid (TSS) discharge loads and event mean concentrations (EMC) were extracted. From these data, linear regression models were developed. R² and p-values of the regression of ADD for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMC in the multiple regression models. Regression may not provide the true effect of site-specific characteristics, due to uncertainty in the data. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
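
    The R² and p-value screening of a single predictor such as ADD can be reproduced with a one-line regression call; the storm-event numbers below are simulated placeholders, not the Gwangju monitoring data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        # Hypothetical storm-event records: antecedent dry days (ADD) vs. TSS load.
        add_days = rng.uniform(1, 30, 55)
        tss_load = 5.0 + 0.8 * add_days + rng.normal(0, 6, 55)

        res = stats.linregress(add_days, tss_load)
        print(f"R^2 = {res.rvalue**2:.3f}, p-value = {res.pvalue:.4f}")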

  9. Modeling of Output Characteristics of a UV Cu+ Ne-CuBr Laser

    Directory of Open Access Journals (Sweden)

    Snezhana Georgieva Gocheva-Ilieva

    2012-01-01

    Full Text Available This paper examines experimental data for a Ne-CuBr UV copper ion laser excited by a longitudinal pulsed discharge and emitting in a multiline regime. The flexible multivariate adaptive regression splines (MARS) method has been used to develop nonparametric regression models describing the laser output power and the service life of the devices. The models have been constructed as explicit functions of 9 basic input laser characteristics. The obtained models account for local nonlinearities of the relationships within the various multivariate subregions. The best MARS models built account for over 98% of the data. The models are used to estimate the investigated output characteristics of existing UV lasers. The capabilities for using the models to predict existing and future experiments have been demonstrated. Specific analyses comparing the models with actual experiments have been presented. The obtained results are applicable to guiding and planning the engineering experiment. The modeling methodology can be applied to a wide range of similar lasers and laser devices.
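
    A MARS fit of this general kind can be sketched with the py-earth package, a separately installed scikit-learn-compatible implementation whose availability depends on the environment; this is an assumption, not the software used in the paper. The nine input columns below are random placeholders for the laser characteristics.

        import numpy as np
        # MARS is not part of scikit-learn itself; py-earth must be installed separately.
        from pyearth import Earth

        rng = np.random.default_rng(5)
        # Synthetic stand-in for the 9 input characteristics and the output power.
        X = rng.uniform(0, 1, size=(150, 9))
        y = (3.0 * np.maximum(0, X[:, 0] - 0.4)
             - 2.0 * np.maximum(0, 0.6 - X[:, 1])
             + 0.1 * rng.normal(size=150))

        mars = Earth(max_degree=2)      # allow pairwise interactions of hinge basis functions
        mars.fit(X, y)
        print(mars.summary())           # lists the selected hinge basis functions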

  10. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi

    2014-01-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both

  11. Isogeometric finite element data structures based on Bézier extraction of T-splines

    NARCIS (Netherlands)

    Scott, M.A.; Borden, M.J.; Verhoosel, C.V.; Sederberg, T.W.; Hughes, T.J.R.

    2011-01-01

    We develop finite element data structures for T-splines based on Bézier extraction generalizing our previous work for NURBS. As in traditional finite element analysis, the extracted Bézier elements are defined in terms of a fixed set of polynomial basis functions, the so-called Bernstein basis. The

  12. Evaluation of a multiple linear regression model and SARIMA model in forecasting heat demand for district heating system

    International Nuclear Information System (INIS)

    Fang, Tingting; Lahdelma, Risto

    2016-01-01

    Highlights: • A social factor is considered in the linear regression models besides the weather data. • All coefficients of the linear regression models are optimized simultaneously. • SARIMA combined with linear regression is used to forecast the heat demand. • The accuracy of both the linear regression and time series models is evaluated. - Abstract: Forecasting heat demand is necessary for production and operation planning of district heating (DH) systems. In this study we first propose a simple regression model in which the hourly outdoor temperature and wind speed forecast the heat demand. A weekly rhythm of heat consumption, as a social component, is added to the model and significantly improves the accuracy. The other type of model is the seasonal autoregressive integrated moving average (SARIMA) model with exogenous variables, which combines weather factors as exogenous inputs with the historical heat consumption data as the dependent variable. One outstanding advantage of this model is that it pursues high accuracy for both long-term and short-term forecasts by considering both exogenous factors and the time series. The forecasting performance of both the linear regression models and the time series model is evaluated on real-life heat demand data for the city of Espoo in Finland by out-of-sample tests for the last 20 full weeks of the year. The results indicate that the proposed linear regression model (T168h), using a 168-h demand pattern with midweek holidays classified as Saturdays or Sundays, gives the highest accuracy and strong robustness among all the tested models for the tested forecasting horizon and corresponding data. Considering the parsimony of the input, the ease of use and the high accuracy, the proposed T168h model is the best in practice. The heat demand forecasting model can also be developed for individual buildings if automated meter reading customer measurements are available. This would allow forecasting the heat demand based on more accurate heat consumption
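
    A SARIMA-with-exogenous-regressors model of this general form can be set up with statsmodels' SARIMAX. The hourly weather and demand series below are simulated stand-ins, and the daily (24 h) seasonal period and (1,0,1) orders are illustrative choices, not the specification reported in the study.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(6)
        n = 24 * 7 * 8                                    # eight weeks of hourly data
        idx = pd.date_range("2015-01-01", periods=n, freq="H")
        temp = 5 + 10 * np.sin(2 * np.pi * np.arange(n) / 24) + rng.normal(0, 1, n)
        wind = rng.gamma(2.0, 2.0, n)
        heat = 120 - 3.0 * temp + 1.5 * wind + rng.normal(0, 5, n)

        exog = pd.DataFrame({"temp": temp, "wind": wind}, index=idx)
        demand = pd.Series(heat, index=idx)

        # SARIMA errors with a daily seasonal period and weather regressors.
        model = SARIMAX(demand, exog=exog, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
        fit = model.fit(disp=False)

        # In practice, forecasted weather for the next 24 h would be supplied here;
        # the last observed day is reused purely to keep the sketch self-contained.
        forecast = fit.forecast(steps=24, exog=exog.iloc[-24:])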

  13. Optimal quadrature rules for odd-degree spline spaces and their application to tensor-product-based isogeometric analysis

    KAUST Repository

    Barton, Michael

    2016-03-14

    We introduce optimal quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. Using the homotopy continuation concept (Bartoň and Calo, 2016) that transforms optimal quadrature rules from source spaces to target spaces, we derive optimal rules for splines defined on finite domains. Starting with the classical Gaussian quadrature for polynomials, which is an optimal rule for a discontinuous odd-degree space, we derive rules for target spaces of higher continuity. We further show how the homotopy methodology handles cases where the source and target rules require different numbers of optimal quadrature points. We demonstrate it by deriving optimal rules for various odd-degree spline spaces, particularly with non-uniform knot sequences and non-uniform multiplicities. We also discuss convergence of our rules to their asymptotic counterparts, that is, the analogues of the midpoint rule of Hughes et al. (2010), that are exact and optimal for infinite domains. For spaces of low continuities, we numerically show that the derived rules quickly converge to their asymptotic counterparts as the weights and nodes of a few boundary elements differ from the asymptotic values.

  14. Optimal quadrature rules for odd-degree spline spaces and their application to tensor-product-based isogeometric analysis

    KAUST Repository

    Barton, Michael; Calo, Victor M.

    2016-01-01

    We introduce optimal quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. Using the homotopy continuation concept (Bartoň and Calo, 2016) that transforms optimal quadrature rules from source spaces to target spaces, we derive optimal rules for splines defined on finite domains. Starting with the classical Gaussian quadrature for polynomials, which is an optimal rule for a discontinuous odd-degree space, we derive rules for target spaces of higher continuity. We further show how the homotopy methodology handles cases where the source and target rules require different numbers of optimal quadrature points. We demonstrate it by deriving optimal rules for various odd-degree spline spaces, particularly with non-uniform knot sequences and non-uniform multiplicities. We also discuss convergence of our rules to their asymptotic counterparts, that is, the analogues of the midpoint rule of Hughes et al. (2010), that are exact and optimal for infinite domains. For spaces of low continuities, we numerically show that the derived rules quickly converge to their asymptotic counterparts as the weights and nodes of a few boundary elements differ from the asymptotic values.

  15. Validation of regression models for nitrate concentrations in the upper groundwater in sandy soils

    International Nuclear Information System (INIS)

    Sonneveld, M.P.W.; Brus, D.J.; Roelsma, J.

    2010-01-01

    For Dutch sandy regions, linear regression models have been developed that predict nitrate concentrations in the upper groundwater on the basis of residual nitrate contents in the soil in autumn. The objective of our study was to validate these regression models for one particular sandy region dominated by dairy farming. No data from this area were used for calibrating the regression models. The model was validated by additional probability sampling. This sample was used to estimate errors in 1) the predicted areal fractions where the EU standard of 50 mg l⁻¹ is exceeded for farms with low N surpluses (ALT) and farms with higher N surpluses (REF); 2) predicted cumulative frequency distributions of nitrate concentration for both groups of farms. Both the errors in the predicted areal fractions as well as the errors in the predicted cumulative frequency distributions indicate that the regression models are invalid for the sandy soils of this study area. - This study indicates that linear regression models that predict nitrate concentrations in the upper groundwater using residual soil N contents should be applied with care.

  16. A scalable block-preconditioning strategy for divergence-conforming B-spline discretizations of the Stokes problem

    KAUST Repository

    Cortes, Adriano Mauricio; Dalcin, Lisandro; Sarmiento, Adel; Collier, N.; Calo, Victor M.

    2016-01-01

    The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity-pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence

  17. Bivariate least squares linear regression: Towards a unified analytic formalism. I. Functional models

    Science.gov (United States)

    Caimmi, R.

    2011-08-01

    Concerning bivariate least squares linear regression, the classical approach pursued for functional models in earlier attempts ( York, 1966, 1969) is reviewed using a new formalism in terms of deviation (matrix) traces which, for unweighted data, reduce to usual quantities leaving aside an unessential (but dimensional) multiplicative factor. Within the framework of classical error models, the dependent variable relates to the independent variable according to the usual additive model. The classes of linear models considered are regression lines in the general case of correlated errors in X and in Y for weighted data, and in the opposite limiting situations of (i) uncorrelated errors in X and in Y, and (ii) completely correlated errors in X and in Y. The special case of (C) generalized orthogonal regression is considered in detail together with well known subcases, namely: (Y) errors in X negligible (ideally null) with respect to errors in Y; (X) errors in Y negligible (ideally null) with respect to errors in X; (O) genuine orthogonal regression; (R) reduced major-axis regression. In the limit of unweighted data, the results determined for functional models are compared with their counterparts related to extreme structural models i.e. the instrumental scatter is negligible (ideally null) with respect to the intrinsic scatter ( Isobe et al., 1990; Feigelson and Babu, 1992). While regression line slope and intercept estimators for functional and structural models necessarily coincide, the contrary holds for related variance estimators even if the residuals obey a Gaussian distribution, with the exception of Y models. An example of astronomical application is considered, concerning the [O/H]-[Fe/H] empirical relations deduced from five samples related to different stars and/or different methods of oxygen abundance determination. For selected samples and assigned methods, different regression models yield consistent results within the errors (∓ σ) for both
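
    For the case of errors in both X and Y, an orthogonal-distance (generalized orthogonal) regression can be run with scipy.odr. The data and the error scales below are synthetic, and the sketch is only a generic weighted errors-in-variables fit, not the trace-based formalism developed in the paper.

        import numpy as np
        from scipy import odr

        rng = np.random.default_rng(7)
        true_x = rng.uniform(0, 10, 40)
        x = true_x + rng.normal(0, 0.3, 40)                 # measurement errors in X
        y = 1.5 * true_x + 2.0 + rng.normal(0, 0.5, 40)     # measurement errors in Y

        def linear(beta, x):
            return beta[0] * x + beta[1]

        # Supply the error scales on both axes so the orthogonal distances are weighted.
        data = odr.RealData(x, y, sx=0.3, sy=0.5)
        result = odr.ODR(data, odr.Model(linear), beta0=[1.0, 0.0]).run()
        print("slope, intercept:", result.beta)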

  18. Modeling and prediction of flotation performance using support vector regression

    Directory of Open Access Journals (Sweden)

    Despotović Vladimir

    2017-01-01

    Full Text Available Continuous efforts have been made in recent years to improve the process of paper recycling, as it is of critical importance for saving wood, water and energy resources. Flotation deinking is considered to be one of the key methods for separation of ink particles from the cellulose fibres. Attempts to model the flotation deinking process have often resulted in complex models that are difficult to implement and use. In this paper, a model for prediction of flotation performance based on Support Vector Regression (SVR) is presented. Representative data samples were created in the laboratory under a variety of practical control variables for the flotation deinking process, including different reagents, pH values and flotation residence times. A predictive model trained on these data samples was created, and the flotation performance was assessed, showing that Support Vector Regression is a promising method even when the dataset used for training the model is limited.
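
    A Support Vector Regression model of this type can be assembled with scikit-learn; the flotation control settings and efficiency values below are invented placeholders, as are the kernel and regularization choices.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(8)
        # Hypothetical flotation controls: reagent dose, pH, residence time -> deinking efficiency.
        X = np.column_stack([rng.uniform(0.1, 1.0, 60),
                             rng.uniform(7, 10, 60),
                             rng.uniform(2, 12, 60)])
        y = 40 + 30 * X[:, 0] - 2 * (X[:, 1] - 8.5) ** 2 + 1.5 * X[:, 2] + rng.normal(0, 2, 60)

        # Standardize inputs before the RBF-kernel SVR fit.
        svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
        svr.fit(X, y)
        predicted_efficiency = svr.predict(X[:5])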

  19. Multivariate Hermite interpolation on scattered point sets using tensor-product expo-rational B-splines

    Science.gov (United States)

    Dechevsky, Lubomir T.; Bang, Børre; Lakså, Arne; Zanaty, Peter

    2011-12-01

    At the Seventh International Conference on Mathematical Methods for Curves and Surfaces, Tønsberg, Norway, in 2008, several new constructions for Hermite interpolation on scattered point sets in domains in R^n, n ∈ N, combined with smooth convex partition of unity for several general types of partitions of these domains were proposed in [1]. All of these constructions were based on a new type of B-splines, proposed by some of the authors several years earlier: expo-rational B-splines (ERBS) [3]. In the present communication we shall provide more details about one of these constructions: the one for the most general class of domain partitions considered. This construction is based on the use of two separate families of basis functions: one which has all the necessary Hermite interpolation properties, and another which has the necessary properties of a smooth convex partition of unity. The constructions of both of these two bases are well-known; the new part of the construction is the combined use of these bases for the derivation of a new basis which enjoys having all above-said interpolation and unity partition properties simultaneously. In [1] the emphasis was put on the use of radial basis functions in the definitions of the two initial bases in the construction; now we shall put the main emphasis on the case when these bases consist of tensor-product B-splines. This selection provides two useful advantages: (A) it is easier to compute higher-order derivatives while working in Cartesian coordinates; (B) it becomes clear that this construction becomes a far-going extension of tensor-product constructions. We shall provide 3-dimensional visualization of the resulting bivariate bases, using tensor-product ERBS. In the main tensor-product variant, we shall consider also replacement of ERBS with simpler generalized ERBS (GERBS) [2], namely, their simplified polynomial modifications: the Euler Beta-function B-splines (BFBS). One advantage of using BFBS instead of ERBS

  20. Discrete quintic spline for boundary value problem in plate deflation theory

    Science.gov (United States)

    Wong, Patricia J. Y.

    2017-07-01

    We propose a numerical scheme for a fourth-order boundary value problem arising from plate deflation theory. The scheme involves a discrete quintic spline, and it is of order 4 if a parameter takes a specific value, else it is of order 2. We also present a well known numerical example to illustrate the efficiency of our method as well as to compare with other numerical methods proposed in the literature.

  1. Forecasting daily meteorological time series using ARIMA and regression models

    Science.gov (United States)

    Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir

    2018-04-01

    The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 at four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. In our forecasting we used the Box-Jenkins and Holt-Winters seasonal autoregressive integrated moving-average methods, the autoregressive integrated moving-average with external regressors in the form of Fourier terms, and time series regression including trend and seasonality components, implemented with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.
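
    The Fourier-term idea (harmonic regressors for the annual cycle, which may then be combined with an ARIMA error structure) can be illustrated with a plain least-squares harmonic regression; the daily temperature series below is simulated, and the choice of two harmonics is an arbitrary assumption.

        import numpy as np

        rng = np.random.default_rng(9)
        n = 365 * 3
        t = np.arange(n)
        temp = 10 + 12 * np.sin(2 * np.pi * t / 365.25 - 1.5) + rng.normal(0, 3, n)

        # K harmonics of the annual cycle as external regressors, plus a linear trend.
        K = 2
        cols = [np.ones(n), t]
        for k in range(1, K + 1):
            cols.append(np.sin(2 * np.pi * k * t / 365.25))
            cols.append(np.cos(2 * np.pi * k * t / 365.25))
        X = np.column_stack(cols)

        beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
        fitted = X @ beta    # trend plus seasonal component of the daily series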

  2. Replica analysis of overfitting in regression models for time-to-event data

    Science.gov (United States)

    Coolen, A. C. C.; Barrett, J. E.; Paga, P.; Perez-Vicente, C. J.

    2017-09-01

    Overfitting, which happens when the number of parameters in a model is too large compared to the number of data points available for determining these parameters, is a serious and growing problem in survival analysis. While modern medicine presents us with data of unprecedented dimensionality, these data cannot yet be used effectively for clinical outcome prediction. Standard error measures in maximum likelihood regression, such as p-values and z-scores, are blind to overfitting, and even for Cox’s proportional hazards model (the main tool of medical statisticians), one finds in literature only rules of thumb on the number of samples required to avoid overfitting. In this paper we present a mathematical theory of overfitting in regression models for time-to-event data, which aims to increase our quantitative understanding of the problem and provide practical tools with which to correct regression outcomes for the impact of overfitting. It is based on the replica method, a statistical mechanical technique for the analysis of heterogeneous many-variable systems that has been used successfully for several decades in physics, biology, and computer science, but not yet in medical statistics. We develop the theory initially for arbitrary regression models for time-to-event data, and verify its predictions in detail for the popular Cox model.

  3. Value of the New Spline QTc Formula in Adjusting for Pacing-Induced Changes in Heart Rate

    Directory of Open Access Journals (Sweden)

    Hirmand Nouraei

    2018-01-01

    Full Text Available Aims. To determine whether a new QTc calculation based on a Spline fit model derived and validated from a large population remained stable in the same individual across a range of heart rates (HRs). Second, to determine whether this formula incorporating QRS duration can be of value in QT measurement, compared to direct measurement of the JT interval, during ventricular pacing. Methods. Individuals (N=30; 14 males) aged 51.9 ± 14.3 years were paced with decremental atrial followed by decremental ventricular pacing. Results. The new QTc changed minimally with shorter RR intervals, poorly fit even a linear relationship, and did not fit a second-order polynomial. In contrast, the Bazett formula (QTcBZT) showed a steep and marked increase in QTc with shorter RR intervals. For atrial pacing data, QTcBZT was fit best by a second-order polynomial and demonstrated a dramatic increase in QTc with progressively shorter RR intervals. For ventricular pacing, the new QTc minus QRS duration did not meaningfully change with HR, in contrast to the HR dependency of QTcBZT and the JT interval. Conclusion. The new QT correction formula is minimally impacted by HR acceleration induced by atrial or ventricular pacing. The Spline QTc minus QRS duration is an excellent method to estimate QTc in ventricular paced complexes.

  4. Cross-validation pitfalls when selecting and assessing regression and classification models.

    Science.gov (United States)

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
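
    A repeated nested cross-validation of the kind advocated here can be written compactly with scikit-learn: an inner grid search tunes the parameters and an outer loop, repeated over several random splits, assesses the tuned model. The estimator, parameter grid, number of folds and number of repeats below are illustrative assumptions, not the authors' QSAR setup.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
        from sklearn.svm import SVR

        X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)
        param_grid = {"C": [0.1, 1.0, 10.0], "epsilon": [0.01, 0.1, 1.0]}

        # Nested CV: the inner loop tunes parameters, the outer loop assesses the tuned model;
        # repeating with different seeds exposes the split-to-split variability noted above.
        outer_scores = []
        for seed in range(5):
            inner = KFold(n_splits=5, shuffle=True, random_state=seed)
            outer = KFold(n_splits=5, shuffle=True, random_state=seed + 100)
            search = GridSearchCV(SVR(), param_grid, cv=inner, scoring="neg_mean_squared_error")
            outer_scores.append(cross_val_score(search, X, y, cv=outer,
                                                scoring="neg_mean_squared_error"))

        print("per-repeat mean MSE:", [-s.mean() for s in outer_scores])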

  5. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    Science.gov (United States)

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-02-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of which were randomly selected as the “derivation cohort” to develop dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.

  6. A LATENT CLASS POISSON REGRESSION-MODEL FOR HETEROGENEOUS COUNT DATA

    NARCIS (Netherlands)

    WEDEL, M; DESARBO, WS; BULT; RAMASWAMY

    1993-01-01

    In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing

  7. Continuous validation of ASTEC containment models and regression testing

    International Nuclear Information System (INIS)

    Nowack, Holger; Reinke, Nils; Sonnenkalb, Martin

    2014-01-01

    The focus of the ASTEC (Accident Source Term Evaluation Code) development at GRS is primarily on the containment module CPA (Containment Part of ASTEC), whose modelling is to a large extent based on the GRS containment code COCOSYS (COntainment COde SYStem). Validation is usually understood as the approval of the modelling capabilities by calculations of appropriate experiments done by external users different from the code developers. During the development process of ASTEC CPA, bugs and unintended side effects may occur, which leads to changes in the results of the initially conducted validation. Due to the involvement of a considerable number of developers in the coding of ASTEC modules, validation of the code alone, even if executed repeatedly, is not sufficient. Therefore, a regression testing procedure has been implemented in order to ensure that the initially obtained validation results are still valid with succeeding code versions. Within the regression testing procedure, calculations of experiments and plant sequences are performed with the same input deck but applying two different code versions. For every test-case the up-to-date code version is compared to the preceding one on the basis of physical parameters deemed to be characteristic for the test-case under consideration. In the case of post-calculations of experiments, a comparison to experimental data is also carried out. Three validation cases from the regression testing procedure are presented within this paper. The very good post-calculation of the HDR E11.1 experiment shows the high quality modelling of thermal-hydraulics in ASTEC CPA. Aerosol behaviour is validated on the BMC VANAM M3 experiment, and the results also show very good agreement with experimental data. Finally, iodine behaviour is checked in the validation test-case of the THAI IOD-11 experiment. Within this test-case, the comparison of the ASTEC versions V2.0r1 and V2.0r2 shows how an error was detected by the regression testing procedure.

  8. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

    Science.gov (United States)

    MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

    2005-01-01

    Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield best performance and avoid model discontinuity over day/night data boundaries.

  9. Thin-plate spline analysis of the cranial base in subjects with Class III malocclusion.

    Science.gov (United States)

    Singh, G D; McNamara, J A; Lozanoff, S

    1997-08-01

    The role of the cranial base in the emergence of Class III malocclusion is not fully understood. This study determines deformations that contribute to a Class III cranial base morphology, employing thin-plate spline analysis on lateral cephalographs. A total of 73 children of European-American descent aged between 5 and 11 years with Class III malocclusion were compared with an equivalent group of subjects with a normal, untreated, Class I molar occlusion. The cephalographs were traced, checked and subdivided into seven age- and sex-matched groups. Thirteen points on the cranial base were identified and digitized. The datasets were scaled to an equivalent size, and statistical analysis indicated significant differences between average Class I and Class III cranial base morphologies for each group. Thin-plate spline analysis indicated that both affine (uniform) and non-affine transformations contribute toward the total spline for each average cranial base morphology at each age group analysed. For non-affine transformations, Partial warps 10, 8 and 7 had high magnitudes, indicating large-scale deformations affecting Bolton point, basion, pterygo-maxillare, Ricketts' point and articulare. In contrast, high eigenvalues associated with Partial warps 1-3, indicating localized shape changes, were found at tuberculum sellae, sella, and the frontonasomaxillary suture. It is concluded that large spatial-scale deformations affect the occipital complex of the cranial base and sphenoidal region, in combination with localized distortions at the frontonasal suture. These deformations may contribute to reduced orthocephalization or deficient flattening of the cranial base antero-posteriorly that, in turn, leads to the formation of a Class III malocclusion.

  10. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling.

    Science.gov (United States)

    Kawashima, Issaku; Kumano, Hiroaki

    2017-01-01

    Mind-wandering (MW), task-unrelated thought, has been examined by researchers in an increasing number of articles using models to predict whether subjects are in MW, using numerous physiological variables. However, these models are not applicable in general situations. Moreover, they output only binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task, including a thought sampling probe that inquired the focus of attention. We calculated the power and coherence value and prepared 35 patterns of variable combinations and applied Support Vector machine Regression (SVR) to them. Finally, we chose four SVR models: two of them non-linear models and the others linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single electrode EEG variables. Furthermore, in limited electrode condition, non-linear SVR model showed significantly better precision than linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with a high temporal resolution EEG, unclear aspects of MW, such as time series variation, are expected to be revealed. Furthermore, our suggestion that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.

  11. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Issaku Kawashima

    2017-07-01

    Full Text Available Mind-wandering (MW), task-unrelated thought, has been examined by researchers in an increasing number of articles using models to predict whether subjects are in MW, using numerous physiological variables. However, these models are not applicable in general situations. Moreover, they output only binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task, including a thought sampling probe that inquired the focus of attention. We calculated the power and coherence value and prepared 35 patterns of variable combinations and applied Support Vector machine Regression (SVR) to them. Finally, we chose four SVR models: two of them non-linear models and the others linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single electrode EEG variables. Furthermore, in limited electrode condition, non-linear SVR model showed significantly better precision than linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with a high temporal resolution EEG, unclear aspects of MW, such as time series variation, are expected to be revealed. Furthermore, our suggestion that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.

  12. Reconstruction of missing daily streamflow data using dynamic regression models

    Science.gov (United States)

    Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault

    2015-12-01

    River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even very short data-gaps in this information can lead to markedly different analysis outputs. Therefore, reconstructing missing data in incomplete data sets is an important step for the performance of environmental models, engineering, and research applications, and thus presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access to only daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models called a dynamic regression model. This model uses the linear relationship between neighboring and correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series from the Durance river watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.

  13. Hierarchical and successive approximate registration of the non-rigid medical image based on thin-plate splines

    Science.gov (United States)

    Hu, Jinyan; Li, Li; Yang, Yunfeng

    2017-06-01

    A hierarchical and successive approximate registration method for non-rigid medical images based on thin-plate splines is proposed in this paper. The method has two major novelties. First, hierarchical registration based on the wavelet transform is used: the approximation image from the wavelet transform is selected as the object to be registered. Second, a successive approximation strategy is used to accomplish the non-rigid registration, i.e. local regions of the image pair are first registered roughly with thin-plate splines, and the current rough registration result then becomes the object to be registered in the following registration step. Experiments show that the proposed method is effective for registering non-rigid medical images.
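
    The thin-plate spline mapping at the core of such a registration step can be expressed with SciPy's RBF interpolator using the thin-plate kernel (assuming SciPy 1.7 or later). The landmark coordinates below are invented, and the sketch shows only the coordinate warp, not the hierarchical wavelet scheme.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Hypothetical corresponding landmarks in the moving and fixed images (x, y pixels).
        moving = np.array([[10, 12], [40, 15], [25, 40], [60, 55], [80, 30]], dtype=float)
        fixed = np.array([[12, 14], [42, 18], [24, 43], [63, 58], [79, 33]], dtype=float)

        # Thin-plate spline mapping from moving-image coordinates to fixed-image coordinates.
        tps = RBFInterpolator(moving, fixed, kernel="thin_plate_spline")

        # Warp an arbitrary set of pixel coordinates with the fitted spline.
        grid = np.array([[20.0, 20.0], [50.0, 50.0]])
        warped = tps(grid)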

  14. Detection of Outliers in Regression Model for Medical Data

    Directory of Open Access Journals (Sweden)

    Stephen Raj S

    2017-07-01

    Full Text Available In regression analysis, an outlier is an observation for which the residual is large in magnitude compared to other observations in the data set. The detection of outliers and influential points is an important step of the regression analysis. Outlier detection methods have been used to detect and remove anomalous values from data. In this paper, we detect the presence of outliers in simple linear regression models for a medical data set. Chatterjee and Hadi mention that ordinary residuals are not appropriate for diagnostic purposes and that a transformed version of them is preferable. First, we investigate the presence of outliers based on existing procedures using residuals and standardized residuals. Next, we use a new approach of standardized scores to detect outliers without the use of predicted values. The performance of the new approach was verified with real-life data.
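
    The residual-based screening referred to above (internally studentized residuals computed from the hat matrix, flagged beyond a |r| > 3 cut-off) can be written out as follows; the data and the planted outlier are synthetic, and the authors' alternative standardized-score approach is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(10)
        x = rng.uniform(0, 10, 50)
        y = 2.0 + 1.5 * x + rng.normal(0, 1, 50)
        y[7] += 12.0                                   # planted anomalous observation

        X = np.column_stack([np.ones_like(x), x])
        H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat matrix
        resid = y - H @ y
        sigma2 = resid @ resid / (len(y) - X.shape[1])

        # Internally studentized residuals; |r_i| > 3 flagged as candidate outliers.
        r = resid / np.sqrt(sigma2 * (1 - np.diag(H)))
        print("flagged indices:", np.where(np.abs(r) > 3)[0])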

  15. Estimasi Model Seemingly Unrelated Regression (SUR dengan Metode Generalized Least Square (GLS

    Directory of Open Access Journals (Sweden)

    Ade Widyaningsih

    2015-04-01

    Full Text Available Regression analysis is a statistical tool that is used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the others. A method that can be used to obtain good estimates in regression analysis is the ordinary least squares (OLS) method. The least squares method estimates the parameters of one or more regression equations, but it does not allow for relationships among the errors in the responses of the different equations. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which the parameters are estimated using Generalized Least Squares (GLS). In this study, the author applies the SUR model with the GLS method to world gasoline demand data. The author finds that SUR using GLS is better than OLS because SUR produces smaller errors than OLS.
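
    A two-equation SUR estimated by feasible GLS can be sketched directly in NumPy: equation-by-equation OLS supplies the cross-equation residual covariance, which is then plugged into the stacked GLS estimator. The two synthetic equations below are stand-ins for the world gasoline demand system, and the coefficient values are invented.

        import numpy as np
        from scipy.linalg import block_diag

        rng = np.random.default_rng(0)
        n = 200
        # Two equations sharing correlated errors (synthetic stand-in for the gasoline data).
        x1 = rng.normal(size=(n, 2)); x2 = rng.normal(size=(n, 3))
        e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.5]], size=n)
        y1 = 1.0 + x1 @ np.array([2.0, -1.0]) + e[:, 0]
        y2 = 0.5 + x2 @ np.array([1.0, 0.3, -0.7]) + e[:, 1]

        def add_const(x):
            return np.column_stack([np.ones(len(x)), x])

        X1, X2 = add_const(x1), add_const(x2)

        # Step 1: equation-by-equation OLS to obtain residuals and their covariance.
        b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
        b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
        R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
        Sigma = (R.T @ R) / n

        # Step 2: feasible GLS on the stacked system with a block-diagonal design matrix.
        X = block_diag(X1, X2)
        y = np.concatenate([y1, y2])
        Omega_inv = np.kron(np.linalg.inv(Sigma), np.eye(n))
        beta_fgls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
        print(beta_fgls)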

  16. Estimasi Model Seemingly Unrelated Regression (SUR dengan Metode Generalized Least Square (GLS

    Directory of Open Access Journals (Sweden)

    Ade Widyaningsih

    2014-06-01

    Full Text Available Regression analysis is a statistical tool that is used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the others. A method that can be used to obtain good estimates in regression analysis is the ordinary least squares (OLS) method. The least squares method estimates the parameters of one or more regression equations, but it does not allow for relationships among the errors in the responses of the different equations. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which the parameters are estimated using Generalized Least Squares (GLS). In this study, the author applies the SUR model with the GLS method to world gasoline demand data. The author finds that SUR using GLS is better than OLS because SUR produces smaller errors than OLS.

  17. On pseudo-values for regression analysis in competing risks models

    DEFF Research Database (Denmark)

    Graw, F; Gerds, Thomas Alexander; Schumacher, M

    2009-01-01

    For regression on state and transition probabilities in multi-state models Andersen et al. (Biometrika 90:15-27, 2003) propose a technique based on jackknife pseudo-values. In this article we analyze the pseudo-values suggested for competing risks models and prove some conjectures regarding their...

  18. Modeling and prediction of Turkey's electricity consumption using Support Vector Regression

    International Nuclear Information System (INIS)

    Kavaklioglu, Kadir

    2011-01-01

    Support Vector Regression (SVR) methodology is used to model and predict Turkey's electricity consumption. Among various SVR formalisms, ε-SVR method was used since the training pattern set was relatively small. Electricity consumption is modeled as a function of socio-economic indicators such as population, Gross National Product, imports and exports. In order to facilitate future predictions of electricity consumption, a separate SVR model was created for each of the input variables using their current and past values; and these models were combined to yield consumption prediction values. A grid search for the model parameters was performed to find the best ε-SVR model for each variable based on Root Mean Square Error. Electricity consumption of Turkey is predicted until 2026 using data from 1975 to 2006. The results show that electricity consumption can be modeled using Support Vector Regression and the models can be used to predict future electricity consumption. (author)

  19. Cluster regression model and level fluctuation features of Van Lake, Turkey

    Directory of Open Access Journals (Sweden)

    Z. Şen

    1999-02-01

    Full Text Available Lake water levels change under the influences of natural and/or anthropogenic environmental conditions. Among these influences are the climate change, greenhouse effects and ozone layer depletions which are reflected in the hydrological cycle features over the lake drainage basins. Lake levels are among the most significant hydrological variables that are influenced by different atmospheric and environmental conditions. Consequently, lake level time series in many parts of the world include nonstationarity components such as shifts in the mean value, apparent or hidden periodicities. On the other hand, many lake level modeling techniques have a stationarity assumption. The main purpose of this work is to develop a cluster regression model for dealing with nonstationarity especially in the form of shifting means. The basis of this model is the combination of transition probability and classical regression technique. Both parts of the model are applied to monthly level fluctuations of Lake Van in eastern Turkey. It is observed that the cluster regression procedure does preserve the statistical properties and the transitional probabilities that are indistinguishable from the original data.Key words. Hydrology (hydrologic budget; stochastic processes · Meteorology and atmospheric dynamics (ocean-atmosphere interactions

  20. Cluster regression model and level fluctuation features of Van Lake, Turkey

    Directory of Open Access Journals (Sweden)

    Z. Şen

    Full Text Available Lake water levels change under the influences of natural and/or anthropogenic environmental conditions. Among these influences are the climate change, greenhouse effects and ozone layer depletions which are reflected in the hydrological cycle features over the lake drainage basins. Lake levels are among the most significant hydrological variables that are influenced by different atmospheric and environmental conditions. Consequently, lake level time series in many parts of the world include nonstationarity components such as shifts in the mean value, apparent or hidden periodicities. On the other hand, many lake level modeling techniques have a stationarity assumption. The main purpose of this work is to develop a cluster regression model for dealing with nonstationarity especially in the form of shifting means. The basis of this model is the combination of transition probability and classical regression technique. Both parts of the model are applied to monthly level fluctuations of Lake Van in eastern Turkey. It is observed that the cluster regression procedure does preserve the statistical properties and the transitional probabilities that are indistinguishable from the original data.

    Key words. Hydrology (hydrologic budget; stochastic processes) · Meteorology and atmospheric dynamics (ocean-atmosphere interactions)