WorldWideScience

Sample records for spline regression models

  1. Flexible regression models with cubic splines.

    Science.gov (United States)

    Durrleman, S; Simon, R

    1989-05-01

    We describe the use of cubic splines in regression models to represent the relationship between the response variable and a vector of covariates. This simple method can help prevent the problems that result from inappropriate linearity assumptions. We compare restricted cubic spline regression to non-parametric procedures for characterizing the relationship between age and survival in the Stanford Heart Transplant data. We also provide an illustrative example in cancer therapeutics.
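
    A minimal Python sketch of the idea, using patsy's natural cubic spline basis cr() (linear beyond the boundary knots, i.e. a restricted cubic spline) inside a statsmodels formula. The data, variable names and df=4 are illustrative assumptions, not the paper's Stanford analysis.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      dat = pd.DataFrame({"age": rng.uniform(10, 65, 300)})
      # Nonlinear truth that a straight line in age would misrepresent
      dat["risk"] = np.log1p(dat["age"]) + rng.normal(0, 0.2, 300)

      # cr() builds a natural (restricted) cubic spline basis for age
      fit = smf.ols("risk ~ cr(age, df=4)", data=dat).fit()
      print(fit.summary())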

  2. Preference learning with evolutionary Multivariate Adaptive Regression Spline model

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll

    2015-01-01

    This paper introduces a novel approach for pairwise preference learning that combines an evolutionary method with Multivariate Adaptive Regression Splines (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing […] for function approximation as well as being relatively easy to interpret. MARS models are evolved based on their efficiency in learning pairwise data. The method is tested on two datasets that collectively provide pairwise preference data for five cognitive states expressed by users. The method is analysed […]

  3. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines vis-à-vis linear piecewise splines, with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed-effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves rather …
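
    A hedged Python sketch of the core model with statsmodels (synthetic data, illustrative names): a cubic B-spline basis for age as fixed effects plus subject-specific random intercepts and slopes. The paper's first-order continuous autoregressive residual term is not reproduced, since MixedLM does not fit it directly.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n_child, n_obs = 50, 10
      age = np.tile(np.linspace(0.1, 4.0, n_obs), n_child)
      child = np.repeat(np.arange(n_child), n_obs)
      u0 = rng.normal(0, 2.0, n_child)[child]      # random intercepts
      u1 = rng.normal(0, 0.5, n_child)[child]      # random slopes
      height = 50 + 20 * np.sqrt(age) + u0 + u1 * age + rng.normal(0, 0.8, age.size)
      data = pd.DataFrame({"height": height, "age": age, "child": child})

      # Cubic B-spline fixed effects, random intercept and slope per child
      model = smf.mixedlm("height ~ bs(age, df=4)", data,
                          groups="child", re_formula="~age")
      print(model.fit().summary())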

  4. Random regression analyses using B-splines to model growth of Australian Angus cattle

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2005-09-01

    Regression on B-spline basis functions has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84,533 records on 20,731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between the level of detail of the model, the number of parameters to be estimated, the plausibility of results, and fit, measured as residual mean square error.
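
    The quadratic B-spline basis described here is easy to reproduce; a sketch with scipy (BSpline.design_matrix requires scipy ≥ 1.8), using the abstract's knots at 0, 200, 400, 600 and 821 days. The mixed-model covariance estimation itself, done in the paper with specialized software, is not shown.

      import numpy as np
      from scipy.interpolate import BSpline

      k = 2                                        # quadratic B-splines
      knots = np.array([0, 200, 400, 600, 821])
      t = np.r_[[knots[0]] * k, knots, [knots[-1]] * k]   # clamped knot vector
      age = np.linspace(0, 821, 5)
      X = BSpline.design_matrix(age, t, k).toarray()
      print(X.round(3))                            # each row sums to 1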

  5. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines

    Directory of Open Access Journals (Sweden)

    Laura M. Grajeda

    2016-01-01

    Background: Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. Methods: We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines vis-à-vis linear piecewise splines, with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Results: Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95% CI 0.64 to 0.68; p < 0.001), which we modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and …

  6. The Norwegian Healthier Goats program--modeling lactation curves using a multilevel cubic spline regression model.

    Science.gov (United States)

    Nagel-Alne, G E; Krontveit, R; Bohlin, J; Valle, P S; Skjerve, E; Sølverød, L S

    2014-07-01

    In 2001, the Norwegian Goat Health Service initiated the Healthier Goats program (HG), with the aim of eradicating caprine arthritis encephalitis, caseous lymphadenitis, and Johne's disease (caprine paratuberculosis) in Norwegian goat herds. The aim of the present study was to explore how control and eradication of the above-mentioned diseases by enrolling in HG affected milk yield by comparison with herds not enrolled in HG. Lactation curves were modeled using a multilevel cubic spline regression model where farm, goat, and lactation were included as random effect parameters. The data material contained 135,446 registrations of daily milk yield from 28,829 lactations in 43 herds. The multilevel cubic spline regression model was applied to 4 categories of data: enrolled early, control early, enrolled late, and control late. For enrolled herds, the early and late notations refer to the situation before and after enrolling in HG; for nonenrolled herds (controls), they refer to development over time, independent of HG. Total milk yield increased in the enrolled herds after eradication: the total milk yields in the fourth lactation were 634.2 and 873.3 kg in enrolled early and enrolled late herds, respectively, and 613.2 and 701.4 kg in the control early and control late herds, respectively. Day of peak yield differed between enrolled and control herds. The day of peak yield came on d 6 of lactation for the control early category for parities 2, 3, and 4, indicating an inability of the goats to further increase their milk yield from the initial level. For enrolled herds, on the other hand, peak yield came between d 49 and 56, indicating a gradual increase in milk yield after kidding. Our results indicate that enrollment in the HG disease eradication program improved the milk yield of dairy goats considerably, and that the multilevel cubic spline regression was a suitable model for exploring effects of disease control and eradication on milk yield.

  7. Application of Semiparametric Spline Regression Model in Analyzing Factors that Influence Population Density in Central Java

    Science.gov (United States)

    Sumantari, Y. D.; Slamet, I.; Sugiyanto

    2017-06-01

    Semiparametric regression is a statistical analysis method that combines parametric and nonparametric regression. There are various approaches in nonparametric regression, one of which is the spline approach. Central Java is one of the most densely populated provinces in Indonesia. Population density in this province can be modeled by semiparametric regression because it consists of a parametric and a nonparametric component. Therefore, the purpose of this paper is to determine the factors that influence population density in Central Java using the semiparametric spline regression model. The result shows that the factors which influence population density in Central Java are the number of active Family Planning (FP) participants and the district minimum wage.

  8. PM10 modeling in the Oviedo urban area (Northern Spain) by using multivariate adaptive regression splines

    Science.gov (United States)

    Nieto, Paulino José García; Antón, Juan Carlos Álvarez; Vilán, José Antonio Vilán; García-Gonzalo, Esperanza

    2014-10-01

    The aim of this research work is to build a regression model of the particulate matter up to 10 micrometers in size (PM10) by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (Northern Spain) at local scale. MARS is a nonparametric regression algorithm with the ability to approximate the relationship between the inputs and outputs and to express that relationship mathematically. In this sense, hazardous air pollutants or toxic air contaminants refer to any substance that may cause or contribute to an increase in mortality or serious illness, or that may pose a present or potential hazard to human health. To accomplish the objective of this study, an experimental dataset of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3) and dust (PM10) was collected over 3 years (2006-2008) and used to create a highly nonlinear model of PM10 in the Oviedo urban nucleus (Northern Spain) based on the MARS technique. One main objective of this model is to obtain a preliminary estimate of the dependence between the PM10 pollutant and the other main pollutants in the Oviedo urban area at local scale. A second aim is to determine the factors with the greatest bearing on air quality, with a view to proposing health and lifestyle improvements. The United States National Ambient Air Quality Standards (NAAQS) establish the limit values of the main pollutants in the atmosphere in order to protect public health. Firstly, this MARS regression model captures the main perception of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of …
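
    MARS builds its fit from pairs of hinge functions max(0, x − t) and max(0, t − x). Below is a hand-rolled Python sketch of that building block on synthetic PM10-style data (illustrative variable names and knots); the real MARS forward/backward knot search, as implemented for example in the third-party py-earth package, is omitted.

      import numpy as np

      rng = np.random.default_rng(2)
      nox = rng.uniform(0, 100, 400)               # synthetic NOx levels
      pm10 = 20 + 0.8 * np.maximum(nox - 40, 0) + rng.normal(0, 3, 400)

      def hinge_pair(x, knot):
          # Reflected pair of MARS hinge (basis) functions at a knot
          return np.column_stack([np.maximum(x - knot, 0),
                                  np.maximum(knot - x, 0)])

      X = np.column_stack([np.ones_like(nox)] +
                          [hinge_pair(nox, t) for t in (25, 40, 60)])
      beta, *_ = np.linalg.lstsq(X, pm10, rcond=None)
      print(beta.round(2))                         # weights on the hinge pairs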

  9. ESTIMATION OF GENETIC PARAMETERS IN TROPICARNE CATTLE WITH RANDOM REGRESSION MODELS USING B-SPLINES

    Directory of Open Access Journals (Sweden)

    Joel Domínguez Viveros

    2015-04-01

    The objectives were to estimate variance components and direct (h2) and maternal (m2) heritability for growth in Tropicarne cattle based on a random regression model using B-splines for random effects modeling. Information from 12,890 monthly weighings of 1787 calves, from birth to 24 months old, was analyzed. The pedigree included 2504 animals. The random effects model included genetic and permanent environmental (direct and maternal) effects of cubic order, and residuals. The fixed effects included contemporary groups (year-season of weighing), sex and the covariate age of the cow (linear and quadratic). The B-splines were defined with four knots across the growth period analyzed. Analyses were performed with the software Wombat. The phenotypic and residual variances presented similar behavior: from 7 to 12 months of age they had a negative trend, from birth to 6 months and from 13 to 18 months a positive trend, and after 19 months they remained constant. The m2 estimates were low and near zero, with an average of 0.06 in an interval of 0.04 to 0.11; the h2 estimates were also close to zero, with an average of 0.10 in an interval of 0.03 to 0.23.

  10. Air quality modeling in the Oviedo urban area (NW Spain) by using multivariate adaptive regression splines.

    Science.gov (United States)

    Nieto, P J García; Antón, J C Álvarez; Vilán, J A Vilán; García-Gonzalo, E

    2015-05-01

    The aim of this research work is to build a regression model of air quality by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (northern Spain) at a local scale. To accomplish the objective of this study, an experimental data set made up of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3), and dust (PM10) was collected over 3 years (2006-2008). The US National Ambient Air Quality Standards (NAAQS) establish the limit values of the main pollutants in the atmosphere in order to protect public health. Firstly, this MARS regression model captures the main perception of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of these numerical calculations, using the MARS technique, the conclusions of this research work are presented.

  11. Marginal longitudinal semiparametric regression via penalized splines

    KAUST Repository

    Al Kadiri, M.

    2010-08-01

    We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been made, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.
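
    A frequentist toy version of the penalized-spline idea (a P-spline: B-spline basis plus a second-difference penalty on the coefficients), assuming synthetic cross-sectional data; the paper's marginal longitudinal model and its Gibbs/BUGS implementation are not reproduced here.

      import numpy as np
      from scipy.interpolate import BSpline

      rng = np.random.default_rng(3)
      x = np.sort(rng.uniform(0, 1, 200))
      y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)

      k = 3                                        # cubic B-splines
      t = np.r_[[0.0] * k, np.linspace(0, 1, 20), [1.0] * k]
      B = BSpline.design_matrix(x, t, k).toarray()
      D = np.diff(np.eye(B.shape[1]), n=2, axis=0) # second-difference penalty
      lam = 1.0                                    # smoothing parameter
      coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
      yhat = B @ coef                              # penalized-spline fit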

  12. Evaluation of Logistic Regression and Multivariate Adaptive Regression Spline Models for Groundwater Potential Mapping Using R and GIS

    Directory of Open Access Journals (Sweden)

    Soyoung Park

    2017-07-01

    This study mapped and analyzed groundwater potential using two different models, logistic regression (LR) and multivariate adaptive regression splines (MARS), and compared the results. A spatial database was constructed for groundwater well data and groundwater influence factors. Groundwater well data with a high potential yield of ≥70 m3/d were extracted, and 859 locations (70%) were used for model training, whereas the other 365 locations (30%) were used for model validation. We analyzed 16 groundwater influence factors including altitude, slope degree, slope aspect, plan curvature, profile curvature, topographic wetness index, stream power index, sediment transport index, distance from drainage, drainage density, lithology, distance from fault, fault density, distance from lineament, lineament density, and land cover. Groundwater potential maps (GPMs) were constructed using the LR and MARS models and tested using a receiver operating characteristic (ROC) curve. Based on this analysis, the area under the curve (AUC) for the success rate curve of the GPMs created using the MARS and LR models was 0.867 and 0.838, respectively, and the AUC for the prediction rate curve was 0.836 and 0.801, respectively. This implies that the MARS model is useful and effective for groundwater potential analysis in the study area.

  13. A Parametric Model of the LARCODEMS Heavy Media Separator by Means of Multivariate Adaptive Regression Splines

    Directory of Open Access Journals (Sweden)

    Mario Menéndez Álvarez

    2017-06-01

    Modeling of a cylindrical heavy media separator has been conducted in order to predict its optimum operating parameters. To the best of the authors' knowledge, this is the first such application in the literature. The aim of the present research is to predict the separation efficiency based on the adjustment of the device's dimensions and media flow rates. A variety of heavy media separators exist that are extensively used to separate particles by density, and their application in the recycling sector is of growing importance. The cylindrical variety is reported to be the most suited for processing a large range of particle sizes, but the optimization of its operating parameters remains to be documented. The multivariate adaptive regression splines methodology has been applied in order to predict the separation efficiencies using, as inputs, the device dimension and media flow rate variables. The results obtained show that it is possible to predict the device separation efficiency according to the laboratory experiments performed and, therefore, to forecast the results obtainable under different operating conditions.

  14. Transport modeling and multivariate adaptive regression splines for evaluating performance of ASR systems in freshwater aquifers

    Science.gov (United States)

    Forghani, Ali; Peralta, Richard C.

    2017-10-01

    The study presents a procedure using solute transport and statistical models to evaluate the performance of aquifer storage and recovery (ASR) systems designed to earn additional water rights in freshwater aquifers. The recovery effectiveness (REN) index quantifies the performance of these ASR systems. REN is the proportion of the injected water that the same ASR well can recapture during subsequent extraction periods. To estimate REN for individual ASR wells, the presented procedure uses finely discretized groundwater flow and contaminant transport modeling. Then, the procedure uses multivariate adaptive regression splines (MARS) analysis to identify the significant variables affecting REN and to identify the most recovery-effective wells. The operator of the studied 14-well ASR system aims to achieve REN values close to 100%. This recovery is feasible for most of the ASR wells by extracting three times the injectate volume during the same year as injection. Most of the wells would achieve RENs below 75% if extracting merely the same volume as they injected. In other words, recovering almost all the same water molecules that are injected requires having a pre-existing water right to extract groundwater annually. MARS shows that REN correlates most significantly with groundwater flow velocity, or hydraulic conductivity and hydraulic gradient. MARS results also demonstrate that maximizing REN requires utilizing the wells located in areas with background Darcian groundwater velocities less than 0.03 m/d. The study also highlights the superiority of MARS over regular multiple linear regression in identifying the wells that can provide the maximum REN. This is the first reported application of MARS for evaluating the performance of an ASR system in freshwater aquifers.

  15. Multivariate adaptive regression splines and neural network models for prediction of pile drivability

    Directory of Open Access Journals (Sweden)

    Wengang Zhang

    2016-01-01

    Piles are long, slender structural elements used to transfer the loads from the superstructure through weak strata onto stiffer soils or rocks. For driven piles, the impact of the piling hammer induces compression and tension stresses in the piles. Hence, an important design consideration is to check that the strength of the pile is sufficient to resist the stresses caused by the impact of the pile hammer. Due to its complexity, pile drivability lacks a precise analytical solution with regard to the phenomena involved. In situations where measured data or numerical hypothetical results are available, neural networks stand out in mapping the nonlinear interactions and relationships between the system's predictors and dependent responses. In addition, unlike most computational tools, no assumption about the mathematical relationship between the dependent and independent variables has to be made. Nevertheless, neural networks have been criticized for their long trial-and-error training process, since the optimal configuration is not known a priori. This paper investigates the use of a fairly simple nonparametric regression algorithm known as multivariate adaptive regression splines (MARS), as an alternative to neural networks, to approximate the relationship between the inputs and the dependent response, and to mathematically interpret the relationship between the various parameters. In this paper, back-propagation neural network (BPNN) and MARS models are developed for assessing pile drivability in relation to the prediction of the maximum compressive stress (MCS), maximum tensile stress (MTS), and blows per foot (BPF). A database of more than four thousand piles is utilized for model development and comparative performance between BPNN and MARS predictions.

  16. Comprehensive modeling of monthly mean soil temperature using multivariate adaptive regression splines and support vector machine

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2017-07-01

    Soil temperature (Ts) and its thermal regime are the most important factors in plant growth, biological activities, and water movement in soil. Due to the scarcity of Ts data, estimation of soil temperature is an important issue in different fields of science. The main objective of the present study is to investigate the accuracy of multivariate adaptive regression splines (MARS) and support vector machine (SVM) methods for estimating Ts. To this end, monthly mean Ts data (at depths of 5, 10, 50, and 100 cm) and meteorological parameters of 30 synoptic stations in Iran were utilized. To develop the MARS and SVM models, various combinations of minimum, maximum, and mean air temperatures (Tmin, Tmax, T); actual and maximum possible sunshine duration and sunshine duration ratio (n, N, n/N); actual, net, and extraterrestrial solar radiation (Rs, Rn, Ra); precipitation (P); relative humidity (RH); wind speed at 2 m height (u2); and water vapor pressure (Vp) were used as input variables. Three error statistics, root mean square error (RMSE), mean absolute error (MAE), and the determination coefficient (R²), were used to check the performance of the MARS and SVM models. The results indicated that MARS was superior to SVM at different depths. In the test and validation phases, the most accurate estimations for MARS were obtained at the depth of 10 cm, for the Tmax, Tmin, T inputs (RMSE = 0.71 °C, MAE = 0.54 °C, and R² = 0.995) and for the RH, Vp, P, and u2 inputs (RMSE = 0.80 °C, MAE = 0.61 °C, and R² = 0.996), respectively.

  17. Piecewise linear regression splines with hyperbolic covariates

    International Nuclear Information System (INIS)

    Cologne, John B.; Sposto, Richard

    1992-09-01

    Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller, and of Watts and Bacon, to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
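
    A sketch of the idea in Python (synthetic data, illustrative parameter names): a two-phase linear model whose transition is smoothed by a hyperbola, fit by nonlinear least squares. The slopes far from the join point x0 are b − c and b + c, and gamma controls how sharply the two segments blend.

      import numpy as np
      from scipy.optimize import curve_fit

      def two_phase(x, a, b, c, x0, gamma):
          # Hyperbola: linear asymptotes with a smooth bend of width ~gamma
          return a + b * (x - x0) + c * np.sqrt((x - x0) ** 2 + gamma ** 2)

      rng = np.random.default_rng(4)
      x = np.linspace(0, 10, 120)
      y = two_phase(x, 1.0, 0.5, 0.4, 6.0, 0.8) + rng.normal(0, 0.1, x.size)
      popt, _ = curve_fit(two_phase, x, y, p0=[0, 0, 1, 5, 1])
      print(popt.round(2))                     # a, b, c, join point, curvature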

  18. Estimates of genetic parameters for Holstein cows for test-day yield traits with a random regression cubic spline model.

    Science.gov (United States)

    DeGroot, B J; Keown, J F; Van Vleck, L D; Kachman, S D

    2007-06-30

    Genetic parameters were estimated with restricted maximum likelihood for individual test-day milk, fat, and protein yields and somatic cell scores with a random regression cubic spline model. Test-day records of Holstein cows that calved from 1994 through early 1999 were obtained from Dairy Records Management Systems in Raleigh, North Carolina, for the analysis. Estimates of heritability for individual test-days and estimates of genetic and phenotypic correlations between test-days were obtained from estimates of variances and covariances from the cubic spline analysis. Genetic parameters were also estimated for the averages of the test-days within each of the ten 30-day test intervals. The model included herd test-day, age at first calving, and bovine somatotropin treatment as fixed factors. Cubic splines were fitted for the overall lactation curve and for random additive genetic and permanent environmental effects, with five predetermined knots, or four intervals, between days 0, 50, 135, 220, and 305. Estimates of heritability for lactation one ranged from 0.10 to 0.15, 0.06 to 0.10, 0.09 to 0.15, and 0.02 to 0.06 across test-days 1 to 10 for milk, fat, and protein yields and somatic cell scores, respectively. Estimates of heritability were greater in lactations two and three. Estimates of heritability increased over the course of the lactation. Estimates of genetic and phenotypic correlations were smaller for test-days further apart.

  19. Modelling daily dissolved oxygen concentration using least square support vector machine, multivariate adaptive regression splines and M5 model tree

    Science.gov (United States)

    Heddam, Salim; Kisi, Ozgur

    2018-04-01

    In the present study, three types of artificial intelligence techniques, least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5T), are applied for modeling daily dissolved oxygen (DO) concentration using several water quality variables as inputs. The DO concentration and water quality variables data from three stations operated by the United States Geological Survey (USGS) were used for developing the three models. The selected water quality data consisted of daily measurements of water temperature (TE, °C), pH (std. unit), specific conductance (SC, μS/cm) and discharge (DI, cfs), which are used as inputs to the LSSVM, MARS and M5T models. The three models were applied to each station separately and compared to each other. According to the results obtained, it was found that (i) the DO concentration could be successfully estimated using the three models and (ii) the best model among all others differs from one station to another.

  20. Groundwater potential mapping using C5.0, random forest, and multivariate adaptive regression spline models in GIS.

    Science.gov (United States)

    Golkarian, Ali; Naghibi, Seyed Amir; Kalantar, Bahareh; Pradhan, Biswajeet

    2018-02-17

    Ever increasing demand for water resources for different purposes makes it essential to have a better understanding and knowledge of water resources. Groundwater is one of the main water resources, especially in countries with arid climates. Thus, this study seeks to provide groundwater potential maps (GPMs) employing new algorithms. Accordingly, this study aims to validate the performance of C5.0, random forest (RF), and multivariate adaptive regression splines (MARS) algorithms for generating GPMs in the eastern part of the Mashhad Plain, Iran. For this purpose, a dataset was produced consisting of spring locations as indicators and groundwater-conditioning factors (GCFs) as inputs. In this research, 13 GCFs were selected, including altitude, slope aspect, slope angle, plan curvature, profile curvature, topographic wetness index (TWI), slope length, distance from rivers and faults, river and fault density, land use, and lithology. The dataset was divided into training and validation classes with 70 and 30% of the springs, respectively. Then, the C5.0, RF, and MARS algorithms were run in the R statistical software, and the final values were transformed into GPMs. Finally, two evaluation criteria, Kappa and the area under the receiver operating characteristic curve (AUC-ROC), were calculated. According to the findings of this research, MARS had the best performance, with an AUC-ROC of 84.2%, followed by the RF and C5.0 algorithms with AUC-ROC values of 79.7 and 77.3%, respectively. The results indicated that the AUC-ROC values for the employed models are greater than 70%, which shows their acceptable performance. In conclusion, the produced methodology could be used in other geographical areas. GPMs could be used by water resource managers and related organizations to accelerate and facilitate water resource exploitation.

  21. Drought forecasting in eastern Australia using multivariate adaptive regression spline, least square support vector machine and M5Tree model

    Science.gov (United States)

    Deo, Ravinesh C.; Kisi, Ozgur; Singh, Vijay P.

    2017-02-01

    Drought forecasting using standardized metrics of rainfall is a core task in hydrology and water resources management. The Standardized Precipitation Index (SPI) is a rainfall-based metric that caters for the different time-scales at which drought occurs and, due to its standardization, is well-suited for forecasting drought at different periods in climatically diverse regions. This study advances drought modelling using multivariate adaptive regression splines (MARS), least square support vector machine (LSSVM), and M5Tree models by forecasting SPI in eastern Australia. The MARS model incorporated rainfall as a mandatory predictor, with month (periodicity), Southern Oscillation Index, Pacific Decadal Oscillation Index, Indian Ocean Dipole, ENSO Modoki and Niño 3.0, 3.4 and 4.0 data added gradually. Performance was evaluated with the root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (r²). The best MARS model required different input combinations: rainfall, sea surface temperature and periodicity were used for all stations, but the ENSO Modoki and Pacific Decadal Oscillation indices were not required for Bathurst, Collarenebri and Yamba, and the Southern Oscillation Index was not required for Collarenebri. Inclusion of periodicity increased the r² value by 0.5-8.1% and reduced RMSE by 3.0-178.5%. Comparisons showed that MARS outperformed the other models for three out of five stations, with lower MAE by 15.0-73.9% and 7.3-42.2%, respectively. For the other stations, M5Tree was better than MARS/LSSVM, with lower MAE by 13.8-13.4% and 25.7-52.2%, respectively, and for Bathurst, LSSVM yielded the more accurate result. For droughts identified by SPI ≤ -0.5, accurate forecasts were attained by MARS/M5Tree for Bathurst, Yamba and Peak Hill, whereas for Collarenebri and Barraba, M5Tree was better than LSSVM/MARS. Seasonal analysis revealed disparate results where MARS/M5Tree was better than LSSVM. The results highlight the …

  22. Genetic evaluation and selection response for growth in meat-type quail through random regression models using B-spline functions and Legendre polynomials.

    Science.gov (United States)

    Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M

    2018-04-01

    The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials, B-spline functions and multi-trait models, aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions with quadratic and cubic polynomials for each individual segment, and using Legendre polynomials for age. Residual variances were grouped in four age classes. Direct additive genetic and permanent environmental effects were modeled using 2 to 4 segments for the B-splines, and by Legendre polynomials with orders of fit ranging from 2 to 4. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials presented an underestimation of the residual variance. Lower heritability estimates were observed for multi-trait models in comparison with RRM for the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. The genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW at the evaluated ages presented greater values for RRM compared with multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and four segments were the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in genetic evaluation of breeding programs.

  23. Random regression models using Legendre polynomials or linear splines for test-day milk yield of dairy Gyr (Bos indicus) cattle.

    Science.gov (United States)

    Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G

    2013-01-01

    Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and Bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for both additive genetic and permanent environmental effects indicates that, in practice, the same bulls would be selected by both models. The latter model, which is less parameterized, is a parsimonious option for fitting test-day milk yield records of the dairy Gyr breed. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
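
    For reference, the Legendre design matrix such random regression models use is straightforward to construct; a numpy sketch, mapping days in milk to [−1, 1] and evaluating a third-order basis. The pedigree/REML machinery of the paper is beyond a short example, and the test-day grid here is illustrative.

      import numpy as np
      from numpy.polynomial import legendre

      dim = np.arange(5, 306, 30)              # test days 5, 35, ..., 305
      z = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1
      Phi = legendre.legvander(z, deg=3)       # columns: P0(z) ... P3(z)
      print(Phi.shape)                         # (11, 4)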

  24. Segmented Regression Based on B-Splines with Solved Examples

    Directory of Open Access Journals (Sweden)

    Miloš Kaňka

    2015-12-01

    The subject of the paper is segmented linear, quadratic, and cubic regression based on B-spline basis functions. In this article we present the formulas for the computation of B-splines of order one, two, and three that are needed to construct linear, quadratic, and cubic regression. We list some interesting properties of these functions. For a clearer understanding we give the solutions of a couple of elementary exercises regarding these functions.

  25. A New Predictive Model Based on the ABC Optimized Multivariate Adaptive Regression Splines Approach for Predicting the Remaining Useful Life in Aircraft Engines

    Directory of Open Access Journals (Sweden)

    Paulino José García Nieto

    2016-05-01

    Remaining useful life (RUL) estimation is considered one of the most central points in prognostics and health management (PHM). The present paper describes a nonlinear hybrid ABC–MARS-based model for the prediction of the remaining useful life of aircraft engines. Indeed, it is well known that an accurate RUL estimation allows failure prevention in a more controllable way, so that effective maintenance can be carried out in time to correct impending faults. The proposed hybrid model combines multivariate adaptive regression splines (MARS), which have been successfully adopted for regression problems, with the artificial bee colony (ABC) technique. This optimization technique involves parameter setting in the MARS training procedure, which significantly influences the regression accuracy. However, its use in reliability applications has not yet been widely explored. Bearing this in mind, remaining useful life values have been successfully predicted here by using the hybrid ABC–MARS-based model from the remaining measured parameters (input variables) for aircraft engines. A correlation coefficient equal to 0.92 was obtained when this hybrid ABC–MARS-based model was applied to experimental data. The agreement of this model with experimental data confirmed its good performance. The main advantage of this predictive model is that it does not require information about the previous operation states of the aircraft engine.

  26. Application of random forest time series, support vector regression and multivariate adaptive regression splines models in prediction of snowfall (a case study of Alvand in the middle Zagros, Iran)

    Science.gov (United States)

    Hamidi, Omid; Tapak, Leili; Abbasi, Hamed; Maryanaji, Zohreh

    2017-10-01

    We have conducted a case study to investigate the performance of support vector machine, multivariate adaptive regression splines, and random forest time series methods in snowfall modeling. These models were applied to a data set of monthly snowfall collected during six cold months at the Hamadan Airport sample station located in the Zagros Mountain Range in Iran. We considered monthly snowfall data from 1981 to 2008 during the period from October/November to April/May as the training set and the data from 2009 to 2015 as the testing set. The root mean square error (RMSE), mean absolute error (MAE), determination coefficient (R²), coefficient of efficiency (E%), and intra-class correlation coefficient (ICC) statistics were used as evaluation criteria. Our results indicated that the random forest time series model outperformed the support vector machine and multivariate adaptive regression splines models in predicting monthly snowfall in terms of several criteria. The RMSE, MAE, R², E, and ICC for the testing set were 7.84, 5.52, 0.92, 0.89, and 0.93, respectively. The overall results indicated that the random forest time series model could be successfully used to estimate monthly snowfall values. Moreover, the support vector machine model showed substantial performance as well, suggesting it may also be applied to forecast snowfall in this area.
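
    A minimal Python sketch of the "random forest time series" setup (synthetic monthly series; the lag choice of 3 is an illustrative assumption): lagged snowfall values become the predictors, and scikit-learn's random forest regresses the next month on them.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(9)
      months = np.arange(240)
      snow = np.clip(20 + 10 * np.sin(2 * np.pi * months / 12)
                     + rng.normal(0, 5, months.size), 0, None)

      lags = 3                                  # autoregressive inputs
      X = np.column_stack([snow[i:snow.size - lags + i] for i in range(lags)])
      y = snow[lags:]
      X_tr, X_te, y_tr, y_te = X[:180], X[180:], y[:180], y[180:]

      rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
      print(f"test MAE = {mean_absolute_error(y_te, rf.predict(X_te)):.2f}")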

  27. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution

    Science.gov (United States)

    Kisi, Ozgur; Parmar, Kulwinder Singh

    2016-03-01

    This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin, Delhi Yamuna River in India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy and they performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The average root mean square error (RMSE) of the LSSVM and M5Tree models was decreased by 1.47% and 19.1%, respectively, by using the MARS model. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used in estimating monthly river water pollution levels by using AMM, TKN and WT parameters as inputs.

  28. Natural Cubic Spline Regression Modeling Followed by Dynamic Network Reconstruction for the Identification of Radiation-Sensitivity Gene Association Networks from Time-Course Transcriptome Data.

    Science.gov (United States)

    Michna, Agata; Braselmann, Herbert; Selmansberger, Martin; Dietz, Anne; Hess, Julia; Gomolka, Maria; Hornhardt, Sabine; Blüthgen, Nils; Zitzelsberger, Horst; Unger, Kristian

    2016-01-01

    Gene expression time-course experiments allow studying the dynamics of transcriptomic changes in cells exposed to different stimuli. However, most approaches for the reconstruction of gene association networks (GANs) do not propose prior-selection approaches tailored to time-course transcriptome data. Here, we present a workflow for the identification of GANs from time-course data using prior selection of genes differentially expressed over time, identified by natural cubic spline regression modeling (NCSRM). The workflow comprises three major steps: 1) the identification of differentially expressed genes from time-course expression data by employing NCSRM, 2) the use of regularized dynamic partial correlation as implemented in GeneNet to infer GANs from differentially expressed genes and 3) the identification and functional characterization of the key nodes in the reconstructed networks. The approach was applied to a time-resolved transcriptome data set of radiation-perturbed cell culture models of non-tumor cells with normal and increased radiation sensitivity. NCSRM detected significantly more genes than another commonly used method for time-course transcriptome analysis (BETR). While most genes detected with BETR were also detected with NCSRM, the false-detection rate of NCSRM was low (3%). The GANs reconstructed from genes detected with NCSRM showed a better overlap with the Reactome interactome network compared to GANs derived from BETR-detected genes. After exposure to 1 Gy, the cells with normal sensitivity showed only a sparse response compared to cells with increased sensitivity, which exhibited a strong response mainly of genes related to the senescence pathway. After exposure to 10 Gy, the response of the cells with normal sensitivity was mainly associated with senescence and that of cells with increased sensitivity with apoptosis. We discuss these results in a clinical context and underline the impact of senescence-associated pathways in acute radiation response of normal …
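
    A hedged single-gene sketch of the spline-regression step (synthetic time course; the df=3 choice and time grid are illustrative, not the paper's settings): fit a natural cubic spline in time with statsmodels and F-test it against a flat model to flag genes that change over time.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(5)
      time = np.repeat([0, 1, 2, 4, 8, 12, 24], 3)    # hours, 3 replicates
      expr = np.log2(1 + 5 * np.exp(-time / 6.0)) + rng.normal(0, 0.15, time.size)
      gene = pd.DataFrame({"expr": expr, "time": time})

      full = smf.ols("expr ~ cr(time, df=3)", data=gene).fit()
      null = smf.ols("expr ~ 1", data=gene).fit()
      fstat, pval, _ = full.compare_f_test(null)      # time effect via splines
      print(f"F = {fstat:.2f}, p = {pval:.2g}")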

  29. Modeling the relationship between mean years of schooling and household expenditure in Central Sulawesi using constrained B-splines (COBS) in quantile regression

    Science.gov (United States)

    Hudoyo, Luhur Partomo; Andriyana, Yudhie; Handoko, Budhi

    2017-03-01

    Quantile regression describes the conditional distribution of a response variable at various desired quantile levels; each quantile characterizes a certain point (center or tail) of the conditional distribution. The analysis is particularly useful for asymmetric conditional distributions, e.g. distributions that are heavy in the tail, truncated distributions, and data with outliers. One nonparametric approach to estimating the conditional quantile function is constrained B-splines (COBS), a smoothing technique that accommodates the addition of constraints such as monotonicity, convexity and periodicity. In this study, we recast the conditional quantile minimization problem in COBS as a linear programming problem, i.e. the problem of minimizing or maximizing a linear function subject to linear constraints, which may be equalities or inequalities. We apply the method to the relationship between education (mean years of schooling) and economic level (household expenditure) in Central Java Province in 2014, for which household-level data provide systematic evidence of a positive relationship; monotonicity (increasing) constraints are therefore used in the COBS quantile regression model.
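
    A sketch of the LP formulation on synthetic data: median (tau = 0.5) B-spline regression written as a linear program, with the increasing constraint imposed through nondecreasing B-spline coefficients (a sufficient condition for a nondecreasing spline). scipy's linprog solves it; this illustrates the idea rather than reproducing the COBS package itself.

      import numpy as np
      from scipy.interpolate import BSpline
      from scipy.optimize import linprog

      rng = np.random.default_rng(6)
      x = np.sort(rng.uniform(0, 1, 150))
      y = 3 * x ** 2 + rng.normal(0, 0.2, x.size)      # increasing trend

      tau, k = 0.5, 3
      t = np.r_[[0.0] * k, np.linspace(0, 1, 8), [1.0] * k]
      B = BSpline.design_matrix(x, t, k).toarray()
      n, m = B.shape

      # Variables: spline coefficients a (free), check-loss residuals u, v >= 0
      c_obj = np.r_[np.zeros(m), tau * np.ones(n), (1 - tau) * np.ones(n)]
      A_eq = np.hstack([B, np.eye(n), -np.eye(n)])     # B a + u - v = y
      D = np.diff(np.eye(m), axis=0)                   # rows give a[j+1] - a[j]
      A_ub = np.hstack([-D, np.zeros((m - 1, 2 * n))]) # -(a[j+1]-a[j]) <= 0
      bounds = [(None, None)] * m + [(0, None)] * (2 * n)
      res = linprog(c_obj, A_ub=A_ub, b_ub=np.zeros(m - 1),
                    A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
      a_hat = res.x[:m]                                # monotone fit: B @ a_hat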

  30. Semisupervised feature selection via spline regression for video semantic recognition.

    Science.gov (United States)

    Han, Yahong; Yang, Yi; Yan, Yan; Ma, Zhigang; Sebe, Nicu; Zhou, Xiaofang

    2015-02-01

    To improve both the efficiency and accuracy of video semantic recognition, we can perform feature selection on the extracted video features to select a subset of features from the high-dimensional feature set for a compact and accurate video data representation. Provided the number of labeled videos is small, supervised feature selection could fail to identify the relevant features that are discriminative to target classes. In many applications, abundant unlabeled videos are easily accessible. This motivates us to develop semisupervised feature selection algorithms to better identify the relevant video features, which are discriminative to target classes, by effectively exploiting the information underlying the huge amount of unlabeled video data. In this paper, we propose a framework of video semantic recognition by semisupervised feature selection via spline regression (S²FS²R). Two scatter matrices are combined to capture both the discriminative information and the local geometry structure of labeled and unlabeled training videos: a within-class scatter matrix encoding discriminative information of labeled training videos and a spline scatter output from a local spline regression encoding data distribution. An l2,1-norm is imposed as a regularization term on the transformation matrix to ensure it is sparse in rows, making it particularly suitable for feature selection. To efficiently solve S²FS²R, we develop an iterative algorithm and prove its convergence. In the experiments, three typical tasks of video semantic recognition, video concept detection, video classification, and human action recognition, are used to demonstrate that the proposed S²FS²R achieves better performance compared with the state-of-the-art methods.

  31. Spline interpolations besides the Wood model widely used in lactation

    Science.gov (United States)

    Korkmaz, Mehmet

    2017-04-01

    In this study, spline interpolation, an alternative modeling approach that passes exactly through all data points, is discussed for the lactation curve and compared with the widely used Wood model applied to lactation data. The interpolating models are the linear spline, the quadratic spline and the cubic spline. The observed and estimated values from the spline interpolations and the Wood model are given with their error sums of squares, and the lactation curves of the spline interpolations and the widely used Wood model are shown on the same graph so that the differences can be observed. Estimates for some intermediate values were made using both the spline interpolations and the Wood model; with spline interpolation, the estimates of intermediate values could be made more precise. Furthermore, the values predicted by spline interpolation for missing or incorrect observations were very successful compared with those of the Wood model. Spline interpolation thus offers investigators new ideas and interpretations in addition to the information from the well-known classical analysis.
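
    A side-by-side Python sketch with illustrative test-day data: a cubic interpolating spline through every observation versus the Wood curve y(t) = a·t^b·exp(−c·t) fit by nonlinear least squares, both evaluated at intermediate days.

      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.optimize import curve_fit

      def wood(t, a, b, c):
          return a * t ** b * np.exp(-c * t)

      dim = np.array([15, 45, 75, 105, 135, 165, 195, 225, 255, 285])
      yld = np.array([18.0, 24.0, 26.0, 25.0, 23.0, 21.0, 19.0, 17.0, 15.0, 13.0])

      spline = CubicSpline(dim, yld)            # passes through every point
      popt, _ = curve_fit(wood, dim, yld, p0=[10, 0.3, 0.005])

      grid = np.array([30, 90, 150, 210, 270])  # intermediate days
      print(np.c_[grid, spline(grid), wood(grid, *popt)].round(2))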

  32. Non-stationary hydrologic frequency analysis using B-spline quantile regression

    Science.gov (United States)

    Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.

    2017-11-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, it is possible that the assumption of stationarity, which is a prerequisite for traditional frequency analysis, would no longer hold, and hence the results of conventional analysis would become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows modeling data in the presence of non-stationarity and/or dependence on covariates, with linear or non-linear dependence. A Markov chain Monte Carlo (MCMC) algorithm was used to estimate quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used to select the best model; i.e., for each quantile, we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharges with high annual non-exceedance probabilities.
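
    A compact frequentist sketch of B-spline quantile regression (statsmodels QuantReg with a patsy B-spline; the paper's Bayesian MCMC estimation and real climate covariates are replaced by synthetic stand-ins): estimating a high quantile of annual maxima as a function of a climate index.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(7)
      idx = rng.uniform(-2, 2, 300)             # synthetic climate index
      qmax = 100 + 30 * idx + rng.gamma(2.0, 10 + 5 * np.abs(idx), 300)
      flows = pd.DataFrame({"qmax": qmax, "idx": idx})

      fit95 = smf.quantreg("qmax ~ bs(idx, df=4)", flows).fit(q=0.95)
      print(fit95.params.round(2))              # 0.95-quantile spline fit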

  33. LIMIT STRESS SPLINE MODELS FOR GRP COMPOSITES

    African Journals Online (AJOL)

    ES OBE

    Department of Mechanical Engineering, Anambra State University of Science and Technology, Uli. … The optimization of quadratic and cubic models by gradient search optimization gave the critical strain as 0.024. …

  34. Non-Stationary Hydrologic Frequency Analysis using B-Splines Quantile Regression

    Science.gov (United States)

    Nasri, B.; St-Hilaire, A.; Bouezmarni, T.; Ouarda, T.

    2015-12-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic structures and water resources systems under the assumption of stationarity. However, with increasing evidence of a changing climate, it is possible that the assumption of stationarity is no longer valid and the results of conventional analysis become questionable. In this study, we consider a framework for frequency analysis of extreme flows based on B-spline quantile regression, which allows modeling non-stationary data that depend on covariates. The dependence on such covariates may be linear or nonlinear. A Markov chain Monte Carlo (MCMC) algorithm is used to estimate quantiles and their posterior distributions. A coefficient of determination for quantile regression is proposed to evaluate the estimation of the proposed model at each quantile level. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in these variables and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharges with high annual non-exceedance probabilities. Keywords: quantile regression, B-spline functions, MCMC, streamflow, climate indices, non-stationarity.

  35. A Spline-Based Lack-Of-Fit Test for Independent Variable Effect in Poisson Regression.

    Science.gov (United States)

    Li, Chin-Shang; Tu, Wanzhu

    2007-05-01

    In regression analysis of count data, independent variables are often modeled by their linear effects under the assumption of log-linearity. In reality, the validity of such an assumption is rarely tested, and its use is at times unjustifiable. A lack-of-fit test is proposed for the adequacy of a postulated functional form of an independent variable within the framework of semiparametric Poisson regression models based on penalized splines. It offers added flexibility in accommodating the potentially non-loglinear effect of the independent variable. A likelihood ratio test is constructed for the adequacy of the postulated parametric form, for example log-linearity, of the independent variable effect. Simulations indicate that the proposed model performs well and that a misspecified parametric model has much reduced power. An example is given.
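
    A hedged sketch of the test's logic in Python, with an unpenalized B-spline standing in for the paper's penalized-spline alternative (so the ordinary chi-square reference distribution applies): a likelihood ratio test of a log-linear Poisson fit against a spline fit in the same covariate, on synthetic data.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf
      from scipy import stats

      rng = np.random.default_rng(8)
      x = rng.uniform(0, 3, 500)
      y = rng.poisson(np.exp(0.3 + 0.8 * np.sqrt(x)))  # not log-linear in x
      counts = pd.DataFrame({"x": x, "y": y})

      m0 = smf.glm("y ~ x", data=counts, family=sm.families.Poisson()).fit()
      m1 = smf.glm("y ~ bs(x, df=5)", data=counts,
                   family=sm.families.Poisson()).fit()
      lr = 2 * (m1.llf - m0.llf)                       # likelihood ratio
      ddf = m1.df_model - m0.df_model
      print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, ddf):.3g}")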

  36. Limit Stress Spline Models for GRP Composites (Ihueze, Nigerian Journal of Technology)

    African Journals Online (AJOL)

    Spline functions were established on the assumption of three intervals and the fitting of quadratic and cubic splines to critical stress-strain response data. Quadratic … of data points. The spline model is therefore recommended, as it evaluates the function at subintervals, eliminating the error associated with wide-range interpolation.

  37. Modelling the trend of bovine spongiform encephalopathy prevalence in France: Use of restricted cubic spline regression in age-period-cohort models to estimate the efficiency of control measures.

    Science.gov (United States)

    Sala, Carole; Morignat, Eric; Ducrot, Christian; Calavas, Didier

    2009-07-01

    An age-period-cohort (APC) analysis was used to assess the trend in prevalence of bovine spongiform encephalopathy (BSE) in France over time in relation to the control measures adopted since the onset of the epidemic. Restricted cubic regression splines were used to model the functional forms of the non-linear effects of age at screening, birth cohort and date of diagnosis of the tested animals. The data from the 2001-2007 surveillance period were analysed using 1-year categorisation. A categorical analysis was performed as a control to check the accuracy of the sets of knots in the spline models, which were selected according to the Akaike Information Criterion (AIC). Knot selection was based on a priori knowledge of the disease and the dates of implementation of the five main BSE control measures. It was assumed that disease prevalence was a function of exposure to BSE and that changes in the exposure of cattle to BSE were mainly due to the control measures. The effects of the five main control measures were discussed in relation to the trend in BSE risk for the successive birth cohorts. The six selected models confirmed that all measures participated in disease control. However, characterization of the respective effect of individual measures was not straightforward due to the very low disease prevalence, incompletely tested cohorts, and the probably cumulative and overlapping effects of successive measures. The ban on importation of meat and bone meal (MBM) from the UK and the ban on the use of MBM in bovines were insufficient to control the epidemic. The decline in the BSE epidemic more likely originated from implementation of the ban on MBM use in all ruminants in 1994, whose effect was probably reinforced by the evolution in perception of the BSE risk following evidence of BSE transmission to humans. Finally, the respective effects of the last two measures (prohibition of the use of specified risk material in 1996 and the total MBM ban in 2000) could not be characterized as …

  18. Using Spline Regression in Semi-Parametric Stochastic Frontier Analysis: An Application to Polish Dairy Farms

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    -parametric regression based on kernel estimators. This approach combines the virtues of the DEA and the SFA, while avoiding their drawbacks: it avoids the specification of a functional form and at the same time accounts for statistical noise. More recently, this approach was used by Henderson and Simar (2005...... is criticised, because it cannot account for statistical noise such as random production shocks and measurement errors, which are inherent in more or less all production data sets. In contrast, the SFA is criticised, because it requires the specification of a functional form, which involves the risk......), Kumbhakar et al. (2007), and Henningsen and Kumbhakar (2009). The aim of this paper and its main contribution to the existing literature is the estimation of semi-parametric stochastic frontier models using a different non-parametric estimation technique: spline regression (Ma et al. 2011). We apply...

  19. Using Spline Regression in Semi-Parametric Stochastic Frontier Analysis: An Application to Polish Dairy Farms

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    The estimation of the technical efficiency comprises a vast literature in the field of applied production economics. There are two predominant approaches: the non-parametric and non-stochastic Data Envelopment Analysis (DEA) and the parametric Stochastic Frontier Analysis (SFA). The DEA...... of specifying an unsuitable functional form and thus, model misspecification and biased parameter estimates. Given these problems of the DEA and the SFA, Fan, Li and Weersink (1996) proposed a semi-parametric stochastic frontier model that estimates the production function (frontier) by non-parametric......), Kumbhakar et al. (2007), and Henningsen and Kumbhakar (2009). The aim of this paper and its main contribution to the existing literature is the estimation of semi-parametric stochastic frontier models using a different non-parametric estimation technique: spline regression (Ma et al. 2011). We apply...

  20. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age

    Directory of Open Access Journals (Sweden)

    Marko Wilke

    2018-02-01

    This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1–75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php. Keywords: MRI template creation, Multivariate adaptive regression splines, DARTEL, Structural MRI

  1. USING MULTIVARIATE ADAPTIVE REGRESSION SPLINE AND ARTIFICIAL NEURAL NETWORK TO SIMULATE URBANIZATION IN MUMBAI, INDIA

    Directory of Open Access Journals (Sweden)

    M. Ahmadlou

    2015-12-01

    Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS), and a global parametric model called artificial neural network (ANN), to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of the two models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS to simulate urban areas in Mumbai, India.

  2. Using Multivariate Adaptive Regression Spline and Artificial Neural Network to Simulate Urbanization in Mumbai, India

    Science.gov (United States)

    Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.

    2015-12-01

    Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS), and a global parametric model called artificial neural network (ANN), to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of the two models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS to simulate urban areas in Mumbai, India.
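
    A hedged sketch of such a comparison on synthetic stand-ins for the spatial drivers, assuming the third-party py-earth package for MARS and scikit-learn for the ANN and the ROC computation; this is not the authors' pipeline.

```python
# Sketch: comparing a MARS model and a small ANN by area under the ROC curve
# for a binary (urban / non-urban) outcome. Data are synthetic stand-ins.
import numpy as np
from pyearth import Earth                      # pip install sklearn-contrib-py-earth
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 9))                 # e.g. distances, density, slope
p = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * np.maximum(X[:, 1], 0))))
y = rng.binomial(1, p)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

mars = Earth(max_degree=2).fit(Xtr, ytr)       # continuous score, fine for ROC
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(Xtr, ytr)

print("MARS AUC:", roc_auc_score(yte, mars.predict(Xte)))
print("ANN  AUC:", roc_auc_score(yte, ann.predict_proba(Xte)[:, 1]))
```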

  3. Input point distribution for regular stem form spline modeling

    Directory of Open Access Journals (Sweden)

    Karel Kuželka

    2015-04-01

    Aim of study: To optimize an interpolation method and the distribution of measured diameters to represent the regular stem form of coniferous trees using a set of discrete points. Area of study: Central-Bohemian highlands, Czech Republic; a region that represents average stand conditions of production forests of Norway spruce (Picea abies [L.] Karst.) in central Europe. Material and methods: The accuracy of stem curves modeled using natural cubic splines from a set of measured diameters was evaluated for 85 closely measured stems of Norway spruce using five statistical indicators and compared to the accuracy of three additional models based on different spline types selected for their ability to represent stem curves. The optimal positions to measure diameters were identified using an aggregate objective function approach. Main results: The optimal positions of the input points vary depending on the properties of each spline type. If the optimal input points for each spline are used, then all spline types are able to give reasonable results with higher numbers of input points. The commonly used natural cubic spline was outperformed by other spline types. The lowest errors occur when interpolating the points using the Catmull-Rom spline, which gives accurate and unbiased volume estimates, even with only five input points. Research highlights: The study contributes to a more accurate representation of stem form and therefore more accurate estimation of stem volume using data obtained from terrestrial imagery or other close-range remote sensing methods.
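
    For illustration, a uniform Catmull-Rom spline (the best-performing type in this study) can be evaluated with a few lines of NumPy; the stem measurements below are invented, not the study's data.

```python
# Sketch: uniform Catmull-Rom interpolation of a stem profile (height vs.
# diameter). The measured diameters below are made up for illustration.
import numpy as np

def catmull_rom(p, n_per_seg=20):
    """Evaluate a uniform Catmull-Rom spline through points p (shape (m, 2))."""
    p = np.asarray(p, dtype=float)
    # Duplicate the end points so the curve passes through all input points.
    p = np.vstack([p[0], p, p[-1]])
    out = []
    for i in range(1, len(p) - 2):
        p0, p1, p2, p3 = p[i - 1], p[i], p[i + 1], p[i + 2]
        t = np.linspace(0, 1, n_per_seg, endpoint=False)[:, None]
        out.append(0.5 * ((2 * p1)
                          + (-p0 + p2) * t
                          + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                          + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3))
    out.append(p[-2][None, :])                  # close with the last point
    return np.vstack(out)

# Five (height [m], diameter [cm]) measurements along a spruce stem.
points = [(0.0, 32.0), (1.3, 28.5), (7.0, 22.0), (15.0, 12.0), (25.0, 0.5)]
curve = catmull_rom(points)
print(curve[:3])
```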

  4. Identification of Hammerstein models with cubic spline nonlinearities.

    Science.gov (United States)

    Dempsey, Erika J; Westwick, David T

    2004-02-01

    This paper considers the use of cubic splines, instead of polynomials, to represent the static nonlinearities in block structured models. It introduces a system identification algorithm for the Hammerstein structure, a static nonlinearity followed by a linear filter, where cubic splines represent the static nonlinearity and the linear dynamics are modeled using a finite impulse response filter. The algorithm uses a separable least squares Levenberg-Marquardt optimization to identify Hammerstein cascades whose nonlinearities are modeled by either cubic splines or polynomials. These algorithms are compared in simulation, where the effects of variations in the input spectrum and distribution, and those of the measurement noise are examined. The two algorithms are used to fit Hammerstein models to stretch reflex electromyogram (EMG) data recorded from a spinal cord injured patient. The model with the cubic spline nonlinearity provides more accurate predictions of the reflex EMG than the polynomial based model, even in novel data.
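
    A simplified sketch of the structure being identified, assuming synthetic data: the static nonlinearity is a cubic spline through adjustable knot values and the dynamics are an FIR filter, fitted here by a plain joint least-squares call rather than the paper's separable Levenberg-Marquardt scheme.

```python
# Sketch: fitting a Hammerstein cascade (static nonlinearity -> FIR filter)
# where the nonlinearity is a cubic spline through adjustable knot values.
# Simplified joint fit; not the paper's separable least-squares algorithm.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
u = rng.uniform(-1, 1, 1000)                   # input signal
true_h = np.array([1.0, 0.6, 0.3, 0.1])        # unknown FIR filter
y = np.convolve(u ** 3, true_h)[: len(u)] + 0.01 * rng.normal(size=len(u))

knots = np.linspace(-1, 1, 7)
n_h = 4

def residuals(theta):
    nl_vals, h = theta[: len(knots)], theta[len(knots):]
    z = CubicSpline(knots, nl_vals)(u)         # spline nonlinearity output
    return np.convolve(z, h)[: len(u)] - y

h0 = np.zeros(n_h); h0[0] = 1.0
theta0 = np.concatenate([knots, h0])           # identity NL, unit filter
fit = least_squares(residuals, theta0)
print("estimated FIR taps:", np.round(fit.x[len(knots):], 3))
# Note the gain ambiguity: scaling the spline up and the filter down leaves
# the output unchanged, so only the overall cascade is identifiable.
```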

  5. APLIKASI SPLINE ESTIMATOR TERBOBOT (Application of the Weighted Spline Estimator)

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2001-01-01

    We consider the nonparametric regression model: Zj = X(tj) + ej, j = 1,2,…,n, where X(tj) is the regression curve and the random errors ej are independently normally distributed with zero mean and variance σ²/bj, bj > 0. The estimate of the regression curve X is obtained by minimizing a weighted penalized least-squares criterion, and the solution is a weighted natural polynomial spline. We then give an application of the weighted spline estimator in nonparametric regression. Keywords: weighted spline, nonparametric regression, penalized least squares.

  6. Subpixel Snow Cover Mapping from MODIS Data by Nonparametric Regression Splines

    Science.gov (United States)

    Akyurek, Z.; Kuter, S.; Weber, G. W.

    2016-12-01

    Spatial extent of snow cover is often considered as one of the key parameters in climatological, hydrological and ecological modeling due to its energy storage, high reflectance in the visible and NIR regions of the electromagnetic spectrum, significant heat capacity and insulating properties. A significant challenge in snow mapping by remote sensing (RS) is the trade-off between the temporal and spatial resolution of satellite imagery. In order to tackle this issue, machine learning-based subpixel snow mapping methods, like Artificial Neural Networks (ANNs), from low or moderate resolution images have been proposed. Multivariate Adaptive Regression Splines (MARS) is a nonparametric regression tool that can build flexible models for high dimensional and complex nonlinear data. Although MARS is not often employed in RS, it has various successful implementations such as estimation of vertical total electron content in the ionosphere, atmospheric correction and classification of satellite images. This study is the first attempt in RS to evaluate the applicability of MARS for subpixel snow cover mapping from MODIS data. In total, 16 MODIS-Landsat ETM+ image pairs taken over the European Alps between March 2000 and April 2003 were used in the study. MODIS top-of-atmospheric reflectance, NDSI, NDVI and land cover classes were used as predictor variables. Cloud-covered, cloud shadow, water and bad-quality pixels were excluded from further analysis by a spatial mask. MARS models were trained and validated by using reference fractional snow cover (FSC) maps generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also developed. The mutual comparison of the obtained MARS and ANN models was accomplished on independent test areas. The MARS model performed better than the ANN model with an average RMSE of 0.1288 over the independent test areas; whereas the average RMSE of the ANN model

  7. Forecasting the daily power output of a grid-connected photovoltaic system based on multivariate adaptive regression splines

    International Nuclear Information System (INIS)

    Li, Yanting; He, Yong; Su, Yan; Shu, Lianjie

    2016-01-01

    Highlights:
    • Suggests a nonparametric model based on MARS for output power prediction.
    • Compares the MARS model with a wide variety of prediction models.
    • Shows that the MARS model provides an overall good performance in both the training and testing stages.

    Abstract: Both linear and nonlinear models have been proposed for forecasting the power output of photovoltaic systems. Linear models are simple to implement but less flexible. Due to the stochastic nature of the power output of PV systems, nonlinear models tend to provide better forecasts than linear models. Motivated by this, this paper suggests a fairly simple nonlinear regression model known as multivariate adaptive regression splines (MARS) as an alternative for forecasting solar power output. The MARS model is a data-driven modeling approach without any assumption about the relationship between the power output and predictors. It maintains the simplicity of the classical multiple linear regression (MLR) model while possessing the capability of handling nonlinearity. It is simpler in format than other nonlinear models such as ANN, k-nearest neighbors (KNN), classification and regression tree (CART), and support vector machine (SVM). The MARS model was applied to the daily output of a grid-connected 2.1 kW PV system to provide the 1-day-ahead mean daily forecast of the power output. Comparisons with a wide variety of forecast models show that the MARS model is able to provide reliable forecast performance.
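
    The hinge functions underlying MARS are easy to demonstrate; the toy sketch below fixes the knots and fits the resulting piecewise-linear basis by ordinary least squares, whereas real MARS selects knots and terms adaptively. The data are invented, not the PV system's output.

```python
# Sketch: the building block behind MARS -- hinge functions max(0, x - t) and
# max(0, t - x) -- used as fixed basis features in a linear regression.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 300)                              # e.g. daily irradiation
y = np.clip(x - 3, 0, 4) + 0.2 * rng.normal(size=300)    # saturating response

knots = np.array([2.0, 4.0, 6.0, 8.0])
X = np.column_stack([np.ones_like(x)]
                    + [np.maximum(0, x - t) for t in knots]
                    + [np.maximum(0, t - x) for t in knots])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("basis coefficients:", np.round(coef, 2))
```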

  8. Predictors of anemia after bariatric surgery using multivariate adaptive regression splines.

    Science.gov (United States)

    Lee, Yi-Chih; Lee, Tian-Shyug; Lee, Wei-Jei; Lin, Yang-Chu; Lee, Chia-Ko; Liew, Phui-Ly

    2012-01-01

    Anemia is the most common nutritional deficiency after bariatric surgery. The predictors of anemia have not been clearly identified; this issue is useful for selecting an appropriate surgical procedure for morbid obesity. From December 2000 to October 2007, a retrospective study of 442 obese patients after bariatric surgery with two years' follow-up data was conducted. Anemia was defined by hemoglobin (Hb) under 13 g/dL in males and 11.5 g/dL in females. We analyzed the clinical information and laboratory data collected during the initial evaluation of patients referred to bariatric surgery for predictors of anemia development after surgery. All data were analyzed using the multivariate adaptive regression splines (MARS) method. The mean age of the patients was 30.8±8.6 years, mean BMI was 40.7±7.8 kg/m², and preoperative mean hemoglobin was 13.7±1.5 g/dL. The prevalence of anemia increased from 5.4% preoperatively to 38.0% two years after surgery. Mean Hb was significantly lower in patients receiving gastric bypass than in restrictive-type surgery (11.9 g/dL vs. 13.1 g/dL, p=0.040) two years after surgery. The optimal preoperative value of hemoglobin to predict future anemia in the MARS model was 15.6 g/dL, which is important in the preoperative assessment for bariatric surgery. Patients who underwent gastric bypass surgery developed more severe anemia than those with gastric banding or sleeve gastrectomy.

  9. Boosted regression trees, multivariate adaptive regression splines and their two-step combinations with multiple linear regression or partial least squares to predict blood-brain barrier passage: a case study.

    Science.gov (United States)

    Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y

    2008-02-18

    The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structurally and pharmacologically diverse drugs, originating from a structured knowledge-based database. Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that the use of combinations of a linear with a non-linear modeling technique results in some improved properties compared to the individual linear and non-linear models and that, when the use of such a combination is appropriate, combinations using MARS as the non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches.

  10. Quantile regression and restricted cubic splines are useful for exploring relationships between continuous variables.

    Science.gov (United States)

    Marrie, Ruth Ann; Dawson, Neal V; Garland, Allan

    2009-05-01

    Ordinary least squares (OLS) regression, commonly called linear regression, is often used to assess, or adjust for, the relationship between a continuous independent variable and the mean of a continuous dependent variable, implicitly assuming a linear relationship between them. Linearity may not hold, however, and analyzing the mean of the dependent variable may not capture the full nature of such relationships. Our goal is to demonstrate how combined use of quantile regression and restricted cubic splines (RCS) can reveal the true nature and complexity of relationships between continuous variables. We provide a review of methodologic concepts, followed by two examples using real data sets. In the first example, we analyzed the relationship between cognition and disease duration in multiple sclerosis. In the second example, we analyzed the relationship between length of stay (LOS) and severity of illness in the intensive care unit (ICU). In both examples, quantile regression showed that the relationship between the variables of interest was heterogeneous. In the second example, RCS uncovered nonlinearity of the relationship between severity of illness and length of stay. Together, quantile regression and RCS are a powerful combination for exploring relationships between continuous variables.
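
    A minimal sketch of the recommended combination, assuming synthetic severity/LOS data: statsmodels' QuantReg fitted at several quantiles with patsy's natural cubic spline basis cr().

```python
# Sketch: quantile regression combined with a restricted cubic spline.
# Synthetic stand-ins for the ICU severity / length-of-stay example.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
severity = rng.uniform(0, 100, 1000)
# Heterogeneous, non-linear relationship: spread grows with severity.
los = 1 + 0.002 * severity**1.5 + rng.exponential(0.5 + severity / 50, 1000)
df = pd.DataFrame({"severity": severity, "los": los})

for q in (0.25, 0.50, 0.90):
    fit = smf.quantreg("los ~ cr(severity, df=4)", data=df).fit(q=q)
    print(f"quantile {q}:", np.round(fit.params.values, 3))
```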

  11. A Note on Penalized Regression Spline Estimation in the Secondary Analysis of Case-Control Data

    KAUST Repository

    Gazioglu, Suzan

    2013-05-25

    Primary analysis of case-control studies focuses on the relationship between disease (D) and a set of covariates of interest (Y, X). A secondary application of the case-control study, often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated due to the case-control sampling, and to avoid the biased sampling that arises from the design, it is typical to use the control data only. In this paper, we develop penalized regression spline methodology that uses all the data, and improves precision of estimation compared to using only the controls. A simulation study and an empirical example are used to illustrate the methodology.

  12. Modelling Childhood Growth Using Fractional Polynomials and Linear Splines

    Science.gov (United States)

    Tilling, Kate; Macdonald-Wallis, Corrie; Lawlor, Debbie A.; Hughes, Rachael A.; Howe, Laura D.

    2014-01-01

    Background There is increasing emphasis in medical research on modelling growth across the life course and identifying factors associated with growth. Here, we demonstrate multilevel models for childhood growth either as a smooth function (using fractional polynomials) or a set of connected linear phases (using linear splines). Methods We related parental social class to height from birth to 10 years of age in 5,588 girls from the Avon Longitudinal Study of Parents and Children (ALSPAC). Multilevel fractional polynomial modelling identified the best-fitting model as being of degree 2 with powers of the square root of age, and the square root of age multiplied by the log of age. The multilevel linear spline model identified knot points at 3, 12 and 36 months of age. Results Both the fractional polynomial and linear spline models show an initially fast rate of growth, which slowed over time. Both models also showed that there was a disparity in length between manual and non-manual social class infants at birth, which decreased in magnitude until approximately 1 year of age and then increased. Conclusions Multilevel fractional polynomials give a more realistic smooth function, and linear spline models are easily interpretable. Each can be used to summarise individual growth trajectories and their relationships with individual-level exposures. PMID:25413651

  13. Flexible survival regression modelling

    DEFF Research Database (Denmark)

    Cortese, Giuliana; Scheike, Thomas H; Martinussen, Torben

    2009-01-01

    Regression analysis of survival data, and more generally event history data, is typically based on Cox's regression model. We here review some recent methodology, focusing on the limitations of Cox's regression model. The key limitation is that the model is not well suited to represent time-varyi...

  14. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX.

    Science.gov (United States)

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties that arise when the kernel and regression spline methods are combined in a nested fashion had not been studied prior to this work, even in the independent data case.

  15. Global Nonlinear Model Identification with Multivariate Splines

    NARCIS (Netherlands)

    De Visser, C.C.

    2011-01-01

    At present, model based control systems play an essential role in many aspects of modern society. Application areas of model based control systems range from food processing to medical imaging, and from process control in oil refineries to the flight control systems of modern aircraft. Central to a

  16. Quantitative structure-activity relationship study on BTK inhibitors by modified multivariate adaptive regression spline and CoMSIA methods.

    Science.gov (United States)

    Xu, A; Zhang, Y; Ran, T; Liu, H; Lu, S; Xu, J; Xiong, X; Jiang, Y; Lu, T; Chen, Y

    2015-01-01

    Bruton's tyrosine kinase (BTK) plays a crucial role in B-cell activation and development, and has emerged as a new molecular target for the treatment of autoimmune diseases and B-cell malignancies. In this study, two- and three-dimensional quantitative structure-activity relationship (2D and 3D-QSAR) analyses were performed on a series of pyridine- and pyrimidine-based BTK inhibitors by means of genetic algorithm optimized multivariate adaptive regression spline (GA-MARS) and comparative molecular similarity index analysis (CoMSIA) methods. Here, we propose a modified MARS algorithm to develop 2D-QSAR models. The top ranked models showed satisfactory statistical results (2D-QSAR: Q² = 0.884, r² = 0.929, r²pred = 0.878; 3D-QSAR: q² = 0.616, r² = 0.987, r²pred = 0.905). Key descriptors selected by 2D-QSAR were in good agreement with the conclusions of 3D-QSAR, and the 3D-CoMSIA contour maps facilitated interpretation of the structure-activity relationship. A new molecular database was generated by molecular fragment replacement (MFR) and further evaluated with GA-MARS and CoMSIA prediction. Twenty-five pyridine and pyrimidine derivatives were finally selected as novel potential BTK inhibitors for further study. These results also demonstrated that our method can be a very efficient tool for the discovery of novel potent BTK inhibitors.

  17. Modeling and testing treated tumor growth using cubic smoothing splines.

    Science.gov (United States)

    Kong, Maiying; Yan, Jun

    2011-07-01

    Human tumor xenograft models are often used in preclinical study to evaluate the therapeutic efficacy of a certain compound or a combination of certain compounds. In a typical human tumor xenograft model, human carcinoma cells are implanted to subjects such as severe combined immunodeficient (SCID) mice. Treatment with test compounds is initiated after tumor nodule has appeared, and continued for a certain time period. Tumor volumes are measured over the duration of the experiment. It is well known that untreated tumor growth may follow certain patterns, which can be described by certain mathematical models. However, the growth patterns of the treated tumors with multiple treatment episodes are quite complex, and the usage of parametric models is limited. We propose using cubic smoothing splines to describe tumor growth for each treatment group and for each subject, respectively. The proposed smoothing splines are quite flexible in modeling different growth patterns. In addition, using this procedure, we can obtain tumor growth and growth rate over time for each treatment group and for each subject, and examine whether tumor growth follows certain growth pattern. To examine the overall treatment effect and group differences, the scaled chi-squared test statistics based on the fitted group-level growth curves are proposed. A case study is provided to illustrate the application of this method, and simulations are carried out to examine the performances of the scaled chi-squared tests.

  18. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  19. Diffeomorphism Spline

    Directory of Open Access Journals (Sweden)

    Wei Zeng

    2015-04-01

    Conventional splines offer powerful means for modeling surfaces and volumes in three-dimensional Euclidean space. One-dimensional quaternion splines have been applied for animation purposes, where the splines are defined to model a one-dimensional submanifold in the three-dimensional Lie group. Given two surfaces, all of the diffeomorphisms between them form an infinite-dimensional manifold, the so-called diffeomorphism space. In this work, we propose a novel scheme to model finite-dimensional submanifolds in the diffeomorphism space by generalizing conventional splines. By quasiconformal geometry, each diffeomorphism determines a Beltrami differential on the source surface; conversely, the diffeomorphism is determined by its Beltrami differential under normalization conditions. Therefore, the diffeomorphism space is in one-to-one correspondence with the space of a special differential form. The convex combination of Beltrami differentials is still a Beltrami differential. Therefore, the conventional spline scheme can be generalized to the Beltrami differential space and, consequently, to the diffeomorphism space. Our experiments demonstrate the efficiency and efficacy of diffeomorphism splines. The diffeomorphism spline has many potential applications, such as surface registration, tracking and animation.

  20. The analysis of internet addiction scale using multivariate adaptive regression splines.

    Science.gov (United States)

    Kayri, M

    2010-01-01

    Determining the real effects on Internet dependency calls for unbiased and robust statistical methods. MARS is a relatively new non-parametric method used in the literature for parameter estimation in cause-and-effect research. MARS can both produce legible model curves and make unbiased parametric predictions. In order to examine the performance of MARS, the MARS findings were compared to Classification and Regression Tree (C&RT) findings, which are considered in the literature to be efficient in revealing correlations between variables. The data set for the study was taken from "The Internet Addiction Scale" (IAS), which attempts to reveal the addiction levels of individuals. The population of the study consists of 754 secondary school students (301 female and 443 male students, with 10 missing data). The MARS 2.0 trial version was used for the MARS analysis, and the C&RT analysis was done in SPSS. MARS produced six basis functions for the model, and the regression equation of the model was obtained as their combination. MARS showed that daily average Internet-use time, the purpose of Internet use, students' grade and mothers' occupations had a significant effect (P < 0.05) on dependency level prediction. It was also observed that MARS reveals the extent to which a variable considered significant changes the character of the model.

  1. Momentum analysis by using a quintic spline model for the track

    CERN Document Server

    Wind, H

    1974-01-01

    A method is described to determine the momentum of a particle when the (inhomogeneous) analysing magnetic field and the position of at least three points on the track are known. The model of the field is essentially a cubic spline and that of the track a quintic spline. (8 refs).

  2. A Multidimensional Spline Based Global Nonlinear Aerodynamic Model for the Cessna Citation II

    NARCIS (Netherlands)

    De Visser, C.C.; Mulder, J.A.

    2010-01-01

    A new method is proposed for the identification of global nonlinear models of aircraft non-dimensional force and moment coefficients. The method is based on a recent type of multivariate spline, the multivariate simplex spline, which can accurately approximate very large, scattered nonlinear

  3. Cubic spline function interpolation in atmosphere models for the software development laboratory: Formulation and data

    Science.gov (United States)

    Kirkpatrick, J. C.

    1976-01-01

    A tabulation of selected altitude-correlated values of pressure, density, speed of sound, and coefficient of viscosity for each of six models of the atmosphere is presented in block data format. Interpolation for the desired atmospheric parameters is performed by using cubic spline functions. The recursive relations necessary to compute the cubic spline function coefficients are derived and implemented in subroutine form. Three companion subprograms, which form the preprocessor and processor, are also presented. These subprograms, together with the data element, compose the spline fit atmosphere package. Detailed FLOWGM flow charts and FORTRAN listings of the atmosphere package are presented in the appendix.
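
    The same interpolation is a one-liner today; a sketch with SciPy's CubicSpline on a short table of approximate standard-atmosphere pressures, interpolating log-pressure for better numerical behavior over a wide altitude range.

```python
# Sketch: cubic-spline interpolation of tabulated atmosphere parameters.
# SciPy's CubicSpline replaces the hand-coded recursive coefficient
# computation; the short table holds approximate standard-atmosphere values.
import numpy as np
from scipy.interpolate import CubicSpline

altitude_m = np.array([0, 1000, 2000, 5000, 10000, 20000])
pressure_pa = np.array([101325, 89875, 79495, 54020, 26436, 5475])

# Interpolating log-pressure keeps the spline well behaved over a wide range.
spline = CubicSpline(altitude_m, np.log(pressure_pa))
print("p(3500 m) ≈", np.exp(spline(3500.0)), "Pa")
```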

  4. Smoothing X-ray spectra with regression splines and fast Fourier transform; Wygladzanie widm promieniowania X metodami regresyjnych funkcji sklejanych i szybkiej transformaty Fouriera

    Energy Technology Data Exchange (ETDEWEB)

    Antoniak, W.; Urbanski, P. [Institute of Nuclear Chemistry and Technology, Warsaw (Poland)

    1996-12-31

    Regression splines and fast Fourier transform (FFT) methods were used for smoothing the X-ray spectra obtained from proportional counters. The programs for computation and optimization of the smoothed spectra were written in MATLAB. It was shown that application of the smoothed spectra in multivariate calibration can result in a considerable reduction of measurement errors. (author). 8 refs, 9 figs.

  5. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    Science.gov (United States)

    Michael S. Balshi; A. David McGuire; Paul Duffy; Mike Flannigan; John Walsh; Jerry Melillo

    2009-01-01

    We developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was...

  6. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62 and 54 % increase in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
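
    The computational core is sampling a double-truncated normal, which SciPy provides directly; a sketch with illustrative parameter values (SciPy's truncnorm takes the bounds in standard-deviation units).

```python
# Sketch: drawing from a double-truncated normal, the key full-conditional
# ingredient that makes the paper's Gibbs sampler possible. Values made up.
from scipy.stats import truncnorm

mu, sigma = 0.3, 0.8          # conditional mean / sd of the latent covariate
lo, hi = -1.0, 1.5            # truncation interval
a, b = (lo - mu) / sigma, (hi - mu) / sigma   # standardized bounds
draws = truncnorm.rvs(a, b, loc=mu, scale=sigma, size=5, random_state=6)
print(draws)
# Per the paper, the complete conditional for a degree-one spline model is a
# mixture of such double-truncated normals, one component per regime.
```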

  7. SPLINE, Spline Interpolation Function

    International Nuclear Information System (INIS)

    Allouard, Y.

    1977-01-01

    1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to the (2Q-1) order. The program consists of the following two subprograms: ASPLERQ. Transport of relations method for the spline functions of interpolation. SPLQ. Spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10

  8. Multivariate Adaptive Regression Splines (MARS): An Alternative for Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Jairo Vanegas

    2017-05-01

    Multivariate Adaptive Regression Splines (MARS) is a non-parametric modelling method that extends the linear model by incorporating nonlinearities and interactions between variables. It is a flexible tool that automates the construction of predictive models: it selects relevant variables, transforms the predictor variables, handles missing values and prevents overfitting through self-testing. It also allows prediction that takes into account structural factors that might influence the response variable, thereby generating hypothetical models. The final result can be used to identify relevant cut-off points in data series. Since the method is rarely used in the health field, it is proposed here as an additional tool for the evaluation of relevant public health indicators. For demonstration purposes, series of mortality data for children under 5 years of age in Costa Rica for the period 1978-2008 were used.

  9. Modeling positional effects of regulatory sequences with spline transformations increases prediction accuracy of deep neural networks.

    Science.gov (United States)

    Avsec, Žiga; Barekatain, Mohammadamin; Cheng, Jun; Gagneur, Julien

    2017-11-16

    Regulatory sequences are not solely defined by their nucleic acid sequence but also by their relative distances to genomic landmarks such as transcription start site, exon boundaries, or polyadenylation site. Deep learning has become the approach of choice for modeling regulatory sequences because of its strength to learn complex sequence features. However, modeling relative distances to genomic landmarks in deep neural networks has not been addressed. Here we developed spline transformation, a neural network module based on splines to flexibly and robustly model distances. Modeling distances to various genomic landmarks with spline transformations significantly increased state-of-the-art prediction accuracy of in vivo RNA-binding protein binding sites for 120 out of 123 proteins. We also developed a deep neural network for the human splice branchpoint based on spline transformations that outperformed the current best, already distance-based, machine learning model. Compared to piecewise linear transformation, as obtained by composition of rectified linear units, spline transformation yields higher prediction accuracy as well as faster and more robust training. As spline transformation can be applied to further quantities beyond distances, such as methylation or conservation, we foresee it as a versatile component in the genomics deep learning toolbox. Spline transformation is implemented as a Keras layer in the CONCISE python package: https://github.com/gagneurlab/concise. Analysis code is available at goo.gl/3yMY5w.
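
    The underlying idea, a smooth basis expansion of a distance that downstream layers can weight, can be sketched with patsy's B-spline basis; this is a generic illustration, not the CONCISE Keras layer itself.

```python
# Sketch: B-spline basis expansion of genomic distances -- the smooth features
# a downstream linear or neural-network layer can weight. Toy distances.
import numpy as np
from patsy import dmatrix

rng = np.random.default_rng(7)
dist_to_tss = rng.integers(0, 10000, size=8)      # toy distances in bp
basis = dmatrix("bs(x, df=6) - 1",
                {"x": np.log1p(dist_to_tss)})     # log scale helps
print(np.asarray(basis).shape)   # (8, 6): six smooth features per distance
```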

  10. Logistic regression models

    CERN Document Server

    Hilbe, Joseph M

    2009-01-01

    This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect: great clarity. The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted, so the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … -Annette J. Dobson, Biometric...

  11. Genetic and environmental smoothing of lactation curves with cubic splines.

    Science.gov (United States)

    White, I M; Thompson, R; Brotherstone, S

    1999-03-01

    Most approaches to modeling lactation curves involve parametric curves with fixed or random coefficients. In either case, the resulting models require the specification of an underlying parametric curve. The fitting of splines represents a semiparametric approach to the problem. In the context of animal breeding, cubic smoothing splines are particularly convenient because they can be incorporated into a suitably constructed mixed model. The potential for the use of splines in modeling lactation curves is explored with a simple example, and the results are compared with those using a random regression model. The spline model provides greater flexibility at the cost of additional computation. Splines are shown to be capable of picking up features of the lactation curve that are missed by the random regression model.
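
    A hedged sketch of the fitting step on simulated test-day data: SciPy's make_smoothing_spline (SciPy >= 1.10) solves the same penalized least-squares problem that the mixed-model formulation exploits, with the smoothing parameter chosen by generalized cross-validation.

```python
# Sketch: a cubic smoothing spline fitted to noisy lactation-style data.
# Simulated Wood-type test-day yields; requires SciPy >= 1.10.
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(8)
days = np.sort(rng.uniform(5, 305, 60))                     # days in milk
yield_kg = 20 + 15 * (days / 60) * np.exp(1 - days / 60)    # peaks near day 60
yield_kg += rng.normal(0, 1.5, size=days.size)

spline = make_smoothing_spline(days, yield_kg)              # GCV-chosen lambda
print("predicted yield at day 150:", float(spline(150.0)))
```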

  12. Modeling Seismic Wave Propagation Using Time-Dependent Cauchy-Navier Splines

    Science.gov (United States)

    Kammann, P.

    2005-12-01

    Our intention is the modeling of seismic wave propagation from displacement measurements by seismographs at the Earth's surface. The elastic behaviour of the Earth is usually described by the Cauchy-Navier equation. A system of fundamental solutions for the Fourier-transformed Cauchy-Navier equation is given by the Hansen vectors L, M and N. We apply an inverse Fourier transform to obtain an orthonormal function system depending on time and space. By means of this system we construct certain splines, which are then used for interpolating the given data. Compared to polynomial interpolation, splines have the advantage that they minimize a curvature measure and are, therefore, smoother. First, we test this method on a synthetic wave function. Afterwards, we apply it to realistic earthquake data. (P. Kammann, Modelling Seismic Wave Propagation Using Time-Dependent Cauchy-Navier Splines, Diploma Thesis, Geomathematics Group, Department of Mathematics, University of Kaiserslautern, 2005)

  13. (Non) linear regression modelling

    NARCIS (Netherlands)

    Cizek, P.; Gentle, J.E.; Hardle, W.K.; Mori, Y.

    2012-01-01

    We will study causal relationships of a known form between random variables. Given a model, we distinguish one or more dependent (endogenous) variables Y = (Y1,…,Yl), l ∈ N, which are explained by a model, and independent (exogenous, explanatory) variables X = (X1,…,Xp),p ∈ N, which explain or

  14. Validating the Multidimensional Spline Based Global Aerodynamic Model for the Cessna Citation II

    NARCIS (Netherlands)

    De Visser, C.C.; Mulder, J.A.

    2011-01-01

    The validation of aerodynamic models created using flight test data is a time consuming and often costly process. In this paper a new method for the validation of global nonlinear aerodynamic models based on multivariate simplex splines is presented. This new method uses the unique properties of the

  15. Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines

    Science.gov (United States)

    Tan, Yunhao; Hua, Jing; Qin, Hong

    2009-01-01

    In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines which can represent with accuracy geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior because it can unify the geometric and material properties in the simulation. The visualization can be directly computed from the object’s geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation without interpolation or resampling. We have applied the framework for biomechanic simulation of brain deformations, such as brain shifting during the surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and the real biomechanic experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636

  16. Non polynomial B-splines

    Science.gov (United States)

    Lakså, Arne

    2015-11-01

    B-splines are the de facto industrial standard for surface modelling in computer-aided design. They are comparable to bent flexible rods of wood or metal: a flexible rod minimizes its energy when bending, and a third-degree polynomial spline curve minimizes its second derivatives. The B-spline form is a convenient way of representing polynomial splines; it connects polynomial splines to corner-cutting techniques, which induces many nice and useful properties. However, the B-spline representation can be expanded to what we may call general B-splines, i.e. both polynomial and non-polynomial splines. We show how this expansion can be done, the properties it induces, and examples of non-polynomial B-splines.

  17. Decomposition of LiDAR waveforms by B-spline-based modeling

    Science.gov (United States)

    Shen, Xiang; Li, Qing-Quan; Wu, Guofeng; Zhu, Jiasong

    2017-06-01

    Waveform decomposition is a widely used technique for extracting echoes from full-waveform LiDAR data. Most previous studies recommended the Gaussian decomposition approach, which employs the Gaussian function in laser pulse modeling. As the Gaussian-shape assumption is not always satisfied for real LiDAR waveforms, some other probability distributions (e.g., the lognormal distribution, the generalized normal distribution, and the Burr distribution) have also been introduced by researchers to fit sharply-peaked and/or heavy-tailed pulses. However, these models cannot be universally used, because they are only suitable for processing the LiDAR waveforms in particular shapes. In this paper, we present a new waveform decomposition algorithm based on the B-spline modeling technique. LiDAR waveforms are not assumed to have a priori shapes but rather are modeled by B-splines, and the shape of a received waveform is treated as the mixture of finite transmitted pulses after translation and scaling transformation. The performance of the new model was tested using two full-waveform data sets acquired by a Riegl LMS-Q680i laser scanner and an Optech Aquarius laser bathymeter, comparing with three classical waveform decomposition approaches: the Gaussian, generalized normal, and lognormal distribution-based models. The experimental results show that the B-spline model performed the best in terms of waveform fitting accuracy, while the generalized normal model yielded the worst performance in the two test data sets. Riegl waveforms have nearly Gaussian pulse shapes and were well fitted by the Gaussian mixture model, while the B-spline-based modeling algorithm produced a slightly better result by further reducing 6.4% of fitting residuals, largely benefiting from alleviating the adverse impact of the ringing effect. The pulse shapes of Optech waveforms, on the other hand, are noticeably right-skewed. The Gaussian modeling results deviated significantly from original signals, and
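
    A toy sketch of the fitting idea, assuming a simulated right-skewed return: a least-squares B-spline (SciPy's splrep) adapts to the pulse shape without a Gaussian assumption. The paper's full algorithm additionally models the waveform as a mixture of translated and scaled transmitted pulses, which this sketch omits.

```python
# Sketch: least-squares B-spline fit to a simulated skewed LiDAR return,
# illustrating why a free-form spline can out-fit a fixed Gaussian shape.
import numpy as np
from scipy.interpolate import splrep, BSpline
from scipy.stats import skewnorm

t = np.linspace(0, 100, 200)                        # sample bins
wave = skewnorm.pdf(t, a=5, loc=40, scale=8)        # right-skewed pulse
noise_sd = 0.002
wave += noise_sd * np.random.default_rng(9).normal(size=t.size)

tck = splrep(t, wave, s=t.size * noise_sd**2)       # smoothing picks the knots
fitted = BSpline(*tck)(t)
print("RMS residual:", np.sqrt(np.mean((fitted - wave) ** 2)))
```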

  18. Forecasting with Dynamic Regression Models

    CERN Document Server

    Pankratz, Alan

    2012-01-01

    One of the most widely used tools in statistical forecasting, the single-equation regression model, is examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed lag responses of output to input series, and the autocorrelation patterns of regression disturbances. It also includes six case studies.

  19. Ridge Regression for Interactive Models.

    Science.gov (United States)

    Tate, Richard L.

    1988-01-01

    An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are…

  20. A fourth order spline collocation approach for a business cycle model

    Science.gov (United States)

    Sayfy, A.; Khoury, S.; Ibdah, H.

    2013-10-01

    A collocation approach based on fourth-order cubic B-splines is presented for the numerical solution of a Kaleckian business cycle model formulated as a nonlinear delay differential equation. The equation is approximated, and the nonlinearity is handled by employing an iterative scheme arising from Newton's method. It is shown that the model exhibits a conditionally stable dynamical cycle. The fourth-order rate of convergence of the scheme is verified numerically for different special cases.

  1. Modeling of type-2 fuzzy cubic B-spline surface for flood data problem in Malaysia

    Science.gov (United States)

    Bidin, Mohd Syafiq; Wahab, Abd. Fatah

    2017-08-01

    Malaysia possesses low and sloping land areas, which are prone to flooding. The flood phenomenon can be analyzed if the surface data of the study area can be modeled by geometric modeling. Type-2 fuzzy data for the flood data are defined using type-2 fuzzy set theory in order to handle the uncertainty of complex data. A cubic B-spline surface function is then used to produce a smooth surface. Three main processes are carried out to obtain a crisp surface from the type-2 fuzzy data: fuzzification (α-cut operation), type-reduction and defuzzification. Upon conducting these processes, the Type-2 Fuzzy Cubic B-Spline Surface Model is applied to visualize the surface data of the flood areas, which involve complex uncertainty.

  2. Spline- and wavelet-based models of neural activity in response to natural visual stimulation.

    Science.gov (United States)

    Gerhard, Felipe; Szegletes, Luca

    2012-01-01

    We present a comparative study of the performance of different basis functions for the nonparametric modeling of neural activity in response to natural stimuli. Based on naturalistic video sequences, a generative model of neural activity was created using a stochastic linear-nonlinear-spiking cascade. The temporal dynamics of the spiking response is well captured with cubic splines with equidistant knot spacings. Whereas a sym4-wavelet decomposition performs competitively or only slightly worse than the spline basis, Haar wavelets (or histogram-based models) seem unsuitable for faithfully describing the temporal dynamics of the sensory neurons. This tendency was confirmed with an application to a real data set of spike trains recorded from visual cortex of the awake monkey.

  3. Assessing time-by-covariate interactions in relative survival models using restrictive cubic spline functions.

    Science.gov (United States)

    Bolard, P; Quantin, C; Abrahamowicz, M; Esteve, J; Giorgi, R; Chadha-Boreham, H; Binquet, C; Faivre, J

    2002-01-01

    The Cox model is widely used in the evaluation of prognostic factors in clinical research. However, in population-based studies, which assess the long-term survival of unselected populations, relative-survival models are often considered more appropriate. In both approaches, the validity of the proportional hazards hypothesis should be evaluated. We propose a new method in which restricted cubic spline functions are employed to model time-by-covariate interactions in relative survival analyses. The method allows investigation of the shape of a possible dependence of the covariate effect on time without having to specify a particular functional form. Restricted cubic spline functions allow graphing of such time-by-covariate interactions, formal testing of the proportional hazards assumption, and testing of the linearity of the time-by-covariate interaction. Application of our new method to assess mortality in colon cancer provides strong evidence against the proportional hazards hypothesis, which is rejected for all prognostic factors. The results corroborate previous analyses of similar data-sets, suggesting the importance both of modelling non-proportional hazards and of the relative survival approach. We also demonstrate the advantages of using restricted cubic spline functions for modelling non-proportional hazards in relative-survival analysis. The results provide new insights into the estimated impact of older age and of period of diagnosis. Using restricted cubic splines in a relative survival model allows the representation of both simple and complex patterns of changes in relative risks over time, with a single parsimonious model and without a priori assumptions about the functional form of these changes.

  4. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study considers measures of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional one in terms of bias and the root mean square error (RMSE).
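
    The unmodified quantity is simple to compute; a sketch on synthetic data of the correlation between Y and the fitted E(Y|X) from a Poisson GLM (the paper's contribution is a modification of this coefficient, not shown here).

```python
# Sketch: the regression correlation coefficient -- the correlation between
# the observed outcome Y and the fitted mean E[Y|X] -- for a Poisson GLM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
X = sm.add_constant(rng.normal(size=(500, 2)))
y = rng.poisson(np.exp(X @ np.array([0.2, 0.5, -0.3])))

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
r = np.corrcoef(y, fit.fittedvalues)[0, 1]
print("regression correlation coefficient:", round(r, 3))
```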

  5. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. An empirical analysis of the US house price index data illustrates the proposed methodology.

  6. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2016-12-01

    In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.

  7. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    Science.gov (United States)

    Balshi, M. S.; McGuire, A.D.; Duffy, P.; Flannigan, M.; Walsh, J.; Melillo, J.

    2009-01-01

    Fire is a common disturbance in the North American boreal forest that influences ecosystem structure and function. The temporal and spatial dynamics of fire are likely to be altered as climate continues to change. In this study, we ask the question: how will area burned in boreal North America by wildfire respond to future changes in climate? To evaluate this question, we developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was substantially more predictable in the western portion of boreal North America than in eastern Canada. Burned area was also not very predictable in areas of substantial topographic relief and in areas along the transition between boreal forest and tundra. At the scale of Alaska and western Canada, the empirical fire models explain on the order of 82% of the variation in annual area burned for the period 1960-2002. July temperature was the most frequently occurring predictor across all models, but the fuel moisture codes for the months June through August (as a group) entered the models as the most important predictors of annual area burned. To predict changes in the temporal and spatial dynamics of fire under future climate, the empirical fire models used output from the Canadian Climate Center CGCM2 global climate model to predict annual area burned through the year 2100 across Alaska and western Canada. Relative to 1991-2000, the results suggest that average area burned per decade will double by 2041-2050 and will increase on the order of 3.5-5.5 times by the last decade of the 21st century. To improve the ability to better predict wildfire across Alaska and Canada, future research should focus on incorporating additional effects of long-term and successional
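
    A MARS model of this kind can be prototyped with the third-party py-earth package (an sklearn-style MARS implementation); the package availability and the predictor set below are assumptions for illustration, not the study's actual pipeline:

```python
import numpy as np
from pyearth import Earth   # pip install sklearn-contrib-py-earth

rng = np.random.default_rng(2)
n = 300
# illustrative predictors: July temperature plus three monthly fuel-moisture codes
X = rng.normal(size=(n, 4))
log_area = (1.0 + 2.0 * np.maximum(X[:, 0] - 0.5, 0)   # hinge-like temperature response
            + 0.8 * X[:, 2]
            + rng.normal(scale=0.3, size=n))

mars = Earth(max_degree=2)   # allow pairwise hinge interactions
mars.fit(X, log_area)
print(mars.summary())        # lists the selected hinge basis functions
```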

  8. Regression Models for Repairable Systems

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2015-01-01

    Roč. 17, č. 4 (2015), s. 963-972 ISSN 1387-5841 Institutional support: RVO:67985556 Keywords: Reliability analysis * Repair models * Regression Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.782, year: 2015 http://library.utia.cas.cz/separaty/2015/SI/novak-0450902.pdf

  9. Interpolating Spline Curve-Based Perceptual Encryption for 3D Printing Models

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    With the development of 3D printing technology, 3D printing has recently been applied to many areas of life, including healthcare and the automotive industry. Because of the value of 3D printing, 3D printing models are often attacked by hackers and distributed without agreement from the original providers. Furthermore, certain special models and anti-weapon models in 3D printing must be protected against unauthorized users. Therefore, in order to prevent attacks and illegal copying and to ensure that all access is authorized, 3D printing models should be encrypted before being transmitted and stored. A novel perceptual encryption algorithm for 3D printing models for secure storage and transmission is presented in this paper. A facet of the 3D printing model is extracted to interpolate a spline curve of degree 2 in three-dimensional space that is determined by three control points, the curvature coefficients of degree 2, and an interpolating vector. The three control points, the curvature coefficients, and the interpolating vector of the spline curve of degree 2 are encrypted by a secret key. The encrypted features of the spline curve are then used to obtain the encrypted 3D printing model by inverse interpolation and geometric distortion. The results of experiments and evaluations prove that the entire 3D triangle model is altered and deformed after the perceptual encryption process. The proposed algorithm is applicable to the various formats of 3D printing models. The results of the perceptual encryption process are superior to those of previous methods, and the proposed algorithm provides a better method and more security than previous methods.

  10. The estimation of time-varying risks in asset pricing modelling using B-Spline method

    Science.gov (United States)

    Nurjannah; Solimun; Rinaldo, Adji

    2017-12-01

    Asset pricing modelling has been studied extensively in the past few decades to explore the risk-return relationship. The asset pricing literature typically assumes a static risk-return relationship. However, several studies have found anomalies in asset pricing modelling that capture the presence of risk instability, and dynamic models have been proposed to offer a better description. The main problem highlighted in the dynamic-model literature is that the set of conditioning information is unobservable, so some assumptions have to be made, and the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can be used as an alternative for estimating risk; the flexibility of the nonparametric setting avoids the misspecification that can arise from selecting a functional form. This paper investigates the estimation of time-varying asset pricing models using B-splines, one of the nonparametric approaches. The advantages of the spline method are its computational speed and simplicity, as well as the clarity of controlling curvature directly. Three popular asset pricing models are investigated, namely the CAPM (Capital Asset Pricing Model), the Fama-French three-factor model and the Carhart four-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk-instability anomaly; the effect is most pronounced in Carhart's four-factor model.
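
    The core of such an estimation is to expand the time-varying coefficient in a B-spline basis, which turns the dynamic model into an ordinary linear regression. A minimal single-factor (CAPM-style) sketch with simulated returns; the basis size and data are illustrative:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)
T = 600
t = np.arange(T) / T
r_m = rng.normal(scale=0.01, size=T)                  # market excess returns
beta_true = 0.8 + 0.6 * np.sin(2 * np.pi * t)         # slowly varying market beta
r_i = 0.0002 + beta_true * r_m + rng.normal(scale=0.005, size=T)

# cubic B-spline basis over the sample period
inner = np.linspace(0.0, 1.0, 10)
knots = np.r_[[0.0] * 3, inner, [1.0] * 3]
n_basis = len(knots) - 4
B = np.column_stack([BSpline(knots, np.eye(n_basis)[j], 3)(t)
                     for j in range(n_basis)])

# beta(t) = sum_j c_j B_j(t): regress r_i on an intercept and B_j(t) * r_m
design = np.column_stack([np.ones(T), B * r_m[:, None]])
coef, *_ = np.linalg.lstsq(design, r_i, rcond=None)
beta_hat = B @ coef[1:]                               # estimated time-varying beta path
```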

  11. Microscopic Model of Automobile Lane-changing Virtual Desire Trajectory by Spline Curves

    Directory of Open Access Journals (Sweden)

    Yulong Pei

    2010-05-01

    With the development of microscopic traffic simulation models, they have increasingly become an important tool for transport system analysis and management, assisting the traffic engineer in investigating and evaluating the performance of transport network systems. The lane-changing model is a vital component of any traffic simulation model; it can improve road capacity and reduce vehicle delay, thereby reducing the likelihood of congestion. This paper therefore addresses the virtual desire trajectory, a vital part of lane-changing behaviour, which is divided into four phases. First, β-spline curves and the corresponding reverse algorithm are introduced on the basis of the boundary conditions. The relation between the velocity and the length of the lane change is then constructed, constrained by the curvature, the steering velocity and driving behaviour. The virtual desire trajectory curves are generated in Matlab, and the error analysis results prove that the proposed description model reconstructs the automobile lane-changing process with higher precision than the surveyed result. KEY WORDS: traffic simulation, lane-changing model, virtual desire trajectory, β-spline curves, driving behaviour

  12. A Mathematical Spline-Based Model of Cardiac Left Ventricle Anatomy and Morphology

    Directory of Open Access Journals (Sweden)

    Sergei Pravdin

    2016-10-01

    Computer simulation of normal and diseased human heart activity requires a 3D anatomical model of the myocardium, including myofibers. For clinical applications, such a model has to be constructed based on routine methods of cardiac visualization, such as sonography. Symmetrical models have been shown to be too rigid, so an analytical non-symmetrical model with sufficient flexibility is necessary. Based on previously made anatomical models of the left ventricle, we propose a new, much more flexible spline-based analytical model. The model is fully described and verified against DT-MRI data, and we show a way to construct it on the basis of sonography data. To use this model in further physiological simulations, we propose a finite-difference numerical method for solving the reaction-diffusion problem, together with an example simulation of scroll wave dynamics.

  13. PARAMETRIC AND NONPARAMETRIC (MARS: MULTIVARIATE ADAPTIVE REGRESSION SPLINES) LOGISTIC REGRESSIONS FOR PREDICTION OF A DICHOTOMOUS RESPONSE VARIABLE WITH AN EXAMPLE FOR PRESENCE/ABSENCE OF AMPHIBIANS

    Science.gov (United States)

    The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...

  14. Fitting Cox Models with Doubly Censored Data Using Spline-Based Sieve Marginal Likelihood

    Science.gov (United States)

    Li, Zhiguo; Owzar, Kouros

    2015-01-01

    In some applications, the failure time of interest is the time from an originating event to a failure event, while both event times are interval censored. We propose fitting Cox proportional hazards models to this type of data using a spline-based sieve maximum marginal likelihood, where the time to the originating event is integrated out in the empirical likelihood function of the failure time of interest. This greatly reduces the complexity of the objective function compared with the fully semiparametric likelihood. The dependence of the time of interest on time to the originating event is induced by including the latter as a covariate in the proportional hazards model for the failure time of interest. The use of splines results in a higher rate of convergence of the estimator of the baseline hazard function compared with the usual nonparametric estimator. The computation of the estimator is facilitated by a multiple imputation approach. Asymptotic theory is established and a simulation study is conducted to assess its finite sample performance. It is also applied to analyzing a real data set on AIDS incubation time. PMID:27239090

  15. Creating high-resolution digital elevation model using thin plate spline interpolation and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Pohjola, J.; Turunen, J.; Lipping, T.

    2009-07-01

    In this report the creation of a digital elevation model (DEM) of the Olkiluoto area, incorporating a large area of seabed, is described. The modeled area covers 960 square kilometers, and the apparent resolution of the created elevation model was specified to be 2.5 x 2.5 meters. Various elevation data, such as contour lines and irregular elevation measurements, were used as source data; the precision and reliability of the available source data varied widely. A DEM is a representation, in digital format, of the elevation of the surface of the earth in a particular area. It is an essential component of geographic information systems designed for the analysis and visualization of location-related data, and is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods, thin plate spline (TPS) interpolation was found to be best suited for the creation of the elevation model: it gave the smallest error in a test in which a certain number of points were removed from the data, and the resulting model looked most natural. In addition to the elevation data, the confidence interval at each point of the new model was required, and the Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure, and from these distributions 1 000 values (20 000 in the first version) were drawn for each data point, so that each point of the newly created DEM had as many realizations. The resulting high-resolution DEM will be used in modeling the effects of land uplift and the evolution of the landscape over a time range of 10 000 years from the present, a range that derives from the requirements set for the spent nuclear fuel repository site. (orig.)
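
    The two-step procedure, thin plate spline gridding plus Monte Carlo perturbation of the source points, can be sketched with SciPy's RBF interpolator; the noise levels, grid, and data below are illustrative stand-ins, not the report's actual settings:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)
pts = rng.uniform(0, 1000, size=(200, 2))              # scattered source points (m)
z = 20 * np.sin(pts[:, 0] / 300) + 0.01 * pts[:, 1]    # "true" elevations (illustrative)
sigma = np.full(len(z), 0.5)                           # assumed 1-sigma measurement error (m)

gx, gy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# Monte Carlo: re-interpolate perturbed realizations of the source data
sims = np.empty((100, len(grid)))
for k in range(sims.shape[0]):
    z_k = z + rng.normal(scale=sigma)
    sims[k] = RBFInterpolator(pts, z_k, kernel="thin_plate_spline", smoothing=1.0)(grid)

dem = sims.mean(axis=0).reshape(gx.shape)              # gridded DEM
ci_width = np.percentile(sims, 97.5, axis=0) - np.percentile(sims, 2.5, axis=0)
```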

  16. Regional Densification of a Global VTEC Model Based on B-Spline Representations

    Science.gov (United States)

    Erdogan, Eren; Schmidt, Michael; Dettmering, Denise; Goss, Andreas; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Mrotzek, Niclas

    2017-04-01

    The project OPTIMAP is a joint initiative of the Bundeswehr GeoInformation Centre (BGIC), the German Space Situational Awareness Centre (GSSAC), the German Geodetic Research Institute of the Technical University Munich (DGFI-TUM) and the Institute for Astrophysics at the University of Göttingen (IAG). The main goal of the project is the development of an operational tool for ionospheric mapping and prediction (OPTIMAP). Two key features of the project are the combination of different satellite observation techniques (GNSS, satellite altimetry, radio occultations and DORIS) and a regional densification as a remedy against problems encountered with the inhomogeneous data distribution. Since the data from space-geoscientific missions that can be used for modeling ionospheric parameters, such as the Vertical Total Electron Content (VTEC) or the electron density, are distributed rather unevenly over the globe at different altitudes, appropriate modeling approaches have to be developed to handle this inhomogeneity. Our approach is based on a two-level strategy: in the first level we compute a global VTEC model with a moderate regional and spectral resolution, which is complemented in the second level by a regional model in a densification area. The latter is a region characterized by a dense data distribution, allowing a VTEC product of high spatial and spectral resolution. Additionally, the global representation serves as a background model for the regional one to avoid edge effects at the boundaries of the densification area. The presented approach, based on a global and a regional model part, i.e. the consideration of a regional densification, is called the Two-Level VTEC Model (TLVM). The global VTEC model part is based on a series expansion in terms of polynomial B-splines in latitude direction and trigonometric B-splines in longitude direction. The additional regional model part is set up by a series expansion in terms of polynomial B-splines for

  17. Review Random regression test-day model for the analysis of dairy ...

    African Journals Online (AJOL)

    jannes

    Splines are readily fitted within the mixed model analyses (Verbyla et al., 1999; Ruppert et al., 2003). White et al. (1999) used cubic splines, while Torres & Quaas (2001) used B-splines with 10 knots in separate RR analyses of test-day records of dairy cows. Too many knots would increase model complexity, while too few ...

  18. Geometric modelling of channel present in reservoir petroleum using Bezier splines; Modelagem da geometria de paleocanais presentes em reservatorios petroliferos usando splines de Bezier

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Carlos Eduardo S. [Universidade Federal de Campina Grande, PB (Brazil). Programa de Recursos Humanos 25 da ANP]. E-mail: carlos@dme.ufcg.edu.br; Silva, Rosana M. da [Universidade Federal de Campina Grande, PB (Brazil). Dept. de Matematica e Estatistica]. E-mail: rosana@dme.ufcg.edu.br

    2004-07-01

    This work presents an implementation of a synthetic model of a channel found in an oil reservoir. The generation of these models is one of the steps in the characterization and simulation of equally probable three-dimensional geological scenarios. The implemented model was obtained by fitting geometric modelling techniques for curves and surfaces to the geological parameters (width, thickness, sinuosity and preferential direction) that define the form to be modelled. The sinuosity parameter is related to the wavelength and the local amplitude of the channel, while the preferential direction indicates the direction of flow and the slope of the channel. The modelling technique used to represent the surface of the channel is sweeping, which consists of translating a curve along a guide curve. In our implementation, the guide curve was generated by interpolating points obtained from sampled or simulated values of the sinuosity parameter, using cubic Bezier splines. A semi-ellipse, determined by the width and thickness parameters and representing a transversal section of the channel, is the curve swept along the guide curve, generating the channel surface. (author)

  19. A Seemingly Unrelated Poisson Regression Model

    OpenAIRE

    King, Gary

    1989-01-01

    This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

  20. A Bézier-Spline-based Model for the Simulation of Hysteresis in Variably Saturated Soil

    Science.gov (United States)

    Cremer, Clemens; Peche, Aaron; Thiele, Luisa-Bianca; Graf, Thomas; Neuweiler, Insa

    2017-04-01

    Most transient variably saturated flow models neglect hysteresis in the p_c-S relationship (Beven, 2012). Such models tend to represent matric potential and saturation distribution inadequately, so that fluid and solute fluxes may be overestimated when simulating flow and transport processes (Russo et al., 1989). In this study, we present a simple, computationally efficient and easily applicable model that adequately describes hysteresis in the p_c-S relationship for variably saturated flow. The model can be seen as an extension of the existing play-type model (Beliaev and Hassanizadeh, 2001), in which scanning curves are simplified as vertical lines between the main imbibition and main drainage curves; in our model, we use continuous linear and Bézier-spline-based functions instead. We show the successful validation of the model by numerically reproducing a physical experiment by Gillham, Klute and Heermann (1976) describing primary drainage and imbibition in a vertical soil column. With a deviation of 3%, the simple Bézier-spline-based model performs significantly better than the play-type approach, which deviates by 30% from the experimental results. Finally, we discuss the design of physical experiments to extend the model to secondary scanning curves and to determine scanning-curve steepness. References: Beven, K.J. (2012). Rainfall-Runoff Modelling: The Primer. John Wiley and Sons. Russo, D., Jury, W. A., & Butters, G. L. (1989). Numerical analysis of solute transport during transient irrigation: 1. The effect of hysteresis and profile heterogeneity. Water Resources Research, 25(10), 2109-2118. https://doi.org/10.1029/WR025i010p02109. Beliaev, A.Y. & Hassanizadeh, S.M. (2001). A Theoretical Model of Hysteresis and Dynamic Effects in the Capillary Relation for Two-phase Flow in Porous Media. Transport in Porous Media 43: 487. doi:10.1023/A:1010736108256. Gillham, R., Klute, A., & Heermann, D. (1976). Hydraulic properties of a porous

  1. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favoured approaches is based on the neighbourhood structure of the regions; unfortunately, it has some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to accommodate the issue. In this paper, we focus on spatial modeling with GPR for binomial data with a logit link function and investigate the performance of the model. We discuss inference, namely how to estimate the parameters and hyper-parameters and how to predict. Furthermore, simulation studies are explained in the last section.
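
    A minimal sketch of a GP model for spatially indexed binary data is shown below, using scikit-learn's GaussianProcessClassifier, which couples a GP latent function with a logistic link via a Laplace approximation; the data and kernel choice are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(5)
coords = rng.uniform(0, 10, size=(300, 2))                 # spatial locations
latent = np.sin(coords[:, 0]) + 0.5 * coords[:, 1] - 2.5   # smooth spatial surface
y = rng.binomial(1, 1 / (1 + np.exp(-latent)))             # binary outcome, logit link

kernel = ConstantKernel(1.0) * RBF(length_scale=2.0)       # hyper-parameters tuned by ML-II
gpc = GaussianProcessClassifier(kernel=kernel).fit(coords, y)

new_sites = np.array([[2.0, 3.0], [8.0, 9.0]])
print(gpc.predict_proba(new_sites))                        # predicted event probabilities
```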

  2. Panel Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    González, Andrés; Terasvirta, Timo; Dijk, Dick van

    models to the panel context. The strategy consists of model specification based on homogeneity tests, parameter estimation, and model evaluation, including tests of parameter constancy and no remaining heterogeneity. The model is applied to describing firms' investment decisions in the presence...

  3. Point based interactive image segmentation using multiquadrics splines

    Science.gov (United States)

    Meena, Sachin; Duraisamy, Prakash; Palniappan, Kannappan; Seetharaman, Guna

    2017-05-01

    Multiquadrics (MQ) are radial basis spline functions that can provide an efficient interpolation of data points located in a high-dimensional space. MQ were developed by Hardy to approximate geographical surfaces and for terrain modelling. In this paper we frame the task of interactive image segmentation as a semi-supervised interpolation in which an interpolating function learned from user-provided seed points is used to predict the labels of unlabeled pixels, with MQ as the spline function used in the semi-supervised interpolation. This semi-supervised interpolation framework has a nice closed-form solution which, together with the fact that MQ is a radial basis spline function, leads to a very fast interactive image segmentation process. Quantitative and qualitative results on the standard datasets show that MQ outperforms other regression-based methods (GEBS, Ridge Regression and Logistic Regression) and popular methods like Graph Cut, Random Walk and Random Forest.
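
    The essence of the approach, interpolating user seed labels with a multiquadric RBF and thresholding the interpolant at unlabeled points, can be sketched as follows; this uses SciPy's general RBF interpolator on a toy feature space rather than the paper's exact formulation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)

# seed pixels labelled by the user: features could be (x, y, intensity); labels +1 / -1
seed_feats = rng.uniform(0, 1, size=(40, 3))
seed_labels = np.where(seed_feats[:, 2] > 0.5, 1.0, -1.0)

# multiquadric interpolant learned from the seeds (epsilon is the shape parameter)
f = RBFInterpolator(seed_feats, seed_labels, kernel="multiquadric", epsilon=1.0)

# predict the labels of unlabeled pixels and threshold at zero
unlabeled = rng.uniform(0, 1, size=(1000, 3))
segmentation = f(unlabeled) > 0.0
```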

  4. Bisphenol-A exposures and behavioural aberrations: median and linear spline and meta-regression analyses of 12 toxicity studies in rodents.

    Science.gov (United States)

    Peluso, Marco E M; Munnia, Armelle; Ceppi, Marcello

    2014-11-05

    Exposure to bisphenol-A, a weak estrogenic chemical largely used for the production of plastic containers, can affect rodent behaviour. We therefore examined the relationships between bisphenol-A and anxiety-like behaviour, spatial skills, and aggressiveness in 12 toxicity studies of rodent offspring from females orally exposed to bisphenol-A while pregnant and/or lactating, by median and linear spline analyses; meta-regression analysis was subsequently applied to quantify the behavioural changes. U-shaped, inverted U-shaped and J-shaped dose-response curves were found to describe the relationships between bisphenol-A and the behavioural outcomes. The occurrence of anxiogenic-like effects and spatial-skill changes displayed U-shaped and inverted U-shaped curves, respectively, providing examples of effects that are observed at low doses, whereas a J-shaped dose-response relationship was observed for aggressiveness. When the proportion of rodents expressing certain traits, or the time they took to manifest an attitude, was analysed, the meta-regression indicated a borderline-significant increment of anxiogenic-like effects at low doses regardless of sex (β = -0.8%, 95% C.I. -1.7/0.1, P = 0.076, at ≤120 μg bisphenol-A), whereas only bisphenol-A males exhibited a significant inhibition of spatial skills (β = 0.7%, 95% C.I. 0.2/1.2, P = 0.004, at ≤100 μg/day). A significant increment of aggressiveness was observed in both sexes (β = 67.9, 95% C.I. 3.4/172.5, P = 0.038, at >4.0 μg). Bisphenol-A treatments also significantly abrogated spatial learning and ability in males (P < 0.001 vs. females). Overall, our study showed that developmental exposure to low doses of bisphenol-A, e.g. ≤120 μg/day, was associated with behavioural aberrations in offspring. Copyright © 2014. Published by Elsevier Ireland Ltd.

  5. Digital elevation model production from scanned topographic contour maps via thin plate spline interpolation

    International Nuclear Information System (INIS)

    Soycan, Arzu; Soycan, Metin

    2009-01-01

    GIS (Geographical Information System) is one of the most striking innovations for mapping applications made available to users by developments in computer and software technology. GIS is a very effective tool that can visually combine geographical and non-geographical data and record them to allow interpretation and analysis. A DEM (Digital Elevation Model) is an inalienable component of GIS. Existing topographic maps (TMs) can be used as the main data source for generating a DEM through a manual digitizing or vectorization process for the contour polylines. The aim of this study is to examine the accuracy of DEMs obtained from TMs as a function of the number of sampling points and the grid size. For this purpose, the contours of several 1/1000-scale scanned topographical maps were vectorized, and different DEMs of the relevant area were created using several datasets with different numbers of sampling points. We focused on DEM creation from contour lines using gridding with RBF (Radial Basis Function) interpolation techniques, namely TPS (Thin Plate Spline) as the surface-fitting model. The solution algorithm and a short review of the mathematical model of TPS interpolation are given. In the test study, the results of the application and the obtained accuracies are presented and discussed. The initial object of this research is to discuss the requirement for DEMs of high accuracy (a few decimetres) in GIS, urban planning, surveying engineering and other applications. (author)

  6. Projecting cancer incidence using age-period-cohort models incorporating restricted cubic splines.

    Science.gov (United States)

    Rutherford, Mark J; Thompson, John R; Lambert, Paul C

    2012-11-05

    Age-period-cohort models provide a useful method for modeling incidence and mortality rates. There is great interest in estimating the rates of disease at given future time-points so that plans can be made for the provision of the required future services. In the setting of age-period-cohort models incorporating restricted cubic splines, a new technique for projecting incidence is proposed. The new technique projects the period and cohort terms linearly from 10 years within the range of the available data, in order to give projections that are based on recent trends. The method is validated via a comparison with existing methods on Finnish cancer registry data. The improvements seen for the newly proposed method are twofold. Firstly, the finer splitting of the timescale gives a more continuous estimate of the incidence rate. Secondly, the new method uses more recent trends than previously proposed methods to dictate the future projections.

  7. Using tensor product splines in modeling exposure–time–response relationships: Application to the Colorado Plateau Uranium Miners cohort

    OpenAIRE

    Berhane, Kiros; Hauptmann, Michael; Langholz, Bryan

    2008-01-01

    An adequate depiction of exposure–time–response relationships is important in assessing public health implications of an occupational or environmental exposure. Recent advances have focused on flexible modeling of the overall shape of latency. Methods are needed to allow for varying shapes of latency under different exposure profiles. A tensor product spline model is proposed for describing exposure–response relationships for protracted time-dependent occupational exposure histories in epidem...

  8. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and a high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
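
    The recommended trapezoidal variant is easy to illustrate on a one-parameter ODE. The sketch below smooths noisy observations with a smoothing spline (standing in for the paper's penalized spline), evaluates the state on a grid, and estimates the parameter by least squares on the trapezoidal-rule estimating equations; the model y' = -θy and all settings are illustrative:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(7)
theta_true = 1.5
t_obs = np.sort(rng.uniform(0, 3, size=80))
y_obs = 4.0 * np.exp(-theta_true * t_obs) + rng.normal(scale=0.1, size=80)

# step 1: estimate the state trajectory with a smoothing spline
state = UnivariateSpline(t_obs, y_obs, k=3, s=len(t_obs) * 0.01)
t = np.linspace(0, 3, 200)
y = state(t)
h = np.diff(t)

# step 2: the trapezoidal rule for y' = f(y) = -theta * y gives
#   y_{i+1} - y_i = (h_i / 2) * (-theta) * (y_i + y_{i+1}),
# which is linear in theta, so least squares has a closed form
lhs = y[1:] - y[:-1]
rhs = -(h / 2) * (y[:-1] + y[1:])
theta_hat = (rhs @ lhs) / (rhs @ rhs)
print(f"theta_hat = {theta_hat:.3f}  (true value {theta_true})")
```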

  9. Parametric and Nonparametric Empirical Regression Models: Case Study of Copper Bromide Laser Generation

    Directory of Open Access Journals (Sweden)

    S. G. Gocheva-Ilieva

    2010-01-01

    In order to model the output laser power of a copper bromide laser with wavelengths of 510.6 and 578.2 nm, we have applied two regression techniques: multiple linear regression and multivariate adaptive regression splines. The models have been constructed on the basis of PCA factors for historical data. The influence of first- and second-order interactions between predictors has been taken into account. The models are easily interpreted and have good predictive power, as established by the results of their validation. The comparison of the derived models shows that those based on multivariate adaptive regression splines have an advantage over the others. The obtained results allow for the clarification of relationships between laser generation and the observed laser input variables, and for better determining their influence on laser generation, in order to improve the experimental setup and laser production technology. They can be useful for the evaluation of known experiments as well as for the prediction of future experiments. The developed modeling methodology is also applicable to a wide range of similar laser devices: metal vapor lasers and gas lasers.

  10. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

  11. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    International Nuclear Information System (INIS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-01-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of the sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) has been found to be effective for moisture-time curves. The idea of this method consists of an approximation of the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the instantaneous rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, which were compared using the coefficient of determination (R2) and the root mean square error (RMSE). The results showed that the Two Term model best describes the drying behaviour. Moreover, the drying rate smoothed using CS proves to be an effective estimator for moisture-time curves, as well as for the missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  12. Regression modeling of ground-water flow

    Science.gov (United States)

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  13. [From clinical judgment to linear regression model].

    Science.gov (United States)

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful for predicting or showing the relationship between two or more variables, as long as the dependent variable is quantitative and normally distributed. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs in "Y" when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R2) indicates the importance of the independent variables in the outcome.

  14. Regression Models for Market-Shares

    DEFF Research Database (Denmark)

    Birch, Kristina; Olsen, Jørgen Kai; Tjur, Tue

    2005-01-01

    On the background of a data set of weekly sales and prices for three brands of coffee, this paper discusses various regression models and their relation to the multiplicative competitive-interaction model (the MCI model, see Cooper 1988, 1993) for market-shares. Emphasis is put on the interpretation of the parameters in relation to models for the total sales based on discrete choice models. Key words and phrases: MCI model, discrete choice model, market-shares, price elasticity, regression model.

  15. Bias-corrected quantile regression estimation of censored regression models

    NARCIS (Netherlands)

    Cizek, Pavel; Sadikoglu, Serhan

    2018-01-01

    In this paper, an extension of the indirect inference methodology to semiparametric estimation is explored in the context of censored regression. Motivated by weak small-sample performance of the censored regression quantile estimator proposed by Powell (J Econom 32:143–155, 1986a), two- and

  16. Approximation and geometric modeling with simplex B-splines associated with irregular triangles

    NARCIS (Netherlands)

    Auerbach, S.; Gmelig Meyling, R.H.J.; Neamtu, M.; Neamtu, M.; Schaeben, H.

    1991-01-01

    Bivariate quadratic simplicial B-splines defined by their corresponding set of knots derived from a (suboptimal) constrained Delaunay triangulation of the domain are employed to obtain a C1-smooth surface. The generation of triangle vertices is adjusted to the areal distribution of the data in the

  17. Comparison of fractional splines with polynomial splines; An Application on under-five year's child mortality data in Pakistan (1960-2012)

    Directory of Open Access Journals (Sweden)

    Saira Esar Esar

    2017-06-01

    Cubic splines are commonly used for capturing changes in economic analysis. This is because traditional regression techniques, including polynomial regression, fail to capture the underlying changes in the corresponding response variables. Moreover, these variables do not change monotonically; there are discontinuities in their trends over time. The objective of this research is to explain the movement of under-five child mortality in Pakistan over the past few decades through a combination of statistical techniques. While cubic splines explain the movement of under-five child mortality to a large extent, we cannot deny the possibility that splines with fractional powers might explain the underlying movement better. Hence, we estimated the value of the fractional power by the nonlinear regression method and used it to develop fractional splines. Although the fractional spline model may have the potential to improve upon the cubic spline model, it does not demonstrate a real improvement in this case; perhaps it would with a different data set.

  18. Comparison of parametric, orthogonal, and spline functions to model individual lactation curves for milk yield in Canadian Holsteins

    Directory of Open Access Journals (Sweden)

    Corrado Dimauro

    2010-11-01

    Test day records for milk yield of 57,390 first-lactation Canadian Holsteins were analyzed with a linear model that included the fixed effects of herd-test date and days in milk (DIM) interval nested within age and calving season. Residuals from this model were analyzed as a new variable and fitted with a five-parameter model, fourth-order Legendre polynomials, and linear, quadratic and cubic spline models with three knots. The fit of the models was rather poor, with about 30-40% of the curves showing an adjusted R-square lower than 0.20 across all models. These results underline a great difficulty in modelling individual deviations around the mean curve for milk yield. However, the Ali and Schaeffer five-parameter model and the fourth-order Legendre polynomials were able to detect two basic shapes of individual deviations around the mean curve. Quadratic and, especially, cubic spline functions had better fitting performance but poor predictive ability, due to their great flexibility, which results in an abrupt change of the estimated curve when data are missing. Parametric and orthogonal polynomials seem to be robust and affordable from this standpoint.

  1. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a

  2. Structured Dimensionality Reduction for Additive Model Regression

    OpenAIRE

    Fawzi, Alhussein; Fiot, Jean-Baptiste; Chen, Bei; Sinn, Mathieu; Frossard, Pascal

    2016-01-01

    Additive models are regression methods which model the response variable as the sum of univariate transfer functions of the input variables. Key benefits of additive models are their accuracy and interpretability on many real-world tasks. Additive models are however not adapted to problems involving a large number (e.g., hundreds) of input variables, as they are prone to overfitting in addition to losing interpretability. In this paper, we introduce a novel framework for applying additive ...

  3. Fit of different functions to the individual deviations in random regression test day models for milk yield in dairy cattle

    Directory of Open Access Journals (Sweden)

    L.R. Schaeffer

    2010-04-01

    The shape of individual deviations of milk yield for dairy cattle from the fixed part of a random regression test day model (RRTDM) was investigated. Data were 53,217 test day records for milk yield of 6,229 first-lactation Canadian Holsteins in Ontario. Data were fitted with a model that included the fixed effects of herd-test date and DIM interval nested within age and season of calving. Residuals of the model were then fitted with the following functions: the Ali and Schaeffer five-parameter model, fourth-order Legendre polynomials, and cubic splines with three, four or five knots. Results confirm the great variability of shape that can be found when individual lactations are modeled. Cubic splines gave better fitting performance, although together with a marked tendency to yield aberrant estimates at the edges of the lactation trajectory.

  4. Nonparametric and semiparametric dynamic additive regression models

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Martinussen, Torben

    Dynamic additive regression models provide a flexible class of models for analysis of longitudinal data. The approach suggested in this work is suited for measurements obtained at random time points and aims at estimating time-varying effects. Both fully nonparametric and semiparametric models can... in special cases. We investigate the finite sample properties of the estimators and conclude that the asymptotic results are valid even for small samples.

  5. Testing the Performance of Cubic Splines and Nelson-Siegel Model for Estimating the Zero-coupon Yield Curve

    Directory of Open Access Journals (Sweden)

    Lorenčič Eva

    2016-06-01

    Understanding the relationship between interest rates and the term to maturity of securities is a prerequisite for developing financial theory and evaluating whether it holds up in the real world; such an understanding therefore lies at the heart of monetary and financial economics. Accurately fitting the term structure of interest rates is the backbone of a smoothly functioning financial market, which is why the testing of various models for estimating and predicting the term structure of interest rates is an important topic in finance that has received considerable attention for many decades. In this paper, we empirically contrast the performance of cubic splines and the Nelson-Siegel model by estimating the zero-coupon yields of Austrian government bonds. The main conclusion that can be drawn from the results of the calculations is that the Nelson-Siegel model outperforms cubic splines at the short end of the yield curve (up to 2 years), whereas for medium-term maturities (2 to 10 years) the fitting performance of both models is comparable.
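
    The Nelson-Siegel zero-coupon curve has a closed form that can be fitted by ordinary nonlinear least squares; a minimal sketch with made-up yields (not the Austrian bond data):

```python
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel zero-coupon yield as a function of maturity tau (years)."""
    x = tau / lam
    loading = (1 - np.exp(-x)) / x
    return beta0 + beta1 * loading + beta2 * (loading - np.exp(-x))

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 15, 20])
yields = np.array([0.8, 0.9, 1.1, 1.4, 1.6, 2.0, 2.3, 2.6, 2.9, 3.0])  # percent, illustrative

params, _ = curve_fit(nelson_siegel, maturities, yields,
                      p0=[3.0, -2.0, 1.0, 2.0], maxfev=10000)
print(dict(zip(["beta0", "beta1", "beta2", "lambda"], params)))
```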

  6. Model building in nonproportional hazard regression.

    Science.gov (United States)

    Rodríguez-Girondo, Mar; Kneib, Thomas; Cadarso-Suárez, Carmen; Abu-Assi, Emad

    2013-12-30

    Recent developments of statistical methods allow for very flexible modeling of covariates affecting survival times via the hazard rate, including the inspection of possible time-dependent associations. Despite their immediate appeal in terms of flexibility, these models typically introduce additional difficulties when a subset of covariates and the corresponding modeling alternatives have to be chosen, that is, for building the most suitable model for given data. This is particularly true when potentially time-varying associations are given. We propose to conduct a piecewise exponential representation of the original survival data to link hazard regression with estimation schemes based on the Poisson likelihood, making recent advances for model building in exponential family regression accessible also in the nonproportional hazard regression context. A two-stage stepwise selection approach, an approach based on doubly penalized likelihood, and a componentwise functional gradient descent approach are adapted to the piecewise exponential regression problem. These three techniques were compared via an intensive simulation study. An application to prognosis after discharge for patients who suffered a myocardial infarction supplements the simulation to demonstrate the pros and cons of the approaches in real data analyses. Copyright © 2013 John Wiley & Sons, Ltd.
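
    The piecewise exponential trick referred to here, splitting each subject's follow-up into intervals and fitting a Poisson model with a log-exposure offset, can be sketched as follows; the cut-points and data are illustrative, and a time-varying (non-proportional) effect would be obtained by adding interval-by-covariate interactions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def expand_piecewise(time, event, x, cuts):
    """One record per subject-interval, with exposure time and a death indicator."""
    edges = np.r_[0.0, cuts, np.inf]
    rows = []
    for t, d, xi in zip(time, event, x):
        for j in range(len(edges) - 1):
            lo, hi = edges[j], edges[j + 1]
            if t <= lo:
                break
            rows.append({"interval": j,
                         "exposure": min(t, hi) - lo,
                         "died": int(bool(d) and t <= hi),
                         "x": xi})
    return pd.DataFrame(rows)

rng = np.random.default_rng(8)
n = 400
x = rng.binomial(1, 0.5, size=n)
time = rng.exponential(1.0 / np.exp(0.7 * x))        # hazard exp(0.7 * x)
event = np.ones(n, dtype=int)                        # no censoring, for simplicity

df = expand_piecewise(time, event, x, cuts=[0.25, 0.5, 1.0, 2.0])
fit = smf.glm("died ~ C(interval) + x", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["exposure"])).fit()
print(fit.params["x"])                               # approximates the log hazard ratio 0.7
```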

  7. Comparative Analysis for Robust Penalized Spline Smoothing Methods

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-01-01

    Smoothing noisy data is commonly encountered in the engineering domain, and robust penalized regression spline models are currently perceived to be the most promising methods for coping with this issue, due to their flexibility in capturing nonlinear trends in the data and effectively alleviating the disturbance from outliers. Against this background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are re-elaborated starting from their origins, with their derivation process reformulated and the corresponding algorithms reorganized under a unified framework. The performance of these two estimators is evaluated in terms of fitting accuracy, robustness, and execution time on the MATLAB platform. The comparative experiments demonstrate that robust penalized spline smoothing methods can resist the noise effect, in contrast with the non-robust penalized least-squares spline regression method. Furthermore, the M-estimator performs stably only for observations with moderate perturbation error, whereas the S-estimator behaves fairly well even for heavily contaminated observations, at the cost of more execution time. These findings can serve as guidance for selecting the appropriate approach for smoothing noisy data.
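
    To make the comparison concrete, here is a compact sketch of an M-type robust penalized spline: a B-spline basis with a second-order difference penalty, fitted by iteratively reweighted least squares with Huber weights. The basis size, penalty weight, and tuning constant are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_huber(x, y, n_basis=20, lam=1.0, c=1.345, n_iter=30):
    """M-type penalized spline: Huber-weighted ridge fit of a cubic B-spline basis."""
    lo, hi = x.min(), x.max()
    knots = np.r_[[lo] * 3, np.linspace(lo, hi, n_basis - 2), [hi] * 3]
    basis = lambda xx: np.column_stack(
        [BSpline(knots, np.eye(n_basis)[j], 3)(xx) for j in range(n_basis)])
    B = basis(x)
    D = np.diff(np.eye(n_basis), n=2, axis=0)        # second-order difference penalty
    P = lam * D.T @ D
    w = np.ones(len(y))
    for _ in range(n_iter):
        BW = B * w[:, None]
        coef = np.linalg.solve(B.T @ BW + P, BW.T @ y)   # weighted ridge normal equations
        r = y - B @ coef
        s = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust scale estimate (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)             # Huber weights downweight outliers
    return lambda xnew: basis(xnew) @ coef

# noisy data with gross outliers
rng = np.random.default_rng(9)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=200)
y[rng.choice(200, 10, replace=False)] += 3.0
smooth = pspline_huber(x, y)                         # callable robust smoother
```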

  8. Mixed-effects regression models in linguistics

    CERN Document Server

    Heylen, Kris; Geeraerts, Dirk

    2018-01-01

    When data consist of grouped observations or clusters, and there is a risk that measurements within the same group are not independent, group-specific random effects can be added to a regression model in order to account for such within-group associations. Regression models that contain such group-specific random effects are called mixed-effects regression models, or simply mixed models. Mixed models are a versatile tool that can handle both balanced and unbalanced datasets and that can also be applied when several layers of grouping are present in the data; these layers can either be nested or crossed.  In linguistics, as in many other fields, the use of mixed models has gained ground rapidly over the last decade. This methodological evolution enables us to build more sophisticated and arguably more realistic models, but, due to its technical complexity, also introduces new challenges. This volume brings together a number of promising new evolutions in the use of mixed models in linguistics, but also addres...

  9. A Skew-Normal Mixture Regression Model

    Science.gov (United States)

    Liu, Min; Lin, Tsung-I

    2014-01-01

    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  10. Linear Regression Models for Estimating True Subsurface ...

    Indian Academy of Sciences (India)


    The objective is to minimize the processing time and computer memory required ... time to acquire extra GPR or seismic data for large sites and picking the first arrival time ... to provide the needed datasets for the joint inversion are also ... The data utilized for the regression modelling was acquired from ground.

  11. Data assimilation using Bayesian filters and B-spline geological models

    KAUST Repository

    Duan, Lian

    2011-04-01

    This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient-based approaches with a level set parameterization of the geology. Gradient-based methods are robust but computationally demanding for real-world reservoir problems, and insufficient for uncertainty assessment in reservoir management. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency in terms of implementation, computational cost, and performance. Incorporating a level set parameterization into this approach could further address the lack of differentiability with respect to facies type, but its practical implementation rests on assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type into the estimation of continuous B-spline control points. As the filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel, or a 2-dimensional closed feature. It is found that applying the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.
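
    The central idea, describing a facies boundary by continuous B-spline control points, is easy to picture in code. The SciPy sketch below evaluates a smooth curve from a handful of control points; the coordinates are invented, and the filter update itself (which would simply shift these control points) is beyond this illustration.

        import numpy as np
        from scipy.interpolate import BSpline

        # A channel boundary described by a few control points; history matching
        # would update these points instead of the discrete facies field.
        ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.5], [5.0, 1.0], [6.0, 0.0]])
        k = 3
        # Clamped knot vector so the curve starts and ends at the end points.
        t = np.r_[[0.0] * k, np.linspace(0, 1, len(ctrl) - k + 1), [1.0] * k]
        curve = BSpline(t, ctrl, k)

        u = np.linspace(0, 1, 200)
        xy = curve(u)   # 200 points along the smooth facies edge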

  12. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

    In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using the particular techniques. The results proved that for sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. Among single methods, the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates, whereas Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for individual assessments but better prediction of change, owing to more strongly correlated errors in the individual predictions. Heterogeneous model ensembles performed at least as well as the best individual models for individual time-point assessments, and for imperviousness change assessment the ensembles always outperformed single-model approaches. This means that the accuracy of sub-pixel imperviousness change assessment can be improved using ensembles of heterogeneous non-linear regression models.

  13. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  14. Influence diagnostics in meta-regression model.

    Science.gov (United States)

    Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

    2017-09-01

    This paper studies influence diagnostics in the meta-regression model, including case deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficients and the heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analyses based on the case-weights perturbation scheme, the response perturbation scheme, the covariate perturbation scheme, and the within-variance perturbation scheme are explored. We introduce a method of simultaneously perturbing responses, covariates, and within-variances to obtain a local influence measure, which has the advantage of being able to compare the influence magnitudes of influential studies across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.

  15. AIRLINE ACTIVITY FORECASTING BY REGRESSION MODELS

    Directory of Open Access Journals (Sweden)

    Н. Білак

    2012-04-01

    Full Text Available We propose linear and nonlinear regression models that take into account the trend equation and seasonality indices to analyze and reconstruct passenger traffic volumes over a past period and to predict them for future years, together with an algorithm for forming these models based on statistical analysis over the years. The resulting model is a first step toward the synthesis of more complex models, which will enable forecasting of an airline's passenger traffic (or income) level with higher accuracy and timeliness.
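
    A minimal version of such a trend-plus-seasonality regression can be written with a trend column and monthly dummy variables. The Python sketch below fits and extrapolates on synthetic monthly traffic; the numbers are invented for illustration only.

        import numpy as np

        # Synthetic monthly passenger volumes: linear trend plus a seasonal cycle.
        rng = np.random.default_rng(1)
        months = np.arange(60)
        y = 100 + 1.5 * months + 20 * np.sin(2 * np.pi * months / 12) \
            + rng.normal(0, 5, 60)

        # Design matrix: intercept, trend, and 11 month dummies (December baseline).
        season = months % 12
        X = np.column_stack([np.ones(60), months] +
                            [(season == m).astype(float) for m in range(11)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)

        # Forecast the next 12 months by extrapolating the trend and reusing
        # the estimated seasonality indices.
        fm = np.arange(60, 72)
        Xf = np.column_stack([np.ones(12), fm] +
                             [((fm % 12) == m).astype(float) for m in range(11)])
        forecast = Xf @ beta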

  16. The modeling of quadratic B-splines surfaces for the tomographic reconstruction in the FCC-type riser

    International Nuclear Information System (INIS)

    Vasconcelos, Geovane Vitor; Dantas, Carlos Costa; Melo, Silvio de Barros; Pires, Renan Ferraz

    2009-01-01

    3D tomographic reconstruction has been a valuable alternative in the analysis of the FCC-type riser (Fluid Catalytic Cracking), appropriately keeping track of the cross-sectional catalyst concentration distribution in the oil refining process. The tomographic reconstruction method proposed by M. Azzi and colleagues (1991) uses a relatively small number of trajectories (3 to 5) and projections (5 to 7) of gamma rays, a desirable feature in industrial process tomography. Compared to more popular methods such as FBP (Filtered Back Projection), which demands a much larger number of gamma-ray projections, the method by Azzi et al. is more appropriate for the industrial process, where physical limitations and the cost of the process require more economical arrangements. The use of few projections and trajectories facilitates diagnosis of the dynamic flow process. This article proposes an improvement of the basis functions introduced by Azzi et al. through the use of quadratic B-spline functions. The use of B-spline functions makes possible a smoother surface reconstruction of the density distribution, since the functions are continuous and smooth. This work describes how the modeling can be done. (author)

  17. Geographically weighted regression model on poverty indicator

    Science.gov (United States)

    Slamet, I.; Nugroho, N. F. T. A.; Muslich

    2017-12-01

    In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java, using a Gaussian kernel as the weighting function. The GWR uses the diagonal matrix obtained by evaluating the Gaussian kernel function as the weight matrix in the regression model; the kernel weights handle spatial effects in the data so that a model can be obtained for each location. The purpose of this paper is to model poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city of the province. Based on this research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for the poverty percentage data, and we found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and the number of BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. The coefficient of determination R2 is 68.64%. Districts fall into two categories, each influenced by a different set of significant factors.
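
    The mechanics described here, a diagonal Gaussian-kernel weight matrix and one weighted least-squares fit per location, fit in a short function. The Python sketch below is a minimal GWR fitter on simulated data; bandwidth selection, standard errors, and the actual poverty covariates are omitted, and all names are illustrative.

        import numpy as np

        def gwr_fit(coords, X, y, bandwidth):
            # One weighted least-squares fit per location, with Gaussian
            # kernel weights (a minimal sketch of GWR).
            n = len(y)
            betas = np.empty((n, X.shape[1]))
            for i in range(n):
                d = np.linalg.norm(coords - coords[i], axis=1)
                w = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian kernel weights
                Xw = X * w[:, None]
                betas[i] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
            return betas   # one local coefficient vector per regency/city

        rng = np.random.default_rng(9)
        coords = rng.uniform(0, 100, (35, 2))             # e.g. regency centroids
        X = np.column_stack([np.ones(35), rng.normal(size=(35, 2))])
        y = X @ np.array([5.0, 1.0, -2.0]) + rng.normal(0, 0.5, 35)
        local_betas = gwr_fit(coords, X, y, bandwidth=30.0)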

  18. Spline models of contemporary, 2030, 2060, and 2090 climates for Mexico and their use in understanding climate-change impacts on the vegetation

    Science.gov (United States)

    Cuauhtemoc Saenz-Romero; Gerald E. Rehfeldt; Nicholas L. Crookston; Pierre Duval; Remi St-Amant; Jean Beaulieu; Bryce A. Richardson

    2010-01-01

    Spatial climate models were developed for Mexico and its periphery (southern USA, Cuba, Belize and Guatemala) for monthly normals (1961-1990) of average, maximum and minimum temperature and precipitation using thin plate smoothing splines of ANUSPLIN software on ca. 3,800 observations. The fit of the model was generally good: the signal was considerably less than one-...

  19. Adaptive regression for modeling nonlinear relationships

    CERN Document Server

    Knafl, George J

    2016-01-01

    This book presents methods for investigating whether relationships are linear or nonlinear and for adaptively fitting appropriate models when they are nonlinear. Data analysts will learn how to incorporate nonlinearity in one or more predictor variables into regression models for different types of outcome variables. Such nonlinear dependence is often not considered in applied research, yet nonlinear relationships are common and so need to be addressed. A standard linear analysis can produce misleading conclusions, while a nonlinear analysis can provide novel insights into data, not otherwise possible. A variety of examples of the benefits of modeling nonlinear relationships are presented throughout the book. Methods are covered using what are called fractional polynomials based on real-valued power transformations of primary predictor variables combined with model selection based on likelihood cross-validation. The book covers how to formulate and conduct such adaptive fractional polynomial modeling in the s...

  20. Characterizing vaccine-associated risks using cubic smoothing splines.

    Science.gov (United States)

    Brookhart, M Alan; Walker, Alexander M; Lu, Yun; Polakowski, Laura; Li, Jie; Paeglow, Corrie; Puenpatom, Tosmai; Izurieta, Hector; Daniel, Gregory W

    2012-11-15

    Estimating risks associated with the use of childhood vaccines is challenging. The authors propose a new approach for studying short-term vaccine-related risks. The method uses a cubic smoothing spline to flexibly estimate the daily risk of an event after vaccination. The predicted incidence rates from the spline regression are then compared with the expected rates under a log-linear trend that excludes the days surrounding vaccination. The 2 models are then used to estimate the excess cumulative incidence attributable to the vaccination during the 42-day period after vaccination. Confidence intervals are obtained using a model-based bootstrap procedure. The method is applied to a study of known effects (positive controls) and expected noneffects (negative controls) of the measles, mumps, and rubella and measles, mumps, rubella, and varicella vaccines among children who are 1 year of age. The splines revealed well-resolved spikes in fever, rash, and adenopathy diagnoses, with the maximum incidence occurring between 9 and 11 days after vaccination. For the negative control outcomes, the spline model yielded a predicted incidence more consistent with the modeled day-specific risks, although there was evidence of increased risk of diagnoses of congenital malformations after vaccination, possibly because of a "provider visit effect." The proposed approach may be useful for vaccine safety surveillance.
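
    The two-model comparison is simple to emulate: fit a flexible spline to the daily counts, fit a log-linear baseline that excludes the post-vaccination window, and sum the difference over days 0-42. The Python sketch below does this on simulated counts; the spike location, smoothing level, and the use of only pre-vaccination days for the baseline are simplifying assumptions, not the authors' exact procedure.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(2)
        days = np.arange(-30, 43)
        base = 20 * np.exp(-0.003 * days)                     # slowly declining baseline
        spike = 15 * np.exp(-0.5 * ((days - 10) / 1.5) ** 2)  # fever spike near day 10
        counts = rng.poisson(base + spike)

        # Flexible day-specific risk: cubic smoothing spline on the daily counts
        # (smoothing level s chosen by eye here).
        spl = UnivariateSpline(days, counts, k=3, s=len(days) * 8)

        # Counterfactual: log-linear trend fitted on days before vaccination.
        mask = days < 0
        b = np.polyfit(days[mask], np.log(counts[mask] + 0.5), 1)
        expected = np.exp(np.polyval(b, days))

        # Excess cumulative incidence over the 42-day post-vaccination window.
        window = (days >= 0) & (days <= 42)
        excess = np.sum(spl(days[window]) - expected[window])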

  1. Quantifying spatial disparities in neonatal mortality using a structured additive regression model.

    Directory of Open Access Journals (Sweden)

    Lawrence N Kazembe

    Full Text Available BACKGROUND: Neonatal mortality contributes a large proportion of early childhood mortality in developing countries, with considerable geographical variation at the small-area level within countries. METHODS: A geo-additive logistic regression model is proposed for quantifying small-scale geographical variation in neonatal mortality and for estimating risk factors of neonatal mortality. Random effects are introduced to capture spatial correlation and heterogeneity. The spatial correlation can be modelled using Markov random fields (MRF) when data are aggregated, or two-dimensional P-splines when exact locations are available, while the unstructured spatial effects are assigned an independent Gaussian prior. Socio-economic and bio-demographic factors which may affect the risk of neonatal mortality are simultaneously estimated as fixed effects and as nonlinear effects for continuous covariates. The smooth effects of continuous covariates are modelled by second-order random walk priors. Modelling and inference use the empirical Bayesian approach via the penalized likelihood technique. The methodology is applied to analyse the likelihood of neonatal deaths, using data from the 2000 Malawi demographic and health survey. The spatial effects are quantified through MRF and two-dimensional P-spline priors. RESULTS: Findings indicate that both fixed and spatial effects are associated with neonatal mortality. CONCLUSIONS: Our study therefore suggests that the challenge of reducing neonatal mortality goes beyond addressing individual factors and also requires understanding unmeasured covariates for potentially effective interventions.

  2. Bayesian Inference of a Multivariate Regression Model

    Directory of Open Access Journals (Sweden)

    Marick S. Sinay

    2014-01-01

    Full Text Available We explore Bayesian inference of a multivariate linear regression model using a flexible prior for the covariance structure. The commonly adopted Bayesian setup involves the conjugate prior: a multivariate normal distribution for the regression coefficients and an inverse Wishart specification for the covariance matrix. Here we depart from this approach and propose a novel Bayesian estimator for the covariance. A multivariate normal prior for the unique elements of the matrix logarithm of the covariance matrix is considered. Such a structure allows for a richer class of prior distributions for the covariance, with respect to the strength of beliefs in the prior location hyperparameters, as well as the added ability to model potential correlation amongst the covariance structure. The posterior moments of all relevant parameters of interest are calculated based upon numerical results via a Markov chain Monte Carlo procedure. The Metropolis-Hastings-within-Gibbs algorithm is invoked to account for the construction of a proposal density that closely matches the shape of the target posterior distribution. As an application of the proposed technique, we investigate a multiple regression based upon the 1980 High School and Beyond Survey.

  3. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Full Text Available Recently, regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of prior information (e.g. the correlations between representation residuals and representation coefficients) and specific information (the weight matrix of image pixels) to enhance classification performance. GRR uses generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.

  4. Confidence bands for inverse regression models

    International Nuclear Information System (INIS)

    Birke, Melanie; Bissantz, Nicolai; Holzmann, Hajo

    2010-01-01

    We construct uniform confidence bands for the regression function in inverse, homoscedastic regression models with convolution-type operators. Here, the convolution is between two non-periodic functions on the whole real line rather than between two periodic functions on a compact interval, since the former situation arguably arises more often in applications. First, following Bickel and Rosenblatt (1973 Ann. Stat. 1 1071–95) we construct asymptotic confidence bands which are based on strong approximations and on a limit theorem for the supremum of a stationary Gaussian process. Further, we propose bootstrap confidence bands based on the residual bootstrap and prove consistency of the bootstrap procedure. A simulation study shows that the bootstrap confidence bands perform reasonably well for moderate sample sizes. Finally, we apply our method to data from a gel electrophoresis experiment with genetically engineered neuronal receptor subunits incubated with rat brain extract.

  5. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  6. Multitask Quantile Regression under the Transnormal Model.

    Science.gov (United States)

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2016-01-01

    We consider estimating multi-task quantile regression under the transnormal model, with a focus on the high-dimensional setting. We derive a surprisingly simple closed-form solution through rank-based covariance regularization. In particular, we propose the rank-based ℓ1 penalization with positive definite constraints for estimating sparse covariance matrices, and the rank-based banded Cholesky decomposition regularization for estimating banded precision matrices. By taking advantage of the alternating direction method of multipliers, a nearest correlation matrix projection is introduced that inherits the sampling properties of the unprojected one. Our work combines the strengths of quantile regression and rank-based covariance regularization to simultaneously deal with nonlinearity and nonnormality in high-dimensional regression. Furthermore, the proposed method strikes a good balance between robustness and efficiency, achieves the "oracle"-like convergence rate, and provides the provable prediction interval under the high-dimensional setting. The finite-sample performance of the proposed method is also examined. The performance of our proposed rank-based method is demonstrated in a real application to analyze the protein mass spectroscopy data.

  7. A Blossoming Development of Splines

    CERN Document Server

    Mann, Stephen

    2006-01-01

    In this lecture, we study Bezier and B-spline curves and surfaces, mathematical representations for free-form curves and surfaces that are common in CAD systems and are used to design aircraft and automobiles, as well as in modeling packages used by the computer animation industry. Bezier/B-splines represent polynomials and piecewise polynomials in a geometric manner using sets of control points that define the shape of the surface. The primary analysis tool used in this lecture is blossoming, which gives an elegant labeling of the control points that allows us to analyze their properties geom

  8. Persistency of lactation using random regression models and different fixed regression modeling approaches

    Directory of Open Access Journals (Sweden)

    Jaime Araujo Cobuci

    2012-09-01

    Full Text Available Milk yield test-day records on the first three lactations of 25,500 Holstein cows were used to estimate genetic parameters and predict breeding values for nine measures of persistency and 305-d milk yield in a random regression animal model, using two criteria to define the fixed regression. Legendre polynomials of fourth and fifth orders were used to model the fixed and random regressions of lactation curves. The fixed regressions were adjusted for average milk yield on populations (single) or subpopulations (multiple) formed by cows that calved at the same age and in the same season. The Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) indicated that models with multiple regression lactation curves had the best fit to test-day milk records of first lactations, while models with a single regression curve had the best fit for the second and third lactations. Heritability and genetic correlation estimates between persistency and milk yield differed significantly depending on the lactation order and the measures of persistency used. These parameters did not differ significantly depending on the criteria used for defining the fixed regressions of lactation curves. In general, the heritability estimates were higher for the first (0.07 to 0.43), followed by the second (0.08 to 0.21) and third (0.04 to 0.10) lactations. The ranking of sires resulting from genetic evaluation for milk yield or persistency using random regression models differed according to the criteria used for determining the fixed regression of the lactation curve.

  9. Crime Modeling using Spatial Regression Approach

    Science.gov (United States)

    Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.

    2018-01-01

    Criminal activity in Indonesia increases in both variety and quantity every year, including murder, rape, assault, vandalism, theft, fraud, fencing, and other cases that make people feel unsafe. The risk of society being exposed to crime is measured by the number of cases reported to the police: the more reports made to the police, the higher the level of crime in the region. In this research, criminality in South Sulawesi, Indonesia is modeled with the dependent variable being the share of society exposed to the risk of crime. Modeling follows an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor people, GDP per capita, unemployment, and the human development index (HDI). The spatial regression analysis shows that there are no spatial dependencies, in either lag or error form, in South Sulawesi.

  10. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
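
    The matrix-algebra method the macros implement is compact: any fitted-value difference is a linear combination of the coefficients, with standard error obtained from the coefficient covariance matrix. The Python sketch below reproduces the idea outside SPSS for an OLS model with a polynomial term; the data and the two comparison points are invented for illustration.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        x = rng.uniform(0, 10, 200)
        y = 2 + 0.5 * x + 0.1 * x**2 + rng.normal(0, 1, 200)
        X = sm.add_constant(np.column_stack([x, x**2]))  # model with a squared term
        fit = sm.OLS(y, X).fit()

        # Difference of fitted values at x = 8 and x = 3: c = row(8) - row(3).
        c = np.array([1.0, 8.0, 64.0]) - np.array([1.0, 3.0, 9.0])
        diff = c @ fit.params
        se = np.sqrt(c @ fit.cov_params() @ c)   # SE from the coefficient covariance
        ci = (diff - 1.96 * se, diff + 1.96 * se)
        print(diff, se, ci)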

  11. Inferring gene regression networks with model trees

    Directory of Open Access Journals (Sweden)

    Aguilar-Ruiz Jesus S

    2010-10-01

    Full Text Available Abstract Background Novel strategies are required in order to handle the huge amount of data produced by microarray technologies. To infer gene regulatory networks, the first step is to find direct regulatory relationships between genes by building so-called gene co-expression networks. These are typically generated using correlation statistics as pairwise similarity measures. Correlation-based methods are very useful for determining whether two genes have a strong global similarity, but they do not detect local similarities. Results We propose model trees as a method to identify gene interaction networks. While correlation-based methods analyze each pair of genes, in our approach we generate a single regression tree for each gene from the remaining genes. Finally, a graph of all the relationships among output and input genes is built, taking into account whether each pair of genes is statistically significant. For this reason we apply a statistical procedure to control the false discovery rate. The performance of our approach, named REGNET, is experimentally tested on two well-known data sets: the Saccharomyces cerevisiae and E. coli data sets. First, the biological coherence of the results is tested. Second, the E. coli transcriptional network (in the Regulon database) is used as a control to compare the results to those of a correlation-based method. This experiment shows that REGNET performs more accurately at detecting true gene associations than the Pearson and Spearman zeroth- and first-order correlation-based methods. Conclusions REGNET generates gene association networks from gene expression data, and differs from correlation-based methods in that the relationship between one gene and others is calculated simultaneously. Model trees are very useful techniques to estimate the numerical values for the target genes by linear regression functions. They are very often more precise than linear regression models because they can add just different linear

  12. Error Covariance Penalized Regression: A novel multivariate model combining penalized regression with multivariate error structure.

    Science.gov (United States)

    Allegrini, Franco; Braga, Jez W B; Moreira, Alessandro C O; Olivieri, Alejandro C

    2018-06-29

    A new multivariate regression model, named Error Covariance Penalized Regression (ECPR) is presented. Following a penalized regression strategy, the proposed model incorporates information about the measurement error structure of the system, using the error covariance matrix (ECM) as a penalization term. Results are reported from both simulations and experimental data based on replicate mid and near infrared (MIR and NIR) spectral measurements. The results for ECPR are better under non-iid conditions when compared with traditional first-order multivariate methods such as ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLS). Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Probabilistic Solar Forecasting Using Quantile Regression Models

    Directory of Open Access Journals (Sweden)

    Philippe Lauret

    2017-10-01

    Full Text Available In this work, we assess the performance of three probabilistic models for intra-day solar forecasting. More precisely, a linear quantile regression method is used to build three models for generating 1 h–6 h-ahead probabilistic forecasts. Our approach is applied to forecasting solar irradiance at a site experiencing highly variable sky conditions, using historical ground observations of solar irradiance as endogenous inputs and day-ahead forecasts as exogenous inputs. Day-ahead irradiance forecasts are obtained from the Integrated Forecast System (IFS), a Numerical Weather Prediction (NWP) model maintained by the European Centre for Medium-Range Weather Forecasts (ECMWF). Several metrics, mainly originating from the weather forecasting community, are used to evaluate the performance of the probabilistic forecasts. The results demonstrate that the NWP exogenous inputs improve the quality of the intra-day probabilistic forecasts. The analysis considered two locations with very dissimilar solar variability; comparison between the two locations highlighted that the statistical performance of the probabilistic models depends on the local sky conditions.
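
    Building one linear quantile model per probability level, as described, takes only a few lines. The sketch below uses statsmodels' QuantReg on synthetic irradiance-like data; the variable names (ghi, nwp) and the heteroscedastic noise are assumptions for illustration, not the paper's dataset.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 500
        clearsky = rng.uniform(100, 900, n)                 # hypothetical inputs
        nwp = clearsky * rng.uniform(0.6, 1.0, n)           # day-ahead NWP forecast
        ghi = 0.8 * nwp + rng.normal(0, 40 + 0.05 * nwp)    # heteroscedastic target
        df = pd.DataFrame({"ghi": ghi, "nwp": nwp})

        # One linear quantile regression per nominal probability level.
        quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]
        fits = {q: smf.quantreg("ghi ~ nwp", df).fit(q=q) for q in quantiles}
        bands = {q: fits[q].predict(df) for q in quantiles}  # predictive bands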

  14. Entrepreneurial intention modeling using hierarchical multiple regression

    Directory of Open Access Journals (Sweden)

    Marina Jeger

    2014-12-01

    Full Text Available The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.

  15. An Additive-Multiplicative Cox-Aalen Regression Model

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; Cox regression; survival analysis; time-varying effects

  16. Correlation studies for B-spline modeled F2 Chapman parameters obtained from FORMOSAT-3/COSMIC data

    Directory of Open Access Journals (Sweden)

    M. Limberger

    2014-12-01

    Full Text Available The determination of ionospheric key quantities such as the maximum electron density of the F2 layer NmF2, the corresponding F2 peak height hmF2 and the F2 scale height HF2 is of high relevance in 4-D ionosphere modeling to provide information on the vertical structure of the electron density (Ne). The Ne distribution with respect to height can, for instance, be modeled by the commonly accepted F2 Chapman layer. An adequate and observation-driven description of the vertical Ne variation can be obtained from electron density profiles (EDPs) derived from ionospheric radio occultation measurements between GPS and low Earth orbiter (LEO) satellites. For these purposes, the six FORMOSAT-3/COSMIC (F3/C) satellites provide an excellent opportunity to collect EDPs that cover most of the ionospheric region, in particular the F2 layer. For the contents of this paper, F3/C EDPs have been exploited to determine NmF2, hmF2 and HF2 within a regional modeling approach. As mathematical base functions, endpoint-interpolating polynomial B-splines are considered to model the key parameters with respect to longitude, latitude and time. The description of deterministic processes and the verification of this modeling approach have been published previously in Limberger et al. (2013), whereas this paper should be considered as an extension dealing with related correlation studies, a topic to which less attention has been paid in the literature. Relations between the B-spline series coefficients for specific key parameters, as well as dependencies between the three F2 Chapman key parameters, are the main focus. Dependencies are interpreted from the post-derived correlation matrices as a result of (1) a simulated scenario without data gaps, taking dense, homogeneously distributed profiles into account, and (2) two real data scenarios on 1 July 2008 and 1 July 2012 including sparsely, inhomogeneously distributed F3/C EDPs. Moderate correlations between hmF2 and HF2 as

  17. Interpolating cubic splines

    CERN Document Server

    Knott, Gary D

    2000-01-01

    A spline is a thin flexible strip composed of a material such as bamboo or steel that can be bent to pass through or near given points in the plane, or in 3-space, in a smooth manner. Mechanical engineers and drafting specialists find such (physical) splines useful in designing and in drawing plans for a wide variety of objects, such as hulls of boats or bodies of automobiles, where smooth curves need to be specified. These days, physical splines are largely replaced by computer software that can compute the desired curves (with appropriate encouragement). The same mathematical ideas used for computing "spline" curves can be extended to allow us to compute "spline" surfaces. The application of these mathematical ideas is rather widespread. Spline functions are central to computer graphics disciplines. Spline curves and surfaces are used in computer graphics renderings for both real and imaginary objects. Computer-aided-design (CAD) systems depend on algorithms for computing spline func...
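
    As a modern counterpart to the physical strip, an interpolating cubic spline can be computed in a couple of lines. The sketch below uses SciPy; the design points and the natural boundary condition are arbitrary choices made for illustration.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Pass a smooth curve through design points, as a drafting spline would.
        xs = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
        ys = np.array([0.0, 0.8, 0.2, -0.5, 0.3])
        cs = CubicSpline(xs, ys, bc_type="natural")  # zero 2nd derivative at ends

        xf = np.linspace(0, 5, 100)
        curve = cs(xf)        # interpolated curve
        slope = cs(xf, 1)     # first derivative, also available from the spline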

  18. Hierarchical regression analysis in structural Equation Modeling

    NARCIS (Netherlands)

    de Jong, P.F.

    1999-01-01

    In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main

  19. Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year

    Science.gov (United States)

    Kamaruddin, Halim Shukri; Ismail, Noriszura

    2014-06-01

    Nonparametric regression uses the data to derive the best-fitting curve from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are applied to one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, where the accuracy of models and forecasts is the main concern of the industry. Here the original idea of P-splines is extended to two-dimensional mortality data, indexed by age at death and year of death, with the large dataset supplied by the Department of Statistics Malaysia. This extension constructs the best-fitting surface and provides sensible predictions of the underlying mortality rates in Malaysia.
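
    The two-dimensional extension amounts to a tensor-product B-spline basis with a difference penalty in each direction. The Python sketch below builds such a penalized fit on a synthetic log-mortality surface; the knot counts, smoothing parameters, and simulated rates are illustrative assumptions, not the Malaysian data.

        import numpy as np
        from scipy.interpolate import BSpline

        def bspline_basis(x, n_knots, k=3):
            inner = np.linspace(x.min(), x.max(), n_knots)
            t = np.r_[[inner[0]] * k, inner, [inner[-1]] * k]
            return BSpline(t, np.eye(len(t) - k - 1), k)(x)

        # Tensor-product basis over year x age (Eilers-Marx style 2-D P-spline).
        ages, years = np.arange(20, 90), np.arange(1990, 2015)
        Ba, By = bspline_basis(ages, 10), bspline_basis(years, 8)
        B = np.kron(By, Ba)                  # rows follow m.ravel() ordering below

        rng = np.random.default_rng(1)
        m = (-9 + 0.09 * (ages - 20))[None, :] + (0.01 * (years - 1990))[:, None]
        m = m + rng.normal(0, 0.05, m.shape)  # noisy log-mortality surface

        na, ny = Ba.shape[1], By.shape[1]
        Da = np.diff(np.eye(na), n=2, axis=0)  # age-direction difference penalty
        Dy = np.diff(np.eye(ny), n=2, axis=0)  # year-direction difference penalty
        lam_a = lam_y = 10.0                   # smoothing parameters (ad hoc)
        P = lam_a * np.kron(np.eye(ny), Da.T @ Da) \
            + lam_y * np.kron(Dy.T @ Dy, np.eye(na))

        coef = np.linalg.solve(B.T @ B + P, B.T @ m.ravel())
        fitted = (B @ coef).reshape(m.shape)   # smoothed mortality surface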

  20. Modeling maximum daily temperature using a varying coefficient regression model

    Science.gov (United States)

    Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith

    2014-01-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

  1. Semiparametric regression during 2003–2007

    KAUST Repository

    Ruppert, David

    2009-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.

  2. Introduction to the use of regression models in epidemiology.

    Science.gov (United States)

    Bender, Ralf

    2009-01-01

    Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.

  3. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model, using an example taken from a management study.

  4. Comparison of Spatial Interpolation and Regression Analysis Models for an Estimation of Monthly Near Surface Air Temperature in China

    Directory of Open Access Journals (Sweden)

    Mengmeng Wang

    2017-12-01

    Full Text Available Near surface air temperature (NSAT) is a primary descriptor of terrestrial environmental conditions. In recent decades, many efforts have been made to develop various methods for obtaining spatially continuous NSAT from gauge or station observations. This study compared three spatial interpolation models (Kriging, Spline, and Inverse Distance Weighting (IDW)) and two regression analysis models (Multiple Linear Regression (MLR) and Geographically Weighted Regression (GWR)) for predicting monthly minimum, mean, and maximum NSAT in China, a domain with a large area, complex topography, and highly variable station density. This was conducted for the 12 months of 2010. The accuracy of the GWR model is better than that of the MLR model, with an improvement of about 3 °C in the Root Mean Squared Error (RMSE), which indicates that the GWR model is more suitable for predicting monthly NSAT than the MLR model over a large scale. For the three spatial interpolation models, the RMSEs of the predicted monthly NSAT are greater in the warmer months, and the mean RMSEs of the predicted monthly mean NSAT for the 12 months of 2010 are 1.56 °C for the Kriging model, 1.74 °C for the IDW model, and 2.39 °C for the Spline model. The GWR model is better than the Kriging model in the warmer months, while the Kriging model is superior to the GWR model in the colder months; the overall precision of the GWR model is slightly higher than that of the Kriging model. The assessment also indicated that a higher standard deviation and a lower mean of NSAT in the sample data are associated with better performance of the spatial interpolation models in predicting monthly NSAT.

  5. Logistic Regression Modeling of Diminishing Manufacturing Sources for Integrated Circuits

    National Research Council Canada - National Science Library

    Gravier, Michael

    1999-01-01

    .... The research identified logistic regression as a powerful tool for analysis of DMSMS and further developed twenty models attempting to identify the "best" way to model and predict DMSMS using logistic regression...

  6. Straight-sided Spline Optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    Spline connection of shaft and hub is commonly applied when large torque capacity is needed together with the possibility of disassembly. The designs of these splines are generally controlled by different standards. In view of the common use of splines, it seems that few papers deal with splines ...

  7. Testing for cubic smoothing splines under dependent data.

    Science.gov (United States)

    Nummi, Tapio; Pan, Jianxin; Siren, Tarja; Liu, Kun

    2011-09-01

    In most research on smoothing splines the focus has been on estimation, while inference, especially hypothesis testing, has received less attention. By defining design matrices for fixed and random effects and the structure of the covariance matrices of random errors in an appropriate way, the cubic smoothing spline admits a mixed model formulation, which places this nonparametric smoother firmly in a parametric setting. Thus nonlinear curves can be included with random effects and random coefficients. The smoothing parameter is the ratio of the random-coefficient and error variances and tests for linear regression reduce to tests for zero random-coefficient variances. We propose an exact F-test for the situation and investigate its performance in a real pine stem data set and by simulation experiments. Under certain conditions the suggested methods can also be applied when the data are dependent. © 2010, The International Biometric Society.

  8. Modelling Issues in Kernel Ridge Regression

    NARCIS (Netherlands)

    P. Exterkate (Peter)

    2011-01-01

    textabstractKernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular
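
    The influence of kernel choice and tuning parameters that the paper investigates can be explored directly with cross-validation. The sketch below uses scikit-learn's KernelRidge on a synthetic nonlinear signal; the grid values and data are illustrative assumptions.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import GridSearchCV

        rng = np.random.default_rng(6)
        X = rng.uniform(-3, 3, (300, 1))
        y = np.sin(2 * X[:, 0]) + 0.3 * rng.normal(size=300)

        # Select the regularization strength and kernel width by cross-validated
        # grid search; the kernel itself could be varied the same way.
        grid = GridSearchCV(KernelRidge(kernel="rbf"),
                            {"alpha": [0.01, 0.1, 1.0],
                             "gamma": [0.1, 0.5, 1.0, 2.0]},
                            cv=5)
        grid.fit(X, y)
        print(grid.best_params_, grid.best_score_)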

  9. Designing interactively with elastic splines

    DEFF Research Database (Denmark)

    Brander, David; Bærentzen, Jakob Andreas; Fisker, Ann-Sofie

    2018-01-01

    We present an algorithm for designing interactively with C1 elastic splines. The idea is to design the elastic spline using a C1 cubic polynomial spline where each polynomial segment is so close to satisfying the Euler-Lagrange equation for elastic curves that the visual difference becomes negligible. Using a database of cubic Bézier curves we are able to interactively modify the cubic spline such that it remains visually close to an elastic spline.

  10. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
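
    The recipe in the abstract translates directly into code: convert the slope into a two-group log-odds difference of beta times twice the covariate SD, then apply a standard two-proportion power formula. The Python sketch below is an approximate implementation under those stated assumptions; in particular, centering the log-odds around the overall rate only approximately preserves the expected number of events.

        import numpy as np
        from scipy.stats import norm

        def logistic_power(beta, sd_x, p_overall, n, alpha=0.05):
            # Approximate power for testing slope = 0 in logistic regression via
            # the equivalent two-sample problem (a sketch of the paper's recipe).
            delta = beta * 2 * sd_x                 # log-odds difference
            lo = np.log(p_overall / (1 - p_overall))
            p1 = 1 / (1 + np.exp(-(lo - delta / 2)))
            p2 = 1 / (1 + np.exp(-(lo + delta / 2)))
            pbar = (p1 + p2) / 2
            # Normal-approximation power for two proportions, n/2 per group.
            se0 = np.sqrt(2 * pbar * (1 - pbar) / (n / 2))
            se1 = np.sqrt(p1 * (1 - p1) / (n / 2) + p2 * (1 - p2) / (n / 2))
            z = norm.ppf(1 - alpha / 2)
            return norm.cdf((abs(p2 - p1) - z * se0) / se1)

        print(logistic_power(beta=0.5, sd_x=1.0, p_overall=0.3, n=200))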

  11. The Asymptotic Behavior of Particle Size Distribution Undergoing Brownian Coagulation Based on the Spline-Based Method and TEMOM Model

    Directory of Open Access Journals (Sweden)

    Qing He

    2018-01-01

    Full Text Available In this paper, the particle size distribution is reconstructed from finite moments using a converted spline-based method, in which the size of the linear system of equations to be solved is reduced from 4m × 4m to (m + 3) × (m + 3) for (m + 1) nodes by using cubic splines, compared with the original method. The results are first verified against a reference. Then, coupling with the Taylor-series expansion moment method, the evolution of the particle size distribution undergoing Brownian coagulation and its asymptotic behavior are investigated.

  12. Bayesian Estimation of Multivariate Latent Regression Models: Gauss versus Laplace

    Science.gov (United States)

    Culpepper, Steven Andrew; Park, Trevor

    2017-01-01

    A latent multivariate regression model is developed that employs a generalized asymmetric Laplace (GAL) prior distribution for regression coefficients. The model is designed for high-dimensional applications where an approximate sparsity condition is satisfied, such that many regression coefficients are near zero after accounting for all the model…

  13. Mixture of Regression Models with Single-Index

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2016-01-01

    In this article, we propose a class of semiparametric mixture regression models with single-index. We argue that many recently proposed semiparametric/nonparametric mixture regression models can be considered special cases of the proposed model. However, unlike existing semiparametric mixture regression models, the newly proposed model can easily incorporate multivariate predictors into the nonparametric components. Backfitting estimates and the corresponding algorithms have been proposed for...

  14. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    Science.gov (United States)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous-equation regression model that fuses parametric and nonparametric components. The model comprises several equations, each with two components: one parametric and one nonparametric. The model used here has a linear function as the parametric component and a truncated polynomial spline as the nonparametric component, and it can handle both linear and nonlinear relationships between the responses and the sets of predictor variables. The aim of this paper is to demonstrate the application of this regression model to the effect of regional socio-economic variables on the use of information technology. More specifically, the response variables are the percentage of households with internet access and the percentage of households with a personal computer, while the predictor variables are the percentage of literate people, the percentage of electrification, and the percentage of economic growth. Based on identification of the relationships between the responses and the predictor variables, economic growth is treated as a nonparametric predictor and the others as parametric predictors. The results show that multiresponse semiparametric regression applies well here, as indicated by the high coefficient of determination of 90 percent.
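
    The parametric-plus-truncated-spline design matrix described above is easy to build directly. The sketch below fits a single-response version by ordinary least squares on simulated data; the covariates, knot locations, and effect sizes are invented, and a real analysis would use penalized estimation and both responses jointly.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 200
        lit = rng.uniform(80, 100, n)       # literacy (parametric, hypothetical)
        growth = rng.uniform(0, 10, n)      # economic growth (nonparametric)
        y = 20 + 0.5 * lit - growth + 3 * np.maximum(growth - 5, 0) \
            + rng.normal(0, 2, n)

        # Truncated polynomial spline basis for the nonparametric component:
        # growth itself plus (growth - kappa)_+ terms at chosen knots.
        knots = [2.5, 5.0, 7.5]
        X = np.column_stack([np.ones(n), lit, growth] +
                            [np.maximum(growth - kp, 0) for kp in knots])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
        print(beta, r2)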

  15. Linear regression crash prediction models : issues and proposed solutions.

    Science.gov (United States)

    2010-05-01

    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions, namely error structure normality ...

  16. Forecasting Multivariate Road Traffic Flows Using Bayesian Dynamic Graphical Models, Splines and Other Traffic Variables

    NARCIS (Netherlands)

    Anacleto, Osvaldo; Queen, Catriona; Albers, Casper J.

    Traffic flow data are routinely collected for many networks worldwide. These invariably large data sets can be used as part of a traffic management system, for which good traffic flow forecasting models are crucial. The linear multiregression dynamic model (LMDM) has been shown to be promising for

  17. Identification of a Hammerstein Model of the Stretch Reflex EMG using Cubic Splines

    National Research Council Canada - National Science Library

    Dempsey, Erika

    2001-01-01

    .... The identification algorithm based on a separable least squares Levenberg-Marquardt optimization is used to identify a Hammerstein model of the stretch reflex EMG recorded from a spinal cord injured patient...

  18. Logistic Regression Model on Antenna Control Unit Autotracking Mode

    Science.gov (United States)

    2015-10-20

    412TW-PA-15240, Daniel T. Laird, Air Force Test Center, Edwards AFB, CA, Oct 15. ... alternative-hypothesis. This paper will present an Antenna Auto-tracking model using Logistic Regression modeling. This paper presents an example of...

  19. STREAMFLOW AND WATER QUALITY REGRESSION MODELING ...

    African Journals Online (AJOL)

    Journal of Modeling, Design and Management of Engineering Systems ... Consistency tests, trend analyses and mathematical modeling of water quality constituents and riverflow characteristics at upstream Nekede station and downstream Obigbo station show: consistent time-trends in degree of contamination; linear and ...

  20. Two-component mixture cure rate model with spline estimated nonparametric components.

    Science.gov (United States)

    Wang, Lu; Du, Pang; Liang, Hua

    2012-09-01

    In some survival analysis of medical studies, there are often long-term survivors who can be considered as permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present as often happens in practice, to understand covariate effects on the noncured probability and hazard rate is of equal importance. The existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of hazard rate. Estimation is carried out by an expectation-maximization algorithm on maximizing a penalized likelihood. For inferential purpose, we apply the Louis formula to obtain point-wise confidence intervals for noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method by extensive simulations. We analyze the survival data from a melanoma study and find interesting patterns for this study. © 2011, The International Biometric Society.

  1. Regression and regression analysis time series prediction modeling on climate data of quetta, pakistan

    International Nuclear Information System (INIS)

    Jafri, Y.Z.; Kamal, L.

    2007-01-01

    Various statistical techniques were used on five years of data (1998-2002) on average humidity, rainfall, and maximum and minimum temperatures. Relationships for regression analysis time series (RATS) were developed to determine the overall trend of these climate parameters, on the basis of which forecast models can be corrected and modified. We computed the coefficient of determination as a measure of goodness of fit for our polynomial regression analysis time series (PRATS). Correlations for multiple linear regression (MLR) and multiple linear regression analysis time series (MLRATS) were also developed for deciphering the interdependence of the weather parameters. Spearman's rank correlation and the Goldfeld-Quandt test were used to check the uniformity or non-uniformity of variances in our fit to polynomial regression (PR). The Breusch-Pagan test was applied to MLR and MLRATS, respectively, and indicated homoscedasticity. We also employed Bartlett's test for homogeneity of variances on five years of rainfall and humidity data, which showed that the variances in the rainfall data were not homogeneous, while those for humidity were. Our results on regression and regression analysis time series show the best fit for prediction modeling on climate data of Quetta, Pakistan. (author)

  2. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent and dependent variables. In logistic regression the dependent variable is categorical and the model is formulated in terms of odds. When the categories of the dependent variable are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation sites. Parameter estimation is needed to infer population values from a sample. The purpose of this research is the parameter estimation of the GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City; the observation units are 144 villages in Semarang City. The research yields a local GWOLR model for each village and the probabilities of the dengue fever patient count categories.
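
    The paper estimates the model in R; as a language-neutral illustration, a minimal Python sketch of one local GWOLR fit is given below. The Gaussian kernel, the cutpoint parameterization and all names are assumptions for illustration, not the paper's implementation.

      # One local fit of a geographically weighted ordinal logistic regression:
      # a cumulative-logit likelihood in which each observation is weighted by
      # a Gaussian kernel of its distance to the focal site.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      def local_ordinal_fit(X, y, coords, site, bandwidth, n_cats):
          # y must be integer-coded 0, 1, ..., n_cats - 1
          d = np.linalg.norm(coords - coords[site], axis=1)
          w = np.exp(-(d / bandwidth) ** 2)            # spatial kernel weights

          def negloglik(params):
              raw, beta = params[:n_cats - 1], params[n_cats - 1:]
              # reparameterize cutpoints so they are strictly increasing
              cuts = np.cumsum(np.r_[raw[0], np.exp(raw[1:])])
              eta = X @ beta
              cum = expit(cuts[None, :] - eta[:, None])        # P(Y <= k | x)
              cum = np.column_stack([np.zeros(len(y)), cum, np.ones(len(y))])
              p = cum[np.arange(len(y)), y + 1] - cum[np.arange(len(y)), y]
              return -(w * np.log(np.clip(p, 1e-12, None))).sum()

          x0 = np.zeros(n_cats - 1 + X.shape[1])
          return minimize(negloglik, x0, method="Nelder-Mead")

    Repeating the fit with each village as the focal site would yield the per-village models the abstract describes.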

  3. Higher-order numerical solutions using cubic splines

    Science.gov (United States)

    Rubin, S. G.; Khosla, P. K.

    1976-01-01

    A cubic spline collocation procedure was developed for the numerical solution of partial differential equations. This spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower-derivative terms. The final result is a numerical procedure having overall third-order accuracy on a nonuniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.
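
    The collocation procedure itself is not reproduced here, but the underlying idea that cubic-spline derivative approximations improve under mesh refinement, including on nonuniform meshes, is easy to check empirically. A minimal Python sketch with SciPy's interpolating cubic spline (the test function and meshes are illustrative assumptions, not the authors' scheme):

      # Error of the spline's second derivative on random nonuniform meshes;
      # the maximum error should shrink rapidly as the mesh is refined.
      import numpy as np
      from scipy.interpolate import CubicSpline

      def max_d2_error(n, seed=0):
          rng = np.random.default_rng(seed)
          x = np.sort(rng.uniform(0.0, np.pi, n))
          x[0], x[-1] = 0.0, np.pi                    # pin the endpoints
          s = CubicSpline(x, np.sin(x))
          xf = np.linspace(0.0, np.pi, 2001)
          return np.abs(s(xf, 2) + np.sin(xf)).max()  # (sin x)'' = -sin x

      for n in (50, 100, 200):
          print(n, max_d2_error(n))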

  4. Alternative regression models to assess increase in childhood BMI

    OpenAIRE

    Beyerlein, Andreas; Fahrmeir, Ludwig; Mansmann, Ulrich; Toschke, André M

    2008-01-01

    Abstract Background Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Methods Different regression approaches to predict childhood BMI by goodness-of-fit measures and means of interpretation were compared including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 childre...

  5. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

    Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.
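
    As a rough sketch of the autoregressive side of such a hybrid (the EMD step is omitted, synthetic data stand in for the New South Wales series, and all settings are illustrative assumptions), an SVR can be trained on lagged load values with scikit-learn:

      # SVR on p lagged values of the load series (the AR part of the hybrid).
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(1)
      t = np.arange(600)
      load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

      p = 24                                       # number of lags (one day)
      X = np.column_stack([load[k:len(load) - p + k] for k in range(p)])
      y = load[p:]

      split = 500
      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
      model.fit(X[:split], y[:split])
      rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
      print("test RMSE:", rmse)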

  6. Stochastic Approximation Methods for Latent Regression Item Response Models

    Science.gov (United States)

    von Davier, Matthias; Sinharay, Sandip

    2010-01-01

    This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…

  7. Linear regression models for quantitative assessment of left ...

    African Journals Online (AJOL)

    2008-07-04

    Jul 4, 2008 ... computed. Linear regression models for the prediction of left ventricular structures were established. Prediction models for ... study aimed at establishing linear regression models that could be used in the prediction ... Is white coat hypertension associated with arterial disease or left ventricular hypertrophy?

  8. Linear Regression Models for Estimating True Subsurface ...

    Indian Academy of Sciences (India)

    of the processing time and memory space required to carry out the inversion with the SCLS algorithm. ... consumption of time and memory space for the iterative computations to converge at minimum data ... colour scale and blanking as the observed true resistivity models, for visual assessment. The accuracy ...

  9. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. The analysis and application of Poisson mixture regression models is addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction of all the models considered, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611
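
    The zero-inflation comparison is easy to reproduce in miniature. A hedged Python sketch with statsmodels on synthetic data (not the paper's code; assumes a recent statsmodels release), comparing BIC as the abstract does:

      # Ordinary Poisson vs. zero-inflated Poisson on counts with structural zeros.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedPoisson

      rng = np.random.default_rng(2)
      n = 1000
      x = rng.normal(size=n)
      X = sm.add_constant(x)
      mu = np.exp(0.3 + 0.8 * x)
      y = np.where(rng.random(n) < 0.25, 0, rng.poisson(mu))  # 25% structural zeros

      pois = sm.Poisson(y, X).fit(disp=0)
      zip_ = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0)
      print("Poisson BIC:", pois.bic, " ZIP BIC:", zip_.bic)  # ZIP should be lower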

  10. A spline polynomial model to describe serum IGF-I concentration from birth to slaughter in calves: effects of weaning age, pre-weaning concentrate feeding and breed.

    Science.gov (United States)

    Blanco, M; Casasús, I; Villalba, D

    2010-04-01

    The statistical analysis of hormones sampled throughout the production cycle is complicated because factors such as age and weight at the measuring date interfere. Spline curves constructed from pieces of low-degree, random-effects polynomials could be used for a more accurate analysis of such data. Concentrations of insulin-like growth factor-1 (IGF-1), weight gain, and concentrate intake of Parda de Montaña (PM) (n=27) and Pirenaica calves (n=14) were modeled with a spline model according to age at weaning, pre-weaning concentrate feeding, and breed. At birth, calves were randomly assigned to early weaning (EW) at 90 d or traditional weaning (TW) at 150 d. During lactation, half of the PM calves received concentrates (S), whereas the remainder received no concentrates (NS). After weaning, calves received concentrates on an ad libitum basis until they reached a weight of 450 kg. The spline model had a better likelihood than a sixth-degree polynomial or a split-plot model. Serum IGF-1 concentration was greatly affected by age at weaning and pre-weaning concentrate feeding, but not by breed. In NS calves, IGF-1 concentration was greater in EW than in TW calves from 120 to 300 d, irrespective of breed. During lactation, S calves had greater IGF-1 concentrations than NS calves. After weaning, EWNS calves reached the IGF-1 concentration of EWS calves after 4 mo on concentrates, whereas TWNS calves attained the IGF-1 concentration of TWS calves after only 2 mo, because of their increased concentrate intake relative to TWS calves. IGF-1 concentration was positively correlated with immediate weight gains and intake, but it was not a good predictor of performance in the long term. Copyright 2009 Elsevier Inc. All rights reserved.

  11. Interpolation of natural cubic spline

    Directory of Open Access Journals (Sweden)

    Arun Kumar

    1992-01-01

    Full Text Available From the result in [1] it follows that there is a unique quadratic spline which bounds the same area as that of the function. The matching of the area for the cubic spline does not follow from the corresponding result proved in [2]. We obtain cubic splines which preserve the area of the function.

  12. Biological parametric mapping accounting for random regressors with regression calibration and model II regression.

    Science.gov (United States)

    Yang, Xue; Lauzon, Carolyn B; Crainiceanu, Ciprian; Caffo, Brian; Resnick, Susan M; Landman, Bennett A

    2012-09-01

    Massively univariate regression and inference in the form of statistical parametric mapping have transformed the way in which multi-dimensional imaging data are studied. In functional and structural neuroimaging, the de facto standard "design matrix"-based general linear regression model and its multi-level cousins have enabled investigation of the biological basis of the human brain. With modern study designs, it is possible to acquire multi-modal three-dimensional assessments of the same individuals--e.g., structural, functional and quantitative magnetic resonance imaging, alongside functional and ligand binding maps with positron emission tomography. Largely, current statistical methods in the imaging community assume that the regressors are non-random. For more realistic multi-parametric assessment (e.g., voxel-wise modeling), distributional consideration of all observations is appropriate. Herein, we discuss two unified regression and inference approaches, model II regression and regression calibration, for use in massively univariate inference with imaging data. These methods use the design matrix paradigm and account for both random and non-random imaging regressors. We characterize these methods in simulation and illustrate their use on an empirical dataset. Both methods have been made readily available as a toolbox plug-in for the SPM software. Copyright © 2012 Elsevier Inc. All rights reserved.
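
    The toolbox itself targets SPM; as a language-neutral illustration of model II regression, the Deming estimator (one common model II method, with delta the assumed ratio of the two error variances) has a closed-form slope:

      # Deming (model II) regression slope; delta = var(error in y) / var(error in x).
      import numpy as np

      def deming_slope(x, y, delta=1.0):
          sxx = np.var(x, ddof=1)
          syy = np.var(y, ddof=1)
          sxy = np.cov(x, y, ddof=1)[0, 1]
          return (syy - delta * sxx
                  + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
                  ) / (2 * sxy)

      rng = np.random.default_rng(3)
      truth = rng.normal(size=500)
      x = truth + rng.normal(0, 0.3, 500)     # regressor measured with error
      y = 2.0 * truth + rng.normal(0, 0.3, 500)
      print(deming_slope(x, y))               # near 2; OLS would attenuate the slope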

  13. Genomic selection using regularized linear regression models: ridge regression, lasso, elastic net and their extensions.

    Science.gov (United States)

    Ogutu, Joseph O; Schulz-Streeck, Torben; Piepho, Hans-Peter

    2012-05-21

    Genomic selection (GS) is emerging as an efficient and cost-effective method for estimating breeding values using molecular markers distributed over the entire genome. In essence, it involves estimating the simultaneous effects of all genes or chromosomal segments and combining the estimates to predict the total genomic breeding value (GEBV). Accurate prediction of GEBVs is a central and recurring challenge in plant and animal breeding. The existence of a bewildering array of approaches for predicting breeding values using markers underscores the importance of identifying approaches able to efficiently and accurately predict breeding values. Here, we comparatively evaluate the predictive performance of six regularized linear regression methods-- ridge regression, ridge regression BLUP, lasso, adaptive lasso, elastic net and adaptive elastic net-- for predicting GEBV using dense SNP markers. We predicted GEBVs for a quantitative trait using a dataset on 3000 progenies of 20 sires and 200 dams and an accompanying genome consisting of five chromosomes with 9990 biallelic SNP-marker loci simulated for the QTL-MAS 2011 workshop. We applied all the six methods that use penalty-based (regularization) shrinkage to handle datasets with far more predictors than observations. The lasso, elastic net and their adaptive extensions further possess the desirable property that they simultaneously select relevant predictive markers and optimally estimate their effects. The regression models were trained with a subset of 2000 phenotyped and genotyped individuals and used to predict GEBVs for the remaining 1000 progenies without phenotypes. Predictive accuracy was assessed using the root mean squared error, the Pearson correlation between predicted GEBVs and (1) the true genomic value (TGV), (2) the true breeding value (TBV) and (3) the simulated phenotypic values based on fivefold cross-validation (CV). The elastic net, lasso, adaptive lasso and the adaptive elastic net all had
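
    A scaled-down Python parallel of this comparison (synthetic genotypes and scikit-learn estimators; all settings are illustrative assumptions, not the study's simulation):

      # p >> n regression with ridge, lasso and elastic net, compared by CV error.
      import numpy as np
      from sklearn.linear_model import ElasticNetCV, LassoCV, RidgeCV
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      n, p, n_qtl = 300, 2000, 30
      X = rng.binomial(2, 0.4, size=(n, p)).astype(float)   # biallelic genotypes
      beta = np.zeros(p)
      beta[rng.choice(p, n_qtl, replace=False)] = rng.normal(0, 0.5, n_qtl)
      y = X @ beta + rng.normal(0, 1.0, n)

      for name, est in [("ridge", RidgeCV()),
                        ("lasso", LassoCV(max_iter=5000)),
                        ("enet ", ElasticNetCV(l1_ratio=0.5, max_iter=5000))]:
          mse = -cross_val_score(est, X, y, cv=5,
                                 scoring="neg_mean_squared_error").mean()
          print(name, round(mse, 3))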

  14. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  15. Color management with a hammer: the B-spline fitter

    Science.gov (United States)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
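
    The fitter-as-filter idea, sketched in one dimension (the paper's fitter is a custom tensor-product B-spline for 3D color data; this SciPy sketch only shows the smoothing principle, with an assumed smoothing level):

      # A smoothing spline removes measurement noise and leaves a model with
      # guaranteed smoothness whose derivatives are cheap to evaluate.
      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(5)
      x = np.linspace(0, 1, 200)
      measured = np.sin(2 * np.pi * x) + rng.normal(0, 0.05, x.size)

      fit = UnivariateSpline(x, measured, k=3, s=len(x) * 0.05 ** 2)
      err = np.abs(fit(x) - np.sin(2 * np.pi * x)).max()
      print("max deviation from the true curve:", err)
      dfit = fit.derivative()     # derivatives come directly from the spline form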

  16. Optimal Approximation of Biquartic Polynomials by Bicubic Splines

    Science.gov (United States)

    Kačala, Viliam; Török, Csaba

    2018-02-01

    Recently an unexpected approximation property between polynomials of degree three and four was revealed within the framework of two-part approximation models in 2-norm, Chebyshev norm and Holladay seminorm. Namely, it was proved that if a two-component cubic Hermite spline's first derivative at the shared knot is computed from the first derivative of a quartic polynomial, then the spline is a clamped spline of class C2 and also the best approximant to the polynomial. Although it was known that a 2 × 2 component uniform bicubic Hermite spline is a clamped spline of class C2 if the derivatives at the shared knots are given by the first derivatives of a biquartic polynomial, the optimality of such approximation remained an open question. The goal of this paper is to resolve this problem. Unlike the spline curves, in the case of spline surfaces it is insufficient to suppose that the grid should be uniform and the spline derivatives computed from a biquartic polynomial. We show that the biquartic polynomial coefficients have to satisfy some additional constraints to achieve optimal approximation by bicubic splines.

  17. On Characterization of Quadratic Splines

    DEFF Research Database (Denmark)

    Chen, B. T.; Madsen, Kaj; Zhang, Shuzhong

    2005-01-01

    A quadratic spline is a differentiable piecewise quadratic function. Many problems in the numerical analysis and optimization literature can be reformulated as unconstrained minimizations of quadratic splines. However, only special cases of quadratic splines are studied in the existing literature, and algorithms are developed on a case-by-case basis. There is no analytical representation of a general, or even a convex, quadratic spline. The current paper fills this gap by providing an analytical representation of a general quadratic spline. Furthermore, for convex quadratic splines, a connection is shown between the convexity of a quadratic spline function and the monotonicity of the corresponding LCP problem. It is shown that, although both conditions lead to easy solvability of the problem, they are different in general.

  18. An approach for quantifying small effects in regression models.

    Science.gov (United States)

    Bedrick, Edward J; Hund, Lauren

    2018-04-01

    We develop a novel approach for quantifying small effects in regression models. Our method is based on variation in the mean function, in contrast to methods that focus on regression coefficients. Our idea applies in diverse settings such as testing for a negligible trend and quantifying differences in regression functions across strata. Straightforward Bayesian methods are proposed for inference. Four examples are used to illustrate the ideas.

  19. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n^3). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  20. Optimal experimental designs for inverse quadratic regression models

    OpenAIRE

    Dette, Holger; Kiss, Christine

    2007-01-01

    In this paper optimal experimental designs for inverse quadratic regression models are determined. We consider two different parameterizations of the model and investigate local optimal designs with respect to the $c$-, $D$- and $E$-criteria, which reflect various aspects of the precision of the maximum likelihood estimator for the parameters in inverse quadratic regression models. In particular it is demonstrated that for a sufficiently large design space geometric allocation rules are optim...

  1. Methods of Detecting Outliers in A Regression Analysis Model ...

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-06-01

    Jun 1, 2013 ... This is the type of linear regression that involves only two variables, one independent and one dependent, plus the random error term. The simple linear regression model assumes that there is a straight-line (linear) relationship between the dependent variable Y and the independent variable X. This can be.

  2. Tutorial on Using Regression Models with Count Outcomes Using R

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2016-02-01

    Full Text Available Education researchers often study count variables, such as times a student reached a goal, discipline referrals, and absences. Most researchers who study these variables use typical regression methods (i.e., ordinary least squares) either with or without transforming the count variables. In either case, using typical regression for count data can produce parameter estimates that are biased, thus diminishing any inferences made from such data. As count-variable regression models are seldom taught in training programs, we present a tutorial to help educational researchers use such methods in their own research. We demonstrate analyzing and interpreting count data using Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial regression models. The count regression methods are introduced through an example using the number of times students skipped class. The data for this example are freely available, and the R syntax used to run the example analyses is included in the Appendix.
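
    The tutorial's worked examples are in R; a parallel Python sketch with statsmodels on synthetic skipped-class counts (all numbers and names are illustrative assumptions):

      # Poisson vs. negative binomial for overdispersed counts.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      n = 500
      gpa = rng.uniform(1, 4, n)
      X = sm.add_constant(gpa)
      mu = np.exp(1.5 - 0.5 * gpa)                 # expected classes skipped
      y = rng.negative_binomial(2, 2 / (2 + mu))   # overdispersed counts

      pois = sm.Poisson(y, X).fit(disp=0)
      nb = sm.NegativeBinomial(y, X).fit(disp=0)
      print("Poisson AIC:", pois.aic, " NegBin AIC:", nb.aic)  # NB should fit better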

  3. A cubic B-spline-based hybrid registration of lung CT images for a dynamic airway geometric model with large deformation.

    Science.gov (United States)

    Yin, Youbing; Hoffman, Eric A; Ding, Kai; Reinhardt, Joseph M; Lin, Ching-Long

    2011-01-07

    The goal of this study is to develop a matching algorithm that can handle large geometric changes in x-ray computed tomography (CT)-derived lung geometry occurring during deep breath maneuvers. These geometric relationships are further utilized to build a dynamic lung airway model for computational fluid dynamics (CFD) studies of pulmonary air flow. The proposed algorithm is based on a cubic B-spline-based hybrid registration framework that incorporates anatomic landmark information with intensity patterns. A sequence of invertible B-splines is composed in a multiresolution framework to ensure local invertibility of the large deformation transformation and a physiologically meaningful similarity measure is adopted to compensate for changes in voxel intensity due to inflation. Registrations are performed using the proposed approach to match six pairs of 3D CT human lung datasets. Results show that the proposed approach has the ability to match the intensity pattern and the anatomical landmarks, and ensure local invertibility for large deformation transformations. Statistical results also show that the proposed hybrid approach yields significantly improved results as compared with approaches using either landmarks or intensity alone.

  4. Analysis of ambulatory blood pressure monitor data using a hierarchical model incorporating restricted cubic splines and heterogeneous within-subject variances.

    Science.gov (United States)

    Lambert, P C; Abrams, K R; Jones, D R; Halligan, A W; Shennan, A

    2001-12-30

    Hypertensive disorders of pregnancy are associated with significant maternal and foetal morbidity. Measurement of blood pressure remains the standard way of identifying individuals at risk. There is growing interest in the use of ambulatory blood pressure monitors (ABPM), which can record an individual's blood pressure many times over a 24-hour period. From a clinical perspective interest lies in the shape of the blood pressure profile over a 24-hour period and any differences in the profile between groups. We propose a two-level hierarchical linear model incorporating all ABPM data into a single model. We contrast a classical approach with a Bayesian approach using the results of a study of 206 pregnant women who were asked to wear an ABPM for 24 hours after referral to an obstetric day unit with high blood pressure. As the main interest lies in the shape of the profile, we use restricted cubic splines to model the mean profiles. The use of restricted cubic splines provides a flexible way to model the mean profiles and to make comparisons between groups. From examining the data and the fit of the model it is apparent that there were heterogeneous within-subject variances in that some women tend to have more variable blood pressure than others. Within the Bayesian framework it is relatively easy to incorporate a random effect to model the between-subject variation in the within-subject variances. Although there is substantial heterogeneity in the within-subject variances, allowing for this in the model has surprisingly little impact on the estimates of the mean profiles or their confidence/credible intervals. We thus demonstrate a powerful method for analysis of ABPM data and also demonstrate how heterogeneous within-subject variances can be modelled from a Bayesian perspective. Copyright 2001 John Wiley & Sons, Ltd.
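
    Restricted cubic splines of the kind used for these mean profiles have a compact closed-form basis; a small NumPy construction of Harrell's (unscaled) basis, with knot values chosen purely for illustration:

      # Basis columns: x itself plus k - 2 cubic terms restricted to be linear
      # beyond the boundary knots.
      import numpy as np

      def rcs_basis(x, knots):
          x = np.asarray(x, dtype=float)
          t = np.asarray(knots, dtype=float)
          k = len(t)
          pos3 = lambda u: np.maximum(u, 0.0) ** 3
          cols = [x]
          for j in range(k - 2):
              cols.append(pos3(x - t[j])
                          - pos3(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                          + pos3(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
          return np.column_stack(cols)

      hours = np.linspace(0, 24, 97)               # a 24-hour ABPM grid
      B = rcs_basis(hours, knots=[2, 8, 14, 20])
      print(B.shape)                               # (97, 3): x plus k - 2 terms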

  5. Application of random regression models to the genetic evaluation ...

    African Journals Online (AJOL)

    The model included fixed regression on AM (range from 30 to 138 mo) and the effect of herd-measurement date concatenation. Random parts of the model were RRM coefficients for additive and permanent environmental effects, while residual effects were modelled to account for heterogeneity of variance by AY. Estimates ...

  6. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method...... and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power...... production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered....
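
    A static (non-adaptive) counterpart of the benchmark model, combining spline features with quantile regression, is straightforward in scikit-learn; a hedged sketch on synthetic wind-power-like data (all settings are assumptions):

      # A 90% quantile model: spline basis expansion followed by a pinball-loss fit.
      import numpy as np
      from sklearn.linear_model import QuantileRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import SplineTransformer

      rng = np.random.default_rng(7)
      forecast = rng.uniform(0, 1, 800)[:, None]         # forecasted power
      noise = rng.normal(0, 0.1 * (1 + forecast.ravel()))
      actual = np.clip(forecast.ravel() + noise, 0, 1)

      q90 = make_pipeline(SplineTransformer(degree=3, n_knots=6),
                          QuantileRegressor(quantile=0.9, alpha=0.0))
      q90.fit(forecast, actual)
      print("empirical coverage:", (actual <= q90.predict(forecast)).mean())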

  7. Regression Model Optimization for the Analysis of Experimental Data

    Science.gov (United States)

    Ulbrich, N.

    2009-01-01

    A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold-dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
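
    For a linear model the PRESS residuals that drive this search metric need no refitting; the standard leave-one-out identity computes them from a single fit (a generic Python sketch, not the NASA implementation):

      # PRESS residual for observation i: e_i / (1 - h_ii), h from the hat matrix.
      import numpy as np

      def press_stddev(X, y):
          H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
          resid = y - H @ y
          press = resid / (1.0 - np.diag(H))
          return press.std(ddof=1)

      rng = np.random.default_rng(8)
      x = rng.uniform(-1, 1, 40)
      y = 1 + 2 * x + 0.5 * x ** 2 + rng.normal(0, 0.1, 40)
      for cols in ([np.ones_like(x), x],
                   [np.ones_like(x), x, x ** 2]):
          print(press_stddev(np.column_stack(cols), y))  # smaller for the true model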

  8. Real estate value prediction using multivariate regression models

    Science.gov (United States)

    Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav

    2017-11-01

    The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on a lot of factors; hence it becomes one of the prime fields in which to apply the concepts of machine learning to optimize and predict prices with high accuracy. Therefore, in this paper we present various important features for use in predicting housing prices with good accuracy. We describe regression models using various features to achieve a lower residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (applying various powers to the features) is used to make the model fit better. As these models are expected to be susceptible to overfitting, ridge regression is used to reduce it. This paper thus directs to the best application of regression models, in addition to other techniques, to optimize the result.

  9. LOCALLY REFINED SPLINES REPRESENTATION FOR GEOSPATIAL BIG DATA

    Directory of Open Access Journals (Sweden)

    T. Dokken

    2015-08-01

    Full Text Available When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing the bathymetry of the smooth sea and ocean floor. A drastic reduction is demonstrated in the bulk of the data representation compared to the size of the input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection, as new sensor data can efficiently be compared to the compact LR B-spline representation.

  10. Alternative regression models to assess increase in childhood BMI

    Directory of Open Access Journals (Sweden)

    Mansmann Ulrich

    2008-09-01

    Full Text Available Abstract Background Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Methods Different regression approaches to predict childhood BMI by goodness-of-fit measures and means of interpretation were compared, including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 children participating in the school entry health examination in Bavaria, Germany, from 2001 to 2002. TV watching, meal frequency, breastfeeding, smoking in pregnancy, maternal obesity, parental social class and weight gain in the first 2 years of life were considered as risk factors for obesity. Results GAMLSS showed a much better fit regarding the estimation of risk factor effects on transformed and untransformed BMI data than common GLMs with respect to the generalized Akaike information criterion. In comparison with GAMLSS, quantile regression allowed for additional interpretation of prespecified distribution quantiles, such as quantiles referring to overweight or obesity. The variables TV watching, maternal BMI and weight gain in the first 2 years were directly, and meal frequency was inversely, significantly associated with body composition in any model type examined. In contrast, smoking in pregnancy was not directly, and breastfeeding and parental social class were not inversely, significantly associated with body composition in GLM models, but were in GAMLSS and partly in quantile regression models. Risk factor specific BMI percentile curves could be estimated from GAMLSS and quantile regression models. Conclusion GAMLSS and quantile regression seem to be more appropriate than common GLMs for risk factor modeling of BMI data.

  11. Alternative regression models to assess increase in childhood BMI.

    Science.gov (United States)

    Beyerlein, Andreas; Fahrmeir, Ludwig; Mansmann, Ulrich; Toschke, André M

    2008-09-08

    Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Different regression approaches to predict childhood BMI by goodness-of-fit measures and means of interpretation were compared, including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 children participating in the school entry health examination in Bavaria, Germany, from 2001 to 2002. TV watching, meal frequency, breastfeeding, smoking in pregnancy, maternal obesity, parental social class and weight gain in the first 2 years of life were considered as risk factors for obesity. GAMLSS showed a much better fit regarding the estimation of risk factor effects on transformed and untransformed BMI data than common GLMs with respect to the generalized Akaike information criterion. In comparison with GAMLSS, quantile regression allowed for additional interpretation of prespecified distribution quantiles, such as quantiles referring to overweight or obesity. The variables TV watching, maternal BMI and weight gain in the first 2 years were directly, and meal frequency was inversely, significantly associated with body composition in any model type examined. In contrast, smoking in pregnancy was not directly, and breastfeeding and parental social class were not inversely, significantly associated with body composition in GLM models, but were in GAMLSS and partly in quantile regression models. Risk factor specific BMI percentile curves could be estimated from GAMLSS and quantile regression models. GAMLSS and quantile regression seem to be more appropriate than common GLMs for risk factor modeling of BMI data.

  12. Gradient-based optimization with B-splines on sparse grids for solving forward-dynamics simulations of three-dimensional, continuum-mechanical musculoskeletal system models.

    Science.gov (United States)

    Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O

    2018-02-10

    Investigating the interplay between muscular activity and motion is the basis for improving our understanding of healthy or diseased musculoskeletal systems. Computational models are employed to analyze these musculoskeletal systems. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multi-body simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are essentially not considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model in which the exerted skeletal muscle forces are computed using three-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a two-muscle system, it can easily be extended to musculoskeletal systems with three or more muscles. This article is protected by copyright. All rights reserved.

  13. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    Science.gov (United States)

    Ulbrich, N.; Bader, Jon B.

    2010-01-01

    Calibration data of a wind tunnel sting balance were processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can be derived directly from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.

  14. Robust mislabel logistic regression without modeling mislabel probabilities.

    Science.gov (United States)

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term; that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.

  15. Smoothing quadratic and cubic splines

    OpenAIRE

    Oukropcová, Kateřina

    2014-01-01

    Title: Smoothing quadratic and cubic splines Author: Kateřina Oukropcová Department: Department of Numerical Mathematics Supervisor: RNDr. Václav Kučera, Ph.D., Department of Numerical Mathematics Abstract: The aim of this bachelor thesis is to study the topic of smoothing quadratic and cubic splines on uniform partitions. First, we define the basic concepts in the field of splines, next we introduce interpolating splines with a focus on their minimizing properties for odd degree and quadra...

  16. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from the minimization of a quadratic functional but, more generally, for splines considered as piecewise functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  17. Buffalos milk yield analysis using random regression models

    Directory of Open Access Journals (Sweden)

    A.S. Schierholt

    2010-02-01

    Full Text Available Data comprising 1,719 milk yield records from 357 females (predominantly Murrah breed), daughters of 110 sires, with births from 1974 to 2004, obtained from the Programa de Melhoramento Genético de Bubalinos (PROMEBUL) and from records of the EMBRAPA Amazônia Oriental (EAO) herd, located in Belém, Pará, Brazil, were used to compare random regression models for estimating variance components and predicting breeding values of the sires. The data were analyzed by different models using Legendre polynomial functions of second to fourth order. The random regression models included the effects of herd-year, month of parity, and date of the control; regression coefficients for age of females (in order to describe the fixed part of the lactation curve); and random regression coefficients related to the direct genetic and permanent environment effects. The comparisons among the models were based on the Akaike information criterion. The random regression model using third-order Legendre polynomials with four classes of the environmental effect was the one that best described the additive genetic variation in milk yield. The heritability estimates varied from 0.08 to 0.40. The genetic correlation between milk yields at younger ages was close to unity, but at older ages it was low.

  18. Linear regression models for quantitative assessment of left ...

    African Journals Online (AJOL)

    Changes in left ventricular structures and function have been reported in cardiomyopathies. No prediction models have been established in this environment. This study established regression models for prediction of left ventricular structures in normal subjects. A sample of normal subjects was drawn from a large urban ...

  19. Uncertainties in spatially aggregated predictions from a logistic regression model

    NARCIS (Netherlands)

    Horssen, P.W. van; Pebesma, E.J.; Schot, P.P.

    2002-01-01

    This paper presents a method to assess the uncertainty of an ecological spatial prediction model which is based on logistic regression models, using data from the interpolation of explanatory predictor variables. The spatial predictions are presented as approximate 95% prediction intervals. The

  20. Regression models for estimating charcoal yield in a Eucalyptus ...

    African Journals Online (AJOL)

    ... dbh2H, and the product of dbh and merchantable height [(dbh)MH] as independent variables. Results of residual analysis showed that the models satisfied all the assumptions of regression analysis. Keywords: Models, charcoal production, biomass, Eucalyptus, arid, energy, allometric. Bowen Journal of Agriculture Vol.

  1. CICAAR - Convolutive ICA with an Auto-Regressive Inverse Model

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Hansen, Lars Kai

    2004-01-01

    We invoke an auto-regressive IIR inverse model for convolutive ICA and derive expressions for the likelihood and its gradient. We argue that optimization will give a stable inverse. When there are more sensors than sources the mixing model parameters are estimated in a second step by least squares...

  2. Default Bayes Factors for Model Selection in Regression

    Science.gov (United States)

    Rouder, Jeffrey N.; Morey, Richard D.

    2012-01-01

    In this article, we present a Bayes factor solution for inference in multiple regression. Bayes factors are principled measures of the relative evidence from data for various models or positions, including models that embed null hypotheses. In this regard, they may be used to state positive evidence for a lack of an effect, which is not possible…

  3. Geographically Weighted Logistic Regression Applied to Credit Scoring Models

    Directory of Open Access Journals (Sweden)

    Pedro Henrique Melo Albuquerque

    Full Text Available Abstract This study used real data from a Brazilian financial institution on transactions involving Consumer Direct Credit (CDC), granted to clients residing in the Distrito Federal (DF), to construct credit scoring models via Logistic Regression and Geographically Weighted Logistic Regression (GWLR) techniques. The aims were: to verify whether the factors that influence credit risk differ according to the borrower's geographic location; to compare the set of models estimated via GWLR with the global model estimated via Logistic Regression, in terms of predictive power and financial losses for the institution; and to verify the viability of using the GWLR technique to develop credit scoring models. The metrics used to compare the models developed via the two techniques were the AICc information criterion, the accuracy of the models, the percentage of false positives, the sum of the value of false positive debt, and the expected monetary value of portfolio default compared with the monetary value of defaults observed. The models estimated for each region in the DF were distinct in their variables and coefficients (parameters), with it being concluded that credit risk was influenced differently in each region in the study. The Logistic Regression and GWLR methodologies presented very close results, in terms of predictive power and financial losses for the institution, and the study demonstrated the viability of using the GWLR technique to develop credit scoring models for the target population in the study.

  4. Splines and variational methods

    CERN Document Server

    Prenter, P M

    2008-01-01

    One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions.

  5. Visualisation and interpretation of Support Vector Regression models.

    Science.gov (United States)

    Ustün, B; Melssen, W J; Buydens, L M C

    2007-07-09

    This paper introduces a technique to visualise the information content of the kernel matrix and a way to interpret the ingredients of the Support Vector Regression (SVR) model. Recently, the use of Support Vector Machines (SVM) for solving classification (SVC) and regression (SVR) problems has increased substantially in the field of chemistry and chemometrics. This is mainly due to their high generalisation performance and their ability to model non-linear relationships in a unique and global manner. Modeling of non-linear relationships is enabled by applying a kernel function. The kernel function transforms the input data, usually non-linearly related to the associated output property, into a high-dimensional feature space where the non-linear relationship can be represented in linear form. Usually, SVMs are applied as a black box technique. Hence, the model cannot be interpreted in the way that, for example, a Partial Least Squares (PLS) model can. For example, the PLS scores and loadings make it possible to visualise and understand the driving force behind the optimal PLS machinery. In this study, we have investigated the possibilities to visualise and interpret the SVM model. Here, we have focused exclusively on Support Vector Regression to demonstrate these visualisation and interpretation techniques. Our observations show that we are now able to turn an SVR black box model into a transparent and interpretable regression modeling technique.

  6. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia

    2018-04-10

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite the fact that this leads to a proper posterior for the regression coefficients, the resulting posterior variance is affected by an unidentifiable parameter, hence any inferential procedure besides point estimation is unreliable. We propose a model-based approach for quantile regression that considers quantiles of the generating distribution directly, and thus allows for proper uncertainty quantification. We then create a link between quantile regression and generalised linear models by mapping the quantiles to the parameter of the response variable, and we exploit it to fit the model with R-INLA. We also extend it to the case of discrete responses, where there is no 1-to-1 relationship between quantiles and the distribution's parameters, by introducing continuous generalisations of the most common discrete variables (Poisson, Binomial and Negative Binomial) to be exploited in the fitting.

  7. Mapping soil organic carbon stocks by robust geostatistical and boosted regression models

    Science.gov (United States)

    Nussbaum, Madlene; Papritz, Andreas; Baltensweiler, Andri; Walthert, Lorenz

    2013-04-01

    Carbon (C) sequestration in forests offsets greenhouse gas emissions. Therefore, quantifying C stocks and fluxes in forest ecosystems is of interest for greenhouse gas reporting according to the Kyoto protocol. In Switzerland, the National Forest Inventory offers comprehensive data to quantify the aboveground forest biomass and its change in time. Estimating stocks of soil organic C (SOC) in forests is more difficult because the variables needed to quantify stocks vary strongly in space and precise quantification of some of them is very costly. Based on data from 1'033 plots we modeled SOC stocks of the organic layer and the mineral soil to depths of 30 cm and 100 cm for the Swiss forested area. For the statistical modeling a broad range of covariates was available: climate data (e.g. precipitation, temperature), two elevation models (resolutions 25 and 2 m) with respective terrain attributes, and spectral reflectance data representing vegetation. Furthermore, the main mapping units of an overview soil map and a coarse-scale geological map were used to coarsely represent the parent material of the soils. The selection of important covariates for SOC stock modeling out of a large set was a major challenge for the statistical modeling. We used two approaches to deal with this problem: 1) A robust restricted maximum likelihood method to fit linear regression models with spatially correlated errors. The large number of covariates was first reduced by the LASSO (Least Absolute Shrinkage and Selection Operator) and then further narrowed down to a parsimonious set of important covariates by cross-validation of the robustly fitted model. To account for nonlinear dependencies of the response on the covariates, interaction terms of the latter were included in the model if this improved the fit. 2) A boosted structured regression model with componentwise linear least squares or componentwise smoothing splines as base procedures. The selection of important covariates was done by the

  8. Regression modeling strategies with applications to linear models, logistic and ordinal regression, and survival analysis

    CERN Document Server

    Harrell , Jr , Frank E

    2015-01-01

    This highly anticipated second edition features new chapters and sections, 225 new references, and comprehensive R software. In keeping with the previous edition, this book is about the art and science of data analysis and predictive modeling, which entails choosing and using multiple tools. Instead of presenting isolated techniques, this text emphasizes problem solving strategies that address the many issues arising when developing multivariable models using real data and not standard textbook examples. It includes imputation methods for dealing with missing data effectively, methods for fitting nonlinear relationships and for making the estimation of transformations a formal part of the modeling process, methods for dealing with "too many variables to analyze and not enough observations," and powerful model validation techniques based on the bootstrap.  The reader will gain a keen understanding of predictive accuracy, and the harm of categorizing continuous predictors or outcomes.  This text realistically...

  9. Model building strategy for logistic regression: purposeful selection.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-03-01

    Logistic regression is one of the most commonly used models to account for confounders in the medical literature. This article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable will have a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment for the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
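
    The likelihood ratio test at the heart of purposeful selection compares nested fits; a hedged Python sketch with statsmodels (the article itself uses R; the data and variable names below are placeholders):

      # Does dropping a covariate significantly worsen a logistic model's fit?
      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import chi2

      rng = np.random.default_rng(9)
      n = 400
      age = rng.normal(60, 10, n)
      sofa = rng.normal(6, 2, n)
      lin = -8 + 0.1 * age + 0.15 * sofa
      died = rng.binomial(1, 1 / (1 + np.exp(-lin)))

      full = sm.Logit(died, sm.add_constant(np.column_stack([age, sofa]))).fit(disp=0)
      reduced = sm.Logit(died, sm.add_constant(age)).fit(disp=0)
      lr = 2 * (full.llf - reduced.llf)
      print("LR statistic:", lr, " p =", chi2.sf(lr, df=1))  # keep sofa if p is small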

  10. The art of regression modeling in road safety

    CERN Document Server

    Hauer, Ezra

    2015-01-01

    This unique book explains how to fashion, from commonly available data, the regression models essential for evidence-based road safety management and research. Composed from techniques and best practices presented over many years of lectures and workshops, The Art of Regression Modeling in Road Safety illustrates that fruitful modeling cannot be done without substantive knowledge about the modeled phenomenon. Class-tested in courses and workshops across North America, the book is ideal for professionals, researchers, university professors, and graduate students with an interest in, or responsibilities related to, road safety. This book also: · Presents for the first time a powerful analytical tool for road safety researchers and practitioners · Includes problems and solutions in each chapter as well as data and spreadsheets for running models and PowerPoint presentation slides · Features pedagogy well-suited for graduate courses and workshops including problems, solutions, and PowerPoint p...

  11. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can then be inverted to recover the intrinsic state associated with a superficial state observed from a power plant. The diagnosis method proposed herein comprises three processes: 1) simulations of degradation conditions to obtain measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs including various root causes and/or boundary conditions, whereas the inverse what-if method is the process of calculating the inverse matrix with the given superficial states, that is, component degradation modes. The method suggested in this paper was validated using the turbine cycle model for an operating power plant

  12. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  13. Detecting influential observations in nonlinear regression modeling of groundwater flow

    Science.gov (United States)

    Yager, Richard M.

    1998-01-01

    Nonlinear regression is used to estimate optimal parameter values in models of groundwater flow to ensure that differences between predicted and observed heads and flows do not result from nonoptimal parameter values. Parameter estimates can be affected, however, by observations that disproportionately influence the regression, such as outliers that exert undue leverage on the objective function. Certain statistics developed for linear regression can be used to detect influential observations in nonlinear regression if the models are approximately linear. This paper discusses the application of Cook's D, which measures the effect of omitting a single observation on a set of estimated parameter values, and the statistical parameter DFBETAS, which quantifies the influence of an observation on each parameter. The influence statistics were used to (1) identify the influential observations in the calibration of a three-dimensional, groundwater flow model of a fractured-rock aquifer through nonlinear regression, and (2) quantify the effect of omitting influential observations on the set of estimated parameter values. Comparison of the spatial distribution of Cook's D with plots of model sensitivity shows that influential observations correspond to areas where the model heads are most sensitive to certain parameters, and where predicted groundwater flow rates are largest. Five of the six discharge observations were identified as influential, indicating that reliable measurements of groundwater flow rates are valuable data in model calibration. DFBETAS are computed and examined for an alternative model of the aquifer system to identify a parameterization error in the model design that resulted in overestimation of the effect of anisotropy on horizontal hydraulic conductivity.

  14. The EH Interpolation Spline and Its Approximation

    Directory of Open Access Journals (Sweden)

    Jin Xie

    2014-01-01

    A new interpolation spline with two parameters, called the EH interpolation spline, is presented in this paper. It is an extension of the standard cubic Hermite interpolation spline and inherits its properties. Given fixed interpolation conditions, the shape of the proposed spline can be adjusted by changing the values of the parameters. Moreover, by a new algorithm, the introduced spline can approximate the interpolated function better than the standard cubic Hermite interpolation spline and the quartic Hermite interpolation spline with a single parameter.
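    Since the EH spline extends the standard cubic Hermite interpolant, the sketch below evaluates that standard Hermite form on one interval in base R; the two EH shape parameters themselves are given only in the paper and are not reproduced here.

    ```r
    # Cubic Hermite interpolant on [0, 1] from endpoint values and derivatives
    hermite <- function(t, y0, y1, m0, m1) {
      (2 * t^3 - 3 * t^2 + 1) * y0 + (t^3 - 2 * t^2 + t) * m0 +
        (-2 * t^3 + 3 * t^2) * y1 + (t^3 - t^2) * m1
    }
    t <- seq(0, 1, length.out = 101)
    head(hermite(t, y0 = 0, y1 = 1, m0 = 3, m1 = 0))
    ```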

  15. Direction of Effects in Multiple Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
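    A hedged illustration of the idea in base R: when the true model runs from a skewed predictor to the outcome, the residuals of the correctly directed regression stay roughly symmetric, while the residuals of the reversed regression inherit the skewness. The data are simulated.

    ```r
    set.seed(2)
    n <- 1000
    x <- rexp(n) - 1              # non-normal "true" predictor
    y <- 0.6 * x + rnorm(n)       # true direction: x -> y

    # Third standardized central moment (skewness) of regression residuals
    skew <- function(e) mean((e - mean(e))^3) / sd(e)^3
    skew(resid(lm(y ~ x)))        # target model: close to zero
    skew(resid(lm(x ~ y)))        # mis-directed model: clearly skewed
    ```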

  16. Logistic regression for risk factor modelling in stuttering research.

    Science.gov (United States)

    Reed, Phil; Wu, Yaqionq

    2013-06-01

    To outline the uses of logistic regression and other statistical methods for risk factor analysis in the context of research on stuttering. The principles underlying the application of a logistic regression are illustrated, and the types of questions to which such a technique has been applied in the stuttering field are outlined. The assumptions and limitations of the technique are discussed with respect to existing stuttering research, and with respect to formulating appropriate research strategies to accommodate these considerations. Finally, some alternatives to the approach are briefly discussed. The way the statistical procedures are employed is demonstrated with some hypothetical data. Research into several practical issues concerning stuttering could benefit if risk factor modelling were used. Important examples are early diagnosis, prognosis (whether a child will recover or persist) and assessment of treatment outcome. After reading this article you will: (a) Summarize the situations in which logistic regression can be applied to a range of issues about stuttering; (b) Follow the steps in performing a logistic regression analysis; (c) Describe the assumptions of the logistic regression technique and the precautions that need to be checked when it is employed; (d) Be able to summarize its advantages over other techniques like estimation of group differences and simple regression. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Applications of some discrete regression models for count data

    Directory of Open Access Journals (Sweden)

    B. M. Golam Kibria

    2006-01-01

    In this paper we consider several regression models to fit count data encountered in the fields of biometrics, environmental and social sciences, and transportation engineering. We fitted Poisson (PO), Negative Binomial (NB), Zero-Inflated Poisson (ZIP) and Zero-Inflated Negative Binomial (ZINB) regression models to run-off-road (ROR) crash data collected on arterial roads in the southern (rural) region of the state of Florida. To compare the performance of these models, we analyzed data with a moderate to high percentage of zero counts. Because the variances were almost three times greater than the means, it appeared that both the NB and ZINB models performed better than the PO and ZIP models for these zero-inflated and overdispersed count data.
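    The model comparison can be sketched in R, assuming the MASS and pscl packages for the NB and zero-inflated fits; the crash counts and exposure covariate below are simulated stand-ins, not the Florida data.

    ```r
    library(MASS); library(pscl)
    set.seed(3)
    n    <- 300
    aadt <- runif(n, 1, 10)                     # illustrative exposure covariate
    mu   <- exp(-1 + 0.3 * aadt)
    crashes <- rnbinom(n, size = 1, mu = mu) * rbinom(n, 1, 0.7)  # overdispersed, extra zeros

    po   <- glm(crashes ~ aadt, family = poisson)
    nb   <- glm.nb(crashes ~ aadt)
    zip  <- zeroinfl(crashes ~ aadt | 1, dist = "poisson")
    zinb <- zeroinfl(crashes ~ aadt | 1, dist = "negbin")
    sapply(list(PO = po, NB = nb, ZIP = zip, ZINB = zinb), AIC)  # lower is better
    ```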

  18. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, causes misleading statistical inference and analysis. Therefore, our goal is to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying a Bayesian formulation to the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.

  19. Time series regression model for infectious disease and weather.

    Science.gov (United States)

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
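    A minimal sketch of two of the modifications mentioned above, quasi-Poisson errors for overdispersion and the logarithm of lagged counts as a covariate, on simulated weekly data; the seasonal terms and variable names are illustrative assumptions.

    ```r
    set.seed(4)
    wk    <- 1:260                                     # five years, weekly
    temp  <- 15 + 10 * sin(2 * pi * wk / 52) + rnorm(260)
    cases <- rpois(260, exp(2 + 0.05 * temp))
    d <- data.frame(wk, temp, cases,
                    loglag = log(c(NA, head(cases, -1)) + 1))  # lag-1 cases

    fit <- glm(cases ~ temp + loglag + sin(2 * pi * wk / 52) + cos(2 * pi * wk / 52),
               family = quasipoisson, data = d)
    summary(fit)$dispersion        # values well above 1 flag overdispersion
    ```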

  20. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  1. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  2. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  3. Linearity and Misspecification Tests for Vector Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    Teräsvirta, Timo; Yang, Yukai

    The purpose of the paper is to derive Lagrange multiplier and Lagrange multiplier type specification and misspecification tests for vector smooth transition regression models. We report results from simulation studies in which the size and power properties of the proposed asymptotic tests in small...

  4. Application of multilinear regression analysis in modeling of soil ...

    African Journals Online (AJOL)

    The application of a Multi-Linear Regression Analysis (MLRA) model for predicting soil properties in Calabar South offers a technical guide and solution to foundation design problems in the area. Forty-five soil samples were collected from fifteen different boreholes at different depths, and 270 tests were carried out for CBR, ...

  5. Regression Models and Experimental Designs: A Tutorial for Simulation Analysts

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2006-01-01

    This tutorial explains the basics of linear regression models, especially low-order polynomials, and the corresponding statistical designs, namely designs of resolution III, IV, V, and Central Composite Designs (CCDs). This tutorial assumes 'white noise', which means that the residuals of the fitted

  6. Application of multilinear regression analysis in modeling of soil

    African Journals Online (AJOL)

    Egbe, J. G.; Ewa, D. E.; Ubi, S. E.; Ikwa, G. B.; Tumenayo, O. O.

    Application of multilinear regression analysis in modeling of soil properties for geotechnical civil engineering works in Calabar South. Department of Civil Engineering, Cross River University.

  7. A binary logistic regression model with complex sampling design of ...

    African Journals Online (AJOL)

    A binary logistic regression model with complex sampling design of unmet need for family planning among all women aged (15-49) in Ethiopia. ... Conclusion: The key determinants of unmet need for family planning in Ethiopia were residence, age, marital status, education, household members, birth events and number of ...

  8. Transpiration of glasshouse rose crops: evaluation of regression models

    NARCIS (Netherlands)

    Baas, R.; Rijssel, van E.

    2006-01-01

    Regression models of transpiration (T) based on global radiation inside the greenhouse (G), with or without energy input from heating pipes (Eh) and/or vapor pressure deficit (VPD) were parameterized. Therefore, data on T, G, temperatures from air, canopy and heating pipes, and VPD from both a

  9. A simultaneous confidence band for sparse longitudinal regression

    KAUST Repository

    Ma, Shujie

    2012-01-01

    Functional data analysis has received considerable recent attention and a number of successful applications have been reported. In this paper, asymptotically simultaneous confidence bands are obtained for the mean function of the functional regression model, using piecewise constant spline estimation. Simulation experiments corroborate the asymptotic theory. The confidence band procedure is illustrated by analyzing CD4 cell counts of HIV infected patients.

  10. Towards Additive Manufacture of Functional, Spline-Based Morphometric Models of Healthy and Diseased Coronary Arteries: In Vitro Proof-of-Concept Using a Porcine Template

    Directory of Open Access Journals (Sweden)

    Rachel Jewkes

    2018-02-01

    The aim of this study is to assess the additive manufacture of morphometric models of healthy and diseased coronary arteries. Using a dissected porcine coronary artery, a model was developed with the use of computer-aided engineering, with splines used to design arteries in health and disease. The model was altered to demonstrate four cases of stenosis of varying severity, based on published morphometric data. Both an Objet Eden 250 printer and a Solidscape 3Z Pro printer were used in this analysis. A wax-printed model was set into a flexible thermoplastic and proved valuable for experimental testing, with helical flow patterns observed in the healthy models, dominating the distal LAD (left anterior descending) and left circumflex arteries. Recirculation zones were detected in all models, but were visibly larger in the stenosed cases. Resin models provide useful analytical tools for understanding the spatial relationships of blood vessels, and could be applied to preoperative planning techniques, but were not suitable for physical testing. In conclusion, it is feasible to develop blood vessel models enabling experimental work; further, through additive manufacture of bio-compatible materials, there is the possibility of manufacturing customized replacement arteries.

  11. Transductive Ridge Regression in Structure-activity Modeling.

    Science.gov (United States)

    Marcou, Gilles; Delouis, Grace; Mokshyna, Olena; Horvath, Dragos; Lachiche, Nicolas; Varnek, Alexandre

    2018-01-01

    In this article we consider the application of the Transductive Ridge Regression (TRR) approach to structure-activity modeling. An original procedure of the TRR parameters optimization is suggested. Calculations performed on 3 different datasets involving two types of descriptors demonstrated that TRR outperforms its non-transductive analogue (Ridge Regression) in more than 90 % of cases. The most significant transductive effect was observed for small datasets. This suggests that transduction may be particularly useful when the data are expensive or difficult to collect. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Approximating prediction uncertainty for random forest regression models

    Science.gov (United States)

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forest have increased for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...

  13. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause-specific hazards. Another recent approach is to model the cumulative incidence directly by a proportional model (Fine and Gray, J Am Stat Assoc 94:496-509, 1999), and then obtain direct estimates of how covariates influence the cumulative incidence curve. We consider a simple and flexible class of regression...
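    Both modeling routes can be sketched in R, assuming the survival and cmprsk packages; the data below are simulated and the code illustrates the two standard approaches, not the authors' own estimator.

    ```r
    library(survival); library(cmprsk)
    set.seed(5)
    n <- 400
    x <- rbinom(n, 1, 0.5)
    ftime   <- rexp(n, 0.1)
    fstatus <- sample(0:2, n, replace = TRUE, prob = c(0.2, 0.5, 0.3))  # 0 = censored

    # Classic route: cause-specific hazard for cause 1
    coxph(Surv(ftime, fstatus == 1) ~ x)

    # Direct route: Fine-Gray model for the cumulative incidence of cause 1
    crr(ftime, fstatus, cov1 = cbind(x), failcode = 1, cencode = 0)
    ```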

  14. Autoregressive Model Using Fuzzy C-Regression Model Clustering for Traffic Modeling

    Science.gov (United States)

    Tanaka, Fumiaki; Suzuki, Yukinori; Maeda, Junji

    Robust traffic modeling is required to perform effective congestion control for broadband digital networks. An autoregressive model using fuzzy c-regression model (FCRM) clustering is proposed for traffic modeling. This is a simpler modeling method than previous ones. The experiments show that the proposed method is more robust for traffic modeling than the previous method.

  15. Electricity consumption forecasting in Italy using linear regression models

    International Nuclear Information System (INIS)

    Bianco, Vincenzo; Manca, Oronzio; Nardini, Sergio

    2009-01-01

    The influence of economic and demographic variables on the annual electricity consumption in Italy has been investigated with the intention of developing a long-term consumption forecasting model. The time period considered for the historical data is from 1970 to 2007. Different regression models were developed, using historical electricity consumption, gross domestic product (GDP), gross domestic product per capita (GDP per capita) and population. The first part of the paper considers the estimation of GDP, price and GDP per capita elasticities of domestic and non-domestic electricity consumption. The domestic and non-domestic short-run price elasticities are found to be both approximately equal to -0.06, while the long-run elasticities are equal to -0.24 and -0.09, respectively. By contrast, the elasticities of GDP and GDP per capita present higher values. In the second part of the paper, different regression models, based on co-integrated or stationary data, are presented. Different statistical tests are employed to check the validity of the proposed models. A comparison with national forecasts, based on complex econometric models such as Markal-Time, was performed, showing that the developed regressions are congruent with the official projections, with deviations of ±1% for the best case and ±11% for the worst. These deviations are acceptable in relation to the time span considered. (author)
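    The elasticities described above are the slopes of log-log regressions; a minimal sketch with made-up series (not the Italian data):

    ```r
    set.seed(6)
    gdp   <- cumprod(c(100, 1 + rnorm(37, 0.02, 0.01)))   # 38 annual values
    price <- cumprod(c(1,   1 + rnorm(37, 0.01, 0.02)))
    cons  <- exp(1 + 0.9 * log(gdp) - 0.06 * log(price) + rnorm(38, 0, 0.01))

    fit <- lm(log(cons) ~ log(gdp) + log(price))
    coef(fit)[-1]    # slopes estimate the GDP and price elasticities
    ```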

  16. Regression Model to Predict Global Solar Irradiance in Malaysia

    Directory of Open Access Journals (Sweden)

    Hairuniza Ahmed Kutty

    2015-01-01

    A novel regression model is developed to estimate the monthly global solar irradiance in Malaysia. The model is developed based on different available meteorological parameters, including temperature, cloud cover, rain precipitation, relative humidity, wind speed, pressure, and gust speed, by implementing regression analysis. This paper reports the details of the analysis of the effect of each prediction parameter to identify the parameters that are relevant to estimating global solar irradiance. In addition, the proposed model is compared in terms of the root mean square error (RMSE), mean bias error (MBE), and the coefficient of determination (R²) with other models available from literature studies. Seven single-parameter models (PM1 to PM7) and five multiple-parameter models (PM8 to PM12) are proposed. The new models perform well, with RMSE ranging from 0.429% to 1.774%, R² ranging from 0.942 to 0.992, and MBE ranging from −0.1571% to 0.6025%. In general, cloud cover significantly affects the estimation of global solar irradiance. However, cloud cover in Malaysia lacks sufficient influence when included in multiple-parameter models, although it performs fairly well in single-parameter prediction models.

  17. Modeling energy expenditure in children and adolescents using quantile regression.

    Science.gov (United States)

    Yang, Yunwen; Adolph, Anne L; Puyau, Maurice R; Vohra, Firoz A; Butte, Nancy F; Zakeri, Issa F

    2013-07-15

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in energy expenditure (EE). The study objective is to apply quantile regression (QR) to predict EE and determine quantile-dependent variation in covariate effects in nonobese and obese children. First, QR models will be developed to predict minute-by-minute awake EE at different quantile levels based on heart rate (HR) and physical activity (PA) accelerometry counts, and child characteristics of age, sex, weight, and height. Second, the QR models will be used to evaluate the covariate effects of weight, PA, and HR across the conditional EE distribution. QR and ordinary least squares (OLS) regressions are estimated in 109 children, aged 5-18 yr. QR modeling of EE outperformed OLS regression for both nonobese and obese populations. Average prediction errors for QR compared with OLS were not only smaller at the median τ = 0.5 (18.6 vs. 21.4%), but also substantially smaller at the tails of the distribution (10.2 vs. 39.2% at τ = 0.1 and 8.7 vs. 19.8% at τ = 0.9). The covariate effects of weight, PA, and HR on EE differed across quantiles (P < 0.05), demonstrating quantile-dependent effects of weight, PA, and HR on EE in nonobese and obese children.
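    A hedged sketch of QR versus OLS with the quantreg package; the covariate names mirror those above, but the data and their generating model are simulated assumptions.

    ```r
    library(quantreg)
    set.seed(7)
    n  <- 500
    hr <- rnorm(n, 100, 15); pa <- rpois(n, 50); wt <- rnorm(n, 40, 10)
    ee <- 1 + 0.02 * hr + 0.01 * pa + 0.05 * wt + rnorm(n, 0, 0.3 + 0.005 * wt)

    qr  <- rq(ee ~ hr + pa + wt, tau = c(0.1, 0.5, 0.9))   # conditional quantiles
    ols <- lm(ee ~ hr + pa + wt)
    coef(qr)    # covariate effects are allowed to differ across quantiles
    ```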

  18. A mathematical model of tumour angiogenesis: growth, regression and regrowth.

    Science.gov (United States)

    Vilanova, Guillermo; Colominas, Ignasi; Gomez, Hector

    2017-01-01

    Cancerous tumours have the ability to recruit new blood vessels through a process called angiogenesis. By stimulating vascular growth, tumours get connected to the circulatory system, receive nutrients and open a way to colonize distant organs. Tumour-induced vascular networks become unstable in the absence of tumour angiogenic factors (TAFs). They may undergo alternating stages of growth, regression and regrowth. Following a phase-field methodology, we propose a model of tumour angiogenesis that reproduces the aforementioned features and highlights the importance of vascular regression and regrowth. In contrast with previous theories which focus on vessel remodelling due to the absence of flow, we model an alternative regression mechanism based on the dependency of tumour-induced vascular networks on TAFs. The model captures capillaries at full scale, the plastic dynamics of tumour-induced vessel networks at long time scales, and shows the key role played by filopodia during angiogenesis. The predictions of our model are in agreement with in vivo experiments and may prove useful for the design of antiangiogenic therapies. © 2017 The Author(s).

  19. Resampling procedures to validate dendro-auxometric regression models

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Regression analysis is widely used in several sectors of forest research. The validation of a dendro-auxometric model is a basic step in the building of the model itself. The more a model resists attempts to demonstrate its groundlessness, the more its reliability increases. In the last decades many new theories that exploit the calculation speed of computers have been formulated. Here we show the results obtained by the application of a bootstrap resampling procedure as a validation tool.

  20. A mixed-effects multinomial logistic regression model.

    Science.gov (United States)

    Hedeker, Donald

    2003-05-15

    A mixed-effects multinomial logistic regression model is described for analysis of clustered or longitudinal nominal or ordinal response data. The model is parameterized to allow flexibility in the choice of contrasts used to represent comparisons across the response categories. Estimation is achieved using a maximum marginal likelihood (MML) solution that uses quadrature to numerically integrate over the distribution of random effects. An analysis of a psychiatric data set, in which homeless adults with serious mental illness are repeatedly classified in terms of their living arrangement, is used to illustrate features of the model. Copyright 2003 by John Wiley & Sons, Ltd.

  1. Optimization of straight-sided spline design

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    Spline connection of shaft and hub is commonly applied when large torque capacity is needed together with the possibility of disassembly. The designs of these splines are generally controlled by different standards. In view of the common use of splines, it seems that few papers deal with splines ...

  2. Online Statistical Modeling (Regression Analysis) for Independent Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model relationships between response and explanatory variables. Nowadays, statistical models have been developed in various directions to model various types of data and complex relationships. A rich variety of advanced and recent statistical models is available in open-source software (one of them being R). However, these advanced statistical models are not very friendly to novice R users, since they are based on programming scripts or a command-line interface. Our research aims to develop a web interface (based on R and shiny), so that the most recent and advanced statistical models are readily available, accessible and applicable on the web. We previously built interfaces in the form of e-tutorials for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes statistical modeling easier to apply and makes it easier to compare models in order to find the most appropriate one for the data.
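    A minimal sketch of this kind of web interface, assuming R with the shiny package; the app merely exposes a linear-model fit on the built-in mtcars data and is not the authors' Virtual Statistics Laboratory.

    ```r
    library(shiny)
    ui <- fluidPage(
      selectInput("xvar", "Predictor:", setdiff(names(mtcars), "mpg")),
      verbatimTextOutput("fit")
    )
    server <- function(input, output) {
      output$fit <- renderPrint(
        summary(lm(reformulate(input$xvar, response = "mpg"), data = mtcars))
      )
    }
    shinyApp(ui, server)
    ```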

  3. Bootstrapping heteroskedastic regression models: wild bootstrap vs. pairs bootstrap

    OpenAIRE

    Emmanuel Flachaire

    2005-01-01

    In regression models, appropriate bootstrap methods for inference robust to heteroskedasticity of unknown form are the wild bootstrap and the pairs bootstrap. The finite sample performance of a heteroskedasticity-robust test is investigated with Monte Carlo experiments. The simulation results suggest that one specific version of the wild bootstrap outperforms the other versions of the wild bootstrap and of the pairs bootstrap. It is the only one for which the bootstrap test always gives better r...
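    One common version of the wild bootstrap (Rademacher weights, residuals restricted under the null) can be sketched in base R as follows; a production test would use heteroskedasticity-robust t statistics, omitted here for brevity.

    ```r
    set.seed(8)
    n <- 100
    x <- rnorm(n)
    y <- 1 + 0 * x + rnorm(n, sd = 1 + abs(x))   # H0: slope = 0, heteroskedastic errors

    t0  <- summary(lm(y ~ x))$coefficients["x", "t value"]
    res <- resid(lm(y ~ 1))                      # residuals under the null
    tb  <- replicate(999, {
      ystar <- mean(y) + res * sample(c(-1, 1), n, replace = TRUE)  # wild resample
      summary(lm(ystar ~ x))$coefficients["x", "t value"]
    })
    mean(abs(tb) >= abs(t0))                     # bootstrap p-value
    ```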

  4. Correlation-regression model for physico-chemical quality of ...

    African Journals Online (AJOL)

    A multiple R² value of 0.999 indicated that 99.9% of the variability in observed EC could be ascribed to Cl⁻ (76%), HCO₃⁻ (12.5%), NO₃⁻ (10.3%) and SO₄²⁻ (1.1%). Multiple regression models can predict EC at the 5% level of significance.

  5. The use of Bayesian nonlinear regression techniques for the modelling of the retention behaviour of volatile components of Artemisia species.

    Science.gov (United States)

    Jalali-Heravi, M; Mani-Varnosfaderani, A; Taherinia, D; Mahmoodi, M M

    2012-07-01

    The main aim of this work was to assess the ability of Bayesian multivariate adaptive regression splines (BMARS) and Bayesian radial basis function (BRBF) techniques for modelling the gas chromatographic retention indices of volatile components of Artemisia species. A diverse set of molecular descriptors was calculated and used as descriptor pool for modelling the retention indices. The ability of BMARS and BRBF techniques was explored for the selection of the most relevant descriptors and proper basis functions for modelling. The results revealed that BRBF technique is more reproducible than BMARS for modelling the retention indices and can be used as a method for variable selection and modelling in quantitative structure-property relationship (QSPR) studies. It is also concluded that the Markov chain Monte Carlo (MCMC) search engine, implemented in BRBF algorithm, is a suitable method for selecting the most important features from a vast number of them. The values of correlation between the calculated retention indices and the experimental ones for the training and prediction sets (0.935 and 0.902, respectively) revealed the prediction power of the BRBF model in estimating the retention index of volatile components of Artemisia species.

  6. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

    Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise in time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in the semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.

  7. Extended Cox regression model: The choice of time function

    Science.gov (United States)

    Isik, Hatice; Tutkun, Nihal Ata; Karasoy, Durdu

    2017-07-01

    The Cox regression model (CRM), which takes into account the effect of censored observations, is one of the most applicable and widely used models in survival analysis for evaluating the effects of covariates. Proportional hazards (PH), which requires a constant hazard ratio over time, is the assumption of the CRM. Using the extended CRM provides a test of the PH assumption by including a time-dependent covariate, or an alternative model in the case of non-proportional hazards. In this study, different types of real data sets are used to choose the time function, and the differences between time functions are analyzed and discussed.
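    In R, the extended model can be sketched with survival::coxph and its tt argument; the built-in lung data and the x * log(t) time function are illustrative choices, not the study's data.

    ```r
    library(survival)
    fit <- coxph(Surv(time, status) ~ age + tt(age), data = lung,
                 tt = function(x, t, ...) x * log(t))   # one common time function
    summary(fit)   # a significant tt(age) term signals non-proportional hazards
    ```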

  8. Multivariate Frequency-Severity Regression Models in Insurance

    Directory of Open Access Journals (Sweden)

    Edward W. Frees

    2016-02-01

    In insurance and related industries including healthcare, it is common to have several outcome measures that the analyst wishes to understand using explanatory variables. For example, in automobile insurance, an accident may result in payments for damage to one's own vehicle, damage to another party's vehicle, or personal injury. It is also common to be interested in the frequency of accidents in addition to the severity of the claim amounts. This paper synthesizes and extends the literature on multivariate frequency-severity regression modeling with a focus on insurance industry applications. Regression models for understanding the distribution of each outcome continue to be developed yet there now exists a solid body of literature for the marginal outcomes. This paper contributes to this body of literature by focusing on the use of a copula for modeling the dependence among these outcomes; a major advantage of this tool is that it preserves the body of work established for marginal models. We illustrate this approach using data from the Wisconsin Local Government Property Insurance Fund. This fund offers insurance protection for (i) property; (ii) motor vehicle; and (iii) contractors' equipment claims. In addition to several claim types and frequency-severity components, outcomes can be further categorized by time and space, requiring complex dependency modeling. We find significant dependencies for these data; specifically, we find that dependencies among lines are stronger than the dependencies between the frequency and average severity within each line.

  9. Augmented Beta rectangular regression models: A Bayesian perspective.

    Science.gov (United States)

    Wang, Jue; Luo, Sheng

    2016-01-01

    Mixed effects Beta regression models based on Beta distributions have been widely used to analyze longitudinal percentage or proportional data ranging between zero and one. However, Beta distributions are not flexible to extreme outliers or excessive events around tail areas, and they do not account for the presence of the boundary values zeros and ones because these values are not in the support of the Beta distributions. To address these issues, we propose a mixed effects model using Beta rectangular distribution and augment it with the probabilities of zero and one. We conduct extensive simulation studies to assess the performance of mixed effects models based on both the Beta and Beta rectangular distributions under various scenarios. The simulation studies suggest that the regression models based on Beta rectangular distributions improve the accuracy of parameter estimates in the presence of outliers and heavy tails. The proposed models are applied to the motivating Neuroprotection Exploratory Trials in Parkinson's Disease (PD) Long-term Study-1 (LS-1 study, n = 1741), developed by The National Institute of Neurological Disorders and Stroke Exploratory Trials in Parkinson's Disease (NINDS NET-PD) network. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Tungsten anode spectral model using interpolating cubic splines: unfiltered x-ray spectra from 20 kV to 640 kV.

    Science.gov (United States)

    Hernandez, Andrew M; Boone, John M

    2014-04-01

    Monte Carlo methods were used to generate lightly filtered high resolution x-ray spectra spanning from 20 kV to 640 kV. X-ray spectra were simulated for a conventional tungsten anode. The Monte Carlo N-Particle eXtended radiation transport code (MCNPX 2.6.0) was used to produce 35 spectra over the tube potential range from 20 kV to 640 kV, and cubic spline interpolation procedures were used to create piecewise polynomials characterizing the photon fluence per energy bin as a function of x-ray tube potential. Using these basis spectra and the cubic spline interpolation, 621 spectra were generated at 1 kV intervals from 20 to 640 kV. The tungsten anode spectral model using interpolating cubic splines (TASMICS) produces minimally filtered (0.8 mm Be) x-ray spectra with 1 keV energy resolution. The TASMICS spectra were compared mathematically with other, previously reported spectra. Using paired t-test analyses, no statistically significant difference (i.e., p > 0.05) was observed between compared spectra over energy bins above 1% of peak bremsstrahlung fluence. For all energy bins, the coefficient of determination (R²) demonstrated good correlation for all spectral comparisons. The mean overall difference (MOD) and mean absolute difference (MAD) were computed over energy bins (above 1% of peak bremsstrahlung fluence) and over all the kV permutations compared. MOD and MAD comparisons with previously reported spectra were 2.7% and 9.7%, respectively (TASMIP), 0.1% and 12.0%, respectively [R. Birch and M. Marshall, "Computation of bremsstrahlung x-ray spectra and comparison with spectra measured with a Ge(Li) detector," Phys. Med. Biol. 24, 505-517 (1979)], 0.4% and 8.1%, respectively (Poludniowski), and 0.4% and 8.1%, respectively (AAPM TG 195). The effective energy of TASMICS spectra with 2.5 mm of added Al filtration ranged from 17 keV (at 20 kV) to 138 keV (at 640 kV); with 0.2 mm of added Cu filtration the effective energy was 9 keV at 20 kV and 169 keV at 640 kV.
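    The interpolation step can be sketched in base R with splinefun; the fluence table below is a toy stand-in for the 35 TASMICS basis spectra, and the natural boundary condition is an assumption.

    ```r
    kv_basis <- seq(20, 640, by = 20)           # simulated tube potentials (kV)
    fluence  <- log(kv_basis) + rnorm(length(kv_basis), 0, 0.01)  # one energy bin

    f <- splinefun(kv_basis, fluence, method = "natural")  # piecewise cubic polynomial
    f(seq(20, 640, by = 1))                     # fluence for this bin at 1 kV intervals
    ```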

  11. Tungsten anode spectral model using interpolating cubic splines: Unfiltered x-ray spectra from 20 kV to 640 kV

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, Andrew M. [Biomedical Engineering Graduate Group, University of California Davis, Sacramento, California 95817 (United States); Boone, John M., E-mail: john.boone@ucdmc.ucdavis.edu [Departments of Radiology and Biomedical Engineering, Biomedical Engineering Graduate Group, University of California Davis, Sacramento, California 95817 (United States)

    2014-04-15

    Purpose: Monte Carlo methods were used to generate lightly filtered high resolution x-ray spectra spanning from 20 kV to 640 kV. Methods: X-ray spectra were simulated for a conventional tungsten anode. The Monte Carlo N-Particle eXtended radiation transport code (MCNPX 2.6.0) was used to produce 35 spectra over the tube potential range from 20 kV to 640 kV, and cubic spline interpolation procedures were used to create piecewise polynomials characterizing the photon fluence per energy bin as a function of x-ray tube potential. Using these basis spectra and the cubic spline interpolation, 621 spectra were generated at 1 kV intervals from 20 to 640 kV. The tungsten anode spectral model using interpolating cubic splines (TASMICS) produces minimally filtered (0.8 mm Be) x-ray spectra with 1 keV energy resolution. The TASMICS spectra were compared mathematically with other, previously reported spectra. Results: Using paired t-test analyses, no statistically significant difference (i.e., p > 0.05) was observed between compared spectra over energy bins above 1% of peak bremsstrahlung fluence. For all energy bins, the coefficient of determination (R²) demonstrated good correlation for all spectral comparisons. The mean overall difference (MOD) and mean absolute difference (MAD) were computed over energy bins (above 1% of peak bremsstrahlung fluence) and over all the kV permutations compared. MOD and MAD comparisons with previously reported spectra were 2.7% and 9.7%, respectively (TASMIP), 0.1% and 12.0%, respectively [R. Birch and M. Marshall, “Computation of bremsstrahlung x-ray spectra and comparison with spectra measured with a Ge(Li) detector,” Phys. Med. Biol. 24, 505–517 (1979)], 0.4% and 8.1%, respectively (Poludniowski), and 0.4% and 8.1%, respectively (AAPM TG 195). The effective energy of TASMICS spectra with 2.5 mm of added Al filtration ranged from 17 keV (at 20 kV) to 138 keV (at 640 kV); with 0.2 mm of added Cu filtration the effective energy was 9 keV at 20 kV and 169 keV at 640 kV.

  12. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.

  13. [Interaction between continuous variables in logistic regression model].

    Science.gov (United States)

    Qiu, Hong; Yu, Ignatius Tak-Sun; Tse, Lap Ah; Wang, Xiao-rong; Fu, Zhen-ming

    2010-07-01

    Rothman argued that interaction estimated as departure from additivity better reflects biological interaction. In a logistic regression model, the product term reflects interaction as departure from multiplicativity. So far, the literature on estimating interaction on an additive scale using logistic regression has focused only on two dichotomous factors. The objective of the present report is to provide a method to examine interaction as departure from additivity between two continuous variables, or between one continuous variable and one categorical variable. We used data from a lung cancer case-control study among males in Hong Kong as an example to illustrate the bootstrap resampling method for calculating the corresponding confidence intervals. The free software R (Version 2.8.1) was used to estimate interaction on the additive scale.
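    A hedged sketch of the bootstrap approach for one additive-scale measure, the relative excess risk due to interaction (RERI), with one continuous and one binary exposure evaluated at a one-unit contrast; the data and contrast are illustrative.

    ```r
    set.seed(11)
    n <- 2000
    z <- rnorm(n); g <- rbinom(n, 1, 0.3)
    y <- rbinom(n, 1, plogis(-2 + 0.5 * z + 0.7 * g + 0.3 * z * g))
    d <- data.frame(y, z, g)

    # RERI for a +1 change in z and g = 1 vs. 0, with ORs approximating RRs
    reri <- function(dat) {
      b <- coef(glm(y ~ z * g, family = binomial, data = dat))
      exp(b["z"] + b["g"] + b["z:g"]) - exp(b["z"]) - exp(b["g"]) + 1
    }
    boots <- replicate(1000, reri(d[sample(n, replace = TRUE), ]))
    quantile(boots, c(0.025, 0.975))   # percentile bootstrap CI
    ```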

  14. Modeling the number of car theft using Poisson regression

    Science.gov (United States)

    Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura

    2016-10-01

    Regression analysis is among the most popular statistical methods used to express the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model. This paper focuses on the number of car thefts that occurred in districts of Peninsular Malaysia. Two groups of factors were considered, namely district descriptive factors and socio-demographic factors. The results of the study showed that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, number of residents aged 25 to 64, number of employed persons and number of unemployed persons are the most influential factors affecting car theft cases. This information is very useful for law enforcement departments, insurance companies and car owners in order to reduce and limit car theft cases in Peninsular Malaysia.

  15. Dynamic logistic regression and dynamic model averaging for binary classification.

    Science.gov (United States)

    McCormick, Tyler H; Raftery, Adrian E; Madigan, David; Burd, Randall S

    2012-03-01

    We propose an online binary classification procedure for cases when there is uncertainty about the model to use and parameters within a model change over time. We account for model uncertainty through dynamic model averaging, a dynamic extension of Bayesian model averaging in which posterior model probabilities may also change with time. We apply a state-space model to the parameters of each model and we allow the data-generating model to change over time according to a Markov chain. Calibrating a "forgetting" factor accommodates different levels of change in the data-generating mechanism. We propose an algorithm that adjusts the level of forgetting in an online fashion using the posterior predictive distribution, and so accommodates various levels of change at different times. We apply our method to data from children with appendicitis who receive either a traditional (open) appendectomy or a laparoscopic procedure. Factors associated with which children receive a particular type of procedure changed substantially over the 7 years of data collection, a feature that is not captured using standard regression modeling. Because our procedure can be implemented completely online, future data collection for similar studies would require storing sensitive patient information only temporarily, reducing the risk of a breach of confidentiality. © 2011, The International Biometric Society.

  16. Development and Application of Nonlinear Land-Use Regression Models

    Science.gov (United States)

    Champendal, Alexandre; Kanevski, Mikhail; Huguenot, Pierre-Emmanuel

    2014-05-01

    The problem of air pollution modelling in urban zones is of great importance both from scientific and applied points of view. At present there are several fundamental approaches, either based on science-based modelling (air pollution dispersion) or on the application of space-time geostatistical methods (e.g. the family of kriging models or conditional stochastic simulations). Recently, there have been important developments in so-called Land Use Regression (LUR) models. These models take into account geospatial information (e.g. traffic network, sources of pollution, average traffic, population census, land use, etc.) at different scales, for example, using buffering operations. Usually the dimension of the input space (number of independent variables) is within the range of 10-100. It has been shown that LUR models have some potential to model complex and highly variable patterns of air pollution in urban zones. Most LUR models currently used are linear models. In the present research, nonlinear LUR models are developed and applied for the city of Geneva. Two nonlinear data-driven models were mainly elaborated: a multilayer perceptron and a random forest. An important part of the research also deals with comprehensive exploratory data analysis using statistical, geostatistical and time series tools. Unsupervised self-organizing maps were applied to better understand space-time patterns of the pollution. The real data case study deals with spatial-temporal air pollution data of Geneva (2002-2011). Nitrogen dioxide (NO2) has caught our attention. It has effects on human health and on plants, and NO2 contributes to the phenomenon of acid rain. The negative effects of nitrogen dioxide on plants are reduced growth, production and pesticide resistance. Finally, as for effects on materials, nitrogen dioxide increases corrosion. The data used for this study consist of a set of 106 NO2 passive sensors: 80 were used to build the models and the remaining 36 constituted the validation set.

  17. Dynamic Regression Intervention Modeling for the Malaysian Daily Load

    Directory of Open Access Journals (Sweden)

    Fadhilah Abdrazak

    2014-05-01

    Malaysia is a unique country due to having both fixed and moving holidays. These moving holidays may overlap with other fixed holidays and therefore increase the complexity of load forecasting activities. The errors due to holidays' effects in load forecasting are known to be larger than those due to other factors. If these effects can be estimated and removed, the behavior of the series can be better viewed. Thus, the aim of this paper is to improve the forecasting errors by using a dynamic regression model with intervention analysis. Based on the linear transfer function method, a daily load model for either peak or average load is developed. The developed model outperformed the seasonal ARIMA model in estimating the fixed and moving holidays' effects and achieved a smaller Mean Absolute Percentage Error (MAPE) in load forecasting.

  18. Modeling of the Monthly Rainfall-Runoff Process Through Regressions

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2014-10-01

    To solve the problems associated with the assessment of the water resources of a river, the modeling of the rainfall-runoff process (RRP) allows the deduction of missing runoff data and the extension of its record, since generally more information is available on precipitation. It also enables the estimation of inputs to reservoirs when their construction led to the suppression of the gauging station. The simplest mathematical model that can be set up for the RRP is the linear regression, or curve, on a monthly basis. Such a model is described in detail and is calibrated with the simultaneous record of monthly rainfall and runoff at the Ballesmi hydrometric station, which covers 35 years. Since the runoff of this station has an important contribution from spring discharge, the record is first corrected by removing that contribution. In order to do this, a procedure was developed based either on the monthly average regional runoff coefficients or on a nearby and similar watershed; in this case the Tancuilín gauging station was used. Both stations belong to Partial Hydrologic Region No. 26 (Lower Rio Panuco) and are located within the state of San Luis Potosí, México. The study indicates that the monthly regression model, due to its conceptual approach, faithfully reproduces monthly average runoff volumes and achieves an excellent approximation of the dispersion, as proved by calculation of the means and standard deviations.

  19. Genetic evaluation of European quails by random regression models

    Directory of Open Access Journals (Sweden)

    Flaviana Miranda Gonçalves

    2012-09-01

    The objective of this study was to compare different random regression models, defined from different classes of heterogeneity of variance combined with different Legendre polynomial orders, for the estimation of the (co)variances of quails. The data came from 28,076 observations of 4,507 female meat quails of the LF1 lineage. Quail body weights were determined at birth and at 1, 14, 21, 28, 35 and 42 days of age. Six different classes of residual variance were fitted with Legendre polynomial functions (orders ranging from 2 to 6) to determine which model best described the (co)variance structures as a function of time. According to the evaluated criteria (AIC, BIC and LRT), the model with six classes of residual variance and a sixth-order Legendre polynomial was the best fit. The estimated additive genetic variance increased from birth to 28 days of age, and dropped slightly from 35 to 42 days. The heritability estimates decreased along the growth curve, from 0.51 (1 day) to 0.16 (42 days). Animal genetic and permanent environmental correlation estimates between weights and age classes were always high and positive, except for birth weight. The sixth-order Legendre polynomial, along with the residual variance divided into six classes, was the best fit for the growth rate curve of meat quails; therefore, they should be considered in breeding evaluation processes using random regression models.
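    The fixed part of such a random regression design can be sketched by evaluating (unnormalized) Legendre polynomials of standardized age; the sketch stops at order 2 but extends by recursion to the sixth order used in the study.

    ```r
    age <- c(1, 14, 21, 28, 35, 42)                           # days, as above
    s   <- -1 + 2 * (age - min(age)) / (max(age) - min(age))  # rescale to [-1, 1]
    P   <- cbind(P0 = 1, P1 = s, P2 = 0.5 * (3 * s^2 - 1))    # Legendre basis
    P   # design columns for the regression on age
    ```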

  20. Radiation heat transfer model using Monte Carlo ray tracing method on hierarchical ortho-Cartesian meshes and non-uniform rational basis spline surfaces for description of boundaries

    Directory of Open Access Journals (Sweden)

    Kuczyński Paweł

    2014-06-01

    The paper deals with the solution of radiation heat transfer problems in enclosures filled with a nonparticipating medium using ray tracing on hierarchical ortho-Cartesian meshes. The idea behind the approach is that radiative heat transfer problems can be solved on much coarser grids than their counterparts from computational fluid dynamics (CFD). The resulting code is designed as an add-on to OpenFOAM, an open-source CFD program. An ortho-Cartesian mesh involving boundary elements is created based upon the CFD mesh. Parametric non-uniform rational basis spline (NURBS) surfaces are used to define the boundaries of the enclosure, allowing for dealing with domains of complex shapes. An algorithm for determining random, uniformly distributed locations of rays leaving the NURBS surfaces is described. The paper presents results of test cases assuming gray diffusive walls. In the current version of the model the radiation is not absorbed within gases. However, the ultimate aim of the work is to upgrade the functionality of the model to problems in absorbing, emitting and scattering media, projecting iteratively the results of the radiative analysis on the CFD mesh and the CFD solution on the radiative mesh.

  1. A History of Regression and Related Model-Fitting in the Earth Sciences (1636?-2000)

    International Nuclear Information System (INIS)

    Howarth, Richard J.

    2001-01-01

    roots in meeting the evident need for improved estimators in spatial interpolation. Technical advances in regression analysis during the 1970s embraced the development of regression diagnostics and consequent attention to outliers; the recognition of problems caused by correlated predictors, and the subsequent introduction of ridge regression to overcome them; and techniques for fitting errors-in-variables and mixture models. Improvements in computational power have enabled ever more computer-intensive methods to be applied. These include algorithms which are robust in the presence of outliers, for example Rousseeuw's 1984 Least Median Squares; nonparametric smoothing methods, such as kernel-functions, splines and Cleveland's 1979 LOcally WEighted Scatterplot Smoother (LOWESS); and the Classification and Regression Tree (CART) technique of Breiman and others in 1984. Despite a continuing improvement in the rate of technology-transfer from the statistical to the earth-science community, despite an abrupt drop to a time-lag of about 10 years following the introduction of digital computers, these more recent developments are only just beginning to penetrate beyond the research community of earth scientists. Examples of applications to problem-solving in the earth sciences are given

  2. Interpreting parameters in the logistic regression model with random effects

    DEFF Research Database (Denmark)

    Larsen, Klaus; Petersen, Jørgen Holm; Budtz-Jørgensen, Esben

    2000-01-01

    interpretation, interval odds ratio, logistic regression, median odds ratio, normally distributed random effects

  3. Learning Supervised Topic Models for Classification and Regression from Crowds

    DEFF Research Database (Denmark)

    Rodrigues, Filipe; Lourenco, Mariana; Ribeiro, Bernardete

    2017-01-01

    The growing need to analyze large collections of documents has led to great developments in topic modeling. Since documents are frequently associated with other related variables, such as labels or ratings, much interest has been placed on supervised topic models. However, the nature of most annotation tasks, prone to ambiguity and noise, often with high volumes of documents, deems learning under a single-annotator assumption unrealistic or impractical for most real-world applications. In this article, we propose two supervised topic models, one for classification and another for regression problems, which account for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages...

  4. A new inverse regression model applied to radiation biodosimetry

    Science.gov (United States)

    Higueras, Manuel; Puig, Pedro; Ainsbury, Elizabeth A.; Rothkamm, Kai

    2015-01-01

    Biological dosimetry based on chromosome aberration scoring in peripheral blood lymphocytes enables timely assessment of the ionizing radiation dose absorbed by an individual. Here, new Bayesian-type count data inverse regression methods are introduced for situations where responses are Poisson or two-parameter compound Poisson distributed. Our Poisson models are calculated in a closed form, by means of Hermite and negative binomial (NB) distributions. For compound Poisson responses, complete and simplified models are provided. The simplified models are also expressible in a closed form and involve the use of compound Hermite and compound NB distributions. Three examples of applications are given that demonstrate the usefulness of these methodologies in cytogenetic radiation biodosimetry and in radiotherapy. We provide R and SAS codes which reproduce these examples. PMID:25663804
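    To make the inverse-regression idea concrete, here is a minimal sketch assuming a linear-quadratic Poisson dose-response (a standard form in cytogenetic dosimetry, not necessarily the calibration used in the paper) and a flat prior on dose over a grid; the coefficients and counts are hypothetical.

        import numpy as np
        from scipy.stats import poisson

        # Hypothetical calibration: mean dicentrics per cell = c + a*D + b*D^2.
        c, a, b = 0.001, 0.03, 0.06

        # Observation: y aberrations scored in n cells from the exposed individual.
        y, n = 25, 500

        # Grid posterior over the absorbed dose D (Gy) under a flat prior.
        D = np.linspace(0, 5, 1001)
        lam = n * (c + a * D + b * D**2)      # expected total count at each dose
        post = poisson.pmf(y, lam)
        post /= np.trapz(post, D)             # normalize to a density in D

        mean_dose = np.trapz(D * post, D)
        cdf = np.cumsum(post) * (D[1] - D[0])
        lo, hi = D[np.searchsorted(cdf, 0.025)], D[np.searchsorted(cdf, 0.975)]
        print(f"posterior mean {mean_dose:.2f} Gy, 95% interval ({lo:.2f}, {hi:.2f})")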

  5. Diagnostic Measures for the Cox Regression Model with Missing Covariates.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Chen, Ming-Hui

    2015-12-01

    This paper investigates diagnostic measures for assessing the influence of observations and model misspecification in the presence of missing covariate data for the Cox regression model. Our diagnostics include case-deletion measures, conditional martingale residuals, and score residuals. The Q-distance is proposed to examine the effects of deleting individual observations on the estimates of finite-dimensional and infinite-dimensional parameters. Conditional martingale residuals are used to construct goodness-of-fit statistics for testing possible misspecification of the model assumptions. A resampling method is developed to approximate the p-values of the goodness-of-fit statistics. Simulation studies are conducted to evaluate our methods, and a real data set is analyzed to illustrate their use.

  6. Fuzzy regression modeling for tool performance prediction and degradation detection.

    Science.gov (United States)

    Li, X; Er, M J; Lim, B S; Zhou, J H; Gan, O P; Rutkowski, L

    2010-10-01

    In this paper, the viability of using the Fuzzy-Rule-Based Regression Modeling (FRM) algorithm for tool performance prediction and degradation detection is investigated. The FRM is developed based on a multi-layered fuzzy-rule-based hybrid system with Multiple Regression Models (MRM) embedded in a fuzzy logic inference engine that employs Self Organizing Maps (SOM) for clustering. The FRM converts a complex nonlinear problem to a simplified linear format in order to further increase the accuracy of prediction and the rate of convergence. The efficacy of the proposed FRM is tested through a case study, namely predicting the remaining useful life of a ball-nose milling cutter during dry machining of hardened tool steel with a hardness of 52-54 HRC. A comparative study is further made between four predictive models using the same set of experimental data. The FRM is shown to be superior to conventional MRM, Back Propagation Neural Networks (BPNN) and Radial Basis Function Networks (RBFN) in terms of prediction accuracy and learning speed.

  7. Nonrigid registration with free-form deformation model of multilevel uniform cubic B-splines: application to image registration and distortion correction of spectral image cubes.

    Science.gov (United States)

    Eckhard, Timo; Eckhard, Jia; Valero, Eva M; Nieves, Juan Luis

    2014-06-10

    In spectral imaging, spatial and spectral information of an image scene are combined. There exist several technologies that allow the acquisition of this kind of data. Depending on the optical components used in the spectral imaging systems, misalignment between image channels can occur. Further, the projection of some systems deviates from that of a perfect optical lens system enough that a distortion of scene content in the images becomes apparent to the observer. Correcting distortion and misalignment can be complicated for spectral image data if they are different at each image channel. In this work, we propose an image registration and distortion correction scheme for spectral image cubes that is based on a free-form deformation model of uniform cubic B-splines with multilevel grid refinement. This scheme is adaptive with respect to image size, degree of misalignment, and degree of distortion, and in that sense is superior to previous approaches. We support our proposed scheme with empirical data from a Bragg-grating-based hyperspectral imager, for which a registration accuracy of approximately one pixel was achieved.

  8. ORTH: R and SAS software for regression models of correlated binary data based on orthogonalized residuals and alternating logistic regressions.

    Science.gov (United States)

    By, Kunthel; Qaqish, Bahjat F; Preisser, John S; Perin, Jamie; Zink, Richard C

    2014-02-01

    This article describes new software for modeling correlated binary data based on orthogonalized residuals, a recently developed estimating-equations approach that includes alternating logistic regressions as a special case. The software is flexible with respect to fitting in that the user can choose estimating equations for association models based on either alternating logistic regressions or orthogonalized residuals, the latter choice providing a non-diagonal working covariance matrix for second-moment parameters and hence potentially greater efficiency. Regression diagnostics based on this method are also implemented in the software. The mathematical background is briefly reviewed and the software is applied to medical data sets. Published by Elsevier Ireland Ltd.

  9. Regression Models for Predicting Force Coefficients of Aerofoils

    Directory of Open Access Journals (Sweden)

    Mohammed ABDUL AKBAR

    2015-09-01

    Full Text Available Renewable sources of energy are attractive and advantageous in many different ways. Among renewable energy sources, wind energy is the fastest-growing type. Among wind energy converters, vertical axis wind turbines (VAWTs) have received renewed interest in the past decade due to some of the advantages they possess over their horizontal axis counterparts. VAWTs have evolved into complex 3-D shapes. A key component in predicting the output of VAWTs through analytical studies is obtaining the values of the lift and drag coefficients, which are functions of the shape of the aerofoil, the angle of attack of the wind and the Reynolds number of the flow. Sandia National Laboratories have carried out extensive experiments on aerofoils for Reynolds numbers in the range experienced by VAWTs, and the volume of experimental data thus obtained is huge. The current paper discusses three regression models developed so that lift and drag coefficients can be obtained from simple formulas without having to handle the bulk of the data. Drag and lift coefficients were successfully estimated by regression models with R² values as high as 0.98.
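    One simple form such a regression can take is a polynomial in the angle of attack fitted separately for each Reynolds number; the sketch below applies numpy.polyfit to made-up lift data, since the Sandia tables themselves are not reproduced here.

        import numpy as np

        # Hypothetical (angle of attack [deg], lift coefficient) pairs for one
        # Reynolds number, standing in for the Sandia wind-tunnel tables.
        alpha = np.array([-4, 0, 4, 8, 12, 16, 20], dtype=float)
        cl = np.array([-0.35, 0.05, 0.48, 0.85, 1.05, 1.10, 0.95])

        # Cubic polynomial regression: Cl(a) ~ p3*a^3 + p2*a^2 + p1*a + p0.
        coef = np.polyfit(alpha, cl, deg=3)
        cl_hat = np.polyval(coef, alpha)

        ss_res = np.sum((cl - cl_hat) ** 2)
        ss_tot = np.sum((cl - cl.mean()) ** 2)
        print("R^2 =", round(1 - ss_res / ss_tot, 3))
        print("Cl at 10 deg ~", round(np.polyval(coef, 10.0), 3))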

  10. Estimation of reference evapotranspiration using multivariate fractional polynomial, Bayesian regression, and robust regression models in three arid environments

    Science.gov (United States)

    Khoshravesh, Mojtaba; Sefidkouhi, Mohammad Ali Gholami; Valipour, Mohammad

    2017-07-01

    The proper evaluation of evapotranspiration is essential in food security investigation, farm management, pollution detection, irrigation scheduling, nutrient flows, carbon balance as well as hydrologic modeling, especially in arid environments. To achieve sustainable development and to ensure water supply, especially in arid environments, irrigation experts need tools to estimate reference evapotranspiration on a large scale. In this study, the monthly reference evapotranspiration was estimated by three different regression models including the multivariate fractional polynomial (MFP), robust regression, and Bayesian regression in Ardestan, Esfahan, and Kashan. The results were compared with Food and Agriculture Organization (FAO)-Penman-Monteith (FAO-PM) to select the best model. The results show that at a monthly scale, all models provided a close agreement with the values calculated by FAO-PM (R² > 0.95 and RMSE < 12.07 mm month⁻¹). However, the MFP model gives better estimates than the other two models for estimating reference evapotranspiration at all stations.

  11. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    Science.gov (United States)

    Kanevski, Mikhail

    2015-04-01

    The research deals with the adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools for both spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three dimensional monthly precipitation data or monthly wind speeds embedded into a 13 dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models [in the case of the wind fields, N = (2^13 - 1) = 8191] and rank them according to the cross-validation error. In both cases training was carried out using a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (a strong seasonal effect) and the year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and model complex high dimensional data efficiently, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
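    GRNN is essentially a normalized Gaussian-kernel regression, so its core can be written in a few lines. This sketch implements the Nadaraya-Watson estimator with an isotropic kernel on toy data; the adaptive, anisotropic-kernel machinery described in the abstract is not reproduced.

        import numpy as np

        def nadaraya_watson(X_train, y_train, X_query, sigma):
            """Gaussian-kernel regression: each prediction is a weighted mean of y_train."""
            # Squared Euclidean distances between query and training points.
            d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
            W = np.exp(-0.5 * d2 / sigma**2)
            return (W @ y_train) / W.sum(axis=1)

        rng = np.random.default_rng(2)
        X = rng.uniform(-3, 3, size=(200, 2))   # toy 2-D "geo-features"
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 200)

        X_query = np.array([[0.0, 0.0], [1.0, -1.0]])
        print(nadaraya_watson(X, y, X_query, sigma=0.5))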

  12. Global Land Use Regression Model for Nitrogen Dioxide Air Pollution.

    Science.gov (United States)

    Larkin, Andrew; Geddes, Jeffrey A; Martin, Randall V; Xiao, Qingyang; Liu, Yang; Marshall, Julian D; Brauer, Michael; Hystad, Perry

    2017-06-20

    Nitrogen dioxide is a common air pollutant with growing evidence of health impacts independent of other common pollutants such as ozone and particulate matter. However, the worldwide distribution of NO2 exposure and the associated impacts on health are still largely uncertain. To advance global exposure estimates we created a global nitrogen dioxide (NO2) land use regression model for 2011 using annual measurements from 5,220 air monitors in 58 countries. The model captured 54% of global NO2 variation, with a mean absolute error of 3.7 ppb. Regional performance varied from R² = 0.42 (Africa) to 0.67 (South America). Repeated 10% cross-validation using bootstrap sampling (n = 10,000) demonstrated robust performance with respect to air monitor sampling in North America, Europe, and Asia (adjusted R² within 2%) but not for Africa and Oceania (adjusted R² within 11%), where NO2 monitoring data are sparse. The final model included 10 variables that captured both between- and within-city spatial gradients in NO2 concentrations. Variable contributions differed between continental regions, but major roads within 100 m and satellite-derived NO2 were consistently the strongest predictors. The resulting model can be used for global risk assessments and health studies, particularly in countries without existing NO2 monitoring data or models.

  13. Generalized constraint neural network regression model subject to linear priors.

    Science.gov (United States)

    Qu, Ya-Jun; Hu, Bao-Gang

    2011-12-01

    This paper reports an extension of our previous investigations on adding transparency to neural networks. We focus on a class of linear priors (LPs), such as symmetry, ranking list, boundary, monotonicity, etc., which represent either linear-equality or linear-inequality priors. A generalized constraint neural network-LPs (GCNN-LPs) model is studied. Unlike other existing modeling approaches, the GCNN-LP model exhibits distinct advantages. First, any LP is embedded in an explicit structural mode, which may add a higher degree of transparency than a pure algorithmic mode. Second, a direct elimination and least squares approach is adopted to study the model, which produces better performance in both accuracy and computational cost than Lagrange multiplier techniques in experiments. Specific attention is paid to both "hard (strictly satisfied)" and "soft (weakly satisfied)" constraints for regression problems. Numerical investigations are made on synthetic examples as well as on real-world datasets. Simulation results demonstrate the effectiveness of the proposed modeling approach in comparison with other existing approaches.

  14. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
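    A design-based randomization test reduces to a short loop: re-randomize the treatment labels according to the design (complete randomization here, one of the designs the paper discusses), recompute the test statistic, and compare it with the observed value. The data and the regression-based statistic below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100
        treat = rng.permutation(np.repeat([0, 1], n // 2))  # complete randomization
        x = rng.normal(size=n)                              # baseline covariate
        y = 0.5 * x + 0.3 * treat + rng.normal(size=n)      # outcome

        def treatment_stat(y, x, treat):
            """OLS coefficient of treatment adjusted for x, a simple model-based statistic."""
            X = np.column_stack([np.ones(n), x, treat])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta[2]

        obs = treatment_stat(y, x, treat)
        null = np.array([treatment_stat(y, x, rng.permutation(treat))
                         for _ in range(5000)])
        p = (np.abs(null) >= abs(obs)).mean()
        print(f"observed statistic {obs:.3f}, randomization p-value {p:.4f}")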

  15. Collision prediction models using multivariate Poisson-lognormal regression.

    Science.gov (United States)

    El-Basyouny, Karim; Sayed, Tarek

    2009-07-01

    This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of the expected collision frequency. The MVPLN approach is compared with independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness of fit, identification of hot spots and precision of the expected collision frequency. The MVPLN is modeled using the WinBUGS platform, which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN, leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness of fit, the MVPLN model provided a superior fit compared to the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis were restricted to the univariate models.

  16. A Gompertz regression model for fern spores germination

    Directory of Open Access Journals (Sweden)

    Gabriel y Galán, Jose María

    2015-06-01

    Full Text Available Germination is one of the most important biological processes for both seed and spore plants, as well as for fungi. At present, mathematical models of germination have been developed for fungi, bryophytes and several plant species. However, ferns are the only group whose germination has never been modelled. In this work we develop a regression model of the germination of fern spores. We found that for the species Blechnum serrulatum, Blechnum yungense, Cheilanthes pilosa, Niphidium macbridei and Polypodium feuillei the Gompertz growth model describes cumulative germination satisfactorily. An important result is that the regression parameters are independent of fern species and the model is not affected by intraspecific variation. Our results show that the Gompertz curve represents a general germination model for all the non-green-spored leptosporangiate ferns; the paper includes a discussion of the physiological and ecological meaning of the model.
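    A Gompertz regression of cumulative germination can be fitted directly with scipy. The sketch below uses the common three-parameter form G(t) = A*exp(-exp(-k*(t - t0))) on invented germination percentages; the parameterization is an assumption, not necessarily the authors' exact formulation.

        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, A, k, t0):
            """Cumulative germination: asymptote A, rate k, inflection time t0."""
            return A * np.exp(-np.exp(-k * (t - t0)))

        # Hypothetical cumulative germination (%) recorded over days.
        t = np.array([2, 4, 6, 8, 10, 12, 14, 16], dtype=float)
        g = np.array([1, 5, 22, 48, 68, 79, 84, 86], dtype=float)

        params, cov = curve_fit(gompertz, t, g, p0=[90, 0.5, 7])
        A, k, t0 = params
        print(f"A = {A:.1f}%, k = {k:.2f}/day, t0 = {t0:.1f} days")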

  17. B-SPLINE AND MARS MODELLING OF ADMISSION TEST SCORES AGAINST THE GPA OF STUDENTS OF THE VISUAL COMMUNICATION DESIGN DEPARTMENT, UK. PETRA SURABAYA

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2006-01-01

    Full Text Available Regression analysis is used to capture the influence of independent variables on a dependent variable by examining the pattern of the relationship between those variables. The task of approximating the mean function can be done in essentially two ways. The quite often used parametric approach assumes that the mean curve has some prespecified functional form. Alternatively, a nonparametric approach, i.e., one without reference to a specific form, is used when there is no information about the form of the regression function (Haerdle, 1990). Because it does not depend on an assumed curve shape, the nonparametric approach offers greater flexibility than the parametric one. The aim of this research is to find the best-fitting model that captures the relationship between the admission test score and the GPA. The data were taken from the Department of Visual Communication Design, Petra Christian University, Surabaya, for the year 1999. Both approaches were used here. In the parametric approach, we use simple linear, quadratic and cubic regression; in the nonparametric one, we use B-splines and Multivariate Adaptive Regression Splines (MARS). Overall, the best model was chosen based on the maximum coefficient of determination; for MARS, the best model was chosen based on the GCV, minimum MSE and maximum coefficient of determination.
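    As a hedged sketch of the B-spline side of such a comparison, scipy's LSQUnivariateSpline fits a least-squares cubic B-spline with user-chosen interior knots; the test-score and GPA numbers below are invented.

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        rng = np.random.default_rng(4)
        score = np.sort(rng.uniform(40, 95, 80))    # admission test scores
        gpa = (1.5 + 0.02 * score + 0.3 * np.sin(score / 8)
               + rng.normal(0, 0.15, 80))

        # Least-squares cubic B-spline with two interior knots.
        knots = [60, 80]
        spline = LSQUnivariateSpline(score, gpa, knots, k=3)

        resid = gpa - spline(score)
        r2 = 1 - (resid**2).sum() / ((gpa - gpa.mean())**2).sum()
        print("R^2 =", round(r2, 3))
        print("predicted GPA at score 70:", round(float(spline(70.0)), 2))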

  18. THE REGRESSION MODEL OF IRAN LIBRARIES ORGANIZATIONAL CLIMATE.

    Science.gov (United States)

    Jahani, Mohammad Ali; Yaminfirooz, Mousa; Siamian, Hasan

    2015-10-01

    The purpose of this study was to draw a regression model of the organizational climate of the central libraries of Iran's universities. This is an applied study. The statistical population consisted of 96 employees of the central libraries of Iran's public universities, selected from the 117 universities affiliated to the Ministry of Health (510 people) by stratified sampling. A localized ClimateQual questionnaire was used as the research tool. Multivariate linear regression and a path diagram were used to predict the organizational-climate pattern of the libraries. Of the 9 variables affecting organizational climate, 5 variables (innovation, teamwork, customer service, psychological safety and deep diversity) play a major role in predicting the organizational climate of Iran's libraries. The results also indicate that each of these variables has predictive power with a different coefficient, but the climate score for psychological safety (0.94) plays a crucial role in predicting the organizational climate. The path diagram showed that the five variables of teamwork, customer service, psychological safety, deep diversity and innovation have a direct effect on the organizational-climate variable, with teamwork contributing more to this influence than any other variable; reinforcing teamwork in academic libraries can therefore be effective in improving the organizational climate of this type of library.

  19. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    Science.gov (United States)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2018-03-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive, robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
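    The reweighting at the heart of such an algorithm is compact: under t-distributed errors, each residual receives the weight (nu + 1)/(nu + (r/sigma)^2) and a weighted least-squares step is repeated to convergence. The sketch below applies this to a plain regression with independent errors, leaving out the AR part and fixing nu and sigma for brevity, whereas the paper estimates them adaptively.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 200
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        beta_true = np.array([1.0, 2.0])
        y = X @ beta_true + rng.standard_t(df=3, size=n)  # heavy-tailed noise
        y[:5] += 15                                        # a few gross outliers

        nu, sigma = 3.0, 1.0        # fixed here; estimated within ECME in the paper
        beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS start

        for _ in range(50):
            r = y - X @ beta
            w = (nu + 1) / (nu + (r / sigma) ** 2)         # t-based robust weights
            Xw = X * w[:, None]
            beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)
            if np.allclose(beta_new, beta, atol=1e-10):
                break
            beta = beta_new

        print("robust estimate:", beta.round(3))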

  20. Characteristics and Properties of a Simple Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Kowal Robert

    2016-12-01

    Full Text Available The simple linear regression model is one of the pillars of classic econometrics. Despite the passage of time, it continues to attract interest from both the theoretical and the applied side. One of the many fundamental questions about the model concerns determining its derived characteristics and studying their properties; this paper refers to the first of these aspects. The literature on the subject provides several classic solutions in that regard. In this paper, a completely new approach is proposed, based on the direct application of variance and its properties, resulting from the non-correlation of certain estimators with the mean; within its scope, some fundamental dependencies of the model characteristics are obtained in a much more compact manner. The apparatus allows for a simple and uniform demonstration of multiple dependencies and fundamental properties of the model, and it does so in an intuitive manner. The results were obtained in a classic, traditional area, where everything, as it might seem, has already been thoroughly studied and discovered.

  1. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is highly used, adds the best variable in each cycle and generally produces an acceptable set of variables; nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best-subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
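    A minimal GA for variable selection can be written with numpy and scikit-learn rather than R: individuals are binary masks over the candidate predictors, fitness is a cross-validated score of a logistic model on the selected columns, and new generations arise by tournament selection, one-point crossover and bit-flip mutation. This is a generic sketch under those assumptions, not the tutorial's R code.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        X, y = make_classification(n_samples=300, n_features=20,
                                   n_informative=5, random_state=0)

        def fitness(mask):
            """Cross-validated accuracy of logistic regression on selected columns."""
            if not mask.any():
                return 0.0
            model = LogisticRegression(max_iter=1000)
            return cross_val_score(model, X[:, mask], y, cv=5).mean()

        pop = rng.random((30, X.shape[1])) < 0.3      # initial binary masks
        for gen in range(25):
            scores = np.array([fitness(m) for m in pop])
            # Tournament selection: the better of two random individuals survives.
            idx = rng.integers(0, len(pop), (len(pop), 2))
            parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                                   idx[:, 0], idx[:, 1])]
            # One-point crossover between consecutive parents.
            cut = rng.integers(1, X.shape[1], len(pop))
            children = np.array([np.concatenate([parents[i][:c],
                                                 parents[(i + 1) % len(pop)][c:]])
                                 for i, c in enumerate(cut)])
            children ^= rng.random(children.shape) < 0.02  # bit-flip mutation
            pop = children

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected variables:", np.flatnonzero(best))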

  2. Bayesian Regression of Thermodynamic Models of Redox Active Materials

    Energy Technology Data Exchange (ETDEWEB)

    Johnston, Katherine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    Finding a suitable functional redox material is a critical challenge in achieving scalable, economically viable technologies for storing concentrated solar energy in the form of a defected oxide. Demonstrating effectiveness for thermal storage or solar fuel is largely accomplished by using a thermodynamic model derived from experimental data. The purpose of this project is to test the accuracy of our regression model on representative data sets. Determining the accuracy of the model includes fitting the model parameters to the data, comparing models using different numbers of parameters, and analyzing the entropy and enthalpy calculated from the model. Three data sets were considered in this project: two demonstrating materials for solar fuels by water splitting and one of a material for thermal storage. Using Bayesian inference and Markov chain Monte Carlo (MCMC), parameter estimation was performed on the three data sets. Good results were achieved, except for some deviations at the edges of the data input ranges. Evidence values were then calculated in a variety of ways and used to compare models with different numbers of parameters. It was believed that at least one of the parameters was unnecessary, and comparing evidence values demonstrated that the parameter was needed for one data set and not significantly helpful for another. The entropy was calculated by taking the derivative in one variable and integrating over another, and its uncertainty was calculated by evaluating the entropy over multiple MCMC samples. Afterwards, all the parts were written up as a tutorial for the Uncertainty Quantification Toolkit (UQTk).
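    The parameter-estimation step can be illustrated with a random-walk Metropolis sampler for a toy model y = a*x + b with Gaussian noise; the model form, flat priors and step size are assumptions for illustration and are far simpler than the thermodynamic models fitted in the report.

        import numpy as np

        rng = np.random.default_rng(7)
        x = np.linspace(0, 1, 40)
        y = 2.0 * x + 0.5 + rng.normal(0, 0.1, x.size)   # synthetic data

        def log_post(theta, sigma=0.1):
            """Log-posterior under flat priors: Gaussian likelihood of y given (a, b)."""
            a, b = theta
            r = y - (a * x + b)
            return -0.5 * np.sum(r**2) / sigma**2

        theta = np.array([0.0, 0.0])
        lp = log_post(theta)
        samples = []
        for _ in range(20000):
            prop = theta + rng.normal(0, 0.05, 2)        # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
                theta, lp = prop, lp_prop
            samples.append(theta)

        samples = np.array(samples[5000:])               # discard burn-in
        print("posterior means:", samples.mean(axis=0).round(3))
        print("posterior stds:", samples.std(axis=0).round(3))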

  3. Computing options for multiple-trait test-day random regression models while accounting for heat tolerance.

    Science.gov (United States)

    Aguilar, I; Tsuruta, S; Misztal, I

    2010-06-01

    Data included 90,242,799 test day records from first, second and third parities of 5,402,484 Holstein cows and 9,326,754 animals in the pedigree. Additionally, daily temperature humidity indexes (THI) from 202 weather stations were available. The fixed effects included herd test day, age at calving, milking frequency and days in milk classes (DIM). Random effects were additive genetic, permanent environment and herd-year and were fit as random regressions. Covariates included linear splines with four knots at 5, 50, 200 and 305 DIM and a function of THI. Mixed model equations were solved using an iteration on data program with a preconditioned conjugate gradient algorithm. Preconditioners used were diagonal (D), block diagonal due to traits (BT) and block diagonal due to traits and correlated effects (BTCORR). One run included BT with a 'diagonalized' model in which the random effects were reparameterized for diagonal (co)variance matrices among traits (BTDIAG). Memory requirements were 8.7 Gb for D, 10.4 Gb for BT and BTDIAG, and 24.3 Gb for BTCORR. Computing times (rounds) were 14 days (952) for D, 10.7 days (706) for BT, 7.7 days (494) for BTDIAG and 4.6 days (289) for BTCORR. The convergence pattern was strongly influenced by the choice of fixed effects. When sufficient memory is available, the option BTCORR is the fastest and simplest to implement; the next efficient method, BTDIAG, requires additional steps for diagonalization and back-diagonalization.

  4. Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing

    NARCIS (Netherlands)

    Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.

    2006-01-01

    The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval

  5. Convergence diagnostics for Eigenvalue problems with linear regression model

    International Nuclear Information System (INIS)

    Shi, Bo; Petrovic, Bojan

    2011-01-01

    Although the Monte Carlo method has been extensively used for criticality/eigenvalue problems, a reliable, robust, and efficient convergence diagnostics method is still desired. Most methods are based on integral parameters (multiplication factor, entropy) and either condense the local distribution information into a single value (e.g., entropy) or disregard it altogether. We propose to employ the detailed cycle-by-cycle local flux evolution, obtained using the mesh tally mechanism, to assess source and flux convergence. By applying a linear regression model to each individual mesh in a mesh tally, a global convergence criterion can be obtained. We exemplify this method on two problems and obtain promising diagnostics results. (author)
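    The criterion reduces to a small test per mesh cell: fit a straight line to the cycle-by-cycle tally and treat a slope statistically indistinguishable from zero as evidence of stationarity. Below is a hedged sketch of that idea on synthetic tallies; the actual windowing and thresholds of the paper may differ.

        import numpy as np

        rng = np.random.default_rng(8)
        cycles = np.arange(200)
        # Synthetic per-mesh tallies: an exponentially decaying transient plus noise.
        n_mesh = 50
        transient = np.exp(-cycles / 30)[None, :] * rng.uniform(0, 2, (n_mesh, 1))
        tallies = 1.0 + transient + rng.normal(0, 0.02, (n_mesh, cycles.size))

        def converged(series, window=100, tol=2.0):
            """Fit a slope over the last `window` cycles; compare to its standard error."""
            t = cycles[-window:]
            y = series[-window:]
            A = np.column_stack([np.ones(window), t])
            coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
            sigma2 = res[0] / (window - 2)
            se_slope = np.sqrt(sigma2 * np.linalg.inv(A.T @ A)[1, 1])
            return abs(coef[1]) < tol * se_slope

        flags = [converged(tallies[i]) for i in range(n_mesh)]
        print(f"{sum(flags)}/{n_mesh} mesh cells pass the zero-slope test")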

  6. The R Package threg to Implement Threshold Regression Models

    Directory of Open Access Journals (Sweden)

    Tao Xiao

    2015-08-01

    This new package includes four functions: threg, and the methods hr, predict and plot for threg objects returned by threg. The threg function is the model-fitting function, used to calculate regression coefficient estimates, asymptotic standard errors and p-values. The hr method for threg objects is the hazard-ratio calculation function, which provides estimates of hazard ratios at selected time points for specified scenarios (based on given categories or value settings of covariates). The predict method for threg objects is used for prediction. And the plot method for threg objects provides plots of the curves of estimated hazard functions, survival functions and probability density functions of the first hitting time; function curves corresponding to different scenarios can be overlaid in the same plot for comparison to give additional research insights.

  7. Weighted cubic and biharmonic splines

    Science.gov (United States)

    Kvasov, Boris; Kim, Tae-Wan

    2017-01-01

    In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with successive over-relaxation method or finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate main features of this original approach.

  8. Multiple linear regression and regression with time series error models in forecasting PM10 concentrations in Peninsular Malaysia.

    Science.gov (United States)

    Ng, Kar Yong; Awang, Norhashidah

    2018-01-06

    Frequent haze occurrences in Malaysia have made the management of PM10 (particulate matter with aerodynamic diameter less than 10 μm) pollution a critical task. This requires knowledge of the factors associated with PM10 variation and good forecasts of PM10 concentrations. Hence, this paper demonstrates the prediction of 1-day-ahead daily average PM10 concentrations based on predictor variables including meteorological parameters and gaseous pollutants. Three different models were built: a multiple linear regression (MLR) model with lagged predictor variables (MLR1), an MLR model with lagged predictor variables and lagged PM10 concentrations (MLR2), and a regression with time series error (RTSE) model. The findings revealed that humidity, temperature, wind speed, wind direction, carbon monoxide and ozone were the main factors explaining the PM10 variation in Peninsular Malaysia. Comparison among the three models showed that the MLR2 model performed on a par with the RTSE model in terms of forecasting accuracy, while the MLR1 model was the worst.
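    A lagged-predictor regression like MLR2 can be assembled with pandas by shifting yesterday's observations onto today's row; the column names and synthetic series here are placeholders, not the Malaysian monitoring data.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(9)
        n = 500
        df = pd.DataFrame({
            "pm10": 50 + rng.normal(0, 10, n).cumsum() * 0.1,
            "temp": 28 + rng.normal(0, 2, n),
            "wind": 2 + rng.gamma(2, 1, n),
        })

        # 1-day-ahead target regressed on lagged predictors and lagged PM10
        # (MLR2-style design).
        lagged = df.shift(1).add_suffix("_lag1")
        data = pd.concat([df["pm10"], lagged], axis=1).dropna()

        X = np.column_stack([np.ones(len(data)),
                             data[["pm10_lag1", "temp_lag1", "wind_lag1"]].to_numpy()])
        y = data["pm10"].to_numpy()
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(dict(zip(["intercept", "pm10_lag1", "temp_lag1", "wind_lag1"],
                       beta.round(3))))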

  9. 3D reconstruction of coronary arteries from 2D angiographic projections using non-uniform rational basis splines (NURBS) for accurate modelling of coronary stenoses.

    Directory of Open Access Journals (Sweden)

    Francesca Galassi

    Full Text Available Assessment of coronary stenosis severity is crucial in clinical practice. This study proposes a novel method to generate 3D models of stenotic coronary arteries directly from 2D coronary images, suitable for immediate assessment of stenosis severity. From multiple 2D X-ray coronary arteriogram projections, 2D vessels were extracted. A 3D centreline was reconstructed as the intersection of surfaces from corresponding branches. Next, 3D luminal contours were generated in a two-step process: first, a non-uniform rational B-spline (NURBS) circular contour was designed and, second, its control points were adjusted to interpolate computed 3D boundary points. Finally, a 3D surface was generated as an interpolation across the control points of the contours and used in the analysis of the severity of a lesion. To evaluate the method, we compared 3D reconstructed lesions with optical coherence tomography (OCT), an invasive imaging modality that enables high-resolution endoluminal visualization of lesion anatomy. Validation was performed on routine clinical data. Analysis of paired cross-sectional area discrepancies indicated that the proposed method represented OCT contours more closely than conventional approaches to luminal surface reconstruction, with overall root-mean-square errors ranging from 0.213 mm² to 1.013 mm² and a maximum error of 1.837 mm². Comparison of the volume reduction due to a lesion with the corresponding FFR measurement suggests that the method may help in estimating the physiological significance of a lesion. The algorithm accurately reconstructed 3D models of lesioned arteries and enabled quantitative assessment of stenoses. The proposed method has the potential to allow immediate analysis of stenoses in clinical practice, thereby providing incremental diagnostic and prognostic information to guide treatment in real time and without the need for invasive techniques.

  10. Symmetric, discrete fractional splines and Gabor systems

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2006-01-01

    In this paper we consider fractional splines as windows for Gabor frames. We introduce two new types of symmetric, fractional splines in addition to one found by Unser and Blu. For the finite, discrete case we present two families of splines: one is created by sampling and periodizing the continuous splines, and one is a truly finite, discrete construction. We discuss the properties of these splines and their usefulness as windows for Gabor frames and Wilson bases.

  11. Ultracentrifuge separative power modeling with multivariate regression using covariance matrix

    International Nuclear Information System (INIS)

    Migliavacca, Elder

    2004-01-01

    In this work, the least-squares methodology with a covariance matrix is applied to fit a curve to the data and obtain a performance function for the separative power δU of an ultracentrifuge as a function of experimentally controlled variables. The experimental data refer to 460 experiments on the ultracentrifugation process for uranium isotope separation. The experimental uncertainties related to these independent variables are considered in the calculation of the experimental separative power values, determining an input covariance matrix for the experimental data. The process variables that significantly influence the δU values are chosen so as to give information on the behaviour of the ultracentrifuge when submitted to several levels of feed flow rate F, cut θ and product line pressure Pp. After validating the goodness of fit of the model, a residual analysis is carried out to verify the assumptions concerning randomness and independence, and in particular the existence of residual heteroscedasticity with respect to any explanatory variable of the regression model. Surface curves relating the separative power to the control variables F, θ and Pp are drawn to compare the fitted model with the experimental data and, finally, to calculate their optimized values. (author)
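    Least squares with a covariance matrix of the data is generalized least squares; a minimal numpy sketch follows, with an invented quadratic response surface standing in for the centrifuge performance function and a diagonal input covariance for simplicity.

        import numpy as np

        rng = np.random.default_rng(10)
        n = 60
        F = rng.uniform(0.5, 2.0, n)          # hypothetical feed flow rate
        theta = rng.uniform(0.2, 0.6, n)      # hypothetical cut

        # Quadratic design matrix standing in for the performance function's form.
        X = np.column_stack([np.ones(n), F, theta, F**2, theta**2, F * theta])
        beta_true = np.array([1.0, 2.0, -1.0, -0.5, 0.8, 0.3])

        # Heteroscedastic measurement errors with known (diagonal) covariance.
        sig = 0.05 + 0.05 * F
        Sigma = np.diag(sig**2)
        y = X @ beta_true + rng.normal(0, sig)

        # GLS estimator: beta = (X' S^-1 X)^-1 X' S^-1 y.
        Si = np.linalg.inv(Sigma)
        beta = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
        cov_beta = np.linalg.inv(X.T @ Si @ X)   # parameter covariance
        print("GLS estimates:", beta.round(3))
        print("standard errors:", np.sqrt(np.diag(cov_beta)).round(3))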

  12. A Bayesian semiparametric Markov regression model for juvenile dermatomyositis.

    Science.gov (United States)

    De Iorio, Maria; Gallot, Natacha; Valcarcel, Beatriz; Wedderburn, Lucy

    2018-02-20

    Juvenile dermatomyositis (JDM) is a rare autoimmune disease that may lead to serious complications, even to death. We develop a 2-state Markov regression model in a Bayesian framework to characterise disease progression in JDM over time and gain a better understanding of the factors influencing disease risk. The transition probabilities between disease and remission state (and vice versa) are a function of time-homogeneous and time-varying covariates. These latter types of covariates are introduced in the model through a latent health state function, which describes patient-specific health over time and accounts for variability among patients. We assume a nonparametric prior based on the Dirichlet process to model the health state function and the baseline transition intensities between disease and remission state and vice versa. The Dirichlet process induces a clustering of the patients in homogeneous risk groups. To highlight clinical variables that most affect the transition probabilities, we perform variable selection using spike and slab prior distributions. Posterior inference is performed through Markov chain Monte Carlo methods. Data were made available from the UK JDM Cohort and Biomarker Study and Repository, hosted at the UCL Institute of Child Health. Copyright © 2018 John Wiley & Sons, Ltd.

  13. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    Science.gov (United States)

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of the available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements with substantial continuity of coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. The multiple linear regression technique is used with a variable-selection procedure for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data, using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results in reasonable agreement with observed values. PMID:23226984

  14. Numerical Methods Using B-Splines

    Science.gov (United States)

    Shariff, Karim; Merriam, Marshal (Technical Monitor)

    1997-01-01

    The seminar will discuss (1) The current range of applications for which B-spline schemes may be appropriate (2) The property of high-resolution and the relationship between B-spline and compact schemes (3) Comparison between finite-element, Hermite finite element and B-spline schemes (4) Mesh embedding using B-splines (5) A method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.

  15. Isogeometric analysis using T-splines

    KAUST Repository

    Bazilevs, Yuri

    2010-01-01

    We explore T-splines, a generalization of NURBS enabling local refinement, as a basis for isogeometric analysis. We review T-splines as a surface design methodology and then develop it for engineering analysis applications. We test T-splines on some elementary two-dimensional and three-dimensional fluid and structural analysis problems and attain good results in all cases. We summarize the current status of T-splines, their limitations, and future possibilities. © 2009 Elsevier B.V.

  16. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    International Nuclear Information System (INIS)

    Crecy, F. de.

    1993-01-01

    A mathematical tool, using pseudo-cubic thin-plate-type splines, has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with related uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross-validation method. The residual standard deviation obtained is significantly smaller than that of a least-squares regression. An example of use is given with critical heat flux data, showing a significant decrease in the design criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs
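    For the one-dimensional analogue of this idea, scipy provides a smoothing spline whose penalty parameter is chosen by generalized cross-validation when lam is omitted; the sketch uses synthetic data, not the critical-heat-flux measurements, and assumes SciPy >= 1.10 for make_smoothing_spline.

        import numpy as np
        from scipy.interpolate import make_smoothing_spline

        rng = np.random.default_rng(11)
        x = np.sort(rng.uniform(0, 10, 120))
        y = np.sin(x) + 0.2 * rng.normal(size=x.size)

        # lam=None triggers GCV-based selection of the smoothing parameter.
        spl = make_smoothing_spline(x, y, lam=None)

        resid = y - spl(x)
        print("residual std dev:", round(float(resid.std()), 3))
        print("prediction at x=5:", round(float(spl(5.0)), 3))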

  17. Cubic spline functions for curve fitting

    Science.gov (United States)

    Young, J. D.

    1972-01-01

    FORTRAN cubic spline routine mathematically fits curve through given ordered set of points so that fitted curve nearly approximates curve generated by passing infinite thin spline through set of points. Generalized formulation includes trigonometric, hyperbolic, and damped cubic spline fits of third order.

  18. An Ordered Regression Model to Predict Transit Passengers’ Behavioural Intentions

    Energy Technology Data Exchange (ETDEWEB)

    Oña, J. de; Oña, R. de; Eboli, L.; Forciniti, C.; Mazzulla, G.

    2016-07-01

    Passengers’ behavioural intentions after experiencing transit services can be viewed as signals of whether a customer will continue to use a company’s service. Users’ behavioural intentions can depend on a series of aspects that are difficult to measure directly. More recently, transit passengers’ behavioural intentions have been considered together with the concepts of service quality and customer satisfaction. Given the ways in which passengers’ behavioural intentions, service quality and customer satisfaction are evaluated, this kind of issue can also be analysed by applying ordered regression models. This work proposes an ordered probit model for analysing the service quality factors that can influence passengers’ behavioural intentions towards the use of transit services. The case study is the LRT of Seville (Spain), where a survey was conducted to collect passengers’ opinions about the existing transit service and to measure the aspects that can influence users’ intentions to continue using the transit service in the future. (Author)
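    An ordered probit links a latent index x·beta to ordinal responses through a set of cutpoints. The sketch below fits one by maximum likelihood with scipy on simulated ratings; it is a generic illustration of the model class, not the Seville survey model.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(12)
        n, K = 500, 4                           # K ordered response levels
        x = rng.normal(size=(n, 2))             # e.g., service-quality factors
        latent = x @ np.array([1.0, -0.5]) + rng.normal(size=n)
        cuts_true = np.array([-1.0, 0.0, 1.0])
        y = np.searchsorted(cuts_true, latent)  # categories 0..K-1

        def nll(params):
            """Negative log-likelihood of the ordered probit."""
            beta, cuts = params[:2], np.sort(params[2:])
            z = x @ beta
            upper = np.append(cuts, np.inf)[y]   # cutpoint above each response
            lower = np.append(-np.inf, cuts)[y]  # cutpoint below each response
            p = norm.cdf(upper - z) - norm.cdf(lower - z)
            return -np.sum(np.log(np.clip(p, 1e-12, None)))

        res = minimize(nll, x0=np.array([0.0, 0.0, -1.0, 0.0, 1.0]), method="BFGS")
        print("beta:", res.x[:2].round(2), "cutpoints:", np.sort(res.x[2:]).round(2))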

  19. Density Deconvolution With EPI Splines

    Science.gov (United States)

    2015-09-01


  20. Segmented relationships to model erosion of regression effect in Cox regression.

    Science.gov (United States)

    Muggeo, Vito M R; Attanasio, Massimo

    2011-08-01

    In this article we propose a parsimonious parameterisation to model the so-called erosion of the covariate effect in the Cox model, namely a covariate effect approaching zero as the follow-up time increases. The proposed parameterisation is based on a segmented relationship in which proper constraints are set to accommodate the erosion. Relevant hypothesis testing is discussed. The approach is illustrated on two historical datasets from the survival analysis literature, and some simulation studies are presented to show how the proposed framework leads to a test for a global effect with good power compared with alternative procedures. Finally, possible generalisations are presented for future research.

  1. Color Image Segmentation Using Fuzzy C-Regression Model

    Directory of Open Access Journals (Sweden)

    Min Chen

    2017-01-01

    Full Text Available Image segmentation is one important process in image analysis and computer vision and is a valuable tool that can be applied in fields of image processing, health care, remote sensing, and traffic image detection. Given the lack of prior knowledge of the ground truth, unsupervised learning techniques like clustering have been largely adopted. Fuzzy clustering has been widely studied and successfully applied in image segmentation. In situations such as limited spatial resolution, poor contrast, overlapping intensities, and noise and intensity inhomogeneities, fuzzy clustering can retain much more information than hard clustering techniques. Most fuzzy clustering algorithms have originated from fuzzy c-means (FCM) and have been successfully applied in image segmentation. However, the cluster prototype of the FCM method is hyperspherical or hyperellipsoidal, so FCM may not provide accurate partitions when the data consist of arbitrary shapes. Therefore, a Fuzzy C-Regression Model (FCRM) using spatial information has been proposed, whose prototypes are hyperplanes and can be either linear or nonlinear, allowing for better cluster partitioning. This paper implements FCRM and applies the algorithm to color segmentation using Berkeley’s segmentation database. The results show that FCRM obtains more accurate results compared to other fuzzy clustering algorithms.

  2. Application of regression model on stream water quality parameters

    International Nuclear Information System (INIS)

    Suleman, M.; Maqbool, F.; Malik, A.H.; Bhatti, Z.A.

    2012-01-01

    Statistical analysis was conducted to evaluate the effect of solid waste leachate from the open solid-waste dumping site of Salhad on stream water quality. Five sites were selected along the stream: two prior to the mixing of leachate with the surface water, one of the leachate itself, and two affected by the leachate. Samples were analyzed for pH, water temperature, electrical conductivity (EC), total dissolved solids (TDS), biological oxygen demand (BOD), chemical oxygen demand (COD), dissolved oxygen (DO) and total bacterial load (TBL). Correlation coefficients r among the water quality parameters of the various sites were calculated using the Pearson model, and the average of each correlation between two parameters was computed; the TDS-EC and pH-BOD pairs showed significantly positive r values, while the temperature-TDS, temperature-EC, DO-TBL and DO-COD pairs showed negative r values. Single-factor ANOVA at the 5% level of significance showed that EC, TDS, TBL and COD differed significantly among the sites. By the application of these two statistical approaches, TDS and EC show a strongly positive correlation because the ions from the dissolved solids in water influence the ability of that water to conduct an electrical current. These two parameters vary significantly among the five sites, which was further confirmed by linear regression. (author)

  3. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  4. Formation of Reflecting Surfaces Based on Spline Methods

    Science.gov (United States)

    Zamyatin, A. V.; Zamyatina, E. A.

    2017-11-01

    The article deals with the problem of generating the surfaces of reflecting barriers by spline methods. The cases of reflection when a geometric model is applied are considered. The surfaces of reflecting barriers are formed in such a way that they contain given points, and the rays reflected at these points hit defined points of a specified surface. The reflecting barrier surface is formed by cubic splines, which enables a comparatively simple implementation of the proposed algorithms in the form of software applications. The algorithms developed in the article can be applied in architectural and construction design for generating reflecting surfaces in optics and acoustics, provided the geometric model of the reflection process is used correctly.

  5. MODELING SNAKE MICROHABITAT FROM RADIOTELEMETRY STUDIES USING POLYTOMOUS LOGISTIC REGRESSION

    Science.gov (United States)

    Multivariate analysis of snake microhabitat has historically used techniques that were derived under assumptions of normality and common covariance structure (e.g., discriminant function analysis, MANOVA). In this study, polytomous logistic regression (PLR), which does not require ...

  6. Direct modeling of regression effects for transition probabilities in the progressive illness-death model

    DEFF Research Database (Denmark)

    Azarang, Leyla; Scheike, Thomas; de Uña-Álvarez, Jacobo

    2017-01-01

    In this work, we present direct regression analysis for the transition probabilities in the possibly non-Markov progressive illness–death model. The method is based on binomial regression, where the response is the indicator of the occupancy for the given state along time. Randomly weighted score...

  7. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression.

    Science.gov (United States)

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-04-08

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that the road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicates that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are clustered only in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale.
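    The core of GWR is a weighted least-squares fit at each location, with weights decaying with distance from that location; below is a compact numpy sketch with a Gaussian kernel and a fixed bandwidth on synthetic data (bandwidth selection, which matters in practice, is omitted).

        import numpy as np

        rng = np.random.default_rng(13)
        n = 300
        coords = rng.uniform(0, 10, (n, 2))        # site locations
        road = rng.normal(size=n)                  # e.g., road density
        # Spatially varying coefficient: the road effect grows from west to east.
        beta_road = 0.5 + 0.1 * coords[:, 0]
        fire = 1.0 + beta_road * road + rng.normal(0, 0.2, n)

        X = np.column_stack([np.ones(n), road])

        def gwr_at(site, bandwidth=2.0):
            """Local WLS fit centred on `site` with Gaussian distance decay."""
            d2 = ((coords - site) ** 2).sum(axis=1)
            w = np.exp(-0.5 * d2 / bandwidth**2)
            Xw = X * w[:, None]
            return np.linalg.solve(X.T @ Xw, Xw.T @ fire)

        west = gwr_at(np.array([1.0, 5.0]))
        east = gwr_at(np.array([9.0, 5.0]))
        print("local road coefficient, west vs east:",
              west[1].round(2), east[1].round(2))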

  8. A comparison of least squares regression and geographically weighted regression modeling of West Nile virus risk based on environmental parameters.

    Science.gov (United States)

    Kala, Abhishek K; Tiwari, Chetan; Mikler, Armin R; Atkinson, Samuel F

    2017-01-01

    The primary aim of the study reported here was to determine the effectiveness of utilizing local spatial variations in environmental data to uncover the statistical relationships between West Nile Virus (WNV) risk and environmental factors. Because least squares regression methods do not account for spatial autocorrelation and non-stationarity of the type of spatial data analyzed for studies that explore the relationship between WNV and environmental determinants, we hypothesized that a geographically weighted regression model would help us better understand how environmental factors are related to WNV risk patterns without the confounding effects of spatial non-stationarity. We examined commonly mapped environmental factors using both ordinary least squares regression (LSR) and geographically weighted regression (GWR). Both types of models were applied to examine the relationship between WNV-infected dead bird counts and various environmental factors for those locations. The goal was to determine which approach yielded a better predictive model. LSR efforts led to identifying three environmental variables that were statistically significantly related to WNV-infected dead birds (adjusted R² = 0.61): stream density, road density, and land surface temperature. GWR efforts increased the explanatory value of these three environmental variables with better spatial precision (adjusted R² = 0.71). The spatial granularity resulting from the geographically weighted approach provides a better understanding of how environmental spatial heterogeneity is related to WNV risk as implied by WNV-infected dead birds, which should allow improved planning of public health management strategies.

  9. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression

    Directory of Open Access Journals (Sweden)

    Chao Song

    2017-04-01

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicates that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities and road construction are clustered only in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus, governments can use the results to manage fire safety at the city scale.

  10. A logistic regression model for Ghana National Health Insurance claims

    Directory of Open Access Journals (Sweden)

    Samuel Antwi

    2013-07-01

    In August 2003, the Ghanaian Government made history by implementing the first National Health Insurance System (NHIS) in Sub-Saharan Africa. Within three years, over half of the country’s population had voluntarily enrolled in the National Health Insurance Scheme. This study had three objectives: 1) to estimate the risk factors that influence Ghana national health insurance claims; 2) to estimate the magnitude of each of the risk factors in relation to the Ghana national health insurance claims. In this work, data were collected from the policyholders of the Ghana National Health Insurance Scheme with the help of the National Health Insurance database and the patients’ attendance register of the Koforidua Regional Hospital, from 1st January to 31st December 2011. Quantitative analysis was done using generalized linear regression (GLR) models. The results indicate that risk factors such as sex, age, marital status, distance and length of stay at the hospital were important predictors of health insurance claims. However, it was found that the risk factors health status, billed charges and income level are not good predictors of national health insurance claims. The outcome of the study shows that sex, age, marital status, distance and length of stay at the hospital are statistically significant in the determination of Ghana National Health Insurance premiums, since they considerably influence claims. We recommend, among other things, that the National Health Insurance Authority facilitate the institutionalization of the collection of appropriate data on a continuous basis to help in the determination of future premiums.

  11. Spline and spline wavelet methods with applications to signal and image processing

    CERN Document Server

    Averbuch, Amir Z; Zheludev, Valery A

    This volume provides universal methodologies, accompanied by Matlab software, for manipulating numerous signal and image processing applications. It is done with discrete and polynomial periodic splines. Various contributions of splines to signal and image processing are presented from a unified perspective. This presentation is based on the Zak transform and on the Spline Harmonic Analysis (SHA) methodology. SHA combines the approximation capabilities of splines with the computational efficiency of the Fast Fourier transform. SHA reduces the design of different spline types, such as splines, spline wavelets (SW), spline wavelet frames (SWF) and spline wavelet packets (SWP), and their manipulation to simple operations. Digital filters produced by the wavelet design process give rise to subdivision schemes. Subdivision schemes enable fast explicit computation of splines' values at dyadic and triadic rational points, which is used for upsampling signals and images. In addition to the design of a diverse library of splines, SW, SWP a...

  12. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

    Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models

  13. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    Science.gov (United States)

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  14. Linking Simple Economic Theory Models and the Cointegrated Vector AutoRegressive Model

    DEFF Research Database (Denmark)

    Møller, Niels Framroze

    This paper attempts to clarify the connection between simple economic theory models and the approach of the Cointegrated Vector-Auto-Regressive model (CVAR). By considering (stylized) examples of simple static equilibrium models, it is illustrated in detail how the theoretical model and its...

  15. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  16. Differential constraints for bounded recursive identification with multivariate splines

    NARCIS (Netherlands)

    De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2011-01-01

    The ability to perform online model identification for nonlinear systems with unknown dynamics is essential to any adaptive model-based control system. In this paper, a new differential equality constrained recursive least squares estimator for multivariate simplex splines is presented that is able

  17. Adaptive regression modeling of biomarkers of potential harm in a population of U.S. adult cigarette smokers and nonsmokers

    Directory of Open Access Journals (Sweden)

    Mendes Paul E

    2010-03-01

    Abstract. Background: This article describes the data mining analysis of a clinical exposure study of 3585 adult smokers and 1077 nonsmokers. The analysis focused on developing models for four biomarkers of potential harm (BOPH): white blood cell count (WBC), 24 h urine 8-epi-prostaglandin F2α (EPI8), 24 h urine 11-dehydro-thromboxane B2 (DEH11), and high-density lipoprotein cholesterol (HDL). Methods: Random Forest was used for initial variable selection and Multivariate Adaptive Regression Splines were used for developing the final statistical models. Results: The analysis resulted in the generation of models that predict each of the BOPH as a function of selected variables from the smokers and nonsmokers. The statistically significant variables in the models were: platelet count, hemoglobin, C-reactive protein, triglycerides, race and biomarkers of exposure to cigarette smoke for WBC (R-squared = 0.29); creatinine clearance, liver enzymes, weight, vitamin use and biomarkers of exposure for EPI8 (R-squared = 0.41); creatinine clearance, urine creatinine excretion, liver enzymes, use of non-steroidal anti-inflammatory drugs, vitamins and biomarkers of exposure for DEH11 (R-squared = 0.29); and triglycerides, weight, age, sex, alcohol consumption and biomarkers of exposure for HDL (R-squared = 0.39). Conclusions: Levels of WBC, EPI8, DEH11 and HDL were statistically associated with biomarkers of exposure to cigarette smoking and with demographic and lifestyle factors. All of the predictors together explain 29%-41% of the variability in the BOPH.
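
    A sketch of this two-stage pipeline on hypothetical data: random forest importances for initial variable selection, followed by a MARS fit on the retained variables. The MARS step assumes the third-party py-earth package is installed; any MARS implementation could be substituted.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from pyearth import Earth   # assumed third-party MARS implementation

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 20))     # stand-ins for exposure biomarkers, labs
    y = np.maximum(X[:, 0], 0) + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300)

    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    keep = np.argsort(rf.feature_importances_)[-5:]          # top-5 variables
    mars = Earth().fit(X[:, keep], y)                        # final MARS model
    print(mars.summary())              # hinge-function basis selected by MARS
    ```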

  18. Comparison of Bayesian regression models and partial least squares regression for the development of infrared prediction equations.

    Science.gov (United States)

    Bonfatti, V; Tiezzi, F; Miglior, F; Carnier, P

    2017-09-01

    The objective of this study was to compare the prediction accuracy of 92 infrared prediction equations obtained by different statistical approaches. The predicted traits included fatty acid composition (n = 1,040); detailed protein composition (n = 1,137); lactoferrin (n = 558); pH and coagulation properties (n = 1,296); curd yield and composition obtained by a micro-cheese making procedure (n = 1,177); and Ca, P, Mg, and K contents (n = 689). The statistical methods used to develop the prediction equations were partial least squares regression (PLSR), Bayesian ridge regression, Bayes A, Bayes B, Bayes C, and Bayesian least absolute shrinkage and selection operator. Model performances were assessed, for each trait and model, in training and validation sets over 10 replicates. In validation sets, Bayesian regression models performed significantly better than PLSR for the prediction of 33 out of 92 traits, especially fatty acids, whereas they yielded a significantly lower prediction accuracy than PLSR in the prediction of 8 traits: the percentage of C18:1n-7 trans-9 in fat; the content of unglycosylated κ-casein and its percentage in protein; the content of α-lactalbumin; the percentage of αS2-casein in protein; and the contents of Ca, P, and Mg. Even though Bayesian methods produced a significant enhancement of model accuracy in many traits compared with PLSR, most variations in the coefficient of determination in validation sets were smaller than 1 percentage point. Over traits, the highest predictive ability was obtained by Bayes C even though most of the significant differences in accuracy between Bayesian regression models were negligible. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
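
    A sketch of this kind of comparison on hypothetical spectra-like data, contrasting PLSR with one Bayesian shrinkage regression and scoring each on a held-out validation set (scikit-learn); the paper's Bayes A/B/C and Bayesian LASSO variants are not reproduced.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.linear_model import BayesianRidge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    X = rng.normal(size=(400, 50))                       # e.g., infrared absorbances
    y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=400)
    Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

    pls = PLSRegression(n_components=10).fit(Xtr, ytr)
    bay = BayesianRidge().fit(Xtr, ytr)
    print("PLSR validation R^2: ", pls.score(Xva, yva))  # accuracy on held-out set
    print("Bayes validation R^2:", bay.score(Xva, yva))
    ```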

  19. Activation force splines

    DEFF Research Database (Denmark)

    Engell-Nørregård, Morten Pol; Erleben, Kenny

    We present a method for simulating the active contraction of deformable models, usable for interactive animation of soft deformable objects. We present a novel physical principle as the governing equation for the coupling between the low dimensional 1D activation force model and the higher...

  20. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is 1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, 2) to discuss common pitfalls and methodological errors in developing a model, and 3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the questions of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures and evaluating performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and second, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
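
    A sketch of the workflow the paper walks through, on hypothetical data: fit a logistic model, report odds ratios with confidence intervals as effect measures, and assess model strength with a classification table and ROC AUC.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from sklearn.metrics import confusion_matrix, roc_auc_score

    rng = np.random.default_rng(5)
    n = 500
    X = sm.add_constant(rng.normal(size=(n, 3)))   # e.g., liquidity, leverage ratios
    true_logit = X @ np.array([-0.5, 0.8, -0.3, 0.6])
    grew = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))   # 1 = high growth

    fit = sm.Logit(grew, X).fit(disp=False)
    print(np.exp(fit.params))                      # odds ratios as effect measures
    print(np.exp(fit.conf_int()))                  # 95% CIs for the odds ratios

    p_hat = fit.predict(X)
    print(confusion_matrix(grew, p_hat > 0.5))     # classification table
    print("AUC:", roc_auc_score(grew, p_hat))      # area under the ROC curve
    ```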

  1. Parametric vs. Nonparametric Regression Modelling within Clinical Decision Support

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan; Zvárová, Jana

    2017-01-01

    Roč. 5, č. 1 (2017), s. 21-27 ISSN 1805-8698 R&D Projects: GA ČR GA17-01251S Institutional support: RVO:67985807 Keywords : decision support systems * decision rules * statistical analysis * nonparametric regression Subject RIV: IN - Informatics, Computer Science OBOR OECD: Statistics and probability

  2. Bilinear regression model with Kronecker and linear structures for ...

    African Journals Online (AJOL)

    On the basis of n independent observations from a matrix normal distribution, estimating equations in a flip-flop relation are established and the consistency of estimators is studied. Keywords: Bilinear regression; Estimating equations; Flip-flop algorithm; Kronecker product structure; Linear structured covariance matrix; ...

  3. Covariance Functions and Random Regression Models in the ...

    African Journals Online (AJOL)

    ARC-IRENE

    CFs were on age of the cow expressed in months (AM) using quadratic (order three) regressions based on orthogonal (Legendre) polynomials, initially proposed by Kirkpatrick & Heckman (1989). The matrices of coefficients KG and KC (corresponding to the additive genetic and permanent environmental functions, G.

  4. A binary logistic regression model with complex sampling design of ...

    African Journals Online (AJOL)

    2017-09-03

    SPSS-21. Binary logistic regression with complex sampling design was fitted for the unmet need outcomes. Married women are disaggregated by various background characteristics to gain insight into their characteristics. All background characteristics of women used in this study were categorical ...

  5. Semiparametric Mixtures of Regressions with Single-index for Model Based Clustering

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2017-01-01

    In this article, we propose two classes of semiparametric mixture regression models with single-index for model based clustering. Unlike many semiparametric/nonparametric mixture regression models that can only be applied to low dimensional predictors, the new semiparametric models can easily incorporate high dimensional predictors into the nonparametric components. The proposed models are very general, and many of the recently proposed semiparametric/nonparametric mixture regression models a...

  6. Prediction models for CO2 emission in Malaysia using best subsets regression and multi-linear regression

    Science.gov (United States)

    Tan, C. H.; Matjafri, M. Z.; Lim, H. S.

    2015-10-01

    This paper presents prediction models which analyze and compute CO2 emissions in Malaysia. Each prediction model for CO2 emissions was analyzed based on three main groups: transportation; electricity and heat production; and residential buildings together with commercial and public services. The prediction models were generated using data obtained from World Bank Open Data. The best subset method was used to remove irrelevant data, followed by multi-linear regression to produce the prediction models. From the results, high R-squared (prediction) values were obtained, implying that the models are reliable for predicting CO2 emissions from the specified data. In addition, the CO2 emissions from these three groups are forecasted using trend analysis plots for observation purposes.
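
    A sketch of the two-stage approach on hypothetical data: an exhaustive best-subset search (feasible for a handful of predictors) scored by adjusted R-squared, with the winning subset fitted by ordinary multiple linear regression. The variable names are invented stand-ins for the three groups.

    ```python
    from itertools import combinations
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    cols = ["transport", "electricity", "residential", "noise"]
    df = pd.DataFrame(rng.normal(size=(60, 4)), columns=cols)
    df["co2"] = 3 * df["transport"] + 2 * df["electricity"] + rng.normal(size=60)

    best = max(
        (sm.OLS(df["co2"], sm.add_constant(df[list(subset)])).fit()
         for k in range(1, len(cols) + 1)
         for subset in combinations(cols, k)),
        key=lambda f: f.rsquared_adj,              # penalizes irrelevant predictors
    )
    print(best.model.exog_names, round(best.rsquared_adj, 3))
    ```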

  7. Semiparametric nonlinear quantile regression model for financial returns

    Czech Academy of Sciences Publication Activity Database

    Avdulaj, Krenar; Baruník, Jozef

    2017-01-01

    Roč. 21, č. 1 (2017), s. 81-97 ISSN 1081-1826 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords : copula quantile regression * realized volatility * value-at-risk Subject RIV: AH - Economics OBOR OECD: Applied Economics, Econometrics Impact factor: 0.649, year: 2016 http://library.utia.cas.cz/separaty/2017/E/avdulaj-0472346.pdf

  8. Quadrotor system identification using the multivariate multiplex b-spline

    NARCIS (Netherlands)

    Visser, T.; De Visser, C.C.; Van Kampen, E.J.

    2015-01-01

    A novel method for aircraft system identification is presented that is based on a new multivariate spline type: the multivariate multiplex B-spline. The multivariate multiplex B-spline is a generalization of the recently introduced tensor-simplex B-spline. Multivariate multiplex splines obtain

  9. Use of multiple linear regression and logistic regression models to investigate changes in birthweight for term singleton infants in Scotland.

    Science.gov (United States)

    Bonellie, Sandra R

    2012-10-01

    To illustrate the use of regression and logistic regression models to investigate changes over time in the size of babies, particularly in relation to social deprivation, age of the mother and smoking. Mean birthweight has been found to be increasing in many countries in recent years, but there is still a group of babies who are born with low birthweights. Population-based retrospective cohort study. Multiple linear regression and logistic regression models are used to analyse data on term 'singleton births' from Scottish hospitals between 1994-2003. Mothers who smoke are shown to give birth to lighter babies on average, a difference of approximately 0.57 standard deviations lower (95% confidence interval 0.55-0.58) when adjusted for sex and parity. These mothers are also more likely to have babies that are low birthweight (odds ratio 3.46, 95% confidence interval 3.30-3.63) compared with non-smokers. Low birthweight is 30% more likely where the mother lives in the most deprived areas compared with the least deprived (odds ratio 1.30, 95% confidence interval 1.21-1.40). Smoking during pregnancy is shown to have a detrimental effect on the size of infants at birth. This effect explains some, though not all, of the observed socioeconomic differences in birthweight. It also explains much of the observed birthweight differences by the age of the mother. Identifying mothers at greater risk of having a low birthweight baby has important implications for the care and advice this group receives. © 2012 Blackwell Publishing Ltd.

  10. Construction of local integro quintic splines

    Directory of Open Access Journals (Sweden)

    T. Zhanlav

    2016-06-01

    In this paper, we show that integro quintic splines can be constructed locally without solving any systems of equations. The new construction does not require any additional end conditions. By virtue of these advantages, the proposed algorithm is easy to implement and effective. At the same time, the local integro quintic splines possess approximation properties as good as those of the integro quintic splines. In this paper, we have proved that our local integro quintic spline has superconvergence properties at the knots for the first and third derivatives. The orders of convergence at the knots are six (not five) for the first derivative and four (not three) for the third derivative.

  11. Model checks for Cox-type regression models based on optimally weighted martingale residuals.

    Science.gov (United States)

    Gandy, Axel; Jensen, Uwe

    2009-12-01

    We introduce directed goodness-of-fit tests for Cox-type regression models in survival analysis. "Directed" means that one may choose against which alternatives the tests are particularly powerful. The tests are based on sums of weighted martingale residuals and their asymptotic distributions. We derive optimal tests against certain competing models which include Cox-type regression models with different covariates and/or a different link function. We report results from several simulation studies and apply our test to a real dataset.

  12. Beta Regression Finite Mixture Models of Polarization and Priming

    Science.gov (United States)

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  13. Regression mixture models : Does modeling the covariance between independent variables and latent classes improve the results?

    NARCIS (Netherlands)

    Lamont, A.E.; Vermunt, J.K.; Van Horn, M.L.

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we tested the effects of violating an implicit assumption often made in these models; that is, independent variables in the

  14. Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.

    Science.gov (United States)

    Kainu, Annette; Timonen, Kirsi

    2016-07-01

    Background: Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; thus, modern modelling approaches using smoothing spline functions to better depict the transition during growth and ageing have recently been introduced. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in tables. The aim of this study was to produce polynomial estimates for these spline functions to use in lieu of lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods: Linear regression modelling was used to approximate the estimated values for Mspline and Sspline using polynomial functions similar to those in the international GLI2012 reference values. Estimated values were compared to the original calculations in absolute values, in the derived predicted mean, and in individually calculated z-scores using both values. Results: Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly at larger predicted volumes, but with a range of -0.018 to +0.022 litres of FEV1, representing at most ±0.4% of the predicted mean. Conclusions: Polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values.
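
    The core idea, approximating a tabulated spline contribution by a closed-form polynomial so the lookup table can be dropped, can be sketched in a few lines; the tabulated values below are hypothetical stand-ins, not the Finnish reference data.

    ```python
    import numpy as np

    age = np.linspace(18, 80, 32)                    # ages at which the table exists
    mspline = 0.002 * (age - 45.0) ** 2 - 0.05 * age # stand-in for tabulated Mspline

    coefs = np.polyfit(age, mspline, deg=4)          # least-squares polynomial fit
    approx = np.polyval(coefs, age)
    print("max abs error:", np.max(np.abs(approx - mspline)))  # agreement check
    ```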

  15. Epsilon-insensitive fuzzy c-regression models: introduction to epsilon-insensitive fuzzy modeling.

    Science.gov (United States)

    Leski, Jacek M

    2004-02-01

    This paper introduces new epsilon-insensitive fuzzy c-regression models (epsilonFCRM) that can be used in fuzzy modeling. To fit these regression models to real data, a weighted epsilon-insensitive loss function is used. The proposed method makes it possible to exclude an intrinsic inconsistency of fuzzy modeling, where a crisp loss function (usually quadratic) is used to match real data and the fuzzy model. The epsilon-insensitive fuzzy modeling is based on human thinking and learning. This method allows easy control of generalization ability and robustness to outliers. This approach leads to c simultaneous quadratic programming problems with bound constraints and one linear equality constraint. To solve this problem, a computationally efficient numerical method, called incremental learning, is proposed. Finally, examples are given to demonstrate the validity of the introduced approach to fuzzy modeling.

  16. Dynamic modeling of predictive uncertainty by regression on absolute errors

    NARCIS (Netherlands)

    Pianosi, F.; Raso, L.

    2012-01-01

    Uncertainty of hydrological forecasts represents valuable information for water managers and hydrologists. This explains the popularity of probabilistic models, which provide the entire distribution of the hydrological forecast. Nevertheless, many existing hydrological models are deterministic and

  17. A generalized exponential time series regression model for electricity prices

    DEFF Research Database (Denmark)

    Haldrup, Niels; Knapik, Oskar; Proietti, Tomasso

    on the estimated model, the best linear predictor is constructed. Our modeling approach provides good fit within sample and outperforms competing benchmark predictors in terms of forecasting accuracy. We also find that building separate models for each hour of the day and averaging the forecasts is a better...

  18. Spline methods for conservation equations

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.

    1991-01-01

    We consider the numerical solution of physical theories, in particular hydrodynamics, which can be formulated as systems of conservation laws. To this end we briefly describe the Basis Spline and collocation methods, paying particular attention to representation theory, which provides discrete analogues of the continuum conservation and dispersion relations, and hence a rigorous understanding of errors and instabilities. On this foundation we propose an algorithm for hydrodynamic problems in which most linear and nonlinear instabilities are brought under control. Numerical examples are presented from one-dimensional relativistic hydrodynamics. 9 refs., 10 figs

  19. Quadratic spline finite element method

    Directory of Open Access Journals (Sweden)

    A. R. Bahadir

    2002-01-01

    The problem of heat transfer in a Positive Temperature Coefficient (PTC) thermistor, which may form one element of an electric circuit, is solved numerically by a finite element method. The approach used is based on the Galerkin finite element method using quadratic splines as shape functions. The resulting system of ordinary differential equations is solved by the finite difference method. Comparison is made with numerical and analytical solutions, and the accuracy of the computed solutions indicates that the method is well suited for the solution of the PTC thermistor problem.

  20. Parental Vaccine Acceptance: A Logistic Regression Model Using Previsit Decisions.

    Science.gov (United States)

    Lee, Sara; Riley-Behringer, Maureen; Rose, Jeanmarie C; Meropol, Sharon B; Lazebnik, Rina

    2017-07-01

    This study explores how parents' intentions regarding vaccination prior to their children's visit were associated with actual vaccine acceptance. A convenience sample of parents accompanying 6-week-old to 17-year-old children completed a written survey at 2 pediatric practices. Using hierarchical logistic regression, for hospital-based participants (n = 216), vaccine refusal history (P < .01) and vaccine decision made before the visit (P < .05) explained 87% of vaccine refusals. In community-based participants (n = 100), vaccine refusal history (P < .01) explained 81% of refusals. Over 1 in 5 parents changed their minds about vaccination during the visit. Thirty parents who were previous vaccine refusers accepted current vaccines, and 37 who had intended not to vaccinate chose vaccination. Twenty-nine parents without a refusal history declined vaccines, and 32 who did not intend to refuse before the visit declined vaccination. Future research should identify key factors to nudge parent decision making in favor of vaccination.

  1. Forecast Model of Urban Stagnant Water Based on Logistic Regression

    Directory of Open Access Journals (Sweden)

    Liu Pan

    2017-01-01

    With the development of information technology, the construction of water resource systems has been gradually carried out. Against the background of big data, water information work needs to move from quantitative to qualitative analysis. Analyzing correlations in the data and exploring the deeper value of the data are the keys to water information research. On the basis of research on water big data and the traditional data warehouse architecture, we try to find connections between different data sources. According to the temporal and spatial correlation of stagnant water and rainfall, we use spatial interpolation to integrate stagnant water and rainfall data from different data sources and different sensors, and then use logistic regression to find the relationship between them.

  2. Additive Intensity Regression Models in Corporate Default Analysis

    DEFF Research Database (Denmark)

    Lando, David; Medhat, Mamdouh; Nielsen, Mads Stenbo

    2013-01-01

    We consider additive intensity (Aalen) models as an alternative to the multiplicative intensity (Cox) models for analyzing the default risk of a sample of rated, nonfinancial U.S. firms. The setting allows for estimating and testing the significance of time-varying effects. We use a variety of model checking techniques to identify misspecifications. In our final model, we find evidence of time-variation in the effects of distance-to-default and short-to-long term debt. Also, we identify interactions between distance-to-default and other covariates, and the quick ratio covariate is significant. None of our macroeconomic covariates are significant.

  3. Misspecified poisson regression models for large-scale registry data

    DEFF Research Database (Denmark)

    Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.

    2016-01-01

    working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods...

  4. Approximate Tests of Hypotheses in Regression Models with Grouped Data

    Science.gov (United States)

    1979-02-01

    The approximate tests are characterized in terms of the Kolmogoroff-Smirnov statistic. Two models have been considered for the simulations.

  5. Effects of Employing Ridge Regression in Structural Equation Models.

    Science.gov (United States)

    McQuitty, Shaun

    1997-01-01

    LISREL 8 invokes a ridge option when maximum likelihood or generalized least squares are used to estimate a structural equation model with a nonpositive definite covariance or correlation matrix. Implications of the ridge option for model fit, parameter estimates, and standard errors are explored through two examples. (SLD)

  6. Univariate Cubic L1 Interpolating Splines: Spline Functional, Window Size and Analysis-based Algorithm

    Directory of Open Access Journals (Sweden)

    Shu-Cherng Fang

    2010-08-01

    We compare univariate L1 interpolating splines calculated on 5-point windows, on 7-point windows and on global data sets using four different spline functionals, namely, ones based on the second derivative, the first derivative, the function value and the antiderivative. Computational results indicate that second-derivative-based 5-point-window L1 splines preserve shape as well as or better than the other types of L1 splines. To calculate second-derivative-based 5-point-window L1 splines, we introduce an analysis-based, parallelizable algorithm. This algorithm is orders of magnitude faster than the previously widely used primal affine algorithm.

  7. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.

    Science.gov (United States)

    Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko

    2016-03-01

    In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. On Solving Lq-Penalized Regressions

    Directory of Open Access Journals (Sweden)

    Tracy Zhou Wu

    2007-01-01

    Lq-penalized regression arises in multidimensional statistical modelling where all or part of the regression coefficients are penalized to achieve both accuracy and parsimony of statistical models. There is often substantial computational difficulty except for the quadratic penalty case. The difficulty is partly due to the nonsmoothness of the objective function inherited from the use of the absolute value. We propose a new solution method for the general Lq-penalized regression problem based on space transformation and thus efficient optimization algorithms. The new method has immediate applications in statistics, notably in penalized spline smoothing problems. In particular, the LASSO problem is shown to be polynomial time solvable. Numerical studies show promise of our approach.
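
    The two classical special cases, q = 1 (LASSO) and q = 2 (ridge), can be sketched with scikit-learn on hypothetical data; the paper's general-Lq space-transformation solver is not part of standard libraries and is not reproduced here.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.default_rng(7)
    X = rng.normal(size=(100, 8))
    y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

    print("lasso:", Lasso(alpha=0.1).fit(X, y).coef_)   # sparse: exact zeros
    print("ridge:", Ridge(alpha=1.0).fit(X, y).coef_)   # shrunken but dense
    ```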

  9. Logistic regression model for detecting radon prone areas in Ireland.

    Science.gov (United States)

    Elío, J; Crowley, Q; Scanlon, R; Hodgson, J; Long, S

    2017-12-01

    A new high spatial resolution radon risk map of Ireland has been developed, based on a combination of indoor radon measurements (n = 31,910) and relevant geological information (i.e. Bedrock Geology, Quaternary Geology, soil permeability and aquifer type). Logistic regression was used to predict the probability of having an indoor radon concentration above the national reference level of 200 Bq m-3 in Ireland. The four geological datasets evaluated were found to be statistically significant, and, based on combinations of these four variables, the predicted probabilities ranged from 0.57% to 75.5%. Results show that the Republic of Ireland may be divided into three main radon risk categories: High (HR), Medium (MR) and Low (LR). The probability of having an indoor radon concentration above 200 Bq m-3 in each area was found to be 19%, 8% and 3%, respectively. In the Republic of Ireland, the population affected by radon concentrations above 200 Bq m-3 is estimated at ca. 460k (about 10% of the total population). Of these, 57% (265k), 35% (160k) and 8% (35k) are in High, Medium and Low Risk Areas, respectively. Our results provide a high spatial resolution utility which permits customised radon-awareness information to be targeted at specific geographic areas. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Spline fitting for multi-set data

    International Nuclear Information System (INIS)

    Zhou Hongmo; Liu Renqiu; Liu Tingjin

    1987-01-01

    A spline fit method and program for multi-set data have been developed. Improvements have been made to add new functions: splines of any order as the basis, knot optimization, and accurate calculation of the error of the fitted value. The program has been used for practical evaluation of nuclear data
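
    A sketch of least-squares spline fitting with a user-chosen spline order and interior knots, in the spirit of such a tool; the data are hypothetical, and since scipy fits one data set at a time, a multi-set fit would loop over (or stack) the sets.

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(8)
    x = np.linspace(0.0, 10.0, 200)
    y = np.sin(x) + rng.normal(scale=0.1, size=x.size)   # noisy measurements

    knots = np.linspace(1.0, 9.0, 7)                     # interior knots to tune
    spl = LSQUnivariateSpline(x, y, knots, k=3)          # k sets the spline order
    print("residual sum of squares:", spl.get_residual())
    ```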

  11. Learning Supervised Topic Models for Classification and Regression from Crowds

    OpenAIRE

    Rodrigues, Filipe; Lourenco, Mariana; Ribeiro, Bernardete; Pereira, Francisco

    2017-01-01

    The growing need to analyze large collections of documents has led to great developments in topic modeling. Since documents are frequently associated with other related variables, such as labels or ratings, much interest has been placed on supervised topic models. However, the nature of most annotation tasks, prone to ambiguity and noise, often with high volumes of documents, makes learning under a single-annotator assumption unrealistic or impractical for most real-world applications. In this...

  12. GIS-Based Analytical Tools for Transport Planning: Spatial Regression Models for Transportation Demand Forecast

    Directory of Open Access Journals (Sweden)

    Simone Becker Lopes

    2014-04-01

    Considering the importance of spatial issues in transport planning, the main objective of this study was to analyze the results obtained from different approaches to spatial regression models. In the case of spatial autocorrelation, spatial dependence patterns should be incorporated in the models, since that dependence may affect their predictive power. The results obtained with the spatial regression models were also compared with the results of a multiple linear regression model that is typically used in trip generation estimation. The findings support the hypothesis that the inclusion of spatial effects in regression models is important, since the best results were obtained with the alternative models (spatial regression models or ones with spatial variables included). This was observed in a case study carried out in the city of Porto Alegre, in the state of Rio Grande do Sul, Brazil, in the stages of specification and calibration of the models, with two distinct datasets.

  13. Stochastic Approximation Methods for Latent Regression Item Response Models. Research Report. ETS RR-09-09

    Science.gov (United States)

    von Davier, Matthias; Sinharay, Sandip

    2009-01-01

    This paper presents an application of a stochastic approximation EM-algorithm using a Metropolis-Hastings sampler to estimate the parameters of an item response latent regression model. Latent regression models are extensions of item response theory (IRT) to a 2-level latent variable model in which covariates serve as predictors of the…

  14. A computational approach to compare regression modelling strategies in prediction research

    NARCIS (Netherlands)

    Pajouheshnia, R.; Pestman, W.R.; Teerenstra, S.; Groenwold, R.H.

    2016-01-01

    BACKGROUND: It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in

  15. Spatial Double Generalized Beta Regression Models: Extensions and Application to Study Quality of Education in Colombia

    Science.gov (United States)

    Cepeda-Cuervo, Edilberto; Núñez-Antón, Vicente

    2013-01-01

    In this article, a proposed Bayesian extension of the generalized beta spatial regression models is applied to the analysis of the quality of education in Colombia. We briefly revise the beta distribution and describe the joint modeling approach for the mean and dispersion parameters in the spatial regression models' setting. Finally, we motivate…

  16. Data to support "Boosted Regression Tree Models to Explain Watershed Nutrient Concentrations & Biological Condition"

    Data.gov (United States)

    U.S. Environmental Protection Agency — Spreadsheets are included here to support the manuscript "Boosted Regression Tree Models to Explain Watershed Nutrient Concentrations and Biological Condition". This...

  17. Cox's regression model for dynamics of grouped unemployment data

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2003-01-01

    Roč. 10, č. 19 (2003), s. 151-162 ISSN 1212-074X R&D Projects: GA ČR GA402/01/0539 Institutional research plan: CEZ:AV0Z1075907 Keywords : mathematical statistics * survival analysis * Cox's model Subject RIV: BB - Applied Statistics, Operational Research

  18. Inflation, Forecast Intervals and Long Memory Regression Models

    NARCIS (Netherlands)

    C.S. Bos (Charles); Ph.H.B.F. Franses (Philip Hans); M. Ooms (Marius)

    2001-01-01

    We examine recursive out-of-sample forecasting of monthly postwar U.S. core inflation and log price levels. We use the autoregressive fractionally integrated moving average model with explanatory variables (ARFIMAX). Our analysis suggests a significant explanatory power of leading

  19. Inflation, Forecast Intervals and Long Memory Regression Models

    NARCIS (Netherlands)

    Ooms, M.; Bos, C.S.; Franses, P.H.

    2003-01-01

    We examine recursive out-of-sample forecasting of monthly postwar US core inflation and log price levels. We use the autoregressive fractionally integrated moving average model with explanatory variables (ARFIMAX). Our analysis suggests a significant explanatory power of leading indicators

  20. Multiple Linear Regression Model for Estimating the Price of a ...

    African Journals Online (AJOL)

    Ghana Mining Journal ... In the modeling, the Ordinary Least Squares (OLS) normality assumption, which could introduce errors in the statistical analyses, was dealt with by log transformation of the data, ensuring the data is normally ... The resultant MLRM is Ŷi = xi'(X'X)^(-1)X'Y, where X is the sample data matrix.

  1. Parametric modelling of thresholds across scales in wavelet regression

    OpenAIRE

    Anestis Antoniadis; Piotr Fryzlewicz

    2006-01-01

    We propose a parametric wavelet thresholding procedure for estimation in the ‘function plus independent, identically distributed Gaussian noise’ model. To reflect the decreasing sparsity of wavelet coefficients from finer to coarser scales, our thresholds also decrease. They retain the noise-free reconstruction property while being lower than the universal threshold, and are jointly parameterised by a single scalar parameter. We show that our estimator achieves near-optimal risk rates for the...

  2. Shaofu Zhuyu Decoction Regresses Endometriotic Lesions in a Rat Model

    Directory of Open Access Journals (Sweden)

    Guanghui Zhu

    2018-01-01

    The current therapies for endometriosis are restricted by various side effects, and treatment outcomes have been less than satisfactory. Shaofu Zhuyu Decoction (SZD), a classic traditional Chinese medicine (TCM) prescription for dysmenorrhea, has been widely used in clinical practice by TCM doctors to relieve symptoms of endometriosis. The present study aimed to investigate the effects of SZD on a rat model of endometriosis. Forty-eight female Sprague-Dawley rats with regular estrous cycles underwent an autotransplantation operation to establish the endometriosis model. Then 38 rats with successful ectopic implants were randomized into two groups: vehicle- and SZD-treated. The latter were administered SZD through oral gavage for 4 weeks. By the end of the treatment period, the volume of the endometriotic lesions was measured, the histopathological properties of the ectopic endometrium were evaluated, and levels of proliferating cell nuclear antigen (PCNA), CD34, and hypoxia-inducible factor (HIF)-1α in the ectopic endometrium were detected with immunohistochemistry. Furthermore, apoptosis was assessed using the terminal deoxynucleotidyl transferase (TdT) deoxyuridine 5′-triphosphate (dUTP) nick-end labeling (TUNEL) assay. In this study, SZD significantly reduced the size of ectopic lesions in rats with endometriosis, inhibited cell proliferation, increased cell apoptosis, and reduced microvessel density and HIF-1α expression. These results suggest that SZD could be an effective therapy for the treatment and prevention of endometriosis recurrence.

  3. [Application of detecting and taking overdispersion into account in Poisson regression model].

    Science.gov (United States)

    Bouche, G; Lepage, B; Migeot, V; Ingrand, P

    2009-08-01

    Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of the variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact with an illustration based on the data of a study investigating the relationship between use of the Internet to seek health information and the number of primary care consultations. Three methods, overdispersed Poisson, a robust estimator, and negative binomial regression, were performed to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested overdispersion in the Poisson regression model using the ratio of the sum of squared Pearson residuals over the number of degrees of freedom (χ²/df). We then fitted the three models and compared parameter estimation to the estimates given by the Poisson regression model. The variance of the number of primary care consultations (Var[Y] = 21.03) was greater than the mean (E[Y] = 5.93) and the χ²/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. Interpretation of estimates from two variables (using the Internet to seek health information and single-parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (robust estimator) and 0.45 and 0.13 (negative binomial), respectively. Different methods exist to solve the problem of underestimating the variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems particularly accurate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages.
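
    A sketch of the diagnostic and one remedy on hypothetical overdispersed counts: fit a Poisson GLM, check the Pearson χ²/df ratio, then refit with a negative binomial model when the ratio clearly exceeds 1.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    n = 400
    X = sm.add_constant(rng.normal(size=(n, 2)))
    mu = np.exp(X @ np.array([1.0, 0.4, -0.3]))
    y = rng.negative_binomial(n=2, p=2.0 / (2.0 + mu))   # counts with extra variance

    poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print("chi2/df:", poisson.pearson_chi2 / poisson.df_resid)  # >> 1: overdispersed

    negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
    print(negbin.summary())                              # wider, more honest SEs
    ```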

  4. Higher-order numerical solutions using cubic splines. [for partial differential equations

    Science.gov (United States)

    Rubin, S. G.; Khosla, P. K.

    1975-01-01

    A cubic spline collocation procedure has recently been developed for the numerical solution of partial differential equations. In the present paper, this spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a non-uniform mesh and overall fourth-order accuracy for a uniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, will be presented for several model problems.

  5. Validation of regression models for nitrate concentrations in the upper groundwater in sandy soils

    NARCIS (Netherlands)

    Sonneveld, M.P.W.; Brus, D.J.; Roelsma, J.

    2010-01-01

    For Dutch sandy regions, linear regression models have been developed that predict nitrate concentrations in the upper groundwater on the basis of residual nitrate contents in the soil in autumn. The objective of our study was to validate these regression models for one particular sandy region

  6. Use of a Regression Model to Study Host-Genomic Determinants of Phage Susceptibility in MRSA

    DEFF Research Database (Denmark)

    Zschach, Henrike; Larsen, Mette Voldby; Hasman, Henrik

    2018-01-01

    strains to 12 (nine monovalent) different therapeutic phage preparations and subsequently employed linear regression models to estimate the influence of individual host gene families on resistance to phages. Specifically, we used a two-step regression model setup with a preselection step based on gene...

  7. Bayesian networks with a logistic regression model for the conditional probabilities

    NARCIS (Netherlands)

    Rijmen, F.P.J.

    2008-01-01

    Logistic regression techniques can be used to restrict the conditional probabilities of a Bayesian network for discrete variables. More specifically, each variable of the network can be modeled through a logistic regression model, in which the parents of the variable define the covariates. When all

  8. Technology diffusion in hospitals : A log odds random effects regression model

    NARCIS (Netherlands)

    Blank, J.L.T.; Valdmanis, V.G.

    2013-01-01

    This study identifies the factors that affect the diffusion of hospital innovations. We apply a log odds random effects regression model on hospital micro data. We introduce the concept of clustering innovations and the application of a log odds random effects regression model to describe the

  9. Technology diffusion in hospitals: A log odds random effects regression model

    NARCIS (Netherlands)

    J.L.T. Blank (Jos); V.G. Valdmanis (Vivian G.)

    2015-01-01

    This study identifies the factors that affect the diffusion of hospital innovations. We apply a log odds random effects regression model on hospital micro data. We introduce the concept of clustering innovations and the application of a log odds random effects regression model to

  10. Clinical Predictors of Regression of Choroidal Melanomas after Brachytherapy: A Growth Curve Model.

    Science.gov (United States)

    Rashid, Mamunur; Heikkonen, Jorma; Singh, Arun D; Kivelä, Tero T

    2018-02-27

    To build multivariate models to assess correctly and efficiently the contribution of tumor characteristics on the rate of regression of choroidal melanomas after brachytherapy in a way that adjusts for confounding and takes into account variation in tumor regression patterns. Modeling of longitudinal observational data. Ultrasound images from 330 of 388 consecutive choroidal melanomas (87%) irradiated from 2000 through 2008 at the Helsinki University Hospital, Helsinki, Finland, a national referral center. Images were obtained with a 10-MHz B-scan during 3 years of follow-up. Change in tumor thickness and cross-sectional area were modeled using a polynomial growth-curve function in a nested mixed linear regression model considering regression pattern and tumor levels. Initial tumor dimensions, tumor-node-metastasis (TNM) stage, shape, ciliary body involvement, pigmentation, isotope, plaque size, detached muscles, and radiation parameters were considered as covariates. Covariates that independently predict tumor regression. Initial tumor thickness, largest basal diameter, ciliary body involvement, TNM stage, tumor shape group, break in Bruch's membrane, having muscles detached, and radiation dose to tumor base predicted faster regression, whether considering all tumors or those that regressed in a pattern compatible with exponential decay. Dark brown pigmentation was associated with slower regression. In multivariate modeling, initial tumor thickness remained the predominant and robust predictor of tumor regression (P future analyses efficiently without matching. Copyright © 2018 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
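
    A sketch of a polynomial growth-curve model with tumor-specific random intercepts and slopes, using statsmodels MixedLM on hypothetical longitudinal follow-up data; the paper's nested structure and clinical covariates are not reproduced.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(10)
    rows = []
    for tumor in range(40):
        level = rng.normal(10.0, 2.0)            # tumor-specific initial thickness
        rate = rng.normal(-0.15, 0.05)           # tumor-specific regression rate
        for month in range(0, 37, 6):            # three years of follow-up
            rows.append({"tumor": tumor, "month": month,
                         "thickness": level + rate * month + rng.normal(scale=0.3)})
    df = pd.DataFrame(rows)

    fit = smf.mixedlm("thickness ~ month + I(month**2)", df,
                      groups="tumor", re_formula="~month").fit()
    print(fit.summary())
    ```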

  11. Positivity Preserving Interpolation Using Rational Bicubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2015-01-01

    This paper discusses positivity-preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, of which 8 are free parameters. The sufficient conditions for positivity are derived on every network of four boundary curves on the rectangular patch. Numerical comparison with existing schemes has also been done in detail. Based on the Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.

  12. Smoothing noisy spectroscopic data with many-knot spline method

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, M.H. [Space Exploration Laboratory, Macau University of Science and Technology, Taipa, Macau (China)], E-mail: peter_zu@163.com; Liu, L.G.; Qi, D.X.; You, Z.; Xu, A.A. [Space Exploration Laboratory, Macau University of Science and Technology, Taipa, Macau (China)

    2008-05-15

    In this paper, we present the development of a many-knot spline method derived to remove the statistical noise in spectroscopic data. This method is an extension of the B-spline method. Compared to the B-spline method, the many-knot spline method is significantly faster.

  13. Optimal Approximation of Biquartic Polynomials by Bicubic Splines

    Directory of Open Access Journals (Sweden)

    Kačala Viliam

    2018-01-01

    The goal of this paper is to resolve the problem of optimal approximation of biquartic polynomials by bicubic splines. Unlike the case of spline curves, for spline surfaces it is insufficient to suppose that the grid is uniform and that the spline derivatives are computed from a biquartic polynomial. We show that the biquartic polynomial coefficients have to satisfy some additional constraints to achieve optimal approximation by bicubic splines.

  14. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Full Text Available Market segmentation is one of the key concepts of modern marketing. Its main goal is to create groups (segments) of customers that have similar characteristics, needs, wishes, and/or similar behavior regarding the purchase of a concrete product or service. Companies can create a specific marketing plan for each of these segments and thereby gain a short- or long-term competitive advantage in the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and CHAID analysis. The logistic regression model was used for variable selection, identifying which of the initial pool of eleven variables are statistically significant for explaining the dependent variable. The selected variables were then included in the CHAID procedure, which generated the predictive market segmentation model. The model results are presented on a concrete empirical example in the following form: summary model results, CHAID tree, gain chart, index chart, and risk and classification tables.

  15. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation.

    Science.gov (United States)

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating "unusual" populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php.
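
    A minimal sketch of the multivariate-adaptive-regression-splines step, assuming the third-party py-earth package (`pyearth`); the covariates, column meanings, and response are hypothetical stand-ins for the demographic modeling described above:

```python
import numpy as np
from pyearth import Earth

rng = np.random.default_rng(2)
n = 500
age = rng.uniform(1, 75, n)                  # years
field = rng.choice([1.5, 3.0], n)            # scanner field strength, Tesla
y = np.log(age + 1) + 0.1 * field + rng.normal(0, 0.1, n)

X = np.column_stack([age, field])
mars = Earth(max_degree=2)                   # allow hinge-function interactions
mars.fit(X, y)
print(mars.summary())                        # selected basis functions
```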

  16. Comparing Methodologies for Developing an Early Warning System: Classification and Regression Tree Model versus Logistic Regression. REL 2015-077

    Science.gov (United States)

    Koon, Sharon; Petscher, Yaacov

    2015-01-01

    The purpose of this report was to explicate the use of logistic regression and classification and regression tree (CART) analysis in the development of early warning systems. It was motivated by state education leaders' interest in maintaining high classification accuracy while simultaneously improving practitioner understanding of the rules by…
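
    A minimal sketch of the comparison on synthetic data: cross-validated accuracy for logistic regression versus a shallow CART tree, plus the tree's human-readable rules (the interpretability argument made above):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
logit = LogisticRegression(max_iter=1000)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # interpretable rules

for name, clf in [("logistic", logit), ("CART", tree)]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: CV accuracy = {acc:.3f}")

tree.fit(X, y)
print(export_text(tree))   # human-readable decision rules
```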

  17. Numerical treatment of Hunter Saxton equation using cubic trigonometric B-spline collocation method

    Science.gov (United States)

    Hashmi, M. S.; Awais, Muhammad; Waheed, Ammarah; Ali, Qutab

    2017-09-01

    In this article, the authors propose a computational model based on the cubic trigonometric B-spline collocation method to solve the Hunter Saxton equation. This nonlinear second-order partial differential equation arises in the modeling of nematic liquid crystals and describes some aspects of orientation waves. The problem is decomposed into a system of linear equations using the cubic trigonometric B-spline collocation method with quasilinearization. To show the efficiency of the proposed method, two numerical examples have been tested for different values of t. The results are described using error tables and graphs and compared with results existing in the literature. It is evident that the results are in good agreement with the analytical solution and better than those of Arbabi, Nazari, and Davishi, Optik 127, 5255-5258 (2016). For the current problem, it is also observed that the cubic trigonometric B-spline gives better results than the cubic B-spline.

  18. Logistic regression models for polymorphic and antagonistic pleiotropic gene action on human aging and longevity

    DEFF Research Database (Denmark)

    Tan, Qihua; Bathum, L; Christiansen, L

    2003-01-01

    In this paper, we apply logistic regression models to measure genetic association with human survival for highly polymorphic and pleiotropic genes. By modelling genotype frequency as a function of age, we introduce a logistic regression model with polytomous responses to handle the polymorphic...... situation. Genotype- and allele-based parameterizations can be used to investigate the modes of gene action and to reduce the number of parameters, so that power is increased while the amount of multiple testing is minimized. A binomial logistic regression model with fractional polynomials is used to capture

  19. Spline Truncated Multivariabel pada Permodelan Nilai Ujian Nasional di Kabupaten Lombok Barat

    Directory of Open Access Journals (Sweden)

    Nurul Fitriyani

    2017-12-01

    Full Text Available A regression model is used to analyze the relationship between a dependent variable and independent variables. If the form of the regression curve is not known, the curve can be estimated by a nonparametric regression approach. This study aimed to investigate the relationship between National Examination scores and the factors that influence them. The statistical analysis used was multivariable truncated spline regression. The analysis showed that the best model was obtained using three knots. This model produced a minimum GCV value of 44.46 and a coefficient of determination of 58.627%. The parameter tests showed that all factors used significantly influenced the National Examination score of senior high school students in West Lombok Regency in 2017. The variables were as follows: National Examination score of junior high school; school or madrasah examination score; student's report card value; student's house distance to school; and number of student's siblings.
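
    A minimal from-scratch sketch of truncated (power basis) spline regression with single-knot selection by GCV on synthetic data; the study's actual three-knot multivariable model is analogous but larger:

```python
import numpy as np

def truncated_basis(x, knots, degree=1):
    """Design matrix: polynomial terms plus truncated terms (x - k)_+^degree."""
    cols = [x ** d for d in range(degree + 1)]
    cols += [np.maximum(0, x - k) ** degree for k in knots]
    return np.column_stack(cols)

def gcv(x, y, knots, degree=1):
    X = truncated_basis(x, knots, degree)
    H = X @ np.linalg.pinv(X)          # hat matrix
    resid = y - H @ y
    n, tr = len(y), np.trace(H)
    return (np.sum(resid ** 2) / n) / (1 - tr / n) ** 2

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = np.piecewise(x, [x < 4, x >= 4],
                 [lambda t: 5 + t, lambda t: 13 - t]) + rng.normal(0, 0.5, x.size)

candidates = [(k,) for k in np.linspace(1, 9, 17)]
best = min(candidates, key=lambda kn: gcv(x, y, kn))
print(f"best knot: {best[0]:.2f}, GCV = {gcv(x, y, best):.4f}")
```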

  20. Biomechanical Analysis with Cubic Spline Functions

    Science.gov (United States)

    McLaughlin, Thomas M.; And Others

    1977-01-01

    Results of experimentation suggest that the cubic spline is a convenient and consistent method for providing an accurate description of displacement-time data and for obtaining the corresponding time derivatives. (MJB)
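
    A minimal sketch of the technique with SciPy: fit a cubic spline to displacement-time data and read off velocity and acceleration as its derivatives (here on noise-free synthetic free-fall data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

t = np.linspace(0, 2, 50)                    # time, s
displacement = 0.5 * 9.81 * t ** 2           # free fall, m

cs = CubicSpline(t, displacement)
velocity = cs(t, 1)                          # first derivative
acceleration = cs(t, 2)                      # second derivative
print(f"estimated g ≈ {acceleration.mean():.2f} m/s²")
```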

  1. On convexity and Schoenberg's variation diminishing splines

    International Nuclear Information System (INIS)

    Feng, Yuyu; Kozak, J.

    1992-11-01

    In the paper we characterize a convex function by the monotonicity of a particular variation diminishing spline sequence. The result extends the property known for the Bernstein polynomial sequence. (author). 4 refs

  2. Spline Variational Theory for Composite Bolted Joints

    National Research Council Canada - National Science Library

    Iarve, E

    1997-01-01

    .... Two approaches were implemented. A conventional mesh overlay method in the crack region to satisfy the crack face boundary conditions and a novel spline basis partitioning method were compared...

  3. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

    Science.gov (United States)

    MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

    2005-01-01

    Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of the regression variables: 30-minute forward averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.

  4. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    The Poisson distribution is a discrete distribution for count data with a single parameter that defines both the mean and the variance. Poisson regression assumes that the mean and variance are equal (equidispersion). Nonetheless, some count data violate this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, in turn, to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution. When over-dispersion is present, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a global model for all locations. On the other hand, each location has different geographic, social, cultural, and economic conditions, so that Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimates of the GWBPIGR model are obtained by the Maximum Likelihood Estimation (MLE) method, and hypothesis testing is carried out with the Maximum Likelihood Ratio Test (MLRT) method.

  5. Regression models for analyzing radiological visual grading studies--an empirical comparison.

    Science.gov (United States)

    Saffari, S Ehsan; Löve, Áskell; Fredrikson, Mats; Smedby, Örjan

    2015-10-30

    For optimizing and evaluating image quality in medical imaging, one can use visual grading experiments, where observers rate some aspect of image quality on an ordinal scale. To analyze the grading data, several regression methods are available, and this study aimed at empirically comparing such techniques, in particular when including random effects in the models, which is appropriate for observers and patients. Data were taken from a previous study where 6 observers graded or ranked the image quality of four imaging protocols, differing in radiation dose and image reconstruction method, in 40 patients. The models tested included linear regression, the proportional odds model for ordinal logistic regression, the partial proportional odds model, the stereotype logistic regression model, and rank-order logistic regression (for ranking data). In the first two models, random effects as well as fixed effects could be included; in the remaining three, only fixed effects. In general, the goodness of fit (AIC and McFadden's pseudo-R2) showed small differences between the models with fixed effects only. For the mixed-effects models, higher AIC and lower pseudo-R2 were obtained, which may be related to the different number of parameters in these models. The estimated potential for dose reduction by new image reconstruction methods varied only slightly between models. The authors suggest that the most suitable approach may be to use ordinal logistic regression, which can handle ordinal data and random effects appropriately.
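
    A minimal sketch of the suggested approach, a proportional odds (ordinal logistic) model, using statsmodels' OrderedModel (available in statsmodels 0.12+); the grades and dose covariate are simulated:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(4)
n = 400
dose = rng.choice([0.25, 0.5, 1.0], n)            # relative radiation dose
latent = 2.0 * dose + rng.logistic(size=n)        # latent quality scale
grade = pd.cut(latent, [-np.inf, 0.5, 1.5, 2.5, np.inf], labels=False)  # 0..3

model = OrderedModel(grade, dose.reshape(-1, 1), distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```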

  6. Can We Use Regression Modeling to Quantify Mean Annual Streamflow at a Global-Scale?

    Science.gov (United States)

    Barbarossa, V.; Huijbregts, M. A. J.; Hendriks, J. A.; Beusen, A.; Clavreul, J.; King, H.; Schipper, A.

    2016-12-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF using observations of discharge and catchment characteristics from 1,885 catchments worldwide, ranging from 2 to 10^6 km2 in size. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB [van Beek et al., 2011] by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area, mean annual precipitation and air temperature, average slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error values were lower (0.29 - 0.38 compared to 0.49 - 0.57) and the modified index of agreement was higher (0.80 - 0.83 compared to 0.72 - 0.75). Our regression model can be applied globally at any point of the river network, provided that the input parameters are within the range of values employed in the calibration of the model. The performance is reduced for water scarce regions and further research should focus on improving such an aspect for regression-based global hydrological models.

  7. Uncertainty Quantification using Epi-Splines and Soft Information

    Science.gov (United States)

    2012-06-01

    prediction of the behavior of constructed models of phenomena in physics, biology, chemistry, ecology, engineered systems, politics, etc. ... Results ... spline framework being applied to one of the most common, yet most complex, systems known: the human body. Chapter 5 concludes the thesis by ... complex a system known to man than that of the human body. The number of variables impacting the performance of one human over another in a given

  8. Genome-wide selection by mixed model ridge regression and extensions based on geostatistical models.

    Science.gov (United States)

    Schulz-Streeck, Torben; Piepho, Hans-Peter

    2010-03-31

    The success of genome-wide selection (GS) approaches will depend crucially on the availability of efficient and easy-to-use computational tools. Therefore, approaches that can be implemented using mixed models hold particular promise and deserve detailed study. A particular class of mixed models suitable for GS is given by geostatistical mixed models, when genetic distance is treated analogously to spatial distance in geostatistics. We consider various spatial mixed models for use in GS. The analyses presented for the QTL-MAS 2009 dataset pay particular attention to the modelling of residual errors as well as of polygenetic effects. It is shown that geostatistical models are viable alternatives to ridge regression, one of the common approaches to GS. Correlations between genome-wide estimated breeding values and true breeding values were between 0.879 and 0.889. In the example considered, we did not find a large effect of the residual error variance modelling, largely because error variances were very small. A variance components model reflecting the pedigree of the crosses did not provide an improved fit. We conclude that geostatistical models deserve further study as a tool to GS that is easily implemented in a mixed model package.
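
    A minimal sketch of the ridge regression baseline for genome-wide selection on simulated marker data; the shrinkage parameter stands in for the variance ratio of an equivalent mixed model and would be tuned in practice:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
n, p = 300, 2000                                 # animals, SNP markers
X = rng.binomial(2, 0.3, (n, p)).astype(float)   # genotype codes 0/1/2
effects = rng.normal(0, 0.05, p)
y = X @ effects + rng.normal(0, 1.0, n)          # phenotype

train, test = slice(0, 250), slice(250, n)
model = Ridge(alpha=100.0)                       # shrinkage ~ mixed-model ridge
model.fit(X[train], y[train])
gebv = model.predict(X[test])                    # genomic estimated breeding values
print(f"accuracy (correlation): {np.corrcoef(gebv, y[test])[0, 1]:.3f}")
```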

  9. P-Splines Using Derivative Information

    KAUST Repository

    Calderon, Christopher P.

    2010-01-01

    Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information about complex biomolecular systems. We demonstrate how a collection of Penalized-splines (P-splines) can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale. P-splines assist in "semiparametrically" estimating nonlinear SDEs in situations where a time-dependent external force is applied to a single-molecule system. The P-splines introduced simultaneously use function and derivative scatterplot information to refine curve estimates. We refer to the approach as the PuDI (P-splines using Derivative Information) method. It is shown how generalized least squares ideas fit seamlessly into the PuDI method. Applications demonstrating how utilizing uncertainty information/approximations along with generalized least squares techniques improve PuDI fits are presented. Although the primary application here is in estimating nonlinear SDEs, the PuDI method is applicable to situations where both unbiased function and derivative estimates are available.

  10. Deriving Genomic Breeding Values for Residual Feed Intake from Covariance Functions of Random Regression Models

    DEFF Research Database (Denmark)

    Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne

    2014-01-01

    Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI...
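
    A minimal sketch of the basis construction underlying such random regression models: second-order Legendre polynomials evaluated on age standardized to [-1, 1]; the ages are hypothetical:

```python
import numpy as np
from numpy.polynomial import legendre

age = np.linspace(60, 180, 9)                       # days on test
# Standardize age to [-1, 1], the domain of Legendre polynomials.
a = 2 * (age - age.min()) / (age.max() - age.min()) - 1
Phi = legendre.legvander(a, 2)                      # columns: P0, P1, P2
print(Phi.round(3))
```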

  11. A connection between item/subtest regression and the Rasch model

    NARCIS (Netherlands)

    Engelen, Ronald J.H.; Jannarone, Robert J.

    1989-01-01

    The purpose of this paper is to link empirical Bayes methods with two specific topics in item response theory--item/subtest regression, and testing the goodness of fit of the Rasch model--under the assumptions of local independence and sufficiency. It is shown that item/subtest regression results in

  12. Normalization Regression Estimation With Application to a Nonorthogonal, Nonrecursive Model of School Learning.

    Science.gov (United States)

    Bulcock, J. W.; And Others

    Advantages of normalization regression estimation over ridge regression estimation are demonstrated by reference to Bloom's model of school learning. Theoretical concern centered on the structure of scholastic achievement at grade 10 in Canadian high schools. Data on 886 students were randomly sampled from the Carnegie Human Resources Data Bank.…

  13. Auto-correlograms and auto-regressive models of trace metal distributions in Cochin backwaters

    Digital Repository Service at National Institute of Oceanography (India)

    Jayalakshmy, K.V.; Sankaranarayanan, V.N.

    ,2 and 3 and for Zn at stations 1 and 4. The stability in time for the concentration profiles increases as Fe < Mn < Ni < Cu < Co. The fraction of variability in the variables obtained by the auto-regressive model of order 1 ranges from 20 to 50%. Auto-regressive...

  14. An enhanced splined saddle method

    Science.gov (United States)

    Ghasemi, S. Alireza; Goedecker, Stefan

    2011-07-01

    We present modifications for the method recently developed by Granot and Baer [J. Chem. Phys. 128, 184111 (2008)], 10.1063/1.2916716. These modifications significantly enhance the efficiency and reliability of the method. In addition, we discuss some specific features of this method. These features provide important flexibilities which are crucial for a double-ended saddle point search method in order to be applicable to complex reaction mechanisms. Furthermore, it is discussed under what circumstances this method might fail to find the transition state, and remedies to avoid such situations are provided. We demonstrate the performance of the enhanced splined saddle method on several examples of increasing complexity: isomerization of ammonia, ethane and cyclopropane molecules, tautomerization of cytosine, the ring opening of cyclobutene, the Stone-Wales transformation of the C60 fullerene, and finally rolling a small NaCl cube on a NaCl(001) surface. All of these calculations are based on density functional theory. The efficiency of the method is remarkable in regard to the reduction of the total computational time.

  15. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    Science.gov (United States)

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking relative to caregivers' recognition of risk signs of diarrhea in their infants using a Bayesian log-binomial regression model in the OpenBUGS software. The results showed that caregivers' recognition of their infants' risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the point and interval estimates of the PR and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child age in months, based on model 2) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. All three Bayesian log-binomial regression models converged, with estimated PRs of 1.130 (95% CI: 1.005-1.265), 1.128 (95% CI: 1.001-1.264), and 1.132 (95% CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95% CI: 1.055-1.206) and 1.126 (95% CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95% CI: 1.051-1.200). The point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed only slightly from those of the conventional log-binomial regression model, and the two approaches were highly consistent in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with fewer convergence failures and has advantages in application over the conventional log-binomial regression model.
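
    A minimal frequentist sketch of a log-binomial model for a prevalence ratio, using a GLM with binomial family and log link in statsmodels; the Bayesian version in the study would instead be fit in OpenBUGS or a similar sampler, and the data here are simulated with a true PR of 1.13:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 800
recognizes = rng.binomial(1, 0.5, n)            # caregiver recognizes signs
p = 0.55 * 1.13 ** recognizes                   # log-binomial: log(p) is linear
seeks_care = rng.binomial(1, p)

X = sm.add_constant(recognizes)
model = sm.GLM(seeks_care, X,
               family=sm.families.Binomial(link=sm.families.links.Log()))
result = model.fit()                            # may fail when fitted p nears 1
pr = np.exp(result.params[1])
lo, hi = np.exp(result.conf_int()[1])
print(f"PR = {pr:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```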

  16. Acoustic Emission Signatures of Fatigue Damage in Idealized Bevel Gear Spline for Localized Sensing

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    2017-06-01

    Full Text Available In many rotating machinery applications, such as helicopters, the splines of an externally splined steel shaft that emerges from the gearbox engage with the reverse geometry of an internally splined driven shaft for the delivery of power. The splined section of the shaft is a critical and non-redundant element that is prone to cracking due to complex loading conditions; thus, early detection of flaws is required to prevent catastrophic failures. The acoustic emission (AE) method is a direct way of detecting such active flaws, but its application to detect flaws in a splined shaft in a gearbox is difficult due to the interference of background noise and uncertainty about the effects of the wave propagation path on the received AE signature. Here, to model how AE may detect fault propagation in a hollow cylindrical splined shaft, the splined section is essentially unrolled into a metal plate of the same thickness as the cylinder wall. Spline ridges are cut into this plate, a through-notch is cut perpendicular to the spline to model fatigue crack initiation, and tensile cyclic loading is applied parallel to the spline to propagate the crack. In this paper, a new piezoelectric sensor array is introduced with the purpose of placing the sensors within the gearbox to minimize the wave propagation path. The fatigue crack growth of a notched and flattened gearbox spline component is monitored using the new piezoelectric sensor array and conventional sensors in a laboratory environment, with the purpose of developing source models and testing the new sensor's performance. The AE data are continuously collected together with strain gauges strategically positioned on the structure. A significant amount of continuous emission due to the plastic deformation accompanying the crack growth is observed. The frequency spectra of continuous emissions and burst emissions are compared to understand the differences between plastic deformation and sudden crack jumps. The

  17. A Regression Algorithm for Model Reduction of Large-Scale Multi-Dimensional Problems

    Science.gov (United States)

    Rasekh, Ehsan

    2011-11-01

    Model reduction is an approach for fast and cost-efficient modelling of large-scale systems governed by Ordinary Differential Equations (ODEs). Multi-dimensional model reduction has been suggested for reduction of the linear systems simultaneously with respect to frequency and any other parameter of interest. Multi-dimensional model reduction is also used to reduce the weakly nonlinear systems based on Volterra theory. Multiple dimensions degrade the efficiency of reduction by increasing the size of the projection matrix. In this paper a new methodology is proposed to efficiently build the reduced model based on regression analysis. A numerical example confirms the validity of the proposed regression algorithm for model reduction.

  18. Poisson regression for modeling count and frequency outcomes in trauma research.

    Science.gov (United States)

    Gagnon, David R; Doron-LaMarca, Susan; Bell, Margret; O'Farrell, Timothy J; Taft, Casey T

    2008-10-01

    The authors describe how the Poisson regression method for analyzing count or frequency outcome variables can be applied in trauma studies. The outcome of interest in trauma research may represent a count of the number of incidents of behavior occurring in a given time interval, such as acts of physical aggression or substance abuse. Traditional linear regression approaches assume a normally distributed outcome variable with equal variances over the range of predictor variables, and may not be optimal for modeling count outcomes. An application of Poisson regression is presented using data from a study of intimate partner aggression among male patients in an alcohol treatment program and their female partners. Results of Poisson regression and linear regression models are compared.

  19. Modeling Tetanus Neonatorum case using the regression of negative binomial and zero-inflated negative binomial

    Science.gov (United States)

    Amaliana, Luthfatul; Sa'adah, Umu; Wayan Surya Wardhani, Ni

    2017-12-01

    Tetanus Neonatorum is an infectious disease that can be prevented by immunization. Through 2015, East Java Province had the highest number of Tetanus Neonatorum cases in Indonesia. The Tetanus Neonatorum data exhibit over-dispersion and a large proportion of zero counts. Negative Binomial (NB) regression is an alternative method when over-dispersion occurs in Poisson regression. However, data with both over-dispersion and zero-inflation are more appropriately analyzed using Zero-Inflated Negative Binomial (ZINB) regression. The purposes of this study are: (1) to model Tetanus Neonatorum cases in East Java Province, with a 71.05 percent proportion of zeros, using NB and ZINB regression, and (2) to obtain the best model. The results indicate that ZINB fits better than NB regression, with a smaller AIC.
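
    A rough sketch comparing NB and ZINB fits by AIC with statsmodels on simulated zero-inflated counts; the covariate is hypothetical, and real data may need starting values or more iterations to converge:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(7)
n = 300
coverage = rng.uniform(0.4, 1.0, n)                 # hypothetical covariate
mu = np.exp(1.0 - 2.0 * coverage)
counts = rng.negative_binomial(1, 1 / (1 + mu))     # overdispersed counts
counts[rng.random(n) < 0.5] = 0                     # inject excess zeros

X = sm.add_constant(coverage)
nb = sm.NegativeBinomial(counts, X).fit(disp=False)
zinb = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X, p=2).fit(
    maxiter=200, disp=False)
print(f"NB AIC = {nb.aic:.1f}, ZINB AIC = {zinb.aic:.1f}")
```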

  20. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    Science.gov (United States)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model combining a multiple linear regression model with the fuzzy c-means method. This research involves the relationship between 20 topsoil variates, analyzed prior to planting, and paddy yields at standard fertilizer rates. Data were taken from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean imputation. The data were analyzed using a multiple linear regression model alone and in combination with the fuzzy c-means method. Analyses of normality and multicollinearity indicated that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clustered the paddy yields into two clusters before the multiple linear regression model was applied. The comparison between the two methods indicates that the hybrid of multiple linear regression and fuzzy c-means outperforms multiple linear regression alone, with a lower mean square error.
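
    A minimal from-scratch sketch of the hybrid idea: fuzzy c-means on the yields, then a separate linear regression per cluster. A single hypothetical soil covariate stands in for the 20 variates:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fuzzy_cmeans(x, c=2, m=2.0, iters=100, seed=0):
    """Return cluster centers and an (n x c) membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(x))
    for _ in range(iters):
        w = U ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers) + 1e-12       # distances to centers
        inv = 1.0 / d ** (2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)       # standard FCM update
    return centers, U

rng = np.random.default_rng(8)
soil = rng.uniform(0, 1, 200)                          # hypothetical covariate
paddy = np.where(soil < 0.5, 3 + 2 * soil, 6 - soil) + rng.normal(0, 0.2, 200)

centers, U = fuzzy_cmeans(paddy, c=2)
labels = U.argmax(axis=1)                              # hard assignment for fitting
for k in range(2):
    mask = labels == k
    reg = LinearRegression().fit(soil[mask].reshape(-1, 1), paddy[mask])
    print(f"cluster {k}: slope={reg.coef_[0]:.2f}, intercept={reg.intercept_:.2f}")
```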

  1. SOME STATISTICAL ISSUES RELATED TO MULTIPLE LINEAR REGRESSION MODELING OF BEACH BACTERIA CONCENTRATIONS

    Science.gov (United States)

    As a fast and effective technique, the multiple linear regression (MLR) method has been widely used in modeling and prediction of beach bacteria concentrations. Among previous works on this subject, however, several issues were insufficiently or inconsistently addressed. Those is...

  2. Rank Set Sampling in Improving the Estimates of Simple Regression Model

    Directory of Open Access Journals (Sweden)

    M Iqbal Jeelani

    2015-04-01

    Full Text Available In this paper, rank set sampling (RSS) is introduced with a view to increasing the efficiency of estimates of the simple regression model. The regression model is considered with respect to samples taken from sampling techniques such as simple random sampling (SRS), systematic sampling (SYS), and rank set sampling (RSS). It is found that the R2 and adjusted R2 obtained from the regression model based on the rank set sample are higher than those for the other two sampling schemes. Similarly, the root mean square error, p-values, and coefficient of variation are much lower in the rank-set-based regression model; also, under a validation technique (jackknifing) there is consistency in the measures of R2, adjusted R2, and RMSE for RSS as compared to SRS and SYS. The results are supported by an empirical study involving a real data set of Pinus wallichiana taken from block Langate of district Kupwara.

  3. Reflexion on linear regression trip production modelling method for ensuring good model quality

    Science.gov (United States)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, in which having a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are having a sample capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood and used in trip production modelling. Therefore, investigating trip production modelling practice in Indonesia and trying to formulate a better modelling method for ensuring model quality is necessary. The research results are presented as follows. Statistics provides a method to calculate the span of predicted values at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value, and that sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. This leads to three basic ideas for ensuring good model quality, i.e. reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests needed. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
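
    A minimal sketch of the quality measure proposed above, the confidence interval of the predicted value, via statsmodels; both the confidence band for the regression line and the wider prediction interval for a new observation are shown on synthetic trip data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
households = rng.uniform(50, 500, 60)                 # zone size
trips = 20 + 1.8 * households + rng.normal(0, 40, 60)

X = sm.add_constant(households)
result = sm.OLS(trips, X).fit()
print(f"R2 = {result.rsquared:.3f}")

new = sm.add_constant(np.array([100.0, 300.0]))
pred = result.get_prediction(new).summary_frame(alpha=0.05)
# mean_ci_*: CI for the regression line; obs_ci_*: interval for a new value.
print(pred[["mean", "mean_ci_lower", "mean_ci_upper",
            "obs_ci_lower", "obs_ci_upper"]])
```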

  4. Developing and testing a global-scale regression model to quantify mean annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark A. J.; Hendriks, A. Jan; Beusen, Arthur H. W.; Clavreul, Julie; King, Henry; Schipper, Aafke M.

    2017-01-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF based on a dataset unprecedented in size, using observations of discharge and catchment characteristics from 1885 catchments worldwide, measuring between 2 and 10^6 km2. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area and catchment averaged mean annual precipitation and air temperature, slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error (RMSE) values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement (d) was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally to estimate MAF at any point of the river network, thus providing a feasible alternative to spatially explicit process-based global hydrological models.

  5. An appraisal of convergence failures in the application of logistic regression model in published manuscripts.

    Science.gov (United States)

    Yusuf, O B; Bamgboye, E A; Afolabi, R F; Shodimu, M A

    2014-09-01

    The logistic regression model is widely used in health research for descriptive and predictive purposes. Unfortunately, researchers are sometimes not aware that the underlying principles of the technique have failed when the maximum likelihood algorithm does not converge. Young researchers, particularly postgraduate students, may not know why the separation problem, whether quasi or complete, occurs, how to identify it, and how to fix it. This study was designed to critically evaluate convergence issues in articles employing logistic regression analysis published in the African Journal of Medicine and Medical Sciences between 2004 and 2013. Problems of quasi or complete separation were described and illustrated with the National Demographic and Health Survey dataset. A critical evaluation of articles that employed logistic regression was conducted. A total of 581 articles was reviewed, of which 40 (6.9%) used binary logistic regression. Twenty-four (60.0%) stated the use of a logistic regression model in the methodology, while none of the articles assessed model fit. Only 3 (12.5%) properly described the procedures. Of the 40 that used the logistic regression model, the problem of convergence occurred in 6 (15.0%) of the articles. Logistic regression tended to be poorly reported in studies published between 2004 and 2013. Our findings show that the procedure may not be well understood by researchers, since very few described the process in their reports, and researchers may be totally unaware of the problem of convergence or how to deal with it.
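
    A minimal sketch of complete separation, the convergence failure discussed above; depending on the statsmodels version, fitting either raises PerfectSeparationError or warns and returns extreme coefficients:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tools.sm_exceptions import PerfectSeparationError

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0, 0, 0, 1, 1, 1])        # perfectly separated at x = 3.5

try:
    res = sm.Logit(y, sm.add_constant(x)).fit(disp=False)
    # Newer statsmodels versions may warn instead of raising; the telltale
    # sign is then absurdly large coefficients and standard errors.
    print(res.params)
except PerfectSeparationError as err:
    print("convergence failed:", err)
# Remedies include penalized (Firth) logistic regression, exact logistic
# regression, or collapsing/removing the offending covariate.
```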

  6. Placing Spline Knots in Neural Networks Using Splines as Activation Functions

    Czech Academy of Sciences Publication Activity Database

    Hlaváčková, Kateřina; Verleysen, M.

    1997-01-01

    Roč. 17, 3/4 (1997), s. 159-166 ISSN 0925-2312 R&D Projects: GA ČR GA201/93/0427; GA ČR GA201/96/0971 Keywords: cubic-spline function * approximation error * knots of spline function * feedforward neural network Impact factor: 0.422, year: 1997

  7. Comparison of Efficiency in Generalized Poisson Regression Model and the Standard Poisson Regression Model in analyzing Fertility Behavior among Women, Kashan, 2012

    Directory of Open Access Journals (Sweden)

    Hossein Fallahzadeh

    2017-05-01

    Full Text Available Introduction: Different statistical methods can be used to analyze fertility data. When the response variable is discrete, the Poisson model is applied. If the conditions for the Poisson model do not hold, its generalized model is applied instead. The goal of this study was to compare the efficiency of the generalized Poisson regression model with the standard Poisson regression model in estimating the coefficients of factors affecting the current number of children. Methods: This is a cross-sectional study carried out on a population of married women within the age range of 15-49 years in Kashan, Iran. The cluster sampling method was used for data collection. Clusters consisted of the urban blocks determined by the municipality. A total of 10 clusters, each containing 30 households, was selected according to the health center's framework. The necessary data were then collected through a self-made questionnaire and direct interviews with the women under study. Further, the data analysis was performed using the standard and generalized Poisson regression models in the R software. Results: The average number of children for each woman was 1.45 with a variance of 1.073. A significant relationship was observed between the husband's age, the number of unwanted pregnancies, and the average duration of breastfeeding with the present number of children in both the standard and generalized Poisson regression models (p < 0.05). The mean age of women participating in this study was 33.1 ± 7.57 years (from 25.53 to 40.67), the mean age at marriage was 20.09 ± 3.82 (from 16.27 to 23.91), and the mean age of their husbands was 37.9 ± 8.4 years (from 29.5 to 46.3). In the current study, the majority of women were in the age range of 30-35 years old with a median of 32 years; however, most of the men were in the age range of 35-40 years with a median of 37 years. While 236 of the women did not have unwanted pregnancies, most participants of the present study had one unwanted pregnancy

  8. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When the response depends on a covariate, it may also depend on a function of this covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, people often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
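
    A minimal sketch of isotonic regression fitted by PAVA, as implemented in scikit-learn, on synthetic monotone data:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(10)
x = np.sort(rng.uniform(0, 10, 100))
y = np.log1p(x) + rng.normal(0, 0.15, 100)     # monotone trend plus noise

iso = IsotonicRegression(increasing=True)       # fit uses PAVA internally
y_fit = iso.fit_transform(x, y)
print(f"fit is non-decreasing: {bool(np.all(np.diff(y_fit) >= 0))}")
```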

  9. Lectures on constructive approximation Fourier, spline, and wavelet methods on the real line, the sphere, and the ball

    CERN Document Server

    Michel, Volker

    2013-01-01

    Lectures on Constructive Approximation: Fourier, Spline, and Wavelet Methods on the Real Line, the Sphere, and the Ball focuses on spherical problems as they occur in the geosciences and medical imaging. It comprises the author’s lectures on classical approximation methods based on orthogonal polynomials and selected modern tools such as splines and wavelets. Methods for approximating functions on the real line are treated first, as they provide the foundations for the methods on the sphere and the ball and are useful for the analysis of time-dependent (spherical) problems. The author then examines the transfer of these spherical methods to problems on the ball, such as the modeling of the Earth’s or the brain’s interior. Specific topics covered include: * the advantages and disadvantages of Fourier, spline, and wavelet methods * theory and numerics of orthogonal polynomials on intervals, spheres, and balls * cubic splines and splines based on reproducing kernels * multiresolution analysis using wavelet...

  10. Generic global regression models for growth prediction of Salmonella in ground pork and pork cuts

    DEFF Research Database (Denmark)

    Buschhardt, Tasja; Hansen, Tina Beck; Bahl, Martin Iain

    2017-01-01

    Introduction and Objectives Models for the prediction of bacterial growth in fresh pork are primarily developed using two-step regression (i.e. primary models followed by secondary models). These models are also generally based on experiments in liquids or ground meat and neglect surface growth....... It has been shown that one-step global regressions can result in more accurate models and that bacterial growth on intact surfaces can substantially differ from growth in liquid culture. Material and Methods We used a global-regression approach to develop predictive models for the growth of Salmonella...... for three pork matrices: on the surface of shoulder (neck) and hind part (ham), and in ground pork. We conducted five experimental trials and inoculated essentially sterile pork pieces with a Salmonella cocktail (n = 192). Inoculated meat was aerobically incubated at 4 °C, 7 °C, 12 °C, and 16 °C for 96 h...

  11. Development of an empirical model of turbine efficiency using the Taylor expansion and regression analysis

    International Nuclear Information System (INIS)

    Fang, Xiande; Xu, Yu

    2011-01-01

    The empirical model of turbine efficiency is necessary for control- and/or diagnosis-oriented simulation and useful for the simulation and analysis of dynamic performances of turbine equipment and systems, such as air cycle refrigeration systems, power plants, turbine engines, and turbochargers. Existing empirical models of turbine efficiency are insufficient because there is no suitable form available for air cycle refrigeration turbines. This work performs a critical review of empirical models (called mean value models in some literature) of turbine efficiency and develops an empirical model in the desired form for air cycle refrigeration, the dominant cooling approach in aircraft environmental control systems. The Taylor series and regression analysis are used to build the model, with the Taylor series being used to expand functions with the polytropic exponent and the regression analysis to finalize the model. The measured data of a turbocharger turbine and two air cycle refrigeration turbines are used for the regression analysis. The proposed model is compact and able to represent the turbine efficiency map. Its predictions agree with the measured data very well, with the corrected coefficient of determination R_c^2 ≥ 0.96 and a mean absolute percentage deviation of 1.19% for the three turbines. Highlights: Performed a critical review of empirical models of turbine efficiency. Developed an empirical model in the desired form for air cycle refrigeration, using the Taylor expansion and regression analysis. Verified the method for developing the empirical model. Verified the model.

  12. Random regression test-day model for the analysis of dairy cattle ...

    African Journals Online (AJOL)

    Genetic evaluation of dairy cattle using test-day models is now common internationally. In South Africa a fixed regression test-day model is used to generate breeding values for dairy animals on a routine basis. The model is, however, often criticized for erroneously assuming a standard lactation curve for cows in similar ...

  13. Predicting dissolved oxygen concentration using kernel regression modeling approaches with nonlinear hydro-chemical data.

    Science.gov (United States)

    Singh, Kunwar P; Gupta, Shikha; Rai, Premanjali

    2014-05-01

    Kernel function-based regression models were constructed and applied to a nonlinear hydro-chemical dataset pertaining to surface water for predicting the dissolved oxygen levels. Initial features were selected using nonlinear approach. Nonlinearity in the data was tested using BDS statistics, which revealed the data with nonlinear structure. Kernel ridge regression, kernel principal component regression, kernel partial least squares regression, and support vector regression models were developed using the Gaussian kernel function and their generalization and predictive abilities were compared in terms of several statistical parameters. Model parameters were optimized using the cross-validation procedure. The proposed kernel regression methods successfully captured the nonlinear features of the original data by transforming it to a high dimensional feature space using the kernel function. Performance of all the kernel-based modeling methods used here were comparable both in terms of predictive and generalization abilities. Values of the performance criteria parameters suggested for the adequacy of the constructed models to fit the nonlinear data and their good predictive capabilities.
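
    A minimal sketch comparing two of the Gaussian-kernel methods named above, kernel ridge regression and support vector regression, by cross-validated R2 on synthetic nonlinear data; the hyperparameters are illustrative and would be tuned in practice:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(11)
X = rng.uniform(0, 1, (300, 3))                  # hydro-chemical covariates
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 300)

models = [("kernel ridge", KernelRidge(kernel="rbf", alpha=0.1, gamma=2.0)),
          ("SVR", SVR(kernel="rbf", C=10.0, gamma=2.0))]
for name, model in models:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: CV R2 = {r2:.3f}")
```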

  14. MULTIPLE LOGISTIC REGRESSION MODEL TO PREDICT RISK FACTORS OF ORAL HEALTH DISEASES

    Directory of Open Access Journals (Sweden)

    Parameshwar V. Pandit

    2012-06-01

    Full Text Available Purpose: To analyze the dependence of oral health diseases, i.e., dental caries and periodontal disease, on a number of risk factors through the application of a logistic regression model. Method: This cross-sectional study involved a systematic random sample of 1760 subjects with permanent dentition aged 18-40 years in Dharwad, Karnataka, India. Dharwad is situated in North Karnataka. The mean age was 34.26±7.28. The risk factors of dental caries and periodontal disease were established by a multiple logistic regression model using the SPSS statistical software. Results: Factors such as frequency of brushing, timing of cleaning teeth, and type of toothpaste are significant persistent predictors of dental caries and periodontal disease. For dental caries, the log likelihood value of the full model is -1013.1364 and Akaike's Information Criterion (AIC) is 1.1752, compared with -1019.8106 and 1.1748, respectively, for the reduced regression model. For periodontal disease, the log likelihood value of the full model is -1085.7876 and the AIC is 1.2577, compared with -1019.8106 and 1.1748, respectively, for the reduced regression model. The area under the Receiver Operating Characteristic (ROC) curve for dental caries is 0.7509 (full model) and 0.7447 (reduced model); the ROC for periodontal disease is 0.6128 (full model) and 0.5821 (reduced model). Conclusions: The frequency of brushing, timing of cleaning teeth, and type of toothpaste are the main significant risk factors of dental caries and periodontal disease. The reduced logistic regression model fits slightly better than the full logistic regression model in identifying these risk factors for both dental caries and periodontal disease.

  15. Structured Additive Regression Models: An R Interface to BayesX

    Directory of Open Access Journals (Sweden)

    Nikolaus Umlauf

    2015-02-01

    Full Text Available Structured additive regression (STAR) models provide a flexible framework for modeling possible nonlinear effects of covariates: they contain the well-established frameworks of generalized linear models and generalized additive models as special cases but also allow a wider class of effects, e.g., for geographical or spatio-temporal data, allowing for the specification of complex and realistic models. BayesX is a standalone software package for fitting a general class of STAR models. Based on a comprehensive open-source regression toolbox written in C++, BayesX uses Bayesian inference for estimating STAR models based on Markov chain Monte Carlo simulation techniques, a mixed model representation of STAR models, or stepwise regression techniques combining penalized least squares estimation with model selection. BayesX not only covers models for responses from univariate exponential families, but also models from less-standard regression situations such as models for multi-categorical responses with either ordered or unordered categories, continuous-time survival data, or continuous-time multi-state models. This paper presents a new fully interactive R interface to BayesX: the R package R2BayesX. With the new package, STAR models can be conveniently specified using R's formula language (with some extended terms), fitted using the BayesX binary, represented in R with objects of suitable classes, and finally printed/summarized/plotted. This makes BayesX much more accessible to users familiar with R and adds extensive graphics capabilities for visualizing fitted STAR models. Furthermore, R2BayesX complements the already impressive capabilities for semiparametric regression in R with a comprehensive toolbox comprising in particular more complex response types and alternative inferential procedures such as simulation-based Bayesian inference.

  16. The limiting behavior of the estimated parameters in a misspecified random field regression model

    DEFF Research Database (Denmark)

    Dahl, Christian Møller; Qin, Yu

    This paper examines the limiting properties of the estimated parameters in the random field regression model recently proposed by Hamilton (Econometrica, 2001). Though the model is parametric, it enjoys the flexibility of the nonparametric approach since it can approximate a large collection...... convenient new uniform convergence results that we propose. This theory may have applications beyond those presented here. Our results indicate that classical statistical inference techniques, in general, work very well for random field regression models in finite samples and that these models successfully

  17. Regression models for interval censored survival data: Application to HIV infection in Danish homosexual men

    DEFF Research Database (Denmark)

    Carstensen, Bendix

    1996-01-01

    This paper shows how to fit excess and relative risk regression models to interval censored survival data, and how to implement the models in standard statistical software. The methods developed are used for the analysis of HIV infection rates in a cohort of Danish homosexual men.

  18. The Relationship between Economic Growth and Money Laundering – a Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Daniel Rece

    2009-09-01

    Full Text Available This study provides an overview of the relationship between economic growth and money laundering, modeled by a least squares function. The report statistically analyzes data collected from the USA, Russia, Romania, and eleven other European countries, producing a linear regression model. The study illustrates that 23.7% of the total variance in the regressand (level of money laundering) is "explained" by the linear regression model. In our opinion, this model will provide critical auxiliary judgment and decision support for anti-money laundering service systems.

  19. Development of Technology Parameter Towards Shipbuilding Productivity Predictor Using Cubic Spline Approach

    Directory of Open Access Journals (Sweden)

    Bagiyo Suwasono

    2011-05-01

    Full Text Available The capability of production processes associated with state-of-the-art technology allows shipbuilding to be customized with modern equipment, which affects the level of productivity and competitiveness. This study proposes a nonparametric regression cubic spline approach with 1 knot, 2 knots, and 3 knots. The application program Tibco Spotfire S+ showed that a cubic spline with 2 knots (4.25 and 4.50) gave the best result, with GCV = 56.21556 and R2 = 94.03%. The estimation results of the cubic spline with 2 knots were: PT. Batamec shipyard = 35.61 MH/CGT, PT. Dok & Perkapalan Surabaya = 27.49 MH/CGT, PT. Karimun Sembawang Shipyard = 27.49 MH/CGT, and PT. PAL Indonesia = 19.89 MH/CGT.

  20. Estimasi Model Seemingly Unrelated Regression (SUR dengan Metode Generalized Least Square (GLS

    Directory of Open Access Journals (Sweden)

    Ade Widyaningsih

    2015-04-01

    Full Text Available Regression analysis is a statistical tool used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the others. A method that can be used to obtain good estimates in regression analysis is the ordinary least squares (OLS) method. The least squares method is used to estimate the parameters of one or more regression equations, but it does not allow for relationships among the errors of different equations. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which parameters are estimated using Generalized Least Squares (GLS). In this study, the author applies the SUR model using the GLS method to world gasoline demand data. The author finds that SUR using GLS is better than OLS because SUR produces smaller errors.
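
    A minimal from-scratch sketch of feasible GLS for a two-equation SUR system on simulated data: OLS per equation, estimate the cross-equation residual covariance, then stacked GLS:

```python
import numpy as np

rng = np.random.default_rng(12)
n = 200
x1 = rng.normal(size=(n, 1)); x2 = rng.normal(size=(n, 1))
# Errors correlated across equations -- the situation SUR exploits.
e = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], n)
y1 = 1.0 + 2.0 * x1[:, 0] + e[:, 0]
y2 = -0.5 + 1.5 * x2[:, 0] + e[:, 1]

X1 = np.column_stack([np.ones(n), x1]); X2 = np.column_stack([np.ones(n), x2])
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]      # step 1: OLS per equation
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
S = R.T @ R / n                                  # step 2: residual covariance

# Step 3: stacked GLS with weight matrix (S^-1 kron I_n).
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
W = np.kron(np.linalg.inv(S), np.eye(n))
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("SUR-GLS estimates:", beta.round(3))       # [a1, b1, a2, b2]
```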

  2. Regression analysis understanding and building business and economic models using Excel

    CERN Document Server

    Wilson, J Holton

    2012-01-01

    The technique of regression analysis is used so often in business and economics today that an understanding of its use is necessary for almost everyone engaged in the field. This book will teach you the essential elements of building and understanding regression models in a business/economic context in an intuitive manner. The authors take a non-theoretical treatment that is accessible even if you have a limited statistical background. It is specifically designed to teach the correct use of regression, while advising you of its limitations and teaching about common pitfalls. This book describes...

  3. A land use regression model for ambient ultrafine particles in Montreal, Canada: A comparison of linear regression and a machine learning approach.

    Science.gov (United States)

    Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne

    2016-04-01

    Existing evidence suggests that ambient ultrafine particles (UFPs) may have adverse health effects. We developed a land use regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development: standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R(2)=0.58 vs. 0.55) or a cross-validation procedure (R(2)=0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
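
    A hedged sketch of such a comparison is shown below, with scikit-learn's KernelRidge standing in for the paper's KRLS implementation; the six features are simulated and only loosely evoke LUR covariates such as population density or NOx emissions.

    ```python
    # Linear regression vs. kernel ridge regression on simulated LUR-style data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(414, 6))       # one row per monitored road segment
    y = X[:, 0] + 2 * np.sin(X[:, 1]) + rng.normal(0, 0.5, 414)

    lin = LinearRegression()
    krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
    print("linear CV R2:", cross_val_score(lin, X, y, cv=5, scoring="r2").mean())
    print("kernel CV R2:", cross_val_score(krr, X, y, cv=5, scoring="r2").mean())
    ```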

  4. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach for fitting, assessing and adjusting a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9% to 94.9%, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.

  5. Modelling fourier regression for time series data- a case study: modelling inflation in foods sector in Indonesia

    Science.gov (United States)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

    Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression imposes strict assumptions on the model, whereas nonparametric regression does not require them. Time series data are observations of a variable recorded over time, so to model a time series with regression the response and predictor variables must first be determined. The response variable in a time series is the value at time t (yt), while the predictors are the significant lags. One developing approach in nonparametric regression modeling is the Fourier series approach; among its advantages is the ability to handle data with a trigonometric (periodic) pattern. Modeling with a Fourier series requires the parameter K, and the number of terms K can be determined by the Generalized Cross Validation method. In the inflation modeling for the transportation, communication and financial services sector, the Fourier series yields an optimal K of 120 parameters with an R-square of 99%, whereas multiple linear regression yields an R-square of 90%.
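
    The core of the approach can be sketched in a few lines of Python: build a Fourier design matrix for a grid of candidate K values and pick the one minimizing GCV. The monthly series below is simulated, not the Indonesian inflation data.

    ```python
    # Fourier-basis regression with K chosen by generalized cross-validation.
    import numpy as np

    def fourier_design(t, K, period):
        cols = [np.ones_like(t)]
        for k in range(1, K + 1):
            cols += [np.cos(2 * np.pi * k * t / period),
                     np.sin(2 * np.pi * k * t / period)]
        return np.column_stack(cols)

    def gcv(X, y):
        H = X @ np.linalg.pinv(X)
        n = len(y)
        return n * np.sum((y - H @ y) ** 2) / (n - np.trace(H)) ** 2

    rng = np.random.default_rng(3)
    t = np.arange(120.0)                                   # ten years, monthly
    y = 5 + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.4, t.size)
    best_K = min(range(1, 6), key=lambda K: gcv(fourier_design(t, K, 12), y))
    print("K selected by GCV:", best_K)
    ```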

  6. Regression models for patient-reported measures having ordered categories recorded on multiple occasions

    Science.gov (United States)

    Preisser, J. S.; Phillips, C.; Perin, J.; Schwartz, T. A.

    2011-01-01

    Objectives The article reviews proportional and partial proportional odds regression for ordered categorical outcomes, such as patient-reported measures, that are frequently used in clinical research in dentistry. Methods The proportional odds regression model for ordinal data is a generalization of ordinary logistic regression for dichotomous responses. When the proportional odds assumption holds for some but not all of the covariates, the lesser known partial proportional odds model is shown to provide a useful extension. Results The ordinal data models are illustrated for the analysis of repeated ordinal outcomes to determine whether the burden associated with sensory alteration following a bilateral sagittal split osteotomy procedure differed for those patients who were given opening exercises only following surgery and those who received sensory retraining exercises in conjunction with standard opening exercises. Conclusions Proportional and partial proportional odds models are broadly applicable to the analysis of cross-sectional and longitudinal ordinal data in dental research. PMID:21070317
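
    For orientation, a proportional odds (cumulative logit) model of this kind can be fitted with statsmodels' OrderedModel; the ordinal outcome and covariates below are simulated placeholders for a patient-reported burden measure and treatment group.

    ```python
    # Proportional odds model for a simulated ordinal outcome.
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(4)
    n = 300
    X = pd.DataFrame({"retraining": rng.integers(0, 2, n),   # treatment arm
                      "age": rng.normal(40, 10, n)})
    latent = 0.8 * X["retraining"] - 0.02 * X["age"] + rng.logistic(size=n)
    y = pd.cut(latent, bins=[-np.inf, -1, 1, np.inf],
               labels=["low", "medium", "high"])             # ordered categories

    res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
    print(res.summary())
    ```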

  7. truncSP: An R Package for Estimation of Semi-Parametric Truncated Linear Regression Models

    Directory of Open Access Journals (Sweden)

    Maria Karlsson

    2014-05-01

    Full Text Available Problems with truncated data occur in many areas, complicating estimation and inference. Regarding linear regression models, the ordinary least squares estimator is inconsistent and biased for these types of data and is therefore unsuitable for use. Alternative estimators, designed for the estimation of truncated regression models, have been developed. This paper presents the R package truncSP. The package contains functions for the estimation of semi-parametric truncated linear regression models using three different estimators: the symmetrically trimmed least squares, quadratic mode, and left truncated estimators, all of which have been shown to have good asymptotic and finite sample properties. The package also provides functions for the analysis of the estimated models. Data from the environmental sciences are used to illustrate the functions in the package.

  8. Observer-Based and Regression Model-Based Detection of Emerging Faults in Coal Mills

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Lin, Bao; Jørgensen, Sten Bay

    2006-01-01

    In order to improve the reliability of power plants it is important to detect faults as fast as possible, which raises the question of the most efficient detection method. Since modeling of large-scale systems is time consuming, it is interesting to compare a model-based method with data-driven ones. In this paper three different fault detection approaches are compared using the example of a coal mill in which a fault emerges. The compared methods are based on: an optimal unknown input observer, and static and dynamic regression model-based detections. The conclusion of the comparison is that the observer-based scheme detects the fault 13 samples earlier than the dynamic regression model-based method, and that the static regression based method is not usable due to the generation of far too many false detections.
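
    The regression-based detection idea can be illustrated generically: fit a model on fault-free data and raise an alarm when residuals leave a tolerance band. The sketch below is a toy stand-in, not the coal-mill model of the paper.

    ```python
    # Residual-based fault detection with a static regression model.
    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.normal(size=(500, 3))                    # e.g. mill input signals
    y = X @ [1.0, -0.5, 0.3] + rng.normal(0, 0.1, 500)
    y[400:] += 0.8                                   # fault emerging at sample 400

    beta = np.linalg.lstsq(X[:300], y[:300], rcond=None)[0]  # healthy data only
    resid = y - X @ beta
    sigma = resid[:300].std()
    alarms = np.flatnonzero(np.abs(resid) > 3 * sigma)
    print("first alarm at sample:", alarms.min())
    ```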

  9. A smoothing algorithm using cubic spline functions

    Science.gov (United States)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
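
    SciPy offers a ready-made least-squares cubic spline with user-chosen junction points, which mirrors the record's central idea (a fit with continuous first and second derivatives); the interactive manipulation of junction values is not reproduced here, and the knot placement below is arbitrary.

    ```python
    # Least-squares cubic spline smoothing of noisy data.
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(6)
    x = np.linspace(0, 10, 200)
    y = np.sin(x) + rng.normal(0, 0.15, x.size)

    knots = np.linspace(1, 9, 7)                    # interior junction points
    spl = LSQUnivariateSpline(x, y, t=knots, k=3)   # cubic least-squares fit
    print("residual sum of squares:", spl.get_residual())
    ```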

  10. Modeling and prediction of Turkey's electricity consumption using Support Vector Regression

    International Nuclear Information System (INIS)

    Kavaklioglu, Kadir

    2011-01-01

    Support Vector Regression (SVR) methodology is used to model and predict Turkey's electricity consumption. Among various SVR formalisms, ε-SVR method was used since the training pattern set was relatively small. Electricity consumption is modeled as a function of socio-economic indicators such as population, Gross National Product, imports and exports. In order to facilitate future predictions of electricity consumption, a separate SVR model was created for each of the input variables using their current and past values; and these models were combined to yield consumption prediction values. A grid search for the model parameters was performed to find the best ε-SVR model for each variable based on Root Mean Square Error. Electricity consumption of Turkey is predicted until 2026 using data from 1975 to 2006. The results show that electricity consumption can be modeled using Support Vector Regression and the models can be used to predict future electricity consumption. (author)
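
    A hedged sketch of ε-SVR with a grid search over the model parameters follows; the four predictors are simulated stand-ins for the socio-economic indicators named in the record, and RMSE plays the role of the selection criterion.

    ```python
    # Epsilon-SVR with parameter grid search, scored by RMSE.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    X = rng.normal(size=(32, 4))    # population, GNP, imports, exports (stand-ins)
    y = X @ [0.5, 1.2, 0.3, 0.2] + rng.normal(0, 0.1, 32)

    pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    grid = {"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1],
            "svr__gamma": ["scale", 0.1]}
    search = GridSearchCV(pipe, grid, cv=4,
                          scoring="neg_root_mean_squared_error")
    search.fit(X, y)
    print(search.best_params_, "RMSE:", -search.best_score_)
    ```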

  11. Multiple regression models for the prediction of the maximum obtainable thermal efficiency of organic Rankine cycles

    DEFF Research Database (Denmark)

    Larsen, Ulrik; Pierobon, Leonardo; Wronski, Jorrit

    2014-01-01

    to power. In this study we propose four linear regression models to predict the maximum obtainable thermal efficiency for simple and recuperated ORCs. A previously derived methodology is able to determine the maximum thermal efficiency among many combinations of fluids and processes, given the boundary conditions of the process. Hundreds of optimised cases with varied design parameters are used as observations in four multiple regression analyses. We analyse the model assumptions, prediction abilities and extrapolations, and compare the results with recent studies in the literature. The models...

  12. Bayes Wavelet Regression Approach to Solve Problems in Multivariable Calibration Modeling

    Directory of Open Access Journals (Sweden)

    Setiawan Setiawan

    2010-05-01

    Full Text Available In multiple regression modeling, serious problems arise if the independent variables are correlated with each other (the problem of ill-conditioning) or if the number of observations is much smaller than the number of independent variables (the problem of singularity). Bayes Regression (BR) is an approach that can be used to solve the ill-conditioning problem, but computing constraints are encountered, so pre-processing in the form of a dimensional reduction of the independent variables is necessary. Results from empirical studies and the literature show that the discrete wavelet transform (WT) gives better regression estimates than other preprocessing methods. This study combines BR with WT as the pre-processing method to address ill-conditioning and singularity. One application of calibration in the field of chemistry is modeling the relationship between the concentration of an active substance as measured by High Performance Liquid Chromatography (HPLC) and the Fourier Transform Infrared (FTIR) absorbance spectrum, where the spectrum pattern is expected to predict the concentration of the active substance. The exploration of Continuum Regression Wavelet Transform (CR-WT), Partial Least Squares Regression Wavelet Transform (PLS-WT), and Bayes Regression Wavelet Transform (BR-WT) shows that BR-WT performs well: it is superior to PLS-WT and roughly as good as CR-WT.

  13. Improved model of the retardance in citric acid coated ferrofluids using stepwise regression

    Science.gov (United States)

    Lin, J. F.; Qiu, X. R.

    2017-06-01

    Citric acid (CA) coated Fe3O4 ferrofluids (FFs) have been developed for biomedical applications. The magneto-optical retardance of CA coated FFs was measured by a Stokes polarimeter. Optimization and multiple regression of the retardance were previously carried out with the Taguchi method and Microsoft Excel, and the F value of that regression model was large enough; however, the Excel procedure was not systematic. Here we instead adopt stepwise regression to model the retardance of CA coated FFs. The model developed with stepwise regression in MATLAB has high predictive ability, with an F value of 2.55897e+7 and a correlation coefficient of one; the average absolute error of predicted versus measured retardances was just 0.0044%. Using the genetic algorithm (GA) in MATLAB, the optimal parameter combination was determined as [4.709 0.12 39.998 70.006], corresponding to the pH of the suspension, the molar ratio of CA to Fe3O4, the CA volume, and the coating temperature. The maximum retardance was found to be 31.712°, close to that obtained by the evolutionary solver in Excel, with a relative error of -0.013%. Overall, stepwise regression successfully modeled the retardance of CA coated FFs, and the global maximum retardance was determined by means of the GA.

  14. Bias and Uncertainty in Regression-Calibrated Models of Groundwater Flow in Heterogeneous Media

    DEFF Research Database (Denmark)

    Cooley, R.L.; Christensen, Steen

    2006-01-01

    small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate θ* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear...... are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis....

  15. INTRODUCTION TO A COMBINED MULTIPLE LINEAR REGRESSION AND ARMA MODELING APPROACH FOR BEACH BACTERIA PREDICTION

    Science.gov (United States)

    Due to the complexity of the processes contributing to beach bacteria concentrations, many researchers rely on statistical modeling, among which multiple linear regression (MLR) modeling is most widely used. Despite its ease of use and interpretation, there may be time dependence...

  16. Genomic prediction based on data from three layer lines using non-linear regression models

    NARCIS (Netherlands)

    Huang, H.; Windig, J.J.; Vereijken, A.; Calus, M.P.L.

    2014-01-01

    Background - Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. Methods - In an attempt to alleviate

  17. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for

  18. A national fine spatial scale land-use regression model for ozone

    NARCIS (Netherlands)

    Kerckhoffs, Jules|info:eu-repo/dai/nl/411260502; Wang, Meng|info:eu-repo/dai/nl/345480279; Meliefste, Kees; Malmqvist, Ebba; Fischer, Paul; Janssen, Nicole A H; Beelen, Rob|info:eu-repo/dai/nl/30483100X; Hoek, Gerard|info:eu-repo/dai/nl/069553475

    Uncertainty about health effects of long-term ozone exposure remains. Land use regression (LUR) models have been used successfully for modeling fine scale spatial variation of primary pollutants but very limited for ozone. Our objective was to assess the feasibility of developing a national LUR

  19. A LATENT CLASS POISSON REGRESSION-MODEL FOR HETEROGENEOUS COUNT DATA

    NARCIS (Netherlands)

    WEDEL, M; DESARBO, WS; BULT; RAMASWAMY

    1993-01-01

    In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing

  20. Determining factors influencing survival of breast cancer by fuzzy logistic regression model.

    Science.gov (United States)

    Nikbakht, Roya; Bahrampour, Abbas

    2017-01-01

    A fuzzy logistic regression model can be used to determine influential factors of a disease. This study explores the important predictive factors for survival of breast cancer patients. We used breast cancer data collected by the cancer registry of Kerman University of Medical Sciences during the period 2000-2007. Variables such as morphology, grade, age, and treatments (surgery, radiotherapy, and chemotherapy) were entered into the fuzzy logistic regression model. Model performance was assessed in terms of the mean degree of membership (MDM). The results showed that almost 41% of patients were in the neoplasm and malignant group and more than two-thirds of them were still alive after 5 years of follow-up. Based on the fuzzy logistic model, the most important factors influencing survival were chemotherapy, morphology, and radiotherapy, respectively. Furthermore, the MDM criterion shows that the fuzzy logistic regression has a good fit to the data (MDM = 0.86). The fuzzy logistic regression model showed that chemotherapy is more important than radiotherapy for survival of patients with breast cancer; another ability of this model is calculating the possibilistic odds of survival in cancer patients. The results of this study can be applied in clinical research, and since few studies have applied fuzzy logistic models, we recommend using this model in various research areas.

  1. Fitting multistate transition models with autoregressive logistic regression : Supervised exercise in intermittent claudication

    NARCIS (Netherlands)

    de Vries, S O; Fidler, Vaclav; Kuipers, Wietze D; Hunink, Maria G M

    1998-01-01

    The purpose of this study was to develop a model that predicts the outcome of supervised exercise for intermittent claudication. The authors present an example of the use of autoregressive logistic regression for modeling observed longitudinal data. Data were collected from 329 participants in a

  2. Logistic regression models of factors influencing the location of bioenergy and biofuels plants

    Science.gov (United States)

    T.M. Young; R.L. Zaretzki; J.H. Perdue; F.M. Guess; X. Liu

    2011-01-01

    Logistic regression models were developed to identify significant factors that influence the location of existing wood-using bioenergy/biofuels plants and traditional wood-using facilities. Logistic models provided quantitative insight for variables influencing the location of woody biomass-using facilities. Availability of "thinnings to a basal area of 31.7m2/ha...

  3. Profile-driven regression for modeling and runtime optimization of mobile networks

    DEFF Research Database (Denmark)

    McClary, Dan; Syrotiuk, Violet; Kulahci, Murat

    2010-01-01

    of throughput in a mobile ad hoc network, a self-organizing collection of mobile wireless nodes without any fixed infrastructure. The intermediate models generated in profile-driven regression are used to fit an overall model of throughput, and are also used to optimize controllable factors at runtime. Unlike...

  4. The use of logistic regression in modelling the distributions of bird ...

    African Journals Online (AJOL)

    The method of logistic regression was used to model the observed geographical distribution patterns of bird species in Swaziland in relation to a set of environmental variables. Reporting rates derived from bird atlas data are used as an index of population densities. This is justified in part by the success of the modelling ...

  6. Random regression models in the evaluation of the growth curve of Simbrasil beef cattle

    NARCIS (Netherlands)

    Mota, M.; Marques, F.A.; Lopes, P.S.; Hidalgo, A.M.

    2013-01-01

    Random regression models were used to estimate the types and orders of random effects of (co)variance functions in the description of the growth trajectory of the Simbrasil cattle breed. Records for 7049 animals totaling 18,677 individual weighings were submitted to 15 models from the third to the

  7. Regression models for interval censored survival data: Application to HIV infection in Danish homosexual men

    DEFF Research Database (Denmark)

    Carstensen, Bendix

    1996-01-01

    This paper shows how to fit excess and relative risk regression models to interval censored survival data, and how to implement the models in standard statistical software. The methods developed are used for the analysis of HIV infection rates in a cohort of Danish homosexual men....

  8. Longitudinal beta regression models for analyzing health-related quality of life scores over time

    Directory of Open Access Journals (Sweden)

    Hunger Matthias

    2012-09-01

    Full Text Available Background: Health-related quality of life (HRQL) has become an increasingly important outcome parameter in clinical trials and epidemiological research. HRQL scores are typically bounded at both ends of the scale and often highly skewed. Several regression techniques have been proposed to model such data in cross-sectional studies; however, methods applicable in longitudinal research are less well researched. This study examined the use of beta regression models for analyzing longitudinal HRQL data using two empirical examples with distributional features typically encountered in practice. Methods: We used SF-6D utility data from a German older age cohort study and stroke-specific HRQL data from a randomized controlled trial. We described the conceptual differences between mixed and marginal beta regression models and compared both models to the commonly used linear mixed model in terms of overall fit and predictive accuracy. Results: At any measurement time, the beta distribution fitted the SF-6D utility data and stroke-specific HRQL data better than the normal distribution. The mixed beta model showed better likelihood-based fit statistics than the linear mixed model and respected the boundedness of the outcome variable; however, it tended to underestimate the true mean at the upper part of the distribution. Adjusted group means from the marginal beta model and the linear mixed model were nearly identical, but differences could be observed with respect to standard errors. Conclusions: Understanding the conceptual differences between mixed and marginal beta regression models is important for their proper use in the analysis of longitudinal HRQL data. Beta regression fits the typical distribution of HRQL data better than linear mixed models; however, if the focus is on estimating group mean scores rather than making individual predictions, the two methods might not differ substantially.
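
    As a cross-sectional illustration only (the mixed and marginal longitudinal variants discussed above require dedicated tooling), recent statsmodels versions ship a BetaModel that fits a beta regression for bounded outcomes; the utility-like data below are simulated.

    ```python
    # Beta regression for a bounded (0, 1) outcome such as an HRQL utility.
    import numpy as np
    from statsmodels.othermod.betareg import BetaModel

    rng = np.random.default_rng(13)
    n = 250
    age = rng.normal(70, 8, n)
    mu = 1 / (1 + np.exp(-(2.0 - 0.02 * age)))     # mean utility on (0, 1)
    phi = 20.0                                     # precision parameter
    u = rng.beta(mu * phi, (1 - mu) * phi)
    u = np.clip(u, 1e-3, 1 - 1e-3)                 # keep strictly inside (0, 1)

    X = np.column_stack([np.ones(n), age])
    res = BetaModel(u, X).fit()
    print(res.params)                              # mean and precision parts
    ```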

  9. Deep ensemble learning of sparse regression models for brain disease diagnosis.

    Science.gov (United States)

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2017-04-01

    Recent studies on brain imaging analysis have highlighted the core role of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Among the various machine-learning techniques, sparse regression models have proved effective in handling high-dimensional data with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have achieved great success, outperforming state-of-the-art performance in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each with a different value of the regularization control parameter. Our multiple sparse regression models thus potentially select different feature subsets from the original feature set, and thereby have different powers to predict the response values, i.e., the clinical label and clinical scores in our work. Regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which we call the 'Deep Ensemble Sparse Regression Network.' To the best of our knowledge, this is the first work that combines sparse regression models with a deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared them with previous studies on the ADNI cohort in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
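
    A loose sketch of the ensemble idea, with the paper's deep CNN replaced by a plain logistic regression for brevity: several Lasso models with different regularization strengths produce target-level predictions that a downstream learner combines.

    ```python
    # Ensemble of sparse regressions feeding a second-stage classifier.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import Lasso, LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=300, n_features=200,
                               n_informative=10, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    # Each alpha selects a (potentially) different feature subset.
    models = [Lasso(alpha=a).fit(Xtr, ytr) for a in (0.005, 0.01, 0.05, 0.1)]
    reps_tr = np.column_stack([m.predict(Xtr) for m in models])
    reps_te = np.column_stack([m.predict(Xte) for m in models])

    clf = LogisticRegression().fit(reps_tr, ytr)
    print("ensemble accuracy:", clf.score(reps_te, yte))
    ```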

  10. Scripted Bodies and Spline Driven Animation

    DEFF Research Database (Denmark)

    Erleben, Kenny; Henriksen, Knud

    2002-01-01

    In this paper we will take a close look at the details and technicalities in applying spline driven animation to scripted bodies in the context of dynamic simulation. The main contributions presented in this paper are methods for computing velocities and accelerations in the time domain of the spline...

  11. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    Science.gov (United States)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multiple stepwise regression is proposed for the General Expression of the Nonlinear Autoregressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to measure the improvement contributed by both the newly introduced and the previously included variables, and these determine which model variables to retain or eliminate. The optimal model is then obtained through measurement of data-fitting quality or significance tests. Simulation and classic time-series data experiments show that the proposed method is simple, reliable and applicable to practical engineering.
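
    The greedy core of such a stepwise scheme can be sketched as follows: candidate lag terms (linear and product) are added while they improve an information criterion. The GNAR-specific term set and test statistics are simplified here to a small illustrative pool scored by AIC.

    ```python
    # Forward stepwise term selection for a toy nonlinear AR model.
    import numpy as np

    rng = np.random.default_rng(12)
    N = 600
    y = np.zeros(N)
    for t in range(2, N):                  # true model: lag 1 plus a product term
        y[t] = 0.5 * y[t-1] - 0.3 * y[t-1] * y[t-2] + rng.normal(0, 0.1)

    T = np.arange(3, N)
    target = y[T]
    cands = {"y1": y[T-1], "y2": y[T-2], "y3": y[T-3],
             "y1^2": y[T-1]**2, "y1*y2": y[T-1]*y[T-2]}

    def aic(cols):
        X = np.column_stack([np.ones_like(target)] + cols)
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        rss = np.sum((target - X @ beta) ** 2)
        return len(target) * np.log(rss / len(target)) + 2 * X.shape[1]

    selected, pool, best = [], dict(cands), aic([])
    while pool:
        name, score = min(((nm, aic([cands[s] for s in selected] + [col]))
                           for nm, col in pool.items()), key=lambda p: p[1])
        if score >= best:                  # stop when no candidate improves AIC
            break
        selected.append(name); pool.pop(name); best = score
    print("selected terms:", selected)
    ```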

  12. FlexMix: A General Framework for Finite Mixture Models and Latent Class Regression in R

    Directory of Open Access Journals (Sweden)

    Friedrich Leisch

    2004-10-01

    Full Text Available FlexMix implements a general framework for fitting discrete mixtures of regression models in the R statistical computing environment: three variants of the EM algorithm can be used for parameter estimation, regressors and responses may be multivariate with arbitrary dimension, data may be grouped, e.g., to account for multiple observations per individual, the usual formula interface of the S language is used for convenient model specification, and a modular concept of driver functions allows interfacing with many different types of regression models. Existing drivers implement mixtures of standard linear models, generalized linear models and model-based clustering. FlexMix provides the E-step and all data handling, while the M-step can be supplied by the user to easily define new models.
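
    FlexMix itself is an R package; as a language-agnostic illustration of what its EM machinery does, the numpy sketch below fits a two-component mixture of linear regressions by EM. It shows the estimation idea only, not the package's implementation.

    ```python
    # EM for a two-component mixture of linear regressions.
    import numpy as np

    rng = np.random.default_rng(11)
    n = 400
    x = rng.uniform(-2, 2, n)
    z = rng.integers(0, 2, n)                            # latent class
    y = np.where(z == 0, 1 + 2 * x, -1 - x) + rng.normal(0, 0.3, n)
    X = np.column_stack([np.ones(n), x])

    pi, betas, sig = np.array([0.5, 0.5]), rng.normal(size=(2, 2)), np.ones(2)
    for _ in range(100):
        # E-step: responsibilities from normal densities (constant cancels).
        dens = np.stack([pi[k] / sig[k] *
                         np.exp(-(y - X @ betas[k])**2 / (2 * sig[k]**2))
                         for k in range(2)])
        r = dens / dens.sum(axis=0)
        # M-step: weighted least squares per component.
        for k in range(2):
            W = r[k][:, None]
            betas[k] = np.linalg.solve((X * W).T @ X, (X * W).T @ y)
            sig[k] = max(np.sqrt(np.average((y - X @ betas[k])**2,
                                            weights=r[k])), 1e-3)
        pi = r.mean(axis=1)
    print(pi, betas)
    ```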

  13. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.

    Science.gov (United States)

    Madarang, Krish J; Kang, Joo-Hyon

    2014-06-01

    Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. In order to quantify and manage the impacts of stormwater runoff on the environment, predictive and mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables, such as pollutant loads and concentrations; however, whether ADD is an important variable in predicting stormwater discharge characteristics has been a controversial issue across studies. In this study, we examined the accuracy of general linear regression models in predicting discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was simulated for 55 storm events, and the results for total suspended solid (TSS) discharge loads and event mean concentrations (EMC) were extracted. From these data, linear regression models were developed. R(2) and p-values of the regression on ADD for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMCs in the multiple regression models. Regression may not capture the true effect of site-specific characteristics, due to uncertainty in the data. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.

  14. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%, respectively; for the propensity score model, it was 8.74% and >100%. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods in small events per coefficient settings, bias and coverage still deviated from nominal.

  15. Schwarz and multilevel methods for quadratic spline collocation

    Energy Technology Data Exchange (ETDEWEB)

    Christara, C.C. [Univ. of Toronto, Ontario (Canada); Smith, B. [Univ. of California, Los Angeles, CA (United States)

    1994-12-31

    Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, optimal order of convergence spline collocation methods have been developed for certain degree splines. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems, than the analysis requires, and are very competitive to finite element methods, with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.

  16. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling or whether the measurement error variance is constant or not, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.

  17. Construction of risk prediction model of type 2 diabetes mellitus based on logistic regression

    Directory of Open Access Journals (Sweden)

    Li Jian

    2017-01-01

    Full Text Available Objective: to construct a multi-factor prediction model for the individual risk of T2DM, and to explore new ideas for early warning, prevention and personalized health services for T2DM. Methods: logistic regression techniques were used to screen the risk factors for T2DM and construct the risk prediction model. Results: the male risk prediction model was logit(P) = BMI × 0.735 + vegetables × (−0.671) + age × 0.838 + diastolic pressure × 0.296 + physical activity × (−2.287) + sleep × (−0.009) + smoking × 0.214; the female risk prediction model was logit(P) = BMI × 1.979 + vegetables × (−0.292) + age × 1.355 + diastolic pressure × 0.522 + physical activity × (−2.287) + sleep × (−0.010). The area under the ROC curve for males was 0.83 (sensitivity 0.72, specificity 0.86); for females it was 0.84 (sensitivity 0.75, specificity 0.90). Conclusion: the model data come from a nested case-control study; the risk prediction model was established using mature logistic regression techniques and shows high predictive sensitivity, specificity and stability.
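
    The fitted equations above translate directly into a risk calculator; note that the record does not state how the covariates were coded (raw units vs. categorical scores), so the inputs in this sketch are purely illustrative.

    ```python
    # Predicted T2DM risk from the record's male logit equation.
    import math

    MALE_COEF = {"BMI": 0.735, "vegetables": -0.671, "age": 0.838,
                 "diastolic": 0.296, "activity": -2.287, "sleep": -0.009,
                 "smoking": 0.214}   # transcribed from the record

    def t2dm_risk(values, coef=MALE_COEF):
        """Logistic transform of the linear predictor logit(P) = sum(b_i * x_i)."""
        z = sum(coef[name] * x for name, x in values.items())
        return 1.0 / (1.0 + math.exp(-z))

    # Hypothetical, category-coded inputs (coding assumed, not from the record):
    print(t2dm_risk({"BMI": 1, "vegetables": 1, "age": 2, "diastolic": 1,
                     "activity": 1, "sleep": 1, "smoking": 0}))
    ```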

  18. Risk stratification for prognosis in intracerebral hemorrhage: A decision tree model and logistic regression

    Directory of Open Access Journals (Sweden)

    Gang WU

    2016-01-01

    Full Text Available Objective: To analyze the risk factors for prognosis in intracerebral hemorrhage using a decision tree (classification and regression tree, CART) model and a logistic regression model. Methods: A CART model and a logistic regression model were established based on the risk factors for prognosis of patients with cerebral hemorrhage, and the results of the two methods were compared. Results: Logistic regression analyses showed that hematoma volume (OR = 0.953), initial Glasgow Coma Scale (GCS) score (OR = 1.210), pulmonary infection (OR = 0.295), and basal ganglia hemorrhage (OR = 0.336) were the risk factors for the prognosis of cerebral hemorrhage. The CART analysis showed that hematoma volume and initial GCS score were the main factors affecting the prognosis of cerebral hemorrhage. The two models judged the prognosis of cerebral hemorrhage similarly (Z = 0.402, P = 0.688). Conclusions: The CART model has a value similar to that of the logistic model in judging the prognosis of cerebral hemorrhage; it is characterized by modeling interactions between the risk factors and is more intuitive. DOI: 10.11855/j.issn.0577-7402.2015.12.13

  19. Attribute Selection Impact on Linear and Nonlinear Regression Models for Crop Yield Prediction

    Directory of Open Access Journals (Sweden)

    Alberto Gonzalez-Sanchez

    2014-01-01

    Full Text Available Efficient cropping requires yield estimation for each involved crop, and data-driven models are commonly applied for this purpose. In recent years, some comparisons of data-driven modeling techniques have been made in search of the best model for yield prediction. However, attributes are usually selected based on expert assessment or on dimensionality reduction algorithms. A fairer comparison should include the best subset of features for each regression technique, and an evaluation including several crops is preferred. This paper evaluates the most common data-driven modeling techniques applied to yield prediction, using a complete method to define the best attribute subset for each model. Multiple linear regression, stepwise linear regression, M5′ regression trees, and artificial neural networks (ANN) were ranked. The models were built using real data on eight crops sown in an irrigation module of Mexico. To validate the models, three accuracy metrics were used: the root relative square error (RRSE), relative mean absolute error (RMAE), and correlation factor (R). The results show that ANNs are more consistent in the best attribute subset composition between the learning and training stages, obtaining the lowest average RRSE (86.04%), the lowest average RMAE (8.75%), and the highest average correlation factor (0.63).

  20. Regional regression models of watershed suspended-sediment discharge for the eastern United States

    Science.gov (United States)

    Roman, David C.; Vogel, Richard M.; Schwarz, Gregory E.

    2012-01-01

    Estimates of mean annual watershed sediment discharge, derived from long-term measurements of suspended-sediment concentration and streamflow, often are not available at locations of interest. The goal of this study was to develop multivariate regression models to enable prediction of mean annual suspended-sediment discharge from available basin characteristics useful for most ungaged river locations in the eastern United States. The models are based on long-term mean sediment discharge estimates and explanatory variables obtained from a combined dataset of 1201 US Geological Survey (USGS) stations derived from a SPAtially Referenced Regression on Watershed attributes (SPARROW) study and the Geospatial Attributes of Gages for Evaluating Streamflow (GAGES) database. The resulting regional regression models, summarized for major US water resources regions 1–8, exhibited prediction R2 values ranging from 76.9% to 92.7% and corresponding average model prediction errors ranging from 56.5% to 124.3%. Results from cross-validation experiments suggest that a majority of the models will perform similarly to calibration runs. The 36-parameter regional regression models also outperformed a 16-parameter national SPARROW model of suspended-sediment discharge and indicate that mean annual sediment loads in the eastern United States generally correlate with a combination of basin area, land use patterns, seasonal precipitation, soil composition, hydrologic modification, and to a lesser extent, topography.

  1. Non-parametric genetic prediction of complex traits with latent Dirichlet process regression models.

    Science.gov (United States)

    Zeng, Ping; Zhou, Xiang

    2017-09-06

    Using genotype data to perform accurate genetic prediction of complex traits can facilitate genomic selection in animal and plant breeding programs, and can aid in the development of personalized medicine in humans. Because most complex traits have a polygenic architecture, accurate genetic prediction often requires modeling all genetic variants together via polygenic methods. Here, we develop such a polygenic method, which we refer to as the latent Dirichlet process regression model. Dirichlet process regression is non-parametric in nature, relies on the Dirichlet process to flexibly and adaptively model the effect size distribution, and thus enjoys robust prediction performance across a broad spectrum of genetic architectures. We compare Dirichlet process regression with several commonly used prediction methods with simulations. We further apply Dirichlet process regression to predict gene expressions, to conduct PrediXcan based gene set test, to perform genomic selection of four traits in two species, and to predict eight complex traits in a human cohort. Genetic prediction of complex traits with polygenic architecture has wide application from animal breeding to disease prevention. Here, Zeng and Zhou develop a non-parametric genetic prediction method based on latent Dirichlet Process regression models.

  2. Multiple regression models for energy use in air-conditioned office buildings in different climates

    International Nuclear Information System (INIS)

    Lam, Joseph C.; Wan, Kevin K.W.; Liu Dalong; Tsang, C.L.

    2010-01-01

    An attempt was made to develop multiple regression models for office buildings in the five major climates in China - severe cold, cold, hot summer and cold winter, mild, and hot summer and warm winter. A total of 12 key building design variables were identified through parametric and sensitivity analysis, and considered as inputs in the regression models. The coefficient of determination R2 varies from 0.89 in Harbin to 0.97 in Kunming, indicating that 89-97% of the variations in annual building energy use can be explained by the changes in the 12 parameters. A pseudo-random number generator based on three simple multiplicative congruential generators was employed to generate random designs for evaluation of the regression models. The differences between regression-predicted and DOE-simulated annual building energy use are largely within 10%. It is envisaged that the regression models developed can be used to estimate the likely energy savings/penalty during the initial design stage when different building schemes and design concepts are being considered.

  3. Quantile regression for censored mixed-effects models with applications to HIV studies.

    Science.gov (United States)

    Lachos, Victor H; Chen, Ming-Hui; Abanto-Valle, Carlos A; Azevedo, Caio L N

    HIV RNA viral load measures are often subjected to some upper and lower detection limits depending on the quantification assays. Hence, the responses are either left or right censored. Linear/nonlinear mixed-effects models, with slight modifications to accommodate censoring, are routinely used to analyze this type of data. Usually, the inference procedures are based on normality (or elliptical distribution) assumptions for the random terms. However, those analyses might not provide robust inference when the distribution assumptions are questionable. In this paper, we discuss a fully Bayesian quantile regression inference using Markov Chain Monte Carlo (MCMC) methods for longitudinal data models with random effects and censored responses. Compared to the conventional mean regression approach, quantile regression can characterize the entire conditional distribution of the outcome variable, and is more robust to outliers and misspecification of the error distribution. Under the assumption that the error term follows an asymmetric Laplace distribution, we develop a hierarchical Bayesian model and obtain the posterior distribution of unknown parameters at the pth quantile level, with median regression (p = 0.5) as a special case. The proposed procedures are illustrated with two HIV AIDS studies on viral loads that were initially analyzed using the typical normal (censored) mean regression mixed-effects models, as well as a simulation study.
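
    The uncensored special case is easy to demonstrate: statsmodels' QuantReg fits conditional quantiles directly, with the median (p = 0.5) as in the record; the Bayesian censored machinery of the paper is not reproduced here.

    ```python
    # Median regression as a robust alternative to the OLS mean fit.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    x = rng.uniform(0, 10, 400)
    y = 1.0 + 0.5 * x + rng.standard_t(3, 400)    # heavy-tailed errors
    X = sm.add_constant(x)

    median_fit = sm.QuantReg(y, X).fit(q=0.5)
    print(median_fit.params)
    ```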

  4. LINEAR REGRESSION MODEL ESTIMATION FOR RIGHT CENSORED DATA

    Directory of Open Access Journals (Sweden)

    Ersin Yılmaz

    2016-05-01

    Full Text Available In this study, we first define right-censored data. Briefly, right censoring means that values above a known threshold are not observed exactly but only as exceeding that threshold, which may be related to the limits of a measuring device. We then estimate a linear regression model whose response variable is obtained from right-censored data. Because of the censoring, Kaplan-Meier weights are used in estimating the model; with these weights the regression estimator is consistent and unbiased. There is also a semi-parametric regression method for censored data, which likewise gives useful results. This study may also be useful for health research, since censored data generally arise in medical applications.
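
    The weighting idea can be sketched as weighted least squares; the weights below are crude placeholders computed from a toy censoring pattern, not a full Kaplan-Meier estimator, so this only illustrates the mechanics.

    ```python
    # Weighted least squares on right-censored data with placeholder weights.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    n = 150
    x = rng.uniform(size=n)
    y_true = 2 + 3 * x + rng.normal(0, 0.5, n)
    c = rng.uniform(3, 7, n)                 # censoring thresholds
    y = np.minimum(y_true, c)                # right-censored response
    observed = y_true <= c

    # Hypothetical weights: zero for censored points, renormalized elsewhere.
    w = observed / observed.mean()
    fit = sm.WLS(y, sm.add_constant(x), weights=w).fit()
    print(fit.params)
    ```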

  5. Development of multiple linear regression models for predicting the stormwater quality of urban sub-watersheds.

    Science.gov (United States)

    Arora, Amarpreet S; Reddy, Akepati S

    2014-01-01

    Stormwater management at the urban sub-watershed level has been envisioned to include stormwater collection, treatment, and disposal of treated stormwater through groundwater recharging. Sizing, operation and control of stormwater management systems require information on the quantities and characteristics of the stormwater generated. Stormwater characteristics depend upon the dry spell between two successive rainfall events, the intensity of rainfall and watershed characteristics. However, sampling and analysis of stormwater spanning only a few rainfall events provides insufficient information on these characteristics. An attempt has been made in the present study to assess stormwater characteristics through regression modeling. Stormwater from five sub-watersheds of Patiala city was sampled and analyzed. The results obtained were related to the antecedent dry period and to the intensity of the rainfall event through regression modeling. The obtained regression models were used to assess stormwater quality for various antecedent dry periods and rainfall event intensities.

  6. Experimental evaluation of regression model-based walking speed estimation using lower body-mounted IMU.

    Science.gov (United States)

    Zihajehzadeh, Shaghayegh; Park, Edward J

    2016-08-01

    This study provides a concurrent comparison of the accuracy of regression model-based walking speed estimation using lower-body-mounted inertial sensors. The comparison covers different sets of variables, features, mounting locations and regression methods. An experimental evaluation was performed on 15 healthy subjects during free walking trials. Our results show better accuracy for Gaussian process regression than for least squares regression using the Lasso. Among the variables, external acceleration tends to provide improved accuracy. When both time-domain and frequency-domain features are used, waist- and ankle-mounted sensors result in similar accuracies: 4.5% for the waist and 4.9% for the ankle. When using only frequency-domain features, estimation accuracy based on a waist-mounted sensor suffers more than that from the ankle.
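
    A hedged sketch of the comparison follows, with random stand-ins for the IMU-derived features; scikit-learn's Gaussian process and LassoCV regressors mirror the two methods contrasted in the record.

    ```python
    # Gaussian process regression vs. Lasso for speed estimation (toy features).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(10)
    X = rng.normal(size=(200, 5))            # time/frequency-domain features
    y = 1.4 + 0.3 * X[:, 0] + 0.2 * np.tanh(X[:, 1]) + rng.normal(0, 0.05, 200)

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X[:150], y[:150])
    lasso = LassoCV(cv=5).fit(X[:150], y[:150])
    print("GPR   R2:", gpr.score(X[150:], y[150:]))
    print("Lasso R2:", lasso.score(X[150:], y[150:]))
    ```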

  7. CANFIS: A non-linear regression procedure to produce statistical air-quality forecast models

    Energy Technology Data Exchange (ETDEWEB)

    Burrows, W.R.; Montpetit, J. [Environment Canada, Downsview, Ontario (Canada). Meteorological Research Branch; Pudykiewicz, J. [Environment Canada, Dorval, Quebec (Canada)

    1997-12-31

    Statistical models for forecasts of environmental variables can provide a good trade-off between significance and precision in return for substantial savings of computer execution time. Recent non-linear regression techniques give significantly increased accuracy compared to traditional linear regression methods. Two such techniques are Classification and Regression Trees (CART) and the Neuro-Fuzzy Inference System (NFIS). Both can model predictand distributions, including the tails, with much better accuracy than linear regression. Given a learning data set of matched predictands and predictors, CART regression produces a non-linear, tree-based, piecewise-continuous model of the predictand data. Its variance-minimizing procedure optimizes the task of predictor selection, often greatly reducing the initial data dimensionality. NFIS reduces dimensionality by a procedure known as subtractive clustering, but it does not of itself eliminate predictors. Overlapping coverage in predictor space is enhanced by NFIS with a Gaussian membership function for each cluster component. Coefficients for a continuous response model based on the fuzzified cluster centers are obtained by a least-squares estimation procedure. CANFIS is a two-stage data-modeling technique that combines the strength of CART in optimizing the selection of predictors from a large pool of potential predictors with the modeling strength of NFIS. A CANFIS model requires negligible computer time to run. CANFIS models for ground-level O{sub 3}, particulates, and other pollutants will be produced for each of about 100 Canadian sites. The air-quality models will run twice daily, using a small number of predictors isolated from a large pool of upstream and local Lagrangian potential predictors.

  8. Accounting for spatial effects in land use regression for urban air pollution modeling.

    Science.gov (United States)

    Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G

    2015-01-01

    In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty; and statistical methods for resolving these effects--e.g., spatially autoregressive (SAR) and geographically weighted regression (GWR) models--may be difficult to apply simultaneously. We used an alternate approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R(2) values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  9. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment

    Science.gov (United States)

    Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe

    2018-01-01

    In this paper, the Evolutionary Polynomial Regression (EPR) data-modelling strategy is applied to study small-scale, short-term coastal morphodynamics, given its capability to treat a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, in order to balance computational load against the reliability of the three models' estimations. Indeed, even though one might expect prediction to improve as model complexity grows, a slight worsening of the estimates can sometimes be accepted in exchange for the time saved in data organization and computation. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to examine how uncertainty grows as the extrapolation time of the estimation is extended. The overlap between the confidence band of the mean of the known coast position and the prediction band of the estimated position can serve as an index of the weakness of the estimates when the extrapolation time increases too much. The proposed models and tests were applied to a coastal sector located near Torre Colimena in the Apulia region, southern Italy.
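
    As a simple numerical illustration of the extrapolation issue discussed above (with invented shoreline data, not the Torre Colimena series, and plain polynomial fits rather than EPR), compare how linear and polynomial estimates diverge as the forecast lead time grows:

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.arange(0, 20.0)                                  # survey epochs (years)
        pos = 100 - 1.5 * t + 0.05 * t**2 + rng.normal(0, 1.0, t.size)  # coast position (m)

        lin = np.polyfit(t, pos, 1)
        quad = np.polyfit(t, pos, 2)

        for lead in (1, 5, 10):                                 # extrapolation horizons
            tf = t[-1] + lead
            print(f"+{lead:2d} yr  linear: {np.polyval(lin, tf):7.1f} m   "
                  f"quadratic: {np.polyval(quad, tf):7.1f} m")
        # The spread between the extrapolations widens with lead time, echoing
        # the paper's overlap-based reliability index.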

  10. Multiple regression analysis in modelling of carbon dioxide emissions by energy consumption use in Malaysia

    Science.gov (United States)

    Keat, Sim Chong; Chun, Beh Boon; San, Lim Hwee; Jafri, Mohd Zubir Mat

    2015-04-01

    Climate change due to carbon dioxide (CO2) emissions is one of the most complex challenges threatening our planet. The issue is of great international concern and is primarily attributable to the combustion of different fossil fuels. In this paper, a regression model is used to analyze the relationship between CO2 emissions and energy consumption in Malaysia using time-series data for the period 1980-2010. The equations were developed using a regression model based on the eight major sources that contribute to CO2 emissions: non-energy use, Liquefied Petroleum Gas (LPG), diesel, kerosene, refinery gas, Aviation Turbine Fuel (ATF) and Aviation Gasoline (AV Gas), fuel oil, and motor petrol. Part of the data was used to fit the regression model (1980-2000) and part to validate it (2001-2010). Comparison of the model's predictions with the measured data showed a high coefficient of determination (R2 = 0.9544), indicating the model's accuracy and efficiency. These results can support early warning so that the population can comply with air quality standards.
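
    A minimal sketch of the fit/validate split described above, with made-up consumption figures rather than the Malaysian data:

        import numpy as np

        rng = np.random.default_rng(4)
        years = np.arange(1980, 2011)
        fuels = rng.uniform(10, 100, size=(years.size, 8))   # 8 energy sources (mock units)
        co2 = fuels @ rng.uniform(0.5, 3.0, 8) + rng.normal(0, 20, years.size)

        cal, val = years <= 2000, years > 2000               # calibration vs validation periods
        X_cal = np.column_stack([np.ones(cal.sum()), fuels[cal]])
        beta, *_ = np.linalg.lstsq(X_cal, co2[cal], rcond=None)

        X_val = np.column_stack([np.ones(val.sum()), fuels[val]])
        pred = X_val @ beta
        ss_res = ((co2[val] - pred) ** 2).sum()
        ss_tot = ((co2[val] - co2[val].mean()) ** 2).sum()
        print("validation R^2:", round(1 - ss_res / ss_tot, 4))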

  11. Combination of supervised and semi-supervised regression models for improved unbiased estimation

    DEFF Research Database (Denmark)

    Arenas-García, Jeronimo; Moriana-Varo, Carlos; Larsen, Jan

    2010-01-01

    In this paper we investigate the steady-state performance of semisupervised regression models adjusted using a modified RLS-like algorithm, identifying the situations where the new algorithm is expected to outperform standard RLS. By using an adaptive combination of the supervised and semisupervised...
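
    Since the record above (truncated in the source) concerns an RLS-like update, a minimal standard recursive least squares sketch may help fix ideas; the forgetting factor and data are hypothetical, and the paper's supervised/semi-supervised combination is not reproduced here:

        import numpy as np

        rng = np.random.default_rng(5)
        d = 4
        w_true = rng.normal(size=d)                # unknown regression weights
        P = np.eye(d) * 100.0                      # inverse correlation estimate
        w = np.zeros(d)
        lam = 0.99                                 # forgetting factor (assumed)

        for _ in range(500):
            x = rng.normal(size=d)                 # one streaming sample
            y = w_true @ x + 0.05 * rng.normal()
            k = P @ x / (lam + x @ P @ x)          # gain vector
            w = w + k * (y - w @ x)                # update with a-priori error
            P = (P - np.outer(k, x @ P)) / lam

        print("recovered weights:", np.round(w, 3), "true:", np.round(w_true, 3))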

  12. Methods and applications of linear models regression and the analysis of variance

    CERN Document Server

    Hocking, Ronald R

    2013-01-01

    Praise for the Second Edition: "An essential desktop reference book . . . it should definitely be on your bookshelf." -Technometrics. A thoroughly updated book, Methods and Applications of Linear Models: Regression and the Analysis of Variance, Third Edition features innovative approaches to understanding and working with models and the theory of linear regression. The Third Edition provides readers with the necessary theoretical concepts, which are presented using intuitive ideas rather than complicated proofs, to describe the inference that is appropriate for the methods being discussed. The book

  13. Building interpretable predictive models for pediatric hospital readmission using Tree-Lasso logistic regression.

    Science.gov (United States)

    Jovanovic, Milos; Radovanovic, Sandro; Vukicevic, Milan; Van Poucke, Sven; Delibasic, Boris

    2016-09-01

    Quantification and early identification of unplanned readmission risk have the potential to improve the quality of care during hospitalization and after discharge. However, the high dimensionality, sparsity, and class imbalance of electronic health data, and the complexity of risk quantification, challenge the development of accurate predictive models. Predictive models require a certain level of interpretability in order to be applicable in real settings and create actionable insights. This paper aims to develop accurate and interpretable predictive models for readmission in a general pediatric patient population, by integrating a data-driven model (sparse logistic regression) and domain knowledge based on the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) hierarchy of diseases. Additionally, we propose a way to quantify the interpretability of a model and inspect the stability of alternative solutions. The analysis was conducted on >66,000 pediatric hospital discharge records from California, State Inpatient Databases, Healthcare Cost and Utilization Project between 2009 and 2011. We incorporated domain knowledge based on the ICD-9-CM hierarchy in a data-driven, Tree-Lasso-regularized logistic regression model, providing the framework for model interpretation. This approach was compared with traditional Lasso logistic regression, resulting in models that are easier to interpret, relying on fewer high-level diagnoses, with comparable prediction accuracy. The results revealed that the Tree-Lasso model was as competitive in terms of accuracy (measured by area under the receiver operating characteristic curve, AUC) as traditional Lasso logistic regression, but integration with the ICD-9-CM hierarchy of diseases provided models that are more interpretable in terms of high-level diagnoses. Additionally, the interpretations of the models are in accordance with existing medical understanding of pediatric readmission. Best performing models have
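
    For orientation, a sketch of the sparse Lasso logistic regression baseline the paper compares against (scikit-learn provides no Tree-Lasso, so the ICD-9-CM hierarchy penalty is not shown; the diagnosis indicators and labels are synthetic):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(6)
        X = (rng.random((2000, 300)) < 0.05).astype(float)   # sparse diagnosis indicators
        w = np.zeros(300)
        w[:10] = 1.5                                         # few truly predictive codes
        p = 1 / (1 + np.exp(-(X @ w - 1.0)))
        y = (rng.random(2000) < p).astype(int)               # readmission labels

        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
        print("non-zero coefficients:", int(np.sum(clf.coef_ != 0)), "of", X.shape[1])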

  14. Gravity Aided Navigation Precise Algorithm with Gauss Spline Interpolation

    Directory of Open Access Journals (Sweden)

    WEN Chaobin

    2015-01-01

    The gravity compensation error equation must be thoroughly solved before high-precision gravity-aided navigation can be studied. A gravity-aided navigation model construction algorithm is proposed, based on approximating the local grid gravity anomaly field with 2D Gauss spline interpolation. The gravity disturbance vector, the standard gravity value error, and the Eotvos effect are all compensated in this precision model. The experimental results show that positioning accuracy approximately doubles, attitude and velocity accuracy improves by a factor of one to two, and the positional error is kept within 100-200 m.
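
    A hedged sketch of the interpolation step, with a Gaussian radial-basis interpolant standing in for the paper's 2D Gauss spline and a synthetic anomaly grid:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        x = np.linspace(0, 1, 12)
        gx, gy = np.meshgrid(x, x)
        pts = np.column_stack([gx.ravel(), gy.ravel()])              # local grid nodes
        anomaly = np.sin(3 * pts[:, 0]) * np.cos(2 * pts[:, 1])      # gravity anomaly (mock mGal)

        interp = RBFInterpolator(pts, anomaly, kernel="gaussian", epsilon=5.0)
        query = np.array([[0.37, 0.62]])                             # vehicle position
        print("interpolated anomaly:", interp(query))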

  15. Harmonic regression of Landsat time series for modeling attributes from national forest inventory data

    Science.gov (United States)

    Wilson, Barry T.; Knight, Joseph F.; McRoberts, Ronald E.

    2018-03-01

    Imagery from the Landsat Program has been used frequently as a source of auxiliary data for modeling land cover, as well as a variety of attributes associated with tree cover. With ready access to all scenes in the archive since 2008 due to the USGS Landsat Data Policy, new approaches to deriving such auxiliary data from dense Landsat time series are required. Several methods have previously been developed for use with finer temporal resolution imagery (e.g. AVHRR and MODIS), including image compositing and harmonic regression using Fourier series. This manuscript presents a study using Minnesota, USA, during the years 2009-2013 as the study area and timeframe. The study examined the relative predictive power of land cover models, in particular those related to tree cover, using predictor variables based solely on composite imagery versus those using estimated harmonic regression coefficients. The study used two common non-parametric modeling approaches (i.e. k-nearest neighbors and random forests) for fitting classification and regression models of multiple attributes measured on USFS Forest Inventory and Analysis plots using all available Landsat imagery for the study area and timeframe. The estimated Fourier coefficients developed by harmonic regression of tasseled cap transformation time series data were shown to be correlated with land cover, including tree cover. Regression models using estimated Fourier coefficients as predictor variables showed a two- to threefold increase in explained variance for a small set of continuous response variables, relative to comparable models using monthly image composites. Similarly, the overall accuracies of classification models using the estimated Fourier coefficients were approximately 10-20 percentage points higher than the models using the image composites, with corresponding individual class accuracies between 6 and 45 percentage points higher.
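
    A minimal harmonic regression sketch, assuming a first-order Fourier series fit by ordinary least squares to one pixel's annual cycle (the reflectance values and acquisition dates are synthetic):

        import numpy as np

        rng = np.random.default_rng(7)
        doy = np.sort(rng.integers(1, 366, size=40))         # days of year with clear scenes
        t = 2 * np.pi * doy / 365.0
        signal = 0.3 + 0.15 * np.cos(t - 1.2) + 0.02 * rng.normal(size=t.size)

        A = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
        (c0, c1, c2), *_ = np.linalg.lstsq(A, signal, rcond=None)
        amplitude = np.hypot(c1, c2)
        phase = np.arctan2(c2, c1)
        print(f"mean={c0:.3f}  amplitude={amplitude:.3f}  phase={phase:.2f} rad")
        # The coefficients (or amplitude/phase) become per-pixel predictor variables.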

  16. The use of splines to analyze scanning tunneling microscopy data

    NARCIS (Netherlands)

    Wormeester, Herbert; Kip, Gerhardus A.M.; Sasse, A.G.B.M.; van Midden, H.J.P.

    1990-01-01

    Scanning tunneling microscopy (STM) requires a two-dimensional (2D) image display technique for its interpretation. The flexibility and global approximation properties of splines, characteristic of a solid data reduction method as known from cubic spline interpolation, are called for. Splines were
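
    As a small illustration of spline-based data reduction in this spirit (a smoothing spline flattening a synthetic STM line scan; the profile and smoothing parameter are invented):

        import numpy as np
        from scipy.interpolate import splrep, splev

        rng = np.random.default_rng(8)
        x = np.linspace(0, 10, 256)                              # tip position (nm)
        background = 0.5 * x + 0.02 * x**2                       # sample tilt / drift
        corrugation = 0.05 * np.sin(2 * np.pi * x / 0.4)         # atomic rows
        z = background + corrugation + 0.01 * rng.normal(size=x.size)

        tck = splrep(x, z, s=1.0)                                # heavily smoothed spline
        flattened = z - splev(x, tck)                            # subtract the background
        print("residual corrugation amplitude ~", round(float(flattened.std()), 4))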

  17. Multiple single nucleotide polymorphism analysis using penalized regression in nonlinear mixed-effect pharmacokinetic models.

    Science.gov (United States)

    Bertrand, Julie; Balding, David J

    2013-03-01

    Studies on the influence of single nucleotide polymorphisms (SNPs) on drug pharmacokinetics (PK) have usually been limited to the analysis of observed drug concentration or area under the concentration versus time curve. Nonlinear mixed effects models enable analysis of the entire curve, even for sparse data, but until recently, there has been no systematic method to examine the effects of multiple SNPs on the model parameters. The aim of this study was to assess different penalized regression methods for including SNPs in PK analyses. A total of 200 data sets were simulated under both the null and an alternative hypothesis. In each data set for each of the 300 participants, a PK profile at six sampling times was simulated and 1227 genotypes were generated through haplotypes. After modelling the PK profiles using an expectation maximization algorithm, genetic association with individual parameters was investigated using the following approaches: (i) a classical stepwise approach, (ii) ridge regression modified to include a test, (iii) Lasso and (iv) a generalization of Lasso, the HyperLasso. Penalized regression approaches are often much faster than the stepwise approach. There are significantly fewer true positives for ridge regression than for the stepwise procedure and HyperLasso. The higher number of true positives in the stepwise procedure was accompanied by a higher count of false positives (not significant). We find that all approaches except ridge regression show similar power, but penalized regression can be much less computationally demanding. We conclude that penalized regression should be preferred over stepwise procedures for PK analyses with a large panel of genetic covariates.
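
    A toy comparison in the spirit of the study, using Lasso and ridge to recover a few causal covariates among many (the genotypes, effect sizes, and response are invented, and the nonlinear mixed-effects step is not reproduced):

        import numpy as np
        from sklearn.linear_model import LassoCV, RidgeCV

        rng = np.random.default_rng(9)
        n, p = 300, 1000
        X = rng.binomial(2, 0.3, size=(n, p)).astype(float)      # genotype dosages 0/1/2
        beta = np.zeros(p)
        beta[[5, 42, 137]] = 0.8                                 # three causal variants
        y = X @ beta + rng.normal(size=n)                        # e.g. an individual PK parameter

        lasso = LassoCV(cv=5).fit(X, y)
        ridge = RidgeCV().fit(X, y)
        print("Lasso non-zero coefficients:", np.flatnonzero(lasso.coef_))
        print("Ridge three largest |coef|: ", np.argsort(np.abs(ridge.coef_))[-3:])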

  18. Accounting for Zero Inflation of Mussel Parasite Counts Using Discrete Regression Models

    Directory of Open Access Journals (Sweden)

    Emel Çankaya

    2017-06-01

    In many ecological applications, the absence of species is inevitable, due either to detection failures in samples or to uninhabitable conditions, resulting in a high number of zero counts or abundances. The usual practice for modelling such data is regression modelling of log(abundance+1), and it is well known that the resulting model is inadequate for prediction purposes. New discrete models accounting for zero abundances, namely zero-inflated Poisson and negative binomial regression (ZIP and ZINB), Hurdle-Poisson (HP) and Hurdle-Negative Binomial (HNB), amongst others, are widely preferred to the classical regression models. Because mussels are one of the economically most important aquatic products of Turkey, the purpose of this study is to examine the performance of these four models in determining the significant biotic and abiotic factors affecting the occurrence of the Nematopsis legeri parasite, which harms Mediterranean mussels (Mytilus galloprovincialis L.). The data collected from three coastal regions of Sinop city in Turkey showed that, on average, more than 50% of the parasite counts were zero-valued, and model comparisons were based on information criteria. The results showed that the probability of occurrence of this parasite is best formulated by the ZINB or HNB models, and the influential factors of the models were found to correspond with the ecological differences of the regions.
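
    A sketch of fitting one of the four candidate models, a zero-inflated Poisson, with statsmodels on synthetic counts (the covariate and zero-inflation rate are invented; the ZINB and hurdle variants follow the same pattern):

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedPoisson

        rng = np.random.default_rng(10)
        n = 400
        temp = rng.uniform(10, 25, n)                            # abiotic covariate (mock)
        lam = np.exp(-2 + 0.15 * temp)                           # count intensity
        counts = rng.poisson(lam) * (rng.random(n) > 0.5)        # ~50% structural zeros

        X = sm.add_constant(temp)
        zip_fit = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1))).fit(disp=False)
        print(zip_fit.summary().tables[1])
        # Compare candidate models via information criteria (zip_fit.aic),
        # as done in the study.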

  19. Replica analysis of overfitting in regression models for time-to-event data

    Science.gov (United States)

    Coolen, A. C. C.; Barrett, J. E.; Paga, P.; Perez-Vicente, C. J.

    2017-09-01

    Overfitting, which happens when the number of parameters in a model is too large compared to the number of data points available for determining these parameters, is a serious and growing problem in survival analysis. While modern medicine presents us with data of unprecedented dimensionality, these data cannot yet be used effectively for clinical outcome prediction. Standard error measures in maximum likelihood regression, such as p-values and z-scores, are blind to overfitting, and even for Cox's proportional hazards model (the main tool of medical statisticians), one finds in the literature only rules of thumb on the number of samples required to avoid overfitting. In this paper we present a mathematical theory of overfitting in regression models for time-to-event data, which aims to increase our quantitative understanding of the problem and provide practical tools with which to correct regression outcomes for the impact of overfitting. It is based on the replica method, a statistical mechanical technique for the analysis of heterogeneous many-variable systems that has been used successfully for several decades in physics, biology, and computer science, but not yet in medical statistics. We develop the theory initially for arbitrary regression models for time-to-event data, and verify its predictions in detail for the popular Cox model.

  20. Regression models in the determination of the absorbed dose with extrapolation chamber for ophthalmological applicators

    International Nuclear Information System (INIS)

    Alvarez R, J.T.; Morales P, R.

    1992-06-01

    The soft-tissue-equivalent absorbed dose imparted by ophthalmological applicators ({sup 90}Sr/{sup 90}Y, 1850 MBq) is determined using an extrapolation chamber with variable electrode separation. When the slope of the extrapolation curve is estimated with a simple linear regression model, the dose values are underestimated by 17.7% to 20.4% relative to the estimates obtained with a second-degree polynomial regression model; at the same time, an improvement of up to 50% in the standard error is observed for the quadratic model. Finally, the global uncertainty of the dose is presented, taking into account the reproducibility of the experimental arrangement. It can be concluded that, in experimental arrangements where the source is in contact with the extrapolation chamber, the linear regression model should be replaced by the quadratic regression model when determining the slope of the extrapolation curve, for more exact and accurate measurements of the absorbed dose. (Author)
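
    A numerical illustration of the record's point, with invented chamber readings: when the extrapolation curve is slightly curved, a straight-line fit biases the slope at zero electrode separation relative to a quadratic fit:

        import numpy as np

        depth = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])         # electrode separation (mm)
        rng = np.random.default_rng(11)
        current = 10.0 * depth + 0.8 * depth**2 + 0.05 * rng.normal(size=6)

        lin = np.polyfit(depth, current, 1)
        quad = np.polyfit(depth, current, 2)
        print("linear slope at 0:   ", round(lin[0], 3))          # biased by curvature
        print("quadratic slope at 0:", round(quad[1], 3))         # linear term = derivative at 0
        print("relative difference: ", round((lin[0] - quad[1]) / quad[1], 3))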