WorldWideScience

Sample records for polynomial regression models

  1. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power per kilogram of a certain radioactive isotope with the polynomial regression method, the paper first demonstrates the broad usage of polynomial functions and derives their parameters with ordinary least-squares estimation. A significance test for the polynomial regression function is then derived, exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and the significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)
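
    As a rough illustration of the procedure this abstract describes, the sketch below fits a polynomial decay-heat curve by ordinary least squares and tests the overall significance of the regression with an F statistic, using the fact that a polynomial model is linear in its coefficients. The data, degree, and variable names are illustrative assumptions, not the authors' values.

```python
# Sketch (not the authors' code): fit a cubic decay-heat curve by ordinary
# least squares and test the overall significance of the regression with an
# F statistic. The data below are synthetic, for illustration only.
import numpy as np
from scipy import stats

t = np.linspace(0.0, 10.0, 30)                      # time after shutdown (arbitrary units)
power = 5.0 * np.exp(-0.3 * t) + np.random.default_rng(0).normal(0, 0.05, t.size)

degree = 3
X = np.vander(t, degree + 1, increasing=True)       # columns: 1, t, t^2, t^3
beta, *_ = np.linalg.lstsq(X, power, rcond=None)    # OLS estimate of the coefficients
fitted = X @ beta

# Overall F test: regression sum of squares vs. residual sum of squares
n, p = X.shape                                      # p includes the intercept
ss_reg = np.sum((fitted - power.mean()) ** 2)
ss_res = np.sum((power - fitted) ** 2)
F = (ss_reg / (p - 1)) / (ss_res / (n - p))
p_value = stats.f.sf(F, p - 1, n - p)
print(f"coefficients: {beta}\nF = {F:.2f}, p = {p_value:.2e}")
```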

  2. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
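
    A minimal sketch of the fixed-knot regression spline idea, assuming a truncated-power basis and synthetic data; only the fixed-effects part is fitted by OLS here, whereas the paper embeds such splines in a linear mixed model (e.g., in SAS or S-plus).

```python
# Minimal sketch of a fixed-knot regression spline with the truncated power
# basis, which builds continuity and smoothness at the knots into the design
# matrix by construction. Only the fixed-effects part is illustrated, with
# OLS on synthetic data; the mixed-model machinery is omitted.
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 10, 200))                 # e.g. days since infection (assumed)
y = np.sin(t) + 0.1 * t + rng.normal(0, 0.2, t.size) # synthetic longitudinal response

knots = [3.0, 7.0]
cols = [np.ones_like(t), t, t**2]                    # quadratic base polynomial
for k in knots:
    cols.append(np.clip(t - k, 0, None) ** 2)        # truncated quadratic pieces
X = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("spline coefficients:", np.round(beta, 3))
```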

  3. Reduction of the number of parameters needed for a polynomial random regression test-day model

    NARCIS (Netherlands)

    Pool, M.H.; Meuwissen, T.H.E.

    2000-01-01

    Legendre polynomials were used to describe the (co)variance matrix within a random regression test-day model. The goodness of fit depended on the polynomial order of fit, i.e., the number of parameters to be estimated per animal, but is limited by computing capacity. Two aspects: incomplete lactation

  4. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment

    Science.gov (United States)

    Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe

    2018-01-01

    In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability to treat a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to weigh the computational load against the reliability of the estimations of the three models. Even though it is easy to imagine that the more complex the model, the more the prediction improves, a slight worsening of the estimations can sometimes be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and had a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to examine how uncertainty increases as the extrapolation time of the estimation is extended. The overlap between the confidence band of the mean of the known coast position and the prediction band of the estimated position is a useful index of how unreliable the estimations become when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, southern Italy.

  5. Random regression models to estimate genetic parameters for milk production of Guzerat cows using orthogonal Legendre polynomials

    Directory of Open Access Journals (Sweden)

    Maria Gabriela Campolina Diniz Peixoto

    2014-05-01

    The objective of this work was to compare random regression models for the estimation of genetic parameters for Guzerat milk production, using orthogonal Legendre polynomials. Records (20,524) of test-day milk yield (TDMY) from 2,816 first-lactation Guzerat cows were used. TDMY grouped into 10 monthly classes were analyzed for additive genetic effect and for permanent environmental and residual effects (random effects), whereas the contemporary group, calving age (linear and quadratic effects) and mean lactation curve were analyzed as fixed effects. Trajectories for the additive genetic and permanent environmental effects were modeled by means of a covariance function employing orthogonal Legendre polynomials ranging from the second to the fifth order. Residual variances were considered in one, four, six, or ten variance classes. The best model had six residual variance classes. The heritability estimates for the TDMY records varied from 0.19 to 0.32. The random regression model that used a second-order Legendre polynomial for the additive genetic effect and a fifth-order polynomial for the permanent environmental effect was adequate according to the main criteria employed. The model with a second-order Legendre polynomial for the additive genetic effect and a fourth-order polynomial for the permanent environmental effect could also be employed in these analyses.
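
    For orientation, the snippet below shows how the orthogonal Legendre covariates of such a random regression test-day model can be built with numpy; the test days, the standardization to [-1, 1], and the fifth-order choice are assumptions for illustration, and the actual (co)variance estimation would be done in dedicated mixed-model software.

```python
# Illustrative sketch: building the orthogonal Legendre covariates used in a
# random regression test-day model. Days in milk are mapped to [-1, 1] and
# evaluated with numpy's Legendre tools; the REML/Bayesian (co)variance
# estimation itself is not shown here.
import numpy as np
from numpy.polynomial import legendre

dim = np.arange(5, 305, 30, dtype=float)                       # assumed test days across lactation
x = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0    # standardize to [-1, 1]

order = 5                                                       # e.g. a fifth-order fit
Z = legendre.legvander(x, order)                                # columns: P0(x) ... P5(x)
print(Z.shape)                                                  # (number of test days, order + 1)
```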

  6. Transformation of independent variables in polynomial regression ...

    African Journals Online (AJOL)

    Ada

    preferable when possible to work with a simple functional form in transformed variables rather than with a more complicated form in the original variables. In this paper, it is shown that linear transformations applied to independent variables in polynomial regression models affect the t ratio and hence the statistical ...

  7. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    Science.gov (United States)

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the variability of the source water total organic carbon (TOC) concentration can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data, or unimpaired flow scenarios, makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate (e.g., temperature) and land surface (e.g., soil moisture) variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data at three case study locations with surface source waters, including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at the locations with the most anthropogenic influences in their streams. Source water TOC predictive models can provide water treatment utilities with important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
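
    A hedged sketch of local polynomial regression as described above: at each query point a low-order polynomial is fitted by kernel-weighted least squares so that nearby observations dominate. The predictor (temperature), bandwidth, and data are invented for illustration and are not the paper's configuration.

```python
# Sketch of local polynomial regression with a Gaussian kernel; the fitted
# intercept at each query point is the local estimate. Synthetic data only.
import numpy as np

def local_poly_predict(x_train, y_train, x_query, degree=2, bandwidth=1.0):
    preds = []
    for x0 in np.atleast_1d(x_query):
        w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)    # kernel weights
        X = np.vander(x_train - x0, degree + 1, increasing=True)
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)  # weighted least squares
        preds.append(beta[0])                                    # fitted value at x0
    return np.array(preds)

rng = np.random.default_rng(2)
temp = np.sort(rng.uniform(0, 30, 120))                          # e.g. mean temperature (assumed)
toc = 3 + 0.15 * temp + np.sin(temp / 4) + rng.normal(0, 0.3, temp.size)
print(local_poly_predict(temp, toc, [5.0, 15.0, 25.0], bandwidth=3.0))
```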

  8. REGSTEP - stepwise multivariate polynomial regression with singular extensions

    International Nuclear Information System (INIS)

    Davierwalla, D.M.

    1977-09-01

    The program REGSTEP determines a polynomial approximation, in the least squares sense, to tabulated data. The polynomial may be univariate or multivariate. The computational method is that of stepwise regression. A variable is inserted into the regression basis if it is significant with respect to an appropriate F-test at a preselected risk level. In addition, should a variable already in the basis become nonsignificant (again with respect to an appropriate F-test) after the entry of a new variable, it is expelled from the model. Thus only significant variables are retained in the model. Although written expressly to be incorporated into CORCOD, a code for predicting nuclear cross sections for given values of power, temperature, void fractions, boron content, etc., there is nothing to limit the use of REGSTEP to nuclear applications, as the examples demonstrate. A separate version has been incorporated into RSYST for the general user. (Auth.)
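
    The following is a generic re-implementation sketch of the stepwise idea described for REGSTEP (entry and removal of polynomial terms by partial F-tests at a preselected risk level), not the REGSTEP code itself; data and thresholds are illustrative.

```python
# Sketch of stepwise polynomial regression: a candidate term enters when its
# partial F statistic is significant, and a term already in the basis is
# expelled if it becomes nonsignificant. Synthetic data, generic code.
import numpy as np
from scipy import stats

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

def stepwise(candidates, y, alpha=0.05):
    n = y.size
    selected = [0]                                   # always keep the intercept column
    changed = True
    while changed:
        changed = False
        # try to add a significant candidate term
        for j in [c for c in range(candidates.shape[1]) if c not in selected]:
            trial = selected + [j]
            F = ((rss(candidates[:, selected], y) - rss(candidates[:, trial], y))
                 / (rss(candidates[:, trial], y) / (n - len(trial))))
            if stats.f.sf(F, 1, n - len(trial)) < alpha:
                selected, changed = trial, True
                break
        # expel terms that became nonsignificant
        for j in selected[1:]:
            reduced = [c for c in selected if c != j]
            F = ((rss(candidates[:, reduced], y) - rss(candidates[:, selected], y))
                 / (rss(candidates[:, selected], y) / (n - len(selected))))
            if stats.f.sf(F, 1, n - len(selected)) >= alpha:
                selected, changed = reduced, True
                break
    return selected

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 80)
y = 1.0 + 2.0 * x - 3.0 * x**3 + rng.normal(0, 0.1, x.size)
cands = np.vander(x, 6, increasing=True)             # candidate terms: 1, x, ..., x^5
print("retained columns (powers):", stepwise(cands, y))
```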

  9. Modelling the breeding of Aedes albopictus species in an urban area in Pulau Pinang using polynomial regression

    Science.gov (United States)

    Salleh, Nur Hanim Mohd; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Saad, Ahmad Ramli; Sulaiman, Husna Mahirah; Ahmad, Wan Muhamad Amir W.

    2014-07-01

    Polynomial regression is used to model a curvilinear relationship between a response variable and one or more predictor variables. It is a form of least squares linear regression that predicts a single response variable by expanding the predictor variables into an nth-order polynomial. In a curvilinear relationship, each curve has at most one fewer extreme point than the order of the polynomial: a quadratic model has either a single maximum or minimum, whereas a cubic model can have both a relative maximum and a minimum. This study used quadratic modeling techniques to analyze the effects of environmental factors (temperature, relative humidity, and rainfall distribution) on the breeding of Aedes albopictus, a type of Aedes mosquito. Data were collected in an urban area in south-west Penang from September 2010 until January 2011. The results indicated that the breeding of Aedes albopictus in the urban area is influenced by all three environmental characteristics. The number of mosquito eggs is estimated to reach a maximum value at a medium temperature, a medium relative humidity and a high rainfall distribution.

  10. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    Science.gov (United States)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    Over the past decade, computational models such as multivariate linear regression analysis, support vector regression, and artificial neural networks have been proposed for estimating human affective states from direct observations and from facial, vocal, gestural, physiological, and central nervous signals. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly rely on complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce the affective states of thirty subjects, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method obtains correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide indirect evidence that valence and arousal have their origins in the brain's motivational circuits. Thus, the proposed method can serve as a novel, efficient approach to estimating human affective states.
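
    As a small illustration of higher-order multivariable polynomial regression, the sketch below expands several features into polynomial terms (including interactions) and fits a linear model; the feature names, degree, and data are placeholders, not the study's skin-conductance patterns.

```python
# Sketch of higher-order multivariable polynomial regression: polynomial
# expansion of several input features followed by an ordinary linear fit.
# Features and targets are simulated placeholders.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
features = rng.normal(size=(120, 3))                 # e.g. three physiological features (assumed)
valence = (1.5 * features[:, 0]
           - 0.8 * features[:, 1] ** 2
           + rng.normal(0, 0.2, 120))

model = make_pipeline(PolynomialFeatures(degree=3, include_bias=False),
                      LinearRegression())
model.fit(features, valence)
print("in-sample R^2:", round(model.score(features, valence), 3))
```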

  11. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    Science.gov (United States)

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method that uses a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models, with the goal of improving the smoothing-based two-stage pseudo-least-squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters of the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that the new estimator is clearly better than the pseudo-least-squares estimator in estimation accuracy, at a small additional computational cost. An application example on immune cell kinetics and trafficking during influenza infection further illustrates the benefits of the proposed method.
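
    For context, the sketch below shows the smoothing-based two-stage pseudo-least-squares baseline that the paper improves on: smooth the noisy trajectory with a local polynomial, read off the fitted derivative, then estimate the ODE parameter by least squares. The model dx/dt = -a*x, the bandwidth, and the data are assumptions; the paper's equation-constrained refinement is not reproduced here.

```python
# Two-stage pseudo-least-squares sketch for an assumed model dx/dt = -a*x.
# Stage 1: local quadratic smoothing gives fitted values and derivatives.
# Stage 2: least squares for the ODE parameter using those estimates.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 5, 60)
x_obs = 2.0 * np.exp(-0.7 * t) + rng.normal(0, 0.02, t.size)     # noisy solution of dx/dt = -0.7 x

def local_quad_fit(t_all, y_all, t0, h=0.6):
    w = np.exp(-0.5 * ((t_all - t0) / h) ** 2)
    X = np.vander(t_all - t0, 3, increasing=True)                # 1, (t - t0), (t - t0)^2
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_all))
    return beta[0], beta[1]                                       # value and first derivative at t0

vals, derivs = zip(*(local_quad_fit(t, x_obs, t0) for t0 in t))
vals, derivs = np.array(vals), np.array(derivs)

# Stage 2: least squares for a in dx/dt = -a * x  =>  a = -<x, dx/dt> / <x, x>
a_hat = -np.dot(vals, derivs) / np.dot(vals, vals)
print("estimated decay rate a:", round(a_hat, 3))
```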

  12. Considering a non-polynomial basis for local kernel regression problem

    Science.gov (United States)

    Silalahi, Divo Dharma; Midi, Habshah

    2017-01-01

    A commonly used solution to the local kernel nonparametric regression problem is polynomial regression. In this study, we derive the estimator and its properties using maximum likelihood estimation for a non-polynomial basis, such as B-splines, replacing the polynomial basis. This estimator allows flexibility in the selection of a bandwidth and a knot. The best estimator was selected by finding the optimal bandwidth and knot through minimizing the well-known generalized cross-validation function.

  13. Two-Stage Method Based on Local Polynomial Fitting for a Linear Heteroscedastic Regression Model and Its Application in Economics

    Directory of Open Access Journals (Sweden)

    Liyun Su

    2012-01-01

    We introduce an extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least-squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Due to the nonparametric technique of local polynomial estimation, we do not need to know the heteroscedastic function, so we can improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we focus on the comparison of parameters and reach an optimal fit. In addition, we verify the asymptotic normality of the parameters through numerical simulations. Finally, the approach is applied to a case in economics, which indicates that our method is effective in finite-sample situations.
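
    A simplified sketch of the two-stage procedure described above, under stated assumptions: fit OLS, estimate the heteroscedastic variance function nonparametrically from squared residuals with a local polynomial smoother, then re-estimate the coefficients by weighted (generalized) least squares. Data and bandwidth are illustrative.

```python
# Two-stage sketch: OLS, local-linear smoothing of squared residuals to
# estimate the variance function, then weighted (GLS-style) re-estimation.
import numpy as np

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 10, 200))
sigma = 0.2 + 0.15 * x                                     # variance grows with x (assumed)
y = 1.0 + 0.5 * x + rng.normal(0, sigma)

X = np.column_stack([np.ones_like(x), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid2 = (y - X @ beta_ols) ** 2

def local_linear(xg, x0, yg, h=1.0):
    w = np.exp(-0.5 * ((xg - x0) / h) ** 2)
    Xl = np.column_stack([np.ones_like(xg), xg - x0])
    b = np.linalg.solve(Xl.T @ (w[:, None] * Xl), Xl.T @ (w * yg))
    return b[0]

var_hat = np.array([local_linear(x, x0, resid2) for x0 in x]).clip(min=1e-6)
W = 1.0 / var_hat                                           # GLS weights
beta_gls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
print("OLS:", np.round(beta_ols, 3), " GLS:", np.round(beta_gls, 3))
```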

  14. Multivariate Local Polynomial Regression with Application to Shenzhen Component Index

    Directory of Open Access Journals (Sweden)

    Liyun Su

    2011-01-01

    This study attempts to characterize and predict the stock index series of the Shenzhen stock market using multivariate local polynomial regression. Based on the nonlinearity and chaos of the stock index time series, multivariate local polynomial prediction methods and a univariate local polynomial prediction method, all of which use phase space reconstruction according to Takens' Theorem, are considered. To fit the stock index series, the single series is transformed into a bivariate series. To evaluate the results, the multivariate predictor for the bivariate time series based on the multivariate local polynomial model is compared with the univariate predictor on the same Shenzhen stock index data. The numerical results obtained for the Shenzhen component index show that the prediction mean squared error of the multivariate predictor is much smaller than that of the univariate one and better than that of the three existing methods. Even if only the last half of the training data are used, the multivariate predictor still attains a smaller prediction mean squared error than the univariate predictor. The multivariate local polynomial prediction model for non-single time series is a useful tool for stock market price prediction.

  15. Prediction of the temperature of the atmosphere of the primary containment: comparison between neural networks and polynomial regression

    International Nuclear Information System (INIS)

    Alvarez Huerta, A.; Gonzalez Miguelez, R.; Garcia Metola, D.; Noriega Gonzalez, A.

    2011-01-01

    The modelling is carried out with two different techniques: a conventional polynomial regression and an approach based on artificial neural networks. The quality of the forecasts produced by the polynomial regression models and by a neural network with Bayesian regularization is compared using the root mean square error and the coefficient of determination as indicators. In view of the results, the neural network generates a more accurate and reliable prediction than the polynomial regression.

  16. Function approximation with polynomial regression splines

    International Nuclear Information System (INIS)

    Urbanski, P.

    1996-01-01

    Principles of polynomial regression splines, as well as algorithms and programs for their computation, are presented. The programs, prepared using the MATLAB software package, are intended mainly for the approximation of X-ray spectra and can be applied to the multivariate calibration of radiometric gauges. (author)

  17. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    Science.gov (United States)

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participant vs. the between-participant variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationship of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m, body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) was evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participant variability of the velocity attained at each %1RM was markedly lower than the between-participant variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model can be recommended to monitor and prescribe the relative load in the Smith machine bench press exercise.
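
    Purely for illustration (synthetic numbers, not the study's data), the snippet below fits an individual load-velocity relationship with first- and second-order polynomials and predicts the load associated with an assumed minimal-velocity threshold.

```python
# Illustrative load-velocity fit: compare a first- and a second-order
# polynomial and predict the %1RM at an assumed minimal velocity threshold.
import numpy as np

loads_pct1rm = np.array([20, 30, 40, 50, 60, 70, 80, 90])               # %1RM lifted (made up)
velocity = np.array([1.45, 1.30, 1.15, 0.98, 0.82, 0.65, 0.47, 0.30])   # m/s (made up)

lin = np.polyfit(velocity, loads_pct1rm, 1)          # load as a linear function of velocity
quad = np.polyfit(velocity, loads_pct1rm, 2)         # load as a quadratic function of velocity

v_1rm = 0.17                                          # assumed minimal velocity threshold (m/s)
print("predicted %1RM at v = 0.17 m/s:",
      round(np.polyval(lin, v_1rm), 1), "(linear),",
      round(np.polyval(quad, v_1rm), 1), "(quadratic)")
```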

  18. Evaluating the Performance of Polynomial Regression Method with Different Parameters during Color Characterization

    Directory of Open Access Journals (Sweden)

    Bangyong Sun

    2014-01-01

    The polynomial regression method is employed to model the relationship between device color space and CIE color space for color characterization, and the performance of different expressions with specific parameters is evaluated. First, the polynomial equation for color conversion is established and the computation of the polynomial coefficients is analysed. Then, different forms of polynomial equations are used to calculate the CIE color values of RGB and CMYK, and the corresponding color errors are compared. Finally, an optimal polynomial expression is obtained by analysing several parameters relevant to color conversion, including the number of polynomial terms, the degree of the polynomial terms, the selection of CIE visual spaces, and the linearization.
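
    A hedged sketch of polynomial colour characterization: device RGB values are expanded into polynomial terms and a least-squares mapping to CIE XYZ is solved. The term set and training patches are invented; a real characterization would use measured chart data and evaluate CIELAB colour differences.

```python
# Sketch of a polynomial RGB -> XYZ characterization with a 10-term expansion.
# Training data are simulated stand-ins for measured colour patches.
import numpy as np

def poly_terms(rgb):
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([np.ones_like(r), r, g, b, r*g, r*b, g*b, r**2, g**2, b**2])

rng = np.random.default_rng(7)
rgb_train = rng.uniform(0, 1, (60, 3))                     # device RGB of training patches (simulated)
M_true = rng.uniform(0, 1, (10, 3))                        # stand-in "ground truth" mapping
xyz_train = poly_terms(rgb_train) @ M_true + rng.normal(0, 0.01, (60, 3))

M_hat, *_ = np.linalg.lstsq(poly_terms(rgb_train), xyz_train, rcond=None)
xyz_pred = poly_terms(rgb_train) @ M_hat
print("mean absolute XYZ error:", float(np.mean(np.abs(xyz_pred - xyz_train))))
```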

  19. Adaptive robust polynomial regression for power curve modeling with application to wind power forecasting

    DEFF Research Database (Denmark)

    Xu, Man; Pinson, Pierre; Lu, Zongxiang

    2016-01-01

    Wind farm power curve modeling, which characterizes the relationship between meteorological variables and power production, is a crucial procedure for wind power forecasting. In many cases, power curve modeling is more impacted by the limited quality of input data than by the stochastic nature of the energy conversion process. Such nature may be due to varying wind conditions, aging and state of the turbines, etc. An equivalent steady-state power curve, estimated under normal operating conditions with the intention to filter abnormal data, is not sufficient to solve the problem because of the lack of time adaptivity. In this paper, a refined local polynomial regression algorithm is proposed to yield an adaptive robust model of the time-varying scattered power curve for forecasting applications. The time adaptivity of the algorithm is considered with a new data-driven bandwidth selection...

  20. Note on Generating Orthogonal Polynomials and Their Application in Solving Complicated Polynomial Regression Tasks

    Czech Academy of Sciences Publication Activity Database

    Knížek, J.; Tichý, Petr; Beránek, L.; Šindelář, Jan; Vojtěšek, B.; Bouchal, P.; Nenutil, R.; Dedík, O.

    2010-01-01

    Roč. 7, č. 10 (2010), s. 48-60 ISSN 0974-5718 Grant - others:GA MZd(CZ) NS9812; GA ČR(CZ) GAP304/10/0868 Institutional research plan: CEZ:AV0Z10300504; CEZ:AV0Z10750506 Keywords : polynomial regression * orthogonalization * numerical methods * markers * biomarkers Subject RIV: BA - General Mathematics

  1. Assessing the Multidimensional Relationship Between Medication Beliefs and Adherence in Older Adults With Hypertension Using Polynomial Regression.

    Science.gov (United States)

    Dillon, Paul; Phillips, L Alison; Gallagher, Paul; Smith, Susan M; Stewart, Derek; Cousins, Gráinne

    2018-02-05

    The Necessity-Concerns Framework (NCF) is a multidimensional theory describing the relationship between patients' positive and negative evaluations of their medication, which interplay to influence adherence. Most studies evaluating the NCF have failed to account for the multidimensional nature of the theory, placing the separate dimensions of medication "necessity beliefs" and "concerns" onto a single dimension (e.g., the Beliefs about Medicines Questionnaire difference-score model). The aim was to assess the multidimensional effect of patient medication beliefs (concerns and necessity beliefs) on medication adherence using polynomial regression with response surface analysis. Community-dwelling older adults >65 years (n = 1,211) presenting their own prescription for antihypertensive medication to 106 community pharmacies in the Republic of Ireland rated their concerns and necessity beliefs about antihypertensive medications at baseline and their adherence to antihypertensive medication at 12 months via structured telephone interview. Confirmatory polynomial regression found the difference-score model to be inaccurate; subsequent exploratory analysis identified a quadratic model as the best-fitting polynomial model. Adherence was lowest among those with strong medication concerns and weak necessity beliefs and greatest for those with weak concerns and strong necessity beliefs (slope β = -0.77); those with simultaneously high concerns and necessity beliefs had lower adherence than those with simultaneously low concerns and necessity beliefs (slope β = -0.36, p = .004; curvature β = -0.25, p = .003). The difference-score model fails to account for these potential nonreciprocal effects. The results extend the evidence supporting the use of polynomial regression to assess the multidimensional effect of medication beliefs on adherence.
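
    The quadratic polynomial (response surface) model used in this kind of congruence analysis can be sketched as below: the outcome is regressed on N, C, N^2, N*C and C^2, and the surface is summarized by the slope and curvature along the congruence (N = C) and incongruence (N = -C) lines. The data are simulated, not the study's.

```python
# Response surface sketch: quadratic polynomial regression of adherence on
# centered necessity beliefs (N) and concerns (C), with slope/curvature
# summaries along the congruence and incongruence lines. Simulated data.
import numpy as np

rng = np.random.default_rng(8)
necessity = rng.normal(size=300)                      # centered necessity beliefs
concerns = rng.normal(size=300)                       # centered concerns
adherence = 0.4 * necessity - 0.4 * concerns - 0.1 * necessity * concerns \
            + rng.normal(0, 0.5, 300)

X = np.column_stack([np.ones(300), necessity, concerns,
                     necessity**2, necessity * concerns, concerns**2])
b0, b1, b2, b3, b4, b5 = np.linalg.lstsq(X, adherence, rcond=None)[0]

print("congruence line   slope =", round(b1 + b2, 3), " curvature =", round(b3 + b4 + b5, 3))
print("incongruence line slope =", round(b1 - b2, 3), " curvature =", round(b3 - b4 + b5, 3))
```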

  2. The consistency of ordinary least-squares and generalized least-squares polynomial regression on characterizing the mechanomyographic amplitude versus torque relationship

    International Nuclear Information System (INIS)

    Herda, Trent J; Ryan, Eric D; Costa, Pablo B; DeFreitas, Jason M; Walter, Ashley A; Stout, Jeffrey R; Beck, Travis W; Cramer, Joel T; Housh, Terry J; Weir, Joseph P

    2009-01-01

    The primary purpose of this study was to examine the consistency of ordinary least-squares (OLS) and generalized least-squares (GLS) polynomial regression analyses utilizing linear, quadratic and cubic models on either five or ten data points that characterize the mechanomyographic amplitude (MMG_RMS) versus isometric torque relationship. The secondary purpose was to examine the consistency of OLS and GLS polynomial regression utilizing only linear and quadratic models (excluding cubic responses) on either ten or five data points. Eighteen participants (mean ± SD age = 24 ± 4 yr) completed ten randomly ordered isometric step muscle actions from 5% to 95% of the maximal voluntary contraction (MVC) of the right leg extensors during three separate trials. MMG_RMS was recorded from the vastus lateralis during the MVCs and each submaximal muscle action. MMG_RMS versus torque relationships were analyzed on a subject-by-subject basis using OLS and GLS polynomial regression. When using ten data points, only 33% and 27% of the subjects were fitted with the same model (utilizing linear, quadratic and cubic models) across all three trials for OLS and GLS, respectively. After eliminating the cubic model, there was an increase to 55% of the subjects being fitted with the same model across all trials for both OLS and GLS regression. Using only five data points (instead of ten data points), 55% of the subjects were fitted with the same model across all trials for OLS and GLS regression. Overall, OLS and GLS polynomial regression models were only able to consistently describe the torque-related patterns of response for MMG_RMS in 27–55% of the subjects across three trials. Future studies should examine alternative methods for improving the consistency and reliability of the patterns of response for the MMG_RMS versus isometric torque relationship.

  3. On the estimation of the degree of regression polynomial

    International Nuclear Information System (INIS)

    Toeroek, Cs.

    1997-01-01

    The mathematical functions most commonly used to model curvature in plots are polynomials. Generally, the higher the degree of the polynomial, the more complex the trend its graph can represent. We propose a new statistical-graphical approach, based on the discrete projective transformation (DPT), for estimating the degree of the polynomial that adequately describes the trend in the plot.
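
    The note proposes a discrete-projective-transformation approach; as a generic point of reference only (not the DPT method), polynomial degree selection is often done by cross-validation, as sketched below on synthetic data.

```python
# Generic degree selection by 5-fold cross-validation (not the DPT approach).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
x = rng.uniform(-2, 2, 150).reshape(-1, 1)
y = 1 - 2 * x[:, 0] + 0.5 * x[:, 0] ** 3 + rng.normal(0, 0.3, 150)

scores = {}
for degree in range(1, 8):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores[degree] = cross_val_score(model, x, y, cv=5,
                                     scoring="neg_mean_squared_error").mean()

best = max(scores, key=scores.get)
print("cross-validated MSE by degree:", {d: round(-s, 3) for d, s in scores.items()})
print("selected degree:", best)
```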

  4. Quadratic Polynomial Regression using Serial Observation Processing: Implementation within DART

    Science.gov (United States)

    Hodyss, D.; Anderson, J. L.; Collins, N.; Campbell, W. F.; Reinecke, P. A.

    2017-12-01

    Many Ensemble-Based Kalman filtering (EBKF) algorithms process the observations serially. Serial observation processing views the data assimilation process as an iterative sequence of scalar update equations. What is useful about this data assimilation algorithm is that it has very low memory requirements and does not need complex methods to perform the typical high-dimensional inverse calculation of many other algorithms. Recently, the push has been towards the prediction, and therefore the assimilation of observations, for regions and phenomena for which high resolution is required and/or highly nonlinear physical processes are operating. For these situations, a basic hypothesis is that the use of the EBKF is sub-optimal and performance gains could be achieved by accounting for aspects of the non-Gaussianity. To this end, we develop here a new component of the Data Assimilation Research Testbed (DART) to allow a wide variety of users to test this hypothesis. This new version of DART allows one to run several variants of the EBKF as well as several variants of the quadratic polynomial filter using the same forecast model and observations. Differences between the results of the two systems will then highlight the degree of non-Gaussianity in the system being examined. We will illustrate in this work the differences between the performance of linear versus quadratic polynomial regression in a hierarchy of models from Lorenz-63 to a simple general circulation model.

  5. A nonparametric approach to calculate critical micelle concentrations: the local polynomial regression method

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Fontan, J.L.; Costa, J.; Ruso, J.M.; Prieto, G. [Dept. of Applied Physics, Univ. of Santiago de Compostela, Santiago de Compostela (Spain); Sarmiento, F. [Dept. of Mathematics, Faculty of Informatics, Univ. of A Coruna, A Coruna (Spain)

    2004-02-01

    The application of a statistical method, the local polynomial regression method (LPRM), based on a nonparametric estimation of the regression function, to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the underlying structure of the data but rather allows the data to speak for themselves. Good concordance of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by the LPRM and by other methods were found. (orig.)

  6. Stability analysis of polynomial fuzzy models via polynomial fuzzy Lyapunov functions

    OpenAIRE

    Bernal Reza, Miguel Ángel; Sala, Antonio; JAADARI, ABDELHAFIDH; Guerra, Thierry-Marie

    2011-01-01

    In this paper, the stability of continuous-time polynomial fuzzy models by means of a polynomial generalization of fuzzy Lyapunov functions is studied. Fuzzy Lyapunov functions have been fruitfully used in the literature for local analysis of Takagi-Sugeno models, a particular class of the polynomial fuzzy ones. Based on a recent Taylor-series approach which allows a polynomial fuzzy model to exactly represent a nonlinear model in a compact set of the state space, it is shown that a refinemen...

  7. Implementing fuzzy polynomial interpolation (FPI) and fuzzy linear regression (LFR)

    Directory of Open Access Journals (Sweden)

    Maria Cristina Floreno

    1996-05-01

    This paper presents some preliminary results arising within a general framework concerning the development of software tools for fuzzy arithmetic. The program is at a preliminary stage. What has already been implemented consists of a set of routines for elementary operations, optimized function evaluation, interpolation and regression. Some of these have been applied to real problems. This paper describes a prototype of a C++ library for polynomial interpolation of fuzzifying functions, a set of FORTRAN routines for fuzzy linear regression and a program with a graphical user interface allowing the use of such routines.

  8. Regression and regression analysis time series prediction modeling on climate data of Quetta, Pakistan

    International Nuclear Information System (INIS)

    Jafri, Y.Z.; Kamal, L.

    2007-01-01

    Various statistical techniques were used on five-year data (1998-2002) of average humidity, rainfall, and maximum and minimum temperatures. Regression analysis time series (RATS) relationships were developed for determining the overall trend of these climate parameters, on the basis of which forecast models can be corrected and modified. We computed the coefficient of determination as a measure of goodness of fit for our polynomial regression analysis time series (PRATS). Multiple linear regression (MLR) and multiple linear regression analysis time series (MLRATS) correlations were also developed for deciphering the interdependence of weather parameters. Spearman's rank correlation and the Goldfeld-Quandt test were used to check the uniformity or non-uniformity of variances in our fit to polynomial regression (PR). The Breusch-Pagan test was applied to MLR and MLRATS, respectively, which yielded homoscedasticity. We also employed Bartlett's test for homogeneity of variances on the five-year data of rainfall and humidity, which showed that the variances in the rainfall data were not homogeneous, while those in the humidity data were. Our results on regression and regression analysis time series show the best fit for prediction modeling on the climatic data of Quetta, Pakistan. (author)

  9. Identification of Super Phenix steam generator by a simple polynomial model

    International Nuclear Information System (INIS)

    Rousseau, I.

    1981-01-01

    This note suggests an identification method for the steam generator of the Super-Phenix fast neutron power plant based on simple polynomial models. This approach is justified by the selection of adaptive control. The identification algorithms presented are applied to multivariable input-output behaviours. The results obtained with the autoregressive representation and with simple polynomial models are compared, and the effect of perturbations on the output signal is tested, in order to select a good identification algorithm for multivariable adaptive regulation [fr]

  10. Optimum short-time polynomial regression for signal analysis

    Indian Academy of Sciences (India)

    A Sreenivasa Murthy

    ... the Proceedings of the European Signal Processing Conference (EUSIPCO) 2008. ... In a seminal paper, Savitzky and Golay [4] showed that short-time polynomial modeling is ... We next consider a linearly frequency-modulated chirp with an exponentially ... http://www.physionet.org/physiotools/matlab/ECGwaveGen/

  11. Prediction of the temperature of the atmosphere of the primary containment: comparison between neural networks and polynomial regression; Prediccion de la temperatura de la atmosfera de la contencion primaria: comparativa entre redes neuronales y regresion polinomial

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez Huerta, A.; Gonzalez Miguelez, R.; Garcia Metola, D.; Noriega Gonzalez, A.

    2011-07-01

    The modelling is carried out with two different techniques: a conventional polynomial regression and an approach based on artificial neural networks. The quality of the forecasts produced by the polynomial regression models and by a neural network with Bayesian regularization is compared using the root mean square error and the coefficient of determination as indicators. In view of the results, the neural network generates a more accurate and reliable prediction than the polynomial regression.

  12. Random regression models for daily feed intake in Danish Duroc pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Mark, Thomas; Jensen, Just

    The objective of this study was to develop random regression models and estimate covariance functions for daily feed intake (DFI) in Danish Duroc pigs. A total of 476,201 DFI records were available on 6,542 Duroc boars between 70 and 160 days of age. The data originated from the National test station. ... -year-season, permanent, and animal genetic effects. The functional form was based on Legendre polynomials. A total of 64 models for random regressions were initially ranked by BIC to identify the approximate order of the Legendre polynomials using AI-REML. The parsimonious model included Legendre polynomials of 2nd order for the genetic and permanent environmental curves and a heterogeneous residual variance, allowing the daily residual variance to change along the age trajectory due to scale effects. The parameters of the model were estimated in a Bayesian framework, using the RJMC module of the DMU package, where...

  13. Vertex models, TASEP and Grothendieck polynomials

    International Nuclear Information System (INIS)

    Motegi, Kohei; Sakai, Kazumitsu

    2013-01-01

    We examine the wavefunctions and their scalar products of a one-parameter family of integrable five-vertex models. At a special point of the parameter, the model investigated is related to an irreversible interacting stochastic particle system—the so-called totally asymmetric simple exclusion process (TASEP). By combining the quantum inverse scattering method with a matrix product representation of the wavefunctions, the on-/off-shell wavefunctions of the five-vertex models are represented as a certain determinant form. Up to some normalization factors, we find that the wavefunctions are given by Grothendieck polynomials, which are a one-parameter deformation of Schur polynomials. Introducing a dual version of the Grothendieck polynomials, and utilizing the determinant representation for the scalar products of the wavefunctions, we derive a generalized Cauchy identity satisfied by the Grothendieck polynomials and their duals. Several representation theoretical formulae for the Grothendieck polynomials are also presented. As a byproduct, the relaxation dynamics such as Green functions for the periodic TASEP are found to be described in terms of the Grothendieck polynomials. (paper)

  14. Adaptive regression for modeling nonlinear relationships

    CERN Document Server

    Knafl, George J

    2016-01-01

    This book presents methods for investigating whether relationships are linear or nonlinear and for adaptively fitting appropriate models when they are nonlinear. Data analysts will learn how to incorporate nonlinearity in one or more predictor variables into regression models for different types of outcome variables. Such nonlinear dependence is often not considered in applied research, yet nonlinear relationships are common and so need to be addressed. A standard linear analysis can produce misleading conclusions, while a nonlinear analysis can provide novel insights into data, not otherwise possible. A variety of examples of the benefits of modeling nonlinear relationships are presented throughout the book. Methods are covered using what are called fractional polynomials based on real-valued power transformations of primary predictor variables combined with model selection based on likelihood cross-validation. The book covers how to formulate and conduct such adaptive fractional polynomial modeling in the s...

  15. Genetic evaluation of European quails by random regression models

    Directory of Open Access Journals (Sweden)

    Flaviana Miranda Gonçalves

    2012-09-01

    The objective of this study was to compare different random regression models, defined from different classes of heterogeneity of variance combined with different Legendre polynomial orders, for the estimation of the (co)variances of quails. The data came from 28,076 observations of 4,507 female meat quails of the LF1 lineage. Quail body weights were determined at birth and at 1, 14, 21, 28, 35 and 42 days of age. Six different classes of residual variance were fitted with Legendre polynomial functions (orders ranging from 2 to 6) to determine which model best described the (co)variance structures as a function of time. According to the evaluated criteria (AIC, BIC and LRT), the model with six classes of residual variances and a sixth-order Legendre polynomial was the best fit. The estimated additive genetic variance increased from birth to 28 days of age, and dropped slightly from 35 to 42 days. The heritability estimates decreased along the growth curve and changed from 0.51 (1 day) to 0.16 (42 days). Animal genetic and permanent environmental correlation estimates between weights and age classes were always high and positive, except for birth weight. The sixth-order Legendre polynomial, along with the residual variance divided into six classes, provided the best fit for the growth rate curve of meat quails; therefore, they should be considered in breeding evaluation processes using random regression models.

  16. Leader-follower value congruence in social responsibility and ethical satisfaction: a polynomial regression analysis.

    Science.gov (United States)

    Kang, Seung-Wan; Byun, Gukdo; Park, Hun-Joon

    2014-12-01

    This paper presents empirical research into the relationship between leader-follower value congruence in social responsibility and the level of ethical satisfaction for employees in the workplace. 163 dyads were analyzed, each consisting of a team leader and an employee working at a large manufacturing company in South Korea. Following current methodological recommendations for congruence research, polynomial regression and response surface modeling methodologies were used to determine the effects of value congruence. Results indicate that leader-follower value congruence in social responsibility was positively related to the ethical satisfaction of employees. Furthermore, employees' ethical satisfaction was stronger when aligned with a leader with high social responsibility. The theoretical and practical implications are discussed.

  17. Neck curve polynomials in neck rupture model

    International Nuclear Information System (INIS)

    Kurniadi, Rizal; Perkasa, Yudha S.; Waris, Abdul

    2012-01-01

    The Neck Rupture Model is a model that explains the scission process, in which the liquid drop has its smallest radius at a certain position. In the older formulation the rupture position is determined randomly, hence the name Random Neck Rupture Model (RNRM). The neck curve polynomials have been employed in the Neck Rupture Model to calculate the fission yield of the neutron-induced fission reaction of 280 X 90, with the order of the polynomials as well as the temperature being varied. The neck curve polynomial approximation shows important effects on the shape of the fission yield curve.

  18. Real estate value prediction using multivariate regression models

    Science.gov (United States)

    Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav

    2017-11-01

    The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on many factors; hence it is one of the prime fields in which to apply the concepts of machine learning to optimize and predict prices with high accuracy. Therefore, in this paper we present important features to use when predicting housing prices with good accuracy. We describe regression models using various features to achieve a lower residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (applying various powers of the features) is used to achieve a better model fit. Since these models are expected to be susceptible to overfitting, ridge regression is used to reduce it. This paper thus points to the best application of regression models, in addition to other techniques, to optimize the result.
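
    A short sketch of the combination mentioned above: a polynomial expansion of a few housing features together with ridge regularization to curb overfitting. The features and data are synthetic placeholders rather than a real estate dataset.

```python
# Polynomial features plus ridge regression on simulated housing-style data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(10)
area = rng.uniform(40, 250, 400)                 # m^2 (assumed feature)
rooms = rng.integers(1, 6, 400).astype(float)    # number of rooms (assumed feature)
age = rng.uniform(0, 50, 400)                    # years (assumed feature)
X = np.column_stack([area, rooms, age])
price = 800 * area + 5000 * rooms - 300 * age + 0.5 * area * rooms \
        + rng.normal(0, 5000, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)
model = make_pipeline(StandardScaler(), PolynomialFeatures(degree=2), Ridge(alpha=10.0))
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```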

  19. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    Science.gov (United States)

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…

  20. Polynomial fuzzy model-based approach for underactuated surface vessels

    DEFF Research Database (Denmark)

    Khooban, Mohammad Hassan; Vafamand, Navid; Dragicevic, Tomislav

    2018-01-01

    The main goal of this study is to introduce a new polynomial fuzzy model-based structure for a class of marine systems with non-linear and polynomial dynamics. The suggested technique relies on a polynomial Takagi–Sugeno (T–S) fuzzy modelling, a polynomial dynamic parallel distributed compensation and a sum-of-squares (SOS) decomposition. The new proposed approach is a generalisation of the standard T–S fuzzy models and linear matrix inequality, which indicated its effectiveness in decreasing the tracking time and increasing the efficiency of the robust tracking control problem for an underactuated surface vessel (USV). Additionally, in order to overcome the USV control challenges, including the USV un-modelled dynamics, complex nonlinear dynamics, external disturbances and parameter uncertainties, the polynomial fuzzy model representation is adopted. Moreover, the USV-based control structure...

  1. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com [Orange Labs, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée (France); Sudret, Bruno [ETH Zürich, Chair of Risk, Safety and Uncertainty Quantification, Stefano-Franscini-Platz 5, 8093 Zürich (Switzerland); Varsier, Nadège [Orange Labs, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Picon, Odile [ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée (France); Wiart, Joe [Orange Labs, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France)

    2015-04-01

    In numerical dosimetry, the recent advances in high-performance computing led to a strong reduction of the computational time required to assess the specific absorption rate (SAR) characterizing human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of the LARS-Kriging-PC is compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performance than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos, depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.

  2. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Kunkun, E-mail: ktg@illinois.edu [The Center for Exascale Simulation of Plasma-Coupled Combustion (XPACC), University of Illinois at Urbana–Champaign, 1308 W Main St, Urbana, IL 61801 (United States); Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence (France); Congedo, Pietro M. [Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence (France); Abgrall, Rémi [Institut für Mathematik, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  3. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    International Nuclear Information System (INIS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-01-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  4. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    International Nuclear Information System (INIS)

    Konakli, Katerina; Sudret, Bruno

    2016-01-01

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor–product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input

  5. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno

    2016-09-15

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor–product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input

  6. Estimation of genetic parameters related to eggshell strength using random regression models.

    Science.gov (United States)

    Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K

    2015-01-01

    This study examined the changes in eggshell strength and the genetic parameters related to this trait throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, where the eggshell strength was determined repeatedly for 2260 hens. Using random regression models (RRMs), several Legendre polynomials were employed to estimate the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic variance was included with second-order Legendre polynomials and the permanent environment with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (> 0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRM suggest that this model could be an effective method to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.
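
    As a hedged sketch of the Legendre-polynomial machinery used in such random regression models, the snippet below builds the covariate matrices for second- and third-order Legendre polynomials on a rescaled time axis; the weekly time points are hypothetical, and the mixed-model estimation itself (typically done in specialised software) is not shown.

```python
# Hypothetical sketch: building Legendre-polynomial covariates for a random
# regression model of a trait recorded over test weeks. The rescaling of time
# to [-1, 1] and the chosen orders follow common practice, not the paper's code.
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_covariates(t, order):
    """Matrix of Legendre polynomials P_0..P_order evaluated at times t
    rescaled to [-1, 1], one column per polynomial."""
    x = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0
    cols = []
    for k in range(order + 1):
        coef = np.zeros(k + 1)
        coef[k] = 1.0
        cols.append(legval(x, coef))
    return np.column_stack(cols)

weeks = np.arange(1, 41, dtype=float)      # hypothetical 40 weekly test records
Z_genetic = legendre_covariates(weeks, 2)  # 2nd order for additive genetic
Z_pe = legendre_covariates(weeks, 3)       # 3rd order for permanent environment
print(Z_genetic.shape, Z_pe.shape)         # (40, 3) (40, 4)
```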

  7. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2016-12-01

    Full Text Available In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.
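
    An illustrative sketch, on synthetic data, of the ensembling idea described above: several heterogeneous non-linear regressors are trained and their predictions averaged. The particular model set and the unweighted mean are assumptions, not the study's exact ensemble construction.

```python
# Illustrative sketch (not the study's code): averaging the predictions of
# heterogeneous non-linear regressors into a simple ensemble on synthetic data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

X, y = make_regression(n_samples=500, n_features=6, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [
    RandomForestRegressor(n_estimators=200, random_state=0),
    GradientBoostingRegressor(random_state=0),   # stochastic gradient boosting
    KNeighborsRegressor(n_neighbors=7),
    SVR(kernel="rbf", C=10.0),
]
preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in models])
ensemble_pred = preds.mean(axis=1)               # unweighted heterogeneous ensemble
rmse = np.sqrt(np.mean((ensemble_pred - y_te) ** 2))
print(f"ensemble RMSE: {rmse:.2f}")
```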

  8. Climate Impacts on Chinese Corn Yields: A Fractional Polynomial Regression Model

    NARCIS (Netherlands)

    Kooten, van G.C.; Sun, Baojing

    2012-01-01

    In this study, we examine the effect of climate on corn yields in northern China using data from ten districts in Inner Mongolia and two in Shaanxi province. A regression model with a flexible functional form is specified, with explanatory variables that include seasonal growing degree days,

  9. Polynomial algebra of discrete models in systems biology.

    Science.gov (United States)

    Veliz-Cuba, Alan; Jarrah, Abdul Salam; Laubenbacher, Reinhard

    2010-07-01

    An increasing number of discrete mathematical models are being published in Systems Biology, ranging from Boolean network models to logical models and Petri nets. They are used to model a variety of biochemical networks, such as metabolic networks, gene regulatory networks and signal transduction networks. There is increasing evidence that such models can capture key dynamic features of biological networks and can be used successfully for hypothesis generation. This article provides a unified framework that can aid the mathematical analysis of Boolean network models, logical models and Petri nets. They can be represented as polynomial dynamical systems, which allows the use of a variety of mathematical tools from computer algebra for their analysis. Algorithms are presented for the translation into polynomial dynamical systems. Examples are given of how polynomial algebra can be used for the model analysis. alanavc@vt.edu Supplementary data are available at Bioinformatics online.
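
    The translation of Boolean rules into polynomial dynamical systems rests on the identities AND → xy, OR → x + y + xy and NOT → 1 + x over the field F_2 = {0, 1}. The toy three-node network below is invented purely to show the idea.

```python
# Toy three-node Boolean network written as a polynomial dynamical system over
# F_2 = {0, 1}: AND -> x*y, OR -> x + y + x*y, NOT -> 1 + x (all mod 2).
# The network itself is made up for illustration.
def f1(x1, x2, x3):        # x1' = x2 AND x3
    return (x2 * x3) % 2

def f2(x1, x2, x3):        # x2' = NOT x1
    return (1 + x1) % 2

def f3(x1, x2, x3):        # x3' = x1 OR x2
    return (x1 + x2 + x1 * x2) % 2

state = (1, 0, 1)
for step in range(4):      # iterate the polynomial dynamical system
    state = (f1(*state), f2(*state), f3(*state))
    print(step + 1, state)
```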

  10. Deriving Genomic Breeding Values for Residual Feed Intake from Covariance Functions of Random Regression Models

    DEFF Research Database (Denmark)

    Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne

    2014-01-01

    Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI...

  11. Multi-step polynomial regression method to model and forecast malaria incidence.

    Directory of Open Access Journals (Sweden)

    Chandrajit Chatterjee

    Full Text Available Malaria is one of the most severe problems faced by the world even today. Understanding the causative factors such as age, sex, social factors, environmental variability etc., as well as the underlying transmission dynamics of the disease, is important for epidemiological research on malaria and its eradication. Thus, development of a suitable modeling approach and methodology, based on the available data on the incidence of the disease and other related factors, is of utmost importance. In this study, we developed a simple non-linear regression methodology for modeling and forecasting malaria incidence in Chennai city, India, and predicted future disease incidence with a high confidence level. We considered three types of data to develop the regression methodology: a longer time series of Slide Positivity Rates (SPR) of malaria; a smaller time series (deaths due to Plasmodium vivax) of one year; and spatial data (zonal distribution of P. vivax deaths) for the city, along with the climatic factors, population and previous incidence of the disease. We performed variable selection by a simple correlation study, identified the initial relationships between variables through non-linear curve fitting, and used multi-step methods for the induction of variables in the non-linear regression analysis, along with Gauss-Markov models and ANOVA for testing the predictions and validity and constructing the confidence intervals. The results demonstrate the applicability of our method for different types of data and the autoregressive nature of forecasting, and show high prediction power for both SPR and P. vivax deaths, where the one-lag SPR values play an influential role and prove useful for better prediction. Different climatic factors are identified as playing a crucial role in shaping the disease curve. Further, disease incidence at the zonal level and the effect of causative factors on different zonal clusters indicate the pattern of malaria prevalence in the city.

  12. A polynomial based model for cell fate prediction in human diseases.

    Science.gov (United States)

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e. correlation based and apoptosis pathway based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, within both the two considered gene selection methods, the prediction accuracies of polynomials of different degrees show little differences. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than others. When comparing the linear polynomials based on the two gene selection methods, it shows that although the accuracy of the linear polynomial that uses correlation analysis outcomes is a little higher (achieves 86.62%), the one within genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is a preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.
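
    A hedged sketch of the degree-comparison step described above, using 10-fold cross-validation on polynomial feature expansions; the synthetic data and the logistic-regression classifier are placeholders for the paper's gene-expression data and prediction function.

```python
# Hedged sketch of comparing polynomial models of different degrees by 10-fold
# cross-validation; the synthetic data and the logistic-regression classifier
# stand in for the gene-expression data and prediction function of the paper.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

for degree in (1, 2, 3):
    model = make_pipeline(PolynomialFeatures(degree), StandardScaler(),
                          LogisticRegression(max_iter=5000))
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"degree {degree}: 10-fold CV accuracy = {acc:.3f}")
```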

  13. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models

    Directory of Open Access Journals (Sweden)

    Omholt Stig W

    2011-06-01

    Full Text Available Abstract Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback

  14. Hierarchical cluster-based partial least squares regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models.

    Science.gov (United States)

    Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald

    2011-06-01

    Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. HC-PLSR is a promising approach for
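
    The sketch below mimics the spirit of HC-PLSR on synthetic data: the input space is split into clusters and a local PLS regression is fitted per cluster. KMeans is used here as a crisp stand-in for the fuzzy C-means step, so this is a simplification rather than the authors' method.

```python
# Rough sketch of the hierarchical-cluster-then-local-regression idea: split
# the input space into clusters and fit one PLS regression per cluster. KMeans
# is a crisp stand-in for the fuzzy C-means step of HC-PLSR (an assumption).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(600, 4))
y = np.sin(X[:, 0] * X[:, 1]) + X[:, 2] ** 2     # non-monotone toy response

km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
local_models = {}
for c in range(km.n_clusters):
    mask = km.labels_ == c
    local_models[c] = PLSRegression(n_components=3).fit(X[mask], y[mask])

# Prediction routes each new point to the local model of its cluster
X_new = rng.uniform(-2, 2, size=(5, 4))
labels = km.predict(X_new)
y_pred = [float(local_models[c].predict(x[None, :]).ravel()[0])
          for c, x in zip(labels, X_new)]
print(y_pred)
```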

  15. An Assessment of Polynomial Regression Techniques for the Relative Radiometric Normalization (RRN) of High-Resolution Multi-Temporal Airborne Thermal Infrared (TIR) Imagery

    Directory of Open Access Journals (Sweden)

    Mir Mustafizur Rahman

    2014-11-01

    Full Text Available Thermal Infrared (TIR) remote sensing images of urban environments are increasingly available from airborne and satellite platforms. However, limited access to high-spatial resolution (H-res: ~1 m) TIR satellite images requires the use of TIR airborne sensors for mapping large complex urban surfaces, especially at micro-scales. A critical limitation of such H-res mapping is the need to acquire a large scene composed of multiple flight lines and mosaic them together. This results in the same scene components (e.g., roads, buildings, green space and water) exhibiting different temperatures in different flight lines. To mitigate these effects, linear relative radiometric normalization (RRN) techniques are often applied. However, the Earth’s surface is composed of features whose thermal behaviour is characterized by complexity and non-linearity. Therefore, we hypothesize that non-linear RRN techniques should demonstrate increased radiometric agreement over similar linear techniques. To test this hypothesis, this paper evaluates four (linear and non-linear) RRN techniques, including: (i) histogram matching (HM); (ii) pseudo-invariant feature-based polynomial regression (PIF_Poly); (iii) no-change stratified random sample-based linear regression (NCSRS_Lin); and (iv) no-change stratified random sample-based polynomial regression (NCSRS_Poly); two of which ((ii) and (iv)) are newly proposed non-linear techniques. When applied over two adjacent flight lines (~70 km2) of TABI-1800 airborne data, visual and statistical results show that both new non-linear techniques improved radiometric agreement over the previously evaluated linear techniques, with the new fully-automated method, NCSRS-based polynomial regression, providing the highest improvement in radiometric agreement between the master and the slave images, at ~56%. This is ~5% higher than the best previously evaluated linear technique (NCSRS-based linear regression).
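
    As an illustrative sketch of a no-change-sample-based polynomial RRN, the snippet below fits a second-order polynomial mapping slave-image values to master-image values and applies it to the slave band; the simulated "temperatures" are placeholders for TABI-1800 data.

```python
# Illustrative sketch of a no-change-sample-based polynomial RRN: fit a second-
# order polynomial mapping slave-image values to master-image values over
# presumed no-change pixels, then apply it to the slave band. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
master = rng.normal(25.0, 4.0, size=5000)              # master flight line (a.u.)
slave = 0.9 * master + 0.02 * master**2 - 3.0 + rng.normal(0.0, 0.5, 5000)

coeffs = np.polyfit(slave, master, deg=2)              # slave -> master mapping
slave_normalized = np.polyval(coeffs, slave)

rmse_before = np.sqrt(np.mean((slave - master) ** 2))
rmse_after = np.sqrt(np.mean((slave_normalized - master) ** 2))
print(f"RMSE before: {rmse_before:.2f}, after RRN: {rmse_after:.2f}")
```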

  16. Minimizing the effects of multicollinearity in the polynomial regression of age relationships and sex differences in serum levels of pregnenolone sulfate in healthy subjects.

    Science.gov (United States)

    Meloun, Milan; Hill, Martin; Vceláková-Havlíková, Helena

    2009-01-01

    Pregnenolone sulfate (PregS) is known as a steroid conjugate positively modulating N-methyl-D-aspartate receptors on neuronal membranes. These receptors are responsible for permeability of calcium channels and activation of neuronal function. Neuroactivating effect of PregS is also exerted via non-competitive negative modulation of GABA(A) receptors regulating the chloride influx. Recently, a penetrability of blood-brain barrier for PregS was found in rat, but some experiments in agreement with this finding were reported even earlier. It is known that circulating levels of PregS in human are relatively high depending primarily on age and adrenal activity. Concerning the neuromodulating effect of PregS, we recently evaluated age relationships of PregS in both sexes using polynomial regression models known to bring about the problems of multicollinearity, i.e., strong correlations among independent variables. Several criteria for the selection of suitable bias are demonstrated. Biased estimators based on the generalized principal component regression (GPCR) method avoiding multicollinearity problems are described. Significant differences were found between men and women in the course of the age dependence of PregS. In women, a significant maximum was found around the 30th year followed by a rapid decline, while the maximum in men was achieved almost 10 years earlier and changes were minor up to the 60th year. The investigation of gender differences and age dependencies in PregS could be of interest given its well-known neurostimulating effect, relatively high serum concentration, and the probable partial permeability of the blood-brain barrier for the steroid conjugate. GPCR in combination with the MEP (mean quadric error of prediction) criterion is extremely useful and appealing for constructing biased models. It can also be used for achieving such estimates with regard to keeping the model course corresponding to the data trend, especially in polynomial type
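
    A minimal sketch of the principal-component idea for taming the multicollinearity of raw polynomial age terms; the simulated data and the ordinary PCA-plus-OLS pipeline are simplifications of the generalized principal component regression with bias selection used in the paper.

```python
# Minimal sketch of principal-component regression as one way to tame the
# multicollinearity of raw polynomial age terms (age, age^2, age^3); the data
# are simulated and this simplifies the paper's generalized PCR (GPCR) method.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
age = rng.uniform(20, 70, size=200)
X = np.column_stack([age, age**2, age**3])   # strongly correlated regressors
y = 5 + 0.8 * age - 0.012 * age**2 + rng.normal(0.0, 1.5, size=200)

pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print("R^2 of the PCR fit:", round(pcr.score(X, y), 3))
```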

  17. A New Navigation Satellite Clock Bias Prediction Method Based on Modified Clock-bias Quadratic Polynomial Model

    Science.gov (United States)

    Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.

    2016-01-01

    In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposes a new SCB prediction model which takes the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic terms to fit and extract the trend term and cyclic term of SCB; then, based on the characteristics of the fitting residuals, a time-series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain the final SCB prediction values. Finally, this paper uses precise SCB data from the IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance compared with the quadratic polynomial model, grey model, and ARIMA model. In addition, the new method can also overcome the insufficiency of the ARIMA model in model recognition and order determination.
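
    A simplified sketch of the proposed two-stage model: a quadratic trend plus one periodic term is fitted by least squares, and the residuals are then modelled with an ARIMA process. The simulated clock-bias series, the period and the ARIMA order are assumptions.

```python
# Simplified sketch of the two-stage model: least-squares fit of a quadratic
# trend plus one periodic term, then an ARIMA model for the residuals. The
# simulated clock-bias series, the period and the ARIMA order are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
t = np.arange(0, 480.0)                        # epochs (e.g. 5-min samples)
period = 144.0                                 # assumed cyclic term
scb = (1e-6 + 2e-9 * t + 3e-13 * t**2
       + 5e-10 * np.sin(2 * np.pi * t / period)
       + np.cumsum(rng.normal(0.0, 2e-11, t.size)))

# Trend + periodic design matrix: [1, t, t^2, sin, cos]
A = np.column_stack([np.ones_like(t), t, t**2,
                     np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
beta, *_ = np.linalg.lstsq(A, scb, rcond=None)
residuals = scb - A @ beta

arima = ARIMA(residuals, order=(1, 1, 1)).fit()      # residual time-series model
t_next = t[-1] + 1.0
a_next = np.array([1.0, t_next, t_next**2,
                   np.sin(2 * np.pi * t_next / period),
                   np.cos(2 * np.pi * t_next / period)])
forecast = a_next @ beta + arima.forecast(steps=1)[0]
print(f"one-step SCB prediction: {forecast:.3e}")
```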

  18. Improved Polynomial Fuzzy Modeling and Controller with Stability Analysis for Nonlinear Dynamical Systems

    Directory of Open Access Journals (Sweden)

    Hamed Kharrati

    2012-01-01

    Full Text Available This study presents an improved model and controller for nonlinear plants using polynomial fuzzy model-based (FMB) systems. To minimize mismatch between the polynomial fuzzy model and nonlinear plant, the suitable parameters of membership functions are determined in a systematic way. Defining an appropriate fitness function and utilizing Taylor series expansion, a genetic algorithm (GA) is used to form the shape of membership functions in polynomial forms, which are afterwards used in fuzzy modeling. To validate the model, a controller based on proposed polynomial fuzzy systems is designed and then applied to both original nonlinear plant and fuzzy model for comparison. Additionally, stability analysis for the proposed polynomial FMB control system is investigated employing Lyapunov theory and a sum of squares (SOS) approach. Moreover, the form of the membership functions is considered in stability analysis. The SOS-based stability conditions are attained using SOSTOOLS. Simulation results are also given to demonstrate the effectiveness of the proposed method.

  19. New realisation of Preisach model using adaptive polynomial approximation

    Science.gov (United States)

    Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young

    2012-09-01

    Modelling system with hysteresis has received considerable attention recently due to the increasing accurate requirement in engineering applications. The classical Preisach model (CPM) is the most popular model to demonstrate hysteresis which can be represented by infinite but countable first-order reversal curves (FORCs). The usage of look-up tables is one way to approach the CPM in actual practice. The data in those tables correspond with the samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome, this article proposes the idea of using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by using the least-square approximation or adaptive identification algorithm, such as the possibility of accurate tracking of hysteresis model parameters.

  20. Improved Polynomial Fuzzy Modeling and Controller with Stability Analysis for Nonlinear Dynamical Systems

    OpenAIRE

    Hamed Kharrati; Sohrab Khanmohammadi; Witold Pedrycz; Ghasem Alizadeh

    2012-01-01

    This study presents an improved model and controller for nonlinear plants using polynomial fuzzy model-based (FMB) systems. To minimize mismatch between the polynomial fuzzy model and nonlinear plant, the suitable parameters of membership functions are determined in a systematic way. Defining an appropriate fitness function and utilizing Taylor series expansion, a genetic algorithm (GA) is used to form the shape of membership functions in polynomial forms, which are afterwards used in fuzzy m...

  1. A general U-block model-based design procedure for nonlinear polynomial control systems

    Science.gov (United States)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The proposition of U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm was originated in the first author's PhD thesis. The term of U-model appeared (not rigorously defined) for the first time in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone work - using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design the control systems with smooth nonlinear plants/processes described by polynomial models. For analysing the feasibility and effectiveness, sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for the readers/users with interest in their ad hoc applications. In formality, this is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems. The previous publications, in the main, have been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for the U-model-based research from intuitive/heuristic stage to rigour/formal/comprehensive studies.

  2. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
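
    The matrix-algebra idea behind the macros can be sketched in a few lines (here in Python rather than SPSS): the difference between two fitted values is c'b with c = x1 − x2, and its standard error is sqrt(c' Cov(b) c). The model with a squared term and the two covariate vectors below are made up for illustration.

```python
# Python sketch (not the SPSS macros) of the matrix-algebra idea: a difference
# between two fitted values is c'b with c = x1 - x2, and its standard error is
# sqrt(c' Cov(b) c). The polynomial model and the two points are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, size=100)
X = sm.add_constant(np.column_stack([x, x**2]))      # model with a squared term
y = 1 + 0.5 * x - 0.04 * x**2 + rng.normal(0.0, 1.0, 100)
fit = sm.OLS(y, X).fit()

x1 = np.array([1.0, 2.0, 4.0])     # covariate vector for the fit at x = 2
x2 = np.array([1.0, 7.0, 49.0])    # covariate vector for the fit at x = 7
c = x1 - x2
diff = c @ fit.params
se = np.sqrt(c @ fit.cov_params() @ c)
print(f"difference = {diff:.3f}, SE = {se:.3f}, "
      f"95% CI = ({diff - 1.96 * se:.3f}, {diff + 1.96 * se:.3f})")
```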

  3. Exact solution of Chern-Simons-matter matrix models with characteristic/orthogonal polynomials

    International Nuclear Information System (INIS)

    Tierz, Miguel

    2016-01-01

    We solve for finite N the matrix model of supersymmetric U(N) Chern-Simons theory coupled to N_f fundamental and N_f anti-fundamental chiral multiplets of R-charge 1/2 and of mass m, by identifying it with an average of inverse characteristic polynomials in a Stieltjes-Wigert ensemble. This requires the computation of the Cauchy transform of the Stieltjes-Wigert polynomials, which we carry out, finding a relationship with Mordell integrals, and hence with previous analytical results on the matrix model. The semiclassical limit of the model is expressed, for arbitrary N_f, in terms of a single Hermite polynomial. This result also holds for more general matter content, involving matrix models with double-sine functions.

  4. Computation of the Likelihood in Biallelic Diffusion Models Using Orthogonal Polynomials

    Directory of Open Access Journals (Sweden)

    Claus Vogl

    2014-11-01

    Full Text Available In population genetics, parameters describing forces such as mutation, migration and drift are generally inferred from molecular data. Lately, approximate methods based on simulations and summary statistics have been widely applied for such inference, even though these methods waste information. In contrast, probabilistic methods of inference can be shown to be optimal, if their assumptions are met. In genomic regions where recombination rates are high relative to mutation rates, polymorphic nucleotide sites can be assumed to evolve independently from each other. The distribution of allele frequencies at a large number of such sites has been called “allele-frequency spectrum” or “site-frequency spectrum” (SFS). Conditional on the allelic proportions, the likelihoods of such data can be modeled as binomial. A simple model representing the evolution of allelic proportions is the biallelic mutation-drift or mutation-directional selection-drift diffusion model. With series of orthogonal polynomials, specifically Jacobi and Gegenbauer polynomials, or the related spheroidal wave function, the diffusion equations can be solved efficiently. In the neutral case, the product of the binomial likelihoods with the sum of such polynomials leads to finite series of polynomials, i.e., relatively simple equations, from which the exact likelihoods can be calculated. In this article, the use of orthogonal polynomials for inferring population genetic parameters is investigated.

  5. Inferring genetic parameters of lactation in Tropical Milking Criollo cattle with random regression test-day models.

    Science.gov (United States)

    Santellano-Estrada, E; Becerril-Pérez, C M; de Alba, J; Chang, Y M; Gianola, D; Torres-Hernández, G; Ramírez-Valverde, R

    2008-11-01

    This study inferred genetic and permanent environmental variation of milk yield in Tropical Milking Criollo cattle and compared 5 random regression test-day models using Wilmink's function and Legendre polynomials. Data consisted of 15,377 test-day records from 467 Tropical Milking Criollo cows that calved between 1974 and 2006 in the tropical lowlands of the Gulf Coast of Mexico and in southern Nicaragua. Estimated heritabilities of test-day milk yields ranged from 0.18 to 0.45, and repeatabilities ranged from 0.35 to 0.68 for the period spanning from 6 to 400 d in milk. Genetic correlation between days in milk 10 and 400 was around 0.50 but greater than 0.90 for most pairs of test days. The model that used first-order Legendre polynomials for additive genetic effects and second-order Legendre polynomials for permanent environmental effects gave the smallest residual variance and was also favored by the Akaike information criterion and likelihood ratio tests.

  6. A robust and efficient stepwise regression method for building sparse polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be [Vrije Universiteit Brussel (VUB), Department of Mechanical Engineering, Research Group Fluid Mechanics and Thermodynamics, Pleinlaan 2, 1050 Brussels (Belgium); Raisee, Mehrdad [School of Mechanical Engineering, College of Engineering, University of Tehran, P.O. Box: 11155-4563, Tehran (Iran, Islamic Republic of); Ghorbaniasl, Ghader; Contino, Francesco; Lacor, Chris [Vrije Universiteit Brussel (VUB), Department of Mechanical Engineering, Research Group Fluid Mechanics and Thermodynamics, Pleinlaan 2, 1050 Brussels (Belgium)

    2017-03-01

    Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimension. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools relevant to probabilistic method. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method by repeating the analyses using random experimental designs.
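
    The sketch below mirrors only the spirit of such a stepwise algorithm: candidate polynomial chaos terms are added one at a time, keeping the term that most reduces a cross-validation error, and stopping when no candidate improves it. The plain k-fold criterion and the toy function are assumptions, not the paper's selection criterion.

```python
# Schematic forward stepwise selection of polynomial chaos terms: at each step
# the candidate term that most reduces a cross-validation error is added, and
# the search stops when no candidate improves it. The k-fold criterion and the
# toy function are assumptions, not the paper's variable-selection criterion.
import itertools
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def basis_column(X, alpha):
    col = np.ones(X.shape[0])
    for d, deg in enumerate(alpha):
        c = np.zeros(deg + 1)
        c[deg] = 1.0
        col *= hermeval(X[:, d], c)
    return col

rng = np.random.default_rng(6)
X = rng.standard_normal((150, 4))
y = 1 + X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2] * X[:, 3]

candidates = [a for a in itertools.product(range(4), repeat=4) if 0 < sum(a) <= 3]
selected, best_err = [], np.inf
while candidates:
    errors = []
    for a in candidates:
        cols = np.column_stack([basis_column(X, b) for b in selected + [a]])
        err = -cross_val_score(LinearRegression(), cols, y, cv=5,
                               scoring="neg_mean_squared_error").mean()
        errors.append(err)
    i = int(np.argmin(errors))
    if errors[i] >= best_err:
        break                          # no candidate improves the CV error
    best_err = errors[i]
    selected.append(candidates.pop(i))

print("selected multi-indices:", selected)
```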

  7. Zernike polynomial based Rayleigh-Ritz model of a piezoelectric unimorph deformable mirror

    CSIR Research Space (South Africa)

    Long, CS

    2012-04-01

    Full Text Available , are routinely and conveniently described using Zernike polynomials. A Rayleigh-Ritz structural model, which uses Zernike polynomials directly to describe the displacements, is proposed in this paper. The proposed formulation produces a numerically inexpensive...

  8. Stability Analysis of Positive Polynomial Fuzzy-Model-Based Control Systems with Time Delay under Imperfect Premise Matching

    OpenAIRE

    Li, Xiaomiao; Lam, Hak Keung; Song, Ge; Liu, Fucai

    2017-01-01

    This paper deals with the stability and positivity analysis of polynomial-fuzzy-model-based (PFMB) control systems with time delay, which is formed by a polynomial fuzzy model and a polynomial fuzzy controller connected in a closed loop, under imperfect premise matching. To improve the design and realization flexibility, the polynomial fuzzy model and the polynomial fuzzy controller are allowed to have their own set of premise membership functions. A sum-of-squares (SOS)-based stability ana...

  9. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows different polynomial regression models to be activated smoothly or abruptly. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  10. Logistic regression models for polymorphic and antagonistic pleiotropic gene action on human aging and longevity

    DEFF Research Database (Denmark)

    Tan, Qihua; Bathum, L; Christiansen, L

    2003-01-01

    In this paper, we apply logistic regression models to measure genetic association with human survival for highly polymorphic and pleiotropic genes. By modelling genotype frequency as a function of age, we introduce a logistic regression model with polytomous responses to handle the polymorphic situation. Genotype and allele-based parameterization can be used to investigate the modes of gene action and to reduce the number of parameters, so that the power is increased while the amount of multiple testing is minimized. A binomial logistic regression model with fractional polynomials is used to capture the age-dependent or antagonistic pleiotropic effects. The models are applied to HFE genotype data to assess the effects on human longevity by different alleles and to detect if an age-dependent effect exists. Application has shown that these methods can serve as useful tools in searching for important...
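
    A hedged sketch of a binomial logistic regression with fractional-polynomial age terms; the powers (0.5 and 2) and the simulated carrier data are arbitrary illustrations, not the HFE analysis itself.

```python
# Hedged sketch of a binomial logistic regression with fractional-polynomial
# age terms (powers 0.5 and 2, chosen arbitrarily); the simulated carrier data
# are placeholders for the HFE genotype data analysed in the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
age = rng.uniform(60, 100, size=800)
logit = -8 + 1.2 * np.sqrt(age) - 0.0006 * age**2    # assumed age-dependent effect
p = 1.0 / (1.0 + np.exp(-logit))
carrier = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([np.sqrt(age), age**2]))  # FP(0.5, 2) terms
fit = sm.Logit(carrier, X).fit(disp=False)
print(fit.params)
```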

  11. Polynomial Chaos Expansion Approach to Interest Rate Models

    Directory of Open Access Journals (Sweden)

    Luca Di Persio

    2015-01-01

    Full Text Available The Polynomial Chaos Expansion (PCE) technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ) method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamic of certain engineering models and their related simulations. In the present paper, we use the PCE approach in order to analyze some equity and interest rate models. In particular, we take into consideration those models which are based on, for example, the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.

  12. Better polynomials for GNFS

    OpenAIRE

    Bai , Shi; Bouvier , Cyril; Kruppa , Alexander; Zimmermann , Paul

    2016-01-01

    International audience; The general number field sieve (GNFS) is the most efficient algorithm known for factoring large integers. It consists of several stages, the first one being polynomial selection. The quality of the selected polynomials can be modelled in terms of size and root properties. We propose a new kind of polynomials for GNFS: with a new degree of freedom, we further improve the size property. We demonstrate the efficiency of our algorithm by exhibiting a better polynomial tha...

  13. Maximum Power Point Tracking Control of Photovoltaic Systems: A Polynomial Fuzzy Model-Based Approach

    DEFF Research Database (Denmark)

    Rakhshan, Mohsen; Vafamand, Navid; Khooban, Mohammad Hassan

    2018-01-01

    This paper introduces a polynomial fuzzy model (PFM)-based maximum power point tracking (MPPT) control approach to increase the performance and efficiency of the solar photovoltaic (PV) electricity generation. The proposed method relies on a polynomial fuzzy modeling, a polynomial parallel......, a direct maximum power (DMP)-based control structure is considered for MPPT. Using the PFM representation, the DMP-based control structure is formulated in terms of SOS conditions. Unlike the conventional approaches, the proposed approach does not require exploring the maximum power operational point...

  14. Genetic analysis of body weights of individually fed beef bulls in South Africa using random regression models.

    Science.gov (United States)

    Selapa, N W; Nephawe, K A; Maiwashe, A; Norris, D

    2012-02-08

    The aim of this study was to estimate genetic parameters for body weights of individually fed beef bulls measured at centralized testing stations in South Africa using random regression models. Weekly body weights of Bonsmara bulls (N = 2919) tested between 1999 and 2003 were available for the analyses. The model included a fixed regression of the body weights on fourth-order orthogonal Legendre polynomials of the actual days on test (7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, and 84) for starting age and contemporary group effects. Random regressions on fourth-order orthogonal Legendre polynomials of the actual days on test were included for additive genetic effects and additional uncorrelated random effects of the weaning-herd-year and the permanent environment of the animal. Residual effects were assumed to be independently distributed with heterogeneous variance for each test day. Variance ratios for additive genetic, permanent environment and weaning-herd-year for weekly body weights at different test days ranged from 0.26 to 0.29, 0.37 to 0.44 and 0.26 to 0.34, respectively. The weaning-herd-year was found to have a significant effect on the variation of body weights of bulls despite a 28-day adjustment period. Genetic correlations amongst body weights at different test days were high, ranging from 0.89 to 1.00. Heritability estimates were comparable to literature using multivariate models. Therefore, random regression model could be applied in the genetic evaluation of body weight of individually fed beef bulls in South Africa.

  15. Stabilisation of discrete-time polynomial fuzzy systems via a polynomial lyapunov approach

    Science.gov (United States)

    Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer

    2018-02-01

    This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems which is represented by discrete-time polynomial fuzzy model. Most of the existing control design methods for discrete-time fuzzy polynomial systems cannot guarantee their Lyapunov function to be a radially unbounded polynomial function, hence the global stability cannot be assured. The proposed control design in this paper guarantees a radially unbounded polynomial Lyapunov functions which ensures global stability. In the proposed design, state feedback structure is considered and non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions of stability are derived in terms of polynomial matrix inequalities which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.

  16. On Solving Lq-Penalized Regressions

    Directory of Open Access Journals (Sweden)

    Tracy Zhou Wu

    2007-01-01

    Full Text Available Lq-penalized regression arises in multidimensional statistical modelling where all or part of the regression coefficients are penalized to achieve both accuracy and parsimony of statistical models. There is often substantial computational difficulty except for the quadratic penalty case. The difficulty is partly due to the nonsmoothness of the objective function inherited from the use of the absolute value. We propose a new solution method for the general Lq-penalized regression problem based on space transformation and thus efficient optimization algorithms. The new method has immediate applications in statistics, notably in penalized spline smoothing problems. In particular, the LASSO problem is shown to be polynomial time solvable. Numerical studies show promise of our approach.
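
    For the q = 1 case mentioned above (the LASSO), a standard coordinate-descent solver already gives a sparse fit, as in the sketch below; the space-transformation method proposed in the paper is not reproduced here.

```python
# Sketch of the q = 1 case (the LASSO), solved here by coordinate descent via
# scikit-learn rather than by the space-transformation method of the paper.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
X = rng.standard_normal((120, 30))
beta = np.zeros(30)
beta[:4] = [3.0, -2.0, 1.5, 1.0]                  # sparse true coefficients
y = X @ beta + rng.normal(0.0, 0.5, 120)

lasso = Lasso(alpha=0.1).fit(X, y)
print("non-zero coefficients:", np.flatnonzero(lasso.coef_))
```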

  17. Regression models in the determination of the absorbed dose with extrapolation chamber for ophthalmological applicators

    International Nuclear Information System (INIS)

    Alvarez R, J.T.; Morales P, R.

    1992-06-01

    The absorbed dose to soft-tissue-equivalent material imparted by ophthalmic applicators (90Sr/90Y, 1850 MBq) is determined using an extrapolation chamber with variable electrode spacing. When the slope of the extrapolation curve is estimated with a simple linear regression model, the dose values are underestimated by 17.7 percent up to 20.4 percent relative to the estimate obtained with a second-degree polynomial regression model, while an improvement of up to 50% in the standard error is observed for the quadratic model. Finally, the global uncertainty of the dose is presented, taking into account the reproducibility of the experimental arrangement. In conclusion, for experimental arrangements where the source is in contact with the extrapolation chamber, it is recommended to replace the linear regression model with the quadratic regression model in the determination of the slope of the extrapolation curve, for more exact and accurate measurements of the absorbed dose. (Author)
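
    A toy numerical sketch of the paper's point: when the chamber response is slightly curved, the slope from a straight-line fit differs from the slope at zero spacing of a quadratic fit. All numbers are invented and carry no dosimetric meaning.

```python
# Toy numerical sketch: with a slightly curved chamber response, the slope of a
# straight-line fit differs from the slope at zero spacing of a quadratic fit.
# All numbers are invented and carry no dosimetric meaning.
import numpy as np

spacing = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # electrode spacing (mm)
signal = 1.00 * spacing + 0.08 * spacing**2 + 0.01   # measured signal (a.u.)

lin = np.polyfit(spacing, signal, 1)
quad = np.polyfit(spacing, signal, 2)
print(f"linear slope: {lin[0]:.3f}")
print(f"quadratic slope at zero spacing: {quad[1]:.3f}")
```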

  18. Large N Penner matrix model and a novel asymptotic formula for the generalized Laguerre polynomials

    International Nuclear Information System (INIS)

    Deo, N

    2003-01-01

    The Gaussian Penner matrix model is re-examined in the light of the results which have been found in double-well matrix models. The orthogonal polynomials for the Gaussian Penner model are shown to be the generalized Laguerre polynomials L_n^(α)(x) with α and x depending on N, the size of the matrix. An asymptotic formula for the orthogonal polynomials is derived following closely the orthogonal polynomial method of Deo (1997 Nucl. Phys. B 504 609). The universality found in the double-well matrix model is extended to include non-polynomial potentials. An asymptotic formula is also found for the Laguerre polynomial using the saddle-point method by rescaling α and x with N. Combining these results a novel asymptotic formula is found for the generalized Laguerre polynomials (different from that given in Szego's book) in a different asymptotic regime. This may have applications in mathematical and physical problems in the future. The density-density correlators are derived and are the same as those found for the double-well matrix models. These correlators in the smoothed large N limit are sensitive to odd and even N where N is the size of the matrix. These results for the two-point density-density correlation function may be useful in finding eigenvalue effects in experiments in mesoscopic systems or small metallic grains. There may be applications to string theory as well, the tunnelling of an eigenvalue from one valley to the other being an important quantity there.

  19. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study
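
    A small sketch of choosing the kernel and tuning parameters of kernel ridge regression from a grid by cross-validation, in the spirit of the guidelines mentioned above; the grid values and the synthetic data are arbitrary.

```python
# Small sketch of choosing the kernel and tuning parameters of kernel ridge
# regression by cross-validation over a grid; the grid values and the synthetic
# data are arbitrary illustrations of the guidelines discussed above.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(9)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sinc(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0.0, 0.1, 300)

grid = [
    {"kernel": ["rbf"], "alpha": [0.01, 0.1, 1.0], "gamma": [0.1, 1.0, 10.0]},
    {"kernel": ["polynomial"], "alpha": [0.01, 0.1, 1.0], "degree": [2, 3, 4]},
]
search = GridSearchCV(KernelRidge(), grid, cv=5).fit(X, y)
print(search.best_params_)
```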

  20. Development of a polynomial nodal model to the multigroup transport equation in one dimension

    International Nuclear Information System (INIS)

    Feiz, M.

    1986-01-01

    A polynomial nodal model that uses Legendre polynomial expansions was developed for the multigroup transport equation in one dimension. The development depends upon the least-squares minimization of the residuals using the approximate functions over the node. Analytical expressions were developed for the polynomial coefficients. The odd moments of the angular neutron flux over the half ranges were used at the internal interfaces, and the Marshak boundary condition was used at the external boundaries. Sample problems with fine-mesh finite-difference solutions of the diffusion and transport equations were used for comparison with the model

  1. An Explicit Formula for Symmetric Polynomials Related to the Eigenfunctions of Calogero-Sutherland Models

    Directory of Open Access Journals (Sweden)

    Martin Hallnäs

    2007-03-01

    Full Text Available We review a recent construction of an explicit analytic series representation for symmetric polynomials which up to a groundstate factor are eigenfunctions of Calogero-Sutherland type models. We also indicate a generalisation of this result to polynomials which give the eigenfunctions of so-called 'deformed' Calogero-Sutherland type models.

  2. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    Science.gov (United States)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous-equation regression model that fuses parametric and nonparametric components. The regression model comprises several equations, each with two components, one parametric and one nonparametric. The model used here has a linear function as the parametric component and a truncated polynomial spline as the nonparametric component, so it can handle both linear and nonlinear relationships between the responses and the sets of predictor variables. The aim of this paper is to demonstrate the application of this regression model to modelling the effect of regional socio-economic variables on the use of information technology. More specifically, the response variables are the percentage of households with internet access and the percentage of households with a personal computer, while the predictor variables are the percentage of literate people, the percentage of electrification and the percentage of economic growth. Based on identification of the relationships between the responses and the predictors, economic growth is treated as a nonparametric predictor and the others as parametric predictors. The results show that multiresponse semiparametric regression can be applied well, as indicated by the high coefficient of determination of 90 percent.

  3. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

    In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, based on obtained results Cubist algorithm may be advised for Landsat based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal. It gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of individual predictions. Heterogeneous model ensembles performed for individual time points assessments at least as well as the best individual models. In case of imperviousness change assessment the ensembles always outperformed single model approaches. It means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.

  4. Superiority of legendre polynomials to Chebyshev polynomial in ...

    African Journals Online (AJOL)

    In this paper, we proved the superiority of Legendre polynomial to Chebyshev polynomial in solving first order ordinary differential equation with rational coefficient. We generated shifted polynomial of Chebyshev, Legendre and Canonical polynomials which deal with solving differential equation by first choosing Chebyshev ...

  5. Discrepancies Between Perceptions of the Parent-Adolescent Relationship and Early Adolescent Depressive Symptoms: An Illustration of Polynomial Regression Analysis.

    Science.gov (United States)

    Nelemans, S A; Branje, S J T; Hale, W W; Goossens, L; Koot, H M; Oldehinkel, A J; Meeus, W H J

    2016-10-01

    Adolescence is a critical period for the development of depressive symptoms. Lower quality of the parent-adolescent relationship has been consistently associated with higher adolescent depressive symptoms, but discrepancies in perceptions of parents and adolescents regarding the quality of their relationship may be particularly important to consider. In the present study, we therefore examined how discrepancies in parents' and adolescents' perceptions of the parent-adolescent relationship were associated with early adolescent depressive symptoms, both concurrently and longitudinally over a 1-year period. Our sample consisted of 497 Dutch adolescents (57 % boys, M age = 13.03 years), residing in the western and central regions of the Netherlands, and their mothers and fathers, who all completed several questionnaires on two occasions with a 1-year interval. Adolescents reported on depressive symptoms and all informants reported on levels of negative interaction in the parent-adolescent relationship. Results from polynomial regression analyses including interaction terms between informants' perceptions, which have recently been proposed as more valid tests of hypotheses involving informant discrepancies than difference scores, suggested the highest adolescent depressive symptoms when both the mother and the adolescent reported high negative interaction, and when the adolescent reported high but the father reported low negative interaction. This pattern of findings underscores the need for a more sophisticated methodology such as polynomial regression analysis including tests of moderation, rather than the use of difference scores, which can adequately address both congruence and discrepancies in perceptions of adolescents and mothers/fathers of the parent-adolescent relationship in detail. Such an analysis can contribute to a more comprehensive understanding of risk factors for early adolescent depressive symptoms.

  6. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    Science.gov (United States)

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. This paper is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like the B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of basis function as fuzzy membership functions, moreover with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on additive decomposition approach together with two separate basis function formation approaches for both univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data based modeling approach.

  7. Genetic analysis of partial egg production records in Japanese quail using random regression models.

    Science.gov (United States)

    Abou Khadiga, G; Mahmoud, B Y F; Farahat, G S; Emam, A M; El-Full, E A

    2017-08-01

    The main objectives of this study were to detect the most appropriate random regression model (RRM) to fit the data of monthly egg production in 2 lines (selected and control) of Japanese quail and to test the consistency of different criteria of model choice. Data from 1,200 female Japanese quails for the first 5 months of egg production from 4 consecutive generations of an egg line selected for egg production in the first month (EP1) were analyzed. Eight RRMs with different orders of Legendre polynomials were compared to determine the proper model for analysis. All criteria of model choice suggested that the adequate model included second-order Legendre polynomials for fixed effects, and third-order for additive genetic effects and permanent environmental effects. Predictive ability of the best model was the highest among all models (ρ = 0.987). According to the best model fitted to the data, estimates of heritability were relatively low to moderate (0.10 to 0.17) and showed a descending pattern from the first to the fifth month of production. A similar pattern was observed for permanent environmental effects, with greater estimates in the first (0.36) and second (0.23) months of production than the heritability estimates. Genetic correlations between separate production periods were higher (0.18 to 0.93) than their phenotypic counterparts (0.15 to 0.87). The superiority of the selected line over the control was observed through significantly higher egg production at earlier ages (first and second months) than at later ones. A methodology based on random regression animal models can be recommended for genetic evaluation of egg production in Japanese quail. © 2017 Poultry Science Association Inc.

  8. Bayesian median regression for temporal gene expression data

    Science.gov (United States)

    Yu, Keming; Vinciotti, Veronica; Liu, Xiaohui; 't Hoen, Peter A. C.

    2007-09-01

    Most of the existing methods for the identification of biologically interesting genes in a temporal expression profiling dataset do not fully exploit the temporal ordering in the dataset and are based on normality assumptions for the gene expression. In this paper, we introduce a Bayesian median regression model to detect genes whose temporal profile is significantly different across a number of biological conditions. The regression model is defined by a polynomial function where both time and condition effects, as well as interactions between the two, are included. MCMC-based inference returns the posterior distribution of the polynomial coefficients. From this, a simple Bayes factor test is proposed to test for significance. Estimating the median rather than the mean, within a Bayesian framework, increases the robustness of the method compared with a previously suggested Hotelling T2-test. This is shown on simulated data and on muscular dystrophy gene expression data.
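
    The paper's model is Bayesian, but the core idea of median regression on a polynomial of time with condition effects and interactions can be illustrated with a minimal frequentist sketch using statsmodels' quantile regression at the 0.5 quantile. The data, column names, and polynomial order below are hypothetical placeholders, not the authors' dataset or implementation.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical temporal expression data: two conditions, six time points
    rng = np.random.default_rng(1)
    time = np.tile(np.arange(1, 7), 20)
    condition = np.repeat([0, 1], 60)
    expr = (1.0 + 0.3 * time - 0.02 * time**2
            + condition * (0.5 + 0.15 * time)
            + rng.standard_t(df=3, size=time.size))   # heavy-tailed noise

    df = pd.DataFrame({"expr": expr, "time": time, "cond": condition})

    # Median (q = 0.5) regression with a quadratic time trend,
    # a condition effect and a time-by-condition interaction
    model = smf.quantreg("expr ~ time + I(time**2) + cond + cond:time", df)
    fit = model.fit(q=0.5)
    print(fit.summary())
    ```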

  9. Predicting Antitumor Activity of Peptides by Consensus of Regression Models Trained on a Small Data Sample

    Directory of Open Access Journals (Sweden)

    Ivanka Jerić

    2011-11-01

    Predicting antitumor activity of compounds using regression models trained on a small number of compounds with measured biological activity is an ill-posed inverse problem. Yet, it occurs very often within the academic community. To counteract, to some extent, the overfitting problems caused by small training data, we propose to use a consensus of six regression models for prediction of the biological activity of a virtual library of compounds. The QSAR descriptors of 22 compounds related to the opioid growth factor (OGF, Tyr-Gly-Gly-Phe-Met) with known antitumor activity were used to train the regression models: the feed-forward artificial neural network, the k-nearest neighbor, sparseness constrained linear regression, and the linear and nonlinear (with polynomial and Gaussian kernel) support vector machine. The regression models were applied to a virtual library of 429 compounds, which resulted in six lists with candidate compounds ranked by predicted antitumor activity. The highly ranked candidate compounds were synthesized, characterized and tested for antiproliferative activity. Some of the prepared peptides showed more pronounced activity compared with the native OGF; however, they were less active than highly ranked compounds selected previously by the radial basis function support vector machine (RBF SVM) regression model. The ill-posedness of the related inverse problem causes unstable behavior of trained regression models on test data. These results point to the high complexity of prediction based on regression models trained on a small data sample.
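
    The consensus step itself is simple to reproduce in outline: train several different regressors on the small labelled set, predict the whole virtual library with each, and combine the predictions before ranking. The sketch below uses scikit-learn stand-ins for the six model families named in the abstract; the descriptors, targets, and hyperparameters are random placeholders, not the study's QSAR data.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(2)
    X_train = rng.normal(size=(22, 10))      # 22 compounds, 10 descriptors (placeholder)
    y_train = rng.normal(size=22)            # measured activities (placeholder)
    X_virtual = rng.normal(size=(429, 10))   # virtual library to rank

    models = [
        make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(8,),
                                                     max_iter=5000, random_state=0)),
        make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=3)),
        make_pipeline(StandardScaler(), Lasso(alpha=0.1)),        # sparse linear stand-in
        make_pipeline(StandardScaler(), SVR(kernel="linear")),
        make_pipeline(StandardScaler(), SVR(kernel="poly", degree=2)),
        make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    ]

    preds = np.column_stack([m.fit(X_train, y_train).predict(X_virtual)
                             for m in models])
    consensus = preds.mean(axis=1)           # simple consensus: average predicted activity
    top = np.argsort(consensus)[::-1][:10]
    print("indices of top-ranked virtual compounds:", top)
    ```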

  10. Polynomial model inversion control: numerical tests and applications

    OpenAIRE

    Novara, Carlo

    2015-01-01

    A novel control design approach for general nonlinear systems is described in this paper. The approach is based on the identification of a polynomial model of the system to control and on the on-line inversion of this model. Extensive simulations are carried out to test the numerical efficiency of the approach. Numerical examples of applicative interest are presented, concerned with control of the Duffing oscillator, control of a robot manipulator and insulin regulation in a type 1 diabetic p...

  11. Optimization over polynomials : Selected topics

    NARCIS (Netherlands)

    Laurent, M.; Jang, Sun Young; Kim, Young Rock; Lee, Dae-Woong; Yie, Ikkwon

    2014-01-01

    Minimizing a polynomial function over a region defined by polynomial inequalities models broad classes of hard problems from combinatorics, geometry and optimization. New algorithmic approaches have emerged recently for computing the global minimum, by combining tools from real algebra (sums of

  12. Evaluation of Induced Settlements of Piled Rafts in the Coupled Static-Dynamic Loads Using Neural Networks and Evolutionary Polynomial Regression

    Directory of Open Access Journals (Sweden)

    Ali Ghorbani

    2017-01-01

    Coupled Piled Raft Foundations (CPRFs) are broadly applied to share heavy loads of superstructures between piles and rafts and to reduce total and differential settlements. Settlements induced by static/coupled static-dynamic loads are one of the main concerns of engineers in designing CPRFs. Evaluation of induced settlements of CPRFs has been commonly carried out using three-dimensional finite element/finite difference modeling or through expensive real-scale/prototype model tests. Since the analyses, especially in the case of coupled static-dynamic loads, are not easily conducted, this paper presents two practical methods to obtain the settlement values. First, different nonlinear finite difference models under different static and coupled static-dynamic loads are developed to calculate the exerted settlements. Analyses are performed with respect to different axial loads and pile configurations, numbers, lengths, diameters, and spacings for both loading cases. Based on the results of well-validated three-dimensional finite difference modeling, artificial neural networks and evolutionary polynomial regressions are then applied and introduced as capable methods to accurately predict both static and coupled static-dynamic settlements. Also, using a sensitivity analysis based on the Cosine Amplitude Method, axial load is identified as the most influential parameter, while the ratio l/d is reported as the least effective parameter on the settlements of CPRFs.

  13. The application of polynomial chaos methods to a point kinetics model of MIPR: An Aqueous Homogeneous Reactor

    International Nuclear Information System (INIS)

    Cooling, C.M.; Williams, M.M.R.; Nygaard, E.T.; Eaton, M.D.

    2013-01-01

    Highlights: • A point kinetics model for the Medical Isotope Production Reactor is formulated. • Reactivity insertions are simulated using this model. • Polynomial chaos is used to simulate uncertainty in reactor parameters. • The computational efficiency of polynomial chaos is compared to that of Monte Carlo. -- Abstract: This paper models a conceptual Medical Isotope Production Reactor (MIPR) using a point kinetics model which is used to explore power excursions in the event of a reactivity insertion. The effect of uncertainty in key parameters is modelled using intrusive polynomial chaos. It is found that the system is stable against reactivity insertions and that power excursions are all bounded and tend towards a new equilibrium state due to the negative feedbacks inherent in Aqueous Homogeneous Reactors (AHRs). The Polynomial Chaos Expansion (PCE) method is found to be much more computationally efficient than Monte Carlo simulation in this application

  14. A generalized multivariate regression model for modelling ocean wave heights

    Science.gov (United States)

    Wang, X. L.; Feng, Y.; Swail, V. R.

    2012-04-01

    In this study, a generalized multivariate linear regression model is developed to represent the relationship between 6-hourly ocean significant wave heights (Hs) and the corresponding 6-hourly mean sea level pressure (MSLP) fields. The model is calibrated using the ERA-Interim reanalysis of Hs and MSLP fields for 1981-2000, and is validated using the ERA-Interim reanalysis for 2001-2010 and the ERA40 reanalysis of Hs and MSLP for 1958-2001. The performance of the fitted model is evaluated in terms of the Peirce skill score, frequency bias index, and correlation skill score. Because wave heights are not normally distributed, they are subjected to a data-adaptive Box-Cox transformation before being used in the model fitting. Also, since 6-hourly data are being modelled, lag-1 autocorrelation must be, and is, accounted for. The models with and without the Box-Cox transformation, and with and without accounting for autocorrelation, are inter-compared in terms of their prediction skills. The fitted MSLP-Hs relationship is then used to reconstruct the historical wave height climate from the 6-hourly MSLP fields taken from the Twentieth Century Reanalysis (20CR, Compo et al. 2011), and to project possible future wave height climates using CMIP5 model simulations of MSLP fields. The reconstructed and projected wave heights, both seasonal means and maxima, are subject to a trend analysis that allows for non-linear (polynomial) trends.
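
    The Box-Cox step and the regression on MSLP-derived predictors can be sketched in a few lines. The snippet below uses simulated, skewed wave heights and generic predictor columns (not the ERA-Interim fields), and it omits the lag-1 autocorrelation adjustment described in the abstract.

    ```python
    import numpy as np
    from scipy.stats import boxcox
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(3)
    n = 1000
    # Hypothetical predictors derived from the MSLP field (e.g. gradients, anomalies)
    X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
    hs = np.exp(0.5 + 0.4 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.3, n))  # positive, skewed

    # Data-adaptive Box-Cox transformation of the wave heights
    hs_bc, lam = boxcox(hs)
    beta, *_ = np.linalg.lstsq(X, hs_bc, rcond=None)

    # Predict on the transformed scale, then back-transform to wave heights
    hs_hat = inv_boxcox(X @ beta, lam)
    print(f"estimated Box-Cox lambda = {lam:.3f}")
    print("correlation (observed vs fitted):", np.corrcoef(hs, hs_hat)[0, 1].round(3))
    ```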

  15. Orthogonal polynomials

    CERN Document Server

    Freud, Géza

    1971-01-01

    Orthogonal Polynomials contains an up-to-date survey of the general theory of orthogonal polynomials. It deals with the problem of polynomials and reveals that the sequence of these polynomials forms an orthogonal system with respect to a non-negative m-distribution defined on the real numerical axis. Comprised of five chapters, the book begins with the fundamental properties of orthogonal polynomials. After discussing the moment problem, it then explains the quadrature procedure, the convergence theory, and G. Szegő's theory. This book is useful for those who intend to use it as a reference.

  16. Models for Estimating Genetic Parameters of Milk Production Traits Using Random Regression Models in Korean Holstein Cattle

    Directory of Open Access Journals (Sweden)

    C. I. Cho

    2016-05-01

    The objectives of the study were to estimate genetic parameters for milk production traits of Holstein cattle using random regression models (RRMs), and to compare the goodness of fit of various RRMs with homogeneous and heterogeneous residual variances. A total of 126,980 test-day milk production records of first-parity Holstein cows between 2007 and 2014 from the Dairy Cattle Improvement Center of the National Agricultural Cooperative Federation in South Korea were used. These records included milk yield (MILK), fat yield (FAT), protein yield (PROT), and solids-not-fat yield (SNF). The statistical models included random effects of genetic and permanent environments using Legendre polynomials (LP) of the third to fifth order (L3–L5), fixed effects of herd-test day and year-season at calving, and a fixed regression for the test-day record (third to fifth order). The residual variances in the models were either homogeneous (HOM) or heterogeneous (15 classes, HET15; 60 classes, HET60). A total of nine models (3 orders of polynomials × 3 types of residual variance), namely L3-HOM, L3-HET15, L3-HET60, L4-HOM, L4-HET15, L4-HET60, L5-HOM, L5-HET15, and L5-HET60, were compared using Akaike information criterion (AIC) and/or Schwarz Bayesian information criterion (BIC) statistics to identify the model(s) of best fit for their respective traits. The lowest BIC value was observed for the models L5-HET15 (MILK; PROT; SNF) and L4-HET15 (FAT), which fit the best. In general, the BIC value of a HET15 model for a particular polynomial order was lower than that of the corresponding HET60 model in most cases. This implies that the orders of LP and the types of residual variances affect the goodness of the models. Also, the heterogeneity of residual variances should be considered in the test-day analysis. The heritability estimates from the best-fitted models ranged from 0.08 to 0.15 for MILK, 0.06 to 0.14 for FAT, 0.08 to 0.12 for PROT, and 0.07 to 0.13 for SNF according to days in milk of the first lactation.

  17. Polynomial constitutive model for shape memory and pseudo elasticity

    International Nuclear Information System (INIS)

    Savi, M.A.; Kouzak, Z.

    1995-01-01

    This paper reports a one-dimensional phenomenological constitutive model for shape memory and pseudo elasticity using a polynomial expression for the free energy which is based on the classical Devonshire theory. The study identifies the main characteristics of the classical theory and introduces a simple modification to obtain better results. (author). 9 refs., 6 figs

  18. On weighted and locally polynomial directional quantile regression

    Czech Academy of Sciences Publication Activity Database

    Boček, Pavel; Šiman, Miroslav

    2017-01-01

    Roč. 32, č. 3 (2017), s. 929-946 ISSN 0943-4062 R&D Projects: GA ČR GA14-07234S Institutional support: RVO:67985556 Keywords: Quantile regression * Nonparametric regression Subject RIV: IN - Informatics, Computer Science OBOR OECD: Computer sciences, information science, bioinformathics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 0.434, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/bocek-0458380.pdf

  19. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study gives attention to indicators of the predictive power of the generalized linear model (GLM), which are widely used but often have some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables with multicollinearity among them. The results show that the proposed regression correlation coefficient is better than the traditional one based on bias and the root mean square error (RMSE).
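
    For orientation, the traditional regression correlation coefficient mentioned above is simply the correlation between the observed response and the fitted mean E(Y|X) from the Poisson model. The sketch below computes it with statsmodels on simulated, deliberately collinear predictors; the paper's modified coefficient is not reproduced here.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 500
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)      # deliberately collinear with x1
    mu = np.exp(0.3 + 0.5 * x1 - 0.3 * x2)
    y = rng.poisson(mu)

    X = sm.add_constant(np.column_stack([x1, x2]))
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    mu_hat = fit.fittedvalues                      # estimate of E[Y | X]

    # Traditional regression correlation coefficient: corr(Y, E[Y|X])
    r = np.corrcoef(y, mu_hat)[0, 1]
    print(f"regression correlation coefficient: {r:.3f}")
    ```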

  20. A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem

    KAUST Repository

    Delaigle, Aurore

    2009-03-01

    Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher order versions of the estimator, however, is not straightforward and has remained an open problem for the last 15 years. We propose an innovative local polynomial estimator of any order in the errors-in-variables context, derive its design-adaptive asymptotic properties and study its finite sample performance on simulated examples. We not only provide a solution to a long-standing open problem, but also make methodological contributions to errors-in-variables regression, including local polynomial estimation of derivative functions.
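
    As background for readers unfamiliar with the technique, the ordinary (error-free covariate) local linear estimator solves a kernel-weighted least-squares problem at each evaluation point; the deconvolution-based errors-in-variables version proposed in the paper requires the Fourier transform of the kernel and is not reproduced. A minimal numpy sketch of the ordinary local linear fit, on simulated data:

    ```python
    import numpy as np

    def local_linear(x0, x, y, h):
        """Local linear estimate of E[Y|X=x0] with a Gaussian kernel and bandwidth h."""
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])  # local design: intercept + slope
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        return beta[0]                                  # intercept = fitted value at x0

    rng = np.random.default_rng(5)
    x = np.sort(rng.uniform(-3, 3, 300))
    y = np.sin(x) + rng.normal(0, 0.3, x.size)

    grid = np.linspace(-2.5, 2.5, 11)
    fit = np.array([local_linear(g, x, y, h=0.4) for g in grid])
    print(np.round(np.column_stack([grid, fit, np.sin(grid)]), 3))
    ```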

  1. Uncertainty propagation through an aeroelastic wind turbine model using polynomial surrogates

    DEFF Research Database (Denmark)

    Murcia Leon, Juan Pablo; Réthoré, Pierre-Elouan; Dimitrov, Nikolay Krasimirov

    2018-01-01

    Polynomial surrogates are used to characterize the energy production and lifetime equivalent fatigue loads for different components of the DTU 10 MW reference wind turbine under realistic atmospheric conditions. The variability caused by different turbulent inflow fields is captured by creating … -alignment. The methodology presented extends the deterministic power and thrust coefficient curves to uncertainty models and adds new variables like damage equivalent fatigue loads in different components of the turbine. These surrogate models can then be implemented inside other work-flows such as estimation of the uncertainty in annual energy production due to wind resource variability and/or robust wind power plant layout optimization. It can be concluded that it is possible to capture the global behavior of a modern wind turbine and its uncertainty under realistic inflow conditions using polynomial response surfaces.

  2. On the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag … Moving Average Polynomial Distributed Lag (ARMAPDL) model. … Global Journal of Mathematics and Statistics, Vol. 1. … Business and Economic Research Center.

  3. Matrix product formula for Macdonald polynomials

    Science.gov (United States)

    Cantini, Luigi; de Gier, Jan; Wheeler, Michael

    2015-09-01

    We derive a matrix product formula for symmetric Macdonald polynomials. Our results are obtained by constructing polynomial solutions of deformed Knizhnik-Zamolodchikov equations, which arise by considering representations of the Zamolodchikov-Faddeev and Yang-Baxter algebras in terms of t-deformed bosonic operators. These solutions are generalized probabilities for particle configurations of the multi-species asymmetric exclusion process, and form a basis of the ring of polynomials in n variables whose elements are indexed by compositions. For weakly increasing compositions (anti-dominant weights), these basis elements coincide with non-symmetric Macdonald polynomials. Our formulas imply a natural combinatorial interpretation in terms of solvable lattice models. They also imply that normalizations of stationary states of multi-species exclusion processes are obtained as Macdonald polynomials at q = 1.

  4. Matrix product formula for Macdonald polynomials

    International Nuclear Information System (INIS)

    Cantini, Luigi; Gier, Jan de; Michael Wheeler

    2015-01-01

    We derive a matrix product formula for symmetric Macdonald polynomials. Our results are obtained by constructing polynomial solutions of deformed Knizhnik–Zamolodchikov equations, which arise by considering representations of the Zamolodchikov–Faddeev and Yang–Baxter algebras in terms of t-deformed bosonic operators. These solutions are generalized probabilities for particle configurations of the multi-species asymmetric exclusion process, and form a basis of the ring of polynomials in n variables whose elements are indexed by compositions. For weakly increasing compositions (anti-dominant weights), these basis elements coincide with non-symmetric Macdonald polynomials. Our formulas imply a natural combinatorial interpretation in terms of solvable lattice models. They also imply that normalizations of stationary states of multi-species exclusion processes are obtained as Macdonald polynomials at q = 1. (paper)

  5. Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.

    Science.gov (United States)

    Kainu, Annette; Timonen, Kirsi

    2016-07-01

    Background: Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; thus modern modelling approaches using smoothing spline functions to better depict the transition during growth and ageing have recently been introduced. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in tables. The aim of this study was to produce polynomial estimates for these spline functions to use in lieu of lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods: Linear regression modelling was used to approximate the estimated values for Mspline and Sspline using polynomial functions similar to those in the international GLI2012 reference values. Estimated values were compared to the original calculations in absolute values, the derived predicted mean, and individually calculated z-scores using both values. Results: Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly at larger predicted volumes, but within a range of -0.018 to +0.022 litres of FEV1, representing a maximum difference of ± 0.4% in the predicted mean. Conclusions: Polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values.
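
    The core procedure, fitting a polynomial to tabulated spline values and checking the maximum deviation, can be sketched as follows. The age grid and Mspline values below are invented placeholders, not the Finnish reference data, and the choice of a fourth-degree polynomial in log(age) is only illustrative.

    ```python
    import numpy as np

    # Hypothetical lookup table: age (years) versus tabulated spline values (Mspline)
    age = np.arange(18, 95, 0.25)
    mspline = 0.12 * np.log(age) - 0.0008 * (age - 45) ** 2 / 45   # placeholder shape

    # Fit a low-order polynomial in log(age), similar in spirit to GLI-style equations
    deg = 4
    coeffs = np.polyfit(np.log(age), mspline, deg)
    approx = np.polyval(coeffs, np.log(age))

    err = approx - mspline
    print(f"max |error| = {np.abs(err).max():.5f}, "
          f"mean error = {err.mean():.2e} over {age.size} table rows")
    ```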

  6. Constructing general partial differential equations using polynomial and neural networks.

    Science.gov (United States)

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim to improve the polynomial derivative term series ability to approximate complicated periodic functions, as simple low order polynomials are not able to fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Gaussian Processes and Polynomial Chaos Expansion for Regression Problem: Linkage via the RKHS and Comparison via the KL Divergence

    Directory of Open Access Journals (Sweden)

    Liang Yan

    2018-03-01

    In this paper, we examine two widely-used approaches, the polynomial chaos expansion (PCE) and Gaussian process (GP) regression, for the development of surrogate models. The theoretical differences between the PCE and GP approximations are discussed. A state-of-the-art PCE approach is constructed based on high precision quadrature points; however, the need for truncation may result in potential precision loss; the GP approach performs well on small datasets and allows a fine and precise trade-off between fitting the data and smoothing, but its overall performance depends largely on the training dataset. The reproducing kernel Hilbert space (RKHS) and Mercer's theorem are introduced to form a linkage between the two methods. The theorem has proven that the two surrogates can be embedded in two isomorphic RKHS, by which we propose a novel method named Gaussian process on polynomial chaos basis (GPCB) that incorporates the PCE and GP. A theoretical comparison is made between the PCE and GPCB with the help of the Kullback–Leibler divergence. We show that the GPCB is as stable and accurate as the PCE method. Furthermore, the GPCB is a one-step Bayesian method that chooses the best subset of RKHS in which the true function should lie, while the PCE method requires an adaptive procedure. Simulations of 1D and 2D benchmark functions show that GPCB outperforms both the PCE and classical GP methods. In order to solve high dimensional problems, a random sample scheme with a constructive design (i.e., tensor product of quadrature points) is proposed to generate a valid training dataset for the GPCB method. This approach utilizes the high numerical accuracy underlying the quadrature points while ensuring computational feasibility. Finally, the experimental results show that our sample strategy has a higher accuracy than classical experimental designs; meanwhile, it is suitable for solving high dimensional problems.

  8. Numerical Simulation of Polynomial-Speed Convergence Phenomenon

    Science.gov (United States)

    Li, Yao; Xu, Hui

    2017-11-01

    We provide a hybrid method that captures the polynomial speed of convergence and polynomial speed of mixing for Markov processes. The hybrid method that we introduce is based on the coupling technique and renewal theory. We propose to replace some estimates in classical results about the ergodicity of Markov processes by numerical simulations when the corresponding analytical proof is difficult. After that, all remaining conclusions can be derived from rigorous analysis. We then apply our results to seek numerical justification for the ergodicity of two 1D microscopic heat conduction models. The mixing rate of these two models is expected to be polynomial but is very difficult to prove. In both examples, our numerical results match the expected polynomial mixing rate well.

  9. Applications of polynomial optimization in financial risk investment

    Science.gov (United States)

    Zeng, Meilan; Fu, Hongwei

    2017-09-01

    Recently, polynomial optimization has found many important applications in optimization, financial economics, eigenvalues of tensors, and other areas. This paper studies the applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve the specific cases. The results show that polynomial optimization is effective for some financial optimization problems.

  10. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics..

  11. Polynomial fuzzy model-based control systems stability analysis and control synthesis using membership function dependent techniques

    CERN Document Server

    Lam, Hak-Keung

    2016-01-01

    This book presents recent research on the stability analysis of polynomial-fuzzy-model-based control systems where the concepts of partially/imperfectly matched premises and membership-function-dependent analysis are considered. The membership-function-dependent analysis offers a new research direction for fuzzy-model-based control systems by taking into account the characteristics and information of the membership functions in the stability analysis. The book presents, on a research level, the most recent and advanced research results, promotes the research of polynomial-fuzzy-model-based control systems, and provides theoretical support and a research direction to postgraduate students and fellow researchers. Each chapter provides numerical examples to verify the analysis results, demonstrate the effectiveness of the proposed polynomial fuzzy control schemes, and explain the design procedure. The book is comprehensively written, enclosing detailed derivation steps and mathematical derivations also for readers...

  12. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide range of sensitivity analyses in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
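
    The mixing of a global (polynomial) and a local (Gaussian RBF) kernel can be illustrated with scikit-learn, which accepts a callable kernel for SVR. Note the assumptions: scikit-learn's ordinary polynomial kernel is used in place of the paper's orthogonal-polynomial kernel, the combination weight, degree and gamma are arbitrary illustrative choices, the data are simulated, and the paper's Sobol-index post-processing of the SVR coefficients is not reproduced.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
    from sklearn.svm import SVR

    def mixed_kernel(X, Y, w=0.5, degree=2, gamma=0.5):
        """Convex combination of a polynomial kernel and a Gaussian RBF kernel."""
        return (w * polynomial_kernel(X, Y, degree=degree)
                + (1.0 - w) * rbf_kernel(X, Y, gamma=gamma))

    rng = np.random.default_rng(6)
    X = rng.uniform(-1, 1, size=(200, 3))
    y = X[:, 0] ** 2 + np.sin(np.pi * X[:, 1]) + 0.1 * rng.normal(size=200)

    svr = SVR(kernel=mixed_kernel, C=10.0, epsilon=0.01)
    svr.fit(X[:150], y[:150])
    print("hold-out R^2:", round(svr.score(X[150:], y[150:]), 3))
    ```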

  13. LMI-based stability analysis of fuzzy-model-based control systems using approximated polynomial membership functions.

    Science.gov (United States)

    Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles

    2011-06-01

    Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.

  14. Tracking time-varying parameters with local regression

    DEFF Research Database (Denmark)

    Joensen, Alfred Karsten; Nielsen, Henrik Aalborg; Nielsen, Torben Skov

    2000-01-01

    This paper shows that the recursive least-squares (RLS) algorithm with forgetting factor is a special case of a varying-coefficient model, and a model which can easily be estimated via simple local regression. This observation allows us to formulate a new method which retains the RLS algorithm but extends it by including polynomial approximations. Simulation results are provided, which indicate that this new method is superior to the classical RLS method if the parameter variations are smooth.
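
    For reference, a plain RLS recursion with forgetting factor λ (the special case the paper starts from) looks as follows; the drifting-slope data are simulated, and the local polynomial extension described in the abstract is not included.

    ```python
    import numpy as np

    def rls_forgetting(X, y, lam=0.98, delta=100.0):
        """Recursive least squares with forgetting factor lam (0 < lam <= 1)."""
        n, p = X.shape
        theta = np.zeros(p)
        P = delta * np.eye(p)                 # large initial covariance
        history = np.empty((n, p))
        for t in range(n):
            x = X[t]
            k = P @ x / (lam + x @ P @ x)     # gain vector
            theta = theta + k * (y[t] - x @ theta)
            P = (P - np.outer(k, x) @ P) / lam
            history[t] = theta
        return history

    rng = np.random.default_rng(7)
    n = 400
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    true_slope = np.linspace(1.0, 3.0, n)     # smoothly drifting parameter
    y = 0.5 + true_slope * X[:, 1] + rng.normal(0, 0.2, n)

    est = rls_forgetting(X, y, lam=0.97)
    print("final estimates (intercept, slope):", np.round(est[-1], 3),
          "| true final slope:", round(true_slope[-1], 3))
    ```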

  15. Comparison of Linear and Non-linear Regression Analysis to Determine Pulmonary Pressure in Hyperthyroidism.

    Science.gov (United States)

    Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan

    2017-01-01

    This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model showing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by using an echocardiographic method and compared with 35 euthyroid (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing the values of arithmetical means was used. The functional relation between the two random variables (PAPs and each of the factors determining it within our research study) can be expressed by a linear or non-linear function. By applying the linear regression method, described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method, described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. We compared and validated these two models by calculating the coefficient of determination (criterion 1), comparing the residuals (criterion 2), applying the AIC criterion (criterion 3) and using the F-test (criterion 4). From the H-group, 47% had pulmonary hypertension that was completely reversible on reaching euthyroidism. The factors causing pulmonary hypertension were identified: previously known factors - level of free thyroxine, pulmonary vascular resistance, cardiac output; new factors identified in this study - pretreatment period, age, systolic blood pressure. According to the four criteria and to clinical judgment, we consider that the polynomial model (graphically, parabola-type) is better than the linear one. The better model showing the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study is given by a second-degree polynomial equation.
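
    The model comparison itself, a first-degree versus a second-degree fit judged by the coefficient of determination, AIC, and an F-test for the added quadratic term, can be sketched as below. The predictor, the simulated PAPs values, and the noise level are placeholders, not the study's clinical data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    x = rng.uniform(0, 10, 60)                    # e.g. a causal factor (arbitrary units)
    paps = 20 + 1.2 * x + 0.25 * x**2 + rng.normal(0, 3, x.size)   # simulated PAPs

    def fit_poly(x, y, deg):
        coef = np.polyfit(x, y, deg)
        resid = y - np.polyval(coef, x)
        rss = float(resid @ resid)
        k = deg + 1                               # number of estimated coefficients
        aic = x.size * np.log(rss / x.size) + 2 * k
        r2 = 1 - rss / float(((y - y.mean()) ** 2).sum())
        return coef, rss, aic, r2

    _, rss1, aic1, r2_1 = fit_poly(x, paps, 1)    # linear model
    _, rss2, aic2, r2_2 = fit_poly(x, paps, 2)    # quadratic (parabola-type) model

    # F-test for the nested comparison (linear vs quadratic)
    F = (rss1 - rss2) / (rss2 / (x.size - 3))
    p = stats.f.sf(F, 1, x.size - 3)
    print(f"linear:    R2={r2_1:.3f}  AIC={aic1:.1f}")
    print(f"quadratic: R2={r2_2:.3f}  AIC={aic2:.1f}")
    print(f"F={F:.2f}, p={p:.2g} for adding the quadratic term")
    ```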

  16. Specification Search for Identifying the Correct Mean Trajectory in Polynomial Latent Growth Models

    Science.gov (United States)

    Kim, Minjung; Kwok, Oi-Man; Yoon, Myeongsun; Willson, Victor; Lai, Mark H. C.

    2016-01-01

    This study investigated the optimal strategy for model specification search under the latent growth modeling (LGM) framework, specifically on searching for the correct polynomial mean or average growth model when there is no a priori hypothesized model in the absence of theory. In this simulation study, the effectiveness of different starting…

  17. Many-body orthogonal polynomial systems

    International Nuclear Information System (INIS)

    Witte, N.S.

    1997-03-01

    The fundamental methods employed in the moment problem, involving orthogonal polynomial systems, the Lanczos algorithm, continued fraction analysis and Padé approximants, have been combined with a cumulant approach and applied to the extensive many-body problem in physics. This has yielded many new exact results for many-body systems in the thermodynamic limit - for the ground state energy, for excited state gaps, for arbitrary ground state averages - and these results are of a nonperturbative nature. They flow from a confluence property of the three-term recurrence coefficients that arise, and define a general class of many-body orthogonal polynomials. These theorems constitute an analytical solution to the Lanczos algorithm in that they are expressed in terms of the three-term recurrence coefficients α and β. The results can also be applied approximately to non-solvable models in the form of an expansion in a descending series of the system size. The zeroth order of this expansion is just the manifestation of the central limit theorem, in which a Gaussian measure and Hermite polynomials arise. The first order represents the first non-trivial order, in which classical distribution functions like the binomial distributions arise and the associated class of orthogonal polynomials are the Meixner polynomials. Amongst examples of systems which have infinite order in the expansion are q-orthogonal polynomials, where q depends on the system size in a particular way. (author)

  18. Topological quantum information, virtual Jones polynomials and Khovanov homology

    International Nuclear Information System (INIS)

    Kauffman, Louis H

    2011-01-01

    In this paper, we give a quantum statistical interpretation of the bracket polynomial state sum 〈K〉, the Jones polynomial V K (t) and virtual knot theory versions of the Jones polynomial, including the arrow polynomial. We use these quantum mechanical interpretations to give new quantum algorithms for these Jones polynomials. In those cases where the Khovanov homology is defined, the Hilbert space C(K) of our model is isomorphic with the chain complex for Khovanov homology with coefficients in the complex numbers. There is a natural unitary transformation U:C(K) → C(K) such that 〈K〉 = Trace(U), where 〈K〉 denotes the evaluation of the state sum model for the corresponding polynomial. We show that for the Khovanov boundary operator ∂:C(K) → C(K), we have the relationship ∂U + U∂ = 0. Consequently, the operator U acts on the Khovanov homology, and we obtain a direct relationship between the Khovanov homology and this quantum algorithm for the Jones polynomial. (paper)

  19. Einstein’s gravity from a polynomial affine model

    Science.gov (United States)

    Castillo-Felisola, Oscar; Skirzewski, Aureliano

    2018-03-01

    We show that the effective field equations for a recently formulated polynomial affine model of gravity, in the sector of a torsion-free connection, accept general Einstein manifolds—with or without cosmological constant—as solutions. Moreover, the effective field equations are partially those obtained from a gravitational Yang–Mills theory known as Stephenson–Kilmister–Yang theory. Additionally, we find a generalization of a minimally coupled massless scalar field in General Relativity within a ‘minimally’ coupled scalar field in this affine model. Finally, we present a brief (perturbative) analysis of the propagators of the gravitational theory, and count the degrees of freedom. For completeness, we prove that a Birkhoff-like theorem is valid for the analyzed sector.

  20. Influence of regression model and incremental test protocol on the relationship between lactate threshold using the maximal-deviation method and performance in female runners.

    Science.gov (United States)

    Machado, Fabiana Andrade; Nakamura, Fábio Yuzo; Moraes, Solange Marta Franzói De

    2012-01-01

    This study examined the influence of the regression model and the initial intensity of an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and endurance performance. Sixteen non-competitive, recreational female runners performed a discontinuous incremental treadmill test. The initial speed was set at 7 km · h⁻¹ and increased every 3 min by 1 km · h⁻¹, with a 30-s rest between the stages used for earlobe capillary blood sample collection. Lactate-speed data were fitted by an exponential-plus-constant and a third-order polynomial equation. The lactate threshold was determined for both regression equations using all the coordinates, excluding the first point, and excluding the first and second points. The mean speed of a 10-km road race was the performance index (3.04 ± 0.22 m · s⁻¹). The exponentially-derived lactate threshold had a higher correlation (0.98 ≤ r ≤ 0.99) and smaller standard error of estimate (SEE) (0.04 ≤ SEE ≤ 0.05 m · s⁻¹) with performance than the polynomially-derived equivalent (0.83 ≤ r ≤ 0.89; 0.10 ≤ SEE ≤ 0.13 m · s⁻¹). The exponential lactate threshold was significantly greater than the polynomial equivalent and provides a performance index that is independent of the initial intensity of the incremental test and better than the polynomial equivalent.
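
    A minimal sketch of the maximal-deviation (Dmax) procedure described here: fit the lactate-speed curve, with both an exponential-plus-constant model via scipy's curve_fit and a third-order polynomial, and take the speed at which the fitted curve is farthest from the straight line joining its first and last points. The stage speeds and lactate values below are illustrative, not the study's data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def exp_plus_const(v, a, b, c):
        return a + b * np.exp(c * v)

    def dmax_threshold(speed, curve_vals):
        """Speed at which the fitted curve deviates most from the end-point chord."""
        p1 = np.array([speed[0], curve_vals[0]])
        p2 = np.array([speed[-1], curve_vals[-1]])
        d = (p2 - p1) / np.linalg.norm(p2 - p1)
        pts = np.column_stack([speed, curve_vals]) - p1
        dist = np.abs(pts[:, 0] * d[1] - pts[:, 1] * d[0])   # perpendicular distance
        return speed[np.argmax(dist)]

    # Simulated incremental-test data: speed (km/h) and blood lactate (mmol/L)
    stage_speed = np.arange(7, 16)
    lactate = np.array([1.1, 1.2, 1.3, 1.5, 1.9, 2.6, 3.8, 5.6, 8.2])
    grid = np.linspace(stage_speed[0], stage_speed[-1], 500)

    popt, _ = curve_fit(exp_plus_const, stage_speed, lactate, p0=(1.0, 0.005, 0.5))
    lt_exp = dmax_threshold(grid, exp_plus_const(grid, *popt))

    poly = np.polyfit(stage_speed, lactate, 3)
    lt_poly = dmax_threshold(grid, np.polyval(poly, grid))

    print(f"Dmax threshold, exponential fit: {lt_exp:.2f} km/h")
    print(f"Dmax threshold, cubic fit:       {lt_poly:.2f} km/h")
    ```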

  1. Irreducible multivariate polynomials obtained from polynomials in ...

    Indian Academy of Sciences (India)

    Theorem A. If we write an irreducible polynomial f ∈ K[X] as a sum of polynomials a_0, ..., a_n ... This shows us that deg a_i = (n − i) deg f_2 for each i = 0, ..., n, so min k > 0 ...

  2. Complex models of nodal nuclear data

    International Nuclear Information System (INIS)

    Dufek, Jan

    2011-01-01

    During the core simulations, nuclear data are required at various nodal thermal-hydraulic and fuel burnup conditions. The nodal data are also partially affected by thermal-hydraulic and fuel burnup conditions in surrounding nodes as these change the neutron energy spectrum in the node. Therefore, the nodal data are functions of many parameters (state variables), and the more state variables are considered by the nodal data models the more accurate and flexible the models get. The existing table and polynomial regression models, however, cannot reflect the data dependences on many state variables. As for the table models, the number of mesh points (and necessary lattice calculations) grows exponentially with the number of variables. As for the polynomial regression models, the number of possible multivariate polynomials exceeds the limits of existing selection algorithms that should identify a few dozens of the most important polynomials. Also, the standard scheme of lattice calculations is not convenient for modelling the data dependences on various burnup conditions since it performs only a single or few burnup calculations at fixed nominal conditions. We suggest a new efficient algorithm for selecting the most important multivariate polynomials for the polynomial regression models so that dependences on many state variables can be considered. We also present a new scheme for lattice calculations where a large number of burnup histories are accomplished at varied nodal conditions. The number of lattice calculations being performed and the number of polynomials being analysed are controlled and minimised while building the nodal data models of a required accuracy. (author)

  3. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated with all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely applicable...
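
    Selecting the kernel and tuning parameters from small grids by cross-validation, as recommended above, is straightforward with scikit-learn's KernelRidge and GridSearchCV. The grids and simulated data below are arbitrary, and the Sinc kernel discussed in the paper is not built into scikit-learn and is therefore omitted.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(9)
    X = rng.normal(size=(300, 4))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=300)

    param_grid = [
        {"kernel": ["rbf"], "gamma": [0.05, 0.1, 0.5, 1.0],
         "alpha": [1e-3, 1e-2, 1e-1, 1.0]},
        {"kernel": ["polynomial"], "degree": [2, 3],
         "alpha": [1e-3, 1e-2, 1e-1, 1.0]},
    ]
    search = GridSearchCV(KernelRidge(), param_grid, cv=5,
                          scoring="neg_mean_squared_error")
    search.fit(X, y)
    print("best parameters:", search.best_params_)
    print("CV RMSE:", round(np.sqrt(-search.best_score_), 3))
    ```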

  4. Branched polynomial covering maps

    DEFF Research Database (Denmark)

    Hansen, Vagn Lundsgaard

    2002-01-01

    A Weierstrass polynomial with multiple roots in certain points leads to a branched covering map. With this as the guiding example, we formally define and study the notion of a branched polynomial covering map. We shall prove that many finite covering maps are polynomial outside a discrete branch set. Particular studies are made of branched polynomial covering maps arising from Riemann surfaces and from knots in the 3-sphere. (C) 2001 Elsevier Science B.V. All rights reserved.

  5. Analysis of the Level-Release Polynomial from a Hydroelectric Plant

    Directory of Open Access Journals (Sweden)

    Ieda Hidalgo

    2012-02-01

    The mathematical representation of the tailrace elevation as a function of the water release can be modified, for example, by the geomorphologic impact of large floods. The level-release polynomial of a hydroelectric plant is important information for the computational models used for optimization and simulation of the operation of power generation systems, and these models depend on data quality to provide reliable results. Therefore, this paper presents a method for adjusting the tailrace polynomial based on operation data recorded by the plant's owner or operating company. The proposed method uses a non-linear regression tool, such as Trendline in Excel. A case study has been applied to the data from a large Brazilian hydroelectric plant whose operation is under the coordination of the Electric System National Operator. The benefits of the data correction are analyzed using a simulation model of hydroelectric plant operation. This simulator is used to reproduce the past operation of the plant, first with the official data and second with the adjusted data. The results show significant improvements in the quality of the data, helping to bring the real and simulated operation closer together.

  6. Technique for image interpolation using polynomial transforms

    NARCIS (Netherlands)

    Escalante Ramírez, B.; Martens, J.B.; Haskell, G.G.; Hang, H.M.

    1993-01-01

    We present a new technique for image interpolation based on polynomial transforms. This is an image representation model that analyzes an image by locally expanding it into a weighted sum of orthogonal polynomials. In the discrete case, the image segment within every window of analysis is expanded into a weighted sum of discrete orthogonal polynomials.

  7. Discrete-time state estimation for stochastic polynomial systems over polynomial observations

    Science.gov (United States)

    Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.

    2018-07-01

    This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations confused with additive white Gaussian noises. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show effectiveness of the proposed filter compared to the extended Kalman filter.

  8. Polynomial selection in number field sieve for integer factorization

    Directory of Open Access Journals (Sweden)

    Gireesh Pandey

    2016-09-01

    The general number field sieve (GNFS) is the fastest algorithm for factoring large composite integers that are made up of two prime numbers. Polynomial selection is an important step of GNFS, and the asymptotic runtime depends on the choice of good polynomial pairs. In this paper, we present a polynomial selection algorithm that is modelled with size and root properties. The correlations between polynomial coefficients and the number of relations have been explored through experimental findings.

  9. A new class of generalized polynomials associated with Hermite and Bernoulli polynomials

    Directory of Open Access Journals (Sweden)

    M. A. Pathan

    2015-05-01

    In this paper, we introduce a new class of generalized polynomials associated with the modified Milne-Thomson's polynomials Φ_n^{(α)}(x,ν) of degree n and order α introduced by Derre and Simsek. The concepts of Bernoulli numbers B_n, Bernoulli polynomials B_n(x), generalized Bernoulli numbers B_n(a,b), generalized Bernoulli polynomials B_n(x;a,b,c) of Luo et al., Hermite-Bernoulli polynomials {}_HB_n(x,y) of Dattoli et al. and {}_HB_n^{(α)}(x,y) of Pathan are generalized to the one {}_HB_n^{(α)}(x,y;a,b,c), which is called the generalized polynomial depending on three positive real parameters. Numerous properties of these polynomials and some relationships between B_n, B_n(x), B_n(a,b), B_n(x;a,b,c) and {}_HB_n^{(α)}(x,y;a,b,c) are established. Some implicit summation formulae and general symmetry identities are derived by using different analytical means and applying generating functions. These results extend some known summations and identities of generalized Bernoulli numbers and polynomials.

  10. Contributions to fuzzy polynomial techniques for stability analysis and control

    OpenAIRE

    Pitarch Pérez, José Luis

    2014-01-01

    The present thesis employs fuzzy-polynomial control techniques in order to improve the stability analysis and control of nonlinear systems. Initially, it reviews the most widespread techniques in the field of Takagi-Sugeno fuzzy systems, as well as the most relevant results on polynomial and fuzzy polynomial systems. The basic framework uses fuzzy polynomial models obtained by Taylor series and sum-of-squares techniques (semidefinite programming) in order to obtain stability guarantees...

  11. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...

  12. Branched polynomial covering maps

    DEFF Research Database (Denmark)

    Hansen, Vagn Lundsgaard

    1999-01-01

    A Weierstrass polynomial with multiple roots in certain points leads to a branched covering map. With this as the guiding example, we formally define and study the notion of a branched polynomial covering map. We shall prove that many finite covering maps are polynomial outside a discrete branch set. Particular studies are made of branched polynomial covering maps arising from Riemann surfaces and from knots in the 3-sphere.

  13. Model-assisted probability of detection of flaws in aluminum blocks using polynomial chaos expansions

    Science.gov (United States)

    Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming

    2018-04-01

    Probability of detection (POD) is widely used for measuring reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT testing, the empirical information, needed for POD methods, can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat bottom hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.

  14. On the number of polynomial solutions of Bernoulli and Abel polynomial differential equations

    Science.gov (United States)

    Cima, A.; Gasull, A.; Mañosas, F.

    2017-12-01

    In this paper we determine the maximum number of polynomial solutions of Bernoulli differential equations and of some integrable polynomial Abel differential equations. As far as we know, the tools used to prove our results have not been utilized before for studying this type of questions. We show that the addressed problems can be reduced to know the number of polynomial solutions of a related polynomial equation of arbitrary degree. Then we approach to these equations either applying several tools developed to study extended Fermat problems for polynomial equations, or reducing the question to the computation of the genus of some associated planar algebraic curves.

  15. On generalized Fibonacci and Lucas polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Nalli, Ayse [Department of Mathematics, Faculty of Sciences, Selcuk University, 42075 Campus-Konya (Turkey)], E-mail: aysenalli@yahoo.com; Haukkanen, Pentti [Department of Mathematics, Statistics and Philosophy, 33014 University of Tampere (Finland)], E-mail: mapehau@uta.fi

    2009-12-15

    Let h(x) be a polynomial with real coefficients. We introduce h(x)-Fibonacci polynomials that generalize both Catalan's Fibonacci polynomials and Byrd's Fibonacci polynomials and also the k-Fibonacci numbers, and we provide properties for these h(x)-Fibonacci polynomials. We also introduce h(x)-Lucas polynomials that generalize the Lucas polynomials and present properties of these polynomials. In the last section we introduce the matrix Q_h(x) that generalizes the Q-matrix whose powers generate the Fibonacci numbers.
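
    Assuming the usual defining recurrences for such families, F_{h,0}(x) = 0, F_{h,1}(x) = 1, F_{h,n+1}(x) = h(x)F_{h,n}(x) + F_{h,n-1}(x), and L_{h,0}(x) = 2, L_{h,1}(x) = h(x), L_{h,n+1}(x) = h(x)L_{h,n}(x) + L_{h,n-1}(x), the polynomials are easy to generate symbolically. The sketch below does so with sympy; the stated initial conditions are an assumption to be checked against the paper.

    ```python
    import sympy as sp

    x = sp.symbols("x")

    def h_fibonacci(h, n):
        """First n h(x)-Fibonacci polynomials: F_0 = 0, F_1 = 1, F_{k+1} = h*F_k + F_{k-1}."""
        seq = [sp.Integer(0), sp.Integer(1)]
        for _ in range(n - 2):
            seq.append(sp.expand(h * seq[-1] + seq[-2]))
        return seq[:n]

    def h_lucas(h, n):
        """First n h(x)-Lucas polynomials: L_0 = 2, L_1 = h, L_{k+1} = h*L_k + L_{k-1}."""
        seq = [sp.Integer(2), h]
        for _ in range(n - 2):
            seq.append(sp.expand(h * seq[-1] + seq[-2]))
        return seq[:n]

    # h(x) = x gives the classical Fibonacci/Lucas polynomials;
    # a constant h(x) = k gives the k-Fibonacci numbers.
    for name, h in [("x", x), ("2x", 2 * x), ("k=3", sp.Integer(3))]:
        print(f"h(x) = {name}: F_0..F_5 =", h_fibonacci(h, 6))
    print("L_0..L_5 for h(x) = x:", h_lucas(x, 6))
    ```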

  16. H∞ Control of Polynomial Fuzzy Systems: A Sum of Squares Approach

    Directory of Open Access Journals (Sweden)

    Bomo W. Sanjaya

    2014-07-01

    This paper proposes the control design of a nonlinear polynomial fuzzy system with an H∞ performance objective using a sum of squares (SOS) approach. The fuzzy model and controller are represented by a polynomial fuzzy model and controller. The design condition is obtained by using polynomial Lyapunov functions that not only guarantee stability but also satisfy the H∞ performance objective. The design condition is represented in terms of an SOS that can be numerically solved via SOSTOOLS. A simulation study is presented to show the effectiveness of the SOS-based H∞ control design for nonlinear polynomial fuzzy systems.

  17. Influence of regression model and initial intensity of an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and running performance.

    Science.gov (United States)

    Santos-Concejero, Jordan; Tucker, Ross; Granados, Cristina; Irazusta, Jon; Bidaurrazaga-Letona, Iraia; Zabala-Lili, Jon; Gil, Susana María

    2014-01-01

    This study investigated the influence of the regression model and initial intensity during an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and performance in elite-standard runners. Twenty-three well-trained runners completed a discontinuous incremental running test on a treadmill. Speed started at 9 km · h(-1) and increased by 1.5 km · h(-1) every 4 min until exhaustion, with a minute of recovery for blood collection. Lactate-speed data were fitted by exponential and polynomial models. The lactate threshold was determined for both models, using all the co-ordinates, excluding the first, and excluding the first and second points. The exponential lactate threshold was significantly greater than the polynomial equivalent in any co-ordinate condition, and its relationship with performance is independent of the initial intensity of the test.

  18. Generalized Heine–Stieltjes and Van Vleck polynomials associated with two-level, integrable BCS models

    International Nuclear Information System (INIS)

    Marquette, Ian; Links, Jon

    2012-01-01

    We study the Bethe ansatz/ordinary differential equation (BA/ODE) correspondence for Bethe ansatz equations that belong to a certain class of coupled, nonlinear, algebraic equations. Through this approach we numerically obtain the generalized Heine–Stieltjes and Van Vleck polynomials in the degenerate, two-level limit for four cases of integrable Bardeen–Cooper–Schrieffer (BCS) pairing models. These are the s-wave pairing model, the p + ip-wave pairing model, the p + ip pairing model coupled to a bosonic molecular pair degree of freedom, and a newly introduced extended d + id-wave pairing model with additional interactions. The zeros of the generalized Heine–Stieltjes polynomials provide solutions of the corresponding Bethe ansatz equations. We compare the roots of the ground states with curves obtained from the solution of a singular integral equation approximation, which allows for a characterization of ground-state phases in these systems. Our techniques also permit the computation of the roots of the excited states. These results illustrate how the BA/ODE correspondence can be used to provide new numerical methods to study a variety of integrable systems. (paper)

  19. Exponential time paradigms through the polynomial time lens

    NARCIS (Netherlands)

    Drucker, A.; Nederlof, J.; Santhanam, R.; Sankowski, P.; Zaroliagis, C.

    2016-01-01

    We propose a general approach to modelling algorithmic paradigms for the exact solution of NP-hard problems. Our approach is based on polynomial time reductions to succinct versions of problems solvable in polynomial time. We use this viewpoint to explore and compare the power of paradigms such as

  20. Nonnegativity of uncertain polynomials

    Directory of Open Access Journals (Sweden)

    Šiljak Dragoslav D.

    1998-01-01

    The purpose of this paper is to derive tests for robust nonnegativity of scalar and matrix polynomials, which are algebraic, recursive, and can be completed in a finite number of steps. Polytopic families of polynomials are considered with various characterizations of parameter uncertainty including affine, multilinear, and polynomic structures. The zero exclusion condition for polynomial positivity is also proposed for general parameter dependencies. By reformulating the robust stability problem of complex polynomials as positivity of real polynomials, we obtain new sufficient conditions for robust stability involving multilinear structures, which can be tested using only real arithmetic. The obtained results are applied to robust matrix factorization, strict positive realness, and absolute stability of multivariable systems involving parameter dependent transfer function matrices.

  1. Computing derivative-based global sensitivity measures using polynomial chaos expansions

    International Nuclear Information System (INIS)

    Sudret, B.; Mai, C.V.

    2015-01-01

    In the field of computer experiments, sensitivity analysis aims at quantifying the relative importance of each input parameter (or combinations thereof) of a computational model with respect to the model output uncertainty. Variance decomposition methods leading to the well-known Sobol' indices are recognized as accurate techniques, albeit at a rather high computational cost. The use of polynomial chaos expansions (PCE) to compute Sobol' indices alleviates this computational burden. However, when dealing with large dimensional input vectors, it is good practice to first use screening methods in order to discard unimportant variables. The derivative-based global sensitivity measures (DGSMs) have been developed recently for this purpose. In this paper we show how polynomial chaos expansions may be used to compute DGSMs analytically as a mere post-processing step. This requires the analytical derivation of derivatives of the orthonormal polynomials which enter PC expansions. Closed-form expressions for Hermite, Legendre and Laguerre polynomial expansions are given. The efficiency of the approach is illustrated on two well-known benchmark problems in sensitivity analysis. - Highlights: • Derivative-based global sensitivity measures (DGSM) have been developed for screening purposes. • Polynomial chaos expansions (PC) are used as a surrogate model of the original computational model. • From a PC expansion the DGSMs can be computed analytically. • The paper provides the derivatives of Hermite, Legendre and Laguerre polynomials for this purpose
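
    A small sketch of the post-processing idea described here, restricted to one uniform input on [-1, 1] with a Legendre expansion; the toy model, the degree-6 truncation and the discrete least-squares fit are assumptions made for illustration only.

        import numpy as np
        from numpy.polynomial import legendre as L

        f = lambda x: np.exp(0.8 * x) + 0.3 * x**3          # toy model (assumption)

        # Degree-6 Legendre expansion fitted by discrete least squares, standing in
        # for the regression-based PCE of the records.
        x_train = np.linspace(-1.0, 1.0, 200)
        coef = L.legfit(x_train, f(x_train), deg=6)

        # DGSM = E[(df/dx)^2] for a uniform input on [-1, 1]: differentiate the
        # expansion and integrate against the density 1/2 by Gauss-Legendre quadrature.
        dcoef = L.legder(coef)
        nodes, weights = L.leggauss(20)
        dgsm = 0.5 * np.sum(weights * L.legval(nodes, dcoef) ** 2)

        print(f"DGSM estimate from the PCE surrogate: {dgsm:.4f}")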

  2. Global sensitivity analysis using polynomial chaos expansions

    International Nuclear Information System (INIS)

    Sudret, Bruno

    2008-01-01

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) on the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non-intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.

  3. Global sensitivity analysis using polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Sudret, Bruno [Electricite de France, R and D Division, Site des Renardieres, F 77818 Moret-sur-Loing Cedex (France)], E-mail: bruno.sudret@edf.fr

    2008-07-15

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) on the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non-intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.
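
    A hedged sketch of the post-processing step described in the two records above: with an orthonormal polynomial chaos basis, the Sobol' indices reduce to sums of squared PCE coefficients. The multi-indices and coefficient values below are invented; a real analysis would obtain them from the regression-based fit the paper proposes.

        import numpy as np

        # Orthonormal PCE terms for two inputs: multi-index -> coefficient (invented values).
        pce = {
            (0, 0): 1.20,   # mean term
            (1, 0): 0.80,
            (2, 0): 0.15,
            (0, 1): 0.50,
            (0, 2): 0.05,
            (1, 1): 0.30,   # interaction term
        }

        total_var = sum(c**2 for idx, c in pce.items() if any(idx))

        def first_order_index(i):
            """Share of variance from terms involving only input i."""
            num = sum(c**2 for idx, c in pce.items()
                      if idx[i] > 0 and all(a == 0 for j, a in enumerate(idx) if j != i))
            return num / total_var

        def total_index(i):
            """Share of variance from all terms involving input i."""
            return sum(c**2 for idx, c in pce.items() if idx[i] > 0) / total_var

        for i in range(2):
            print(f"x{i + 1}: S = {first_order_index(i):.3f}, ST = {total_index(i):.3f}")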

  4. On Multiple Polynomials of Capelli Type

    Directory of Open Access Journals (Sweden)

    S.Y. Antonov

    2016-03-01

    This paper deals with the class of Capelli polynomials in the free associative algebra F{Z} (where F is an arbitrary field and Z is a countable set), generalizing the construction of multiple Capelli polynomials. The fundamental properties of the introduced Capelli polynomials are provided. In particular, a decomposition of the Capelli polynomials by means of polynomials of the same type is shown. Furthermore, some relations between their T-ideals are revealed. A connection between double Capelli polynomials and Capelli quasi-polynomials is established.

  5. Genetic Analysis of Milk Yield Using Random Regression Test Day Model in Tehran Province Holstein Dairy Cow

    Directory of Open Access Journals (Sweden)

    A. Seyeddokht

    2012-09-01

    In this research, a random regression test-day model was used to estimate heritability values and genetic correlations between test-day milk records. A total of 140357 monthly test-day milk records belonging to 28292 first-lactation Holstein cattle (milked three times a day), distributed in 165 herds of Tehran province and calved from 2001 to 2010, were used. The fixed effects of herd-year-month of calving as contemporary group, and of age at calving and Holstein gene percentage as covariates, were fitted. A 4th-order orthogonal Legendre polynomial was implemented to take account of genetic and environmental aspects of milk production over the course of lactation. RRMs using Legendre polynomials as base functions appear to be the most adequate to describe the covariance structure of the data. The results showed that the average heritability for the second half of the lactation period was higher than that of the first half. The heritability value for the first month was lowest (0.117) and for the eighth month of lactation was highest (0.230) compared to the other months of lactation. Because genetic variation increased gradually and residual variance was high in the first months of lactation, heritabilities differed over the course of lactation. RRMs with a higher number of parameters were more useful for describing the genetic variation of test-day milk yield throughout the lactation. In this research, genetic parameters and genetic correlations were estimated with a random regression test-day model, which therefore provides a suitable way to account for these parameters.
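
    A minimal sketch of the Legendre covariables that underlie random regression test-day models of this kind; the lactation window (5-305 days in milk), the 4th-order basis and the normalisation are assumptions chosen to mirror the record, and real analyses rely on specialised mixed-model software.

        import numpy as np
        from numpy.polynomial import legendre as L

        def legendre_covariables(dim, dim_min=5, dim_max=305, order=4):
            """Normalised Legendre covariables at standardised days in milk (assumed window)."""
            t = 2.0 * (np.asarray(dim, dtype=float) - dim_min) / (dim_max - dim_min) - 1.0
            cols = []
            for j in range(order + 1):
                coef = np.zeros(j + 1)
                coef[j] = 1.0
                cols.append(np.sqrt((2 * j + 1) / 2.0) * L.legval(t, coef))
            return np.column_stack(cols)    # one row per test day, one column per coefficient

        Z = legendre_covariables([5, 65, 125, 185, 245, 305])
        print(Z.round(3))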

  6. Regression modeling of ground-water flow

    Science.gov (United States)

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  7. Closed-form estimates of the domain of attraction for nonlinear systems via fuzzy-polynomial models.

    Science.gov (United States)

    Pitarch, José Luis; Sala, Antonio; Ariño, Carlos Vicente

    2014-04-01

    In this paper, the domain of attraction of the origin of a nonlinear system is estimated in closed form via level sets with polynomial boundaries, iteratively computed. In particular, the domain of attraction is expanded from a previous estimate, such as a classical Lyapunov level set. With the use of fuzzy-polynomial models, the domain of attraction analysis can be carried out via sum of squares optimization and an iterative algorithm. The result is a function that bounds the domain of attraction, free from the usual restriction of being positive and decrescent in all the interior of its level sets.

  8. Chromatic polynomials for simplicial complexes

    DEFF Research Database (Denmark)

    Møller, Jesper Michael; Nord, Gesche

    2016-01-01

    In this note we consider s-chromatic polynomials for finite simplicial complexes. When s=1, the 1-chromatic polynomial is just the usual graph chromatic polynomial of the 1-skeleton. In general, the s-chromatic polynomial depends on the s-skeleton and its value at r...

  9. H∞ Control of Polynomial Fuzzy Systems: A Sum of Squares Approach

    OpenAIRE

    Bomo W. Sanjaya; Bambang Riyanto Trilaksono; Arief Syaichu-Rohman

    2014-01-01

    This paper proposes the control design of a nonlinear polynomial fuzzy system with H∞ performance objective using a sum of squares (SOS) approach. Fuzzy model and controller are represented by a polynomial fuzzy model and controller. The design condition is obtained by using polynomial Lyapunov functions that not only guarantee stability but also satisfy the H∞ performance objective. The design condition is represented in terms of an SOS that can be numerically solved via the SOSTOOLS. A simul...

  10. Roots of the Chromatic Polynomial

    DEFF Research Database (Denmark)

    Perrett, Thomas

    The chromatic polynomial of a graph G is a univariate polynomial whose evaluation at any positive integer q enumerates the proper q-colourings of G. It was introduced in connection with the famous four colour theorem but has recently found other applications in the field of statistical physics...... extend Thomassen’s technique to the Tutte polynomial and as a consequence, deduce a density result for roots of the Tutte polynomial. This partially answers a conjecture of Jackson and Sokal. Finally, we refocus our attention on the chromatic polynomial and investigate the density of chromatic roots...

  11. Automatic Control Systems Modeling by Volterra Polynomials

    Directory of Open Access Journals (Sweden)

    S. V. Solodusha

    2012-01-01

    The problem of the existence of the solutions of polynomial Volterra integral equations of the first kind of the second degree is considered. An algorithm of the numerical solution of one class of Volterra nonlinear systems of the first kind is developed. Numerical results for test examples are presented.

  12. Method of moments approach to pricing double barrier contracts in polynomial jump-diffusion models

    NARCIS (Netherlands)

    Eriksson, B.; Pistorius, M.

    2011-01-01

    We present a method of moments approach to pricing double barrier contracts when the underlying is modelled by a polynomial jump-diffusion. By general principles the price is linked to certain infinite dimensional linear programming problems. Subsequently approximating these by finite

  13. Fitting the Fractional Polynomial Model to Non-Gaussian Longitudinal Data

    Directory of Open Access Journals (Sweden)

    Ji Hoon Ryoo

    2017-08-01

    As in cross-sectional studies, longitudinal studies involve non-Gaussian data such as binomial, Poisson, gamma, and inverse-Gaussian distributions, and multivariate exponential families. A number of statistical tools have thus been developed to deal with non-Gaussian longitudinal data, including analytic techniques to estimate parameters in both fixed and random effects models. However, growth modeling with non-Gaussian data is as yet somewhat limited when considering the transformed expectation of the response via a linear predictor as a functional form of explanatory variables. In this study, we introduce a fractional polynomial model (FPM) that can be applied to model non-linear growth with non-Gaussian longitudinal data and demonstrate its use by fitting two empirical binary and count data models. The results clearly show the efficiency and flexibility of the FPM for such applications.
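
    A hedged sketch of a first-degree fractional polynomial fit for count data in the spirit of the FPM: scan the conventional power set and keep the transformation with the smallest deviance under a Poisson GLM. The simulated data and the use of statsmodels are assumptions for illustration.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        t = rng.uniform(0.5, 5.0, size=300)                  # simulated exposure/time
        y = rng.poisson(np.exp(0.3 + 1.2 * np.log(t)))       # true curve uses log(t)

        POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]             # FP convention; 0 means log

        def fp_transform(x, p):
            return np.log(x) if p == 0 else x ** p

        best = None
        for p in POWERS:
            X = sm.add_constant(fp_transform(t, p))
            fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
            if best is None or fit.deviance < best[1]:
                best = (p, fit.deviance)

        print(f"Selected power: {best[0]}, deviance: {best[1]:.1f}")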

  14. Regression Models for Market-Shares

    DEFF Research Database (Denmark)

    Birch, Kristina; Olsen, Jørgen Kai; Tjur, Tue

    2005-01-01

    On the background of a data set of weekly sales and prices for three brands of coffee, this paper discusses various regression models and their relation to the multiplicative competitive-interaction model (the MCI model, see Cooper 1988, 1993) for market-shares. Emphasis is put on the interpretation of the parameters in relation to models for the total sales based on discrete choice models. Key words and phrases: MCI model, discrete choice model, market-shares, price elasticity, regression model.

  15. General Reducibility and Solvability of Polynomial Equations ...

    African Journals Online (AJOL)

    General Reducibility and Solvability of Polynomial Equations. ... Unlike quadratic, cubic, and quartic polynomials, the general quintic and higher degree polynomials cannot be solved algebraically in terms of finite number of additions, ... Galois Theory, Solving Polynomial Systems, Polynomial factorization, Polynomial Ring ...

  16. Certain non-linear differential polynomials sharing a non zero polynomial

    Directory of Open Access Journals (Sweden)

    Majumder Sujoy

    2015-10-01

    functions sharing a nonzero polynomial, and obtain two results which improve and generalize the results due to L. Liu [Uniqueness of meromorphic functions and differential polynomials, Comput. Math. Appl., 56 (2008), 3236-3245] and P. Sahoo [Uniqueness and weighted value sharing of meromorphic functions, Applied Math. E-Notes, 11 (2011), 23-32].

  17. Scramjet Isolator Modeling and Control

    Science.gov (United States)

    2011-12-01

    [Abstract not indexed; the available fragments of this report list sections on a static polynomial model, a continuous linear model with static polynomial input, and ARX/NARX models with static polynomial regressors.]

  18. Exergy Analysis of a Subcritical Reheat Steam Power Plant with Regression Modeling and Optimization

    Directory of Open Access Journals (Sweden)

    MUHIB ALI RAJPER

    2016-07-01

    In this paper, exergy analysis of a 210 MW SPP (Steam Power Plant) is performed. Firstly, the plant is modeled and validated, followed by a parametric study to show the effects of various operating parameters on the performance parameters. The net power output, energy efficiency, and exergy efficiency are taken as the performance parameters, while the condenser pressure, main steam pressure, bled steam pressures, main steam temperature, and reheat steam temperature are nominated as the operating parameters. Moreover, multiple polynomial regression models are developed to correlate each performance parameter with the operating parameters. The performance is then optimized by using the Direct-search method. According to the results, the net power output, energy efficiency, and exergy efficiency are calculated as 186.5 MW, 31.37% and 30.41%, respectively, under normal operating conditions as a base case. The condenser is a major contributor towards the energy loss, followed by the boiler, whereas the highest irreversibilities occur in the boiler and turbine. According to the parametric study, variation in the operating parameters greatly influences the performance parameters. The regression models appeared to be good estimators of the performance parameters. The optimum net power output, energy efficiency and exergy efficiency are obtained as 227.6 MW, 37.4% and 36.4%, respectively, which have been calculated along with optimal values of the selected operating parameters.

  19. Polynomial Heisenberg algebras

    International Nuclear Information System (INIS)

    Carballo, Juan M; C, David J Fernandez; Negro, Javier; Nieto, Luis M

    2004-01-01

    Polynomial deformations of the Heisenberg algebra are studied in detail. Some of their natural realizations are given by the higher order susy partners (and not only by those of first order, as is already known) of the harmonic oscillator for even-order polynomials. Here, it is shown that the susy partners of the radial oscillator play a similar role when the order of the polynomial is odd. Moreover, it will be proved that the general systems ruled by such kinds of algebras, in the quadratic and cubic cases, involve Painleve transcendents of types IV and V, respectively

  20. A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories

    Science.gov (United States)

    Narkawicz, Anthony; Munoz, Cesar

    2015-01-01

    In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
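
    A hedged sketch of why polynomial trajectories are convenient for conflict detection (this is not the formally verified algorithm of the record): the squared horizontal separation minus the required minimum is itself a polynomial in time, so a loss of separation within the lookahead window can be read off its real roots and its values at the window ends. The trajectories and separation minimum below are invented.

        import numpy as np

        T = 300.0      # lookahead time [s] (assumption)
        D = 9260.0     # horizontal separation minimum [m], roughly 5 NM (assumption)

        # Relative position of aircraft B w.r.t. A as polynomials in t
        # (highest degree first), invented for illustration.
        dx = np.array([0.002, -1.2, 20000.0])
        dy = np.array([-0.001, 0.8, -15000.0])

        # |d(t)|^2 - D^2 is again a polynomial in t.
        sep2 = np.polysub(np.polyadd(np.polymul(dx, dx), np.polymul(dy, dy)),
                          np.array([D ** 2]))

        def in_conflict(poly, horizon):
            if np.polyval(poly, 0.0) < 0 or np.polyval(poly, horizon) < 0:
                return True
            roots = np.roots(poly)
            real = roots[np.abs(roots.imag) < 1e-9].real
            return bool(np.any((real > 0) & (real < horizon)))

        print("conflict within lookahead:", in_conflict(sep2, T))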

  1. Polynomial optimization : Error analysis and applications

    NARCIS (Netherlands)

    Sun, Zhao

    2015-01-01

    Polynomial optimization is the problem of minimizing a polynomial function subject to polynomial inequality constraints. In this thesis we investigate several hierarchies of relaxations for polynomial optimization problems. Our main interest lies in understanding their performance, in particular how

  2. A Seemingly Unrelated Poisson Regression Model

    OpenAIRE

    King, Gary

    1989-01-01

    This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

  3. Birth-death processes and associated polynomials

    NARCIS (Netherlands)

    van Doorn, Erik A.

    2003-01-01

    We consider birth-death processes on the nonnegative integers and the corresponding sequences of orthogonal polynomials called birth-death polynomials. The sequence of associated polynomials linked with a sequence of birth-death polynomials and its orthogonalizing measure can be used in the analysis

  4. Extended biorthogonal matrix polynomials

    Directory of Open Access Journals (Sweden)

    Ayman Shehata

    2017-01-01

    The pair of biorthogonal matrix polynomials for commutative matrices was first introduced by Varma and Tasdelen in [22]. The main aim of this paper is to extend the properties of the pair of biorthogonal matrix polynomials of Varma and Tasdelen; certain generating matrix functions, finite series, some matrix recurrence relations, several important properties of matrix differential recurrence relations, biorthogonality relations and the matrix differential equation for the pair of biorthogonal matrix polynomials J_n^(A,B)(x, k) and K_n^(A,B)(x, k) are discussed. For the matrix polynomials J_n^(A,B)(x, k), various families of bilinear and bilateral generating matrix functions are constructed in the sequel.

  5. Learning Read-constant Polynomials of Constant Degree modulo Composites

    DEFF Research Database (Denmark)

    Chattopadhyay, Arkadev; Gavaldá, Richard; Hansen, Kristoffer Arnsfelt

    2011-01-01

    Boolean functions that have constant degree polynomial representation over a fixed finite ring form a natural and strict subclass of the complexity class ACC^0. They are also precisely the functions computable efficiently by programs over fixed and finite nilpotent groups. This class ... is not known to be learnable in any reasonable learning model. In this paper, we provide a deterministic polynomial time algorithm for learning Boolean functions represented by polynomials of constant degree over arbitrary finite rings from membership queries, with the additional constraint that each variable

  6. Piecewise Polynomial Aggregation as Preprocessing for Data Numerical Modeling

    Science.gov (United States)

    Dobronets, B. S.; Popova, O. A.

    2018-05-01

    Data aggregation issues for numerical modeling are reviewed in the present study. The authors discuss data aggregation procedures as preprocessing for subsequent numerical modeling. To calculate the data aggregation, the authors propose using numerical probabilistic analysis (NPA). An important feature of this study is how the authors represent the aggregated data. The study shows that the proposed approach to data aggregation can be interpreted as constructing the frequency distribution of a variable. To study its properties, the density function is used. For this purpose, the authors propose using piecewise polynomial models; a suitable example of such an approach is the spline. The authors show that their approach to data aggregation allows reducing the level of data uncertainty and significantly increasing the efficiency of numerical calculations. To demonstrate how closely the proposed methods correspond to reality, the authors developed a theoretical framework and considered numerical examples devoted to time series aggregation.
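
    A small sketch of the aggregation idea described above: a raw sample is summarised as a frequency distribution and its density is represented by a piecewise polynomial (here a cubic smoothing spline from scipy); the data and smoothing parameter are assumptions for illustration.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(6)
        sample = rng.normal(loc=10.0, scale=2.0, size=5000)   # synthetic raw data

        counts, edges = np.histogram(sample, bins=40, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])

        # Piecewise cubic representation of the aggregated density (smoothing level assumed).
        density = UnivariateSpline(centers, counts, k=3, s=0.01)
        print("estimated density at the mean:", float(density(10.0)))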

  7. Bannai-Ito polynomials and dressing chains

    OpenAIRE

    Derevyagin, Maxim; Tsujimoto, Satoshi; Vinet, Luc; Zhedanov, Alexei

    2012-01-01

    Schur-Delsarte-Genin (SDG) maps and Bannai-Ito polynomials are studied. SDG maps are related to dressing chains determined by quadratic algebras. The Bannai-Ito polynomials and their kernel polynomials -- the complementary Bannai-Ito polynomials -- are shown to arise in the framework of the SDG maps.

  8. Panel Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    González, Andrés; Terasvirta, Timo; Dijk, Dick van

    We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...

  9. Reduced Multivariate Polynomial Model for Manufacturing Costs Estimation of Piping Elements

    Directory of Open Access Journals (Sweden)

    Nibaldo Rodriguez

    2013-01-01

    This paper discusses the development and evaluation of a model for estimating the manufacturing costs of piping elements through the application of a Reduced Multivariate Polynomial (RMP). The model allows obtaining accurate estimations even when enough and adequate information is not available, a situation that typically occurs in the early stages of the design process of industrial products. The experimental evaluations show that the approach is capable, with low complexity, of reducing uncertainties and predicting costs with significant precision. Comparisons with a neural network also showed that the RMP performs better on a set of classical performance measures, with lower complexity and higher accuracy.
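
    A hedged sketch of a simplified reduced multivariate polynomial estimator fitted by regularised least squares; the basis below (per-variable powers plus powers of the variable sum) is only one possible reduced expansion and is not claimed to match the exact RMP formulation evaluated in the record. The cost data are synthetic.

        import numpy as np

        def reduced_basis(X, order=3):
            """Simplified reduced features: x_j^k and (sum_j x_j)^k, k = 1..order."""
            s = X.sum(axis=1, keepdims=True)
            feats = [np.ones((X.shape[0], 1))]
            for k in range(1, order + 1):
                feats.append(X ** k)
                feats.append(s ** k)
            return np.hstack(feats)

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, size=(200, 4))                  # synthetic piping-element attributes
        cost = 50 + 30 * X[:, 0] + 20 * X[:, 1] * X[:, 2] + rng.normal(0, 1, 200)

        Phi = reduced_basis(X)
        lam = 1e-3                                            # small ridge penalty (assumption)
        w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ cost)
        rmse = np.sqrt(np.mean((Phi @ w - cost) ** 2))
        print(f"RMSE on training data: {rmse:.2f}")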

  10. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    Science.gov (United States)

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from the second until the sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The optimal model was identified by the Bayesian Information Criterion. A model with a second-order LP for fixed effects, a second-order LP for additive genetic effects and a third-order LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environment to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environment to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but they were low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability, and such a breeding program would have no negative genetic impact on egg production.

  11. Interpretation of commonly used statistical regression models.

    Science.gov (United States)

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.

  12. Characterization of vegetative and grain filling periods of winter wheat by stepwise regression procedure. II. Grain filling period

    Directory of Open Access Journals (Sweden)

    Pržulj Novo

    2011-01-01

    In wheat, the rate and duration of dry matter accumulation and remobilization depend on genotype and growing conditions. The objective of this study was to determine the most appropriate polynomial regression, selected by a stepwise regression procedure, for describing the grain filling period in three winter wheat cultivars. The stepwise regression procedure showed that grain filling is a complex biological process and that it is difficult to offer a simple and appropriate polynomial equation that fits the pattern of changes in dry matter accumulation during the grain filling period, i.e., from anthesis to maximum grain weight, in winter wheat. If grain filling is to be represented with a high-order polynomial, quartic and quintic equations proved to be the most appropriate. In spite of certain disadvantages, a cubic equation from the stepwise regression could be used for describing the pattern of winter wheat grain filling.
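
    A minimal sketch of the model-order comparison discussed above: cubic, quartic and quintic polynomials are fitted to a grain-filling curve and compared with a simple information criterion; the sampling days and the logistic-like "true" curve are invented for illustration.

        import numpy as np

        days = np.arange(0, 45, 3, dtype=float)                   # days after anthesis (invented)
        true = 45.0 / (1.0 + np.exp(-(days - 20.0) / 5.0))         # mg per grain, logistic-like
        rng = np.random.default_rng(2)
        dm = true + rng.normal(0, 1.0, size=days.size)

        def aic(y, yhat, k):
            n = y.size
            rss = np.sum((y - yhat) ** 2)
            return n * np.log(rss / n) + 2 * k

        for deg in (3, 4, 5):
            coeffs = np.polyfit(days, dm, deg)
            print(f"degree {deg}: AIC = {aic(dm, np.polyval(coeffs, days), deg + 1):.1f}")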

  13. Vortices and polynomials: non-uniqueness of the Adler–Moser polynomials for the Tkachenko equation

    International Nuclear Information System (INIS)

    Demina, Maria V; Kudryashov, Nikolai A

    2012-01-01

    Stationary and translating relative equilibria of point vortices in the plane are studied. It is shown that stationary equilibria of any system containing point vortices with arbitrary choice of circulations can be described with the help of the Tkachenko equation. It is also obtained that translating relative equilibria of point vortices with arbitrary circulations can be constructed using a generalization of the Tkachenko equation. Roots of any pair of polynomials solving the Tkachenko equation and the generalized Tkachenko equation are proved to give positions of point vortices in stationary and translating relative equilibria accordingly. These results are valid even if the polynomials in a pair have multiple or common roots. It is obtained that the Adler–Moser polynomial provides non-unique polynomial solutions of the Tkachenko equation. It is shown that the generalized Tkachenko equation possesses polynomial solutions with degrees that are not triangular numbers. (paper)

  14. Generalizations of orthogonal polynomials

    Science.gov (United States)

    Bultheel, A.; Cuyt, A.; van Assche, W.; van Barel, M.; Verdonk, B.

    2005-07-01

    We give a survey of recent generalizations of orthogonal polynomials. That includes multidimensional (matrix and vector orthogonal polynomials) and multivariate versions, multipole (orthogonal rational functions) variants, and extensions of the orthogonality conditions (multiple orthogonality). Most of these generalizations are inspired by the applications in which they are applied. We also give a glimpse of these applications, which are usually generalizations of applications where classical orthogonal polynomials also play a fundamental role: moment problems, numerical quadrature, rational approximation, linear algebra, recurrence relations, and random matrices.

  15. Stochastic Estimation via Polynomial Chaos

    Science.gov (United States)

    2015-10-01

    AFRL-RW-EG-TR-2015-108, Douglas V. Nance, Air Force Research Laboratory, 20-04-2015 – 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic

  16. A New Generalisation of Macdonald Polynomials

    Science.gov (United States)

    Garbali, Alexandr; de Gier, Jan; Wheeler, Michael

    2017-06-01

    We introduce a new family of symmetric multivariate polynomials, whose coefficients are meromorphic functions of two parameters (q, t) and polynomial in a further two parameters (u, v). We evaluate these polynomials explicitly as a matrix product. At u = v = 0 they reduce to Macdonald polynomials, while at q = 0, u = v = s they recover a family of inhomogeneous symmetric functions originally introduced by Borodin.

  17. Special polynomials associated with some hierarchies

    International Nuclear Information System (INIS)

    Kudryashov, Nikolai A.

    2008-01-01

    Special polynomials associated with rational solutions of a hierarchy of equations of Painleve type are introduced. The hierarchy arises by similarity reduction from the Fordy-Gibbons hierarchy of partial differential equations. Some relations for these special polynomials are given. Differential-difference hierarchies for finding special polynomials are presented. These formulae allow us to obtain special polynomials associated with the hierarchy studied. It is shown that rational solutions of members of the Schwarz-Sawada-Kotera, the Schwarz-Kaup-Kupershmidt, the Fordy-Gibbons, the Sawada-Kotera and the Kaup-Kupershmidt hierarchies can be expressed through special polynomials of the hierarchy studied

  18. Twisted Polynomials and Forgery Attacks on GCM

    DEFF Research Database (Denmark)

    Abdelraheem, Mohamed Ahmed A. M. A.; Beelen, Peter; Bogdanov, Andrey

    2015-01-01

    Polynomial hashing as an instantiation of universal hashing is a widely employed method for the construction of MACs and authenticated encryption (AE) schemes, the ubiquitous GCM being a prominent example. It is also used in recent AE proposals within the CAESAR competition which aim at providing...... in an improved key recovery algorithm. As cryptanalytic applications of our twisted polynomials, we develop the first universal forgery attacks on GCM in the weak-key model that do not require nonce reuse. Moreover, we present universal weak-key forgeries for the nonce-misuse resistant AE scheme POET, which...

  19. Exponential-Polynomial Families and the Term Structure of Interest Rates

    OpenAIRE

    Filipovic, Damir

    2000-01-01

    Exponential-polynomial families like the Nelson-Siegel or Svensson family are widely used to estimate the current forward rate curve. We investigate whether these methods go well with inter-temporal modelling. We characterize the consistent Ito processes which have the property to provide an arbitrage free interest rate model when representing the parameters of some bounded exponential-polynomial type function. This includes in particular diffusion processes. We show that there is a strong li...

  20. A Summation Formula for Macdonald Polynomials

    Science.gov (United States)

    de Gier, Jan; Wheeler, Michael

    2016-03-01

    We derive an explicit sum formula for symmetric Macdonald polynomials. Our expression contains multiple sums over the symmetric group and uses the action of Hecke generators on the ring of polynomials. In the special cases t = 1 and q = 0, we recover known expressions for the monomial symmetric and Hall-Littlewood polynomials, respectively. Other specializations of our formula give new expressions for the Jack and q-Whittaker polynomials.

  1. Noncommutative Schur polynomials and the crystal limit of the U_{q} \\widehat{\\mathfrak {sl}}(2)-vertex model

    Science.gov (United States)

    Korff, Christian

    2010-10-01

    Starting from the Verma module of U_{q}\\mathfrak {sl}(2) we consider the evaluation module for affine U_{q}\\widehat{\\mathfrak {sl}}(2) and discuss its crystal limit (q → 0). There exists an associated integrable statistical mechanics model on a square lattice defined in terms of vertex configurations. Its transfer matrix is the generating function for noncommutative complete symmetric polynomials in the generators of the affine plactic algebra, an extension of the finite plactic algebra first discussed by Lascoux and Schützenberger. The corresponding noncommutative elementary symmetric polynomials were recently shown to be generated by the transfer matrix of the so-called phase model discussed by Bogoliubov, Izergin and Kitanine. Here we establish that both generating functions satisfy Baxter's TQ-equation in the crystal limit by tying them to special U_{q}\\widehat{ \\mathfrak {sl}}(2) solutions of the Yang-Baxter equation. The TQ-equation amounts to the well-known Jacobi-Trudi formula leading naturally to the definition of noncommutative Schur polynomials. The latter can be employed to define a ring which has applications in conformal field theory and enumerative geometry: it is isomorphic to the fusion ring of the \\widehat{\\mathfrak {sl}}(n)_{k} Wess-Zumino-Novikov-Witten model whose structure constants are the dimensions of spaces of generalized θ-functions over the Riemann sphere with three punctures.

  2. Quantum algorithm for linear regression

    Science.gov (United States)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ɛ), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ɛ is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

  3. Recurrence approach and higher order polynomial algebras for superintegrable monopole systems

    Science.gov (United States)

    Hoque, Md Fazlul; Marquette, Ian; Zhang, Yao-Zhong

    2018-05-01

    We revisit the MIC-harmonic oscillator in flat space with monopole interaction and derive the polynomial algebra satisfied by the integrals of motion and its energy spectrum using the ad hoc recurrence approach. We introduce a superintegrable monopole system in a generalized Taub-Newman-Unti-Tamburino (NUT) space. The Schrödinger equation of this model is solved in spherical coordinates in the framework of Stäckel transformation. It is shown that wave functions of the quantum system can be expressed in terms of the product of Laguerre and Jacobi polynomials. We construct ladder and shift operators based on the corresponding wave functions and obtain the recurrence formulas. By applying these recurrence relations, we construct higher order algebraically independent integrals of motion. We show that the integrals form a polynomial algebra. We construct the structure functions of the polynomial algebra and obtain the degenerate energy spectra of the model.

  4. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart prediction over all models as it both clusters individuals into high or low risk category and predicts rate to heart disease componentwise given clusters available. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression model.

  5. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart prediction over all models as it both clusters individuals into high or low risk category and predicts rate to heart disease componentwise given clusters available. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression model. PMID:27999611

  6. Weierstrass polynomials for links

    DEFF Research Database (Denmark)

    Hansen, Vagn Lundsgaard

    1997-01-01

    There is a natural way of identifying links in 3-space with polynomial covering spaces over the circle. Thereby any link in 3-space can be defined by a Weierstrass polynomial over the circle. The equivalence relation for covering spaces over the circle is, however, completely different from

  7. On Symmetric Polynomials

    OpenAIRE

    Golden, Ryan; Cho, Ilwoo

    2015-01-01

    In this paper, we study structure theorems for algebras of symmetric functions. Based on a certain relation on the elementary symmetric polynomials generating such algebras, we consider perturbations in the algebras. In particular, we understand generators of the algebras as perturbations. From such perturbations, we define injective maps on generators, which induce algebra-monomorphisms (or embeddings) of the algebras. They provide inductive structure theorems on algebras of symmetric polynomials. As...

  8. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network

    Science.gov (United States)

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.

    2015-01-01

    Wireless Sensor Networks monitor and control the physical world via a large number of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSN) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To address the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions with the application of the Bayes rule. The object detection of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, resulting in minimized energy consumption. Next, the Polynomial Regression Function is applied to combine the target objects of similar events observed by different sensors; the combined objects, based on the minimum and maximum values of the object events, are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. The energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead. PMID:26426701

  9. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network.

    Science.gov (United States)

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N

    2015-01-01

    Wireless Sensor Networks monitor and control the physical world via a large number of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSN) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To address the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions with the application of the Bayes rule. The object detection of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, resulting in minimized energy consumption. Next, the Polynomial Regression Function is applied to combine the target objects of similar events observed by different sensors; the combined objects, based on the minimum and maximum values of the object events, are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. The energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead.
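
    A hedged sketch of the aggregation step described in these two records: readings from sensors observing the same event are summarised at the sink by a low-order polynomial regression over time, so only a few coefficients need to be forwarded; the node readings and the degree-2 fit are assumptions for illustration.

        import numpy as np

        t = np.linspace(0, 60, 30)                                 # seconds
        rng = np.random.default_rng(3)
        # Three nodes in one region observing the same temperature trend, with noise (invented).
        readings = 20 + 0.05 * t + 0.002 * t**2 + rng.normal(0, 0.2, size=(3, t.size))

        region_mean = readings.mean(axis=0)
        coeffs = np.polyfit(t, region_mean, deg=2)                 # aggregated summary

        print("coefficients forwarded to the sink:", np.round(coeffs, 4))
        err = np.max(np.abs(np.polyval(coeffs, t) - region_mean))
        print("max reconstruction error:", round(float(err), 3))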

  10. Associated polynomials and birth-death processes

    NARCIS (Netherlands)

    van Doorn, Erik A.

    2001-01-01

    We consider sequences of orthogonal polynomials with positive zeros, and pursue the question of how (partial) knowledge of the orthogonalizing measure for the {\\it associated polynomials} can lead to information about the orthogonalizing measure for the original polynomials, with a view to

  11. Design of a Polynomial Fuzzy Observer Controller With Sampled-Output Measurements for Nonlinear Systems Considering Unmeasurable Premise Variables

    OpenAIRE

    Liu, Chuang; Lam, H. K.

    2015-01-01

    In this paper, we propose a polynomial fuzzy observer controller for nonlinear systems, where the design is achieved through the stability analysis of polynomial-fuzzy-model-based (PFMB) observer-control system. The polynomial fuzzy observer estimates the system states using estimated premise variables. The estimated states are then employed by the polynomial fuzzy controller for the feedback control of nonlinear systems represented by the polynomial fuzzy model. The system stability of the P...

  12. Mixture of Regression Models with Single-Index

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2016-01-01

    In this article, we propose a class of semiparametric mixture regression models with single-index. We argue that many recently proposed semiparametric/nonparametric mixture regression models can be considered special cases of the proposed model. However, unlike existing semiparametric mixture regression models, the new proposed model can easily incorporate multivariate predictors into the nonparametric components. Backfitting estimates and the corresponding algorithms have been proposed for...

  13. Scattering theory and orthogonal polynomials

    International Nuclear Information System (INIS)

    Geronimo, J.S.

    1977-01-01

    The application of the techniques of scattering theory to the study of polynomials orthogonal on the unit circle and a finite segment of the real line is considered. The starting point is the recurrence relations satisfied by the polynomials instead of the orthogonality condition. A set of two two-term recurrence relations for polynomials orthogonal on the real line is presented and used. These recurrence relations play roles analogous to those satisfied by polynomials orthogonal on the unit circle. With these recurrence formulas a Wronskian theorem is proved and the Christoffel-Darboux formula is derived. In scattering theory a fundamental role is played by the Jost function. An analog of this function is defined, and its analytic properties and the locations of its zeros are investigated. The role of the analog Jost function in various properties of these orthogonal polynomials is investigated. The techniques of inverse scattering theory are also used. The discrete analogues of the Gelfand-Levitan and Marchenko equations are derived and solved. These techniques are used to calculate asymptotic formulas for the orthogonal polynomials. Finally, Szego's theorem on Toeplitz and Hankel determinants is proved using the recurrence formulas and some properties of the Jost function. The techniques of inverse scattering theory are used to calculate the correction terms

  14. An Additive-Multiplicative Cox-Aalen Regression Model

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; Cox regression; survival analysis; time-varying effects

  15. Fermionic formula for double Kostka polynomials

    OpenAIRE

    Liu, Shiyuan

    2016-01-01

    The $X=M$ conjecture asserts that the $1D$ sum and the fermionic formula coincide up to some constant power. In the case of type $A,$ both the $1D$ sum and the fermionic formula are closely related to Kostka polynomials. Double Kostka polynomials $K_{\\Bla,\\Bmu}(t),$ indexed by two double partitions $\\Bla,\\Bmu,$ are polynomials in $t$ introduced as a generalization of Kostka polynomials. In the present paper, we consider $K_{\\Bla,\\Bmu}(t)$ in the special case where $\\Bmu=(-,\\mu'').$ We formula...

  16. Relations between Möbius and coboundary polynomials

    NARCIS (Netherlands)

    Jurrius, R.P.M.J.

    2012-01-01

    It is known that, in general, the coboundary polynomial and the Möbius polynomial of a matroid do not determine each other. Less is known about more specific cases. In this paper, we will investigate if it is possible that the Möbius polynomial of a matroid, together with the Möbius polynomial of

  17. Arabic text classification using Polynomial Networks

    Directory of Open Access Journals (Sweden)

    Mayy M. Al-Tahrawi

    2015-10-01

    In this paper, an Arabic statistical learning-based text classification system has been developed using Polynomial Neural Networks. Polynomial Networks have been recently applied to English text classification, but they were never used for Arabic text classification. In this research, we investigate the performance of Polynomial Networks in classifying Arabic texts. Experiments are conducted on a widely used Arabic dataset in text classification: Al-Jazeera News dataset. We chose this dataset to enable direct comparisons of the performance of Polynomial Networks classifier versus other well-known classifiers on this dataset in the literature of Arabic text classification. Results of experiments show that Polynomial Networks classifier is a competitive algorithm to the state-of-the-art ones in the field of Arabic text classification.

  18. Comparative Performance of Complex-Valued B-Spline and Polynomial Models Applied to Iterative Frequency-Domain Decision Feedback Equalization of Hammerstein Channels.

    Science.gov (United States)

    Chen, Sheng; Hong, Xia; Khalaf, Emad F; Alsaadi, Fuad E; Harris, Chris J

    2017-12-01

    Complex-valued (CV) B-spline neural network approach offers a highly effective means for identifying and inverting practical Hammerstein systems. Compared with its conventional CV polynomial-based counterpart, a CV B-spline neural network has superior performance in identifying and inverting CV Hammerstein systems, while imposing a similar complexity. This paper reviews the optimality of the CV B-spline neural network approach. Advantages of B-spline neural network approach as compared with the polynomial based modeling approach are extensively discussed, and the effectiveness of the CV neural network-based approach is demonstrated in a real-world application. More specifically, we evaluate the comparative performance of the CV B-spline and polynomial-based approaches for the nonlinear iterative frequency-domain decision feedback equalization (NIFDDFE) of single-carrier Hammerstein channels. Our results confirm the superior performance of the CV B-spline-based NIFDDFE over its CV polynomial-based counterpart.

  19. [From clinical judgment to linear regression model].

    Science.gov (United States)

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we assume that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bX, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "X" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
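
    A minimal sketch of the Y = a + bX fit described above, using ordinary least squares; the predictor, response and data values are invented purely for illustration, not taken from the cited paper.

      # Illustrative fit of Y = a + b*X with made-up data.
      import numpy as np

      x = np.array([8, 10, 12, 12, 14, 16, 16, 18, 20], dtype=float)   # predictor X (assumed)
      y = np.array([40, 44, 50, 49, 55, 60, 62, 66, 71], dtype=float)  # response Y (assumed)

      b, a = np.polyfit(x, y, deg=1)        # np.polyfit returns the slope first, then the intercept

      # Coefficient of determination R^2: share of the variance of Y explained by X.
      y_hat = a + b * x
      r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
      print(f"intercept a = {a:.2f}, slope b = {b:.2f}, R^2 = {r2:.3f}")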

  20. Regression models of reactor diagnostic signals

    International Nuclear Information System (INIS)

    Vavrin, J.

    1989-01-01

    The application of an autoregression model as the simplest regression model of diagnostic signals is described for the experimental analysis of diagnostic systems and for in-service monitoring of normal and anomalous conditions and their diagnostics. The method of diagnostics using a regression-type diagnostic data base and regression spectral diagnostics is described. The diagnostics of neutron noise signals from anomalous modes in the experimental fuel assembly of a reactor is described. (author)

  1. Efficient Bayesian inference of subsurface flow models using nested sampling and sparse polynomial chaos surrogates

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models. © 2013.
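
    The two-stage sampling idea can be illustrated with a delayed-acceptance Metropolis step, in which a cheap surrogate likelihood screens proposals before the expensive model is evaluated; this is only a sketch of the general mechanism, not the authors' nested-sampling implementation, and the toy likelihoods, step size and function names are assumptions.

      # Hedged sketch: a two-stage (delayed-acceptance) Metropolis step with a surrogate screen.
      import numpy as np

      rng = np.random.default_rng(1)

      def log_like_full(theta):          # expensive forward model (toy stand-in)
          return -0.5 * (theta - 2.0) ** 2

      def log_like_surrogate(theta):     # cheap response-surface surrogate (toy stand-in)
          return -0.5 * (theta - 2.1) ** 2

      def two_stage_step(theta, step=0.5):
          prop = theta + step * rng.standard_normal()
          # Stage 1: screen the proposal with the surrogate only.
          if np.log(rng.uniform()) >= log_like_surrogate(prop) - log_like_surrogate(theta):
              return theta                               # rejected cheaply, full model never run
          # Stage 2: correct with the full model so the chain still targets the true posterior.
          log_alpha = (log_like_full(prop) - log_like_full(theta)
                       + log_like_surrogate(theta) - log_like_surrogate(prop))
          return prop if np.log(rng.uniform()) < log_alpha else theta

      chain = [0.0]
      for _ in range(5000):
          chain.append(two_stage_step(chain[-1]))
      print("posterior mean ~", np.mean(chain[1000:]))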

  2. An Adaptive Channel Estimation Algorithm Using Time-Frequency Polynomial Model for OFDM with Fading Multipath Channels

    Directory of Open Access Journals (Sweden)

    Liu KJ Ray

    2002-01-01

    Full Text Available Orthogonal frequency division multiplexing (OFDM) is an effective technique for future 3G communications because of its great immunity to impulse noise and intersymbol interference. Channel estimation is a crucial aspect in the design of OFDM systems. In this work, we propose a channel estimation algorithm based on a time-frequency polynomial model of the fading multipath channels. The algorithm exploits the correlation of the channel responses in both the time and frequency domains and hence reduces more noise than methods using only a time or a frequency polynomial model. The estimator is also more robust compared to existing methods based on the Fourier transform. Simulations show a clear improvement in terms of mean-squared estimation error under some practical channel conditions. The algorithm needs little prior knowledge about the delay and fading properties of the channel. The algorithm can be implemented recursively and can adjust itself to follow the variation of the channel statistics.
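
    As a rough illustration of joint time-frequency polynomial smoothing (not the paper's recursive estimator), the sketch below fits a low-order two-dimensional polynomial to noisy channel samples by least squares; the grid sizes, channel surface and noise level are assumptions.

      # Hedged sketch: smooth noisy pilot channel estimates with a joint low-order polynomial in time and frequency.
      import numpy as np

      rng = np.random.default_rng(0)
      n_sym, n_sub = 10, 16                           # OFDM symbols (time) and subcarriers (frequency)
      t, f = np.meshgrid(np.arange(n_sym), np.arange(n_sub), indexing="ij")

      true_h = 1.0 + 0.05 * t - 0.03 * f + 0.002 * t * f        # illustrative smooth channel surface
      noisy_h = true_h + 0.1 * rng.standard_normal(true_h.shape)

      def basis(t, f):
          # 2D polynomial terms up to degree 2 in each variable, including the cross term.
          cols = [np.ones_like(t), t, f, t * f, t**2, f**2]
          return np.column_stack([c.ravel() for c in cols]).astype(float)

      A = basis(t, f)
      coeff, *_ = np.linalg.lstsq(A, noisy_h.ravel(), rcond=None)
      smoothed_h = (A @ coeff).reshape(noisy_h.shape)

      print("raw MSE:     ", np.mean((noisy_h - true_h) ** 2))
      print("smoothed MSE:", np.mean((smoothed_h - true_h) ** 2))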

  3. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...

  4. Fractional order differentiation by integration with Jacobi polynomials

    KAUST Repository

    Liu, Dayan

    2012-12-01

    The differentiation by integration method with Jacobi polynomials was originally introduced by Mboup, Join and Fliess [22], [23]. This paper generalizes this method from the integer order to the fractional order for estimating the fractional order derivatives of noisy signals. The proposed fractional order differentiator is deduced from the Jacobi orthogonal polynomial filter and the Riemann-Liouville fractional order derivative definition. An exact and simple formula for this differentiator is given, in which an integral formula involving Jacobi polynomials and the noisy signal is used without complex mathematical deduction. Hence, it can be used both for continuous-time and discrete-time models. The comparison between our differentiator and the recently introduced digital fractional order Savitzky-Golay differentiator is given in numerical simulations so as to show its accuracy and robustness with respect to corrupting noises. © 2012 IEEE.

  5. Fractional order differentiation by integration with Jacobi polynomials

    KAUST Repository

    Liu, Dayan; Gibaru, O.; Perruquetti, Wilfrid; Laleg-Kirati, Taous-Meriem

    2012-01-01

    The differentiation by integration method with Jacobi polynomials was originally introduced by Mboup, Join and Fliess [22], [23]. This paper generalizes this method from the integer order to the fractional order for estimating the fractional order derivatives of noisy signals. The proposed fractional order differentiator is deduced from the Jacobi orthogonal polynomial filter and the Riemann-Liouville fractional order derivative definition. An exact and simple formula for this differentiator is given, in which an integral formula involving Jacobi polynomials and the noisy signal is used without complex mathematical deduction. Hence, it can be used both for continuous-time and discrete-time models. The comparison between our differentiator and the recently introduced digital fractional order Savitzky-Golay differentiator is given in numerical simulations so as to show its accuracy and robustness with respect to corrupting noises. © 2012 IEEE.

  6. Forecasting with Dynamic Regression Models

    CERN Document Server

    Pankratz, Alan

    2012-01-01

    One of the most widely used tools in statistical forecasting, the single equation regression model, is examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed lag responses of output to input series and the autocorrelation patterns of regression disturbances. It also includes six case studies.

  7. Quantum models with energy-dependent potentials solvable in terms of exceptional orthogonal polynomials

    International Nuclear Information System (INIS)

    Schulze-Halberg, Axel; Roy, Pinaki

    2017-01-01

    We construct energy-dependent potentials for which the Schrödinger equations admit solutions in terms of exceptional orthogonal polynomials. Our method of construction is based on certain point transformations, applied to the equations of exceptional Hermite, Jacobi and Laguerre polynomials. We present several examples of boundary-value problems with energy-dependent potentials that admit a discrete spectrum and the corresponding normalizable solutions in closed form.

  8. Quantum models with energy-dependent potentials solvable in terms of exceptional orthogonal polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Schulze-Halberg, Axel, E-mail: axgeschu@iun.edu [Department of Mathematics and Actuarial Science, Indiana University Northwest, 3400 Broadway, Gary IN 46408 (United States); Department of Physics, Indiana University Northwest, 3400 Broadway, Gary IN 46408 (United States); Roy, Pinaki, E-mail: pinaki@isical.ac.in [Physics and Applied Mathematics Unit, Indian Statistical Institute, Kolkata 700108 (India)

    2017-03-15

    We construct energy-dependent potentials for which the Schrödinger equations admit solutions in terms of exceptional orthogonal polynomials. Our method of construction is based on certain point transformations, applied to the equations of exceptional Hermite, Jacobi and Laguerre polynomials. We present several examples of boundary-value problems with energy-dependent potentials that admit a discrete spectrum and the corresponding normalizable solutions in closed form.

  9. Polynomial Chaos–Based Bayesian Inference of K-Profile Parameterization in a General Circulation Model of the Tropical Pacific

    KAUST Repository

    Sraj, Ihab; Zedler, Sarah E.; Knio, Omar; Jackson, Charles S.; Hoteit, Ibrahim

    2016-01-01

    The authors present a polynomial chaos (PC)-based Bayesian inference method for quantifying the uncertainties of the K-profile parameterization (KPP) within the MIT general circulation model (MITgcm) of the tropical Pacific. The inference

  10. On the Laurent polynomial rings

    International Nuclear Information System (INIS)

    Stefanescu, D.

    1985-02-01

    We describe some properties of the Laurent polynomial rings in a finite number of indeterminates over a commutative unitary ring. We study some subrings of the Laurent polynomial rings. We finally obtain two cancellation properties. (author)

  11. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network.

    Directory of Open Access Journals (Sweden)

    Thirumoorthy Palanisamy

    Full Text Available Wireless sensor networks monitor and control the physical world via a large number of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSN) rely on sensed data communication through continuous data collection, resulting in higher delay and energy consumption. To overcome the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect similar events (i.e., temperature, pressure, flow) into specific regions with the application of Bayes' rule. The detection of similar events is accomplished based on the Bayes probabilities, and the results are sent to the sink node, minimizing the energy consumption. Next, the Polynomial Regression Function is applied to combine the target objects of similar events observed by different sensors, based on the minimum and maximum values of the object events, and the result is transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. The energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users while reducing the communication overhead.

  12. Regression models in the determination of the absorbed dose with extrapolation chamber for ophthalmological applicators

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez R, J T; Morales P, R

    1992-06-15

    The absorbed dose to soft-tissue-equivalent material imparted by ophthalmologic applicators (90Sr/90Y, 1850 MBq) is determined using an extrapolation chamber with variable electrode separation. When the slope of the extrapolation curve is estimated with a simple linear regression model, the dose values are underestimated by 17.7% to 20.4% relative to the estimate obtained with a second-degree polynomial regression model; at the same time, an improvement of up to 50% in the standard error is observed for the quadratic model. Finally, the global uncertainty of the dose is presented, taking into account the reproducibility of the experimental arrangement. As a conclusion it can be inferred that, in experimental arrangements where the source is in contact with the extrapolation chamber, it is recommended to replace the simple linear regression model with the quadratic regression model in the determination of the slope of the extrapolation curve, for more exact and accurate measurements of the absorbed dose. (Author)
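
    A minimal sketch of the comparison described above: the slope of an extrapolation curve at zero electrode separation estimated from a linear and from a quadratic fit; the readings below are invented for illustration, not measured ionization-chamber data.

      # Compare the slope at zero separation from linear and quadratic fits of an extrapolation curve.
      import numpy as np

      depth = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])          # electrode separation (mm), assumed
      charge = np.array([0.52, 1.08, 1.69, 2.35, 3.06, 3.82])   # normalized reading, assumed

      lin = np.polyfit(depth, charge, deg=1)      # [b1, b0]
      quad = np.polyfit(depth, charge, deg=2)     # [c2, c1, c0]

      slope_linear = lin[0]                       # constant slope of the straight-line fit
      slope_quadratic = quad[1]                   # d(charge)/d(depth) at depth = 0 for the quadratic

      print(f"slope (linear fit):    {slope_linear:.3f}")
      print(f"slope (quadratic fit): {slope_quadratic:.3f}")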

  13. Damage Identification of Bridge Based on Chebyshev Polynomial Fitting and Fuzzy Logic without Considering Baseline Model Parameters

    Directory of Open Access Journals (Sweden)

    Yu-Bo Jiao

    2015-01-01

    Full Text Available The paper presents an effective approach for damage identification of bridges based on Chebyshev polynomial fitting and fuzzy logic systems without considering baseline model data. The modal curvature of the damaged bridge can be obtained through central difference approximation based on the displacement mode shape. Based on the modal curvature of the damaged structure, Chebyshev polynomial fitting is applied to acquire the curvature of the undamaged one without considering baseline parameters. Therefore, the modal curvature difference can be derived and used for damage localization. Subsequently, the normalized modal curvature difference is treated as the input variable of fuzzy logic systems for damage condition assessment. A numerical simulation on a simply supported bridge was carried out to demonstrate the feasibility of the proposed method.
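
    A small sketch of the two numerical steps named in the abstract, central-difference modal curvature and a Chebyshev fit used as the smooth baseline; the mode shape, the grid and the simulated damage are assumptions.

      # Hedged sketch: curvature by central differences, Chebyshev fit as baseline, then the difference.
      import numpy as np
      from numpy.polynomial import chebyshev as C

      x = np.linspace(0.0, 1.0, 21)                     # measurement points along the span
      phi = np.sin(np.pi * x)                           # displacement mode shape (toy example)
      phi[10] += 0.01                                   # small local perturbation mimicking damage
      h = x[1] - x[0]

      # Modal curvature by central differences (interior points only).
      curv = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2
      x_in = x[1:-1]

      # A low-order Chebyshev fit of the damaged curvature serves as the smooth reference.
      coef = C.chebfit(x_in, curv, deg=4)
      baseline = C.chebval(x_in, coef)

      curvature_difference = curv - baseline            # peaks near the damaged location
      print("largest |difference| at x =", x_in[np.argmax(np.abs(curvature_difference))])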

  14. Categorical regression dose-response modeling

    Science.gov (United States)

    The goal of this training is to provide participants with training on the use of the U.S. EPA’s Categorical Regression software (CatReg) and its application to risk assessment. Categorical regression fits mathematical models to toxicity data that have been assigned ord...

  15. Best polynomial degree reduction on q-lattices with applications to q-orthogonal polynomials

    KAUST Repository

    Ait-Haddou, Rachid

    2015-06-07

    We show that a weighted least squares approximation of q-Bézier coefficients provides the best polynomial degree reduction in the q-L2-norm. We also provide a finite analogue of this result with respect to finite q-lattices and we present applications of these results to q-orthogonal polynomials. © 2015 Elsevier Inc. All rights reserved.

  16. Best polynomial degree reduction on q-lattices with applications to q-orthogonal polynomials

    KAUST Repository

    Ait-Haddou, Rachid; Goldman, Ron

    2015-01-01

    We show that a weighted least squares approximation of q-Bézier coefficients provides the best polynomial degree reduction in the q-L2-norm. We also provide a finite analogue of this result with respect to finite q-lattices and we present applications of these results to q-orthogonal polynomials. © 2015 Elsevier Inc. All rights reserved.

  17. Computing the Alexander Polynomial Numerically

    DEFF Research Database (Denmark)

    Hansen, Mikael Sonne

    2006-01-01

    Explains how to construct the Alexander Matrix and how this can be used to compute the Alexander polynomial numerically.

  18. Mixed-effects regression models in linguistics

    CERN Document Server

    Heylen, Kris; Geeraerts, Dirk

    2018-01-01

    When data consist of grouped observations or clusters, and there is a risk that measurements within the same group are not independent, group-specific random effects can be added to a regression model in order to account for such within-group associations. Regression models that contain such group-specific random effects are called mixed-effects regression models, or simply mixed models. Mixed models are a versatile tool that can handle both balanced and unbalanced datasets and that can also be applied when several layers of grouping are present in the data; these layers can either be nested or crossed.  In linguistics, as in many other fields, the use of mixed models has gained ground rapidly over the last decade. This methodological evolution enables us to build more sophisticated and arguably more realistic models, but, due to its technical complexity, also introduces new challenges. This volume brings together a number of promising new evolutions in the use of mixed models in linguistics, but also addres...

  19. Moderation analysis using a two-level regression model.

    Science.gov (United States)

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.

  20. Density of Real Zeros of the Tutte Polynomial

    DEFF Research Database (Denmark)

    Ok, Seongmin; Perrett, Thomas

    2018-01-01

    The Tutte polynomial of a graph is a two-variable polynomial whose zeros and evaluations encode many interesting properties of the graph. In this article we investigate the real zeros of the Tutte polynomials of graphs, and show that they form a dense subset of certain regions of the plane. This is the first density result for the real zeros of the Tutte polynomial in a region of positive volume. Our result almost confirms a conjecture of Jackson and Sokal except for one region which is related to an open problem on flow polynomials.

  1. Density of Real Zeros of the Tutte Polynomial

    DEFF Research Database (Denmark)

    Ok, Seongmin; Perrett, Thomas

    2017-01-01

    The Tutte polynomial of a graph is a two-variable polynomial whose zeros and evaluations encode many interesting properties of the graph. In this article we investigate the real zeros of the Tutte polynomials of graphs, and show that they form a dense subset of certain regions of the plane. This is the first density result for the real zeros of the Tutte polynomial in a region of positive volume. Our result almost confirms a conjecture of Jackson and Sokal except for one region which is related to an open problem on flow polynomials.

  2. Parallel Construction of Irreducible Polynomials

    DEFF Research Database (Denmark)

    Frandsen, Gudmund Skovbjerg

    Let arithmetic pseudo-NC^k denote the problems that can be solved by log space uniform arithmetic circuits over the finite prime field GF(p) of depth O(log^k (n + p)) and size polynomial in (n + p). We show that the problem of constructing an irreducible polynomial of specified degree over GF(p) … of polynomials is in arithmetic NC^3. Our algorithm works over any field and, compared to other known algorithms, it does not assume the ability to take p'th roots when the field has characteristic p.

  3. Generalized Hermite polynomials in superspace as eigenfunctions of the supersymmetric rational CMS model

    CERN Document Server

    Desrosiers, P; Mathieu, P; Desrosiers, Patrick; Lapointe, Luc; Mathieu, Pierre

    2003-01-01

    We present two constructions of the orthogonal eigenfunctions of the supersymmetric extension of the rational Calogero-Moser-Sutherland model with harmonic confinement. These eigenfunctions are the superspace extension of the generalized Hermite (or Hi-Jack) polynomials. The conserved quantities of the rational supersymmetric model are first related to their trigonometric relatives through a similarity transformation. This leads to a simple expression for the generalized Hermite superpolynomials as a differential operator acting on the corresponding Jack superpolynomials. The second construction relies on the action of the Hamiltonian on the supermonomial basis. This translates into determinantal expressions for the Hamiltonian's eigenfunctions. As an aside, the maximal superintegrability of the supersymmetric rational Calogero-Moser-Sutherland model is demonstrated.

  4. Random regression models to describe the genetic variation of milk yield in Holstein breed

    Directory of Open Access Journals (Sweden)

    Cláudio Vieira de Araújo

    2006-06-01

    Wilmink's exponential function, the Ali and Schaeffer logarithmic function and the Legendre orthogonal polynomials of second and fourth order were used. The comparisons among the models were based on the following criteria: estimates of variance components of the multiple-trait model and the random regression models, values of residual variance, and values of the logarithms of the likelihood functions. The heritability estimates obtained using the multiple-trait model varied from 0.110 to 0.244; for the random regression models the values ranged from 0.127 to 0.301, with the largest estimates observed in the models with the larger number of parameters. The random regression model that used the Legendre polynomials best described the genetic variation of the milk yield.

  5. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  6. Modeling the kinetics of essential oil hydrodistillation from juniper berries (Juniperus communis L.) using non-linear regression

    Directory of Open Access Journals (Sweden)

    Radosavljević Dragana B.

    2017-01-01

    Full Text Available This paper presents kinetics modeling of essential oil hydrodistillation from juniper berries (Juniperus communis L.) by using a non-linear regression methodology. The proposed model has a polynomial-logarithmic form. The initial equation of the proposed non-linear model is q = q∞·(a·(log t)² + b·log t + c), and by substituting a1 = q∞·a, b1 = q∞·b and c1 = q∞·c, the final equation is obtained as q = a1·(log t)² + b1·log t + c1. In this equation q is the quantity of the obtained oil at time t, while a1, b1 and c1 are parameters to be determined for each sample. From the final equation it can be seen that the key parameter q∞, which represents the maximal oil quantity obtained after infinite time, is already included in the parameters a1, b1 and c1. In this way, experimental determination of this parameter is avoided. Using the proposed model with parameters obtained by regression, the values of oil hydrodistillation over time were calculated for each sample and compared to the experimental values. In addition, two kinetic models previously proposed in the literature were applied to the same experimental results. The developed model provided better agreement with the experimental values than the two generally accepted kinetic models of this process. The average values of the error measures (RSS, RSE, AIC and MRPD) obtained for our model (0.005; 0.017; –84.33; 1.65) were generally lower than the corresponding values of the other two models (0.025; 0.041; –53.20; 3.89) and (0.0035; 0.015; –86.83; 1.59). Also, parameter estimation for the proposed model was significantly simpler (maximum 2 iterations per sample) using the non-linear regression than that for the existing models (maximum 9 iterations per sample). [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. TR-35026]
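
    Because the final equation q = a1·(log t)² + b1·log t + c1 is linear in its parameters, it can be fitted by an ordinary polynomial least-squares fit in log t, as sketched below; the yield data and the choice of base-10 logarithms are assumptions made for illustration.

      # Fit q = a1*(log t)^2 + b1*log t + c1 by polynomial least squares in log10(t); data are illustrative.
      import numpy as np

      t = np.array([5, 10, 20, 40, 60, 90, 120, 180], dtype=float)    # distillation time (min), assumed
      q = np.array([0.4, 0.9, 1.5, 2.1, 2.4, 2.7, 2.85, 3.0])          # oil yield, assumed

      logt = np.log10(t)
      a1, b1, c1 = np.polyfit(logt, q, deg=2)     # highest-degree coefficient first

      q_hat = a1 * logt**2 + b1 * logt + c1
      rss = np.sum((q - q_hat) ** 2)
      print(f"a1={a1:.3f}, b1={b1:.3f}, c1={c1:.3f}, RSS={rss:.4f}")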

  7. Using sparse polynomial chaos expansions for the global sensitivity analysis of groundwater lifetime expectancy in a multi-layered hydrogeological model

    International Nuclear Information System (INIS)

    Deman, G.; Konakli, K.; Sudret, B.; Kerrou, J.; Perrochet, P.; Benabderrahmane, H.

    2016-01-01

    The study makes use of polynomial chaos expansions to compute Sobol' indices within the frame of a global sensitivity analysis of hydro-dispersive parameters in a simplified vertical cross-section of a segment of the subsurface of the Paris Basin. Applying conservative ranges, the uncertainty in 78 input variables is propagated upon the mean lifetime expectancy of water molecules departing from a specific location within a highly confining layer situated in the middle of the model domain. Lifetime expectancy is a hydrogeological performance measure pertinent to safety analysis with respect to subsurface contaminants, such as radionuclides. The sensitivity analysis indicates that the variability in the mean lifetime expectancy can be sufficiently explained by the uncertainty in the petrofacies, i.e. the sets of porosity and hydraulic conductivity, of only a few layers of the model. The obtained results provide guidance regarding the uncertainty modeling in future investigations employing detailed numerical models of the subsurface of the Paris Basin. Moreover, the study demonstrates the high efficiency of sparse polynomial chaos expansions in computing Sobol' indices for high-dimensional models. - Highlights: • Global sensitivity analysis of a 2D 15-layer groundwater flow model is conducted. • A high-dimensional random input comprising 78 parameters is considered. • The variability in the mean lifetime expectancy for the central layer is examined. • Sparse polynomial chaos expansions are used to compute Sobol' sensitivity indices. • The petrofacies of a few layers can sufficiently explain the response variance.
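
    Once a polynomial chaos expansion is available, Sobol' indices follow directly from its coefficients; the sketch below computes first-order indices from an assumed set of orthonormal PC terms and coefficients, which are invented for illustration and are not the study's 78-parameter expansion.

      # First-order Sobol' indices from the coefficients of an orthonormal PC expansion (toy example).
      import numpy as np

      # Each row is the multi-index (polynomial degree per input variable) of one PC term.
      multi_index = np.array([
          [0, 0, 0],   # constant term
          [1, 0, 0],
          [2, 0, 0],
          [0, 1, 0],
          [0, 0, 1],
          [1, 1, 0],
      ])
      coeffs = np.array([3.2, 0.8, 0.1, 0.5, 0.2, 0.05])   # assumed PCE coefficients

      variance = np.sum(coeffs[1:] ** 2)                   # total variance (orthonormal basis)

      first_order = []
      for i in range(multi_index.shape[1]):
          # Terms that involve input i only: nonzero degree in column i, zero everywhere else.
          only_i = (multi_index[:, i] > 0) & (np.delete(multi_index, i, axis=1).sum(axis=1) == 0)
          first_order.append(np.sum(coeffs[only_i] ** 2) / variance)

      print("first-order Sobol indices:", np.round(first_order, 3))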

  8. Forecasting carbon dioxide emissions based on a hybrid of mixed data sampling regression model and back propagation neural network in the USA.

    Science.gov (United States)

    Zhao, Xin; Han, Meng; Ding, Lili; Calin, Adrian Cantemir

    2018-01-01

    The accurate forecast of carbon dioxide emissions is critical for policy makers to take proper measures to establish a low carbon society. This paper discusses a hybrid of the mixed data sampling (MIDAS) regression model and BP (back propagation) neural network (MIDAS-BP model) to forecast carbon dioxide emissions. Such analysis uses mixed frequency data to study the effects of quarterly economic growth on annual carbon dioxide emissions. The forecasting ability of MIDAS-BP is remarkably better than MIDAS, ordinary least square (OLS), polynomial distributed lags (PDL), autoregressive distributed lags (ADL), and auto-regressive moving average (ARMA) models. The MIDAS-BP model is suitable for forecasting carbon dioxide emissions for both the short and longer term. This research is expected to influence the methodology for forecasting carbon dioxide emissions by improving the forecast accuracy. Empirical results show that economic growth has both negative and positive effects on carbon dioxide emissions that last 15 quarters. Carbon dioxide emissions are also affected by their own change within 3 years. Therefore, there is a need for policy makers to explore an alternative way to develop the economy, especially applying new energy policies to establish a low carbon society.

  9. Confluent hypergeometric orthogonal polynomials related to the rational quantum Calogero system with harmonic confinement

    International Nuclear Information System (INIS)

    van Diejen, J.F.

    1997-01-01

    Two families (type A and type B) of confluent hypergeometric polynomials in several variables are studied. We describe the orthogonality properties, differential equations, and Pieri-type recurrence formulas for these families. In the one-variable case, the polynomials in question reduce to the Hermite polynomials (type A) and the Laguerre polynomials (type B), respectively. The multivariable confluent hypergeometric families considered here may be used to diagonalize the rational quantum Calogero models with harmonic confinement (for the classical root systems) and are closely connected to the (symmetric) generalized spherical harmonics investigated by Dunkl. (orig.)

  10. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    International Nuclear Information System (INIS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-01-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines
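
    A compact sketch of a Chebyshev polynomial smoother of the kind compared with Gauss-Seidel in the abstract; it needs only matrix-vector products plus an estimate of the upper part of the spectrum, and the test matrix, eigenvalue bounds and step count below are assumptions.

      # Hedged sketch of Chebyshev smoothing for a symmetric positive definite system.
      import numpy as np

      def chebyshev_smooth(A, b, x, lam_min, lam_max, steps=3):
          # Standard Chebyshev iteration targeting eigenvalues in [lam_min, lam_max];
          # for multigrid smoothing, lam_min is usually a fraction of lam_max so that
          # only the high-frequency error is damped and the rest is left to the coarse grid.
          theta = 0.5 * (lam_max + lam_min)
          delta = 0.5 * (lam_max - lam_min)
          sigma = theta / delta
          rho = 1.0 / sigma

          d = (b - A @ x) / theta          # first correction
          x = x + d
          for _ in range(steps - 1):
              rho_new = 1.0 / (2.0 * sigma - rho)
              r = b - A @ x
              d = rho_new * rho * d + (2.0 * rho_new / delta) * r
              x = x + d
              rho = rho_new
          return x

      # 1D Poisson test matrix; only mat-vecs are needed, which is what makes the smoother parallel-friendly.
      n = 50
      A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      b = np.ones(n)
      x = chebyshev_smooth(A, b, np.zeros(n), lam_min=0.13, lam_max=4.0, steps=5)
      print("residual norm after smoothing:", np.linalg.norm(b - A @ x))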

  11. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    Science.gov (United States)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.

  12. Modeling of energy consumption and related GHG (greenhouse gas) intensity and emissions in Europe using general regression neural networks

    International Nuclear Information System (INIS)

    Antanasijević, Davor; Pocajt, Viktor; Ristić, Mirjana; Perić-Grujić, Aleksandra

    2015-01-01

    This paper presents a new approach for the estimation of energy-related GHG (greenhouse gas) emissions at the national level that combines the simplicity of the concept of GHG intensity and the generalization capabilities of ANNs (artificial neural networks). The main objectives of this work includes the determination of the accuracy of a GRNN (general regression neural network) model applied for the prediction of EC (energy consumption) and GHG intensity of energy consumption, utilizing general country statistics as inputs, as well as analysis of the accuracy of energy-related GHG emissions obtained by multiplying the two aforementioned outputs. The models were developed using historical data from the period 2004–2012, for a set of 26 European countries (EU Members). The obtained results demonstrate that the GRNN GHG intensity model provides a more accurate prediction, with the MAPE (mean absolute percentage error) of 4.5%, than tested MLR (multiple linear regression) and second-order and third-order non-linear MPR (multiple polynomial regression) models. Also, the GRNN EC model has high accuracy (MAPE = 3.6%), and therefore both GRNN models and the proposed approach can be considered as suitable for the calculation of GHG emissions. The energy-related predicted GHG emissions were very similar to the actual GHG emissions of EU Members (MAPE = 6.4%). - Highlights: • ANN modeling of GHG intensity of energy consumption is presented. • ANN modeling of energy consumption at the national level is presented. • GHG intensity concept was used for the estimation of energy-related GHG emissions. • The ANN models provide better results in comparison with conventional models. • Forecast of GHG emissions for 26 countries was made successfully with MAPE of 6.4%

  13. The MIDAS Touch: Mixed Data Sampling Regression Models

    OpenAIRE

    Ghysels, Eric; Santa-Clara, Pedro; Valkanov, Rossen

    2004-01-01

    We introduce Mixed Data Sampling (henceforth MIDAS) regression models. The regressions involve time series data sampled at different frequencies. Technically speaking MIDAS models specify conditional expectations as a distributed lag of regressors recorded at some higher sampling frequencies. We examine the asymptotic properties of MIDAS regression estimation and compare it with traditional distributed lag models. MIDAS regressions have wide applicability in macroeconomics and finance.

  14. Efficient computation of Laguerre polynomials

    NARCIS (Netherlands)

    A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)

    2017-01-01

    An efficient algorithm and a Fortran 90 module (LaguerrePol) for computing Laguerre polynomials L_n^(α)(z) are presented. The standard three-term recurrence relation satisfied by the polynomials and different types of asymptotic expansions valid for n large and α small are used.
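
    A minimal sketch of the standard three-term recurrence mentioned in the abstract, (n+1)·L(n+1) = (2n+1+α−z)·L(n) − (n+α)·L(n−1) for the generalized Laguerre polynomials L_n^(α)(z); this is not the LaguerrePol Fortran module itself, just the basic recurrence written in Python.

      # Generalized Laguerre polynomials via the three-term recurrence.
      import numpy as np

      def laguerre(n, alpha, z):
          z = np.asarray(z, dtype=float)
          L_prev = np.ones_like(z)                 # L_0 = 1
          if n == 0:
              return L_prev
          L_curr = 1.0 + alpha - z                 # L_1 = 1 + alpha - z
          for k in range(1, n):
              L_next = ((2 * k + 1 + alpha - z) * L_curr - (k + alpha) * L_prev) / (k + 1)
              L_prev, L_curr = L_curr, L_next
          return L_curr

      print(laguerre(5, 0.5, np.array([0.1, 1.0, 3.0])))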

  15. Orthogonal polynomials in transport theories

    International Nuclear Information System (INIS)

    Dehesa, J.S.

    1981-01-01

    The asymptotical (k → ∞) behaviour of zeros of the polynomials g_k^(m)(ν) encountered in the treatment of direct and inverse problems of scattering in neutron transport as well as radiative transfer theories is investigated in terms of the amplitude w̄_k of the kth Legendre polynomial needed in the expansion of the scattering function. The parameters w̄_k describe the anisotropy of scattering of the medium considered. In particular, it is shown that the asymptotical density of zeros of the polynomials g_k^(m)(ν) is an inverted semicircle for the anisotropic non-multiplying scattering medium.

  16. Chromatic polynomials of random graphs

    International Nuclear Information System (INIS)

    Van Bussel, Frank; Fliegner, Denny; Timme, Marc; Ehrlich, Christoph; Stolzenberg, Sebastian

    2010-01-01

    Chromatic polynomials and related graph invariants are central objects in both graph theory and statistical physics. Computational difficulties, however, have so far restricted studies of such polynomials to graphs that were either very small, very sparse or highly structured. Recent algorithmic advances (Timme et al 2009 New J. Phys. 11 023001) now make it possible to compute chromatic polynomials for moderately sized graphs of arbitrary structure and number of edges. Here we present chromatic polynomials of ensembles of random graphs with up to 30 vertices, over the entire range of edge density. We specifically focus on the locations of the zeros of the polynomial in the complex plane. The results indicate that the chromatic zeros of random graphs have a very consistent layout. In particular, the crossing point, the point at which the chromatic zeros with non-zero imaginary part approach the real axis, scales linearly with the average degree over most of the density range. While the scaling laws obtained are purely empirical, if they continue to hold in general there are significant implications: the crossing points of chromatic zeros in the thermodynamic limit separate systems with zero ground state entropy from systems with positive ground state entropy, the latter an exception to the third law of thermodynamics.

  17. Link polynomial, crossing multiplier and surgery formula

    International Nuclear Information System (INIS)

    Deguchi, Tetsuo; Yamada, Yasuhiko.

    1989-01-01

    Relations between link polynomials constructed from exactly solvable lattice models and topological field theory are reviewed. It is found that the surgery formula for a three-sphere S^3 with Wilson lines corresponds to the Markov trace constructed from the exactly solvable models. This indicates that knot theory intimately relates various important subjects such as exactly solvable models, conformal field theories and topological quantum field theories. (author)

  18. New polynomial-based molecular descriptors with low degeneracy.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    Full Text Available In this paper, we introduce a novel graph polynomial called the 'information polynomial' of a graph. This graph polynomial can be derived by using a probability distribution of the vertex set. By using the zeros of the obtained polynomial, we additionally define some novel spectral descriptors. We compare them with descriptors based on computing the ordinary characteristic polynomial of a graph in a numerical study using real chemical databases, and find that the novel descriptors have a high discrimination power.

  19. Assessing the 2D Models of Geo-technological Variables in a Block of a Cuban Laterite Ore Body. Part IV Local Polynomial Method

    Directory of Open Access Journals (Sweden)

    Arístides Alejandro Legrá-Lobaina

    2016-10-01

    Full Text Available The local polynomial method is based on the assumption that it is possible to estimate the value of a variable U at any location of coordinates P through local polynomials estimated from nearby data. This investigation analyzes the possibility of modeling, in two dimensions, the thickness and the nickel, iron and cobalt concentrations in a block of Cuban laterite ores by using the mentioned method. It was also analyzed whether the results of modeling these variables depend on the estimation method that is used.

  20. Outlier detection algorithms for least squares time series regression

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Bent

    We review recent asymptotic results on some robust methods for multiple regression. The regressors include stationary and non-stationary time series as well as polynomial terms. The methods include the Huber-skip M-estimator, 1-step Huber-skip M-estimators, in particular the Impulse Indicator Sat...

  1. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    Science.gov (United States)

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.

  2. Response plateau models fitting via isotonic regression

    Directory of Open Access Journals (Sweden)

    Renata Pires Gonçalves

    2012-02-01

    The experiments of the dosage × response type are very common in the determination of nutrient levels for optimal food balance, and they include the use of regression models to achieve this objective. Nevertheless, the routine regression analysis generally uses a priori information about a possible relationship for the response variable. Isotonic regression is a least-squares estimation method that generates estimates which preserve the data ordering. In the theory of isotonic regression this information is essential and it is expected to increase fitting efficiency. The objective of this work was to use an isotonic regression methodology as an alternative way of analyzing data on Zn deposition in the tibia of male birds of the Hubbard lineage. We considered plateau response models of the quadratic polynomial and linear exponential forms. In addition to these models, we also proposed fitting a logarithmic model to the data, and the efficiency of the methodology was evaluated by Monte Carlo simulations, considering different scenarios for the parametric values. The isotonization of the data yielded an improvement in all the fitting quality parameters evaluated. Among the models used, the logarithmic one presented parameter estimates more consistent with the values reported in the literature.
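
    A small sketch of an order-preserving least-squares fit for a dose-response plateau pattern, using scikit-learn's pool-adjacent-violators implementation; the dose levels and responses below are invented for illustration.

      # Isotonic (non-decreasing) least-squares fit that flattens into a plateau.
      import numpy as np
      from sklearn.isotonic import IsotonicRegression

      dose = np.array([0, 10, 20, 30, 40, 50, 60, 70], dtype=float)        # assumed Zn levels
      response = np.array([1.1, 1.9, 2.6, 3.4, 3.3, 3.6, 3.5, 3.6])         # assumed tibia Zn

      iso = IsotonicRegression(increasing=True)
      fitted = iso.fit_transform(dose, response)   # monotone fit; later values merge into a plateau

      print(np.round(fitted, 3))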

  3. Need for higher order polynomial basis for polynomial nodal methods employed in LWR calculations

    International Nuclear Information System (INIS)

    Taiwo, T.A.; Palmiotti, G.

    1997-01-01

    The paper evaluates the accuracy and efficiency of sixth order polynomial solutions and the use of one radial node per core assembly for pressurized water reactor (PWR) core power distributions and reactivities. The computer code VARIANT was modified to calculate sixth order polynomial solutions for a hot zero power benchmark problem in which a control assembly along a core axis is assumed to be out of the core. Results are presented for the VARIANT, DIF3D-NODAL, and DIF3D-finite difference codes. The VARIANT results indicate that second order expansion of the within-node source and linear representation of the node surface currents are adequate for this problem. The results also demonstrate the improvement in the VARIANT solution when the order of the polynomial expansion of the within-node flux is increased from fourth to sixth order. There is a substantial saving in computational time for using one radial node per assembly with the sixth order expansion compared to using four or more nodes per assembly and fourth order polynomial solutions. 11 refs., 1 tab

  4. Introduction to the use of regression models in epidemiology.

    Science.gov (United States)

    Bender, Ralf

    2009-01-01

    Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.
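
    As an illustration of one of the models listed above, the sketch below fits a logistic regression for a binary outcome and reports an exposure effect adjusted for a confounder; the simulated data and variable names are assumptions, not data from the chapter.

      # Hedged sketch: adjusted odds ratio from a logistic regression on simulated data.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 500
      exposure = rng.binomial(1, 0.4, n)                 # exposure of interest
      age = rng.normal(50, 10, n)                        # potential confounder
      logit = -3.0 + 0.8 * exposure + 0.04 * age
      outcome = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

      X = sm.add_constant(np.column_stack([exposure, age]))
      fit = sm.Logit(outcome, X).fit(disp=False)

      odds_ratio_exposure = np.exp(fit.params[1])        # effect of exposure, adjusted for age
      print(f"adjusted OR for exposure: {odds_ratio_exposure:.2f}")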

  5. Sheffer and Non-Sheffer Polynomial Families

    Directory of Open Access Journals (Sweden)

    G. Dattoli

    2012-01-01

    Full Text Available By using the integral transform method, we introduce some non-Sheffer polynomial sets. Furthermore, we show how to compute the connection coefficients for particular expressions of Appell polynomials.

  6. BOX-COX transformation and random regression models for fecal egg count data

    Directory of Open Access Journals (Sweden)

    Marcos Vinicius Silva

    2012-01-01

    Full Text Available Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used to achieve normality before analysis. However, the transformed data are often not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6,375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing the skewness and kurtosis, and dramatically increased estimates of heritability, and measurements of FEC obtained in the period between 12 and 26 weeks in a 26-week experimental challenge period are genetically correlated.

  7. Box-Cox Transformation and Random Regression Models for Fecal egg Count Data.

    Science.gov (United States)

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P; Sonstegard, Tad S; Cobuci, Jaime Araujo; Gasbarre, Louis C

    2011-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing the skewness and kurtosis, and dramatically increased estimates of heritability, and measurements of FEC obtained in the period between 12 and 26 weeks in a 26-week experimental challenge period are genetically correlated.
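
    A minimal sketch of the Box-Cox step described in these two records: estimate the transformation parameter by maximum likelihood on shifted, strictly positive counts and check the reduction in skewness; the simulated counts and the +1 shift are assumptions made for illustration.

      # Box-Cox transformation of skewed count-like data before fitting a linear model.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      fec = rng.negative_binomial(n=1, p=0.05, size=500).astype(float)   # heavily skewed toy counts

      shifted = fec + 1.0                          # Box-Cox requires strictly positive values; zeros are common in FEC data
      transformed, lam = stats.boxcox(shifted)     # lambda chosen by maximum likelihood

      print(f"estimated lambda: {lam:.3f}")
      print(f"skewness before: {stats.skew(shifted):.2f}, after: {stats.skew(transformed):.2f}")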

  8. Generalized Pseudospectral Method and Zeros of Orthogonal Polynomials

    Directory of Open Access Journals (Sweden)

    Oksana Bihun

    2018-01-01

    Full Text Available Via a generalization of the pseudospectral method for numerical solution of differential equations, a family of nonlinear algebraic identities satisfied by the zeros of a wide class of orthogonal polynomials is derived. The generalization is based on a modification of pseudospectral matrix representations of linear differential operators proposed in the paper, which allows these representations to depend on two, rather than one, sets of interpolation nodes. The identities hold for every polynomial family {p_ν(x)}, ν = 0, 1, 2, ..., orthogonal with respect to a measure supported on the real line that satisfies some standard assumptions, as long as the polynomials in the family satisfy differential equations A p_ν(x) = q_ν(x) p_ν(x), where A is a linear differential operator and each q_ν(x) is a polynomial of degree at most n_0 ∈ N; n_0 does not depend on ν. The proposed identities generalize known identities for classical and Krall orthogonal polynomials to the case of the nonclassical orthogonal polynomials that belong to the class described above. The generalized pseudospectral representations of the differential operator A for the case of the Sonin-Markov orthogonal polynomials, also known as generalized Hermite polynomials, are presented. The general result is illustrated by new algebraic relations satisfied by the zeros of the Sonin-Markov polynomials.

  9. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia

    2018-04-10

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite the fact that this leads to a proper posterior for the regression coefficients, the resulting posterior variance is however affected by an unidentifiable parameter, hence any inferential procedure besides point estimation is unreliable. We propose a model-based approach for quantile regression that considers quantiles of the generating distribution directly, and thus allows for a proper uncertainty quantification. We then create a link between quantile regression and generalised linear models by mapping the quantiles to the parameter of the response variable, and we exploit it to fit the model with R-INLA. We extend it also to the case of discrete responses, where there is no 1-to-1 relationship between quantiles and the distribution's parameter, by introducing continuous generalisations of the most common discrete variables (Poisson, Binomial and Negative Binomial) to be exploited in the fitting.
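
    For contrast with the model-based approach proposed above, the sketch below runs classical quantile regression based on the pinball (check) loss for a few quantile levels; the heteroscedastic toy data are assumptions, and statsmodels' QuantReg does the fitting.

      # Classical quantile regression (pinball loss) at several quantile levels on toy data.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      x = rng.uniform(0, 10, 300)
      y = 1.0 + 0.5 * x + rng.standard_normal(300) * (0.5 + 0.1 * x)   # heteroscedastic noise

      X = sm.add_constant(x)
      for q in (0.1, 0.5, 0.9):
          fit = sm.QuantReg(y, X).fit(q=q)
          print(f"quantile {q}: intercept={fit.params[0]:.2f}, slope={fit.params[1]:.2f}")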

  10. Macdonald polynomials from Sklyanin algebras: A conceptual basis for the p-adics-quantum group connection

    International Nuclear Information System (INIS)

    Freund, P.G.O.

    1992-01-01

    We establish a previously conjectured connection between p-adics and quantum groups. We find in Sklyanin's two-parameter elliptic quantum algebra and its generalizations the conceptual basis for the Macdonald polynomials, which 'interpolate' between the zonal spherical functions of related real and p-adic symmetric spaces. The elliptic quantum algebras underlie the Z_n-Baxter models. We show that in the n→∞ limit, the Jost function for the scattering of first-level excitations in the Z_n-Baxter model coincides with the Harish-Chandra-like c-function constructed from the Macdonald polynomials associated to the root system A_1. The partition function of the Z_2-Baxter model itself is also expressed in terms of this Macdonald-Harish-Chandra c-function, albeit in a less simple way. We relate the two parameters q and t of the Macdonald polynomials to the anisotropy and modular parameters of the Baxter model. In particular the p-adic 'regimes' in the Macdonald polynomials correspond to a discrete sequence of XXZ models. We also discuss the possibility of 'q-deforming' Euler products. (orig.)

  11. On the Connection Coefficients of the Chebyshev-Boubaker Polynomials

    Directory of Open Access Journals (Sweden)

    Paul Barry

    2013-01-01

    Full Text Available The Chebyshev-Boubaker polynomials are the orthogonal polynomials whose coefficient arrays are defined by ordinary Riordan arrays. Examples include the Chebyshev polynomials of the second kind and the Boubaker polynomials. We study the connection coefficients of this class of orthogonal polynomials, indicating how Riordan array techniques can lead to closed-form expressions for these connection coefficients as well as recurrence relations that define them.

  12. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion

    International Nuclear Information System (INIS)

    Oladyshkin, S.; Nowak, W.

    2012-01-01

    We discuss the arbitrary polynomial chaos (aPC), which has been the subject of research in a few recent theoretical papers. Like all polynomial chaos expansion techniques, aPC approximates the dependence of simulation model output on model parameters by expansion in an orthogonal polynomial basis. The aPC generalizes chaos expansion techniques towards arbitrary distributions with arbitrary probability measures, which can be either discrete, continuous, or discretized continuous and can be specified either analytically (as probability density/cumulative distribution functions), numerically as a histogram, or as raw data sets. We show that the aPC at finite expansion order only demands the existence of a finite number of moments and does not require the complete knowledge or even existence of a probability density function. This avoids the necessity to assign parametric probability distributions that are not sufficiently supported by limited available data. Alternatively, it allows modellers to choose, free of technical constraints, the shapes of their statistical assumptions. Our key idea is to align the complexity level and order of analysis with the reliability and detail level of statistical information on the input parameters. We provide conditions for existence and clarify the relation of the aPC to statistical moments of model parameters. We test the performance of the aPC with diverse statistical distributions and with raw data. In these exemplary test cases, we illustrate the convergence with increasing expansion order and, for the first time, with increasing reliability level of statistical input information. Our results indicate that the aPC shows an exponential convergence rate and converges faster than classical polynomial chaos expansion techniques.
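
    As a rough, data-driven illustration of the aPC idea (not the authors' implementation), the sketch below builds an orthonormal polynomial basis directly from the raw sample moments of an arbitrary data set, using the Hankel matrix of empirical moments and a Cholesky factorization; the function name, moment-based construction, and test distribution are assumptions made for this example.

```python
import numpy as np

def apc_basis(samples, order):
    """Coefficients of polynomials p_0..p_order that are orthonormal
    w.r.t. the empirical measure of `samples`.
    Row k of the returned matrix holds the monomial coefficients of p_k."""
    # Empirical raw moments E[x^j] for j = 0..2*order
    moments = np.array([np.mean(samples**j) for j in range(2 * order + 1)])
    # Gram (Hankel) matrix of the monomial basis: G[i, j] = E[x^(i+j)]
    G = np.array([[moments[i + j] for j in range(order + 1)]
                  for i in range(order + 1)])
    # If G = L L^T, the rows of inv(L) give orthonormal polynomials
    L = np.linalg.cholesky(G)
    return np.linalg.inv(L)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Arbitrary, non-Gaussian raw data; no parametric distribution assumed
    data = rng.lognormal(mean=0.0, sigma=0.4, size=20000)
    C = apc_basis(data, order=3)
    # Check empirical orthonormality: V[i, k] = p_k(x_i)
    V = np.vander(data, 4, increasing=True) @ C.T
    print(np.round(V.T @ V / len(data), 2))   # ~ identity matrix
```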

  13. A Posteriori Error Analysis of Stochastic Differential Equations Using Polynomial Chaos Expansions

    KAUST Repository

    Butler, T.; Dawson, C.; Wildey, T.

    2011-01-01

    We develop computable a posteriori error estimates for linear functionals of a solution to a general nonlinear stochastic differential equation with random model/source parameters. These error estimates are based on a variational analysis applied to stochastic Galerkin methods for forward and adjoint problems. The result is a representation for the error estimate as a polynomial in the random model/source parameter. The advantage of this method is that we use polynomial chaos representations for the forward and adjoint systems to cheaply produce error estimates by simple evaluation of a polynomial. By comparison, the typical method of producing such estimates requires repeated forward/adjoint solves for each new choice of random parameter. We present numerical examples showing that there is excellent agreement between these methods. © 2011 Society for Industrial and Applied Mathematics.

  14. Polynomial sequences generated by infinite Hessenberg matrices

    Directory of Open Access Journals (Sweden)

    Verde-Star Luis

    2017-01-01

    Full Text Available We show that an infinite lower Hessenberg matrix generates polynomial sequences that correspond to the rows of infinite lower triangular invertible matrices. Orthogonal polynomial sequences are obtained when the Hessenberg matrix is tridiagonal. We study properties of the polynomial sequences and their corresponding matrices which are related to recurrence relations, companion matrices, matrix similarity, construction algorithms, and generating functions. When the Hessenberg matrix is also Toeplitz the polynomial sequences turn out to be of interpolatory type and we obtain additional results. For example, we show that every nonderogatory finite square matrix is similar to a unique Toeplitz-Hessenberg matrix.
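
    A minimal sketch of the construction in the tridiagonal special case mentioned above: reading the rows of a lower Hessenberg matrix H as the relation x·p_n(x) = Σ_k H[n, k]·p_k(x) generates a polynomial sequence, and a tridiagonal (Jacobi) choice of H reproduces a classical orthogonal family. The helper name and the Chebyshev example are assumptions for illustration.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def hessenberg_polynomials(H, n_polys):
    """Generate p_0, ..., p_{n_polys-1} (ascending coefficient arrays)
    from a lower Hessenberg matrix H via  x*p_n = sum_k H[n, k]*p_k,
    where the sum runs over k = 0..n+1."""
    polys = [np.array([1.0])]                       # p_0 = 1
    for n in range(n_polys - 1):
        xp = P.polymulx(polys[n])                   # x * p_n(x)
        for k in range(n + 1):                      # subtract the known terms
            xp = P.polysub(xp, H[n, k] * polys[k])
        polys.append(xp / H[n, n + 1])              # solve for p_{n+1}
    return polys

if __name__ == "__main__":
    # Tridiagonal (Jacobi) case: x*p_n = 0.5*p_{n+1} + 0.5*p_{n-1} is the
    # recurrence of the Chebyshev polynomials of the first kind
    # (with the usual convention T_1(x) = x, i.e. H[0, 1] = 1).
    N = 6
    H = np.zeros((N, N + 1))
    for n in range(N):
        H[n, n + 1] = 0.5 if n > 0 else 1.0
        if n > 0:
            H[n, n - 1] = 0.5
    for p in hessenberg_polynomials(H, N):
        print(np.round(p, 3))
```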

  15. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

  16. Special polynomials associated with rational solutions of some hierarchies

    International Nuclear Information System (INIS)

    Kudryashov, Nikolai A.

    2009-01-01

    New special polynomials associated with rational solutions of the Painleve hierarchies are introduced. The Hirota relations for these special polynomials are found. Differential-difference hierarchies to find special polynomials are presented. These formulae allow us to search for special polynomials associated with the hierarchies. It is shown that rational solutions of the Caudrey-Dodd-Gibbon, the Kaup-Kupershmidt and the modified hierarchies can be obtained using the new special polynomials.

  17. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    Directory of Open Access Journals (Sweden)

    F. Santonja

    2012-01-01

    Full Text Available Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models consider that the parameters are deterministic variables. But in practice, the transmission parameters present large variability and it is not possible to determine them exactly, so it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we will apply the approach to an obesity epidemic model.
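
    A toy, non-intrusive sketch of computing first- and second-order output moments for a differential equation with a random coefficient (not the auxiliary-system construction used in the paper): the solution of a simple decay model is evaluated at Gauss-Hermite quadrature nodes of a Gaussian parameter. All parameter values are assumptions.

```python
import numpy as np

# Toy model dX/dt = -k*X, X(0) = x0, with random rate k ~ N(mu, sigma^2).
x0, mu, sigma, t = 100.0, 0.3, 0.05, 5.0

# Gauss-Hermite nodes/weights for integrals against exp(-z^2)
z, w = np.polynomial.hermite.hermgauss(20)
k = mu + np.sqrt(2.0) * sigma * z          # map nodes to the N(mu, sigma^2) law
w = w / np.sqrt(np.pi)                     # normalised weights (sum to 1)

x = x0 * np.exp(-k * t)                    # model output at each quadrature node
mean = np.sum(w * x)                       # first-order moment
var = np.sum(w * x**2) - mean**2           # second-order (central) moment

print(f"E[X(t)]   = {mean:.3f}")
print(f"Var[X(t)] = {var:.3f}")

# Monte Carlo cross-check
rng = np.random.default_rng(1)
xs = x0 * np.exp(-rng.normal(mu, sigma, 200000) * t)
print(f"MC mean/var = {xs.mean():.3f} / {xs.var():.3f}")
```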

  18. A Non-Polynomial Gravity Formulation for Loop Quantum Cosmology Bounce

    Directory of Open Access Journals (Sweden)

    Stefano Chinaglia

    2017-09-01

    Full Text Available Recently the so-called mimetic gravity approach has been used to obtain corrections to the Friedmann equation of General Relativity similar to the ones present in loop quantum cosmology. In this paper, we propose an alternative way to derive this modified Friedmann equation via the so-called non-polynomial gravity approach, which consists of adding geometric non-polynomial higher derivative terms to the Hilbert–Einstein action, terms which are nonetheless polynomial and lead to a second-order differential equation in Friedmann–Lemaître–Robertson–Walker space-times. Our explicit action turns out to be a realization of the Helling proposal of effective action with an infinite number of terms. The model is also investigated in the presence of a non-vanishing cosmological constant, and a new exact bounce solution is found and studied.

  19. Linear regression techniques for state-space models with application to biomedical/biochemical example

    NARCIS (Netherlands)

    Khairudin, N.; Keesman, K.J.

    2009-01-01

    In this paper a novel approach to estimate parameters in an LTI continuous-time state-space model is proposed. Essentially, the approach is based on a so-called pqR-decomposition of the numerator and denominator polynomials of the system’s transfer function. This approach allows the physical

  20. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
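
    A rough numerical sketch of the recipe summarised above (not the authors' code): two equally sized groups whose success probabilities differ on the logit scale by the slope times twice the covariate standard deviation, with the overall event probability held fixed, plugged into the standard two-proportion sample-size formula. The function name and example values are assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def logistic_sample_size(beta, sd_x, p_bar, alpha=0.05, power=0.80):
    """Total sample size for testing slope `beta` in logistic regression,
    via the approximately equivalent two-sample (two-proportion) problem."""
    delta = beta * 2.0 * sd_x                       # logit difference between the groups
    logit = lambda p: np.log(p / (1.0 - p))
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    # Choose p1 so that the overall event probability stays at p_bar
    p1 = brentq(lambda p: 0.5 * (p + expit(logit(p) + delta)) - p_bar, 1e-9, 1 - 1e-9)
    p2 = expit(logit(p1) + delta)
    # Standard two-proportion formula with equal group sizes n per group
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pool = 0.5 * (p1 + p2)
    n_per_group = ((z_a * np.sqrt(2 * pool * (1 - pool))
                    + z_b * np.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1)) ** 2
    return int(np.ceil(2 * n_per_group))

# Example: slope 0.4 per unit of a covariate with SD 1, overall response rate 0.3
print(logistic_sample_size(beta=0.4, sd_x=1.0, p_bar=0.3))
```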

  1. Cosmographic analysis with Chebyshev polynomials

    Science.gov (United States)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-05-01

    The limits of standard cosmography are here revised, addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To figure this out, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and we compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which convergence of Chebyshev rational functions is better than standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques, based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.

  2. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-06-30

    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
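
    A single-level sketch (not the multilevel algorithm of the paper) of weighted least squares polynomial approximation on a Legendre basis, with samples drawn from the Chebyshev (arcsine) density that is commonly used as a near-optimal sampling distribution on [-1, 1]; the target function, degree, and sample count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_ls_legendre(f, degree, n_samples):
    """Weighted least squares fit of f on [-1, 1] in the Legendre basis,
    sampling from the Chebyshev (arcsine) density ~ 1/(pi*sqrt(1-x^2))."""
    # Draw from the arcsine distribution via x = cos(pi*U)
    x = np.cos(np.pi * rng.random(n_samples))
    # Weights proportional to (uniform target density) / (sampling density)
    w = np.pi * np.sqrt(1.0 - x**2) / 2.0
    # Design matrix of Legendre polynomials evaluated at the samples
    A = np.polynomial.legendre.legvander(x, degree)
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(W * A, np.sqrt(w) * f(x), rcond=None)
    return coef

f = lambda x: np.exp(x) * np.sin(3 * x)
coef = weighted_ls_legendre(f, degree=10, n_samples=400)
xt = np.linspace(-1, 1, 1000)
err = np.max(np.abs(np.polynomial.legendre.legval(xt, coef) - f(xt)))
print(f"max abs error of the degree-10 fit: {err:.2e}")
```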

  3. Force prediction in cold rolling mills by polynomial methods

    Directory of Open Access Journals (Sweden)

    Nicu ROMAN

    2007-12-01

    Full Text Available A method for steel and aluminium strip thickness control is provided, including a new technique for predictive rolling force estimation using a statistical model based on polynomial techniques.

  4. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  5. Relations between zeros of special polynomials associated with the Painleve equations

    International Nuclear Information System (INIS)

    Kudryashov, Nikolai A.; Demina, Maria V.

    2007-01-01

    A method for finding relations of roots of polynomials is presented. Our approach allows us to get a number of relations between the zeros of the classical polynomials as well as the roots of special polynomials associated with rational solutions of the Painleve equations. We apply the method to obtain the relations for the zeros of several polynomials. These are: the Hermite polynomials, the Laguerre polynomials, the Yablonskii-Vorob'ev polynomials, the generalized Okamoto polynomials, and the generalized Hermite polynomials. All the relations found can be considered as analogues of generalized Stieltjes relations

  6. Accurate polynomial expressions for the density and specific volume of seawater using the TEOS-10 standard

    Science.gov (United States)

    Roquet, F.; Madec, G.; McDougall, Trevor J.; Barker, Paul M.

    2015-06-01

    A new set of approximations to the standard TEOS-10 equation of state are presented. These follow a polynomial form, making it computationally efficient for use in numerical ocean models. Two versions are provided, the first being a fit of density for Boussinesq ocean models, and the second fitting specific volume which is more suitable for compressible models. Both versions are given as the sum of a vertical reference profile (6th-order polynomial) and an anomaly (52-term polynomial, cubic in pressure), with relative errors of ∼0.1% on the thermal expansion coefficients. A 75-term polynomial expression is also presented for computing specific volume, with a better accuracy than the existing TEOS-10 48-term rational approximation, especially regarding the sound speed, and it is suggested that this expression represents a valuable approximation of the TEOS-10 equation of state for hydrographic data analysis. In the last section, practical aspects about the implementation of TEOS-10 in ocean models are discussed.

  7. Polynomial Chaos Characterization of Uncertainty in Multiscale Models and Behavior of Carbon Reinforced Composites

    Energy Technology Data Exchange (ETDEWEB)

    Mehrez, Loujaine [University of Southern California; Ghanem, Roger [University of Southern California; Aitharaju, Venkat [General Motors; Rodgers, William [General Motors

    2017-10-23

    Design of non-crimp fabric (NCF) composites entails major challenges pertaining to (1) the complex fine-scale morphology of the constituents, (2) the manufacturing-produced inconsistency of this morphology spatially, and thus (3) the ability to build reliable, robust, and efficient computational surrogate models to account for this complex nature. Traditional approaches to construct computational surrogate models have been to average over the fluctuations of the material properties at different scale lengths. This fails to account for the fine-scale features and fluctuations in morphology, material properties of the constituents, as well as fine-scale phenomena such as damage and cracks. In addition, it fails to accurately predict the scatter in macroscopic properties, which is vital to the design process and behavior prediction. In this work, funded in part by the Department of Energy, we present an approach for addressing these challenges by relying on polynomial chaos representations of both input parameters and material properties at different scales. Moreover, we emphasize the efficiency and robustness of integrating the polynomial chaos expansion with multiscale tools to perform multiscale assimilation, characterization, propagation, and prediction, all of which are necessary to construct the data-driven surrogate models required to design under the uncertainty of composites. These data-driven constructions provide an accurate map from parameters (and their uncertainties) at all scales and the system-level behavior relevant for design. While this perspective is quite general and applicable to all multiscale systems, NCF composites present a particular hierarchy of scales that permits the efficient implementation of these concepts.

  8. On polynomial solutions of the Heun equation

    International Nuclear Information System (INIS)

    Gurappa, N; Panigrahi, Prasanta K

    2004-01-01

    By making use of a recently developed method to solve linear differential equations of arbitrary order, we find a wide class of polynomial solutions to the Heun equation. We construct the series solution to the Heun equation before identifying the polynomial solutions. The Heun equation extended by the addition of a term, -σ/x, is also amenable to polynomial solutions. (letter to the editor)

  9. A new Arnoldi approach for polynomial eigenproblems

    Energy Technology Data Exchange (ETDEWEB)

    Raeven, F.A.

    1996-12-31

    In this paper we introduce a new generalization of the method of Arnoldi for matrix polynomials. The new approach is compared with the approach of rewriting the polynomial problem into a linear eigenproblem and applying the standard method of Arnoldi to the linearised problem. The algorithm that can be applied directly to the polynomial eigenproblem turns out to be more efficient, both in storage and in computation.

  10. A New Six-Parameter Model Based on Chebyshev Polynomials for Solar Cells

    Directory of Open Access Journals (Sweden)

    Shu-xian Lun

    2015-01-01

    Full Text Available This paper presents a new current-voltage (I-V) model for solar cells. It has been proved that the series resistance of a solar cell is related to temperature. However, the existing five-parameter model ignores the temperature dependence of series resistance and thus only accurately predicts the performance of monocrystalline silicon solar cells. Therefore, this paper uses Chebyshev polynomials to describe the relationship between series resistance and temperature. This introduces a new parameter, called the temperature coefficient of series resistance, into the single-diode model. Then, a new six-parameter model for solar cells is established in this paper. This new model can improve the accuracy of the traditional single-diode model and reflect the temperature dependence of series resistance. To validate the accuracy of the six-parameter model in this paper, five kinds of silicon solar cells with different technology types, that is, monocrystalline silicon, polycrystalline silicon, thin film silicon, and triple-junction amorphous silicon, are tested at different irradiance and temperature conditions. Experimental results show that the six-parameter model proposed in this paper is an I-V model with moderate computational complexity and high precision.

  11. Regression Models For Multivariate Count Data.

    Science.gov (United States)

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2017-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.

  12. Orthogonal Polynomials and Special Functions

    CERN Document Server

    Assche, Walter

    2003-01-01

    The set of lectures from the Summer School held in Leuven in 2002 provide an up-to-date account of recent developments in orthogonal polynomials and special functions, in particular for algorithms for computer algebra packages, 3nj-symbols in representation theory of Lie groups, enumeration, multivariable special functions and Dunkl operators, asymptotics via the Riemann-Hilbert method, exponential asymptotics and the Stokes phenomenon. The volume aims at graduate students and post-docs working in the field of orthogonal polynomials and special functions, and in related fields interacting with orthogonal polynomials, such as combinatorics, computer algebra, asymptotics, representation theory, harmonic analysis, differential equations, physics. The lectures are self-contained requiring only a basic knowledge of analysis and algebra, and each includes many exercises.

  13. Multiple Meixner polynomials and non-Hermitian oscillator Hamiltonians

    International Nuclear Information System (INIS)

    Ndayiragije, F; Van Assche, W

    2013-01-01

    Multiple Meixner polynomials are polynomials in one variable which satisfy orthogonality relations with respect to r > 1 different negative binomial distributions (Pascal distributions). There are two kinds of multiple Meixner polynomials, depending on the selection of the parameters in the negative binomial distribution. We recall their definition and some formulas and give generating functions and explicit expressions for the coefficients in the nearest neighbor recurrence relation. Following a recent construction of Miki, Tsujimoto, Vinet and Zhedanov (for multiple Meixner polynomials of the first kind), we construct r > 1 non-Hermitian oscillator Hamiltonians in r dimensions which are simultaneously diagonalizable and for which the common eigenstates are expressed in terms of multiple Meixner polynomials of the second kind. (paper)

  14. Colouring and knot polynomials

    International Nuclear Information System (INIS)

    Welsh, D.J.A.

    1991-01-01

    These lectures will attempt to explain a connection between the recent advances in knot theory using the Jones and related knot polynomials with classical problems in combinatorics and statistical mechanics. The difficulty of some of these problems will be analysed in the context of their computational complexity. In particular we shall discuss colourings and group-valued flows in graphs, knots and the Jones and Kauffman polynomials, the Ising, Potts and percolation problems of statistical physics, and the computational complexity of the above problems. (author). 20 refs, 9 figs

  15. Uniqueness and zeros of q-shift difference polynomials

    Indian Academy of Sciences (India)

    In this paper, we consider the zero distributions of q-shift difference polynomials of meromorphic functions with zero order, and obtain two theorems that extend the classical Hayman results on the zeros of differential polynomials to q-shift difference polynomials. We also investigate the uniqueness problem of q-shift ...

  16. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. An empirical analysis of the US house price index data is illustrated for the proposed methodology.

  17. Factoring polynomials over arbitrary finite fields

    NARCIS (Netherlands)

    Lange, T.; Winterhof, A.

    2000-01-01

    We analyse an extension of Shoup's (Inform. Process. Lett. 33 (1990) 261–267) deterministic algorithm for factoring polynomials over finite prime fields to arbitrary finite fields. In particular, we prove the existence of a deterministic algorithm which completely factors all monic polynomials of

  18. Additive and polynomial representations

    CERN Document Server

    Krantz, David H; Suppes, Patrick

    1971-01-01

    Additive and Polynomial Representations deals with major representation theorems in which the qualitative structure is reflected as some polynomial function of one or more numerical functions defined on the basic entities. Examples are additive expressions of a single measure (such as the probability of disjoint events being the sum of their probabilities), and additive expressions of two measures (such as the logarithm of momentum being the sum of log mass and log velocity terms). The book describes the three basic procedures of fundamental measurement as the mathematical pivot, as the utiliz

  19. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
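
    A compact sketch of the general workflow discussed above, with scikit-learn's Lasso standing in for the ℓ1-minimization solver and samples drawn from the natural (uniform) distribution rather than the coherence-optimal sampler of the paper; the manufactured sparse expansion and sample sizes are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Manufactured sparse PC expansion in a 1-D Legendre basis of order 30
order = 30
true_coef = np.zeros(order + 1)
true_coef[[0, 3, 7, 12]] = [1.0, 0.8, -0.5, 0.3]

# Fewer random samples than basis functions (underdetermined system)
n_samples = 25
x = rng.uniform(-1, 1, n_samples)
y = np.polynomial.legendre.legval(x, true_coef)

# ell_1-penalised recovery of the sparse coefficient vector
A = np.polynomial.legendre.legvander(x, order)
fit = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100000).fit(A, y)

print("recovered support:", np.flatnonzero(np.abs(fit.coef_) > 1e-2))
print("max coefficient error:", np.max(np.abs(fit.coef_ - true_coef)))
```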

  1. Refined toric branes, surface operators and factorization of generalized Macdonald polynomials

    Science.gov (United States)

    Zenkevich, Yegor

    2017-09-01

    We find new universal factorization identities for generalized Macdonald polynomials on the topological locus. We prove the identities (which include all previously known formulas of this kind) using factorization identities for matrix model averages, which are themselves consequences of Ding-Iohara-Miki constraints. Factorized expressions for generalized Macdonald polynomials are identified with refined topological string amplitudes containing a toric brane on an intermediate preferred leg, surface operators in gauge theory and certain degenerate CFT vertex operators.

  2. A Determinant Expression for the Generalized Bessel Polynomials

    Directory of Open Access Journals (Sweden)

    Sheng-liang Yang

    2013-01-01

    Full Text Available Using the exponential Riordan arrays, we show that a variation of the generalized Bessel polynomial sequence is of Sheffer type, and we obtain a determinant formula for the generalized Bessel polynomials. As a result, the Bessel polynomial is represented as a determinant whose entries involve Catalan numbers.

  3. A generalization of the Bernoulli polynomials

    Directory of Open Access Journals (Sweden)

    Pierpaolo Natalini

    2003-01-01

    Full Text Available A generalization of the Bernoulli polynomials and, consequently, of the Bernoulli numbers, is defined starting from suitable generating functions. Furthermore, the differential equations of these new classes of polynomials are derived by means of the factorization method introduced by Infeld and Hull (1951).

  4. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favorite approaches is based on the neighbourhood of the region. Unfortunately, there are some limitations such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to accommodate the issue. In this paper, we will focus on spatial modeling with GPR for binomial data with logit link function. The performance of the model will be investigated. We will discuss inference, namely how to estimate the parameters and hyper-parameters, and how to predict as well. Furthermore, simulation studies will be explained in the last section.
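
    A small sketch of the general idea (a Gaussian process prior on a latent spatial surface combined with a logit-type link for binary outcomes), using scikit-learn's GaussianProcessClassifier as an off-the-shelf stand-in rather than the authors' model; the simulated coordinates and kernel choice are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Simulated spatial binary data: presence/absence driven by a smooth surface
coords = rng.uniform(0, 10, size=(300, 2))                 # (longitude, latitude)
latent = np.sin(coords[:, 0] / 2.0) + 0.5 * coords[:, 1] - 3.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-latent)))

# GP prior over the latent spatial field, squared-exponential covariance
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=2.0))
gpc.fit(coords, y)

# Predict the event probability on a small grid of new locations
grid = np.array([[x1, x2] for x1 in (2.0, 5.0, 8.0) for x2 in (2.0, 5.0, 8.0)])
print(np.round(gpc.predict_proba(grid)[:, 1], 2))
```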

  5. Information-theoretic lengths of Jacobi polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero, A; Dehesa, J S [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, Granada (Spain); Sanchez-Moreno, P, E-mail: agmartinez@ugr.e, E-mail: pablos@ugr.e, E-mail: dehesa@ugr.e [Instituto 'Carlos I' de Fisica Teorica y Computacional, Universidad de Granada, Granada (Spain)

    2010-07-30

    The information-theoretic lengths of the Jacobi polynomials P_n^{(α,β)}(x), which are information-theoretic measures (Renyi, Shannon and Fisher) of their associated Rakhmanov probability density, are investigated. They quantify the spreading of the polynomials along the orthogonality interval [−1, 1] in a complementary but different way than the root-mean-square or standard deviation because, contrary to this measure, they do not refer to any specific point of the interval. The explicit expressions of the Fisher length are given. The Renyi lengths are found by the use of the combinatorial multivariable Bell polynomials in terms of the polynomial degree n and the parameters (α, β). The Shannon length, which cannot be exactly calculated because of its logarithmic functional form, is bounded from below by using sharp upper bounds to general densities on [−1, +1] given in terms of various expectation values; moreover, its asymptotics is also pointed out. Finally, several computational issues relative to these three quantities are carefully analyzed.

  6. Transversals of Complex Polynomial Vector Fields

    DEFF Research Database (Denmark)

    Dias, Kealey

    Vector fields in the complex plane are defined by assigning the vector determined by the value P(z) to each point z in the complex plane, where P is a polynomial of one complex variable. We consider special families of so-called rotated vector fields that are determined by a polynomial multiplied by rotational constants. Transversals are a certain class of curves for such a family of vector fields that represent the bifurcation states for this family of vector fields. More specifically, transversals are curves that coincide with a homoclinic separatrix for some rotation of the vector field. Given a concrete polynomial, it seems to take quite a bit of work to prove that it is generic, i.e. structurally stable. This has been done for a special class of degree d polynomial vector fields having simple equilibrium points at the d roots of unity, d odd. In proving that such vector fields are generic...

  7. On Multiple Interpolation Functions of the q-Genocchi Polynomials

    Directory of Open Access Journals (Sweden)

    Jin Jeong-Hee

    2010-01-01

    Full Text Available Abstract Recently, many mathematicians have studied various kinds of the q-analogue of Genocchi numbers and polynomials. In the work (New approach to q-Euler, Genocchi numbers and their interpolation functions, Advanced Studies in Contemporary Mathematics, vol. 18, no. 2, pp. 105–112, 2009), Kim defined new generating functions of q-Genocchi and q-Euler polynomials, and their interpolation functions. In this paper, we give another definition of the multiple Hurwitz-type q-zeta function. This function interpolates q-Genocchi polynomials at negative integers. Finally, we also give some identities related to these polynomials.

  8. The modified Gauss diagonalization of polynomial matrices

    International Nuclear Information System (INIS)

    Saeed, K.

    1982-10-01

    The Gauss algorithm for diagonalization of constant matrices is modified for application to polynomial matrices. Due to this modification the diagonal elements become pure polynomials rather than rational functions. (author)

  9. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    Science.gov (United States)

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
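
    A short numerical illustration in the spirit of the article (not its worked examples): the exponential function interpolated by polynomials at Chebyshev nodes, with the maximum absolute error over the interval used as the error measure; the interval and degrees are assumptions.

```python
import numpy as np

def interpolate_exp(degree, a=-1.0, b=1.0):
    """Interpolate exp(x) on [a, b] at degree+1 Chebyshev nodes and
    report the maximum absolute error over the interval."""
    k = np.arange(degree + 1)
    nodes = 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * degree + 2))
    coef = np.polyfit(nodes, np.exp(nodes), degree)   # exact at the nodes
    x = np.linspace(a, b, 2000)
    return np.max(np.abs(np.polyval(coef, x) - np.exp(x)))

for d in (2, 4, 6, 8):
    print(f"degree {d}: max error {interpolate_exp(d):.2e}")
```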

  10. Exceptional polynomials and SUSY quantum mechanics

    Indian Academy of Sciences (India)

    Abstract. We show that quantum mechanical problems which admit classical Laguerre/Jacobi polynomials as solutions of the Schrödinger equation (SE) will also admit exceptional Laguerre/Jacobi polynomials as solutions, having the same eigenvalues but with the ground state missing after a modification of the ...

  11. A companion matrix for 2-D polynomials

    International Nuclear Information System (INIS)

    Boudellioua, M.S.

    1995-08-01

    In this paper, a matrix form analogous to the companion matrix which is often encountered in the theory of one dimensional (1-D) linear systems is suggested for a class of polynomials in two indeterminates and real coefficients, here referred to as two dimensional (2-D) polynomials. These polynomials arise in the context of 2-D linear systems theory. Necessary and sufficient conditions are also presented under which a matrix is equivalent to this companion form. (author). 6 refs

  12. Robust mislabel logistic regression without modeling mislabel probabilities.

    Science.gov (United States)

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.

  13. Mixed Frequency Data Sampling Regression Models: The R Package midasr

    Directory of Open Access Journals (Sweden)

    Eric Ghysels

    2016-08-01

    Full Text Available When modeling economic relationships it is increasingly common to encounter data sampled at different frequencies. We introduce the R package midasr which enables estimating regression models with variables sampled at different frequencies within a MIDAS regression framework put forward in work by Ghysels, Santa-Clara, and Valkanov (2002). In this article we define a general autoregressive MIDAS regression model with multiple variables of different frequencies and show how it can be specified using the familiar R formula interface and estimated using various optimization methods chosen by the researcher. We discuss how to check the validity of the estimated model both in terms of numerical convergence and statistical adequacy of a chosen regression specification, how to perform model selection based on an information criterion, how to assess forecasting accuracy of the MIDAS regression model and how to obtain a forecast aggregation of different MIDAS regression models. We illustrate the capabilities of the package with a simulated MIDAS regression model and give two empirical examples of application of MIDAS regression.

  14. Global sensitivity analysis by polynomial dimensional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2011-07-15

    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.

  15. Polynomial asymptotic stability of damped stochastic differential equations

    Directory of Open Access Journals (Sweden)

    John Appleby

    2004-08-01

    Full Text Available The paper studies the polynomial convergence of solutions of a scalar nonlinear Itô stochastic differential equation $dX(t) = -f(X(t))\,dt + \sigma(t)\,dB(t)$ where it is known, a priori, that $\lim_{t\rightarrow\infty} X(t)=0$, a.s. The intensity of the stochastic perturbation $\sigma$ is a deterministic, continuous and square integrable function, which tends to zero more quickly than a polynomially decaying function. The function $f$ obeys $\lim_{x\rightarrow 0}\mathrm{sgn}(x)f(x)/|x|^\beta = a$, for some $\beta>1$ and $a>0$. We study two asymptotic regimes: when $\sigma$ tends to zero sufficiently quickly the polynomial decay rate of solutions is the same as for the deterministic equation (when $\sigma\equiv 0$). When $\sigma$ decays more slowly, a weaker almost sure polynomial upper bound on the decay rate of solutions is established. Results which establish the necessity for $\sigma$ to decay polynomially in order to guarantee the almost sure polynomial decay of solutions are also proven.

  16. Degenerate r-Stirling Numbers and r-Bell Polynomials

    Science.gov (United States)

    Kim, T.; Yao, Y.; Kim, D. S.; Jang, G.-W.

    2018-01-01

    The purpose of this paper is to exploit umbral calculus in order to derive some properties, recurrence relations, and identities related to the degenerate r-Stirling numbers of the second kind and the degenerate r-Bell polynomials. Especially, we will express the degenerate r-Bell polynomials as linear combinations of many well-known families of special polynomials.

  17. Commutators with idempotent values on multilinear polynomials in ...

    Indian Academy of Sciences (India)

    Multilinear polynomial; derivations; generalized polynomial identity; prime ring; right ideal. Abstract. Let R be a prime ring of characteristic different from 2, C its extended centroid, d a nonzero derivation of R, f(x_1, …, x_n) a multilinear polynomial over C, ϱ a nonzero right ideal of R and m > 1 a fixed integer such that ...

  18. Polynomial weights and code constructions

    DEFF Research Database (Denmark)

    Massey, J; Costello, D; Justesen, Jørn

    1973-01-01

    For any nonzero element c of a general finite field GF(q), it is shown that the polynomials (x - c)^i, i = 0, 1, 2, …, have the "weight-retaining" property that any linear combination of these polynomials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum degree polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes, … of long constraint length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.

  19. The generalized Yablonskii-Vorob'ev polynomials and their properties

    International Nuclear Information System (INIS)

    Kudryashov, Nikolai A.; Demina, Maria V.

    2008-01-01

    Rational solutions of the generalized second Painleve hierarchy are classified. Representation of the rational solutions in terms of special polynomials, the generalized Yablonskii-Vorob'ev polynomials, is introduced. Differential-difference relations satisfied by the polynomials are found. Hierarchies of differential equations related to the generalized second Painleve hierarchy are derived. One of these hierarchies is a sequence of differential equations satisfied by the generalized Yablonskii-Vorob'ev polynomials

  20. 2-variable Laguerre matrix polynomials and Lie-algebraic techniques

    International Nuclear Information System (INIS)

    Khan, Subuhi; Hassan, Nader Ali Makboul

    2010-01-01

    The authors introduce 2-variable forms of Laguerre and modified Laguerre matrix polynomials and derive their special properties. Further, the representations of the special linear Lie algebra sl(2) and the harmonic oscillator Lie algebra G(0,1) are used to derive certain results involving these polynomials. Furthermore, the generating relations for the ordinary as well as matrix polynomials related to these matrix polynomials are derived as applications.

  1. Local Polynomial Regression (Lokální polynomická regrese)

    OpenAIRE

    Cigán, Martin

    2015-01-01

    This thesis examines local polynomial regression. Local polynomial regression is one of the non-parametric approaches to data fitting. This particular method is based on repeatedly fitting the data using a weighted least squares estimate of the parameters of a polynomial model. The aim of this thesis is therefore to review some properties of the weighted least squares estimate used in the linear regression model and to introduce the non-robust method of local polynomial regression. Some statistical...
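
    A minimal sketch of the non-robust local polynomial estimator described above: at each evaluation point a low-degree polynomial is fitted by kernel-weighted least squares and the local intercept is returned as the fitted value. The Gaussian kernel, bandwidth, and test function are assumptions.

```python
import numpy as np

def local_poly_fit(x, y, x_eval, degree=1, bandwidth=0.3):
    """Local polynomial regression: at each point in x_eval, fit a
    polynomial of the given degree by weighted least squares, with
    Gaussian kernel weights centred at that point."""
    fitted = np.empty_like(x_eval, dtype=float)
    for i, x0 in enumerate(x_eval):
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)       # kernel weights
        A = np.vander(x - x0, degree + 1, increasing=True)   # local design matrix
        W = np.sqrt(w)[:, None]
        beta, *_ = np.linalg.lstsq(W * A, np.sqrt(w) * y, rcond=None)
        fitted[i] = beta[0]                                  # intercept = fit at x0
    return fitted

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(0, 0.2, x.size)
x_eval = np.linspace(0, 2 * np.pi, 9)
print(np.round(local_poly_fit(x, y, x_eval), 2))   # should track sin(x_eval)
```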

  2. Impact of multicollinearity on small sample hydrologic regression models

    Science.gov (United States)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
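
    A compact sketch of two of the four techniques compared in the study, variance inflation factor screening and principal component regression, applied to simulated collinear data; the simulated design, VIF computation details, and number of retained components are assumptions, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated collinear explanatory variables (e.g. basin characteristics)
n = 40
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)        # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 0.5 * x3 + rng.normal(scale=0.5, size=n)

def vif(X):
    """Variance inflation factor per column: 1 / (1 - R^2 of column on the rest)."""
    out = []
    for j in range(X.shape[1]):
        others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        resid = X[:, j] - others @ np.linalg.lstsq(others, X[:, j], rcond=None)[0]
        r2 = 1.0 - resid.var() / X[:, j].var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

print("VIFs:", np.round(vif(X), 1))            # x1 and x2 show inflated values

# Principal component regression: regress y on the leading components of X
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                          # number of retained components
scores = U[:, :k] * s[:k]
gamma = np.linalg.lstsq(scores, y - y.mean(), rcond=None)[0]
beta_pcr = Vt[:k].T @ gamma                    # back-transform to original variables
print("PCR coefficients:", np.round(beta_pcr, 2))
```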

  3. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a

  4. Describing Quadratic Cremer Point Polynomials by Parabolic Perturbations

    DEFF Research Database (Denmark)

    Sørensen, Dan Erik Krarup

    1996-01-01

    We describe two infinite order parabolic perturbation procedures yielding quadratic polynomials having a Cremer fixed point. The main idea is to obtain the polynomial as the limit of repeated parabolic perturbations. The basic tool at each step is to control the behaviour of certain external rays. Polynomials of the Cremer type correspond to parameters at the boundary of a hyperbolic component of the Mandelbrot set. In this paper we concentrate on the main cardioid component. We investigate the differences between two-sided (i.e. alternating) and one-sided parabolic perturbations. In the two-sided case, we prove the existence of polynomials having an explicitly given external ray accumulating both at the Cremer point and at its non-periodic preimage. We think of the Julia set as containing a "topologist's double comb". In the one-sided case we prove a weaker result: the existence of polynomials having an explicitly given...

  5. Orthogonal polynomials derived from the tridiagonal representation approach

    Science.gov (United States)

    Alhaidari, A. D.

    2018-01-01

    The tridiagonal representation approach is an algebraic method for solving second order differential wave equations. Using this approach in the solution of quantum mechanical problems, we encounter two new classes of orthogonal polynomials whose properties give the structure and dynamics of the corresponding physical system. For a certain range of parameters, one of these polynomials has a mix of continuous and discrete spectra making it suitable for describing physical systems with both scattering and bound states. In this work, we define these polynomials by their recursion relations and highlight some of their properties using numerical means. Due to the prime significance of these polynomials in physics, we hope that our short expose will encourage experts in the field of orthogonal polynomials to study them and derive their properties (weight functions, generating functions, asymptotics, orthogonality relations, zeros, etc.) analytically.

  6. A note on some identities of derangement polynomials.

    Science.gov (United States)

    Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo; Kwon, Jongkyum

    2018-01-01

    The problem of counting derangements was initiated by Pierre Rémond de Montmort in 1708 (see Carlitz in Fibonacci Q. 16(3):255-258, 1978, Clarke and Sved in Math. Mag. 66(5):299-303, 1993, Kim, Kim and Kwon in Adv. Stud. Contemp. Math. (Kyungshang) 28(1):1-11, 2018). A derangement is a permutation that has no fixed points, and the derangement number D_n is the number of fixed-point-free permutations on an n element set. In this paper, we study the derangement polynomials and investigate some interesting properties which are related to derangement numbers. Also, we study two generalizations of derangement polynomials, namely higher-order and r-derangement polynomials, and show some relations between them. In addition, we express several special polynomials in terms of the higher-order derangement polynomials by using umbral calculus.

  7. Testing homogeneity in Weibull-regression models.

    Science.gov (United States)

    Bolfarine, Heleno; Valença, Dione M

    2005-10-01

    In survival studies with families or geographical units it may be of interest testing whether such groups are homogeneous for given explanatory variables. In this paper we consider score type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model presents survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model and, in the uncensored situation, a closed form is obtained for the test statistic. A simulation study is used for comparing the power of the tests. The proposed tests are applied to real data sets with censored data.

  8. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia; Rue, Haavard

    2018-01-01

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite

  9. Detection of epistatic effects with logic regression and a classical linear regression model.

    Science.gov (United States)

    Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata

    2014-02-01

    To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes which cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, this Cockerham approach is often not capable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though with the logic regression approach a larger number of models has to be considered (requiring more stringent multiple testing correction), the efficient representation of higher order logic interactions in logic regression models leads to a significant increase of power to detect such interactions as compared to the Cockerham approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and real data analysis.

  10. Polynomial solutions of the Monge-Ampère equation

    Energy Technology Data Exchange (ETDEWEB)

    Aminov, Yu A [B.Verkin Institute for Low Temperature Physics and Engineering, National Academy of Sciences of Ukraine, Khar' kov (Ukraine)

    2014-11-30

    The question of the existence of polynomial solutions to the Monge-Ampère equation z_xx z_yy − z_xy^2 = f(x,y) is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree, which is positive for all values of its arguments and has a positive quadratic part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x, y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.

  11. Bioeconomic of profit maximization of red tilapia (Oreochromis sp.) culture using polynomial growth model

    Science.gov (United States)

    Wijayanto, D.; Kurohman, F.; Nugroho, RA

    2018-03-01

    The purpose of this research was to develop a bioeconomic model of profit maximization that can be applied to red tilapia culture. The fish growth model was developed using a polynomial growth function. Profit was maximized by setting the first derivative of the profit equation with respect to culture time equal to zero. The research also developed equations to estimate the culture time needed to reach a target harvest size. The results showed that the model can be applied to red tilapia culture. In the case of this study, red tilapia culture achieves its maximum profit at 584 days, with a profit of Rp. 28,605,731 per culture cycle. If a target size of 250 g is used, red tilapia culture needs 82 days of culture time.
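
    A hedged sketch of the optimisation step described above: with a polynomial growth curve w(t) and profit equal to revenue minus a time-proportional cost, the optimal culture time is where the first derivative of profit with respect to time vanishes. All coefficients, prices and stocking numbers below are invented placeholders, not the paper's values.

```python
import sympy as sp

t = sp.symbols("t", positive=True)
a, b, c = 0.002, 4.0, 5.0      # hypothetical coefficients of a concave polynomial growth curve (grams)
p, k, N = 0.03, 600.0, 10000   # hypothetical price per gram, daily cost, number of fish stocked

w = -a * t**2 + b * t + c      # polynomial growth model: weight per fish at day t
profit = p * N * w - k * t     # revenue at harvest minus cumulative operating cost

t_opt = sp.solve(sp.Eq(sp.diff(profit, t), 0), t)[0]   # set d(profit)/dt = 0
print(float(t_opt), float(profit.subs(t, t_opt)))      # optimal culture time (days) and the profit there
```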

  12. Comparison of autoregressive (AR) strategy with that of regression approach for determining ozone layer depletion as a physical process

    International Nuclear Information System (INIS)

    Yousufzai, M.A.K; Aansari, M.R.K.; Quamar, J.; Iqbal, J.; Hussain, M.A.

    2010-01-01

    This communication presents the development of a comprehensive characterization of the ozone layer depletion (OLD) phenomenon as a physical process in the form of mathematical models that comprise the usual regression, multiple or polynomial regression, and a stochastic strategy. The relevance of these models has been illuminated using predicted values of different parameters under a changing environment. The information obtained from such analysis can be employed to alter the possible factors and variables to achieve optimum performance. This kind of analysis initiates a study towards formulating the phenomenon of OLD as a physical process with special reference to the stratospheric region of Pakistan. The data presented here establish that autoregressive (AR) modeling of OLD as a physical process is a more appropriate scenario than the usual regression. Data reported in the literature suggest quantitatively that OLD is occurring in our region. For this purpose we have modeled this phenomenon using the data recorded at the Geophysical Centre Quetta during the period 1960-1999. The predictions made by this analysis are useful for public, private and other relevant organizations. (author)
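
    To make the comparison concrete, the sketch below fits an autoregressive (AR) model and an ordinary polynomial-trend regression to the same synthetic monthly series and reports their in-sample errors. It uses statsmodels' AutoReg and numpy's polyfit; the simulated "ozone" series is a placeholder for the Quetta record, not the actual data.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg  # third-party package, assumed available

rng = np.random.default_rng(1)
n = 480                                   # e.g. 40 years of monthly values
months = np.arange(n)
y = 300 - 0.02 * months + 5 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, n)

ar_res = AutoReg(y, lags=12).fit()                          # stochastic (AR) strategy
poly_fit = np.polyval(np.polyfit(months, y, 2), months)     # usual polynomial regression strategy

print("AR in-sample RMSE  :", np.sqrt(np.mean((y[12:] - ar_res.fittedvalues) ** 2)))
print("Poly in-sample RMSE:", np.sqrt(np.mean((y - poly_fit) ** 2)))
```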

  13. Multidimensional Gravitational Models: Fluxbrane and S-Brane Solutions with Polynomials

    International Nuclear Information System (INIS)

    Ivashchuk, V. D.; Melnikov, V. N.

    2007-01-01

    Main results in obtaining exact solutions for multidimensional models and their application to solving main problems of modern cosmology and black hole physics are described. Some new results on composite fluxbrane and S-brane solutions for a wide class of intersection rules are presented. These solutions are defined on a product manifold R* x M1 x ... x Mn which contains n Ricci-flat spaces M1,...,Mn with 1-dimensional R* and M1. They are defined up to a set of functions obeying non-linear differential equations equivalent to Toda-type equations with certain boundary conditions imposed. Exact solutions corresponding to configurations with two branes and intersections related to simple Lie algebras C2 and G2 are obtained. In these cases the functions Hs(z), s = 1, 2, are polynomials of degrees (3, 4) and (6, 10), respectively, in agreement with a conjecture suggested earlier. Examples of simple S-brane solutions describing an accelerated expansion of a certain factor-space are given explicitly.

  14. Zeros and uniqueness of Q-difference polynomials of meromorphic ...

    Indian Academy of Sciences (India)

    Meromorphic functions; Nevanlinna theory; logarithmic order; uniqueness problem; difference-differential polynomial. Abstract. In this paper, we investigate the value distribution of -difference polynomials of meromorphic function of finite logarithmic order, and study the zero distribution of difference-differential polynomials ...

  15. AN APPLICATION OF FUNCTIONAL MULTIVARIATE REGRESSION MODEL TO MULTICLASS CLASSIFICATION

    OpenAIRE

    Krzyśko, Mirosław; Smaga, Łukasz

    2017-01-01

    In this paper, the scalar-response functional multivariate regression model is considered. By using the basis functions representation of functional predictors and regression coefficients, this model is rewritten as a multivariate regression model. This representation of the functional multivariate regression model is used for multiclass classification for multivariate functional data. Computational experiments performed on real labelled data sets demonstrate the effectiveness of the proposed ...

  16. Laguerre polynomials by a harmonic oscillator

    Science.gov (United States)

    Baykal, Melek; Baykal, Ahmet

    2014-09-01

    The study of an isotropic harmonic oscillator, using the factorization method given in Ohanian's textbook on quantum mechanics, is refined and some collateral extensions of the method related to the ladder operators and the associated Laguerre polynomials are presented. In particular, some analytical properties of the associated Laguerre polynomials are derived using the ladder operators.

  17. Julia Sets of Orthogonal Polynomials

    DEFF Research Database (Denmark)

    Christiansen, Jacob Stordal; Henriksen, Christian; Petersen, Henrik Laurberg

    2018-01-01

    For a probability measure with compact and non-polar support in the complex plane we relate dynamical properties of the associated sequence of orthogonal polynomials {Pn} to properties of the support. More precisely, we relate the Julia set of Pn to the outer boundary of the support, the filled Julia set to the polynomial convex hull K of the support, and the Green's function associated with Pn to the Green's function for the complement of K.

  18. An introduction to orthogonal polynomials

    CERN Document Server

    Chihara, Theodore S

    1978-01-01

    Assuming no further prerequisites than a first undergraduate course in real analysis, this concise introduction covers general elementary theory related to orthogonal polynomials. It includes necessary background material of the type not usually found in the standard mathematics curriculum. Suitable for advanced undergraduate and graduate courses, it is also appropriate for independent study. Topics include the representation theorem and distribution functions, continued fractions and chain sequences, the recurrence formula and properties of orthogonal polynomials, special functions, and some

  19. Imaging characteristics of Zernike and annular polynomial aberrations.

    Science.gov (United States)

    Mahajan, Virendra N; Díaz, José Antonio

    2013-04-01

    The general equations for the point-spread function (PSF) and optical transfer function (OTF) are given for any pupil shape, and they are applied to optical imaging systems with circular and annular pupils. The symmetry properties of the PSF, the real and imaginary parts of the OTF, and the modulation transfer function (MTF) of a system with a circular pupil aberrated by a Zernike circle polynomial aberration are derived. The interferograms and PSFs are illustrated for some typical polynomial aberrations with a sigma value of one wave, and 3D PSFs and MTFs are shown for 0.1 wave. The Strehl ratio is also calculated for polynomial aberrations with a sigma value of 0.1 wave, and shown to be well estimated from the sigma value. The numerical results are compared with the corresponding results in the literature. Because of the same angular dependence of the corresponding annular and circle polynomial aberrations, the symmetry properties of systems with annular pupils aberrated by an annular polynomial aberration are the same as those for a circular pupil aberrated by a corresponding circle polynomial aberration. They are also illustrated with numerical examples.
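
    The remark that the Strehl ratio is well estimated from the sigma value alone can be illustrated with the common exponential approximation S ≈ exp(-(2πσ)²), with σ in waves; the snippet below evaluates it for a few sigma values, including the 0.1-wave case mentioned in the abstract. The formula is the standard Mahajan/Maréchal-type estimate, quoted here as an assumption rather than taken verbatim from the paper.

```python
import numpy as np

for sigma_waves in (0.05, 0.1, 0.2):
    strehl = np.exp(-(2 * np.pi * sigma_waves) ** 2)   # S ~ exp(-(2*pi*sigma)^2), sigma in waves
    print(f"sigma = {sigma_waves:.2f} wave  ->  estimated Strehl ratio = {strehl:.3f}")
```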

  20. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model is a representation of the relationship between independent variables and a dependent variable. The dependent variable has categories that are used in the logistic regression model to calculate the odds. The logistic regression model for a dependent variable whose categories are ordered levels is the ordinal logistic regression model. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine population values based on a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. The parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The results of the research give a local GWOLR model for each village and the probabilities of the dengue fever patient count categories.
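
    A simplified sketch of the "geographically weighted" idea behind the GWOLR model: a local model is refitted at every site with observations down-weighted by a Gaussian kernel of distance. For brevity it uses a binary logistic model from scikit-learn instead of the ordinal model of the paper (and Python instead of the R software used there); the coordinates, covariates and bandwidth are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # third-party package, assumed available

rng = np.random.default_rng(2)
n = 144                                          # e.g. one record per village
coords = rng.uniform(0, 10, size=(n, 2))         # hypothetical site coordinates
X = rng.normal(size=(n, 3))                      # hypothetical covariates
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)

def local_fit(site_idx, bandwidth=2.0):
    """Fit a logistic model at one site, weighting observations by spatial proximity."""
    d = np.linalg.norm(coords - coords[site_idx], axis=1)
    w = np.exp(-(d / bandwidth) ** 2)            # Gaussian kernel weights
    model = LogisticRegression().fit(X, y, sample_weight=w)
    return model.coef_.ravel()

print(local_fit(0))                              # locally weighted coefficients at site 0
```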

  1. Orthogonal polynomials and random matrices

    CERN Document Server

    Deift, Percy

    2000-01-01

    This volume expands on a set of lectures held at the Courant Institute on Riemann-Hilbert problems, orthogonal polynomials, and random matrix theory. The goal of the course was to prove universality for a variety of statistical quantities arising in the theory of random matrix models. The central question was the following: Why do very general ensembles of random n × n matrices exhibit universal behavior as n → ∞? The main ingredient in the proof is the steepest descent method for oscillatory Riemann-Hilbert problems.

  2. Polynomial solutions of nonlinear integral equations

    International Nuclear Information System (INIS)

    Dominici, Diego

    2009-01-01

    We analyze the polynomial solutions of a nonlinear integral equation, generalizing the work of Bender and Ben-Naim (2007 J. Phys. A: Math. Theor. 40 F9, 2008 J. Nonlinear Math. Phys. 15 (Suppl. 3) 73). We show that, in some cases, an orthogonal solution exists and we give its general form in terms of kernel polynomials

  3. Polynomial solutions of nonlinear integral equations

    Energy Technology Data Exchange (ETDEWEB)

    Dominici, Diego [Department of Mathematics, State University of New York at New Paltz, 1 Hawk Dr. Suite 9, New Paltz, NY 12561-2443 (United States)], E-mail: dominicd@newpaltz.edu

    2009-05-22

    We analyze the polynomial solutions of a nonlinear integral equation, generalizing the work of Bender and Ben-Naim (2007 J. Phys. A: Math. Theor. 40 F9, 2008 J. Nonlinear Math. Phys. 15 (Suppl. 3) 73). We show that, in some cases, an orthogonal solution exists and we give its general form in terms of kernel polynomials.

  4. Laguerre polynomials by a harmonic oscillator

    International Nuclear Information System (INIS)

    Baykal, Melek; Baykal, Ahmet

    2014-01-01

    The study of an isotropic harmonic oscillator, using the factorization method given in Ohanian's textbook on quantum mechanics, is refined and some collateral extensions of the method related to the ladder operators and the associated Laguerre polynomials are presented. In particular, some analytical properties of the associated Laguerre polynomials are derived using the ladder operators. (paper)

  5. Remarks on determinants and the classical polynomials

    International Nuclear Information System (INIS)

    Henning, J.J.; Kranold, H.U.; Louw, D.F.B.

    1986-01-01

    As motivation for this formal analysis the problem of Landau damping of Bernstein modes is discussed. It is shown that in the case of a weak but finite constant external magnetic field, the analytical structure of the dispersion relations is of such a nature that longitudinal waves propagating orthogonal to the external magnetic field are also damped, contrary to normal belief. In the treatment of the linearized Vlasov equation it is found convenient to generate certain polynomials by the problem at hand and to explicitly write down expressions for these polynomials. In the course of this study methods are used that relate to elementary but fairly unknown functional relationships between power sums and coefficients of polynomials. These relationships, also called Waring functions, are derived. They are then used in other applications to give explicit expressions for the generalized Laguerre polynomials in terms of determinant functions. The properties of polynomials generated by a wide class of generating functions are investigated. These relationships are also used to obtain explicit forms for the cumulants of a distribution in terms of its moments. It is pointed out that cumulants (or moments, for that matter) do not determine a distribution function

  6. Comparison of surrogate models with different methods in ...

    Indian Academy of Sciences (India)

    In this article, polynomial regression (PR), radial basis function artificial neural network (RBFANN), and kriging surrogate models are compared ... 10 kriging models with different parameters were also obtained ...

  7. General quantum polynomials: irreducible modules and Morita equivalence

    International Nuclear Information System (INIS)

    Artamonov, V A

    1999-01-01

    In this paper we continue the investigation of the structure of finitely generated modules over rings of general quantum (Laurent) polynomials. We obtain a description of the lattice of submodules of periodic finitely generated modules and describe the irreducible modules. We investigate the problem of Morita equivalence of rings of general quantum polynomials, consider properties of division rings of fractions, and solve Zariski's problem for quantum polynomials

  8. Multiple Meixner polynomials and non-Hermitian oscillator Hamiltonians

    OpenAIRE

    Ndayiragije, François; Van Assche, Walter

    2013-01-01

    Multiple Meixner polynomials are polynomials in one variable which satisfy orthogonality relations with respect to $r>1$ different negative binomial distributions (Pascal distributions). There are two kinds of multiple Meixner polynomials, depending on the selection of the parameters in the negative binomial distribution. We recall their definition and some formulas and give generating functions and explicit expressions for the coefficients in the nearest neighbor recurrence relation. Followi...

  9. Multivariable biorthogonal continuous--discrete Wilson and Racah polynomials

    International Nuclear Information System (INIS)

    Tratnik, M.V.

    1990-01-01

    Several families of multivariable, biorthogonal, partly continuous and partly discrete, Wilson polynomials are presented. These yield limit cases that are purely continuous in some of the variables and purely discrete in the others, or purely discrete in all the variables. The latter are referred to as the multivariable biorthogonal Racah polynomials. Interesting further limit cases include the multivariable biorthogonal Hahn and dual Hahn polynomials

  10. Primitive polynomials selection method for pseudo-random number generator

    Science.gov (United States)

    Anikin, I. V.; Alnajjar, Kh

    2018-01-01

    In this paper we suggest a method for selecting primitive polynomials of a special type. Such polynomials can be used efficiently as characteristic polynomials for linear feedback shift registers in pseudo-random number generators. The proposed method consists of two basic steps: finding minimum-cost irreducible polynomials of the desired degree and applying primitivity tests to get the primitive ones. Finally, two primitive polynomials found by the proposed method were used in the pseudo-random number generator based on fuzzy logic (FRNG) which had been suggested earlier by the authors. The sequences generated by the new version of the FRNG have low correlation magnitude, high linear complexity and lower power consumption, are more balanced, and have better statistical properties.
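
    To illustrate why primitivity of the characteristic polynomial matters for LFSR-based generators, the toy sketch below steps a 4-bit Fibonacci-style LFSR whose feedback taps correspond to a primitive degree-4 polynomial (x^4 + x + 1, up to reciprocal convention) and verifies that the state sequence has the maximal period 2^4 - 1 = 15. The bit-level conventions are illustrative and unrelated to the authors' FRNG implementation.

```python
def lfsr_period(taps, n_bits, seed=1):
    """Step a simple shift register until the state returns to the seed; return the period."""
    state = seed
    steps = 0
    while True:
        fb = 0
        for t in taps:                       # feedback bit = XOR of the tapped state bits
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << n_bits) - 1)
        steps += 1
        if state == seed:
            return steps

# taps (3, 0) realise a primitive degree-4 feedback polynomial
print(lfsr_period(taps=(3, 0), n_bits=4))    # 15 = 2^4 - 1, i.e. maximal length
```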

  11. The finite Fourier transform of classical polynomials

    OpenAIRE

    Dixit, Atul; Jiu, Lin; Moll, Victor H.; Vignat, Christophe

    2014-01-01

    The finite Fourier transform of a family of orthogonal polynomials $A_{n}(x)$ is the usual transform of the polynomials extended by $0$ outside their natural domain. Explicit expressions are given for the Legendre, Jacobi, Gegenbauer and Chebyshev families.

  12. Application of grafted polynomial function in forecasting cotton ...

    African Journals Online (AJOL)

    A study was conducted to forecast cotton production trend with the application of a grafted polynomial function in Nigeria from 1985 through 2013. Grafted models are used in econometrics to embark on economic analysis involving time series. In economic time series, the paucity of data and their availability has always ...

  13. Model-based multi-fringe interferometry using Zernike polynomials

    Science.gov (United States)

    Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan

    2018-06-01

    In this paper, a general phase retrieval method is proposed, which is based on a single interferogram with a small number of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured; the phase distribution is reconstructed by a non-linear least squares method. Experiments show that the proposed method can obtain satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of the proposed method can be neglected because of the few fringes; it does not need any auxiliary phase-shifting facilities (low cost) and it is easy to implement without the process of phase unwrapping.

  14. Discrete series representations for sl(2|1), Meixner polynomials and oscillator models

    International Nuclear Information System (INIS)

    Jafarov, E I; Van der Jeugt, J

    2012-01-01

    We explore a model for a one-dimensional quantum oscillator based on the Lie superalgebra sl(2|1). For this purpose, a class of discrete series representations of sl(2|1) is constructed, each representation characterized by a real number β > 0. In this model, the position and momentum operators of the oscillator are odd elements of sl(2|1) and their expressions involve an arbitrary parameter γ. In each representation, the spectrum of the Hamiltonian is the same as that of a canonical oscillator. The spectrum of a position operator can be continuous or infinite discrete, depending on the value of γ. We determine the position wavefunctions both in the continuous and the discrete case and discuss their properties. In the discrete case, these wavefunctions are given in terms of Meixner polynomials. From the embedding osp(1|2) subset of sl(2|1), it can be seen why the case γ = 1 corresponds to a paraboson oscillator. Consequently, taking the values (β, γ) = (1/2, 1) in the sl(2|1) model yields a canonical oscillator. (paper)

  15. Algebraic limit cycles in polynomial systems of differential equations

    International Nuclear Information System (INIS)

    Llibre, Jaume; Zhao Yulin

    2007-01-01

    Using elementary tools we construct cubic polynomial systems of differential equations with algebraic limit cycles of degrees 4, 5 and 6. We also construct a cubic polynomial system of differential equations having an algebraic homoclinic loop of degree 3. Moreover, we show that there are polynomial systems of differential equations of arbitrary degree that have algebraic limit cycles of degree 3, as well as give an example of a cubic polynomial system of differential equations with two algebraic limit cycles of degree 4

  16. From sequences to polynomials and back, via operator orderings

    Energy Technology Data Exchange (ETDEWEB)

    Amdeberhan, Tewodros, E-mail: tamdeber@tulane.edu; Dixit, Atul, E-mail: adixit@tulane.edu; Moll, Victor H., E-mail: vhm@tulane.edu [Department of Mathematics, Tulane University, New Orleans, Louisiana 70118 (United States); De Angelis, Valerio, E-mail: vdeangel@xula.edu [Department of Mathematics, Xavier University of Louisiana, New Orleans, Louisiana 70125 (United States); Vignat, Christophe, E-mail: vignat@tulane.edu [Department of Mathematics, Tulane University, New Orleans, Louisiana 70118, USA and L.S.S. Supelec, Universite d' Orsay (France)

    2013-12-15

    Bender and Dunne [“Polynomials and operator orderings,” J. Math. Phys. 29, 1727–1731 (1988)] showed that linear combinations of words q^k p^n q^(n−k), where p and q are subject to the relation qp − pq = i, may be expressed as a polynomial in the symbol z = (qp + pq)/2. Relations between such polynomials and linear combinations of the transformed coefficients are explored. In particular, examples yielding orthogonal polynomials are provided.

  17. Connection coefficients between Boas-Buck polynomial sets

    Science.gov (United States)

    Cheikh, Y. Ben; Chaggara, H.

    2006-07-01

    In this paper, a general method to express explicitly connection coefficients between two Boas-Buck polynomial sets is presented. As application, we consider some generalized hypergeometric polynomials, from which we derive some well-known results including duplication and inversion formulas.

  18. Alternative regression models to assess increase in childhood BMI.

    Science.gov (United States)

    Beyerlein, Andreas; Fahrmeir, Ludwig; Mansmann, Ulrich; Toschke, André M

    2008-09-08

    Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Different regression approaches to predict childhood BMI by goodness-of-fit measures and means of interpretation were compared including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 children participating in the school entry health examination in Bavaria, Germany, from 2001 to 2002. TV watching, meal frequency, breastfeeding, smoking in pregnancy, maternal obesity, parental social class and weight gain in the first 2 years of life were considered as risk factors for obesity. GAMLSS showed a much better fit regarding the estimation of risk factors effects on transformed and untransformed BMI data than common GLMs with respect to the generalized Akaike information criterion. In comparison with GAMLSS, quantile regression allowed for additional interpretation of prespecified distribution quantiles, such as quantiles referring to overweight or obesity. The variables TV watching, maternal BMI and weight gain in the first 2 years were directly, and meal frequency was inversely significantly associated with body composition in any model type examined. In contrast, smoking in pregnancy was not directly, and breastfeeding and parental social class were not inversely significantly associated with body composition in GLM models, but in GAMLSS and partly in quantile regression models. Risk factor specific BMI percentile curves could be estimated from GAMLSS and quantile regression models. GAMLSS and quantile regression seem to be more appropriate than common GLMs for risk factor modeling of BMI data.

  19. Least squares orthogonal polynomial approximation in several independent variables

    International Nuclear Information System (INIS)

    Caprari, R.S.

    1992-06-01

    This paper begins with an exposition of a systematic technique for generating orthonormal polynomials in two independent variables by application of the Gram-Schmidt orthogonalization procedure of linear algebra. It is then demonstrated how a linear least squares approximation for experimental data or an arbitrary function can be generated from these polynomials. The least squares coefficients are computed without recourse to matrix arithmetic, which ensures both numerical stability and simplicity of implementation as a self contained numerical algorithm. The Gram-Schmidt procedure is then utilised to generate a complete set of orthogonal polynomials of fourth degree. A theory for the transformation of the polynomial representation from an arbitrary basis into the familiar sum of products form is presented, together with a specific implementation for fourth degree polynomials. Finally, the computational integrity of this algorithm is verified by reconstructing arbitrary fourth degree polynomials from their values at randomly chosen points in their domain. 13 refs., 1 tab
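
    A rough numerical sketch of the procedure described above, under the assumption that the discrete inner product is a plain sum over sample points: bivariate monomials are orthonormalised by Gram-Schmidt on the sample set, after which the least-squares coefficients are simple inner products with the data and no normal-equation matrix has to be inverted. Degree, sample points and the test function are arbitrary choices.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, size=(400, 2))                   # sample points (x, y)
f = np.cos(pts[:, 0]) * pts[:, 1] ** 2                    # data to be approximated

# monomial basis x^i * y^j up to total degree 4, evaluated at the sample points
basis = [pts[:, 0] ** i * pts[:, 1] ** j
         for i, j in product(range(5), repeat=2) if i + j <= 4]

# Gram-Schmidt orthonormalisation w.r.t. the discrete inner product <u, v> = sum(u * v)
ortho = []
for b in basis:
    v = b - sum(np.dot(b, q) * q for q in ortho)
    ortho.append(v / np.linalg.norm(v))

coeffs = [np.dot(f, q) for q in ortho]                    # least-squares coefficients as plain inner products
approx = sum(c * q for c, q in zip(coeffs, ortho))
print("max abs error at the sample points:", np.max(np.abs(f - approx)))
```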

  20. Diffusion Coefficient Calculations With Low Order Legendre Polynomial and Chebyshev Polynomial Approximation for the Transport Equation in Spherical Geometry

    International Nuclear Information System (INIS)

    Yasa, F.; Anli, F.; Guengoer, S.

    2007-01-01

    We present analytical calculations of spherically symmetric radiative transfer and neutron transport using a hypothesis of P1 and T1 low-order polynomial approximations for the diffusion coefficient D. The transport equation in spherical geometry is considered as the pseudo-slab equation. The validity of the polynomial expansion in transport theory is investigated through a comparison with classic diffusion theory. It is found that for cases where the fluctuation of the scattering cross section dominates, the quantitative difference between the polynomial approximation and the diffusion results is physically acceptable in general.

  1. Alternative regression models to assess increase in childhood BMI

    OpenAIRE

    Beyerlein, Andreas; Fahrmeir, Ludwig; Mansmann, Ulrich; Toschke, André M

    2008-01-01

    Abstract Background Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Methods Different regression approaches to predict childhood BMI by goodness-of-fit measures and means of interpretation were compared including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 childre...

  2. On Roots of Polynomials and Algebraically Closed Fields

    Directory of Open Access Journals (Sweden)

    Schwarzweller Christoph

    2017-10-01

    Full Text Available In this article we further extend the algebraic theory of polynomial rings in Mizar [1, 2, 3]. We deal with roots and multiple roots of polynomials and show that both the real numbers and finite domains are not algebraically closed [5, 7]. We also prove the identity theorem for polynomials and that the number of multiple roots is bounded by the polynomial’s degree [4, 6].

  3. Topological string partition functions as polynomials

    International Nuclear Information System (INIS)

    Yamaguchi, Satoshi; Yau Shingtung

    2004-01-01

    We investigate the structure of the higher genus topological string amplitudes on the quintic hypersurface. It is shown that the partition functions of the higher genus than one can be expressed as polynomials of five generators. We also compute the explicit polynomial forms of the partition functions for genus 2, 3, and 4. Moreover, some coefficients are written down for all genus. (author)

  4. Rotation of 2D orthogonal polynomials

    Czech Academy of Sciences Publication Activity Database

    Yang, B.; Flusser, Jan; Kautský, J.

    2018-01-01

    Vol. 102, No. 1 (2018), pp. 44-49 ISSN 0167-8655 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords: Rotation invariants * Orthogonal polynomials * Recurrent relation * Hermite-like polynomials * Hermite moments Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.995, year: 2016 http://library.utia.cas.cz/separaty/2017/ZOI/flusser-0483250.pdf

  5. q-analogue of the Krawtchouk and Meixner orthogonal polynomials

    International Nuclear Information System (INIS)

    Campigotto, C.; Smirnov, Yu.F.; Enikeev, S.G.

    1993-06-01

    The comparative analysis of Krawtchouk polynomials on a uniform grid with Wigner D-functions for the SU(2) group is presented. As a result the partnership between corresponding properties of the polynomials and D-functions is established, giving the group-theoretical interpretation of the Krawtchouk polynomial properties. In order to extend such an analysis to the quantum groups SU_q(2) and SU_q(1,1), q-analogues of Krawtchouk and Meixner polynomials of a discrete variable are studied. The total set of characteristics of these polynomials is calculated, including the orthogonality condition, normalization factor, recurrence relation, the explicit analytic expression, the Rodrigues formula, the difference derivative formula and various particular cases and values. (R.P.) 22 refs.; 2 tabs

  6. Skew-orthogonal polynomials and random matrix theory

    CERN Document Server

    Ghosh, Saugata

    2009-01-01

    Orthogonal polynomials satisfy a three-term recursion relation irrespective of the weight function with respect to which they are defined. This gives a simple formula for the kernel function, known in the literature as the Christoffel-Darboux sum. The availability of asymptotic results of orthogonal polynomials and the simple structure of the Christoffel-Darboux sum make the study of unitary ensembles of random matrices relatively straightforward. In this book, the author develops the theory of skew-orthogonal polynomials and obtains recursion relations which, unlike orthogonal polynomials, depend on weight functions. After deriving reduced expressions, called the generalized Christoffel-Darboux formulas (GCD), he obtains universal correlation functions and non-universal level densities for a wide class of random matrix ensembles using the GCD. The author also shows that once questions about higher order effects are considered (questions that are relevant in different branches of physics and mathematics) the ...

  7. Some properties of generalized self-reciprocal polynomials over finite fields

    Directory of Open Access Journals (Sweden)

    Ryul Kim

    2014-07-01

    Full Text Available Numerous results on self-reciprocal polynomials over finite fields have been studied. In this paper we generalize some of these to a-self reciprocal polynomials defined in [4]. We consider some properties of the divisibility of a-reciprocal polynomials and characterize the parity of the number of irreducible factors for a-self reciprocal polynomials over finite fields of odd characteristic.

  8. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, which is based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can be inverted to give the intrinsic state associated with a superficial state observed from a power plant. The diagnosis method proposed herein is classified into three processes: 1) simulations for degradation conditions to get measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs including various root causes and/or boundary conditions, whereas the inverse what-if method is the process of calculating the inverse matrix with the given superficial states, that is, component degradation modes. The method suggested in this paper was validated using the turbine cycle model for an operating power plant
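
    A schematic numpy version of the what-if / inverse what-if idea, assuming a purely linear relation between intrinsic (degradation) states and superficial (measured) states; the dimensions, the hidden response matrix and the noise level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
A_true = rng.normal(size=(5, 3))                      # hidden plant response (unknown in practice)

# 1) what-if simulations: intrinsic degradation states -> measured (superficial) states
intrinsic = rng.uniform(0, 1, size=(50, 3))
measured = intrinsic @ A_true.T + rng.normal(0, 0.01, size=(50, 5))

# 2) fit the linear regression model: measured ~ intrinsic
A_hat = np.linalg.lstsq(intrinsic, measured, rcond=None)[0].T

# 3) inverse what-if: recover the intrinsic state behind a new set of measurements
y_new = A_true @ np.array([0.2, 0.7, 0.1])            # observations from the "current plant"
x_est = np.linalg.lstsq(A_hat, y_new, rcond=None)[0]
print(x_est)                                          # should be close to [0.2, 0.7, 0.1]
```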

  9. Multi-model polynomial chaos surrogate dictionary for Bayesian inference in elasticity problems

    KAUST Repository

    Contreras, Andres A.; Le Maî tre, Olivier P.; Aquino, Wilkins; Knio, Omar

    2016-01-01

    of stiff inclusions embedded in a soft matrix, mimicking tumors in soft tissues. We rely on a polynomial chaos (PC) surrogate to accelerate the inference process. The PC surrogate predicts the dependence of the displacements field with the random elastic

  10. Rodrigues formulas for the non-symmetric multivariable polynomials associated with the BCN-type root system

    International Nuclear Information System (INIS)

    Nishino, Akinori; Ujino, Hideaki; Komori, Yasushi; Wadati, Miki

    2000-01-01

    The non-symmetric Macdonald-Koornwinder polynomials are joint eigenfunctions of the commuting Cherednik operators which are constructed from the representation theory for the affine Hecke algebra corresponding to the BC_N-type root system. We present the Rodrigues formula for the non-symmetric Macdonald-Koornwinder polynomials. The raising operators are derived from the realizations of the corresponding double affine Hecke algebra. In the quasi-classical limit, the above theory reduces to that of the BC_N-type Sutherland model which describes many particles with inverse-square long-range interactions on a circle with one impurity. We also present the Rodrigues formula for the non-symmetric Jacobi polynomials of type BC_N which are eigenstates of the BC_N-type Sutherland model

  11. Analysis of Discrete L2 Projection on Polynomial Spaces with Random Evaluations

    KAUST Repository

    Migliorati, Giovanni; Nobile, Fabio; von Schwerin, Erik; Tempone, Raul

    2014-01-01

    We analyze the problem of approximating a multivariate function by discrete least-squares projection on a polynomial space starting from random, noise-free observations. An area of possible application of such technique is uncertainty quantification for computational models. We prove an optimal convergence estimate, up to a logarithmic factor, in the univariate case, when the observation points are sampled in a bounded domain from a probability density function bounded away from zero and bounded from above, provided the number of samples scales quadratically with the dimension of the polynomial space. Optimality is meant in the sense that the weighted L2 norm of the error committed by the random discrete projection is bounded with high probability from above by the best L∞ error achievable in the given polynomial space, up to logarithmic factors. Several numerical tests are presented in both the univariate and multivariate cases, confirming our theoretical estimates. The numerical tests also clarify how the convergence rate depends on the number of sampling points, on the polynomial degree, and on the smoothness of the target function. © 2014 SFoCM.
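
    A small numerical sketch of the setting analysed in this record: a smooth univariate function is approximated by discrete least-squares projection onto a Legendre polynomial space using random, noise-free observations, and the sup-norm error is reported as the number of samples grows. The target function, degree and sample sizes are illustrative choices, not those of the paper's experiments.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(5)
f = lambda x: np.exp(x) * np.sin(3 * x)               # smooth target function
degree = 10

for m in (30, 120, 480):                              # number of random, noise-free observations
    x = rng.uniform(-1, 1, size=m)
    coef = legendre.legfit(x, f(x), degree)           # discrete least-squares projection
    xt = np.linspace(-1, 1, 2000)
    err = np.max(np.abs(f(xt) - legendre.legval(xt, coef)))
    print(f"m = {m:4d}   sup-norm error = {err:.2e}")
```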

  12. Analysis of Discrete L2 Projection on Polynomial Spaces with Random Evaluations

    KAUST Repository

    Migliorati, Giovanni

    2014-03-05

    We analyze the problem of approximating a multivariate function by discrete least-squares projection on a polynomial space starting from random, noise-free observations. An area of possible application of such technique is uncertainty quantification for computational models. We prove an optimal convergence estimate, up to a logarithmic factor, in the univariate case, when the observation points are sampled in a bounded domain from a probability density function bounded away from zero and bounded from above, provided the number of samples scales quadratically with the dimension of the polynomial space. Optimality is meant in the sense that the weighted L2 norm of the error committed by the random discrete projection is bounded with high probability from above by the best L∞ error achievable in the given polynomial space, up to logarithmic factors. Several numerical tests are presented in both the univariate and multivariate cases, confirming our theoretical estimates. The numerical tests also clarify how the convergence rate depends on the number of sampling points, on the polynomial degree, and on the smoothness of the target function. © 2014 SFoCM.

  13. Random regression models for detection of gene by environment interaction

    Directory of Open Access Journals (Sweden)

    Meuwissen Theo HE

    2007-02-01

    Full Text Available Abstract Two random regression models, where the effect of a putative QTL was regressed on an environmental gradient, are described. The first model estimates the correlation between intercept and slope of the random regression, while the other model restricts this correlation to 1 or -1, which is expected under a bi-allelic QTL model. The random regression models were compared to a model assuming no gene by environment interactions. The comparison was done with regard to the models' ability to detect QTL, to position them accurately and to detect possible QTL by environment interactions. A simulation study based on a granddaughter design was conducted, and QTL effects were assigned either independently of the environment or as a linear function of a simulated environmental gradient. It was concluded that the random regression models were suitable for detection of QTL effects, in the presence and absence of interactions with environmental gradients. Fixing the correlation between intercept and slope of the random regression had a positive effect on power when the QTL effects re-ranked between environments.

  14. Application of polynomial preconditioners to conservation laws

    NARCIS (Netherlands)

    Geurts, Bernardus J.; van Buuren, R.; Lu, H.

    2000-01-01

    Polynomial preconditioners which are suitable in implicit time-stepping methods for conservation laws are reviewed and analyzed. The preconditioners considered are either based on a truncation of a Neumann series or on Chebyshev polynomials for the inverse of the system-matrix. The latter class of

  15. Symmetric functions and orthogonal polynomials

    CERN Document Server

    Macdonald, I G

    1997-01-01

    One of the most classical areas of algebra, the theory of symmetric functions and orthogonal polynomials has long been known to be connected to combinatorics, representation theory, and other branches of mathematics. Written by perhaps the most famous author on the topic, this volume explains some of the current developments regarding these connections. It is based on lectures presented by the author at Rutgers University. Specifically, he gives recent results on orthogonal polynomials associated with affine Hecke algebras, surveying the proofs of certain famous combinatorial conjectures.

  16. Alternative regression models to assess increase in childhood BMI

    Directory of Open Access Journals (Sweden)

    Mansmann Ulrich

    2008-09-01

    Full Text Available Abstract Background Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Methods Different regression approaches to predict childhood BMI by goodness-of-fit measures and means of interpretation were compared including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 children participating in the school entry health examination in Bavaria, Germany, from 2001 to 2002. TV watching, meal frequency, breastfeeding, smoking in pregnancy, maternal obesity, parental social class and weight gain in the first 2 years of life were considered as risk factors for obesity. Results GAMLSS showed a much better fit regarding the estimation of risk factors effects on transformed and untransformed BMI data than common GLMs with respect to the generalized Akaike information criterion. In comparison with GAMLSS, quantile regression allowed for additional interpretation of prespecified distribution quantiles, such as quantiles referring to overweight or obesity. The variables TV watching, maternal BMI and weight gain in the first 2 years were directly, and meal frequency was inversely significantly associated with body composition in any model type examined. In contrast, smoking in pregnancy was not directly, and breastfeeding and parental social class were not inversely significantly associated with body composition in GLM models, but in GAMLSS and partly in quantile regression models. Risk factor specific BMI percentile curves could be estimated from GAMLSS and quantile regression models. Conclusion GAMLSS and quantile regression seem to be more appropriate than common GLMs for risk factor modeling of BMI data.

  17. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  18. Polynomially Riesz elements | Živković-Zlatanović | Quaestiones ...

    African Journals Online (AJOL)

    A Banach algebra element ɑ ∈ A is said to be "polynomially Riesz", relative to the homomorphism T : A → B, if there exists a nonzero complex polynomial p(z) such that the image Tp ∈ B is quasinilpotent. Keywords: Homomorphism of Banach algebras, polynomially Riesz element, Fredholm spectrum, Browder element, ...

  19. Wavelet regression model in forecasting crude oil price

    Science.gov (United States)

    Hamid, Mohd Helmie; Shabri, Ani

    2017-05-01

    This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series has been used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedasticity (GARCH) models using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting techniques tested in this study.
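
    A hedged sketch of the WMLR construction: the series is decomposed with a discrete wavelet transform (via the third-party PyWavelets package), the reconstructed components at time t-1 serve as regressors for the price at time t, and an ordinary multiple linear regression is fitted. The synthetic price series, the "db4" wavelet, the decomposition level and the single-lag design are placeholders rather than the paper's exact configuration.

```python
import numpy as np
import pywt                                            # third-party PyWavelets package, assumed available
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 512
price = 60 + np.cumsum(rng.normal(0, 0.8, n))          # stand-in for a daily WTI price series

# multilevel DWT, then reconstruct one component of the series per coefficient level
coeffs = pywt.wavedec(price, "db4", level=3)
components = []
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(keep, "db4")[:n])

X = np.column_stack([comp[:-1] for comp in components])   # components at time t-1 as regressors
y = price[1:]                                              # price at time t
model = LinearRegression().fit(X, y)
rmse = np.sqrt(np.mean((y - model.predict(X)) ** 2))
print("in-sample RMSE of the wavelet-MLR sketch:", round(rmse, 3))
```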

  20. Symmetric integrable-polynomial factorization for symplectic one-turn-map tracking

    International Nuclear Information System (INIS)

    Shi, Jicong

    1993-01-01

    It was found that any homogeneous polynomial can be written as a sum of integrable polynomials of the same degree, whose Lie transformations can be evaluated exactly. By utilizing symplectic integrators, an integrable-polynomial factorization is developed to convert a symplectic map in the form of a Dragt-Finn factorization into a product of Lie transformations associated with integrable polynomials. A small number of factorization bases of integrable polynomials enables one to use high-order symplectic integrators so that the high-order spurious terms can be greatly suppressed. A symplectic map can thus be evaluated with the desired accuracy

  1. Connections between the matching and chromatic polynomials

    Directory of Open Access Journals (Sweden)

    E. J. Farrell

    1992-01-01

    Full Text Available The main results established are (i) a connection between the matching and chromatic polynomials and (ii) a formula for the matching polynomial of a general complement of a subgraph of a graph. Some deductions on matching and chromatic equivalence and uniqueness are made.

  2. On Generalisation of Polynomials in Complex Plane

    Directory of Open Access Journals (Sweden)

    Maslina Darus

    2010-01-01

    Full Text Available The generalised Bell and Laguerre polynomials of fractional order in the complex z-plane are defined. Some properties are studied. Moreover, we prove that these polynomials are univalent solutions of second-order differential equations. Also, the Laguerre type of some special functions is introduced.

  3. Okounkov's BC-Type Interpolation Macdonald Polynomials and Their q=1 Limit

    NARCIS (Netherlands)

    Koornwinder, T.H.

    2015-01-01

    This paper surveys eight classes of polynomials associated with A-type and BC-type root systems: Jack, Jacobi, Macdonald and Koornwinder polynomials and interpolation (or shifted) Jack and Macdonald polynomials and their BC-type extensions. Among these the BC-type interpolation Jack polynomials were

  4. Complex Polynomial Vector Fields

    DEFF Research Database (Denmark)

    Dias, Kealey

    The two branches of dynamical systems, continuous and discrete, correspond to the study of differential equations (vector fields) and iteration of mappings respectively. In holomorphic dynamics, the systems studied are restricted to those described by holomorphic (complex analytic) functions ... vector fields. Since the class of complex polynomial vector fields in the plane is natural to consider, it is remarkable that its study has only begun very recently. There are numerous fundamental questions that are still open, both in the general classification of these vector fields, the decomposition of parameter spaces into structurally stable domains, and a description of the bifurcations. For this reason, the talk will focus on these questions for complex polynomial vector fields.

  5. Stabilization of nonlinear systems using sampled-data output-feedback fuzzy controller based on polynomial-fuzzy-model-based control approach.

    Science.gov (United States)

    Lam, H K

    2012-02-01

    This paper investigates the stability of sampled-data output-feedback (SDOF) polynomial-fuzzy-model-based control systems. Representing the nonlinear plant using a polynomial fuzzy model, an SDOF fuzzy controller is proposed to perform the control process using the system output information. As only the system output is available for feedback compensation, it is more challenging for the controller design and system analysis compared to the full-state-feedback case. Furthermore, because of the sampling activity, the control signal is kept constant by the zero-order hold during the sampling period, which complicates the system dynamics and makes the stability analysis more difficult. In this paper, two cases of SDOF fuzzy controllers, which either share the same number of fuzzy rules or not, are considered. The system stability is investigated based on the Lyapunov stability theory using the sum-of-squares (SOS) approach. SOS-based stability conditions are obtained to guarantee the system stability and synthesize the SDOF fuzzy controller. Simulation examples are given to demonstrate the merits of the proposed SDOF fuzzy control approach.

  6. Interlacing of zeros of quasi-orthogonal meixner polynomials | Driver ...

    African Journals Online (AJOL)

    ... interlacing of zeros of quasi-orthogonal Meixner polynomials Mn(x; β; c) with the zeros of their nearest orthogonal counterparts Ml(x; β + k; c), l, n ∈ ℕ, k ∈ {1, 2}, is also discussed. Mathematics Subject Classification (2010): 33C45, 42C05. Key words: Discrete orthogonal polynomials, quasi-orthogonal polynomials, Meixner

  7. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  8. Discriminants and functional equations for polynomials orthogonal on the unit circle

    International Nuclear Information System (INIS)

    Ismail, M.E.H.; Witte, N.S.

    2000-01-01

    We derive raising and lowering operators for orthogonal polynomials on the unit circle and find second order differential and q-difference equations for these polynomials. A general functional equation is found which allows one to relate the zeros of the orthogonal polynomials to the stationary values of an explicit quasi-energy and implies recurrences on the orthogonal polynomial coefficients. We also evaluate the discriminants and quantized discriminants of polynomials orthogonal on the unit circle

  9. Spatial stochastic regression modelling of urban land use

    International Nuclear Information System (INIS)

    Arshad, S H M; Jaafar, J; Abiden, M Z Z; Latif, Z A; Rasam, A R A

    2014-01-01

    Urbanization is very closely linked to industrialization, commercialization or overall economic growth and development. This results in innumerable benefits to the quantity and quality of the urban environment and lifestyle but, on the other hand, contributes to unbounded development, urban sprawl, overcrowding and a decreasing standard of living. Regulation and observation of urban development activities are crucial. An understanding of the urban systems that promote urban growth is also essential for the purpose of policy making, formulating development strategies and preparing development plans. This study aims to compare two different stochastic regression modeling techniques for spatial structure models of urban growth in the same specific study area. Both techniques utilize the same datasets and their results are analyzed. The work starts by producing an urban growth model using stochastic regression modeling techniques, namely Ordinary Least Squares (OLS) and Geographically Weighted Regression (GWR). The two techniques are compared, and it is found that GWR seems to be a more significant stochastic regression model than OLS: it gives a smaller AICc (corrected Akaike Information Criterion) value and its output is more spatially explainable

  10. Multi-model polynomial chaos surrogate dictionary for Bayesian inference in elasticity problems

    KAUST Repository

    Contreras, Andres A.

    2016-09-19

    A method is presented for inferring the presence of an inclusion inside a domain; the proposed approach is suitable to be used in a diagnostic device with low computational power. Specifically, we use the Bayesian framework for the inference of stiff inclusions embedded in a soft matrix, mimicking tumors in soft tissues. We rely on a polynomial chaos (PC) surrogate to accelerate the inference process. The PC surrogate predicts the dependence of the displacements field on the random elastic moduli of the materials, and is computed by means of the stochastic Galerkin (SG) projection method. Moreover, the inclusion's geometry is assumed to be unknown, and this is addressed by using a dictionary consisting of several geometrical models with different configurations. A model selection approach based on the evidence provided by the data (Bayes factors) is used to discriminate among the different geometrical models and select the most suitable one. The idea of using a dictionary of pre-computed geometrical models helps to maintain the computational cost of the inference process very low, as most of the computational burden is carried out off-line for the resolution of the SG problems. Numerical tests are used to validate the methodology, assess its performance, and analyze the robustness to model errors. © 2016 Elsevier Ltd

  11. On the Lorentz degree of a product of polynomials

    KAUST Repository

    Ait-Haddou, Rachid

    2015-01-01

    In this note, we negatively answer two questions of T. Erdélyi (1991, 2010) on possible lower bounds on the Lorentz degree of product of two polynomials. We show that the correctness of one question for degree two polynomials is a direct consequence of a result of Barnard et al. (1991) on polynomials with nonnegative coefficients.

  12. Strong result for real zeros of random algebraic polynomials

    Directory of Open Access Journals (Sweden)

    T. Uno

    2001-01-01

    Full Text Available An estimate is given for the lower bound of real zeros of random algebraic polynomials whose coefficients are non-identically distributed dependent Gaussian random variables. Moreover, our estimated measure of the exceptional set, which is independent of the degree of the polynomials, tends to zero as the degree of the polynomial tends to infinity.

  13. Large degree asymptotics of generalized Bessel polynomials

    NARCIS (Netherlands)

    J.L. López; N.M. Temme (Nico)

    2011-01-01

    Asymptotic expansions are given for large values of $n$ of the generalized Bessel polynomials $Y_n^\mu(z)$. The analysis is based on integrals that follow from the generating functions of the polynomials. A new simple expansion is given that is valid outside a compact neighborhood of the

  14. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)

  15. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is 1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, 2) to discuss common pitfalls and methodological errors in developing a model, and 3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the question of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures and evaluating performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold. First, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and secondly, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
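
    A minimal end-to-end sketch of the workflow the paper walks through: fit a logistic regression of a binary high-growth indicator on financial-ratio predictors, then report odds ratios, a classification table and the ROC AUC as measures of model strength. The data below are simulated stand-ins (scikit-learn is used here), not the Croatian SME sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1471
X = rng.normal(size=(n, 9))                      # stand-ins for nine groups of financial ratios
logit = 0.8 * X[:, 0] - 0.6 * X[:, 3] + 0.4 * X[:, 5]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # simulated high-growth indicator

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("odds ratios:", np.round(np.exp(model.coef_.ravel()), 2))
print("classification table:\n", confusion_matrix(y_te, model.predict(X_te)))
print("ROC AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```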

  16. Linear operator pencils on Lie algebras and Laurent biorthogonal polynomials

    International Nuclear Information System (INIS)

    Gruenbaum, F A; Vinet, Luc; Zhedanov, Alexei

    2004-01-01

    We study operator pencils on generators of the Lie algebra $sl_2$ and of the oscillator algebra. These pencils are linear in a spectral parameter λ. The corresponding generalized eigenvalue problem gives rise to some sets of orthogonal polynomials and Laurent biorthogonal polynomials (LBP) expressed in terms of the Gauss ${}_2F_1$ and the degenerate ${}_1F_1$ hypergeometric functions. For special choices of the parameters of the pencils, we identify the resulting polynomials with the Hendriksen-van Rossum LBP, which are widely believed to be the biorthogonal analogues of the classical orthogonal polynomials. This places these examples under the umbrella of the generalized bispectral problem which is considered here. Other (non-bispectral) cases give rise to some 'nonclassical' orthogonal polynomials, including Tricomi-Carlitz and random-walk polynomials. An application to solutions of the relativistic Toda chain is considered.

  17. Higher order branching of periodic orbits from polynomial isochrones

    Directory of Open Access Journals (Sweden)

    B. Toni

    1999-09-01

    We discuss the higher order local bifurcations of limit cycles from polynomial isochrones (linearizable centers) when the linearizing transformation is explicitly known and yields a polynomial perturbation one-form. Using a method based on the relative cohomology decomposition of polynomial one-forms, complemented with a step reduction process, we give an explicit formula for the overall upper bound of branch points of limit cycles in an arbitrary $n$ degree polynomial perturbation of the linear isochrone, and provide an algorithmic procedure to compute the upper bound at successive orders. We derive a complete analysis of the nonlinear cubic Hamiltonian isochrone and show that at most nine branch points of limit cycles can bifurcate in a cubic polynomial perturbation. Moreover, perturbations with exactly two, three, four, six, and nine local families of limit cycles may be constructed.

  18. Multiple regression technique for Pth degree polynomials with and without linear cross products

    Science.gov (United States)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.
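
    The original programs are not reproduced in the record; the following sketch, under assumed data and library choices, shows the same idea with modern tools: fitting a degree-P polynomial surface once with and once without the linear cross-product term.

```python
# Polynomial regression of degree P in N variables, with and without linear cross products.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1]

# Case 1: full degree-2 basis, including the x1*x2 cross product.
Phi_full = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
# Case 2: per-variable powers only (x1, x2, x1^2, x2^2), no cross products.
Phi_powers = np.column_stack([X, X ** 2])

for label, Phi in [("with cross products", Phi_full), ("without cross products", Phi_powers)]:
    fit = LinearRegression().fit(Phi, y)
    max_err = np.abs(y - fit.predict(Phi)).max()
    print(f"{label}: max |residual| = {max_err:.3f}")
```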

  19. Estimation of Genetic Parameters for First Lactation Monthly Test-day Milk Yields using Random Regression Test Day Model in Karan Fries Cattle

    Directory of Open Access Journals (Sweden)

    Ajay Singh

    2016-06-01

    A single-trait linear mixed random regression test-day model was applied for the first time to analyze the first lactation monthly test-day milk yield records in Karan Fries cattle. The test-day milk yield data were modeled using a random regression model (RRM) considering different orders of Legendre polynomial for the additive genetic effect (4th order) and the permanent environmental effect (5th order). Data pertaining to 1,583 lactation records spread over a period of 30 years were recorded and analyzed in the study. The variance components, heritability and genetic correlations among test-day milk yields were estimated using the RRM. RRM heritability estimates of test-day milk yield varied from 0.11 to 0.22 for different test-day records. The estimates of genetic correlations between different test-day milk yields ranged from 0.01 (between test-day 1 [TD-1] and TD-11) to 0.99 (between TD-4 and TD-5). The magnitude of the genetic correlations between test-day milk yields decreased as the interval between test-days increased, and adjacent test-days had higher correlations. Additive genetic and permanent environment variances were higher for test-day milk yields at both ends of lactation. The residual variance was observed to be lower than the permanent environment variance for all the test-day milk yields.
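
    Full variance-component estimation requires specialized mixed-model software, but the core ingredient of a random regression test-day model, Legendre polynomial covariates evaluated at standardized test days, can be sketched directly. The polynomial orders (4 and 5) follow the abstract; the test days and standardization below are assumptions.

```python
# Legendre covariate matrices for a random regression test-day model (sketch only).
import numpy as np
from numpy.polynomial import legendre

dim = np.array([5, 35, 65, 95, 125, 155, 185, 215, 245, 275])  # assumed test days in milk
t = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0     # standardize to [-1, 1]

def legendre_design(t_std, order):
    """Columns P_0(t)..P_order(t), the covariates attached to random regression coefficients."""
    return np.column_stack([legendre.legval(t_std, np.eye(order + 1)[k]) for k in range(order + 1)])

Z_additive = legendre_design(t, 4)   # 4th-order polynomial for the additive genetic effect
Z_pe = legendre_design(t, 5)         # 5th-order polynomial for the permanent environmental effect
print(Z_additive.shape, Z_pe.shape)  # (10, 5) and (10, 6)
```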

  20. Efficient computation of global sensitivity indices using sparse polynomial chaos expansions

    International Nuclear Information System (INIS)

    Blatman, Geraud; Sudret, Bruno

    2010-01-01

    Global sensitivity analysis aims at quantifying the relative importance of uncertain input variables onto the response of a mathematical model of a physical system. ANOVA-based indices such as the Sobol' indices are well-known in this context. These indices are usually computed by direct Monte Carlo or quasi-Monte Carlo simulation, which may prove hardly applicable for computationally demanding industrial models. In the present paper, sparse polynomial chaos (PC) expansions are introduced in order to compute sensitivity indices. An adaptive algorithm allows the analyst to build up a PC-based metamodel that only contains the significant terms, whereas the PC coefficients are computed by least-squares regression using a computer experimental design. The accuracy of the metamodel is assessed by leave-one-out cross validation. Due to the genuine orthogonality properties of the PC basis, ANOVA-based sensitivity indices are post-processed analytically. This paper also develops a bootstrap technique which eventually yields confidence intervals on the results. The approach is illustrated on various application examples up to 21 stochastic dimensions. Accurate results are obtained at a computational cost 2-3 orders of magnitude smaller than that associated with Monte Carlo simulation.
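
    As a rough, non-sparse sketch of the PCE-to-Sobol post-processing described above (the paper's adaptive sparse algorithm and bootstrap are not reproduced), the code below builds a full total-degree Legendre chaos for uniform inputs, fits it by least squares, and reads first-order Sobol' indices off the squared coefficients; the toy model and truncation settings are assumptions.

```python
# First-order Sobol' indices from a (full, non-sparse) Legendre polynomial chaos expansion.
import itertools
import numpy as np
from numpy.polynomial import legendre

def model(x):                          # toy model on [-1, 1]^3 (an assumption, not from the paper)
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 2]

dim, degree, n = 3, 3, 500
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(n, dim))
y = model(X)

# Orthonormal Legendre basis for U(-1, 1): psi_k(x) = sqrt(2k+1) * P_k(x).
def psi(k, x):
    return np.sqrt(2 * k + 1) * legendre.legval(x, np.eye(k + 1)[k])

multi_indices = [a for a in itertools.product(range(degree + 1), repeat=dim) if sum(a) <= degree]
Phi = np.column_stack([np.prod([psi(a[j], X[:, j]) for j in range(dim)], axis=0)
                       for a in multi_indices])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

total_var = sum(c ** 2 for a, c in zip(multi_indices, coef) if any(a))
for i in range(dim):
    Si = sum(c ** 2 for a, c in zip(multi_indices, coef)
             if a[i] > 0 and all(a[j] == 0 for j in range(dim) if j != i)) / total_var
    print(f"S_{i + 1} = {Si:.3f}")
```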

  1. Efficient computation of global sensitivity indices using sparse polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Blatman, Geraud, E-mail: geraud.blatman@edf.f [Clermont Universite, IFMA, EA 3867, Laboratoire de Mecanique et Ingenieries, BP 10448, F-63000 Clermont-Ferrand (France); EDF, R and D Division - Site des Renardieres, F-77818 Moret-sur-Loing (France); Sudret, Bruno, E-mail: sudret@phimeca.co [Clermont Universite, IFMA, EA 3867, Laboratoire de Mecanique et Ingenieries, BP 10448, F-63000 Clermont-Ferrand (France); Phimeca Engineering, Centre d' Affaires du Zenith, 34 rue de Sarlieve, F-63800 Cournon d' Auvergne (France)

    2010-11-15

    Global sensitivity analysis aims at quantifying the relative importance of uncertain input variables onto the response of a mathematical model of a physical system. ANOVA-based indices such as the Sobol' indices are well-known in this context. These indices are usually computed by direct Monte Carlo or quasi-Monte Carlo simulation, which may prove hardly applicable for computationally demanding industrial models. In the present paper, sparse polynomial chaos (PC) expansions are introduced in order to compute sensitivity indices. An adaptive algorithm allows the analyst to build up a PC-based metamodel that only contains the significant terms, whereas the PC coefficients are computed by least-squares regression using a computer experimental design. The accuracy of the metamodel is assessed by leave-one-out cross validation. Due to the genuine orthogonality properties of the PC basis, ANOVA-based sensitivity indices are post-processed analytically. This paper also develops a bootstrap technique which eventually yields confidence intervals on the results. The approach is illustrated on various application examples up to 21 stochastic dimensions. Accurate results are obtained at a computational cost 2-3 orders of magnitude smaller than that associated with Monte Carlo simulation.

  2. Regression Models for Repairable Systems

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2015-01-01

    Vol. 17, No. 4 (2015), p. 963-972. ISSN 1387-5841. Institutional support: RVO:67985556. Keywords: reliability analysis; repair models; regression. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.782, year: 2015. http://library.utia.cas.cz/separaty/2015/SI/novak-0450902.pdf

  3. Solution of linear transport equation using Chebyshev polynomials and Laplace transform

    International Nuclear Information System (INIS)

    Cardona, A.V.; Vilhena, M.T.M.B. de

    1994-01-01

    The Chebyshev polynomials and the Laplace transform are combined to solve, analytically, the linear transport equation in planar geometry, considering isotropic scattering and the one-group model. Numerical simulation is presented. (author)

  4. On Some Extensions of Szasz Operators Including Boas-Buck-Type Polynomials

    Directory of Open Access Journals (Sweden)

    Sezgin Sucu

    2012-01-01

    Full Text Available This paper is concerned with a new sequence of linear positive operators which generalize Szasz operators including Boas-Buck-type polynomials. We establish a convergence theorem for these operators and give the quantitative estimation of the approximation process by using a classical approach and the second modulus of continuity. Some explicit examples of our operators involving Laguerre polynomials, Charlier polynomials, and Gould-Hopper polynomials are given. Moreover, a Voronovskaya-type result is obtained for the operators containing Gould-Hopper polynomials.

  5. Superiority of Bessel function over Zernicke polynomial as base ...

    Indian Academy of Sciences (India)

    Abstract. Here we describe the superiority of the Bessel function as a base function for radial expansion over the Zernicke polynomial in the tomographic reconstruction technique. The causes for the superiority have been described in detail. The superiority has been shown both with simulated data for Kadomtsev's model for ...

  6. Moments, positive polynomials and their applications

    CERN Document Server

    Lasserre, Jean Bernard

    2009-01-01

    Many important applications in global optimization, algebra, probability and statistics, applied mathematics, control theory, financial mathematics, inverse problems, etc. can be modeled as a particular instance of the Generalized Moment Problem (GMP) . This book introduces a new general methodology to solve the GMP when its data are polynomials and basic semi-algebraic sets. This methodology combines semidefinite programming with recent results from real algebraic geometry to provide a hierarchy of semidefinite relaxations converging to the desired optimal value. Applied on appropriate cones,

  7. On associated polynomials and decay rates for birth-death processes

    NARCIS (Netherlands)

    van Doorn, Erik A.

    2001-01-01

    We consider sequences of orthogonal polynomials and pursue the question of how (partial) knowledge of the orthogonalizing measure for the {\\it associated polynomials} can lead to information about the orthogonalizing measure for the original polynomials. In particular, we relate the supports of the

  8. On associated polynomials and decay rates for birth-death processes

    NARCIS (Netherlands)

    van Doorn, Erik A.

    2003-01-01

    We consider sequences of orthogonal polynomials and pursue the question of how (partial) knowledge of the orthogonalizing measure for the associated polynomials can lead to information about the orthogonalizing measure for the original polynomials. In particular, we relate the supports of the two

  9. SIMPLIFIED PREDICTIVE MODELS FOR CO₂ SEQUESTRATION PERFORMANCE ASSESSMENT RESEARCH TOPICAL REPORT ON TASK #3 STATISTICAL LEARNING BASED MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta; Schuetter, Jared

    2014-11-01

    We compare two approaches for building a statistical proxy model (metamodel) for CO₂ geologic sequestration from the results of full-physics compositional simulations. The first approach involves a classical Box-Behnken or Augmented Pairs experimental design with a quadratic polynomial response surface. The second approach uses a space-filling maximin Latin Hypercube sampling or maximum entropy design with the choice of five different meta-modeling techniques: quadratic polynomial, kriging with constant and quadratic trend terms, multivariate adaptive regression spline (MARS) and additivity and variance stabilization (AVAS). Simulation results for CO₂ injection into a reservoir-caprock system with 9 design variables (and 97 samples) were used to generate the data for developing the proxy models. The fitted models were validated using an independent data set and a cross-validation approach for three different performance metrics: total storage efficiency, CO₂ plume radius and average reservoir pressure. The Box-Behnken–quadratic polynomial metamodel performed the best, followed closely by the maximin LHS–kriging metamodel.
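
    The study itself relies on full-physics compositional simulations and formal experimental designs; as a loose stand-in, the sketch below compares a quadratic polynomial response surface against a kriging (Gaussian process) metamodel on a synthetic 9-variable function using cross-validated R². Everything about the data is assumed.

```python
# Toy comparison of a quadratic-polynomial response surface vs. a kriging (GP) metamodel.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(97, 9))                 # 97 samples, 9 design variables (as in the abstract)
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=97)  # assumed toy response

quadratic = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=np.ones(9)),
                                   normalize_y=True)

for name, m in [("quadratic polynomial", quadratic), ("kriging (GP)", kriging)]:
    r2 = cross_val_score(m, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {r2.mean():.3f}")
```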

  10. Modelos de regressão aleatória para avaliação da curva de crescimento em matrizes de codorna de corte Random regression models for growth evaluation of meat-type quail hens

    Directory of Open Access Journals (Sweden)

    Bruno Bastos Teixeira

    2012-09-01

    The objective was to compare different random regression models based on Legendre polynomial functions of different orders, to evaluate which one best fits the genetic study of the growth curve of meat-type quails. Data from 2,136 meat-type quail hens were evaluated, of which 1,026 belonged to the genetic group UFV1 and 1,110 to the group UFV2. The quails were weighed at 1, 7, 14, 21, 28, 35, 42, 77, 112 and 147 days of age and their weights were used in the analysis. Two possible models of heterogeneous residual variance were tested, grouped into 3 and 5 age classes. Afterwards, the random regression model that best fits the growth curve of the quails was studied. The models were compared using the Akaike Information Criterion (AIC), Schwarz's Bayesian Information Criterion (BIC), the logarithm of the likelihood function (Log e L) and the likelihood ratio test (LRT) at the 1% level. The model considering heterogeneous residual variance CL3 proved adequate for the UFV1 line, and the model CL5 for the UFV2 line. A Legendre polynomial function of order 5 for the direct additive genetic effect and order 5 for the permanent animal effect should be used in the genetic evaluation of the growth curve for the UFV1 line, and of order 3 for the direct additive genetic effect and order 5 for the permanent animal effect for the UFV2 line.

  11. Spin-singlet quantum Hall states and Jack polynomials with a prescribed symmetry

    International Nuclear Information System (INIS)

    Estienne, Benoit; Bernevig, B. Andrei

    2012-01-01

    We show that a large class of bosonic spin-singlet Fractional Quantum Hall model wavefunctions and their quasihole excitations can be written in terms of Jack polynomials with a prescribed symmetry. Our approach describes new spin-singlet quantum Hall states at filling fraction ν=(2k)/(2r-1) and generalizes the (k,r) spin-polarized Jack polynomial states. The NASS and Halperin spin-singlet states emerge as specific cases of our construction. The polynomials express many-body states which contain configurations obtained from a root partition through a generalized squeezing procedure involving spin and orbital degrees of freedom. The corresponding generalized Pauli principle for root partitions is obtained, allowing for counting of the quasihole states. We also extract the central charge and quasihole scaling dimension, and propose a conjecture for the underlying CFT of the (k,r) spin-singlet Jack states.

  12. Time-Dependent Global Sensitivity Analysis for Long-Term Degeneracy Model Using Polynomial Chaos

    Directory of Open Access Journals (Sweden)

    Jianbin Guo

    2014-07-01

    Global sensitivity is used to quantify the influence of uncertain model inputs on the output variability of static models in general. However, very few approaches can be applied for the sensitivity analysis of long-term degeneracy models, as far as time-dependent reliability is concerned. The reason is that the static sensitivity may not reflect the complete sensitivity during the entire life cycle. This paper presents time-dependent global sensitivity analysis for long-term degeneracy models based on polynomial chaos expansion (PCE). Sobol’ indices are employed as the time-dependent global sensitivity measure since they provide accurate information on the selected uncertain inputs. In order to compute Sobol’ indices more efficiently, this paper proposes a moving least squares (MLS) method to obtain the time-dependent PCE coefficients with acceptable simulation effort. Then Sobol’ indices can be calculated analytically as a postprocessing of the time-dependent PCE coefficients with almost no additional cost. A test case is used to show how to conduct the proposed method, then this approach is applied to an engineering case, and the time-dependent global sensitivity is obtained for the long-term degeneracy mechanism model.

  13. Hydrodynamics-based functional forms of activity metabolism: a case for the power-law polynomial function in animal swimming energetics.

    Science.gov (United States)

    Papadopoulos, Anthony

    2009-01-01

    The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
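
    The abstract's central point, that the degree of the power-law term should be estimated rather than fixed at one, can be illustrated with a simple curve fit in which the exponent is a free parameter. The data, units and starting values below are assumptions, not taken from the paper.

```python
# Fit activity metabolism M(U) = a + b * U**c with the power-law degree c as a free parameter.
import numpy as np
from scipy.optimize import curve_fit

def metabolic_rate(U, a, b, c):
    # a ~ standard metabolic rate, b ~ drag-power index, c ~ power-law degree (all illustrative)
    return a + b * U ** c

rng = np.random.default_rng(4)
U = np.linspace(0.2, 2.0, 30)                       # swimming speeds (arbitrary units)
M_true = 1.5 + 2.0 * U ** 2.4                       # synthetic "observed" metabolism
M_obs = M_true * (1 + 0.05 * rng.normal(size=U.size))

params, cov = curve_fit(metabolic_rate, U, M_obs, p0=[1.0, 1.0, 2.0])
a_hat, b_hat, c_hat = params
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, estimated degree c = {c_hat:.2f}")
```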

  14. Hydrodynamics-based functional forms of activity metabolism: a case for the power-law polynomial function in animal swimming energetics.

    Directory of Open Access Journals (Sweden)

    Anthony Papadopoulos

    The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.

  15. Some Polynomials Associated with the r-Whitney Numbers

    Indian Academy of Sciences (India)

    26

    Abstract. In the present article we study three families of polynomials associated with ... [29, 39] for their relations with the Bernoulli and generalized Bernoulli polynomials and ... generating functions in a similar way as in the classical cases.

  16. Killings, duality and characteristic polynomials

    Science.gov (United States)

    Álvarez, Enrique; Borlaf, Javier; León, José H.

    1998-03-01

    In this paper the complete geometrical setting of (lowest order) abelian T-duality is explored with the help of some new geometrical tools (the reduced formalism). In particular, all invariant polynomials (the integrands of the characteristic classes) can be explicitly computed for the dual model in terms of quantities pertaining to the original one, and with the help of the canonical connection, whose intrinsic characterization is given. Using our formalism, the physically relevant, and T-duality invariant, result that top forms are zero when there is an isometry without fixed points is easily proved.

  17. The Bessel polynomials and their differential operators

    International Nuclear Information System (INIS)

    Onyango Otieno, V.P.

    1987-10-01

    Differential operators associated with the ordinary and the generalized Bessel polynomials are defined. In each case the commutator bracket is constructed and shows that the differential operators associated with the Bessel polynomials and their generalized form are not commutative. Some applications of these operators to linear differential equations are also discussed. (author). 4 refs

  18. Geographically weighted regression model on poverty indicator

    Science.gov (United States)

    Slamet, I.; Nugroho, N. F. T. A.; Muslich

    2017-12-01

    In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java, using a Gaussian kernel as the weighting function. The GWR uses the diagonal matrix obtained by evaluating the Gaussian kernel function as the weights in the regression model. The kernel weights handle spatial effects in the data so that a separate model can be obtained for each location. The purpose of this paper is to model the poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city of the province. Based on the research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for the poverty percentage data in Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and the number of BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. The coefficient of determination R2 is 68.64%. There are two categories of regency/city, each influenced by a different set of significant factors.
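
    Dedicated GWR packages exist, but the mechanics described here, a Gaussian-kernel diagonal weight matrix and a weighted least-squares fit at each location, can be sketched in a few lines; the coordinates, covariates and bandwidth below are assumptions.

```python
# Bare-bones geographically weighted regression with a Gaussian kernel (illustrative data).
import numpy as np

rng = np.random.default_rng(5)
n = 60
coords = rng.uniform(0, 100, size=(n, 2))                     # regency/city centroids (assumed)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])    # intercept + two covariates
beta_true = np.array([10.0, 2.0, -1.5])
y = X @ beta_true + 0.01 * coords[:, 0] + rng.normal(scale=0.5, size=n)

bandwidth = 30.0                                    # kernel bandwidth (would be chosen by CV/AIC)

def gwr_coefficients(i):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)         # Gaussian kernel weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # local weighted least squares

local_betas = np.array([gwr_coefficients(i) for i in range(n)])
print("range of local slope 1:", local_betas[:, 1].min().round(2),
      "to", local_betas[:, 1].max().round(2))
```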

  19. Conference on Commutative rings, integer-valued polynomials and polynomial functions

    CERN Document Server

    Frisch, Sophie; Glaz, Sarah; Commutative Algebra : Recent Advances in Commutative Rings, Integer-Valued Polynomials, and Polynomial Functions

    2014-01-01

    This volume presents a multi-dimensional collection of articles highlighting recent developments in commutative algebra. It also includes an extensive bibliography and lists a substantial number of open problems that point to future directions of research in the represented subfields. The contributions cover areas in commutative algebra that have flourished in the last few decades and are not yet well represented in book form. Highlighted topics and research methods include Noetherian and non-Noetherian ring theory as well as integer-valued polynomials and functions. Specific topics include: homological dimensions of Prüfer-like rings; quasi complete rings; total graphs of rings; properties of prime ideals over various rings; bases for integer-valued polynomials; Boolean subrings; the portable property of domains; probabilistic topics in Intn(D); closure operations in Zariski-Riemann spaces of valuation domains; stability of do...

  20. Non-hermitian symmetric N = 2 coset models, Poincare polynomials, and string compactification

    International Nuclear Information System (INIS)

    Fuchs, J.; Schweigert, C.

    1994-01-01

    The field identification problem, including fixed point resolution, is solved for the non-hermitian symmetric N = 2 superconformal coset theories. Thereby these models are finally identified as well-defined modular invariant conformal field theories. As an application, the theories are used as subtheories in N = 2 tensor products with c = 9, which in turn are taken as the inner sector of heterotic superstring compactifications. All string theories of this type are classified, and the chiral ring as well as the number of massless generations and anti-generations are computed with the help of the extended Poincare polynomial. Several equivalences between a priori different non-hermitian coset theories show up; in particular there is a level-rank duality for an infinite series of coset theories based on C-type Lie algebras. Further, some general results for generic N = 2 coset theories are proven: a simple formula for the number of identification currents is found, and it is shown that the set of Ramond ground states of any N = 2 coset model is invariant under charge conjugation. (orig.)

  1. Integer linear models with a polynomial number of variables and constraints for some classical combinatorial optimization problems

    Directory of Open Access Journals (Sweden)

    Nelson Maculan

    2003-01-01

    We present integer linear models with a polynomial number of variables and constraints for combinatorial optimization problems in graphs: optimum elementary cycles, optimum elementary paths and optimum tree problems.

  2. Sibling curves of quadratic polynomials | Wiggins | Quaestiones ...

    African Journals Online (AJOL)

    Sibling curves were demonstrated in [1, 2] as a novel way to visualize the zeroes of real-valued functions. In [3] it was shown that a polynomial of degree n has n sibling curves. This paper focuses on the algebraic and geometric properties of the sibling curves of real and complex quadratic polynomials. Key words: Quadratic ...

  3. Dual exponential polynomials and linear differential equations

    Science.gov (United States)

    Wen, Zhi-Tao; Gundersen, Gary G.; Heittokangas, Janne

    2018-01-01

    We study linear differential equations with exponential polynomial coefficients, where exactly one coefficient is of order greater than all the others. The main result shows that a nontrivial exponential polynomial solution of such an equation has a certain dual relationship with the maximum order coefficient. Several examples illustrate our results and exhibit possibilities that can occur.

  4. Generalized Freud's equation and level densities with polynomial potential

    Indian Academy of Sciences (India)

    Authors: Akshat Boobna and Saugata Ghosh. Pramana – Journal of Physics, Volume 81, Issue 2, Research Articles. Keywords: orthogonal polynomial; Freud's equation; Dyson–Mehta method; methods of resolvents; level density.

  5. Polynomial fuzzy observer designs: a sum-of-squares approach.

    Science.gov (United States)

    Tanaka, Kazuo; Ohtake, Hiroshi; Seo, Toshiaki; Tanaka, Motoyasu; Wang, Hua O

    2012-10-01

    This paper presents a sum-of-squares (SOS) approach to polynomial fuzzy observer designs for three classes of polynomial fuzzy systems. The proposed SOS-based framework provides a number of innovations and improvements over the existing linear matrix inequality (LMI)-based approaches to Takagi-Sugeno (T-S) fuzzy controller and observer designs. First, we briefly summarize previous results with respect to a polynomial fuzzy system that is a more general representation of the well-known T-S fuzzy system. Next, we propose polynomial fuzzy observers to estimate states in three classes of polynomial fuzzy systems and derive SOS conditions to design polynomial fuzzy controllers and observers. A remarkable feature of the SOS design conditions for the first two classes (Classes I and II) is that they realize the so-called separation principle, i.e., the polynomial fuzzy controller and observer for each class can be designed separately without compromising the stability of the overall control system, while the state-estimation error (via the observer) converges to zero. Although the separation principle does not hold for the last class (Class III), we propose an algorithm to design a polynomial fuzzy controller and observer that guarantee the stability of the overall control system and the convergence of the state-estimation error (via the observer) to zero. All the design conditions in the proposed approach can be represented in terms of SOS and are symbolically and numerically solved via the recently developed SOSTOOLS and a semidefinite-program solver, respectively. To illustrate the validity and applicability of the proposed approach, three design examples are provided. The examples demonstrate the advantages of the SOS-based approach over the existing LMI approaches to T-S fuzzy observer designs.

  6. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. Then, this special MaxEnt process regression model, i.e., the GPR model, is generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model, in terms of sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated through simulation results that attest to its good finite-sample performance.

  7. On concurvity in nonlinear and nonparametric regression models

    Directory of Open Access Journals (Sweden)

    Sonia Amodio

    2014-12-01

    When data are affected by multicollinearity in the linear regression framework, concurvity will be present in fitting a generalized additive model (GAM). The term concurvity describes nonlinear dependencies among the predictor variables. Just as collinearity results in inflated variances of the estimated regression coefficients in the linear regression model, the presence of concurvity leads to instability of the estimated coefficients in GAMs. Even though the backfitting algorithm will always converge to a solution, in the case of concurvity the final solution of the backfitting procedure in fitting a GAM is influenced by the starting functions. While exact concurvity is highly unlikely, approximate concurvity, the analogue of multicollinearity, is of practical concern as it can lead to upwardly biased estimates of the parameters and to underestimation of their standard errors, increasing the risk of committing a type I error. We compare the existing approaches to detect concurvity, pointing out their advantages and drawbacks, using simulated and real data sets. As a result, this paper provides a general criterion to detect concurvity in nonlinear and nonparametric regression models.

  8. Solutions of interval type-2 fuzzy polynomials using a new ranking method

    Science.gov (United States)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani

    2015-10-01

    A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into a system of crisp polynomials by using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then numerically examined on triangular fuzzy numbers and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.

  9. Third-order polynomial model for analyzing stickup state laminated structure in flexible electronics

    Science.gov (United States)

    Meng, Xianhong; Wang, Zihao; Liu, Boya; Wang, Shuodao

    2018-02-01

    Laminated hard-soft integrated structures play a significant role in the fabrication and development of flexible electronic devices. Flexible electronics are soft and light-weight, and can be folded, twisted, flipped inside-out, or pasted onto other surfaces of arbitrary shape. In this paper, an analytical model is presented to study the mechanics of laminated hard-soft structures in flexible electronics under a stickup state. Third-order polynomials are used to describe the displacement field, and the principle of virtual work is adopted to derive the governing equations and boundary conditions. The normal strain and the shear stress along the thickness direction in the bi-material region are obtained analytically and agree well with the results from finite element analysis. The analytical model can be used to analyze stickup state laminated structures, and can serve as a valuable reference for the failure prediction and optimal design of flexible electronics in the future.

  10. Semiparametric Mixtures of Regressions with Single-index for Model Based Clustering

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2017-01-01

    In this article, we propose two classes of semiparametric mixture regression models with single-index for model based clustering. Unlike many semiparametric/nonparametric mixture regression models that can only be applied to low dimensional predictors, the new semiparametric models can easily incorporate high dimensional predictors into the nonparametric components. The proposed models are very general, and many of the recently proposed semiparametric/nonparametric mixture regression models a...

  11. An overview on polynomial approximation of NP-hard problems

    Directory of Open Access Journals (Sweden)

    Paschos Vangelis Th.

    2009-01-01

    The fact that a polynomial-time algorithm is very unlikely to be devised for optimally solving NP-hard problems strongly motivates both researchers and practitioners to try to solve such problems heuristically, by making a trade-off between computational time and solution quality. In other words, heuristic computation consists of trying to find in reasonable time not the best solution but one that is 'close to' the optimal one. Among the classes of heuristic methods for NP-hard problems, polynomial approximation algorithms aim at solving a given NP-hard problem in polynomial time by computing feasible solutions that are, under some predefined criterion, as near to the optimal ones as possible. The polynomial approximation theory deals with the study of such algorithms. This survey first presents and analyzes polynomial-time approximation algorithms for some classical examples of NP-hard problems. Secondly, it shows how classical notions and tools of complexity theory, such as polynomial reductions, can be matched with polynomial approximation in order to devise structural results for NP-hard optimization problems. Finally, it presents a quick description of what is commonly called inapproximability results. Such results provide limits on the approximability of the problems tackled.
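
    A classic concrete instance of such a polynomial approximation algorithm (not taken from this survey) is the maximal-matching 2-approximation for minimum vertex cover, sketched below on a small assumed graph.

```python
# Polynomial-time 2-approximation for minimum vertex cover via a maximal matching.
def vertex_cover_2approx(edges):
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:   # greedily build a maximal matching
            matched.update((u, v))
            cover.update((u, v))                    # both endpoints go into the cover
    return cover

# Small example graph (an assumption for illustration); the optimum cover here has size 3.
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
cover = vertex_cover_2approx(edges)
print("cover:", sorted(cover), "size:", len(cover), "(guaranteed <= 2 x optimum)")
```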

  12. Efficient modeling of photonic crystals with local Hermite polynomials

    International Nuclear Information System (INIS)

    Boucher, C. R.; Li, Zehao; Albrecht, J. D.; Ram-Mohan, L. R.

    2014-01-01

    Developing compact algorithms for accurate electrodynamic calculations with minimal computational cost is an active area of research given the increasing complexity in the design of electromagnetic composite structures such as photonic crystals, metamaterials, optical interconnects, and on-chip routing. We show that electric and magnetic (EM) fields can be calculated using scalar Hermite interpolation polynomials as the numerical basis functions without having to invoke edge-based vector finite elements to suppress spurious solutions or to satisfy boundary conditions. This approach offers several fundamental advantages as evidenced through band structure solutions for periodic systems and through waveguide analysis. Compared with reciprocal space (plane wave expansion) methods for periodic systems, advantages are shown in computational costs, the ability to capture spatial complexity in the dielectric distributions, the demonstration of numerical convergence with scaling, and variational eigenfunctions free of numerical artifacts that arise from mixed-order real space basis sets or the inherent aberrations from transforming reciprocal space solutions of finite expansions. The photonic band structure of a simple crystal is used as a benchmark comparison and the ability to capture the effects of spatially complex dielectric distributions is treated using a complex pattern with highly irregular features that would stress spatial transform limits. This general method is applicable to a broad class of physical systems, e.g., to semiconducting lasers which require simultaneous modeling of transitions in quantum wells or dots together with EM cavity calculations, to modeling plasmonic structures in the presence of EM field emissions, and to on-chip propagation within monolithic integrated circuits

  13. Guaranteed cost control of polynomial fuzzy systems via a sum of squares approach.

    Science.gov (United States)

    Tanaka, Kazuo; Ohtake, Hiroshi; Wang, Hua O

    2009-04-01

    This paper presents the guaranteed cost control of polynomial fuzzy systems via a sum of squares (SOS) approach. First, we present a polynomial fuzzy model and controller that are more general representations of the well-known Takagi-Sugeno (T-S) fuzzy model and controller, respectively. Second, we derive a guaranteed cost control design condition based on polynomial Lyapunov functions. Hence, the design approach discussed in this paper is more general than the existing LMI approaches (to T-S fuzzy control system designs) based on quadratic Lyapunov functions. The design condition realizes a guaranteed cost control by minimizing the upper bound of a given performance function. In addition, the design condition in the proposed approach can be represented in terms of SOS and is numerically (partially symbolically) solved via the recently developed SOSTOOLS. To illustrate the validity of the design approach, two design examples are provided. The first example deals with a complicated nonlinear system. The second example presents micro helicopter control. Both examples show that our approach provides more extensive design results than the existing LMI approach.

  14. A note on the zeros of Freud-Sobolev orthogonal polynomials

    Science.gov (United States)

    Moreno-Balcazar, Juan J.

    2007-10-01

    We prove that the zeros of a certain family of Sobolev orthogonal polynomials involving the Freud weight function $e^{-x^4}$ on $\mathbb{R}$ are real, simple, and interlace with the zeros of the Freud polynomials, i.e., those polynomials orthogonal with respect to the weight function $e^{-x^4}$. Some numerical examples are shown.

  15. About the solvability of matrix polynomial equations

    OpenAIRE

    Netzer, Tim; Thom, Andreas

    2016-01-01

    We study self-adjoint matrix polynomial equations in a single variable and prove existence of self-adjoint solutions under some assumptions on the leading form. Our main result is that any self-adjoint matrix polynomial equation of odd degree with non-degenerate leading form can be solved in self-adjoint matrices. We also study equations of even degree and equations in many variables.

  16. Short-term electricity prices forecasting based on support vector regression and Auto-regressive integrated moving average modeling

    International Nuclear Information System (INIS)

    Che Jinxing; Wang Jianzhou

    2010-01-01

    In this paper, we present the use of different mathematical models to forecast electricity prices in a deregulated power market. A successful prediction tool for electricity prices can help both power producers and consumers plan their bidding strategies. Inspired by the fact that the support vector regression (SVR) model, with its ε-insensitive loss function, tolerates residuals within the boundaries of the ε-tube, we propose a hybrid model, called SVRARIMA, that combines SVR and auto-regressive integrated moving average (ARIMA) models to take advantage of their respective strengths in nonlinear and linear modeling. A nonlinear analysis of the time series indicates the suitability of nonlinear modeling, and the SVR is applied to capture the nonlinear patterns. ARIMA models have been successfully applied to the regression estimation of the residuals. The experimental results demonstrate that the proposed model outperforms the existing neural-network approaches, the traditional ARIMA models and other hybrid models based on the root mean square error and mean absolute percentage error.
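
    The exact SVRARIMA specification is not reproduced here; the sketch below only illustrates the general hybrid idea on synthetic data, letting ARIMA capture the linear structure and an SVR model the remaining nonlinear residuals. The ARIMA order, lag count and kernel settings are assumptions.

```python
# Sketch of an ARIMA + SVR hybrid: ARIMA captures the linear part, SVR models its residuals.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR

rng = np.random.default_rng(6)
t = np.arange(300)
prices = 50 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(scale=1.0, size=t.size)

train, test = prices[:250], prices[250:]

arima = ARIMA(train, order=(2, 1, 1)).fit()          # order chosen arbitrarily for the sketch
linear_fit = arima.predict(start=1, end=len(train) - 1)
residuals = train[1:] - linear_fit

# SVR learns the nonlinear structure left in the residuals from a few lagged values.
lags = 3
Xr = np.column_stack([residuals[i:len(residuals) - lags + i] for i in range(lags)])
yr = residuals[lags:]
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(Xr, yr)

arima_forecast = arima.forecast(steps=len(test))
resid_correction = svr.predict(residuals[-lags:].reshape(1, -1))  # one-step residual correction
print("first test point:", test[0].round(2),
      "hybrid forecast:", (arima_forecast[0] + resid_correction[0]).round(2))
```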

  17. Two polynomial representations of experimental design

    OpenAIRE

    Notari, Roberto; Riccomagno, Eva; Rogantin, Maria-Piera

    2007-01-01

    In the context of algebraic statistics an experimental design is described by a set of polynomials called the design ideal. This, in turn, is generated by finite sets of polynomials. Two types of generating sets are mostly used in the literature: Groebner bases and indicator functions. We briefly describe them both, how they are used in the analysis and planning of a design and how to switch between them. Examples include fractions of full factorial designs and designs for mixture experiments.

  18. Stable piecewise polynomial vector fields

    Directory of Open Access Journals (Sweden)

    Claudio Pessoa

    2012-09-01

    Full Text Available Let $N={y>0}$ and $S={y<0}$ be the semi-planes of $mathbb{R}^2$ having as common boundary the line $D={y=0}$. Let $X$ and $Y$ be polynomial vector fields defined in $N$ and $S$, respectively, leading to a discontinuous piecewise polynomial vector field $Z=(X,Y$. This work pursues the stability and the transition analysis of solutions of $Z$ between $N$ and $S$, started by Filippov (1988 and Kozlova (1984 and reformulated by Sotomayor-Teixeira (1995 in terms of the regularization method. This method consists in analyzing a one parameter family of continuous vector fields $Z_{epsilon}$, defined by averaging $X$ and $Y$. This family approaches $Z$ when the parameter goes to zero. The results of Sotomayor-Teixeira and Sotomayor-Machado (2002 providing conditions on $(X,Y$ for the regularized vector fields to be structurally stable on planar compact connected regions are extended to discontinuous piecewise polynomial vector fields on $mathbb{R}^2$. Pertinent genericity results for vector fields satisfying the above stability conditions are also extended to the present case. A procedure for the study of discontinuous piecewise vector fields at infinity through a compactification is proposed here.

  19. vs. a polynomial chaos-based MCMC

    KAUST Repository

    Siripatana, Adil

    2014-08-01

    Bayesian Inference of Manning's n Coefficient in a Storm Surge Model Framework: comparison between Kalman filter and polynomial-based method. Conventional coastal ocean models solve the shallow water equations, which describe the conservation of mass and momentum when the horizontal length scale is much greater than the vertical length scale. In this case vertical pressure gradients in the momentum equations are nearly hydrostatic. The outputs of coastal ocean models are thus sensitive to the bottom stress terms defined through the formulation of Manning's n coefficients. This thesis considers the Bayesian inference problem of the Manning's n coefficient in the context of storm surge based on the coastal ocean ADCIRC model. In the first part of the thesis, we apply an ensemble-based Kalman filter, the singular evolutive interpolated Kalman (SEIK) filter, to estimate both a constant Manning's n coefficient and a 2-D parameterized Manning's coefficient on one idealized domain and one more realistic domain using observation system simulation experiments (OSSEs). We study the sensitivity of the system to the ensemble size. We also assess the benefits of using an inflation factor on the filter performance. To study the limitation of the Gaussian assumption restricting the SEIK filter, we also implemented in the second part of this thesis a Markov Chain Monte Carlo (MCMC) method based on a generalized polynomial chaos (gPC) approach for the estimation of the 1-D and 2-D Manning's n coefficient. The gPC is used to build a surrogate model that imitates the ADCIRC model in order to make the computational cost of implementing the MCMC with the ADCIRC model reasonable. We evaluate the performance of the MCMC-gPC approach and study its robustness to different OSSE scenarios. We also compare its estimates with those resulting from SEIK in terms of parameter estimates and full distributions. We present a full analysis of the solution of these two methods, of the
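
    The full ADCIRC-based setup is far beyond a snippet, but the key computational trick, running Metropolis-Hastings against a cheap polynomial surrogate instead of the expensive forward model, can be sketched as below. The quadratic surrogate, prior bounds, observation and noise level are all assumptions.

```python
# Metropolis-Hastings on a cheap polynomial surrogate of an expensive model (illustration only).
import numpy as np

rng = np.random.default_rng(7)

def expensive_model(n_manning):                     # stand-in for a full ADCIRC run
    return 3.0 * n_manning + 10.0 * n_manning ** 2

# "Train" a quadratic surrogate (a degree-2 polynomial chaos in one parameter) by least squares.
n_train = np.linspace(0.01, 0.09, 9)
coeffs = np.polyfit(n_train, expensive_model(n_train), deg=2)
surrogate = np.poly1d(coeffs)

obs = expensive_model(0.03) + 0.001                 # noisy observation of the true parameter 0.03
sigma = 0.01

def log_post(n):
    if not 0.005 < n < 0.1:                         # uniform prior bounds (assumed)
        return -np.inf
    return -0.5 * ((obs - surrogate(n)) / sigma) ** 2

chain, current = [], 0.05
lp = log_post(current)
for _ in range(5000):
    prop = current + 0.005 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        current, lp = prop, lp_prop
    chain.append(current)

samples = np.array(chain[1000:])                    # discard burn-in
print(f"posterior mean of Manning's n = {samples.mean():.4f}")
```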

  20. Modeling oil production based on symbolic regression

    International Nuclear Information System (INIS)

    Yang, Guangfei; Li, Xianneng; Wang, Jianliang; Lian, Lian; Ma, Tieju

    2015-01-01

    Numerous models have been proposed to forecast the future trends of oil production and almost all of them are based on some predefined assumptions with various uncertainties. In this study, we propose a novel data-driven approach that uses symbolic regression to model oil production. We validate our approach on both synthetic and real data, and the results prove that symbolic regression could effectively identify the true models beneath the oil production data and also make reliable predictions. Symbolic regression indicates that world oil production will peak in 2021, which broadly agrees with other techniques used by researchers. Our results also show that the rate of decline after the peak is almost half the rate of increase before the peak, and it takes nearly 12 years to drop 4% from the peak. These predictions are more optimistic than those in several other reports, and the smoother decline will provide the world, especially the developing countries, with more time to orchestrate mitigation plans. -- Highlights: •A data-driven approach has been shown to be effective at modeling the oil production. •The Hubbert model could be discovered automatically from data. •The peak of world oil production is predicted to appear in 2021. •The decline rate after peak is half of the increase rate before peak. •Oil production projected to decline 4% post-peak
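
    The paper discovers the model form by symbolic regression (genetic programming), which needs a dedicated library; as a hedged substitute, the sketch below simply fits a Hubbert-type production curve, the kind of model the authors report recovering, to synthetic data, so the functional form and numbers are assumptions.

```python
# Fitting a Hubbert curve (logistic-derivative production profile) to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, q_max, t_peak, width):
    """Annual production: peaks at t_peak with maximum rate q_max."""
    z = np.exp(-(t - t_peak) / width)
    return 4.0 * q_max * z / (1.0 + z) ** 2

rng = np.random.default_rng(8)
years = np.arange(1950, 2015)
true_curve = hubbert(years, q_max=30.0, t_peak=2021.0, width=25.0)
observed = true_curve * (1 + 0.03 * rng.normal(size=years.size))

params, _ = curve_fit(hubbert, years, observed, p0=[25.0, 2010.0, 20.0])
print(f"estimated peak year = {params[1]:.0f}, peak production = {params[0]:.1f} (arbitrary units)")
```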

  1. Longitudinal changes in telomere length and associated genetic parameters in dairy cattle analysed using random regression models.

    Directory of Open Access Journals (Sweden)

    Luise A Seeker

    Telomeres cap the ends of linear chromosomes and shorten with age in many organisms. In humans short telomeres have been linked to morbidity and mortality. With the accumulation of longitudinal datasets the focus shifts from investigating telomere length (TL) to exploring TL change within individuals over time. Some studies indicate that the speed of telomere attrition is predictive of future disease. The objectives of the present study were to (1) characterize the change in bovine relative leukocyte TL (RLTL) across the lifetime in Holstein Friesian dairy cattle, (2) estimate genetic parameters of RLTL over time and (3) investigate the association of differences in individual RLTL profiles with productive lifespan. RLTL measurements were analysed using Legendre polynomials in a random regression model to describe TL profiles and genetic variance over age. The analyses were based on 1,328 repeated RLTL measurements of 308 female Holstein Friesian dairy cattle. A quadratic Legendre polynomial was fitted to the fixed effect of age in months and to the random effect of the animal identity. Changes in RLTL, heritability and within-trait genetic correlation along the age trajectory were calculated and illustrated. At a population level, the relationship between RLTL and age was described by a positive quadratic function. Individuals varied significantly regarding the direction and amount of RLTL change over life. The heritability of RLTL ranged from 0.36 to 0.47 (SE = 0.05-0.08) and remained statistically unchanged over time. The genetic correlation of RLTL at birth with measurements later in life decreased with the time interval between samplings from near unity to 0.69, indicating that TL later in life might be regulated by different genes than TL early in life. Even though animals differed significantly in their RLTL profiles, those differences were not correlated with productive lifespan (p = 0.954).

  2. q-Bernoulli numbers and q-Bernoulli polynomials revisited

    Directory of Open Access Journals (Sweden)

    Kim Taekyun

    2011-01-01

    This paper performs a further investigation on the q-Bernoulli numbers and q-Bernoulli polynomials given by Acikgöz et al. (Adv Differ Equ, Article ID 951764, 9, 2010), and some incorrect properties are revised. It is pointed out that the generating function for the q-Bernoulli numbers and polynomials is unreasonable. By using the theorem of Kim (Kyushu J Math 48, 73-86, 1994) (see Equation 9), some new generating functions for the q-Bernoulli numbers and polynomials are shown. Mathematics Subject Classification (2000): 11B68, 11S40, 11S80

  3. Computing Galois Groups of Eisenstein Polynomials Over P-adic Fields

    Science.gov (United States)

    Milstead, Jonathan

    The most efficient algorithms for computing Galois groups of polynomials over global fields are based on Stauduhar's relative resolvent method. These methods are not directly generalizable to the local field case, since they require a field that contains the global field in which all roots of the polynomial can be approximated. We present splitting field-independent methods for computing the Galois group of an Eisenstein polynomial over a p-adic field. Our approach is to combine information from different disciplines. We primarily make use of the ramification polygon of the polynomial, which is the Newton polygon of a related polynomial. This allows us to quickly calculate several invariants that serve to reduce the number of possible Galois groups. Algorithms by Greve and Pauli very efficiently return the Galois group of polynomials where the ramification polygon consists of one segment, as well as information about the subfields of the stem field. Second, we look at the factorization of linear absolute resolvents to further narrow the pool of possible groups.
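
    The Galois-group machinery itself requires a computer algebra system, and the ramification polygon is the Newton polygon of a related polynomial rather than of the input itself; the sketch below only shows how such a Newton polygon is computed from the p-adic valuations of the coefficients, for an assumed small example.

```python
# Lower Newton polygon of a polynomial over Q_p from the p-adic valuations of its coefficients.
from fractions import Fraction

def padic_valuation(n, p):
    if n == 0:
        return float("inf")
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def newton_polygon(coeffs, p):
    """coeffs[i] is the coefficient of x**i; returns the vertices of the lower convex hull."""
    pts = [(i, padic_valuation(c, p)) for i, c in enumerate(coeffs) if c != 0]
    hull = []
    for pt in pts:                                   # monotone scan for the lower hull
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if Fraction(y2 - y1, x2 - x1) >= Fraction(pt[1] - y1, pt[0] - x1):
                hull.pop()
            else:
                break
        hull.append(pt)
    return hull

# Eisenstein example over Q_2: x^4 + 6x^2 + 2 (an assumed polynomial, not from the thesis).
print(newton_polygon([2, 0, 6, 0, 1], p=2))          # [(0, 1), (4, 0)] -> a single slope of -1/4
```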

  4. Fast beampattern evaluation by polynomial rooting

    Science.gov (United States)

    Häcker, P.; Uhlich, S.; Yang, B.

    2011-07-01

    Current automotive radar systems measure the distance, the relative velocity and the direction of objects in their environment. This information enables the car to support the driver. The direction estimation capabilities of a sensor array depend on its beampattern. To find the array configuration leading to the best angle estimation by a global optimization algorithm, a huge number of beampatterns has to be calculated to detect their maxima. In this paper, a novel algorithm is proposed to find all maxima of an array's beampattern fast and reliably, leading to accelerated array optimizations. The algorithm works for arrays having the sensors on a uniformly spaced grid. We use a general version of the gcd (greatest common divisor) function in order to write the problem as a polynomial. We differentiate and root the polynomial to get the extrema of the beampattern. In addition, we show a method to reduce the computational burden even more by decreasing the order of the polynomial.
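
    The paper's gcd-based construction for general uniformly spaced grids is not reproduced here; the sketch below shows the underlying idea for a plain uniform linear array, writing the beampattern through the weight autocorrelation, differentiating, and rooting the resulting polynomial to locate all extrema. The taper and tolerances are assumptions.

```python
# Locate all extrema of a uniform-linear-array beampattern by rooting one polynomial.
import numpy as np

w = np.array([1.0, 0.8, 0.6, 0.8, 1.0])            # sensor weights (an assumed taper)
N = len(w)

# Autocorrelation r[k], k = -(N-1)..(N-1): beampattern B(u) = sum_k r[k] * exp(1j*k*u).
r = np.correlate(w, w, mode="full")
k = np.arange(-(N - 1), N)

# dB/du = sum_k (1j*k*r[k]) exp(1j*k*u); multiplying by z^(N-1) with z = exp(1j*u)
# gives an ordinary polynomial whose unit-circle roots are the beampattern extrema.
deriv_coeffs = (1j * k * r)[::-1]                   # np.roots expects highest power first
roots = np.roots(deriv_coeffs)
on_circle = roots[np.isclose(np.abs(roots), 1.0, atol=1e-6)]
u_extrema = np.sort(np.angle(on_circle))

B = lambda u: np.abs(np.sum(w[:, None] * np.exp(-1j * np.outer(np.arange(N), u)), axis=0)) ** 2
print("extrema at u =", np.round(u_extrema, 3))
print("beampattern values:", np.round(B(u_extrema), 3))
```

    For the symmetric taper chosen here all roots of the derivative polynomial happen to lie on the unit circle, so every maximum and null is recovered in one rooting step; with other weights, off-circle roots are simply filtered out.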

  5. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for the assessment of the performance of a given model by using an example taken from a management study.

  6. Guts of surfaces and the colored Jones polynomial

    CERN Document Server

    Futer, David; Purcell, Jessica

    2013-01-01

    This monograph derives direct and concrete relations between colored Jones polynomials and the topology of incompressible spanning surfaces in knot and link complements. Under mild diagrammatic hypotheses, we prove that the growth of the degree of the colored Jones polynomials is a boundary slope of an essential surface in the knot complement. We show that certain coefficients of the polynomial measure how far this surface is from being a fiber for the knot; in particular, the surface is a fiber if and only if a particular coefficient vanishes. We also relate hyperbolic volume to colored Jones polynomials. Our method is to generalize the checkerboard decompositions of alternating knots. Under mild diagrammatic hypotheses, we show that these surfaces are essential, and obtain an ideal polyhedral decomposition of their complement. We use normal surface theory to relate the pieces of the JSJ decomposition of the  complement to the combinatorics of certain surface spines (state graphs). Since state graphs have p...

  7. Computing Tutte polynomials of contact networks in classrooms

    Science.gov (United States)

    Hincapié, Doracelly; Ospina, Juan

    2013-05-01

    Objective: The topological complexity of contact networks in classrooms and the potential transmission of an infectious disease were analyzed by sex and age. Methods: The Tutte polynomials, some topological properties and the number of spanning trees were used to algebraically compute the topological complexity. Computations were made with the Maple package GraphTheory. Published data of mutually reported social contacts within a classroom taken from primary school, consisting of children in the age ranges of 4-5, 7-8 and 10-11, were used. Results: The algebraic complexity of the Tutte polynomial and the probability of disease transmission increase with age. The contact networks are not bipartite graphs; gender segregation was observed, especially in younger children. Conclusion: Tutte polynomials are tools to understand the topology of the contact networks and to derive numerical indexes of such topologies. It is possible to establish relationships between the Tutte polynomial of a given contact network and the potential transmission of an infectious disease within such a network.
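
    A compact deletion-contraction sketch of the Tutte polynomial is shown below (the study itself used Maple's GraphTheory package); the toy graph stands in for a classroom contact network, and the exponential-time recursion is only practical for small graphs.

```python
# Minimal deletion-contraction sketch for the Tutte polynomial of a small
# multigraph (illustrative; the study used Maple's GraphTheory package).
import sympy as sp

x, y = sp.symbols("x y")

def tutte(edges):
    """Tutte polynomial T(G; x, y) of a multigraph given as a list of (u, v) edges."""
    if not edges:
        return sp.Integer(1)
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                   # loop
        return y * tutte(rest)
    if is_bridge(u, v, rest):                    # bridge (cut edge)
        return x * tutte(contract(u, v, rest))
    return tutte(rest) + tutte(contract(u, v, rest))

def is_bridge(u, v, rest):
    """True if u and v are disconnected once this copy of (u, v) is removed."""
    seen, stack = {u}, [u]
    while stack:
        a = stack.pop()
        for (p, q) in rest:
            for b in ((q,) if p == a else (p,) if q == a else ()):
                if b not in seen:
                    seen.add(b)
                    stack.append(b)
    return v not in seen

def contract(u, v, rest):
    """Identify v with u (edge contraction), keeping parallel edges and loops."""
    relabel = lambda w: u if w == v else w
    return [(relabel(p), relabel(q)) for (p, q) in rest]

# Toy "contact network": a triangle {1,2,3} with a pendant vertex 4
G = [(1, 2), (2, 3), (1, 3), (3, 4)]
T = sp.expand(tutte(G))
print("T(G; x, y) =", T)
print("spanning trees:", T.subs({x: 1, y: 1}))   # T(1,1) counts spanning trees
```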

  8. Model building strategy for logistic regression: purposeful selection.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-03-01

    Logistic regression is one of the most commonly used models to account for confounders in the medical literature. The article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable will have a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment of the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF), in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
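
    The likelihood-ratio step and the check on the remaining coefficients can be sketched in Python as follows (the article itself works in R); the data and variable names are simulated placeholders.

```python
# Minimal sketch of the likelihood-ratio step in purposeful selection
# (illustrative; simulated data and placeholder covariate names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "sex": rng.integers(0, 2, n),
    "biomarker": rng.normal(0, 1, n),
})
logit_p = -8 + 0.12 * df["age"] + 0.5 * df["sex"]          # biomarker has no true effect
df["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

full = smf.logit("outcome ~ age + sex + biomarker", df).fit(disp=0)
reduced = smf.logit("outcome ~ age + sex", df).fit(disp=0)

# Likelihood ratio test: does deleting 'biomarker' significantly worsen the fit?
lr_stat = 2 * (full.llf - reduced.llf)
p_value = stats.chi2.sf(lr_stat, df=full.df_model - reduced.df_model)
print(f"LR statistic = {lr_stat:.3f}, p = {p_value:.3f}")

# Check whether the dropped variable is an important adjustment: relative change in
# the remaining coefficients (a large change would argue for keeping it anyway).
delta = (reduced.params[["age", "sex"]] - full.params[["age", "sex"]]) / full.params[["age", "sex"]]
print(delta.abs())
```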

  9. The APT model as reduced-rank regression

    NARCIS (Netherlands)

    Bekker, P.A.; Dobbelstein, P.; Wansbeek, T.J.

    Integrating the two steps of an arbitrage pricing theory (APT) model leads to a reduced-rank regression (RRR) model. So the results on RRR can be used to estimate APT models, making estimation very simple. We give a succinct derivation of estimation of RRR, derive the asymptotic variance of RRR

  10. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept and the slope among children, and the within-subject correlation was modeled with a first order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
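
    A simplified statsmodels analogue of such a model is sketched below, assuming simulated height data, a cubic B-spline basis for age, and random intercepts and slopes; the continuous AR(1) residual structure used in the study is not available in MixedLM and is omitted.

```python
# Minimal sketch: linear mixed-effect model with a cubic regression spline for age
# (illustrative; simulated data, without the continuous AR(1) error term of the study).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_children, n_visits = 60, 10
age = np.tile(np.linspace(0.1, 4.0, n_visits), n_children)       # years
child = np.repeat(np.arange(n_children), n_visits)

# Simulate height (cm) with child-specific intercepts and slopes
u0 = rng.normal(0, 2.0, n_children)
u1 = rng.normal(0, 1.0, n_children)
height = 50 + 18 * np.sqrt(age) + u0[child] + u1[child] * age + rng.normal(0, 0.8, age.size)

df = pd.DataFrame({"height": height, "age": age, "child": child})

# Cubic B-spline basis for age (df=6, degree=3 -> 3 interior knots) in the fixed part,
# random intercept and random slope in age for each child.
model = smf.mixedlm("height ~ bs(age, df=6, degree=3)", df,
                    groups="child", re_formula="~ age")
fit = model.fit(method="lbfgs")
print(fit.summary())
```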

  11. Influence diagnostics in meta-regression model.

    Science.gov (United States)

    Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

    2017-09-01

    This paper studies influence diagnostics in the meta-regression model, including case deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficient and heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analyses based on the case-weights perturbation scheme, response perturbation scheme, covariate perturbation scheme, and within-variance perturbation scheme are explored. We introduce a method of simultaneously perturbing responses, covariates, and within-variances to obtain the local influence measure, which has the advantage of being able to compare the influence magnitude of influential studies across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Logistic Regression Modeling of Diminishing Manufacturing Sources for Integrated Circuits

    National Research Council Canada - National Science Library

    Gravier, Michael

    1999-01-01

    .... The research identified logistic regression as a powerful tool for analysis of DMSMS and further developed twenty models attempting to identify the "best" way to model and predict DMSMS using logistic regression...

  13. Simplified polynomial representation of cross sections for reactor calculation

    International Nuclear Information System (INIS)

    Dias, A.M.; Sakai, M.

    1985-01-01

    A simplified representation of a cross section library generated by transport theory, using the Wigner-Seitz cell model for typical PWR fuel elements, is shown. The effect of burnup evolution is treated through tables of reference cross sections, and the effect of the variation of the reactor operation parameters is taken into account by adjusted polynomials. (M.C.K.) [pt

  14. Explicit formulae for the generalized Hermite polynomials in superspace

    International Nuclear Information System (INIS)

    Desrosiers, Patrick; Lapointe, Luc; Mathieu, Pierre

    2004-01-01

    We provide explicit formulae for the orthogonal eigenfunctions of the supersymmetric extension of the rational Calogero-Moser-Sutherland model with harmonic confinement, i.e., the generalized Hermite (or Hi-Jack) polynomials in superspace. The construction relies on the triangular action of the Hamiltonian on the supermonomial basis. This translates into determinantal expressions for the Hamiltonian's eigenfunctions

  15. Root and Critical Point Behaviors of Certain Sums of Polynomials

    Indian Academy of Sciences (India)


    There is an extensive literature concerning roots of sums of polynomials. Many papers and books ([5], [6], [7]) have been written about these polynomials. Perhaps the most immediate question about sums of polynomials, A + B = C, is: given bounds for the roots of A and B, what bounds can be given for the roots of C? By Fell [3], if ...

  16. Analysis of dental caries using generalized linear and count regression models

    Directory of Open Access Journals (Sweden)

    Javali M. Phil

    2013-11-01

    Full Text Available Generalized linear models (GLM) are generalizations of linear regression models, which allow fitting regression models to response data in all the sciences, especially the medical and dental sciences, that follow a general exponential family. These are a flexible and widely used class of such models that can accommodate response variables. Count data are frequently characterized by overdispersion and excess zeros. Zero-inflated count models provide a parsimonious yet powerful way to model this type of situation. Such models assume that the data are a mixture of two separate data generation processes: one generates only zeros, and the other is either a Poisson or a negative binomial data-generating process. Zero-inflated count regression models such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) regression models have been used to handle dental caries count data with many zeros. We present an evaluation framework for the suitability of applying the GLM, Poisson, NB, ZIP and ZINB to a dental caries data set where the count data may exhibit evidence of many zeros and over-dispersion. Estimation of the model parameters using the method of maximum likelihood is provided. Based on the Vuong test statistic and the goodness of fit measure for the dental caries data, the NB and ZINB regression models perform better than the other count regression models.
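
    A minimal statsmodels sketch comparing Poisson, NB, ZIP and ZINB fits on simulated zero-inflated counts is shown below, using AIC only (the Vuong test used in the article is not reproduced); the data and covariate are invented.

```python
# Minimal sketch: comparing Poisson, NB, ZIP and ZINB fits to zero-inflated
# count data (illustrative; simulated caries-like counts, AIC comparison only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 800
x = rng.normal(0, 1, n)                       # e.g. a standardized risk factor
X = sm.add_constant(x)

# Simulate: structural zeros with probability 0.35, otherwise Poisson counts
mu = np.exp(0.4 + 0.6 * x)
y = rng.poisson(mu) * (rng.random(n) > 0.35)

models = {
    "Poisson": sm.Poisson(y, X).fit(disp=0),
    "NegBin": sm.NegativeBinomial(y, X).fit(disp=0, maxiter=200),
    "ZIP": sm.ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0, maxiter=200),
    "ZINB": sm.ZeroInflatedNegativeBinomialP(y, X, exog_infl=np.ones((n, 1))).fit(disp=0, maxiter=200),
}
for name, res in models.items():
    print(f"{name:8s} AIC = {res.aic:8.1f}")
```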

  17. Reduced-order modeling with sparse polynomial chaos expansion and dimension reduction for evaluating the impact of CO2 and brine leakage on groundwater

    Science.gov (United States)

    Liu, Y.; Zheng, L.; Pau, G. S. H.

    2016-12-01

    A careful assessment of the risk associated with geologic CO2 storage is critical to the deployment of large-scale storage projects. While numerical modeling is an indispensable tool for risk assessment, there has been increasing need in considering and addressing uncertainties in the numerical models. However, uncertainty analyses have been significantly hindered by the computational complexity of the model. As a remedy, reduced-order models (ROM), which serve as computationally efficient surrogates for high-fidelity models (HFM), have been employed. The ROM is constructed at the expense of an initial set of HFM simulations, and afterwards can be relied upon to predict the model output values at minimal cost. The ROM presented here is part of National Risk Assessment Program (NRAP) and intends to predict the water quality change in groundwater in response to hypothetical CO2 and brine leakage. The HFM based on which the ROM is derived is a multiphase flow and reactive transport model, with 3-D heterogeneous flow field and complex chemical reactions including aqueous complexation, mineral dissolution/precipitation, adsorption/desorption via surface complexation and cation exchange. Reduced-order modeling techniques based on polynomial basis expansion, such as polynomial chaos expansion (PCE), are widely used in the literature. However, the accuracy of such ROMs can be affected by the sparse structure of the coefficients of the expansion. Failing to identify vanishing polynomial coefficients introduces unnecessary sampling errors, the accumulation of which deteriorates the accuracy of the ROMs. To address this issue, we treat the PCE as a sparse Bayesian learning (SBL) problem, and the sparsity is obtained by detecting and including only the non-zero PCE coefficients one at a time by iteratively selecting the most contributing coefficients. The computational complexity due to predicting the entire 3-D concentration fields is further mitigated by a dimension
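
    The sparsity idea can be illustrated in a toy setting: build a Hermite polynomial chaos basis for a few Gaussian inputs and let a sparse Bayesian learner pick the non-zero coefficients. Scikit-learn's ARD regression is used here only as a stand-in for the iterative SBL scheme described above, and the "model" is a simple analytic function rather than the reactive-transport high-fidelity model.

```python
# Minimal sketch of a sparse polynomial chaos expansion: a Hermite PC basis for
# Gaussian inputs, with coefficients selected by sparse Bayesian (ARD) regression.
# Illustrative only; a stand-in for the iterative SBL scheme described above.
import numpy as np
from itertools import product
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(4)
dim, order, n = 3, 3, 400
xi = rng.standard_normal((n, dim))                 # standard Gaussian inputs

# Toy "high-fidelity model": only a few PC terms are truly active
y = 1.0 + 2.0 * xi[:, 0] + 0.5 * (xi[:, 1] ** 2 - 1.0) + 0.3 * xi[:, 0] * xi[:, 2] \
    + 0.05 * rng.standard_normal(n)

# Total-degree multi-index set and probabilists' Hermite basis He_k
multi_indices = [m for m in product(range(order + 1), repeat=dim) if sum(m) <= order]
def he(k, z):
    return hermeval(z, np.eye(order + 1)[k])       # He_k evaluated at z
Psi = np.column_stack([np.prod([he(k, xi[:, d]) for d, k in enumerate(m)], axis=0)
                       for m in multi_indices])

surrogate = ARDRegression(fit_intercept=False).fit(Psi, y)
active = np.abs(surrogate.coef_) > 1e-3
for m, c in zip(np.array(multi_indices)[active], surrogate.coef_[active]):
    print(tuple(m), round(c, 3))                   # recovered sparse PCE terms
```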

  18. A History of Regression and Related Model-Fitting in the Earth Sciences (1636?-2000)

    International Nuclear Information System (INIS)

    Howarth, Richard J.

    2001-01-01

    The (statistical) modeling of the behavior of a dependent variate as a function of one or more predictors provides examples of model-fitting which span the development of the earth sciences from the 17th Century to the present. The historical development of these methods and their subsequent application is reviewed. Bond's predictions (c. 1636 and 1668) of change in the magnetic declination at London may be the earliest attempt to fit such models to geophysical data. Following publication of Newton's theory of gravitation in 1726, analysis of data on the length of a 1° meridian arc, and the length of a pendulum beating seconds, as a function of sin²(latitude), was used to determine the ellipticity of the oblate spheroid defining the Figure of the Earth. The pioneering computational methods of Mayer in 1750, Boscovich in 1755, and Lambert in 1765, and the subsequent independent discoveries of the principle of least squares by Gauss in 1799, Legendre in 1805, and Adrain in 1808, and its later substantiation on the basis of probability theory by Gauss in 1809 were all applied to the analysis of such geodetic and geophysical data. Notable later applications include: the geomagnetic survey of Ireland by Lloyd, Sabine, and Ross in 1836, Gauss's model of the terrestrial magnetic field in 1838, and Airy's 1845 analysis of the residuals from a fit to pendulum lengths, from which he recognized the anomalous character of measurements of gravitational force which had been made on islands. In the early 20th Century applications to geological topics proliferated, but the computational burden effectively held back applications of multivariate analysis. Following World War II, the arrival of digital computers in universities in the 1950s facilitated computation, and fitting linear or polynomial models as a function of geographic coordinates, trend surface analysis, became popular during the 1950-60s. The inception of geostatistics in France at this time by Matheron had its

  19. The chromatic polynomial and list colorings

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2009-01-01

    We prove that, if a graph has a list of k available colors at every vertex, then the number of list-colorings is at least the chromatic polynomial evaluated at k when k is sufficiently large compared to the number of vertices of the graph.

  20. BSDEs with polynomial growth generators

    Directory of Open Access Journals (Sweden)

    Philippe Briand

    2000-01-01

    Full Text Available In this paper, we give existence and uniqueness results for backward stochastic differential equations when the generator has a polynomial growth in the state variable. We deal with the case of a fixed terminal time, as well as the case of random terminal time. The need for this type of extension of the classical existence and uniqueness results comes from the desire to provide a probabilistic representation of the solutions of semilinear partial differential equations in the spirit of a nonlinear Feynman-Kac formula. Indeed, in many applications of interest, the nonlinearity is polynomial, e.g, the Allen-Cahn equation or the standard nonlinear heat and Schrödinger equations.

  1. AIRLINE ACTIVITY FORECASTING BY REGRESSION MODELS

    Directory of Open Access Journals (Sweden)

    Н. Білак

    2012-04-01

    Full Text Available Linear and nonlinear regression models are proposed which take into account the trend equation and seasonality indices for analysing and reconstructing the volume of passenger traffic over a past period of time and for predicting it for future years, together with an algorithm for the formation of these models based on statistical analysis over the years. The resulting model is the first step towards the synthesis of more complex models, which will enable forecasting of the passenger (income) level of the airline with the highest accuracy and timeliness.
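
    As a rough sketch of such a model (a linear trend plus monthly seasonality indices, fitted by ordinary least squares to simulated monthly traffic and then extrapolated), illustrative only:

```python
# Minimal sketch: trend-plus-seasonality regression for monthly passenger traffic
# (illustrative; simulated monthly data, ordinary least squares).
import numpy as np

rng = np.random.default_rng(5)
months = np.arange(72)                                    # six years of monthly data
season = 1.0 + 0.25 * np.sin(2 * np.pi * months / 12.0)   # "true" seasonal factor
passengers = (50_000 + 400 * months) * season * rng.normal(1.0, 0.03, months.size)

# Design matrix: intercept, linear trend, and 11 monthly dummy variables
month_of_year = months % 12
dummies = (month_of_year[:, None] == np.arange(1, 12)).astype(float)
X = np.column_stack([np.ones_like(months, dtype=float), months, dummies])
beta, *_ = np.linalg.lstsq(X, passengers, rcond=None)

# Forecast the next 12 months with the fitted trend and seasonality indices
future = np.arange(72, 84)
Xf = np.column_stack([np.ones(12), future,
                      ((future % 12)[:, None] == np.arange(1, 12)).astype(float)])
print(np.round(Xf @ beta, 0))
```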

  2. [Application of detecting and taking overdispersion into account in Poisson regression model].

    Science.gov (United States)

    Bouche, G; Lepage, B; Migeot, V; Ingrand, P

    2009-08-01

    Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact with an illustration based on the data of a study investigating the relationship between use of the Internet to seek health information and number of primary care consultations. Three methods, overdispersed Poisson, a robust estimator, and negative binomial regression, were performed to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested overdispersion in the Poisson regression model using the ratio of the sum of Pearson residuals over the number of degrees of freedom (chi(2)/df). We then fitted the three models and compared parameter estimation to the estimations given by the Poisson regression model. Variance of the number of primary care consultations (Var[Y]=21.03) was greater than the mean (E[Y]=5.93) and the chi(2)/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. Interpretation of estimates from two variables (using the Internet to seek health information and single parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (use of a robust estimator) and 0.45 and 0.13 (negative binomial) respectively. Different methods exist to solve the problem of underestimating variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems to be particularly accurate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages.
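
    A small statsmodels sketch of the three corrections on simulated overdispersed counts follows: Pearson-scaled (overdispersed/quasi-) Poisson, robust (sandwich) standard errors, and negative binomial regression. The variable names and data are invented, and the negative binomial dispersion parameter is left at its default rather than estimated.

```python
# Minimal sketch: detecting overdispersion in a Poisson GLM and refitting with a
# scaled (quasi-)Poisson, robust standard errors, and a negative binomial model.
# Illustrative only; simulated consultation counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 600
internet_use = rng.integers(0, 2, n)
X = sm.add_constant(internet_use.astype(float))

# Simulate overdispersed counts (gamma-Poisson mixture = negative binomial)
mu = np.exp(1.6 + 0.15 * internet_use)
y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
ratio = poisson.pearson_chi2 / poisson.df_resid
print(f"Pearson chi2 / df = {ratio:.2f}  (>1 suggests overdispersion)")

quasi = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")   # overdispersed Poisson
robust = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()

for name, res in [("Poisson", poisson), ("quasi-Poisson", quasi),
                  ("robust", robust), ("negative binomial", negbin)]:
    print(f"{name:18s} beta = {res.params[1]:.3f}  SE = {res.bse[1]:.3f}")
```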

  3. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    Science.gov (United States)

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  4. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    Directory of Open Access Journals (Sweden)

    Waddah Waheeb

    Full Text Available Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF), that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  5. Algebraic calculations for spectrum of superintegrable system from exceptional orthogonal polynomials

    Science.gov (United States)

    Hoque, Md. Fazlul; Marquette, Ian; Post, Sarah; Zhang, Yao-Zhong

    2018-04-01

    We introduce an extended Kepler-Coulomb quantum model in spherical coordinates. The Schrödinger equation of this Hamiltonian is solved in these coordinates and it is shown that the wave functions of the system can be expressed in terms of Laguerre, Legendre and exceptional Jacobi polynomials (of hypergeometric type). We construct ladder and shift operators based on the corresponding wave functions and obtain their recurrence formulas. These recurrence relations are used to construct higher-order, algebraically independent integrals of motion to prove superintegrability of the Hamiltonian. The integrals form a higher rank polynomial algebra. By constructing the structure functions of the associated deformed oscillator algebras we derive the degeneracy of energy spectrum of the superintegrable system.

  6. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high

  7. Maximum solid concentrations of coal water slurries predicted by neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Jun; Li, Yanchang; Zhou, Junhu; Liu, Jianzhong; Cen, Kefa

    2010-12-15

    The nonlinear back-propagation (BP) neural network models were developed to predict the maximum solid concentration of coal water slurry (CWS) which is a substitute for oil fuel, based on physicochemical properties of 37 typical Chinese coals. The Levenberg-Marquardt algorithm was used to train five BP neural network models with different input factors. The data pretreatment method, learning rate and hidden neuron number were optimized by training models. It is found that the Hardgrove grindability index (HGI), moisture and coalification degree of parent coal are 3 indispensable factors for the prediction of CWS maximum solid concentration. Each BP neural network model gives a more accurate prediction result than the traditional polynomial regression equation. The BP neural network model with 3 input factors of HGI, moisture and oxygen/carbon ratio gives the smallest mean absolute error of 0.40%, which is much lower than that of 1.15% given by the traditional polynomial regression equation. (author)

  8. Polynomial chaos functions and stochastic differential equations

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2006-01-01

    The Karhunen-Loeve procedure and the associated polynomial chaos expansion have been employed to solve a simple first order stochastic differential equation which is typical of transport problems. Because the equation has an analytical solution, it provides a useful test of the efficacy of polynomial chaos. We find that the convergence is very rapid in some cases but that the increased complexity associated with many random variables can lead to very long computational times. The work is illustrated by exact and approximate solutions for the mean, variance and the probability distribution itself. The usefulness of a white noise approximation is also assessed. Extensive numerical results are given which highlight the weaknesses and strengths of polynomial chaos. The general conclusion is that the method is promising but requires further detailed study by application to a practical problem in transport theory

  9. Minimal residual method stronger than polynomial preconditioning

    Energy Technology Data Exchange (ETDEWEB)

    Faber, V.; Joubert, W.; Knill, E. [Los Alamos National Lab., NM (United States)] [and others]

    1994-12-31

    Two popular methods for solving symmetric and nonsymmetric systems of equations are the minimal residual method, implemented by algorithms such as GMRES, and polynomial preconditioning methods. In this study results are given on the convergence rates of these methods for various classes of matrices. It is shown that for some matrices, such as normal matrices, the convergence rates for GMRES and for the optimal polynomial preconditioning are the same, and for other matrices such as the upper triangular Toeplitz matrices, it is at least assured that if one method converges then the other must converge. On the other hand, it is shown that matrices exist for which restarted GMRES always converges but any polynomial preconditioning of corresponding degree makes no progress toward the solution for some initial error. The implications of these results for these and other iterative methods are discussed.

  10. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    Science.gov (United States)

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using the Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence to caregivers' recognition of risk signs of diarrhea in their infants using a Bayesian log-binomial regression model in OpenBUGS software. The results showed that caregivers' recognition of infant's risk signs of diarrhea was associated significantly with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point and interval estimation of the PR of medical care-seeking prevalence to caregivers' recognition of risk signs of diarrhea, and the convergence of three models (model 1: not adjusting for the covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child month-age based on model 2), between the Bayesian log-binomial regression model and the conventional log-binomial regression model. The results showed that all three Bayesian log-binomial regression models converged and the estimated PRs were 1.130 (95% CI: 1.005-1.265), 1.128 (95% CI: 1.001-1.264) and 1.132 (95% CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged and their PRs were 1.130 (95% CI: 1.055-1.206) and 1.126 (95% CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate PR, which was 1.125 (95% CI: 1.051-1.200). In addition, the point and interval estimation of PRs from the three Bayesian log-binomial regression models differed slightly from those from the conventional log-binomial regression model, but they had good consistency in estimating PR. Therefore, the Bayesian log-binomial regression model can effectively estimate PR with fewer convergence problems and has advantages in application compared with the conventional log-binomial regression model.
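
    For orientation, a frequentist log-binomial fit (a GLM with binomial family and log link in statsmodels) on simulated data shows how the exponentiated coefficient yields a prevalence ratio; the article's Bayesian version was fitted in OpenBUGS and is not reproduced here, and the variable names and numbers are invented.

```python
# Minimal sketch of a (frequentist) log-binomial regression whose exponentiated
# coefficient is a prevalence ratio; the study fitted the Bayesian analogue in
# OpenBUGS. Illustrative, simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1200
recognises_signs = rng.integers(0, 2, n)
X = sm.add_constant(recognises_signs.astype(float))

# Simulate care-seeking with a prevalence ratio of exp(0.12) ~ 1.13
p = np.exp(np.log(0.60) + 0.12 * recognises_signs)
care_seeking = rng.binomial(1, p)

model = sm.GLM(care_seeking, X,
               family=sm.families.Binomial(link=sm.families.links.Log()))
# Start inside the valid region (all fitted probabilities <= 1) to help convergence
res = model.fit(start_params=[np.log(care_seeking.mean()), 0.0])

pr = np.exp(res.params[1])
lo, hi = np.exp(res.conf_int()[1])
print(f"PR = {pr:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```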

  11. Bernoulli numbers and polynomials from a more general point of view

    International Nuclear Information System (INIS)

    Dattoli, G.; Cesarano, C.; Lorenzutta, S.

    2000-01-01

    In this work the method of generating functions is applied to introduce new forms of Bernoulli numbers and polynomials, which are exploited to derive further classes of partial sums involving generalized many-index, many-variable polynomials. Analogous considerations are developed for the Euler numbers and polynomials [it

  12. Generalizations of an integral for Legendre polynomials by Persson and Strang

    NARCIS (Netherlands)

    Diekema, E.; Koornwinder, T.H.

    2012-01-01

    Persson and Strang (2003) evaluated the integral over [−1,1] of a squared odd degree Legendre polynomial divided by x2 as being equal to 2. We consider a similar integral for orthogonal polynomials with respect to a general even orthogonality measure, with Gegenbauer and Hermite polynomials as

  13. Bayesian inference of earthquake parameters from buoy data using a polynomial chaos-based surrogate

    KAUST Repository

    Giraldi, Loic; Le Maî tre, Olivier P.; Mandli, Kyle T.; Dawson, Clint N.; Hoteit, Ibrahim; Knio, Omar

    2017-01-01

    on polynomial chaos expansion to construct a surrogate model of the wave height at the buoy location. A correlated noise model is first proposed in order to represent the discrepancy between the computational model and the data. This step is necessary, as a

  14. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    KAUST Repository

    Kadoura, Ahmad Salim; Siripatana, Adil; Sun, Shuyu; Knio, Omar; Hoteit, Ibrahim

    2016-01-01

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard

  15. Eye aberration analysis with Zernike polynomials

    Science.gov (United States)

    Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.

    1998-06-01

    New horizons for accurate photorefractive sight correction, afforded by novel flying spot technologies, require adequate measurements of the photorefractive properties of an eye. Proposed techniques of eye refraction mapping present results of measurements for a finite number of points of the eye aperture, which must be approximated by a 3D surface. A technique of wave front approximation with Zernike polynomials is described, using optimization of the number of polynomial coefficients. The criterion of optimization is the nearest proximity of the resulting continuous surface to the values calculated for the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of transverse aberrations; in particular, consecutively varying the values of the maximal coefficient indices of the Zernike polynomials, recalculating the coefficients, and computing the value of RMSD. Optimization is finished at the minimal value of RMSD. Formulas are given for computing ametropia and the size of the spot of light on the retina caused by spherical aberration, coma, and astigmatism. Results are illustrated by experimental data that could be of interest for other applications where detailed evaluation of eye parameters is needed.
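
    A minimal least-squares Zernike fit is sketched below, assuming a handful of hand-coded low-order terms and simulated wavefront samples; the article's RMSD-based optimization of the number of coefficients is only hinted at by the reported residual.

```python
# Minimal sketch: least-squares fit of a few low-order Zernike terms to sampled
# wavefront data over a unit pupil (illustrative; hand-coded, unnormalized terms only).
import numpy as np

def zernike_basis(rho, theta):
    """Columns: piston, tilt x/y, defocus, astigmatism (2), coma (2), spherical."""
    return np.column_stack([
        np.ones_like(rho),
        rho * np.cos(theta), rho * np.sin(theta),
        2 * rho**2 - 1,
        rho**2 * np.cos(2 * theta), rho**2 * np.sin(2 * theta),
        (3 * rho**3 - 2 * rho) * np.cos(theta), (3 * rho**3 - 2 * rho) * np.sin(theta),
        6 * rho**4 - 6 * rho**2 + 1,
    ])

rng = np.random.default_rng(8)
n = 300
rho = np.sqrt(rng.random(n))                 # uniform samples over the unit pupil
theta = rng.uniform(0, 2 * np.pi, n)

# Simulated measured wavefront: defocus + a little coma + measurement noise (microns)
w = 0.8 * (2 * rho**2 - 1) + 0.15 * (3 * rho**3 - 2 * rho) * np.cos(theta) \
    + 0.02 * rng.standard_normal(n)

Z = zernike_basis(rho, theta)
coeffs, *_ = np.linalg.lstsq(Z, w, rcond=None)
rmsd = np.sqrt(np.mean((w - Z @ coeffs) ** 2))
print("coefficients:", np.round(coeffs, 3))
print(f"RMS residual: {rmsd:.4f}")
```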

  16. Animating Nested Taylor Polynomials to Approximate a Function

    Science.gov (United States)

    Mazzone, Eric F.; Piper, Bruce R.

    2010-01-01

    The way that Taylor polynomials approximate functions can be demonstrated by moving the center point while keeping the degree fixed. These animations are particularly nice when the Taylor polynomials do not intersect and form a nested family. We prove a result that shows when this nesting occurs. The animations can be shown in class or…

  17. Geographically Weighted Logistic Regression Applied to Credit Scoring Models

    Directory of Open Access Journals (Sweden)

    Pedro Henrique Melo Albuquerque

    Full Text Available Abstract This study used real data from a Brazilian financial institution on transactions involving Consumer Direct Credit (CDC, granted to clients residing in the Distrito Federal (DF, to construct credit scoring models via Logistic Regression and Geographically Weighted Logistic Regression (GWLR techniques. The aims were: to verify whether the factors that influence credit risk differ according to the borrower’s geographic location; to compare the set of models estimated via GWLR with the global model estimated via Logistic Regression, in terms of predictive power and financial losses for the institution; and to verify the viability of using the GWLR technique to develop credit scoring models. The metrics used to compare the models developed via the two techniques were the AICc informational criterion, the accuracy of the models, the percentage of false positives, the sum of the value of false positive debt, and the expected monetary value of portfolio default compared with the monetary value of defaults observed. The models estimated for each region in the DF were distinct in their variables and coefficients (parameters, with it being concluded that credit risk was influenced differently in each region in the study. The Logistic Regression and GWLR methodologies presented very close results, in terms of predictive power and financial losses for the institution, and the study demonstrated viability in using the GWLR technique to develop credit scoring models for the target population in the study.
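
    The local-fitting idea behind GWLR can be illustrated with a toy sketch: fit a separate logistic regression at each target location, weighting observations by a Gaussian kernel of geographic distance. This is only a stand-in for the full GWLR estimation (which also selects the bandwidth and tests for spatial non-stationarity); the coordinates, income covariate, and bandwidth below are invented for the example.

```python
# Minimal sketch of geographically weighted logistic regression: one locally
# weighted logistic fit per target location, with Gaussian distance weights.
# Illustrative only; simulated coordinates and default events, fixed bandwidth.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 1000
coords = rng.uniform(0, 10, size=(n, 2))                # borrower locations (km)
income = rng.normal(0, 1, n)

# Simulate default risk whose dependence on income varies across space
beta_income = -0.5 - 0.15 * coords[:, 0]                # stronger effect in the east
p = 1 / (1 + np.exp(-(-1.0 + beta_income * income)))
default = rng.binomial(1, p)

def gwlr_coef(target_xy, bandwidth=2.0):
    """Local income coefficient at target_xy from a kernel-weighted logistic fit."""
    d = np.linalg.norm(coords - target_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)             # Gaussian kernel weights
    model = LogisticRegression(C=1e6, max_iter=1000)    # large C ~ no regularization
    model.fit(income.reshape(-1, 1), default, sample_weight=w)
    return model.coef_[0, 0]

for xy in [(1.0, 5.0), (5.0, 5.0), (9.0, 5.0)]:
    print(f"local income coefficient at {xy}: {gwlr_coef(np.array(xy)):.3f}")
```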

  18. Polynomial representations of GLn

    CERN Document Server

    Green, James A; Erdmann, Karin

    2007-01-01

    The first half of this book contains the text of the first edition of LNM volume 830, Polynomial Representations of GLn. This classic account of matrix representations, the Schur algebra, the modular representations of GLn, and connections with symmetric groups, has been the basis of much research in representation theory. The second half is an Appendix, and can be read independently of the first. It is an account of the Littelmann path model for the case gln. In this case, Littelmann's 'paths' become 'words', and so the Appendix works with the combinatorics on words. This leads to the representation theory of the 'Littelmann algebra', which is a close analogue of the Schur algebra. The treatment is self-contained; in particular, complete proofs are given of classical theorems of Schensted and Knuth.

  19. Polynomial representations of GLN

    CERN Document Server

    Green, James A

    1980-01-01

    The first half of this book contains the text of the first edition of LNM volume 830, Polynomial Representations of GLn. This classic account of matrix representations, the Schur algebra, the modular representations of GLn, and connections with symmetric groups, has been the basis of much research in representation theory. The second half is an Appendix, and can be read independently of the first. It is an account of the Littelmann path model for the case gln. In this case, Littelmann's 'paths' become 'words', and so the Appendix works with the combinatorics on words. This leads to the representation theory of the 'Littelmann algebra', which is a close analogue of the Schur algebra. The treatment is self-contained; in particular, complete proofs are given of classical theorems of Schensted and Knuth.

  20. Modeling maximum daily temperature using a varying coefficient regression model

    Science.gov (United States)

    Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith

    2014-01-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

  1. Structured Additive Regression Models: An R Interface to BayesX

    Directory of Open Access Journals (Sweden)

    Nikolaus Umlauf

    2015-02-01

    Full Text Available Structured additive regression (STAR) models provide a flexible framework for modeling possible nonlinear effects of covariates: They contain the well established frameworks of generalized linear models and generalized additive models as special cases but also allow a wider class of effects, e.g., for geographical or spatio-temporal data, allowing for specification of complex and realistic models. BayesX is a standalone software package for fitting a general class of STAR models. Based on a comprehensive open-source regression toolbox written in C++, BayesX uses Bayesian inference for estimating STAR models based on Markov chain Monte Carlo simulation techniques, a mixed model representation of STAR models, or stepwise regression techniques combining penalized least squares estimation with model selection. BayesX not only covers models for responses from univariate exponential families, but also models from less-standard regression situations such as models for multi-categorical responses with either ordered or unordered categories, continuous time survival data, or continuous time multi-state models. This paper presents a new fully interactive R interface to BayesX: the R package R2BayesX. With the new package, STAR models can be conveniently specified using R's formula language (with some extended terms), fitted using the BayesX binary, represented in R with objects of suitable classes, and finally printed/summarized/plotted. This makes BayesX much more accessible to users familiar with R and adds extensive graphics capabilities for visualizing fitted STAR models. Furthermore, R2BayesX complements the already impressive capabilities for semiparametric regression in R by a comprehensive toolbox comprising in particular more complex response types and alternative inferential procedures such as simulation-based Bayesian inference.

  2. Multiple regression and beyond an introduction to multiple regression and structural equation modeling

    CERN Document Server

    Keith, Timothy Z

    2014-01-01

    Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. Covers both MR and SEM, while explaining their relevance to one another Also includes path analysis, confirmatory factor analysis, and latent growth modeling Figures and tables throughout provide examples and illustrate key concepts and techniques For additional resources, please visit: http://tzkeith.com/.

  3. Time series regression model for infectious disease and weather.

    Science.gov (United States)

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
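
    The sketch below illustrates two of the proposed modifications (the logarithm of lagged case counts for contagion and a sum of past cases as an immunity proxy) in a quasi-Poisson-style GLM with seasonal harmonics; the data are simulated, and the distributed lag non-linear and ARIMA alternatives mentioned above are not shown.

```python
# Minimal sketch of a time-series regression for an infectious disease: weekly
# counts regressed on lagged temperature, with log lagged cases (contagion),
# a sum of past cases (immunity proxy) and seasonal harmonics.
# Illustrative only; simulated data, overdispersion handled by Pearson scaling.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
weeks = 6 * 52
t = np.arange(weeks)
temp = 16 + 8 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 2, weeks)

# Simulate counts with contagion and a mild temperature effect
cases = np.empty(weeks)
cases[0] = 20
for i in range(1, weeks):
    lam = np.exp(1.0 + 0.6 * np.log(cases[i - 1] + 1) + 0.03 * (temp[i - 1] - 16))
    cases[i] = rng.poisson(lam)

df = pd.DataFrame({"cases": cases, "temp": temp, "t": t})
df["log_lag_cases"] = np.log(df["cases"].shift(1) + 1)      # contagion term
df["lag_temp"] = df["temp"].shift(1)                        # lagged exposure
df["immune_proxy"] = df["cases"].shift(1).rolling(26).sum() # past cases ~ immunity
df["sin52"] = np.sin(2 * np.pi * df["t"] / 52)              # seasonality
df["cos52"] = np.cos(2 * np.pi * df["t"] / 52)
df = df.dropna()

model = smf.glm("cases ~ lag_temp + log_lag_cases + immune_proxy + sin52 + cos52",
                df, family=sm.families.Poisson())
res = model.fit(scale="X2")                                 # quasi-Poisson scaling
print(res.summary())
```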

  4. Linear regression crash prediction models : issues and proposed solutions.

    Science.gov (United States)

    2010-05-01

    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions; namely error structure normality ...

  5. H0 from cosmic chronometers and Type Ia supernovae, with Gaussian Processes and the novel Weighted Polynomial Regression method

    Science.gov (United States)

    Gómez-Valent, Adrià; Amendola, Luca

    2018-04-01

    In this paper we present new constraints on the Hubble parameter H0 using: (i) the available data on H(z) obtained from cosmic chronometers (CCH); (ii) the Hubble rate data points extracted from the supernovae of Type Ia (SnIa) of the Pantheon compilation and the Hubble Space Telescope (HST) CANDELS and CLASH Multy-Cycle Treasury (MCT) programs; and (iii) the local HST measurement of H0 provided by Riess et al. (2018), H0HST=(73.45±1.66) km/s/Mpc. Various determinations of H0 using the Gaussian processes (GPs) method and the most updated list of CCH data have been recently provided by Yu, Ratra & Wang (2018). Using the Gaussian kernel they find H0=(67.42± 4.75) km/s/Mpc. Here we extend their analysis to also include the most released and complete set of SnIa data, which allows us to reduce the uncertainty by a factor ~ 3 with respect to the result found by only considering the CCH information. We obtain H0=(67.06± 1.68) km/s/Mpc, which favors again the lower range of values for H0 and is in tension with H0HST. The tension reaches the 2.71σ level. We round off the GPs determination too by taking also into account the error propagation of the kernel hyperparameters when the CCH with and without H0HST are used in the analysis. In addition, we present a novel method to reconstruct functions from data, which consists in a weighted sum of polynomial regressions (WPR). We apply it from a cosmographic perspective to reconstruct H(z) and estimate H0 from CCH and SnIa measurements. The result obtained with this method, H0=(68.90± 1.96) km/s/Mpc, is fully compatible with the GPs ones. Finally, a more conservative GPs+WPR value is also provided, H0=(68.45± 2.00) km/s/Mpc, which is still almost 2σ away from H0HST.
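
    A toy illustration of the Gaussian-process step only (not the paper's full analysis nor its weighted polynomial regression method) is given below: reconstruct H(z) from a handful of synthetic chronometer-like points with an RBF (Gaussian) kernel and read off the value at z = 0. The numbers are invented, not the CCH or Pantheon compilations.

```python
# Minimal sketch of the Gaussian-process step: reconstruct H(z) from mock
# chronometer-like data with an RBF (Gaussian) kernel and extrapolate to z = 0.
# Illustrative only; synthetic points, not the CCH/Pantheon compilations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(11)
z = np.sort(rng.uniform(0.07, 2.0, 25))
H_true = 68.0 * np.sqrt(0.3 * (1 + z) ** 3 + 0.7)         # flat LCDM, km/s/Mpc
sigma = rng.uniform(5, 15, z.size)
H_obs = H_true + rng.normal(0, sigma)

kernel = ConstantKernel(100.0, (1.0, 1e4)) * RBF(length_scale=2.0,
                                                 length_scale_bounds=(0.1, 10.0))
gp = GaussianProcessRegressor(kernel=kernel, alpha=sigma**2,
                              normalize_y=True, n_restarts_optimizer=5)
gp.fit(z.reshape(-1, 1), H_obs)

H0, H0_std = gp.predict(np.array([[0.0]]), return_std=True)
print(f"H0 = {H0[0]:.1f} +/- {H0_std[0]:.1f} km/s/Mpc")
```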

  6. Complex centers of polynomial differential equations

    Directory of Open Access Journals (Sweden)

    Mohamad Ali M. Alwash

    2007-07-01

    Full Text Available We present some results on the existence and nonexistence of centers for polynomial first order ordinary differential equations with complex coefficients. In particular, we show that binomial differential equations without linear terms do not have complex centers. Classes of polynomial differential equations, with more than two terms, are presented that do not have complex centers. We also study the relation between complex centers and the Pugh problem. An algorithm is described to solve the Pugh problem for equations without complex centers. The method of proof involves phase plane analysis of the polar equations and a local study of periodic solutions.

  7. Differential recurrence formulae for orthogonal polynomials

    Directory of Open Access Journals (Sweden)

    Anton L. W. von Bachhaus

    1995-11-01

    Full Text Available Part I - By combining a general 2nd-order linear homogeneous ordinary differential equation with the three-term recurrence relation possessed by all orthogonal polynomials, it is shown that sequences of orthogonal polynomials which satisfy a differential equation of the above mentioned type necessarily have a differentiation formula of the type: g_n(x) Y'_n(x) = f_n(x) Y_n(x) + Y_{n-1}(x). Part II - A recurrence formula of the form: r_n(x) Y'_n(x) + s_n(x) Y'_{n+1}(x) + t_n(x) Y'_{n-1}(x) = 0 is derived using the result of Part I.

  8. Identification of Influential Points in a Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Jan Grosz

    2011-03-01

    Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods of detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. This paper briefly describes theoretical aspects of several robust methods as well. Robust statistics is a powerful tool to increase the reliability and accuracy of statistical modelling and data analysis. A simulation model of the simple linear regression is presented.
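
    Statsmodels' influence diagnostics can illustrate the quantities discussed (leverage, studentized residuals, Cook's distance) on a simulated simple linear regression with one planted influential point; the flagging thresholds below are common rules of thumb, not those of the article.

```python
# Minimal sketch: detecting outliers, leverage points and influential points in a
# simple linear regression via studentized residuals, hat values and Cook's distance.
# Illustrative only; simulated data with one planted influential observation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
n = 50
x = rng.normal(0, 1, n)
y = 2.0 + 1.5 * x + rng.normal(0, 0.5, n)
x[0], y[0] = 4.0, -2.0                      # planted high-leverage outlier

X = sm.add_constant(x)
res = sm.OLS(y, X).fit()
infl = res.get_influence()

leverage = infl.hat_matrix_diag
student = infl.resid_studentized_external
cooks_d, _ = infl.cooks_distance

flag = (leverage > 2 * X.shape[1] / n) | (np.abs(student) > 2) | (cooks_d > 4 / n)
for i in np.flatnonzero(flag):
    print(f"obs {i:2d}: leverage={leverage[i]:.3f} "
          f"studentized={student[i]:+.2f} CookD={cooks_d[i]:.3f}")
```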

  9. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    Science.gov (United States)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
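
    A sketch of the MRR idea under simplifying assumptions follows: the parametric part is a straight-line calibration, the nonparametric part is a Nadaraya-Watson kernel smooth of its residuals, and the mixing parameter is fixed instead of being estimated from the data as in the cited method; the calibration data are simulated, not pressure-transducer measurements.

```python
# Minimal sketch of Model Robust Regression for calibration: a parametric fit
# augmented by a fraction lambda of a kernel (locally weighted) fit to its
# residuals. Illustrative only; lambda is fixed here instead of being estimated.
import numpy as np

rng = np.random.default_rng(13)
pressure = np.linspace(0, 10, 80)                        # known calibration inputs
voltage = 0.5 + 0.8 * pressure + 0.15 * np.sin(1.5 * pressure) \
          + rng.normal(0, 0.03, pressure.size)           # transducer response

# Step 1: parametric model (straight-line calibration)
beta = np.polyfit(pressure, voltage, deg=1)
parametric = np.polyval(beta, pressure)
residuals = voltage - parametric

# Step 2: nonparametric (Nadaraya-Watson kernel) fit to the residuals
def kernel_smooth(x0, x, r, bandwidth=0.6):
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ r) / w.sum(axis=1)

lam = 0.7                                                # mixing parameter in [0, 1]
mrr_fit = parametric + lam * kernel_smooth(pressure, pressure, residuals)

rmse_param = np.sqrt(np.mean((voltage - parametric) ** 2))
rmse_mrr = np.sqrt(np.mean((voltage - mrr_fit) ** 2))
print(f"parametric RMSE = {rmse_param:.4f}, MRR RMSE = {rmse_mrr:.4f}")
```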

  10. Open Problems Related to the Hurwitz Stability of Polynomials Segments

    Directory of Open Access Journals (Sweden)

    Baltazar Aguirre-Hernández

    2018-01-01

    Full Text Available In the framework of robust stability analysis of linear systems, the development of techniques and methods that help to obtain necessary and sufficient conditions to determine the stability of convex combinations of polynomials is paramount. In this paper, knowing that the set of Hurwitz polynomials is not a convex set, a brief overview of some results and open problems concerning the stability of convex combinations of Hurwitz polynomials is provided.

  11. Supersymmetric quantum mechanics: Engineered hierarchies of integrable potentials and related orthogonal polynomials

    International Nuclear Information System (INIS)

    Balondo Iyela, Daddy; Govaerts, Jan; Hounkonnou, M. Norbert

    2013-01-01

    Within the context of supersymmetric quantum mechanics and its related hierarchies of integrable quantum Hamiltonians and potentials, a general programme is outlined and applied to its first two simplest illustrations. Going beyond the usual restriction of shape invariance for intertwined potentials, it is suggested to require a similar relation for Hamiltonians in the hierarchy separated by an arbitrary number of levels, N. By requiring further that these two Hamiltonians be in fact identical up to an overall shift in energy, a periodic structure is installed in the hierarchy which should allow for its resolution. Specific classes of orthogonal polynomials characteristic of such periodic hierarchies are thereby generated, while the methods of supersymmetric quantum mechanics then lead to generalised Rodrigues formulae and recursion relations for such polynomials. The approach also offers the practical prospect of quantum modelling through the engineering of quantum potentials from experimental energy spectra. In this paper, these ideas are presented and solved explicitly for the cases N= 1 and N= 2. The latter case is related to the generalised Laguerre polynomials, for which indeed new results are thereby obtained. In the context of dressing chains and deformed polynomial Heisenberg algebras, some partial results for N⩾ 3 also exist in the literature, which should be relevant to a complete study of the N⩾ 3 general periodic hierarchies

  12. Supersymmetric quantum mechanics: Engineered hierarchies of integrable potentials and related orthogonal polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Balondo Iyela, Daddy [International Chair in Mathematical Physics and Applications (ICMPA–UNESCO Chair), University of Abomey–Calavi, 072 B. P. 50 Cotonou, Republic of Benin (Benin); Centre for Cosmology, Particle Physics and Phenomenology (CP3), Institut de Recherche en Mathématique et Physique (IRMP), Université catholique de Louvain U.C.L., 2, Chemin du Cyclotron, B-1348 Louvain-la-Neuve (Belgium); Département de Physique, Université de Kinshasa (UNIKIN), B.P. 190 Kinshasa XI, Democratic Republic of Congo (Congo, The Democratic Republic of the); Govaerts, Jan [International Chair in Mathematical Physics and Applications (ICMPA–UNESCO Chair), University of Abomey–Calavi, 072 B. P. 50 Cotonou, Republic of Benin (Benin); Centre for Cosmology, Particle Physics and Phenomenology (CP3), Institut de Recherche en Mathématique et Physique (IRMP), Université catholique de Louvain U.C.L., 2, Chemin du Cyclotron, B-1348 Louvain-la-Neuve (Belgium); Hounkonnou, M. Norbert [International Chair in Mathematical Physics and Applications (ICMPA–UNESCO Chair), University of Abomey–Calavi, 072 B. P. 50 Cotonou, Republic of Benin (Benin)

    2013-09-15

    Within the context of supersymmetric quantum mechanics and its related hierarchies of integrable quantum Hamiltonians and potentials, a general programme is outlined and applied to its first two simplest illustrations. Going beyond the usual restriction of shape invariance for intertwined potentials, it is suggested to require a similar relation for Hamiltonians in the hierarchy separated by an arbitrary number of levels, N. By requiring further that these two Hamiltonians be in fact identical up to an overall shift in energy, a periodic structure is installed in the hierarchy which should allow for its resolution. Specific classes of orthogonal polynomials characteristic of such periodic hierarchies are thereby generated, while the methods of supersymmetric quantum mechanics then lead to generalised Rodrigues formulae and recursion relations for such polynomials. The approach also offers the practical prospect of quantum modelling through the engineering of quantum potentials from experimental energy spectra. In this paper, these ideas are presented and solved explicitly for the cases N= 1 and N= 2. The latter case is related to the generalised Laguerre polynomials, for which indeed new results are thereby obtained. In the context of dressing chains and deformed polynomial Heisenberg algebras, some partial results for N⩾ 3 also exist in the literature, which should be relevant to a complete study of the N⩾ 3 general periodic hierarchies.

  13. From Jack to Double Jack Polynomials via the Supersymmetric Bridge

    Science.gov (United States)

    Lapointe, Luc; Mathieu, Pierre

    2015-07-01

    The Calogero-Sutherland model occurs in a large number of physical contexts, either directly or via its eigenfunctions, the Jack polynomials. The supersymmetric counterpart of this model, although much less ubiquitous, has an equally rich structure. In particular, its eigenfunctions, the Jack superpolynomials, appear to share the very same remarkable combinatorial and structural properties as their non-supersymmetric version. These super-functions are parametrized by superpartitions with fixed bosonic and fermionic degrees. Now, a truly amazing feature pops out when the fermionic degree is sufficiently large: the Jack superpolynomials stabilize and factorize. Their stability is with respect to their expansion in terms of an elementary basis where, in the stable sector, the expansion coefficients become independent of the fermionic degree. Their factorization is seen when the fermionic variables are stripped off in a suitable way which results in a product of two ordinary Jack polynomials (somewhat modified by plethystic transformations), dubbed the double Jack polynomials. Here, in addition to spelling out these results, which were first obtained in the context of Macdonal superpolynomials, we provide a heuristic derivation of the Jack superpolynomial case by performing simple manipulations on the supersymmetric eigen-operators, rendering them independent of the number of particles and of the fermionic degree. In addition, we work out the expression of the Hamiltonian which characterizes the double Jacks. This Hamiltonian, which defines a new integrable system, involves not only the expected Calogero-Sutherland pieces but also combinations of the generators of an underlying affine {widehat{sl}_2} algebra.

  14. The computation of bond percolation critical polynomials by the deletion–contraction algorithm

    International Nuclear Information System (INIS)

    Scullard, Christian R

    2012-01-01

    Although every exactly known bond percolation critical threshold is the root in [0,1] of a lattice-dependent polynomial, it has recently been shown that the notion of a critical polynomial can be extended to any periodic lattice. The polynomial is computed on a finite subgraph, called the base, of an infinite lattice. For any problem with exactly known solution, the prediction of the bond threshold is always correct for any base containing an arbitrary number of unit cells. For unsolved problems, the polynomial is referred to as the generalized critical polynomial and provides an approximation that becomes more accurate with increasing number of bonds in the base, appearing to approach the exact answer. The polynomials are computed using the deletion–contraction algorithm, which quickly becomes intractable by hand for more than about 18 bonds. Here, I present generalized critical polynomials calculated with a computer program for bases of up to 36 bonds for all the unsolved Archimedean lattices, except the kagome lattice, which was considered in an earlier work. The polynomial estimates are generally within 10^{-5}–10^{-7} of the numerical values, but the prediction for the (4,8^2) lattice, though not exact, is not ruled out by simulations. (paper)
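    The deletion–contraction recursion itself is simple to state. The sketch below is only an illustration of that generic recursion, not Scullard's critical-polynomial computation (which additionally requires the periodic embedding of the base): it applies the same delete/contract steps to a small multigraph to compute the all-terminal reliability polynomial in the bond probability p.

```python
# Illustrative only: generic deletion-contraction recursion, applied here to the
# all-terminal reliability polynomial R(G, p) of a small multigraph (each bond
# open independently with probability p), not to Scullard's critical polynomial.
from sympy import symbols, expand

p = symbols('p')

def reliability(edges, n_vertices):
    """edges: list of (u, v) pairs on vertices 0..n_vertices-1 (multigraph)."""
    if not edges:                        # base case: no edges left
        return 1 if n_vertices == 1 else 0
    u, v = edges[0]
    rest = edges[1:]
    if u == v:                           # a self-loop never affects connectivity
        return reliability(rest, n_vertices)
    # Delete the edge (bond closed, probability 1 - p).
    deleted = reliability(rest, n_vertices)
    # Contract the edge (bond open, probability p): merge v into u, relabel vertices.
    def merge(w):
        w = u if w == v else w
        return w - 1 if w > v else w
    contracted = reliability([(merge(a), merge(b)) for a, b in rest], n_vertices - 1)
    return expand(p * contracted + (1 - p) * deleted)

# Example: a triangle (3-cycle); its reliability polynomial is 3*p**2 - 2*p**3.
print(reliability([(0, 1), (1, 2), (2, 0)], 3))
```

The branching of this recursion doubles the work with every edge, which is why hand computation becomes impractical beyond roughly 18 bonds, as noted above.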

  15. Solving the interval type-2 fuzzy polynomial equation using the ranking method

    Science.gov (United States)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim

    2014-07-01

    Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and social sciences. Several methods have been developed to solve these equations. In this study we are interested in introducing the interval type-2 fuzzy polynomial equation and solving it using the ranking method of fuzzy numbers. The ranking method was first proposed to find the real roots of fuzzy polynomial equations. Therefore, the ranking method is applied here to find the real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp interval type-2 fuzzy polynomial equations. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach with a numerical example.

  16. The art of regression modeling in road safety

    CERN Document Server

    Hauer, Ezra

    2015-01-01

    This unique book explains how to fashion useful regression models from commonly available data to erect models essential for evidence-based road safety management and research. Composed from techniques and best practices presented over many years of lectures and workshops, The Art of Regression Modeling in Road Safety illustrates that fruitful modeling cannot be done without substantive knowledge about the modeled phenomenon. Class-tested in courses and workshops across North America, the book is ideal for professionals, researchers, university professors, and graduate students with an interest in, or responsibilities related to, road safety. This book also: · Presents for the first time a powerful analytical tool for road safety researchers and practitioners · Includes problems and solutions in each chapter as well as data and spreadsheets for running models and PowerPoint presentation slides · Features pedagogy well-suited for graduate courses and workshops including problems, solutions, and PowerPoint p...

  17. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

    Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by strong non-linear learning capability of support vector regression (SVR, this paper presents a SVR model hybridized with the empirical mode decomposition (EMD method and auto regression (AR for electric load forecasting. The electric load data of the New South Wales (Australia market are employed for comparing the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.
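    As a rough sketch of how such a hybrid could be wired together (the EMD step is assumed to be supplied by an implementation such as the PyEMD package, and the lag length, kernel and hyperparameters below are illustrative, not the authors' settings):

```python
# A minimal sketch, not the authors' exact pipeline: each decomposed component of
# the load series is modelled by SVR on lagged values, and the final residue by a
# simple least-squares autoregression; the one-step forecasts are then summed.
import numpy as np
from sklearn.svm import SVR

def lagged(series, n_lags):
    """Build a lag matrix X and targets y from a 1-D numpy array."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

def forecast_component(series, n_lags=24):
    """Fit SVR on lagged values of one component and predict one step ahead."""
    X, y = lagged(series, n_lags)
    model = SVR(kernel='rbf', C=10.0, epsilon=0.1).fit(X, y)
    return model.predict(series[-n_lags:].reshape(1, -1))[0]

def hybrid_forecast(components, residue, n_lags=24):
    """components: list of 1-D numpy arrays (e.g. IMFs from an EMD package)."""
    # SVR forecast for each IMF-like component ...
    total = sum(forecast_component(c, n_lags) for c in components)
    # ... plus an AR(n_lags) forecast of the residue, fitted by least squares.
    X, y = lagged(residue, n_lags)
    coefs, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)
    total += coefs[0] + residue[-n_lags:] @ coefs[1:]
    return total
```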

  18. A high-order q-difference equation for q-Hahn multiple orthogonal polynomials

    DEFF Research Database (Denmark)

    Arvesú, J.; Esposito, Chiara

    2012-01-01

    A high-order linear q-difference equation with polynomial coefficients having q-Hahn multiple orthogonal polynomials as eigenfunctions is given. The order of the equation coincides with the number of orthogonality conditions that these polynomials satisfy. Some limiting situations when are studie....... Indeed, the difference equation for Hahn multiple orthogonal polynomials given in Lee [J. Approx. Theory (2007), ), doi: 10.1016/j.jat.2007.06.002] is obtained as a limiting case....

  19. Robust geographically weighted regression of modeling the Air Polluter Standard Index (APSI)

    Science.gov (United States)

    Warsito, Budi; Yasin, Hasbi; Ispriyanti, Dwi; Hoyyi, Abdul

    2018-05-01

    The Geographically Weighted Regression (GWR) model has been widely applied in many practical fields for exploring the spatial heterogeneity of a regression model. However, this method is inherently not robust to outliers. Outliers commonly exist in data sets and may lead to a distorted estimate of the underlying regression model. One solution for handling outliers in the regression model is to use robust models, which leads to the Robust Geographically Weighted Regression (RGWR) model. This research aims to aid the government in the policy-making process related to air pollution mitigation by developing a standard index model for air pollution (the Air Polluter Standard Index - APSI) based on the RGWR approach. We also consider seven variables that are directly related to the air pollution level: the traffic velocity, the population density, the business center aspect, the air humidity, the wind velocity, the air temperature, and the area of the urban forest. The best model is determined by the smallest AIC value. There are significant differences between the global regression and RGWR in this case, but the basic GWR using the Gaussian kernel is the best model for the APSI because it has the smallest AIC.
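    A minimal sketch of the core GWR computation with a Gaussian spatial kernel is given below; the arrays X, y and coords, the fixed bandwidth, and the omission of the robust residual down-weighting step (the RGWR part) are all simplifying assumptions.

```python
# A minimal sketch of geographically weighted regression with a Gaussian kernel.
# X: (n, p) design matrix including an intercept column, y: (n,) responses,
# coords: (n, 2) site coordinates; bandwidth is assumed fixed here.
import numpy as np

def gwr_gaussian(X, y, coords, bandwidth):
    """Return one local coefficient vector per observation site."""
    n = len(y)
    betas = np.empty((n, X.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)        # distances to site i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)               # Gaussian kernel weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted least squares
    return betas
```

In practice the bandwidth would itself be chosen by cross-validation or an AIC criterion, as in the model comparison described above.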

  20. On the Lorentz degree of a product of polynomials

    KAUST Repository

    Ait-Haddou, Rachid

    2015-01-01

    In this note, we negatively answer two questions of T. Erdélyi (1991, 2010) on possible lower bounds on the Lorentz degree of product of two polynomials. We show that the correctness of one question for degree two polynomials is a direct consequence

  1. Generalized Freud's equation and level densities with polynomial potential

    Science.gov (United States)

    Boobna, Akshat; Ghosh, Saugata

    2013-08-01

    We study orthogonal polynomials with weight $\exp[-NV(x)]$, where $V(x)=\sum_{k=1}^{d}a_{2k}x^{2k}/2k$ is a polynomial of order 2d. We derive the generalised Freud's equations for $d=3$, 4 and 5 and using this obtain $R_{\mu}=h_{\mu}/h_{\mu-1}$, where $h_{\mu}$ is the normalization constant for the corresponding orthogonal polynomials. Moments of the density functions, expressed in terms of $R_{\mu}$, are obtained using Freud's equation and using this, explicit results of level densities as $N\rightarrow\infty$ are derived.
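    As a quick numerical illustration of the quantity $R_{\mu}=h_{\mu}/h_{\mu-1}$ (not of the Freud-equation derivation itself), the norms $h_{\mu}$ of the monic orthogonal polynomials can be obtained from Hankel determinants of the moments of the weight; the parameter values below are arbitrary and the weight is the $d=2$ case of the potential above.

```python
# Illustrative check of R_mu = h_mu / h_{mu-1}: compute moments of exp(-N V(x)),
# build Hankel determinants D_mu, and use h_mu = D_{mu+1} / D_mu (monic case).
# The values of N, a2, a4 are arbitrary choices for this sketch.
import numpy as np
from scipy.integrate import quad

N, a2, a4 = 5.0, 1.0, 0.5
V = lambda x: a2 * x**2 / 2 + a4 * x**4 / 4
w = lambda x: np.exp(-N * V(x))

moments = [quad(lambda x, j=j: x**j * w(x), -np.inf, np.inf)[0] for j in range(12)]

def hankel_det(k):
    """D_k = det[m_{i+j}], i, j = 0..k-1, with the convention D_0 = 1."""
    if k == 0:
        return 1.0
    return np.linalg.det(np.array([[moments[i + j] for j in range(k)] for i in range(k)]))

h = [hankel_det(k + 1) / hankel_det(k) for k in range(5)]   # norms h_0 .. h_4
R = [h[mu] / h[mu - 1] for mu in range(1, 5)]               # R_1 .. R_4
print(R)
```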

  2. Study (Prediction of Main Pipes Break Rates in Water Distribution Systems Using Intelligent and Regression Methods

    Directory of Open Access Journals (Sweden)

    Massoud Tabesh

    2011-07-01

    Full Text Available Optimum operation of water distribution networks is one of the priorities of sustainable development of water resources, considering the issues of increasing efficiency and decreasing water losses. One of the key subjects in the optimum operational management of water distribution systems is preparing rehabilitation and replacement schemes, predicting pipe break rates and evaluating their reliability. Several approaches have been presented in recent years for the prediction of pipe failure rates, each of which requires particular data sets. Deterministic models based on age, deterministic multi-variable models and stochastic group modeling are examples of the solutions which relate pipe break rates to parameters like age, material and diameter. In this paper, besides the mentioned parameters, further factors such as pipe depth and hydraulic pressure are considered as well. Then, using the multi-variable regression method, intelligent approaches (artificial neural network and neuro-fuzzy models) and the Evolutionary Polynomial Regression (EPR) method, pipe burst rates are predicted. To evaluate the results of the different approaches, a case study is carried out on a part of the Mashhad water distribution network. The results show the capability and advantages of the ANN and EPR methods in predicting pipe break rates, in comparison with the neuro-fuzzy and multi-variable regression methods.
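    A hedged sketch of the simplest of the compared approaches, a polynomial regression of break rate on pipe attributes, is shown below; the feature names and synthetic data are placeholders, and the actual EPR method evolves the polynomial structure rather than fixing the degree in advance.

```python
# A minimal stand-in for the multi-variable / polynomial regression baseline:
# ordinary least squares on quadratic polynomial features of pipe attributes.
# Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
# columns: age (yr), diameter (mm), depth (m), hydraulic pressure (bar)
X = np.column_stack([rng.uniform(5, 50, 300), rng.uniform(80, 400, 300),
                     rng.uniform(0.8, 2.5, 300), rng.uniform(2, 8, 300)])
break_rate = 0.02 * X[:, 0] - 0.001 * X[:, 1] + rng.normal(0, 0.1, 300)  # synthetic

model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression()).fit(X, break_rate)
print(model.predict(X[:3]))   # predicted break rates (illustrative units)
```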

  3. Zeros and logarithmic asymptotics of Sobolev orthogonal polynomials for exponential weights

    Science.gov (United States)

    Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.

    2009-12-01

    We obtain the (contracted) weak zero asymptotics for orthogonal polynomials with respect to Sobolev inner products with exponential weights in the real semiaxis, of the form , with γ>0, which include as particular cases the counterparts of the so-called Freud (i.e., when φ has a polynomial growth at infinity) and Erdős (when φ grows faster than any polynomial at infinity) weights. In addition, the boundedness of the distance of the zeros of these Sobolev orthogonal polynomials to the convex hull of the support and, as a consequence, a result on logarithmic asymptotics are derived.

  4. Mandibulary dental arch form differences between level four polynomial method and pentamorphic pattern for normal occlusion sample

    Directory of Open Access Journals (Sweden)

    Y. Yuliana

    2011-07-01

    Full Text Available The aim of an orthodontic treatment is to achieve aesthetics, health of the teeth and surrounding tissues, a functional occlusal relationship, and stability. The success of an orthodontic treatment is influenced by many factors, such as the diagnosis and the treatment plan. In order to establish a diagnosis and a treatment plan, the medical record, clinical examination, radiographic examination, extra-oral and intra-oral photos, as well as study model analysis are needed. The purpose of this study was to evaluate the differences in dental arch form between the level four polynomial method and the pentamorphic arch form, and to determine which one is more suitable for a normal occlusion sample. This analytic comparative study was conducted at the Faculty of Dentistry, Universitas Padjadjaran, on 13 models by comparing the dental arch form obtained with the level four polynomial method (based on mathematical calculations) and the pentamorphic arch pattern, with mandibular normal occlusion as a control. The results were tested using Student's t-test. The results indicate a significant difference, both for the level four polynomial method and for the pentamorphic arch form, when compared with the mandibular normal occlusion dental arch form. The level four polynomial fits better compared to the pentamorphic arch form.
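    A minimal sketch of the "level four polynomial" idea, fitting a fourth-order polynomial to landmark coordinates with numpy, is shown below; the coordinates are invented for illustration only.

```python
# A minimal sketch of the level four polynomial arch-form fit:
# y = a0 + a1*x + a2*x^2 + a3*x^3 + a4*x^4 fitted to hypothetical (x, y)
# landmark coordinates digitised from a dental cast (y along the arch midline).
import numpy as np

# Hypothetical landmark coordinates in mm, roughly symmetric about x = 0.
x = np.array([-25.0, -18.0, -10.0, -4.0, 0.0, 4.0, 10.0, 18.0, 25.0])
y = np.array([  0.0,  12.0,  20.0, 25.0, 26.0, 25.0, 20.0, 12.0, 0.0])

coeffs = np.polyfit(x, y, deg=4)   # returns a4, a3, a2, a1, a0
arch = np.poly1d(coeffs)           # evaluable arch-form curve
print(arch(0.0))                   # predicted arch depth at the midline
```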

  5. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause-specific hazards. We suggest a class of flexible regression models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data.

  6. A comparison of high-order polynomial and wave-based methods for Helmholtz problems

    Science.gov (United States)

    Lieu, Alice; Gabard, Gwénaël; Bériot, Hadrien

    2016-09-01

    The application of computational modelling to wave propagation problems is hindered by the dispersion error introduced by the discretisation. Two common strategies to address this issue are to use high-order polynomial shape functions (e.g. hp-FEM), or to use physics-based, or Trefftz, methods where the shape functions are local solutions of the problem (typically plane waves). Both strategies have been actively developed over the past decades and both have demonstrated their benefits compared to conventional finite-element methods, but they have yet to be compared. In this paper a high-order polynomial method (p-FEM with Lobatto polynomials) and the wave-based discontinuous Galerkin method are compared for two-dimensional Helmholtz problems. A number of different benchmark problems are used to perform a detailed and systematic assessment of the relative merits of these two methods in terms of interpolation properties, performance and conditioning. It is generally assumed that a wave-based method naturally provides better accuracy compared to polynomial methods since the plane waves or Bessel functions used in these methods are exact solutions of the Helmholtz equation. Results indicate that this expectation does not necessarily translate into a clear benefit, and that the differences in performance, accuracy and conditioning are more nuanced than generally assumed. The high-order polynomial method can in fact deliver comparable, and in some cases superior, performance compared to the wave-based DGM. In addition to benchmarking the intrinsic computational performance of these methods, a number of practical issues associated with realistic applications are also discussed.

  7. Some Results on the Independence Polynomial of Unicyclic Graphs

    Directory of Open Access Journals (Sweden)

    Oboudi Mohammad Reza

    2018-05-01

    Full Text Available Let G be a simple graph on n vertices. An independent set in a graph is a set of pairwise non-adjacent vertices. The independence polynomial of G is the polynomial $I(G,x)=\sum_{k=0}^{n} s(G,k)x^{k}$, where s(G,k) is the number of independent sets of G with k vertices.
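    For small graphs the definition above can be evaluated directly; the brute-force sketch below (exponential in the number of vertices, so purely illustrative) computes $I(G,x)$ for a 4-cycle, which is itself a unicyclic graph.

```python
# Brute-force illustration of the independence polynomial I(G, x) = sum_k s(G, k) x^k,
# enumerating all vertex subsets and keeping the independent ones (small graphs only).
from itertools import combinations
from sympy import symbols, Poly

def independence_polynomial(vertices, edges):
    x = symbols('x')
    edge_set = {frozenset(e) for e in edges}
    poly = 0
    for k in range(len(vertices) + 1):
        s_k = sum(
            1 for S in combinations(vertices, k)
            if all(frozenset(p) not in edge_set for p in combinations(S, 2))
        )
        poly += s_k * x ** k
    return Poly(poly, x)

# Example: the 4-cycle C4 (a unicyclic graph) has I(C4, x) = 1 + 4x + 2x^2.
print(independence_polynomial([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))
```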

  8. Limit cycles bifurcating from the periodic annulus of cubic homogeneous polynomial centers

    Directory of Open Access Journals (Sweden)

    Jaume Llibre

    2015-10-01

    Full Text Available We obtain an explicit polynomial whose simple positive real roots provide the limit cycles which bifurcate from the periodic orbits of any cubic homogeneous polynomial center when it is perturbed inside the class of all polynomial differential systems of degree n.

  9. Polynomial Poisson algebras: Gel'fand-Kirillov problem and Poisson spectra

    OpenAIRE

    Lecoutre, César

    2014-01-01

    We study the fields of fractions and the Poisson spectra of polynomial Poisson algebras. First we investigate a Poisson birational equivalence problem for polynomial Poisson algebras over a field of arbitrary characteristic. Namely, the quadratic Poisson Gel'fand-Kirillov problem asks whether the field of fractions of a Poisson algebra is isomorphic to the field of fractions of a Poisson affine space, i.e. a polynomial algebra such that the Poisson bracket of two generators is equal to...

  10. Exact Polynomial Eigenmodes for Homogeneous Spherical 3-Manifolds

    OpenAIRE

    Weeks, Jeffrey R.

    2005-01-01

    Observational data hints at a finite universe, with spherical manifolds such as the Poincare dodecahedral space tentatively providing the best fit. Simulating the physics of a model universe requires knowing the eigenmodes of the Laplace operator on the space. The present article provides explicit polynomial eigenmodes for all globally homogeneous 3-manifolds: the Poincare dodecahedral space S3/I*, the binary octahedral space S3/O*, the binary tetrahedral space S3/T*, the prism manifolds S3/D...

  11. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
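    The count-regression component can be illustrated with a plain Poisson GLM baseline; this is not the authors' Bayesian max-margin model, but it shows the kind of count data the approach targets. The features and defect counts below are synthetic placeholders.

```python
# A baseline sketch only (not the paper's max-margin Bayesian model): a regularised
# Poisson GLM regressing synthetic defect counts on hypothetical software metrics.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # e.g. size and complexity metrics
y = rng.poisson(np.exp(0.3 * X[:, 0] + 0.1))     # synthetic defect counts

model = PoissonRegressor(alpha=1.0).fit(X, y)    # L2-regularised log-linear count model
print(model.predict(X[:3]))                      # expected defect counts per module
```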

  12. Modelling fourier regression for time series data- a case study: modelling inflation in foods sector in Indonesia

    Science.gov (United States)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

    Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression is very strict in its assumptions, whereas the nonparametric regression model needs no model assumptions. Time series data are observations of a variable recorded over time, so if time series data are to be modeled by regression, the response and predictor variables must be determined first. The response variable in a time series is the observation at time t (y_t), while the predictor variables are its significant lags. In nonparametric regression modeling, one developing approach is the Fourier series approach. One of its advantages is the ability to handle data with a trigonometric pattern. Modeling with the Fourier series requires the parameter K, which can be determined using the Generalized Cross Validation (GCV) method. Modeling inflation for the transportation, communication and financial services sector with the Fourier series yields an optimal K of 120 parameters with an R-square of 99%, whereas modeling it by multiple linear regression yields an R-square of 90%.
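    A compact sketch of Fourier-series regression with GCV selection of K is given below; the design matrix (intercept, trend, K sine/cosine pairs) and the grid of candidate K values are assumptions made for illustration, not the authors' exact specification.

```python
# A minimal sketch: regress y on a Fourier basis in the predictor t (for example a
# rescaled significant lag), choosing the number of harmonics K by GCV.
import numpy as np

def fourier_design(t, K):
    """Design matrix: intercept, linear trend, and K sine/cosine pairs."""
    cols = [np.ones_like(t), t]
    for k in range(1, K + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    return np.column_stack(cols)

def choose_K_by_gcv(t, y, K_grid):
    """Return the K in K_grid with the smallest Generalized Cross Validation score."""
    best = None
    for K in K_grid:
        X = fourier_design(t, K)
        H = X @ np.linalg.pinv(X)                       # hat (smoother) matrix
        resid = y - H @ y
        n = len(y)
        gcv = n * (resid @ resid) / (n - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, K)
    return best[1]
```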

  13. On an Inequality Concerning the Polar Derivative of a Polynomial

    Indian Academy of Sciences (India)

    Abstract. In this paper, we present a correct proof of an inequality concerning the polar derivative of a polynomial with restricted zeros. We also extend Zygmund's inequality to the polar derivative of a polynomial.

  14. Bayesian Inference of a Multivariate Regression Model

    Directory of Open Access Journals (Sweden)

    Marick S. Sinay

    2014-01-01

    Full Text Available We explore Bayesian inference of a multivariate linear regression model with the use of a flexible prior for the covariance structure. The commonly adopted Bayesian setup involves the conjugate prior: a multivariate normal distribution for the regression coefficients and an inverse Wishart specification for the covariance matrix. Here we depart from this approach and propose a novel Bayesian estimator for the covariance. A multivariate normal prior for the unique elements of the matrix logarithm of the covariance matrix is considered. Such a structure allows for a richer class of prior distributions for the covariance, with respect to strength of beliefs in prior location hyperparameters, as well as the added ability to model potential correlation amongst the covariance structure. The posterior moments of all relevant parameters of interest are calculated based upon numerical results via a Markov chain Monte Carlo procedure. The Metropolis-Hastings-within-Gibbs algorithm is invoked to account for the construction of a proposal density that closely matches the shape of the target posterior distribution. As an application of the proposed technique, we investigate a multiple regression based upon the 1980 High School and Beyond Survey.

  15. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Full Text Available Recently, the regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (weight matrix of image pixels) to enhance the classification performance. GRR uses the generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.

  16. Prediction of zeolite-cement-sand unconfined compressive strength using polynomial neural network

    Science.gov (United States)

    MolaAbasi, H.; Shooshpasha, I.

    2016-04-01

    The improvement of local soils with cement and zeolite can provide great benefits, including strengthening slopes in slope stability problems, stabilizing problematic soils and preventing soil liquefaction. Recently, dosage methodologies have been developed for improved soils based on a rational criterion, as exists in concrete technology. Numerous earlier studies have shown the possibility of relating Unconfined Compressive Strength (UCS) and cemented sand (CS) parameters (voids/cement ratio) through power-function fits. Since the existing equations are incapable of estimating the UCS of zeolite-cemented sand mixtures (ZCS) well, artificial intelligence methods are used to forecast it. A polynomial-type neural network is applied to estimate the UCS from more simply determined index properties such as zeolite and cement content, porosity and curing time. In order to assess the merits of the proposed approach, a total of 216 unconfined compression tests were carried out. A comparison is made between the experimentally measured UCS and the predictions in order to evaluate the performance of the method. The results demonstrate that the generalized polynomial-type neural network has a great ability to predict the UCS. Finally, a sensitivity analysis of the polynomial model is performed to study the influence of the input parameters on the model output. The sensitivity analysis reveals that cement and zeolite content have a significant influence on the predicted UCS.
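    Polynomial-type neural networks are commonly realised as GMDH-style networks; under that assumption, a minimal sketch of a single layer (each pair of inputs feeds a quadratic unit fitted by least squares, and the best units survive to feed the next layer) is shown below. The inputs here would be zeolite content, cement content, porosity and curing time, and the UCS would be the target.

```python
# A minimal sketch of one layer of a GMDH-style polynomial network (an assumed
# reading of "polynomial-type neural network"): every pair of inputs feeds a
# quadratic unit fitted by least squares; the best units by validation RSS survive.
import numpy as np
from itertools import combinations

def quad_features(u, v):
    return np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=3):
    units = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        A = quad_features(X_tr[:, i], X_tr[:, j])
        coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)          # fit one quadratic unit
        pred_va = quad_features(X_va[:, i], X_va[:, j]) @ coef
        rss = float(np.sum((y_va - pred_va) ** 2))               # validation error
        units.append((rss, i, j, coef))
    units.sort(key=lambda u: u[0])
    return units[:keep]       # surviving units become inputs to the next layer
```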

  17. Classification of complex polynomial vector fields in one complex variable

    DEFF Research Database (Denmark)

    Branner, Bodil; Dias, Kealey

    2010-01-01

    This paper classifies the global structure of monic and centred one-variable complex polynomial vector fields. The classification is achieved by means of combinatorial and analytic data. More specifically, given a polynomial vector field, we construct a combinatorial invariant, describing the topology, and a set of analytic invariants, describing the geometry. Conversely, given admissible combinatorial and analytic data sets, we show using surgery the existence of a unique monic and centred polynomial vector field realizing the given invariants. This is the content of the Structure Theorem, the main result of the paper. This result is an extension and refinement of Douady et al. (Champs de vecteurs polynomiaux sur C. Unpublished manuscript) classification of the structurally stable polynomial vector fields. We further review some general concepts for completeness and show that vector fields...

  18. Random polynomials and expected complexity of bisection methods for real solving

    DEFF Research Database (Denmark)

    Emiris, Ioannis Z.; Galligo, André; Tsigaridas, Elias

    2010-01-01

    , and by Edelman and Kostlan in order to estimate the real root separation of degree d polynomials with i.i.d. coefficients that follow two zero-mean normal distributions: for SO(2) polynomials, the i-th coefficient has variance $\binom{d}{i}$, whereas for Weyl polynomials its variance is 1/i!. By applying results from.... The second part of the paper shows that the expected number of real roots of a degree d polynomial in the Bernstein basis is √2d ± O(1), when the coefficients are i.i.d. variables with moderate standard deviation. Our paper concludes with experimental results which corroborate our analysis.

  19. O(N) symmetries, sum rules for generalized Hermite polynomials and squeezed states

    International Nuclear Information System (INIS)

    Daboul, Jamil; Mizrahi, Salomon S

    2005-01-01

    Quantum optics has been dealing with coherent states, squeezed states and many other non-classical states. The associated mathematical framework makes use of special functions as Hermite polynomials, Laguerre polynomials and others. In this connection we here present some formal results that follow directly from the group O(N) of complex transformations. Motivated by the squeezed states structure, we introduce the generalized Hermite polynomials (GHP), which include as particular cases, the Hermite polynomials as well as the heat polynomials. Using generalized raising operators, we derive new sum rules for the GHP, which are covariant under O(N) transformations. The GHP and the associated sum rules become useful for evaluating Wigner functions in a straightforward manner. As a byproduct, we use one of these sum rules, on the operator level, to obtain raising and lowering operators for the Laguerre polynomials and show that they generate an sl(2, R) ≅ su(1, 1) algebra

  20. Euler polynomials and identities for non-commutative operators

    Science.gov (United States)

    De Angelis, Valerio; Vignat, Christophe

    2015-12-01

    Three kinds of identities involving non-commuting operators and Euler and Bernoulli polynomials are studied. The first identity, as given by Bender and Bettencourt [Phys. Rev. D 54(12), 7710-7723 (1996)], expresses the nested commutator of the Hamiltonian and momentum operators as the commutator of the momentum and the shifted Euler polynomial of the Hamiltonian. The second one, by Pain [J. Phys. A: Math. Theor. 46, 035304 (2013)], links the commutators and anti-commutators of the monomials of the position and momentum operators. The third appears in a work by Figueira de Morisson and Fring [J. Phys. A: Math. Gen. 39, 9269 (2006)] in the context of non-Hermitian Hamiltonian systems. In each case, we provide several proofs and extensions of these identities that highlight the role of Euler and Bernoulli polynomials.