WorldWideScience

Sample records for weighted poisson regression

  1. PEMODELAN ANGKA KEMATIAN BAYI DENGAN PENDEKATAN GEOGRAPHICALLY WEIGHTED POISSON REGRESSION DI PROVINSI BALI

    Directory of Open Access Journals (Sweden)

    M ARRIE KUNILASARI ELYNA

    2012-09-01

    Full Text Available In this study, Geographically Weighted Poisson Regression (GWPR) is used as a statistical method for analysing data while accounting for spatial factors. GWPR is a local form of Poisson regression that takes the location of observations into account, under the assumption that the data are Poisson distributed. The factors used in this study are the number of health facilities and midwives, the average duration of breastfeeding, the percentage of deliveries performed without medical assistance, and the average length of schooling of married women. The results show that the factor which significantly influences the number of infant deaths across all districts/municipalities in Bali is the average length of schooling of married women. Hypothesis testing further indicates no significant difference between the Poisson regression model and GWPR in Bali.
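
    The core idea described above can be sketched as a kernel-weighted Poisson fit at each location. Below is a minimal illustration, assuming a Gaussian distance kernel, a hand-picked bandwidth, and simulated data; it is not the authors' implementation.

```python
import numpy as np

def gwpr_fit_at(u, coords, X, y, bandwidth):
    """Kernel-weighted Poisson regression (log link) at location u, fitted by IRLS."""
    d = np.linalg.norm(coords - u, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)       # geographic kernel weights

    beta = np.zeros(X.shape[1])
    for _ in range(50):                            # IRLS iterations
        mu = np.exp(X @ beta)
        W = w * mu                                 # working weights
        z = X @ beta + (y - mu) / mu               # working response
        XtW = X.T * W
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < 1e-8:
            return beta_new
        beta = beta_new
    return beta

# toy data with a spatially varying covariate effect (all values hypothetical)
rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1])
y = rng.poisson(np.exp(0.5 + (0.3 + 0.05 * coords[:, 0]) * x1))

local_beta = np.array([gwpr_fit_at(c, coords, X, y, bandwidth=2.0) for c in coords])
print(local_beta[:5])                              # local intercept and slope per location
```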

  2. PEMODELAN JUMLAH ANAK PUTUS SEKOLAH DI PROVINSI BALI DENGAN PENDEKATAN SEMI-PARAMETRIC GEOGRAPHICALLY WEIGHTED POISSON REGRESSION

    Directory of Open Access Journals (Sweden)

    GUSTI AYU RATIH ASTARI

    2013-11-01

    Full Text Available The number of school dropouts is an important indicator of human resource progress in the education sector. This research uses a Semi-parametric Geographically Weighted Poisson Regression approach to obtain the best model and to determine the factors influencing the number of primary-education dropouts in Bali. The analysis shows no significant differences between the Poisson regression model, GWPR, and Semi-parametric GWPR. Factors that significantly influence the number of primary-education dropouts in Bali are the ratio of students to schools, the ratio of students to teachers, the number of families in which the father's highest education is elementary or junior high school, illiteracy rates, and the average number of family members.

  3. Geographically weighted poisson regression semiparametric on modeling of the number of tuberculosis cases (Case study: Bandung city)

    Science.gov (United States)

    Octavianty, Toharudin, Toni; Jaya, I. G. N. Mindra

    2017-03-01

    Tuberculosis (TB) is a disease caused by a bacterium, Mycobacterium tuberculosis, which typically attacks the lungs but can also affect the kidneys, spine, and brain (Centers for Disease Control and Prevention). Indonesia had the largest number of TB cases after India (Global Tuberculosis Report 2015 by WHO). The distribution of Mycobacterium tuberculosis genotypes in Indonesia shows high genetic diversity and tends to vary by geographic region. In Bandung city, for instance, the prevalence rate of TB morbidity is quite high, and the number of TB patients is count data. To determine the factors that significantly influence the number of tuberculosis patients at each observation location, Geographically Weighted Poisson Regression Semiparametric (GWPRS) can be used as a statistical analysis tool. GWPRS is an extension of Poisson regression and GWPR that accounts for geographical factors and allows some variables to act globally and others locally. Using TB data for Bandung city (2015), the results identify the global and local variables that influence the number of tuberculosis patients in every sub-district.

  4. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study focuses on indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables, E(Y|X), for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional one in terms of bias and root mean square error (RMSE).
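
    As a point of reference, the traditional regression correlation coefficient described above is simply the correlation between Y and the fitted mean E(Y|X). A minimal sketch with simulated data and statsmodels is shown below; the paper's modified coefficient is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 2))
y = rng.poisson(np.exp(0.2 + 0.5 * X[:, 0] - 0.3 * X[:, 1]))

fit = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
# regression correlation coefficient: corr(Y, E[Y|X]), with E[Y|X] taken as the fitted mean
r = np.corrcoef(y, fit.fittedvalues)[0, 1]
print(f"regression correlation coefficient: {r:.3f}")
```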

  5. Bayesian regression of piecewise homogeneous Poisson processes

    Directory of Open Access Journals (Sweden)

    Diego Sevilla

    2015-12-01

    Full Text Available In this paper, a Bayesian method for piecewise regression is adapted to handle counting-process data distributed as Poisson. A numerical code in Mathematica is developed and tested by analyzing simulated data. The resulting method is valuable for detecting breaking points in the count rate of time series for Poisson processes. Received: 2 November 2015, Accepted: 27 November 2015; Edited by: R. Dickman; Reviewed by: M. Hutter, Australian National University, Canberra, Australia; DOI: http://dx.doi.org/10.4279/PIP.070018. Cite as: D J R Sevilla, Papers in Physics 7, 070018 (2015).
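
    The Bayesian Mathematica code is not reproduced here, but the underlying idea, a piecewise-constant Poisson rate with a breaking point, can be illustrated with a simple maximum-likelihood scan over candidate break positions on simulated counts (a single break and arbitrary rates are assumed).

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
y = np.concatenate([rng.poisson(4.0, 120), rng.poisson(7.5, 80)])   # rate change after t = 120

def loglik_with_break(y, k):
    """Log-likelihood of a piecewise-constant Poisson rate with a break after index k."""
    lam1, lam2 = y[:k].mean(), y[k:].mean()
    return poisson.logpmf(y[:k], lam1).sum() + poisson.logpmf(y[k:], lam2).sum()

candidates = np.arange(10, len(y) - 10)            # keep a few points on each side
k_hat = candidates[np.argmax([loglik_with_break(y, k) for k in candidates])]
print("estimated breaking point:", k_hat)
```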

  6. Poisson Regression Analysis of Illness and Injury Surveillance Data

    Energy Technology Data Exchange (ETDEWEB)

    Frome E.L., Watkins J.P., Ellis E.D.

    2012-12-12

    The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson variation.
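
    A hedged sketch of this style of analysis is given below: a log-linear main-effects Poisson model for event counts with log person-time as the offset, a Pearson-based dispersion estimate as an over-dispersion check, and a method-of-moments (quasi-Poisson) inflation of the standard errors. The stratified table is simulated; variable names and rates are assumptions, not the DOE data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# simulated stratified table of health-event counts and person-time at risk
rng = np.random.default_rng(3)
strata = pd.DataFrame(
    [(a, g, o) for a in ["<30", "30-39", "40-49", "50+"]
               for g in ["F", "M"]
               for o in ["admin", "line", "services"]],
    columns=["age", "gender", "occ"],
)
strata["pt"] = rng.uniform(500, 5000, len(strata))            # person-years at risk
rate = 0.02 * np.where(strata["gender"] == "M", 1.3, 1.0)     # hypothetical true rates
strata["events"] = rng.poisson(rate * strata["pt"])

# log-linear main-effects model with log person-time as offset
fit = smf.glm("events ~ age + gender + occ", data=strata,
              family=sm.families.Poisson(), offset=np.log(strata["pt"])).fit()

# check for over-dispersion: Pearson chi-square divided by residual d.f.
dispersion = fit.pearson_chi2 / fit.df_resid
print("dispersion estimate:", round(dispersion, 2))

# quasi-likelihood method-of-moments adjustment: inflate the standard errors
adjusted_se = fit.bse * np.sqrt(max(dispersion, 1.0))
print(adjusted_se)
```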

  7. Reducing Poisson noise and baseline drift in X-ray spectral images with bootstrap Poisson regression and robust nonparametric regression

    CERN Document Server

    Zhu, Feng; Feng, Weiyue; Wang, Huajian; Huang, Shaosen; Lv, Yisong; Chen, Yong

    2013-01-01

    X-ray spectral imaging provides quantitative imaging of trace elements in biological samples with high sensitivity. We propose a novel algorithm to improve the signal-to-noise ratio (SNR) of X-ray spectral images that have low photon counts. Firstly, we estimate the image data area that belongs to the homogeneous parts through confidence interval testing. Then, we apply Poisson regression through its maximum likelihood estimation on this area to estimate the true photon counts from the Poisson noise corrupted data. Unlike other denoising methods based on regression analysis, we use bootstrap resampling methods to ensure the accuracy of the regression estimation. Finally, we use a robust local nonparametric regression method to estimate the baseline and subsequently subtract it from the X-ray spectral data to further improve the SNR of the data. Experiments on several real samples show that the proposed method performs better than some state-of-the-art approaches to ensure accuracy and precision for quantit...

  8. Modeling the number of car theft using Poisson regression

    Science.gov (United States)

    Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura

    2016-10-01

    Regression analysis is among the most popular statistical methods used to express the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model. This paper focuses on the number of car thefts that occurred in districts in Peninsular Malaysia. Two groups of factors were considered, namely district descriptive factors and socio-demographic factors. The results show that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, the number of residents aged 25 to 64, the number of employed persons, and the number of unemployed persons are the factors that most influence car theft cases. This information is useful to law enforcement, insurance companies, and car owners for reducing and limiting car theft cases in Peninsular Malaysia.

  9. Fungible weights in logistic regression.

    Science.gov (United States)

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record)

  10. Correlation Weights in Multiple Regression

    Science.gov (United States)

    Waller, Niels G.; Jones, Jeff A.

    2010-01-01

    A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting…
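
    To make the comparison concrete, the sketch below contrasts OLS weights with correlation weights (the zero-order correlations used directly as regression weights) on standardized simulated data, comparing the validity R each weighting achieves. The data and the population correlation matrix are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 500, 3
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])
X = rng.multivariate_normal(np.zeros(p), R, size=n)
y = X @ np.array([0.4, 0.3, 0.2]) + rng.normal(size=n)

# standardize so the two sets of weights are on the same scale
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

b_ols = np.linalg.lstsq(Xs, ys, rcond=None)[0]    # OLS weights
b_corr = Xs.T @ ys / n                            # correlation weights: zero-order r's

def validity(b):
    """Correlation between the criterion and the weighted composite."""
    return np.corrcoef(ys, Xs @ b)[0, 1]

print("R with OLS weights:        ", round(validity(b_ols), 3))
print("R with correlation weights:", round(validity(b_corr), 3))
```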

  11. On Weighted Support Vector Regression

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2014-01-01

    We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF‐weights). This procedure directly...... the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate...
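
    One way to realise the OF-weights described above is to scale each observation's slack penalty, which is what scikit-learn's sample_weight argument to SVR.fit does. The sketch below weights house-price-style observations by their closeness in time to a prediction point; the data, kernel, and bandwidth are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
n = 300
t = np.sort(rng.uniform(0, 10, n))                # e.g. time (or location index) of each sale
X = rng.normal(size=(n, 2))                       # property covariates
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * t + rng.normal(scale=0.5, size=n)

# OF-weights: observations close in time to the prediction point get larger weights
t_star = 9.0
w = np.exp(-0.5 * ((t - t_star) / 1.5) ** 2)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.1)
svr.fit(X, y, sample_weight=w)                    # weights scale the slack-variable penalty
print(svr.predict(X[-5:]))
```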

  12. A SAS-macro for estimation of the cumulative incidence using Poisson regression

    DEFF Research Database (Denmark)

    Waltoft, Berit Lindum

    2009-01-01

    the hazard rates, and the hazard rates are often estimated by the Cox regression. This procedure may not be suitable for large studies due to limited computer resources. Instead one uses Poisson regression, which approximates the Cox regression. Rosthøj et al. presented a SAS-macro for the estimation...... of the cumulative incidences based on the Cox regression. I present the functional form of the probabilities and variances when using piecewise constant hazard rates and a SAS-macro for the estimation using Poisson regression. The use of the macro is demonstrated through examples and compared to the macro presented...

  13. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  14. Analysing count data of Butterflies communities in Jasin, Melaka: A Poisson regression analysis

    Science.gov (United States)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Nor, Maria Elena; Mohamed, Maryati; Ismail, Norradihah

    2017-09-01

    Count outcomes are typically skewed to the right and often characterized by a large number of zeros. The butterfly community data were collected in Jasin, Melaka and consist of 131 subject visits. In this paper, Poisson regression analysis is applied to these count data, as it is better suited to the counting process. This research paper analyses count data, including zero observations, for ecological inference on the butterfly communities in Jasin, Melaka using Poisson regression analysis. Software for Poisson regression is readily available and increasingly used in many fields of research; the data were analysed using SAS. The purpose of the analysis was framed around identifying the concerns of interest, and the study also uses Poisson regression to assess the fit of the model and the reliability of using the count data. The findings indicate that the highest and lowest numbers of subjects come from the third family (Nymphalidae) and the fifth family (Hesperidae), and that the Poisson distribution appears to fit the zero values.

  15. Detecting overdispersion in count data: A zero-inflated Poisson regression analysis

    Science.gov (United States)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Nor, Maria Elena; Mohamed, Maryati; Ismail, Norradihah

    2017-09-01

    This study focuses on analysing count data of butterfly communities in Jasin, Melaka. For a count dependent variable, the Poisson regression model is the benchmark model for regression analysis. Continuing from the previous literature that used Poisson regression analysis, this study applies zero-inflated Poisson (ZIP) regression analysis to gain greater precision in analysing the count data of butterfly communities in Jasin, Melaka. Poisson regression may need to be abandoned in favour of count data models that explicitly take the extra zeros into account; by far one of the most popular such models is the ZIP regression model. The butterfly community data, referred to here as the number of subjects, were collected in Jasin, Melaka and consist of 131 subject visits. Since the researchers consider the number of subjects, the data set covers five butterfly families, which represent the five variables involved in the analysis, namely the types of subjects. The ZIP analysis uses the SAS procedure for overdispersion in analysing zero values, and the main purpose of continuing the previous study is to compare which model performs better when zero values exist in the observed count data. The analysis used AIC, BIC and the Vuong test at the 5% significance level to achieve the objectives. The findings indicate the presence of over-dispersion when analysing zero values; the ZIP regression model is better than the Poisson regression model when zero values exist.
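
    A hedged sketch of the Poisson versus ZIP comparison is given below using statsmodels on simulated zero-inflated counts (the butterfly data are not available here); AIC is computed from the log-likelihood, and the Vuong test is omitted.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(6)
n = 131
x = rng.normal(size=n)
X = sm.add_constant(x)
counts = rng.poisson(np.exp(0.8 + 0.4 * x))
counts[rng.uniform(size=n) < 0.35] = 0            # structural (excess) zeros

pois = sm.Poisson(counts, X).fit(disp=False)
zip_fit = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1)),
                              inflation="logit").fit(disp=False)

def aic(res):
    return 2 * len(res.params) - 2 * res.llf

print("Poisson AIC:", round(aic(pois), 1))        # larger AIC when excess zeros are present
print("ZIP AIC:    ", round(aic(zip_fit), 1))
```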

  16. Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.

    Science.gov (United States)

    Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai

    2011-01-01

    Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that they were found to have significant discrepancies by previous studies. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model not only can model paired data with correlation, but also handle under- or over-dispersed data sets as well. Compared with three other types of models, double Poisson, bivariate Poisson, and zero-inflated double Poisson, the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ(1), λ(2) and λ(3)). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVC and carcass removal data. It is found that the increase of some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs.

  17. A poisson regression approach for modelling spatial autocorrelation between geographically referenced observations

    Directory of Open Access Journals (Sweden)

    Jolley Damien

    2011-10-01

    Full Text Available Abstract Background Analytic methods commonly used in epidemiology do not account for spatial correlation between observations. In regression analyses, omission of that autocorrelation can bias parameter estimates and yield incorrect standard error estimates. Methods We used age-standardised incidence ratios (SIRs) of esophageal cancer (EC) from the Babol cancer registry from 2001 to 2005, and extracted socioeconomic indices from the Statistical Centre of Iran. The following models for SIR were used: (1) Poisson regression with agglomeration-specific nonspatial random effects; (2) Poisson regression with agglomeration-specific spatial random effects. Distance-based and neighbourhood-based autocorrelation structures were used for defining the spatial random effects, and a pseudolikelihood approach was applied to estimate model parameters. The Bayesian information criterion (BIC), Akaike's information criterion (AIC) and adjusted pseudo-R2 were used for model comparison. Results A Gaussian semivariogram with an effective range of 225 km best fit spatial autocorrelation in agglomeration-level EC incidence. The Moran's I index was greater than its expected value, indicating systematic geographical clustering of EC. The distance-based and neighbourhood-based Poisson regression estimates were generally similar. When residual spatial dependence was modelled, point and interval estimates of covariate effects differed from those obtained from the nonspatial Poisson model. Conclusions The spatial pattern evident in the EC SIR and the observation that point estimates and standard errors differed depending on the modelling approach indicate the importance of accounting for residual spatial correlation in analyses of EC incidence in the Caspian region of Iran. Our results also illustrate that spatial smoothing must be applied with care.

  18. Breaking the waves: a poisson regression approach to Schumpeterian clustering of basic innovations

    OpenAIRE

    Silverberg, G.P.; Verspagen, B.

    2000-01-01

    The Schumpeterian theory of long waves has given rise to an intense debate on the existence of clusters of basic innovations. Silverberg and Lehnert have criticized the empirical part of this literature on several methodological accounts. In this paper, we propose the methodology of Poisson regression as a logical way to incorporate this criticism. We construct a new time series for basic innovations (based on previously used time series), and use this to test the hypothesis that basic innovations...

  19. Comment on Zigerell (2015): Using Poisson inverse Gaussian regression on citation data

    Directory of Open Access Journals (Sweden)

    Brian J Fogarty

    2015-11-01

    Full Text Available Zigerell’s recent article in Research & Politics argues that Maliniak et al.’s findings that women are cited less than men in international relations are not robust to alternative specifications. Using Poisson inverse Gaussian (PIG) regression, in this comment, I demonstrate that both papers’ findings are sensitive to empirical specifications. In many model specifications, women are cited less than men, but other specifications either show a null effect or that women are actually cited more than men in international relations. This illustrates that substantive model selections matter a great deal for the conclusions we can make from our data.

  20. Use of Poisson spatiotemporal regression models for the Brazilian Amazon Forest: malaria count data.

    Science.gov (United States)

    Achcar, Jorge Alberto; Martinez, Edson Zangiacomi; Souza, Aparecida Doniseti Pires de; Tachibana, Vilma Mayumi; Flores, Edilson Ferreira

    2011-01-01

    Malaria is a serious problem in the Brazilian Amazon region, and the detection of possible risk factors could be of great interest for public health authorities. The objective of this article was to investigate the association between environmental variables and the yearly registers of malaria in the Amazon region using bayesian spatiotemporal methods. We used Poisson spatiotemporal regression models to analyze the Brazilian Amazon forest malaria count for the period from 1999 to 2008. In this study, we included some covariates that could be important in the yearly prediction of malaria, such as deforestation rate. We obtained the inferences using a bayesian approach and Markov Chain Monte Carlo (MCMC) methods to simulate samples for the joint posterior distribution of interest. The discrimination of different models was also discussed. The model proposed here suggests that deforestation rate, the number of inhabitants per km², and the human development index (HDI) are important in the prediction of malaria cases. It is possible to conclude that human development, population growth, deforestation, and their associated ecological alterations are conducive to increasing malaria risk. We conclude that the use of Poisson regression models that capture the spatial and temporal effects under the bayesian paradigm is a good strategy for modeling malaria counts.

  1. Use of Poisson spatiotemporal regression models for the Brazilian Amazon Forest: malaria count data

    Directory of Open Access Journals (Sweden)

    Jorge Alberto Achcar

    2011-12-01

    Full Text Available INTRODUCTION: Malaria is a serious problem in the Brazilian Amazon region, and the detection of possible risk factors could be of great interest for public health authorities. The objective of this article was to investigate the association between environmental variables and the yearly registers of malaria in the Amazon region using Bayesian spatiotemporal methods. METHODS: We used Poisson spatiotemporal regression models to analyze the Brazilian Amazon forest malaria count for the period from 1999 to 2008. In this study, we included some covariates that could be important in the yearly prediction of malaria, such as deforestation rate. We obtained the inferences using a Bayesian approach and Markov Chain Monte Carlo (MCMC) methods to simulate samples for the joint posterior distribution of interest. The discrimination of different models was also discussed. RESULTS: The model proposed here suggests that deforestation rate, the number of inhabitants per km², and the human development index (HDI) are important in the prediction of malaria cases. CONCLUSIONS: It is possible to conclude that human development, population growth, deforestation, and their associated ecological alterations are conducive to increasing malaria risk. We conclude that the use of Poisson regression models that capture the spatial and temporal effects under the Bayesian paradigm is a good strategy for modeling malaria counts.

  2. Poisson-weighted Lindley distribution and its application on insurance claim data

    Science.gov (United States)

    Manesh, Somayeh Nik; Hamzah, Nor Aishah; Zamani, Hossein

    2014-07-01

    This paper introduces a new two-parameter mixed Poisson distribution, namely the Poisson-weighted Lindley (P-WL), which is obtained by mixing the Poisson with a new class of weighted Lindley distributions. The closed form of the distribution, its moment generating function and its probability generating function are derived. Parameter estimation by the method of moments and by the maximum likelihood procedure is provided. Some simulation studies are conducted to investigate the performance of the P-WL distribution. In addition, the compound P-WL distribution is derived, and some applications to the insurance area based on observations of the number of claims and of the total amount of claims incurred are illustrated.
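
    A mixed Poisson draw of this kind can be simulated by sampling the rate from the mixing distribution and then a Poisson count given that rate. The sketch below assumes the common representation of the weighted Lindley(c, θ) as a two-component gamma mixture, Gamma(c, rate θ) with weight θ/(θ+c) and Gamma(c+1, rate θ) with weight c/(θ+c); this parameterization and all numeric values are assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

def rweighted_lindley(size, c, theta):
    """Sample a weighted Lindley(c, theta), assumed to be the gamma mixture
    Gamma(c, rate=theta) w.p. theta/(theta+c) and Gamma(c+1, rate=theta) w.p. c/(theta+c)."""
    pick_first = rng.uniform(size=size) < theta / (theta + c)
    shape = np.where(pick_first, c, c + 1.0)
    return rng.gamma(shape, 1.0 / theta, size=size)   # numpy uses scale = 1/rate

def rpoisson_weighted_lindley(size, c, theta):
    """Mixed Poisson draw: lambda ~ weighted Lindley, N | lambda ~ Poisson(lambda)."""
    lam = rweighted_lindley(size, c, theta)
    return rng.poisson(lam)

claims = rpoisson_weighted_lindley(10_000, c=1.5, theta=0.8)
print("mean:", claims.mean(), "variance:", claims.var())  # variance > mean: overdispersion
```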

  3. Climate changes and their effects in the public health: use of poisson regression models

    Directory of Open Access Journals (Sweden)

    Jonas Bodini Alonso

    2010-08-01

    Full Text Available In this paper, we analyze the daily number of hospitalizations in São Paulo City, Brazil, from January 01, 2002 to December 31, 2005. The data set relates to pneumonia, coronary ischemic diseases, diabetes and chronic diseases in different age categories. To assess the effect of climate changes, the following covariates are considered: atmospheric pressure, air humidity, temperature, season of the year, and a covariate for the day of the week on which the hospitalization occurred. The possible effects of these covariates on the number of hospitalizations are studied using a Poisson regression model, with or without a random effect that captures the possible correlation among the hospitalization counts for the different age categories on the same day and the extra-Poisson variability of the longitudinal data. The inferences of interest are obtained using the Bayesian paradigm and MCMC (Markov chain Monte Carlo) methods.

  4. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep

    Directory of Open Access Journals (Sweden)

    Rodrigues-Motta Mariana

    2008-07-01

    Full Text Available Abstract Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep.

  5. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep.

    Science.gov (United States)

    Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel

    2008-01-01

    Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep.

  6. Non-Poisson processes: regression to equilibrium versus equilibrium correlation functions

    Science.gov (United States)

    Allegrini, Paolo; Grigolini, Paolo; Palatella, Luigi; Rosa, Angelo; West, Bruce J.

    2005-03-01

    We study the response to perturbation of non-Poisson dichotomous fluctuations that generate super-diffusion. We adopt the Liouville perspective and with it a quantum-like approach based on splitting the density distribution into a symmetric and an anti-symmetric component. To accommodate the equilibrium condition behind the stationary correlation function, we study the time evolution of the anti-symmetric component, while keeping the symmetric component at equilibrium. For any realistic form of the perturbed distribution density we expect a breakdown of the Onsager principle, namely, of the property that the subsequent regression of the perturbation to equilibrium is identical to the corresponding equilibrium correlation function. We find the directions to follow for the calculation of higher-order correlation functions, an unsettled problem, which has been addressed in the past by means of approximations yielding quite different physical effects.

  7. Risk factor selection in rate making: EM adaptive LASSO for zero-inflated poisson regression models.

    Science.gov (United States)

    Tang, Yanlin; Xiang, Liya; Zhu, Zhongyi

    2014-06-01

    Risk factor selection is very important in the insurance industry; it supports precise rate making and the study of the features of high-quality insureds. Zero-inflated data are common in insurance, such as claim frequency data, and zero-inflation makes the selection of risk factors quite difficult. In this article, we propose a new risk factor selection approach, EM adaptive LASSO, for a zero-inflated Poisson regression model, which combines the EM algorithm and the adaptive LASSO penalty. Under some regularity conditions, we show that, with probability approaching 1, important factors are selected and redundant factors are excluded. We investigate the finite sample performance of the proposed method through a simulation study and the analysis of car insurance data from the SAS Enterprise Miner database.

  8. Modeling both of the number of pausibacillary and multibacillary leprosy patients by using bivariate poisson regression

    Science.gov (United States)

    Winahju, W. S.; Mukarromah, A.; Putri, S.

    2015-03-01

    Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy is an important issue in Indonesia because its morbidity is quite high. Based on 2014 WHO data, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, contributing 18,994 people (8.7% of the world total). This places Indonesia as the ASEAN country with the highest leprosy morbidity. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: pausibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of pausibacillary leprosy. This paper discusses modeling the numbers of multibacillary and pausibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The experimental units are in East Java, and the predictors involved relate to environment, demography, and poverty. The model uses 2012 data, and the results indicate that all predictors have a significant influence.
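
    A common construction behind bivariate Poisson regression is the trivariate reduction Y1 = X1 + X3, Y2 = X2 + X3 with independent Poisson components, so the shared component λ3 equals the covariance of the two counts. The sketch below simulates that construction with arbitrary rates; the regression part (linking the rates to environment, demography, and poverty covariates) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50_000
lam1, lam2, lam3 = 2.0, 3.0, 1.2                  # hypothetical rates

x1 = rng.poisson(lam1, n)
x2 = rng.poisson(lam2, n)
x3 = rng.poisson(lam3, n)                          # shared component induces correlation
y1 = x1 + x3                                       # e.g. pausibacillary counts
y2 = x2 + x3                                       # e.g. multibacillary counts

print("E[Y1], E[Y2]:", y1.mean(), y2.mean())       # approx. lam1 + lam3 and lam2 + lam3
print("Cov(Y1, Y2): ", np.cov(y1, y2)[0, 1])       # approx. lam3
```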

  9. Penalized Weighted Least Squares for Outlier Detection and Robust Regression

    OpenAIRE

    Gao, Xiaoli; Fang, Yixin

    2016-01-01

    To conduct regression analysis for data contaminated with outliers, many approaches have been proposed for simultaneous outlier detection and robust regression, so is the approach proposed in this manuscript. This new approach is called "penalized weighted least squares" (PWLS). By assigning each observation an individual weight and incorporating a lasso-type penalty on the log-transformation of the weight vector, the PWLS is able to perform outlier detection and robust regression simultaneou...

  10. The properties of the geometric-Poisson exponentially weighted moving control chart with estimated parameters

    Directory of Open Access Journals (Sweden)

    Aamir Saghir

    2015-12-01

    Full Text Available The geometric-Poisson exponentially weighted moving average (EWMA) chart has been shown to be more effective than the Poisson EWMA chart in monitoring the number of defects in production processes. In these applications, it is assumed that the process parameters are known or have been accurately estimated. In practice, however, the process parameters are rarely known and must be estimated from a reference sample to construct the geometric-Poisson EWMA chart. The performance of the chart, due to variability in the parameter estimation, may differ from the known-parameter case. This article explores the effect of estimated parameters on the conditional and marginal performance of the geometric-Poisson EWMA chart. The run length characteristics are calculated using a Markov chain approach, and the effect of estimation on the performance of the chart is shown to be significant. Recommendations about the proper choice of sample size, smoothing constant, and dispersion parameter are made. Results of this study highlight the practical implications of estimation error and offer advice to practitioners when constructing or analysing a phase-I sample.
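
    For orientation, the sketch below shows a plain Poisson EWMA chart (not the geometric-Poisson chart of the paper) whose in-control mean is estimated from a phase-I sample, which is exactly the source of estimation error the paper studies. The smoothing constant, limit width, and shift size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# phase-I reference sample: the in-control mean is estimated, not known
phase1 = rng.poisson(4.0, 50)
mu0 = phase1.mean()

lam, L = 0.2, 3.0                                  # smoothing constant and limit width
sigma_z = np.sqrt(lam / (2 - lam) * mu0)           # asymptotic EWMA std (Poisson: var = mean)
ucl, lcl = mu0 + L * sigma_z, max(mu0 - L * sigma_z, 0.0)

# phase-II monitoring: the defect rate shifts upward at t = 31
x = np.concatenate([rng.poisson(4.0, 30), rng.poisson(7.0, 30)])
z = mu0
for t, xt in enumerate(x, start=1):
    z = lam * xt + (1 - lam) * z                   # EWMA recursion
    if not (lcl <= z <= ucl):
        print(f"signal at t={t}, EWMA={z:.2f}")
        break
else:
    print("no signal within the monitored window")
```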

  11. Individual patient data meta-analysis of survival data using Poisson regression models

    Directory of Open Access Journals (Sweden)

    Crowther Michael J

    2012-03-01

    Full Text Available Abstract Background An Individual Patient Data (IPD) meta-analysis is often considered the gold standard for synthesising survival data from clinical trials. An IPD meta-analysis can be achieved by either a two-stage or a one-stage approach, depending on whether the trials are analysed separately or simultaneously. A range of one-stage hierarchical Cox models have been previously proposed, but these are known to be computationally intensive and are not currently available in all standard statistical software. We describe an alternative approach using Poisson-based Generalised Linear Models (GLMs). Methods We illustrate, through application and simulation, the Poisson approach both classically and in a Bayesian framework, in two-stage and one-stage approaches. We outline the benefits of our one-stage approach through extension to modelling treatment-covariate interactions and non-proportional hazards. Ten trials of hypertension treatment, with all-cause death the outcome of interest, are used to apply and assess the approach. Results We show that the Poisson approach obtains almost identical estimates to the Cox model, is additionally computationally efficient and directly estimates the baseline hazard. Some downward bias is observed in classical estimates of the heterogeneity in the treatment effect, with improved performance from the Bayesian approach. Conclusion Our approach provides a highly flexible and computationally efficient framework, available in all standard statistical software, for investigating not only heterogeneity but also the presence of non-proportional hazards and treatment effect modifiers.
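
    The Poisson trick described above, splitting follow-up into intervals and modelling events with a log person-time offset so that interval terms play the role of a piecewise-constant baseline hazard, can be sketched on a single simulated trial as below. The cut points, rates, and censoring scheme are assumptions; the hierarchical one-stage and Bayesian parts are not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 2000
treat = rng.integers(0, 2, n)
time = rng.exponential(1.0 / (0.1 * np.exp(-0.3 * treat)))   # true log hazard ratio = -0.3
cens = np.minimum(rng.exponential(10.0, n), 5.0)             # censoring and 5-year follow-up
t_obs = np.minimum(time, cens)
event = (time <= cens).astype(int)

# split each subject's follow-up into yearly intervals (piecewise constant hazard)
cuts = np.array([0, 1, 2, 3, 4, 5])
rows = []
for ti, di, tr in zip(t_obs, event, treat):
    for a, b in zip(cuts[:-1], cuts[1:]):
        if ti <= a:
            break
        rows.append({"interval": f"[{a},{b})", "treat": tr,
                     "pt": min(ti, b) - a,                    # person-time in this interval
                     "events": int(di and ti <= b)})          # event falls in this interval
long = pd.DataFrame(rows)

fit = smf.glm("events ~ C(interval) + treat", data=long,
              family=sm.families.Poisson(), offset=np.log(long["pt"])).fit()
print("log hazard ratio for treatment:", round(fit.params["treat"], 3))
```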

  12. A Bayesian destructive weighted Poisson cure rate model and an application to a cutaneous melanoma data.

    Science.gov (United States)

    Rodrigues, Josemar; Cancho, Vicente G; de Castro, Mário; Balakrishnan, N

    2012-12-01

    In this article, we propose a new Bayesian flexible cure rate survival model, which generalises the stochastic model of Klebanov et al. [Klebanov LB, Rachev ST and Yakovlev AY. A stochastic-model of radiation carcinogenesis--latent time distributions and their properties. Math Biosci 1993; 113: 51-75], and has much in common with the destructive model formulated by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de São Carlos, São Carlos-SP. Brazil, 2009 (accepted in Lifetime Data Analysis)]. In our approach, the accumulated number of lesions or altered cells follows a compound weighted Poisson distribution. This model is more flexible than the promotion time cure model in terms of dispersion. Moreover, it possesses an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest as it includes a destructive process of tumour cells after an initial treatment or the capacity of an individual exposed to irradiation to repair altered cells that results in cancer induction. In other words, what is recorded is only the damaged portion of the original number of altered cells not eliminated by the treatment or repaired by the repair system of an individual. Markov Chain Monte Carlo (MCMC) methods are then used to develop Bayesian inference for the proposed model. Also, some discussions on the model selection and an illustration with a cutaneous melanoma data set analysed by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de São Carlos, São Carlos-SP. Brazil, 2009 (accepted in Lifetime Data Analysis)] are presented.

  13. Robust Depth-Weighted Wavelet for Nonparametric Regression Models

    Institute of Scientific and Technical Information of China (English)

    Lu LIN

    2005-01-01

    In the nonpaxametric regression models, the original regression estimators including kernel estimator, Fourier series estimator and wavelet estimator are always constructed by the weighted sum of data, and the weights depend only on the distance between the design points and estimation points. As a result these estimators are not robust to the perturbations in data. In order to avoid this problem, a new nonparametric regression model, called the depth-weighted regression model, is introduced and then the depth-weighted wavelet estimation is defined. The new estimation is robust to the perturbations in data, which attains very high breakdown value close to 1/2. On the other hand, some asymptotic behaviours such as asymptotic normality are obtained. Some simulations illustrate that the proposed wavelet estimator is more robust than the original wavelet estimator and, as a price to pay for the robustness, the new method is slightly less efficient than the original method.

  14. Multilevel poisson regression modelling for determining factors of dengue fever cases in bandung

    Science.gov (United States)

    Arundina, Davila Rubianti; Tantular, Bertho; Pontoh, Resa Septiani

    2017-03-01

    Scralatina or Dengue Fever is a kind of fever caused by serotype virus which Flavivirus genus and be known as Dengue Virus. Dengue Fever caused by Aedes Aegipty Mosquito bites who infected by a dengue virus. The study was conducted in 151 villages in Bandung. Health Analysts believes that there are two factors that affect the dengue cases, Internal factor (individual) and external factor (environment). The data who used in this research is hierarchical data. The method is used for hierarchical data modelling is multilevel method. Which is, the level 1 is village and level 2 is sub-district. According exploration data analysis, the suitable Multilevel Method is Random Intercept Model. Penalized Quasi Likelihood (PQL) approach on multilevel Poisson is a proper analysis to determine factors that affecting dengue cases in the city of Bandung. Clean and Healthy Behavior factor from the village level have an effect on the number of cases of dengue fever in the city of Bandung. Factor from the sub-district level has no effect.

  15. Use of probabilistic weights to enhance linear regression myoelectric control

    Science.gov (United States)

    Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

    2015-12-01

    Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts’ law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
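
    A one-DOF toy version of the probability weighting described above is sketched below: Gaussian class-conditional models for three classes (no movement, movement in either direction) give a posterior probability of movement, which scales the output of a linear regression fitted on the same features. The simulated features, the shared-covariance and equal-prior assumptions, and the exact weighting rule (multiplying by one minus the no-movement probability) are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(11)

# toy "EMG features" for one DOF: class 0 = no movement, 1 = +direction, 2 = -direction
n_per = 300
means = {0: np.array([0.0, 0.0]), 1: np.array([2.0, 1.0]), 2: np.array([-2.0, 1.0])}
X = np.vstack([rng.normal(means[c], 0.8, size=(n_per, 2)) for c in (0, 1, 2)])
labels = np.repeat([0, 1, 2], n_per)
velocity = np.where(labels == 0, 0.0, np.where(labels == 1, 1.0, -1.0))
velocity = velocity + rng.normal(scale=0.2, size=len(velocity))

# linear regression from features (plus intercept) to intended velocity
W, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), velocity, rcond=None)

# Gaussian class-conditional models (shared covariance, equal priors)
cov = np.cov(X.T)

def class_density(x, c):
    return multivariate_normal.pdf(x, mean=X[labels == c].mean(0), cov=cov)

def probability_weighted_output(x):
    p = np.array([class_density(x, c) for c in (0, 1, 2)])
    p /= p.sum()                                  # posterior under equal priors
    raw = np.append(x, 1.0) @ W                   # plain linear regression output
    return (1.0 - p[0]) * raw                     # down-weight when "no movement" is likely

print(probability_weighted_output(np.array([0.1, 0.05])))   # near rest -> suppressed
print(probability_weighted_output(np.array([2.1, 1.0])))    # clear movement -> passed through
```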

  16. Use of probabilistic weights to enhance linear regression myoelectric control.

    Science.gov (United States)

    Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

    2015-12-01

    Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts' law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p linear regression control. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.

  17. The analysis of incontinence episodes and other count data in patients with overactive bladder by Poisson and negative binomial regression.

    Science.gov (United States)

    Martina, R; Kay, R; van Maanen, R; Ridder, A

    2015-01-01

    Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well.
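
    A minimal sketch of the Poisson versus negative binomial comparison on simulated overdispersed episode counts is given below, with treatment effects reported as rate ratios (the exponentiated coefficient). The data-generating values, the gamma frailty used to create overdispersion, and the baseline covariate are assumptions; this is not the solifenacin or mirabegron data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
n = 400
treat = rng.integers(0, 2, n)
baseline = rng.poisson(8, n) + 1                   # baseline incontinence episodes per week
X = sm.add_constant(np.column_stack([treat, np.log(baseline)]))

# overdispersed counts via a gamma frailty (true treatment rate ratio = exp(-0.4))
frailty = rng.gamma(2.0, 0.5, n)
y = rng.poisson(frailty * np.exp(1.0 - 0.4 * treat + 0.5 * np.log(baseline)))

pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb = sm.NegativeBinomial(y, X).fit(disp=False)

# treatment effects reported as rate ratios
print("Poisson rate ratio:", round(np.exp(pois.params[1]), 3))
print("NB rate ratio:     ", round(np.exp(nb.params[1]), 3))
```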

  18. Neither fixed nor random: weighted least squares meta-regression.

    Science.gov (United States)

    Stanley, T D; Doucouliagos, Hristos

    2016-06-20

    Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
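
    An unrestricted WLS meta-regression of the kind described above can be sketched as ordinary WLS with inverse-variance weights, letting the residual scale (a multiplicative heterogeneity factor) be estimated from the data rather than fixed at 1 as in FE-MRA. The simulated study estimates, standard errors, and moderator below are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
k = 60                                             # number of studies
se = rng.uniform(0.05, 0.4, k)                     # reported standard errors
moderator = rng.normal(size=k)
true_effect = 0.2 + 0.1 * moderator
estimates = true_effect + rng.normal(scale=se) + rng.normal(scale=0.1, size=k)  # extra heterogeneity

X = sm.add_constant(moderator)
wls = sm.WLS(estimates, X, weights=1.0 / se**2).fit()
# unrestricted WLS-MRA: the residual scale (multiplicative overdispersion) is estimated
# from the data instead of being fixed at 1 as in FE-MRA
print(wls.params, wls.bse)
print("estimated multiplicative heterogeneity:", round(wls.scale, 2))
```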

  19. Geographically weighted regression and multicollinearity: dispelling the myth

    Science.gov (United States)

    Fotheringham, A. Stewart; Oshan, Taylor M.

    2016-10-01

    Geographically weighted regression (GWR) extends the familiar regression framework by estimating a set of parameters for any number of locations within a study area, rather than producing a single parameter estimate for each relationship specified in the model. Recent literature has suggested that GWR is highly susceptible to the effects of multicollinearity between explanatory variables and has proposed a series of local measures of multicollinearity as an indicator of potential problems. In this paper, we employ a controlled simulation to demonstrate that GWR is in fact very robust to the effects of multicollinearity. Consequently, the contention that GWR is highly susceptible to multicollinearity issues needs rethinking.

  20. Modelling the influence of temperature and rainfall on malaria incidence in four endemic provinces of Zambia using semiparametric Poisson regression.

    Science.gov (United States)

    Shimaponda-Mataa, Nzooma M; Tembo-Mwase, Enala; Gebreslasie, Michael; Achia, Thomas N O; Mukaratirwa, Samson

    2017-02-01

    Although malaria morbidity and mortality have been greatly reduced globally owing to intense control efforts, the disease remains the main contributor. In Zambia, all provinces are malaria endemic; however, transmission intensities vary, depending mainly on environmental factors as they interact with the vectors. Generally in Africa, possibly due to the varying perspectives and methods used, there is variation in the relative importance of malaria risk determinants. In Zambia, the role climatic factors play in malaria case rates has not been determined jointly over space and time using robust modelling methods. This is critical considering the reversal in malaria reduction after the year 2010 and the variation by transmission zone. Using a geoadditive or structured additive semiparametric Poisson regression model, we determined the influence of climatic factors on malaria incidence in four endemic provinces of Zambia. We demonstrate a strong positive association between malaria incidence and precipitation as well as minimum temperature. The risk of malaria was 95% lower in Lusaka (ARR=0.05, 95% CI=0.04-0.06) and 68% lower in the Western Province (ARR=0.31, 95% CI=0.25-0.41) compared to Luapula Province. North-Western Province did not differ from Luapula Province. The effects of geographical region are clearly demonstrated by the unique behaviour and effects of minimum and maximum temperatures in the four provinces. Environmental factors such as landscape in urbanised places may also be playing a role.

  1. Approximation by randomly weighting method in censored regression model

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Censored regression ("Tobit") models have been in common use, and their linear hypothesis testings have been widely studied. However, the critical values of these tests are usually related to quantities of an unknown error distribution and estimators of nuisance parameters. In this paper, we propose a randomly weighting test statistic and take its conditional distribution as an approximation to null distribution of the test statistic. It is shown that, under both the null and local alternative hypotheses, conditionally asymptotic distribution of the randomly weighting test statistic is the same as the null distribution of the test statistic. Therefore, the critical values of the test statistic can be obtained by randomly weighting method without estimating the nuisance parameters. At the same time, we also achieve the weak consistency and asymptotic normality of the randomly weighting least absolute deviation estimate in censored regression model. Simulation studies illustrate that the performance of our proposed resampling test method is better than that of central chi-square distribution under the null hypothesis.

  2. Approximation by randomly weighting method in censored regression model

    Institute of Scientific and Technical Information of China (English)

    WANG ZhanFeng; WU YaoHua; ZHAO LinCheng

    2009-01-01

    Censored regression ("Tobit") models have been in common use, and their linear hypothesis testings have been widely studied. However, the critical values of these tests are usually related to quantities of an unknown error distribution and estimators of nuisance parameters. In this paper, we propose a randomly weighting test statistic and take its conditional distribution as an approximation to null distribution of the test statistic. It is shown that, under both the null and local alternative hypotheses, conditionally asymptotic distribution of the randomly weighting test statistic is the same as the null distribution of the test statistic. Therefore, the critical values of the test statistic can be obtained by randomly weighting method without estimating the nuisance parameters. At the same time, we also achieve the weak consistency and asymptotic normality of the randomly weighting least absolute deviation estimate in censored regression model. Simulation studies illustrate that the performance of our proposed resampling test method is better than that of central chi-square distribution under the null hypothesis.

  3. Multicollinearity and correlation among local regression coefficients in geographically weighted regression

    Science.gov (United States)

    Wheeler, David; Tiefelsdorf, Michael

    2005-06-01

    Present methodological research on geographically weighted regression (GWR) focuses primarily on extensions of the basic GWR model, while ignoring well-established diagnostic tests commonly used in standard global regression analysis. This paper investigates multicollinearity issues surrounding the local GWR coefficients at a single location and the overall correlation between GWR coefficients associated with two different exogenous variables. Results indicate that the local regression coefficients are potentially collinear even if the underlying exogenous variables in the data generating process are uncorrelated. Based on these findings, applied GWR research should practice caution in substantively interpreting the spatial patterns of local GWR coefficients. An empirical disease-mapping example is used to motivate the GWR multicollinearity problem. Controlled experiments are performed to systematically explore coefficient dependency issues in GWR. These experiments specify global models that use eigenvectors from a spatial link matrix as exogenous variables.

  4. Determination of riverbank erosion probability using Locally Weighted Logistic Regression

    Science.gov (United States)

    Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos

    2015-04-01

    Riverbank erosion is a natural geomorphologic process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of the vulnerable locations. An alternative to the usual hydrodynamic models to predict vulnerable locations is to quantify the probability of erosion occurrence. This can be achieved by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion. Thus, riverbank erosion can be determined by a regression model using independent variables that are considered to affect the erosion process. The impact of such variables may vary spatially, therefore, a non-stationary regression model is preferred instead of a stationary equivalent. Locally Weighted Regression (LWR) is proposed as a suitable choice. This method can be extended to predict the binary presence or absence of erosion based on a series of independent local variables by using the logistic regression model. It is referred to as Locally Weighted Logistic Regression (LWLR). Logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable (e.g. binary response) based on one or more predictor variables. The method can be combined with LWR to assign weights to local independent variables of the dependent one. LWR allows model parameters to vary over space in order to reflect spatial heterogeneity. The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. Then, a logistic regression is formed, which predicts success or failure of a given binary variable (e.g. erosion presence or absence) for any value of the independent variables. The

  5. Geographically Weighted Logistic Regression Applied to Credit Scoring Models

    Directory of Open Access Journals (Sweden)

    Pedro Henrique Melo Albuquerque

    Full Text Available Abstract This study used real data from a Brazilian financial institution on transactions involving Consumer Direct Credit (CDC), granted to clients residing in the Distrito Federal (DF), to construct credit scoring models via Logistic Regression and Geographically Weighted Logistic Regression (GWLR) techniques. The aims were: to verify whether the factors that influence credit risk differ according to the borrower’s geographic location; to compare the set of models estimated via GWLR with the global model estimated via Logistic Regression, in terms of predictive power and financial losses for the institution; and to verify the viability of using the GWLR technique to develop credit scoring models. The metrics used to compare the models developed via the two techniques were the AICc informational criterion, the accuracy of the models, the percentage of false positives, the sum of the value of false positive debt, and the expected monetary value of portfolio default compared with the monetary value of defaults observed. The models estimated for each region in the DF were distinct in their variables and coefficients (parameters), with it being concluded that credit risk was influenced differently in each region in the study. The Logistic Regression and GWLR methodologies presented very close results, in terms of predictive power and financial losses for the institution, and the study demonstrated viability in using the GWLR technique to develop credit scoring models for the target population in the study.

  6. Comparing the cancer in Ninawa during three periods (1980-1990, 1991-2000, 2001-2010 using Poisson regression

    Directory of Open Access Journals (Sweden)

    Muzahem Mohammed Yahya AL-Hashimi

    2013-01-01

    Full Text Available Background: Iraq fought three wars in three consecutive decades: the Iran-Iraq war (1980-1988), the Persian Gulf War in 1991, and the Iraq war in 2003. From the nineties of the last century up to the present time, there have been anecdotal reports of an increase in cancer in Ninawa, as in all provinces of Iraq, possibly as a result of exposure to depleted uranium used by American troops in the last two wars. This paper deals with cancer incidence in Ninawa, the most important province in Iraq, where many of her sons were soldiers in the Iraqi army and participated in the wars. Materials and Methods: The data were derived from the Directorate of Health in Ninawa and divided into three sub-periods: 1980-1990, 1991-2000, and 2001-2010. The analyses were performed using Poisson regression. The response variable is the cancer incidence number; cancer cases, age, sex, and years were considered as the explanatory variables, and the logarithm of the population of Ninawa was used as an offset. The aim of this paper is to model the cancer incidence data and estimate the cancer incidence rate ratio (IRR) to illustrate the changes in cancer incidence that occurred in Ninawa over these three periods. Results: There is evidence of a reduction in the cancer IRR in Ninawa in the third period as well as in the second period. Our analyses found that breast cancer remained the most common cancer, while lung, trachea, and bronchus cancer remained the second most common despite decreasing dramatically. There were modest increases in the incidence of cancers of the prostate, penis, and other male genital organs over the study period, and stability in the incidence of colon cancer in the second and third periods. There were also modest increases in the incidence of placenta and metastatic tumors, while the highest increase was in leukemia in the third period relative to the second period but not to the first period. The cancer IRR in men decreased from more than 33% higher than that of females in the first period, more than 39
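
    As a hedged illustration of the modelling strategy described in this abstract, the sketch below fits a Poisson regression to synthetic count data with period, sex, and age group as explanatory variables and the log of population as an offset, then reports incidence rate ratios (IRR) with 95% confidence intervals. The column names and values are invented for the example, which assumes the statsmodels and pandas packages.

      # Poisson regression with a log-population offset; IRRs are exp(coefficients).
      # Synthetic data with hypothetical column names.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 600
      df = pd.DataFrame({
          "period": rng.choice(["1980-1990", "1991-2000", "2001-2010"], n),
          "sex": rng.choice(["male", "female"], n),
          "age_group": rng.choice(["0-14", "15-44", "45-64", "65+"], n),
          "population": rng.integers(10_000, 200_000, n),
      })
      df["cases"] = rng.poisson(0.0005 * df["population"])

      fit = smf.glm(
          "cases ~ C(period, Treatment(reference='1980-1990')) + C(sex) + C(age_group)",
          data=df,
          family=sm.families.Poisson(),
          offset=np.log(df["population"]),
      ).fit()

      # Incidence rate ratios with 95% confidence intervals.
      summary = pd.DataFrame({
          "IRR": np.exp(fit.params),
          "2.5%": np.exp(fit.conf_int()[0]),
          "97.5%": np.exp(fit.conf_int()[1]),
      })
      print(summary)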

  7. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging

    Directory of Open Access Journals (Sweden)

    Qiutong Jin

    2016-06-01

    Full Text Available Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK) and geographically weighted regression Kriging (GWRK) methods were employed using precipitation data from the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM), normalized difference vegetation index (NDVI), solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps with a 500 m spatial resolution interpolated by MLRK and GWRK had a high accuracy and captured detailed spatial distribution data; however, MLRK produced a lower prediction error and a higher variance explanation than GWRK, although the differences were small, in contrast to conclusions from similar studies.
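
    Regression kriging of the kind compared in this study adds a kriged residual surface to a regression trend on auxiliary covariates. The sketch below shows that two-step structure on synthetic data; it assumes the scikit-learn and pykrige packages, and the covariate arrays are placeholders for predictors such as DEM and NDVI.

      # Regression kriging sketch: an OLS trend on covariates plus ordinary
      # kriging of the trend residuals. All data here are synthetic placeholders.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from pykrige.ok import OrdinaryKriging

      rng = np.random.default_rng(0)
      coords = rng.uniform(0, 100, size=(200, 2))          # station x/y (synthetic)
      X = rng.normal(size=(200, 3))                        # covariates (e.g. DEM, NDVI, radiation)
      precip = 400 + X @ np.array([80.0, 30.0, 15.0]) + rng.normal(0, 20, 200)
      coords_new = rng.uniform(0, 100, size=(50, 2))       # prediction locations
      X_new = rng.normal(size=(50, 3))

      # Step 1: regression trend on the covariates.
      trend = LinearRegression().fit(X, precip)
      residuals = precip - trend.predict(X)

      # Step 2: ordinary kriging of the regression residuals.
      ok = OrdinaryKriging(coords[:, 0], coords[:, 1], residuals,
                           variogram_model="spherical")
      res_pred, _ = ok.execute("points", coords_new[:, 0], coords_new[:, 1])

      # Step 3: final prediction = regression trend + kriged residual.
      precip_pred = trend.predict(X_new) + res_pred
      print(precip_pred[:5])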

  8. [Spatial interpolation of soil organic matter using regression Kriging and geographically weighted regression Kriging].

    Science.gov (United States)

    Yang, Shun-hua; Zhang, Hai-tao; Guo, Long; Ren, Yan

    2015-06-01

    Relative elevation and stream power index were selected as auxiliary variables based on correlation analysis for mapping soil organic matter. Geographically weighted regression Kriging (GWRK) and regression Kriging (RK) were used for spatial interpolation of soil organic matter and compared with ordinary Kriging (OK), which acts as a control. The results indicated that soil organic matter was significantly positively correlated with relative elevation whilst it had a significantly negative correlation with stream power index. Semivariance analysis showed that both soil organic matter content and its residuals (including ordinary least square regression residual and GWR residual) had strong spatial autocorrelation. Interpolation accuracies by different methods were estimated based on a data set of 98 validation samples. Results showed that the mean error (ME), mean absolute error (MAE) and root mean square error (RMSE) of RK were respectively 39.2%, 17.7% and 20.6% lower than the corresponding values of OK, with a relative-improvement (RI) of 20.63. GWRK showed a similar tendency, having its ME, MAE and RMSE to be respectively 60.6%, 23.7% and 27.6% lower than those of OK, with a RI of 59.79. Therefore, both RK and GWRK significantly improved the accuracy of OK interpolation of soil organic matter due to their incorporation of auxiliary variables. In addition, GWRK performed obviously better than RK did in this study, and its improved performance should be attributed to the consideration of sample spatial locations.

  9. Alcohol outlet density and violence: A geographically weighted regression approach.

    Science.gov (United States)

    Cameron, Michael P; Cochrane, William; Gordon, Craig; Livingston, Michael

    2016-05-01

    We investigate the relationship between outlet density (of different types) and violence (as measured by police activity) across the North Island of New Zealand, specifically looking at whether the relationships vary spatially. We use New Zealand data at the census area unit (approximately suburb) level, on police-attended violent incidents and outlet density (by type of outlet), controlling for population density and local social deprivation. We employed geographically weighted regression to obtain both global average and locally specific estimates of the relationships between alcohol outlet density and violence. We find that bar and night club density, and licensed club density (e.g. sports clubs) have statistically significant and positive relationships with violence: an additional bar or night club is associated with nearly 5.3 additional violent events per year, and an additional licensed club with 0.8 additional violent events per year. These relationships do not show significant spatial variation. In contrast, the effects of off-licence density and restaurant/café density do exhibit significant spatial variation. However, the non-varying effects of bar and night club density are larger than the locally specific effects of other outlet types. The relationships between outlet density and violence vary significantly across space for off-licences and restaurants/cafés. These results suggest that in order to minimise alcohol-related harms, such as violence, locally specific policy interventions are likely to be necessary. [Cameron MP, Cochrane W, Gordon C, Livingston M. Alcohol outlet density and violence: A geographically weighted regression approach. Drug Alcohol Rev 2016;35:280-288]. © 2015 Australasian Professional Society on Alcohol and other Drugs.

  10. Adaptive Lasso for Poisson log-linear regression model

    Institute of Scientific and Technical Information of China (English)

    崔静; 郭鹏江; 夏志明

    2011-01-01

    Aim: To study the adaptive Lasso for the Poisson log-linear regression model. Methods: The methods of mathematical analysis and probability theory are used. Results: Under some conditions, the adaptive Lasso estimator for Poisson log-linear regression has the oracle properties of sparsity and asymptotic normality. Conclusion: The adaptive Lasso can effectively select variables for the Poisson log-linear regression model and estimate the variable coefficients.

  11. Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants

    Science.gov (United States)

    One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
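
    For concreteness, a weighted least squares fit of the Langmuir isotherm can be carried out as a weighted nonlinear regression, for example with scipy's curve_fit and per-point uncertainties. The data and error model below are synthetic illustrations, not the study's measurements.

      # Weighted least squares fit of the Langmuir isotherm S = Smax*k*C / (1 + k*C).
      # Synthetic sorption data; sigma encodes the (assumed) per-point error, so
      # the fit is weighted rather than ordinary least squares.
      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(c, smax, k):
          return smax * k * c / (1.0 + k * c)

      rng = np.random.default_rng(4)
      conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])      # solution P concentration
      sorbed = langmuir(conc, 300.0, 0.05) * (1 + rng.normal(0, 0.05, conc.size))
      sigma = 0.05 * sorbed                 # assumed error grows with the measured value

      params, cov = curve_fit(langmuir, conc, sorbed, p0=[200.0, 0.01], sigma=sigma)
      smax_hat, k_hat = params
      print(smax_hat, k_hat, np.sqrt(np.diag(cov)))   # estimates and standard errors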

  12. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model is the representation of relationship between independent variable and dependent variable. The dependent variable has categories used in the logistic regression model to calculate odds on. The logistic regression model for dependent variable has levels in the logistics regression model is ordinal. GWOLR model is an ordinal logistic regression model influenced the geographical location of the observation site. Parameters estimation in the model needed to determine the value of a population based on sample. The purpose of this research is to parameters estimation of GWOLR model using R software. Parameter estimation uses the data amount of dengue fever patients in Semarang City. Observation units used are 144 villages in Semarang City. The results of research get GWOLR model locally for each village and to know probability of number dengue fever patient categories.

  13. Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2007-01-01

    This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy including land surveying and satellite positioning applications. In these fields regression is often termed adjustment. The note also contains a couple of typical land surveying...... and satellite positioning application examples. In these application areas we are typically interested in the parameters in the model typically 2- or 3-D positions and not in predictive modelling which is often the main concern in other regression analysis applications. Adjustment is often used to obtain...

  14. Inverse probability weighted Cox regression for doubly truncated data.

    Science.gov (United States)

    Mandel, Micha; de Uña-Álvarez, Jacobo; Simon, David K; Betensky, Rebecca A

    2017-09-08

    Doubly truncated data arise when event times are observed only if they fall within subject-specific, possibly random, intervals. While non-parametric methods for survivor function estimation using doubly truncated data have been intensively studied, only a few methods for fitting regression models have been suggested, and only for a limited number of covariates. In this article, we present a method to fit the Cox regression model to doubly truncated data with multiple discrete and continuous covariates, and describe how to implement it using existing software. The approach is used to study the association between candidate single nucleotide polymorphisms and age of onset of Parkinson's disease. © 2017, The International Biometric Society.
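
    A hedged sketch of how an inverse probability weighted Cox fit can be run with existing software is shown below, assuming the lifelines package. The data are synthetic, and the column of observation probabilities is a placeholder: estimating those probabilities from the double-truncation mechanism is the substantive step developed in the article.

      # Inverse probability weighted Cox regression, assuming the lifelines package.
      # Synthetic data; "prob_observed" stands in for the estimated probability that
      # a subject's event time falls inside their truncation interval.
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(2)
      n = 400
      df = pd.DataFrame({
          "age_onset": rng.uniform(40, 80, n),        # observed ages of onset
          "event": np.ones(n, dtype=int),             # onset observed for everyone sampled
          "snp_genotype": rng.integers(0, 3, n),      # risk-allele count (0/1/2)
          "sex": rng.integers(0, 2, n),
          "prob_observed": rng.uniform(0.3, 0.9, n),  # placeholder selection probability
      })
      df["ipw"] = 1.0 / df["prob_observed"]           # inverse probability weight

      cph = CoxPHFitter()
      cph.fit(
          df[["age_onset", "event", "snp_genotype", "sex", "ipw"]],
          duration_col="age_onset",
          event_col="event",
          weights_col="ipw",
          robust=True,                                # sandwich errors with weights
      )
      cph.print_summary()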

  15. A Comparison between the Use of Beta Weights and Structure Coefficients in Interpreting Regression Results

    Science.gov (United States)

    Tong, Fuhui

    2006-01-01

    Background: An extensive body of research has favored the use of regression over other parametric analyses that are based on OVA. In the case of noteworthy regression results, researchers tend to explore the magnitude of beta weights for the respective predictors. Purpose: The purpose of this paper is to examine both beta weights and structure…

  16. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression.

    Science.gov (United States)

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-04-08

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that the road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicate that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are only clustered in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale.

  17. The application of Dynamic Linear Bayesian Models in hydrological forecasting: Varying Coefficient Regression and Discount Weighted Regression

    Science.gov (United States)

    Ciupak, Maurycy; Ozga-Zielinski, Bogdan; Adamowski, Jan; Quilty, John; Khalil, Bahaa

    2015-11-01

    A novel implementation of Dynamic Linear Bayesian Models (DLBM), using either a Varying Coefficient Regression (VCR) or a Discount Weighted Regression (DWR) algorithm was used in the hydrological modeling of annual hydrographs as well as 1-, 2-, and 3-day lead time stream flow forecasting. Using hydrological data (daily discharge, rainfall, and mean, maximum and minimum air temperatures) from the Upper Narew River watershed in Poland, the forecasting performance of DLBM was compared to that of traditional multiple linear regression (MLR) and more recent artificial neural network (ANN) based models. Model performance was ranked DLBM-DWR > DLBM-VCR > MLR > ANN for both annual hydrograph modeling and 1-, 2-, and 3-day lead forecasting, indicating that the DWR and VCR algorithms, operating in a DLBM framework, represent promising new methods for both annual hydrograph modeling and short-term stream flow forecasting.

  18. Robust geographically weighted regression with least absolute deviation method in case of poverty in Java Island

    Science.gov (United States)

    Afifah, Rawyanil; Andriyana, Yudhie; Jaya, I. G. N. Mindra

    2017-03-01

    Geographically Weighted Regression (GWR) is a development of Ordinary Least Squares (OLS) regression which is quite effective in estimating spatially non-stationary data. In GWR models, regression parameters are generated locally, so each observation has a unique regression coefficient. The parameter estimation process in GWR uses Weighted Least Squares (WLS), but when there are outliers in the data, parameter estimation with WLS produces estimators which are not efficient. Hence, this study uses a robust method called Least Absolute Deviation (LAD) to estimate the parameters of the GWR model in the case of poverty in Java Island. This study concludes that the GWR model with the LAD method has better performance.

  19. On the Relationship Between Confidence Sets and Exchangeable Weights in Multiple Linear Regression.

    Science.gov (United States)

    Pek, Jolynn; Chalmers, R Philip; Monette, Georges

    2016-01-01

    When statistical models are employed to provide a parsimonious description of empirical relationships, the extent to which strong conclusions can be drawn rests on quantifying the uncertainty in parameter estimates. In multiple linear regression (MLR), regression weights carry two kinds of uncertainty represented by confidence sets (CSs) and exchangeable weights (EWs). Confidence sets quantify uncertainty in estimation whereas the set of EWs quantify uncertainty in the substantive interpretation of regression weights. As CSs and EWs share certain commonalities, we clarify the relationship between these two kinds of uncertainty about regression weights. We introduce a general framework describing how CSs and the set of EWs for regression weights are estimated from the likelihood-based and Wald-type approach, and establish the analytical relationship between CSs and sets of EWs. With empirical examples on posttraumatic growth of caregivers (Cadell et al., 2014; Schneider, Steele, Cadell & Hemsworth, 2011) and on graduate grade point average (Kuncel, Hezlett & Ones, 2001), we illustrate the usefulness of CSs and EWs for drawing strong scientific conclusions. We discuss the importance of considering both CSs and EWs as part of the scientific process, and provide an Online Appendix with R code for estimating Wald-type CSs and EWs for k regression weights.

  20. Genetic parameters for various random regression models to describe the weight data of pigs

    NARCIS (Netherlands)

    Huisman, A.E.; Veerkamp, R.F.; Arendonk, van J.A.M.

    2002-01-01

    Various random regression models have been advocated for the fitting of covariance structures. It was suggested that a spline model would fit better to weight data than a random regression model that utilizes orthogonal polynomials. The objective of this study was to investigate which kind of random

  1. Genetic parameters for different random regression models to describe weight data of pigs

    NARCIS (Netherlands)

    Huisman, A.E.; Veerkamp, R.F.; Arendonk, van J.A.M.

    2001-01-01

    Various random regression models have been advocated for the fitting of covariance structures. It was suggested that a spline model would fit better to weight data than a random regression model that utilizes orthogonal polynomials. The objective of this study was to investigate which kind of random

  2. Poisson Autoregression

    DEFF Research Database (Denmark)

    Fokianos, Konstantinos; Rahbek, Anders Christian; Tjøstheim, Dag

    This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional...

  3. Weight change in control group participants in behavioural weight loss interventions: a systematic review and meta-regression study

    Directory of Open Access Journals (Sweden)

    Waters Lauren

    2012-08-01

    Full Text Available Abstract Background Unanticipated control group improvements have been observed in intervention trials targeting various health behaviours. This phenomenon has not been studied in the context of behavioural weight loss intervention trials. The purpose of this study is to conduct a systematic review and meta-regression of behavioural weight loss interventions to quantify control group weight change, and relate the size of this effect to specific trial and sample characteristics. Methods Database searches identified reports of intervention trials meeting the inclusion criteria. Data on control group weight change and possible explanatory factors were abstracted and analysed descriptively and quantitatively. Results 85 trials were reviewed and 72 were included in the meta-regression. While there was no change in control group weight overall, control groups receiving usual care lost 1 kg more than control groups that received no intervention beyond measurement. Conclusions There are several possible explanations why control group changes occur in intervention trials targeting other behaviours, but not for weight loss. Control group participation may prevent weight gain, although more research is needed to confirm this hypothesis.

  4. Non-crossing weighted kernel quantile regression with right censored data.

    Science.gov (United States)

    Bang, Sungwan; Eo, Soo-Heang; Cho, Yong Mee; Jhun, Myoungshic; Cho, HyungJun

    2016-01-01

    Regarding survival data analysis in regression modeling, multiple conditional quantiles are useful summary statistics to assess covariate effects on survival times. In this study, we consider an estimation problem of multiple nonlinear quantile functions with right censored survival data. To account for censoring in estimating a nonlinear quantile function, weighted kernel quantile regression (WKQR) has been developed by using the kernel trick and inverse-censoring-probability weights. However, the individually estimated quantile functions based on the WKQR often cross each other and consequently violate the basic properties of quantiles. To avoid this problem of quantile crossing, we propose the non-crossing weighted kernel quantile regression (NWKQR), which estimates multiple nonlinear conditional quantile functions simultaneously by enforcing the non-crossing constraints on kernel coefficients. The numerical results are presented to demonstrate the competitive performance of the proposed NWKQR over the WKQR.

  5. Intuitionistic Fuzzy Weighted Linear Regression Model with Fuzzy Entropy under Linear Restrictions.

    Science.gov (United States)

    Kumar, Gaurav; Bajaj, Rakesh Kumar

    2014-01-01

    In fuzzy set theory, it is well known that a triangular fuzzy number can be uniquely determined through its position and entropies. In the present communication, we extend this concept to the triangular intuitionistic fuzzy number, establishing its one-to-one correspondence with its position and entropies. Using the concept of fuzzy entropy, the estimators of the intuitionistic fuzzy regression coefficients have been obtained in the unrestricted regression model. An intuitionistic fuzzy weighted linear regression (IFWLR) model with some restrictions in the form of prior information has been considered. Further, the estimators of regression coefficients have been obtained with the help of fuzzy entropy for the restricted/unrestricted IFWLR model by assigning some weights in the distance function.

  6. The quest for conditional independence in prospectivity modeling: weights-of-evidence, boost weights-of-evidence, and logistic regression

    Science.gov (United States)

    Schaeben, Helmut; Semmler, Georg

    2016-09-01

    The objective of prospectivity modeling is prediction of the conditional probability of the presence T = 1 or absence T = 0 of a target T given favorable or prohibitive predictors B, or construction of a two-class 0,1 classification of T. A special case of logistic regression called weights-of-evidence (WofE) is geologists' favorite method of prospectivity modeling due to its apparent simplicity. However, the numerical simplicity is deceiving as it is implied by the severe mathematical modeling assumption of joint conditional independence of all predictors given the target. General weights of evidence are explicitly introduced which are as simple to estimate as conventional weights, i.e., by counting, but do not require conditional independence. Complementary to the regression view is the classification view on prospectivity modeling. Boosting is the construction of a strong classifier from a set of weak classifiers. From the regression point of view it is closely related to logistic regression. Boost weights-of-evidence (BoostWofE) was introduced into prospectivity modeling to counterbalance violations of the assumption of conditional independence even though relaxation of modeling assumptions with respect to weak classifiers was not the (initial) purpose of boosting. In the original publication of BoostWofE a fabricated dataset was used to "validate" this approach. Using the same fabricated dataset it is shown that BoostWofE cannot generally compensate for lacking conditional independence whatever the consecutive processing order of the predictors. Thus the alleged features of BoostWofE are disproved by way of counterexamples, while theoretical findings are confirmed that logistic regression including interaction terms can exactly compensate for violations of joint conditional independence if the predictors are indicators.
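
    As a concrete reminder of how the conventional weights are estimated "by counting", the sketch below computes the positive weight, negative weight, and contrast for a single binary predictor on synthetic data; the variable names and proportions are illustrative only.

      # Weights-of-evidence for one binary predictor B and binary target T,
      # estimated by counting cell frequencies (synthetic demo data).
      import numpy as np

      rng = np.random.default_rng(1)
      T = rng.integers(0, 2, size=10_000)                  # target: present/absent
      B = np.where(T == 1, rng.random(10_000) < 0.7,       # predictor more likely where T = 1
                           rng.random(10_000) < 0.3).astype(int)

      def weights_of_evidence(B, T):
          p_b1_t1 = np.mean(B[T == 1] == 1)   # P(B=1 | T=1)
          p_b1_t0 = np.mean(B[T == 0] == 1)   # P(B=1 | T=0)
          w_plus = np.log(p_b1_t1 / p_b1_t0)
          w_minus = np.log((1 - p_b1_t1) / (1 - p_b1_t0))
          contrast = w_plus - w_minus          # overall strength of the predictor
          return w_plus, w_minus, contrast

      print(weights_of_evidence(B, T))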

  7. Investigating the Performance of Alternate Regression Weights by Studying All Possible Criteria in Regression Models with a Fixed Set of Predictors

    Science.gov (United States)

    Waller, Niels; Jones, Jeff

    2011-01-01

    We describe methods for assessing all possible criteria (i.e., dependent variables) and subsets of criteria for regression models with a fixed set of predictors, x (where x is an n x 1 vector of independent variables). Our methods build upon the geometry of regression coefficients (hereafter called regression weights) in n-dimensional space. For a…

  8. Poisson Autoregression

    DEFF Research Database (Denmark)

    Fokianos, Konstantinos; Rahbek, Anders Christian; Tjøstheim, Dag

    This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional...... variance, implying an interpretation as an integer valued GARCH process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and a nonlinear function of past observations. As a particular example an exponential autoregressive Poisson model for time...... series is considered. Under geometric ergodicity the maximum likelihood estimators of the parameters are shown to be asymptotically Gaussian in the linear model. In addition we provide a consistent estimator of the asymptotic covariance, which is used in the simulations and the analysis of some...

  9. Poisson Autoregression

    DEFF Research Database (Denmark)

    Fokianos, Konstantinos; Rahbek, Anders Christian; Tjøstheim, Dag

    2009-01-01

    In this article we consider geometric ergodicity and likelihood-based inference for linear and nonlinear Poisson autoregression. In the linear case, the conditional mean is linked linearly to its past values, as well as to the observed values of the Poisson process. This also applies...... to the conditional variance, making possible interpretation as an integer-valued generalized autoregressive conditional heteroscedasticity process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and past observations. As a particular example, we consider...... an exponential autoregressive Poisson model for time series. Under geometric ergodicity, the maximum likelihood estimators are shown to be asymptotically Gaussian in the linear model. In addition, we provide a consistent estimator of their asymptotic covariance matrix. Our approach to verifying geometric...

  10. Poisson Autoregression

    DEFF Research Database (Denmark)

    Fokianos, Konstantinos; Rahbæk, Anders; Tjøstheim, Dag

    This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional...... variance, making an interpretation as an integer valued GARCH process possible. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and a nonlinear function of past observations. As a particular example an exponential autoregressive Poisson model...... for time series is considered. Under geometric ergodicity the maximum likelihood estimators of the parameters are shown to be asymptotically Gaussian in the linear model. In addition we provide a consistent estimator of their asymptotic covariance matrix. Our approach to verifying geometric ergodicity...

  11. STATISTICAL INFERENCES FOR VARYING-COEFFICIENT MODELS BASED ON LOCALLY WEIGHTED REGRESSION TECHNIQUE

    Institute of Scientific and Technical Information of China (English)

    梅长林; 张文修; 梁怡

    2001-01-01

    Some fundamental issues on statistical inference relating to varying-coefficient regression models are addressed and studied. An exact testing procedure is proposed for checking the goodness of fit of a varying-coefficient model fitted by the locally weighted regression technique versus an ordinary linear regression model. Also, an appropriate statistic for testing variation of the model parameters over the locations where the observations are collected is constructed, and a formal testing approach, which is essential to exploring spatial non-stationarity in geographical science, is suggested.

  12. Poisson Coordinates.

    Science.gov (United States)

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.

  13. Enhancement of partial robust M-regression (PRM) performance using Bisquare weight function

    Science.gov (United States)

    Mohamad, Mazni; Ramli, Norazan Mohamed; Ghani@Mamat, Nor Azura Md; Ahmad, Sanizah

    2014-09-01

    Partial Least Squares (PLS) regression is a popular regression technique for handling multicollinearity in low and high dimensional data which fits a linear relationship between sets of explanatory and response variables. Several robust PLS methods have been proposed to accommodate the classical PLS algorithms, which are easily affected by the presence of outliers. The most recent one is called partial robust M-regression (PRM). Unfortunately, the use of a monotonous weighting function in the PRM algorithm fails to assign appropriate and proper weights to large outliers according to their severity. Thus, in this paper, a modified partial robust M-regression is introduced to enhance the performance of the original PRM. A re-descending weight function, known as the Bisquare weight function, is recommended to replace the Fair function in the PRM. A simulation study is done to assess the performance of the modified PRM, and its efficiency is also tested on both contaminated and uncontaminated simulated data under various percentages of outliers, sample sizes and numbers of predictors.
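
    To make the contrast between the two weighting schemes concrete, the sketch below evaluates the monotonous Fair weight function against the re-descending Tukey bisquare weight function on a few standardized residuals. The tuning constants are common textbook defaults, not values taken from this paper.

      # Fair (monotonous) vs. Tukey bisquare (re-descending) weight functions.
      # Tuning constants c_fair and c_bisq are common defaults, not from the paper.
      import numpy as np

      def fair_weight(u, c_fair=1.4):
          # Weights decrease monotonically but never reach zero.
          return 1.0 / (1.0 + np.abs(u) / c_fair)

      def bisquare_weight(u, c_bisq=4.685):
          # Re-descending: residuals beyond c_bisq receive zero weight.
          w = (1.0 - (u / c_bisq) ** 2) ** 2
          return np.where(np.abs(u) <= c_bisq, w, 0.0)

      u = np.array([0.0, 1.0, 2.0, 5.0, 10.0])   # standardized residuals
      print(fair_weight(u))      # large outliers still keep some weight
      print(bisquare_weight(u))  # large outliers are fully down-weighted to 0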

  14. Weighted linear regression using D2H and D2 as the independent variables

    Science.gov (United States)

    Hans T. Schreuder; Michael S. Williams

    1998-01-01

    Several error structures for weighted regression equations used for predicting volume were examined for 2 large data sets of felled and standing loblolly pine trees (Pinus taeda L.). The generally accepted model with variance of error proportional to the value of the covariate squared ( D2H = diameter squared times height or D...
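
    The error structure described above, with error variance proportional to the square of the covariate, corresponds to a weighted least squares fit with weights 1/(D2H)^2. A brief sketch on synthetic data (variable names and coefficients are illustrative) using statsmodels:

      # Weighted linear regression of stem volume on D^2 * H with error variance
      # proportional to (D^2 * H)^2, i.e. WLS weights of 1 / (D^2 * H)^2.
      # Synthetic data; coefficients are illustrative only.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      d = rng.uniform(10, 60, 300)              # diameter
      h = rng.uniform(8, 35, 300)               # height
      d2h = d ** 2 * h
      volume = 0.002 * d2h * (1 + rng.normal(0, 0.08, 300))   # noise grows with d2h

      X = sm.add_constant(d2h)
      wls = sm.WLS(volume, X, weights=1.0 / d2h ** 2).fit()
      print(wls.params)          # intercept and slope
      print(wls.bse)             # standard errors under the assumed error structure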

  15. Product Design Time Forecasting by Kernel-Based Regression with Gaussian Distribution Weights

    Directory of Open Access Journals (Sweden)

    Zhi-Gen Shang

    2016-06-01

    Full Text Available There exist problems of small samples and heteroscedastic noise in design time forecasts. To solve them, a kernel-based regression with Gaussian distribution weights (GDW-KR) is proposed here. GDW-KR maintains a Gaussian distribution over weight vectors for the regression. It is applied to seek the least informative distribution from those that keep the target value within the confidence interval of the forecast value. GDW-KR inherits the benefits of Gaussian margin machines. By assuming a Gaussian distribution over weight vectors, it could simultaneously offer a point forecast and its confidence interval, thus providing more information about product design time. Our experiments with real examples verify the effectiveness and flexibility of GDW-KR.

  16. Regression coefficient-based scoring system should be used to assign weights to the risk index.

    Science.gov (United States)

    Mehta, Hemalkumar B; Mehta, Vinay; Girman, Cynthia J; Adhikari, Deepak; Johnson, Michael L

    2016-11-01

    Some previously developed risk scores contained a mathematical error in their construction: risk ratios were added to derive the weights used to construct a summary risk score. This study demonstrates the mathematical error and derives different versions of the Charlson comorbidity score (CCS) using regression coefficient-based and risk ratio-based scoring systems to further demonstrate the effects of incorrect weighting on performance in predicting mortality. This retrospective cohort study included elderly people from the Clinical Practice Research Datalink. Cox proportional hazards regression models were constructed for time to 1-year mortality. Weights were assigned to 17 comorbidities using regression coefficient-based and risk ratio-based scoring systems. Different versions of the CCS were compared using the Akaike information criterion (AIC), McFadden's adjusted R(2), and net reclassification improvement (NRI). Regression coefficient-based models (Beta, Beta10/integer, Beta/Schneeweiss, Beta/Sullivan) had lower AIC and higher R(2) compared with risk ratio-based models (HR/Charlson, HR/Johnson). Regression coefficient-based CCS reclassified more people into the correct strata (NRI range, 9.02-10.04) than risk ratio-based CCS (NRI range, 8.14-8.22). Previously developed risk scores contained an error in their construction, adding ratios instead of multiplying them; furthermore, as demonstrated here, adding ratios fails to work adequately from a practical standpoint. The CCS derived using regression coefficients fitted the data slightly better than the risk ratio-based scoring systems. Researchers should use a regression coefficient-based scoring system to develop a risk index, which is theoretically correct. Copyright © 2016 Elsevier Inc. All rights reserved.
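
    A small sketch of the contrast the authors draw, using made-up Cox coefficients: regression coefficient-based weights scale and round the log hazard ratios, whereas the criticized approach rounds and adds the hazard ratios themselves. The coefficients and the particular scaling rule are illustrative, not the paper's.

      # Regression coefficient-based vs. risk ratio-based comorbidity weights.
      # Coefficients (log hazard ratios) are made up for illustration.
      import numpy as np

      betas = np.array([0.25, 0.50, 0.90, 1.40])      # Cox log hazard ratios
      hrs = np.exp(betas)                             # hazard ratios

      # Coefficient-based integer weights: scale betas by the smallest beta and
      # round (one common convention; exact scaling differs between variants).
      coef_weights = np.round(betas / betas.min()).astype(int)

      # Risk ratio-based weights (the criticized approach): round and add the HRs.
      hr_weights = np.round(hrs).astype(int)

      # A patient's summary score is the sum of the weights of their comorbidities.
      has_condition = np.array([1, 0, 1, 1])
      print("coefficient-based score:", int(has_condition @ coef_weights))
      print("risk ratio-based score:", int(has_condition @ hr_weights))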

  17. Scaling Flux Tower Observations of Sensible Heat Flux Using Weighted Area-to-Area Regression Kriging

    Directory of Open Access Journals (Sweden)

    Maogui Hu

    2015-07-01

    Full Text Available Sensible heat flux (H) plays an important role in characterizations of land surface water and heat balance. There are various types of H measurement methods that depend on observation scale, from local-area-scale eddy covariance (EC) to regional-scale large aperture scintillometer (LAS) and remote sensing (RS) products. However, methods of converting one H scale to another to validate RS products are still open for question. A previous area-to-area regression kriging-based scaling method performed well in converting EC-scale H to LAS-scale H. However, the method does not consider the path-weighting function in the EC- to LAS-scale kriging with the regression residue, which inevitably brought about a bias estimation. In this study, a weighted area-to-area regression kriging (WATA RK) model is proposed to convert EC-scale H to LAS-scale H. It involves path-weighting functions of EC and LAS source areas in both regression and area kriging stages. Results show that WATA RK outperforms traditional methods in most cases, improving estimation accuracy. The method is considered to provide an efficient validation of RS H flux products.

  18. Real-time simultaneous myoelectric control by transradial amputees using linear and probability-weighted regression.

    Science.gov (United States)

    Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

    2015-08-01

    Regression-based prosthesis control using surface electromyography (EMG) has demonstrated real-time simultaneous control of multiple degrees of freedom (DOFs) in transradial amputees. However, these systems have been limited to control of wrist DOFs. Use of intramuscular EMG has shown promise for both wrist and hand control in able-bodied subjects, but to date has not been evaluated in amputee subjects. The objective of this study was to evaluate two regression-based simultaneous control methods using intramuscular EMG in transradial amputees and compare their performance to able-bodied subjects. Two transradial amputees and sixteen able-bodied subjects used fine wire EMG recorded from six forearm muscles to control three wrist/hand DOFs: wrist rotation, wrist flexion/extension, and hand open/close. Both linear regression and probability-weighted regression systems were evaluated in a virtual Fitts' Law test. Though both amputee subjects initially produced worse performance metrics than the able-bodied subjects, the amputee subject who completed multiple experimental blocks of the Fitts' law task demonstrated substantial learning. This subject's performance was within the range of able-bodied subjects by the end of the experiment. Both amputee subjects also showed improved performance when using probability-weighted regression for targets requiring use of only one DOF, and mirrored statistically significant differences observed with able-bodied subjects. These results indicate that amputee subjects may require more learning to achieve similar performance metrics as able-bodied subjects. These results also demonstrate that comparative findings between linear and probability-weighted regression with able-bodied subjects reflect performance differences when used by the amputee population.

  19. Radial basis function networks with linear interval regression weights for symbolic interval data.

    Science.gov (United States)

    Su, Shun-Feng; Chuang, Chen-Chia; Tao, C W; Jeng, Jin-Tsong; Hsiao, Chih-Ching

    2012-02-01

    This paper introduces a new structure of radial basis function networks (RBFNs) that can successfully model symbolic interval-valued data. In the proposed structure, to handle symbolic interval data, the Gaussian functions required in the RBFNs are modified to consider interval distance measure, and the synaptic weights of the RBFNs are replaced by linear interval regression weights. In the linear interval regression weights, the lower and upper bounds of the interval-valued data as well as the center and range of the interval-valued data are considered. In addition, in the proposed approach, two stages of learning mechanisms are proposed. In stage 1, an initial structure (i.e., the number of hidden nodes and the adjustable parameters of radial basis functions) of the proposed structure is obtained by the interval competitive agglomeration clustering algorithm. In stage 2, a gradient-descent kind of learning algorithm is applied to fine-tune the parameters of the radial basis function and the coefficients of the linear interval regression weights. Various experiments are conducted, and the average behavior of the root mean square error and the square of the correlation coefficient in the framework of a Monte Carlo experiment are considered as the performance index. The results clearly show the effectiveness of the proposed structure.

  20. Iterative Weighted Semiparametric Least Squares Estimation in Repeated Measurement Partially Linear Regression Models

    Institute of Scientific and Technical Information of China (English)

    Ge-mai Chen; Jin-hong You

    2005-01-01

    Consider a repeated measurement partially linear regression model with an unknown vector parameter β. Based on the semiparametric generalized least squares estimator (SGLSE) of β, we propose an iterative weighted semiparametric least squares estimator (IWSLSE) and show that it improves upon the SGLSE in terms of the asymptotic covariance matrix. An adaptive procedure is given to determine the number of iterations. We also show that when the number of replicates is less than or equal to two, the IWSLSE cannot improve upon the SGLSE. These results are generalizations of those in [2] to the case of semiparametric regressions.

  1. The Effect of a Sports Stadium on Housing Rents: An Application of Geographically Weighted Regression

    Directory of Open Access Journals (Sweden)

    Jorge Enrique Agudelo Torres

    2015-06-01

    Full Text Available Researchers have determined that real estate prices vary in continuous ways as a function of spatial characteristics. In this study we examine whether geographically weighted regression (GWR) provides different estimates of price effects around a sports stadium than more traditional regression techniques. An application of GWR with hedonic prices finds that the stadium has a negative external effect on housing rents that extends outward 560 meters, in contrast to the positive external effect on housing rents found using a conventional estimation technique.

  2. [Interpolation of daily mean temperature by using geographically weighted regression-Kriging].

    Science.gov (United States)

    Zhang, Guo-feng; Yang, Li-rong; Qu, Ming-kai; Chen, Hui-lin

    2015-05-01

    Air temperature is the input variable of numerous models in agriculture, hydrology, climate, and ecology. Currently, in study areas where the terrain is complex, methods taking into account correlation between temperature and environment variables and autocorrelation of regression residual (e.g., regression Kriging, RK) are mainly adopted to interpolate the temperature. However, such methods are based on the global ordinary least squares (OLS) regression technique, without taking into account the spatial nonstationary relationship of environment variables. Geographically weighted regression-Kriging (GWRK) is a kind of method that takes into account spatial nonstationarity relationship of environment variables and spatial autocorrelation of regression residuals of environment variables. In this study, according to the results of correlation and stepwise regression analysis, RK1 (covariates only included altitude), GWRK1 (covariates only included altitude), RK2 (covariates included latitude, altitude and closest distance to the seaside) and GWRK2 (covariates included altitude and closest distance to the seaside) were compared to predict the spatial distribution of mean daily air temperature on Hainan Island on December 18, 2013. The prediction accuracy was assessed using the maximum positive error, maximum negative error, mean absolute error and root mean squared error based on the 80 validation sites. The results showed that GWRK1's four assessment indices were all closest to 0. The fact that RK2 and GWRK2 were worse than RK1 and GWRK1 implied that correlation among covariates reduced model performance.

  3. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Science.gov (United States)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separate from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations of the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.

  4. Improving the Accuracy of Urban Environmental Quality Assessment Using Geographically-Weighted Regression Techniques

    OpenAIRE

    Kamil Faisal; Ahmed Shaker

    2017-01-01

    Urban Environmental Quality (UEQ) can be treated as a generic indicator that objectively represents the physical and socio-economic condition of the urban and built environment. The value of UEQ illustrates a sense of satisfaction to its population through assessing different environmental, urban and socio-economic parameters. This paper elucidates the use of the Geographic Information System (GIS), Principal Component Analysis (PCA) and Geographically-Weighted Regression (GWR) techniques to ...

  5. Determination of glucose concentration from near-infrared spectra using locally weighted partial least square regression.

    Science.gov (United States)

    Malik, Bilal; Benaissa, Mohammed

    2012-01-01

    This paper proposes the use of locally weighted partial least square regression (LW-PLSR) as an alternative multivariate calibration method for the prediction of glucose concentration from NIR spectra. The efficiency of the proposed model is validated in experiments carried out under non-controlled environment and sample conditions using mixtures composed of glucose, urea and triacetin. The collected data span the spectral region from 2100 nm to 2400 nm with a spectral resolution of 1 nm. The results show that the standard error of prediction (SEP) decreases to 23.85 mg/dL when using LW-PLSR, in comparison to SEP values of 49.40 mg/dL and 27.56 mg/dL obtained using Principal Component Regression (PCR) and Partial Least Square (PLS) regression, respectively.

  6. A Machine Learning Tool for Weighted Regressions in Time, Discharge, and Season

    Directory of Open Access Journals (Sweden)

    Alexander Maestre

    2014-01-01

    Full Text Available A new machine learning tool has been developed to classify water stations with similar water quality trends. The tool is based on the statistical method, Weighted Regressions in Time, Discharge, and Season (WRTDS), developed by the United States Geological Survey (USGS) to estimate daily concentrations of water constituents in rivers and streams based on continuous daily discharge data and discrete water quality samples collected at the same or nearby locations. WRTDS is based on parametric survival regressions using a jack-knife cross validation procedure that generates unbiased estimates of the prediction errors. One of the disadvantages of WRTDS is that it needs a large number of samples (n > 200) collected during at least two decades. In this article, the tool is used to evaluate the use of Boosted Regression Trees (BRT) as an alternative to the parametric survival regressions for water quality stations with a small number of samples. We describe the development of the machine learning tool as well as an evaluation comparison of the two methods, WRTDS and BRT. The purpose of the tool is to evaluate the reduction in variability of the estimates by clustering data from nearby stations with similar concentration and discharge characteristics. The results indicate that, using clustering, the predicted concentrations using BRT are in general higher than the observed concentrations. In addition, it appears that BRT generates higher sum of square residuals than the parametric survival regressions.
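
    A rough sketch of the boosted regression tree alternative with WRTDS-style predictors (decimal time, log discharge, and seasonal sine/cosine terms) modelling log concentration is given below; the synthetic data, column names, and hyperparameters are assumptions for illustration, not the tool's actual configuration.

      # Boosted regression trees with WRTDS-style predictors (time, discharge,
      # season) for log concentration. Synthetic data; illustrative settings only.
      import numpy as np
      import pandas as pd
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(5)
      dates = pd.date_range("2000-01-01", periods=250, freq="30D")
      discharge = rng.lognormal(mean=3.0, sigma=0.6, size=250)
      conc = 2.0 * discharge ** -0.3 * (1 + rng.normal(0, 0.1, 250))   # synthetic samples

      dec_time = dates.year + (dates.dayofyear - 1) / 365.25
      features = pd.DataFrame({
          "dec_time": dec_time,                         # long-term trend
          "log_q": np.log(discharge),                   # discharge effect
          "sin_season": np.sin(2 * np.pi * dec_time),   # seasonal terms
          "cos_season": np.cos(2 * np.pi * dec_time),
      })
      target = np.log(conc)

      brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01, max_depth=3)
      brt.fit(features, target)
      print(np.exp(brt.predict(features.iloc[:5])))     # back-transformed estimates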

  7. MAPPING OF ILLITERACY AND INFORMATION AND COMMUNICATION TECHNOLOGY INDICATORS USING GEOGRAPHICALLY WEIGHTED REGRESSION

    Directory of Open Access Journals (Sweden)

    Rokhana Dwi Bekti

    2014-01-01

    Full Text Available Geographically Weighted Regression (GWR is a technique that brings the framework of a simple regression model into a weighted regression model. Each parameter in this model is calculated at each point geographical location. The significantly parameter can be used for mapping. In this research GWR model use for mapping Information and Communication Technology (ICT indicators which influence on illiteracy. This problem was solved by estimation GWR model. The process was developing optimum bandwidth, weighted by kernel bisquare and parameter estimation. Mapping of ICT indicators was done by P-value. This research use data 29 regencies and 9 cities in East Java Province, Indonesia. GWR model compute the variables that significantly affect on illiteracy (α = 5% in some locations, such as percent households members with a mobile phone (x2, percent of household members who have computer (x3 and the percent of households who access the internet at school in the last month (x4. Ownership of mobile phone was significant (α = 5% at 20 locations. Ownership of computer and internet access were significant at 3 locations. Coefficient determination at all locations has R2 between 73.05-92.75%. The factors which affecting illiteracy in each location was very diverse. Mapping by P-value or critical area shows that ownership of mobile phone significantly affected at southern part of East Java. Then, the ownership of computer and internet access were significantly affected on illiteracy at northern area. All the coefficient regression in these locations was negative. It performs that if the number of mobile phone ownership, computer ownership and internet access were high then illiteracy will be decrease.

  8. The refinement of partial robust M-regression model using winsorized mean and Hampel weight function

    Science.gov (United States)

    Mohamad, Mazni; Mamat, Nor Azura Md Ghani @; Ramli, Norazan Mohamed; Ahmad, Sanizah

    2015-02-01

    Partial Robust M-Regression (PRM) is a robust Partial Least Squares (PLS) method using an M-estimator, with the multivariate L1 median and a monotonous weight function, known as the Fair function, in its algorithm. In many studies, the use of re-descending weight functions was much preferred to monotonous weight functions, because the latter often fail to assign proper weights to outliers according to their severity. With the intention of improving the performance of PRM, this study suggests slight modifications to PRM by using the winsorized mean and the Hampel function, which comes from the family of re-descending weight functions. The proposed method was applied to a real high dimensional dataset which was then modified to contain residual outliers as well as bad leverage points. The performance of PLS, PRM and the modified PRM was assessed by means of their standard error of prediction (SEP) values. Compared to classical PLS and PRM, an improved performance was observed for the proposed method.

  9. Childhood emotional problems and self-perceptions predict weight gain in a longitudinal regression model

    Directory of Open Access Journals (Sweden)

    Collier David

    2009-09-01

    Full Text Available Abstract Background Obesity and weight gain are correlated with psychological ill health. We predicted that childhood emotional problems and self-perceptions predict weight gain into adulthood. Methods Data on around 6,500 individuals was taken from the 1970 Birth Cohort Study. This sample was a representative sample of individuals born in the UK in one week in 1970. Body mass index was measured by a trained nurse at the age of 10 years, and self-reported at age 30 years. Childhood emotional problems were indexed using the Rutter B scale and self-report. Self-esteem was measured using the LAWSEQ questionnaire, whilst the CARALOC scale was used to measure locus of control. Results Controlling for childhood body mass index, parental body mass index, and social class, childhood emotional problems as measured by the Rutter scale predicted weight gain in women only (least squares regression N = 3,359; coefficient 0.004; P = 0.032). Using the same methods, childhood self-esteem predicted weight gain in both men and women (N = 6,526; coefficient 0.023; P < …), as did external locus of control (N = 6,522; coefficient 0.022; P < …). Conclusion Emotional problems, low self-esteem and an external locus of control in childhood predict weight gain into adulthood. This has important clinical implications as it highlights a direction for early intervention strategies that may contribute to efforts to combat the current obesity epidemic.

  10. Childhood emotional problems and self-perceptions predict weight gain in a longitudinal regression model.

    Science.gov (United States)

    Ternouth, Andrew; Collier, David; Maughan, Barbara

    2009-09-11

    Obesity and weight gain are correlated with psychological ill health. We predicted that childhood emotional problems and self-perceptions predict weight gain into adulthood. Data on around 6,500 individuals was taken from the 1970 Birth Cohort Study. This sample was a representative sample of individuals born in the UK in one week in 1970. Body mass index was measured by a trained nurse at the age of 10 years, and self-reported at age 30 years. Childhood emotional problems were indexed using the Rutter B scale and self-report. Self-esteem was measured using the LAWSEQ questionnaire, whilst the CARALOC scale was used to measure locus of control. Controlling for childhood body mass index, parental body mass index, and social class, childhood emotional problems as measured by the Rutter scale predicted weight gain in women only (least squares regression N = 3,359; coefficient 0.004; P = 0.032). Using the same methods, childhood self-esteem predicted weight gain in both men and women (N = 6,526; coefficient 0.023; P < …), as did external locus of control (N = 6,522; coefficient 0.022; P < …). Emotional problems, low self-esteem and an external locus of control in childhood predict weight gain into adulthood. This has important clinical implications as it highlights a direction for early intervention strategies that may contribute to efforts to combat the current obesity epidemic.

  11. Depth-weighted robust multivariate regression with application to sparse data

    KAUST Repository

    Dutta, Subhajit

    2017-04-05

    A robust method for multivariate regression is developed based on robust estimators of the joint location and scatter matrix of the explanatory and response variables using the notion of data depth. The multivariate regression estimator possesses desirable affine equivariance properties, achieves the best breakdown point of any affine equivariant estimator, and has an influence function which is bounded in both the response as well as the predictor variable. To increase the efficiency of this estimator, a re-weighted estimator based on robust Mahalanobis distances of the residual vectors is proposed. In practice, the method is more stable than existing methods that are constructed using subsamples of the data. The resulting multivariate regression technique is computationally feasible, and turns out to perform better than several popular robust multivariate regression methods when applied to various simulated data as well as a real benchmark data set. When the data dimension is quite high compared to the sample size it is still possible to use meaningful notions of data depth along with the corresponding depth values to construct a robust estimator in a sparse setting.
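
    The re-weighting step described above can be sketched generically: residual vectors from an initial fit are screened with robust Mahalanobis distances and the regression is refitted with the resulting weights. The sketch below uses a plain least-squares initial fit and scikit-learn's minimum covariance determinant estimator, not the authors' depth-based construction.

```python
# Generic re-weighting by robust Mahalanobis distances of multivariate residuals.
# Illustration only: the initial fit here is plain least squares, not the
# depth-based estimator used in the paper.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(2)
n, p, q = 200, 3, 2
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, q))
Y = X @ B_true + rng.normal(scale=0.3, size=(n, q))
Y[:10] += 8.0                                        # contaminate a few rows

B0, *_ = np.linalg.lstsq(X, Y, rcond=None)           # initial (non-robust) fit
resid = Y - X @ B0
d2 = MinCovDet(random_state=0).fit(resid).mahalanobis(resid)   # squared distances
w = (d2 <= chi2.ppf(0.975, df=q)).astype(float)      # drop flagged residual outliers

sw = np.sqrt(w)[:, None]
B1, *_ = np.linalg.lstsq(sw * X, sw * Y, rcond=None) # weighted refit
print(np.round(B1 - B_true, 2))                      # close to zero despite outliers
```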

  12. Implementation of Locally Weighted Projection Regression Network for Concurrency Control In Computer Aided Design

    Directory of Open Access Journals (Sweden)

    A.Muthukumaravel

    2011-08-01

    Full Text Available This paper presents an implementation of the locally weighted projection regression (LWPR) network method for concurrency control while developing the dial of a fork using Autodesk Inventor 2008. The LWPR learns the objects and the type of transactions to be done based on which node in the output layer of the network exceeds a threshold value. Learning stops once all the objects have been presented to the LWPR. Performance metrics are analyzed during testing. We have attempted to use LWPR for storing lock information when multiple users are working in Computer Aided Design (CAD). The memory requirements of the proposed method are minimal for processing locks during transactions.

  13. Relation between weight size and degree of over-fitting in neural network regression.

    Science.gov (United States)

    Hagiwara, Katsuyuki; Fukumizu, Kenji

    2008-01-01

    This paper investigates the relation between over-fitting and weight size in neural network regression. The over-fitting of a network to Gaussian noise is discussed. Using re-parametrization, a network function is represented as a bounded function g multiplied by a coefficient c. This is considered to bound the squared sum of the outputs of g at given inputs away from a positive constant delta(n), which restricts the weight size of a network and enables the probabilistic upper bound of the degree of over-fitting to be derived. This reveals that the order of the probabilistic upper bound can change depending on delta(n). By applying the bound to analyze the over-fitting behavior of one Gaussian unit, it is shown that the probability of obtaining an extremely small value for the width parameter in training is close to one when the sample size is large.

  14. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  15. High Adherence to Iron/Folic Acid Supplementation during Pregnancy Time among Antenatal and Postnatal Care Attendant Mothers in Governmental Health Centers in Akaki Kality Sub City, Addis Ababa, Ethiopia: Hierarchical Negative Binomial Poisson Regression

    Science.gov (United States)

    Gebreamlak, Bisratemariam; Dadi, Abel Fekadu; Atnafu, Azeb

    2017-01-01

    Background Iron deficiency during pregnancy is a risk factor for anemia, preterm delivery, and low birth weight. Iron/Folic Acid supplementation with optimal adherence can effectively prevent anemia in pregnancy. However, studies that address this area of adherence are very limited. Therefore, the current study was conducted to assess adherence and to identify factors associated with the number of Iron/Folic Acid supplements taken during pregnancy among mothers attending antenatal and postnatal care follow-up in Akaki Kality sub city. Methods An institution-based cross-sectional study was conducted on a sample of 557 pregnant women attending antenatal and postnatal care services. Systematic random sampling was used to select study subjects. The mothers were interviewed, and the collected data were cleaned, entered into Epi Info 3.5.1, and analyzed with R version 3.2.0. A hierarchical negative binomial Poisson regression model was fitted to identify the factors associated with the number of Iron/Folic Acid supplements taken. Adjusted incidence rate ratios (IRR) with 95% confidence intervals (CI) were computed to assess the strength and significance of the associations. Result More than 90% of the mothers took at least one Iron/Folic Acid pill per week during their pregnancy. Sixty percent of the mothers adhered (took four or more tablets per week) (95% CI, 56%-64.1%). Higher IRR of Iron/Folic Acid supplementation was observed among women who received health education, who were privately employed, who achieved secondary education, and who believed that Iron/Folic Acid supplements increase blood, whereas mothers who reported a side effect, who were from families with relatively better monthly income, and who took the supplement when sick were more likely to adhere. Conclusion Adherence to Iron/Folic Acid supplementation during pregnancy among mothers attending antenatal and postnatal care was found to be high. Activities that would address the

  16. Advantages of geographically weighted regression for modeling benthic substrate in two Greater Yellowstone Ecosystem streams

    Science.gov (United States)

    Sheehan, Kenneth R.; Strager, Michael P.; Welsh, Stuart

    2013-01-01

    Stream habitat assessments are commonplace in fish management, and often involve nonspatial analysis methods for quantifying or predicting habitat, such as ordinary least squares regression (OLS). Spatial relationships, however, often exist among stream habitat variables. For example, water depth, water velocity, and benthic substrate sizes within streams are often spatially correlated and may exhibit spatial nonstationarity or inconsistency in geographic space. Thus, analysis methods should address spatial relationships within habitat datasets. In this study, OLS and a recently developed method, geographically weighted regression (GWR), were used to model benthic substrate from water depth and water velocity data at two stream sites within the Greater Yellowstone Ecosystem. For data collection, each site was represented by a grid of 0.1 m2 cells, where actual values of water depth, water velocity, and benthic substrate class were measured for each cell. Accuracies of regressed substrate class data by OLS and GWR methods were calculated by comparing maps, parameter estimates, and the coefficient of determination (r2). For analysis of data from both sites, Akaike’s Information Criterion corrected for sample size indicated the best approximating model for the data resulted from GWR and not from OLS. Adjusted r2 values also supported GWR as a better approach than OLS for prediction of substrate. This study supports GWR (a spatial analysis approach) over nonspatial OLS methods for prediction of habitat for stream habitat assessments.

  17. Application of Geographically Weighted Regression for Vulnerable Area Mapping of Leptospirosis in Bantul District

    Directory of Open Access Journals (Sweden)

    Prima Widayani

    2017-01-01

    Full Text Available Abstract Geographically Weighted Regression (GWR) is a regression model developed for modeling data with a continuous response variable while taking the spatial or location aspect into account. Leptospirosis cases have occurred in several regions of Indonesia, including Bantul District, Special Region of Yogyakarta. The purposes of this study are to determine the local and global variables for a vulnerability model of Leptospirosis, to determine the best type of weighting function, and to produce a vulnerability map of Leptospirosis. ALOS satellite imagery was used as primary data to obtain settlement and paddy field areas. The other variables, obtained from secondary data, are the percentage of the population by age, flood risk, and the number of health facilities. The determinant variables with local effects are flood risk, health facilities, the percentage of the population aged 25-50 years, and the percentage of settlement area, while the independent variable with a global effect is the percentage of paddy field area. The vulnerability map of Leptospirosis was produced from the best GWR model, which used the Fixed Bisquare weighting function. There are three vulnerability areas of Leptospirosis: the high vulnerability area is located in the middle of Bantul District, while the medium and low vulnerability areas show a clustered pattern at the edges of Bantul District.

  18. Drop-Weight Impact Test on U-Shape Concrete Specimens with Statistical and Regression Analyses

    Directory of Open Access Journals (Sweden)

    Xue-Chao Zhu

    2015-09-01

    Full Text Available According to the principle and method of the drop-weight impact test, the impact resistance of concrete was measured using self-designed U-shaped specimens and a newly designed drop-weight impact test apparatus. A series of drop-weight impact tests were carried out with four different masses of drop hammers (0.875, 0.8, 0.675 and 0.5 kg). The test results show that the impact resistance results fail to follow a normal distribution. As expected, U-shaped specimens can predetermine the location of the cracks very well, and it is also easy to record crack propagation during the test. The maximum coefficient of variation in this study is 31.2%, which is lower than the values obtained from the American Concrete Institute (ACI) impact tests in the literature. By regression analysis, the linear relationship between the first-crack and ultimate-failure impact resistance is good. It can be suggested that a minimum number of specimens is required to reliably measure the properties of the material, based on the observed levels of variation.

  19. Error structure of enzyme kinetic experiments. Implications for weighting in regression analysis of experimental data.

    Science.gov (United States)

    Askelöf, P; Korsfeldt, M; Mannervik, B

    1976-10-01

    Knowledge of the error structure of a given set of experimental data is a necessary prerequisite for incisive analysis and for discrimination between alternative mathematical models of the data set. A reaction system consisting of glutathione S-transferase A (glutathione S-aryltransferase), glutathione, and 3,4-dichloro-1-nitrobenzene was investigated under steady-state conditions. It was found that the experimental error increased with initial velocity, v, and that the variance (estimated by replicates) could be described by a polynomial in v, Var(v) = K0 + K1·v + K2·v^2, or by a power function, Var(v) = K0 + K1·v^K2. These equations were good approximations irrespective of whether different v values were generated by changing substrate or enzyme concentrations. The selection of these models was based mainly on experiments involving varying enzyme concentration, which, unlike v, is not considered a stochastic variable. Different models of the variance, expressed as functions of enzyme concentration, were examined by regression analysis, and the models could then be transformed to functions in which velocity is substituted for enzyme concentration owing to the proportionality between these variables. Thus, neither the absolute nor the relative error was independent of velocity, a result previously obtained for glutathione reductase in this laboratory [BioSystems 7, 101-119 (1975)]. If the experimental errors or velocities were standardized by division by their corresponding mean velocity value, they showed a normal (Gaussian) distribution provided that the coefficient of variation was approximately constant for the data considered. Furthermore, it was established that the errors in the independent variables (enzyme and substrate concentrations) were small in comparison with the error in the velocity determinations. For weighting in regression analysis, the inverse of the local variance at each experimental point should be used. It was found that the
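
    As an illustration of the weighting recommendation above, the sketch below fits an empirical variance function of the form Var(v) = K0 + K1·v^K2 to replicate variances and converts it into inverse-variance weights; the data and starting values are synthetic, not the enzyme kinetic measurements analysed in the paper.

```python
# Fit an empirical variance function Var(v) = K0 + K1 * v**K2 to replicate
# variances and turn it into inverse-variance weights (synthetic data; the K
# values are illustrative, not the ones reported in the paper).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
v_mean = np.linspace(0.5, 10.0, 20)                            # mean velocity per condition
true_var = 0.01 + 0.002 * v_mean ** 1.8
v_var = true_var * rng.chisquare(df=4, size=v_mean.size) / 4   # replicate variances

def var_model(v, k0, k1, k2):
    return k0 + k1 * v ** k2

(k0, k1, k2), _ = curve_fit(var_model, v_mean, v_var, p0=[0.01, 0.001, 2.0])
weights = 1.0 / var_model(v_mean, k0, k1, k2)                  # weights for a WLS fit
print(np.round([k0, k1, k2], 4))
```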

  20. Age determination of the nuclear stellar population of Active Galactic Nuclei using Locally Weighted Regression

    CERN Document Server

    Estrada-Piedra, T; Terlevich, R J; Fuentes, O; Terlevich, E; Estrada-Piedra, Trilce; Torres-Papaqui, Juan Pablo; Terlevich, Roberto; Fuentes, Olac; Terlevich, Elena

    2003-01-01

    We present a new technique to segregate old and young stellar populations in galactic spectra using machine learning methods. We used an ensemble of classifiers; each classifier in the ensemble specializes in young or old populations and was trained with locally weighted regression and tested using ten-fold cross-validation. Since the relevant information is concentrated in certain regions of the spectra, we used the method of sequential floating backward selection offline for feature selection. The application to Seyfert galaxies proved that this technique is very insensitive to dilution by the Active Galactic Nucleus (AGN) continuum. Compared with exhaustive search, we concluded that both methods are similar in terms of accuracy, but the machine learning method is faster by about two orders of magnitude.

  1. Comparison of Fitting Results of Poisson Regression and Negative Binomial Regression Models for Data of Cytokinesis-block Micronucleus Test

    Institute of Scientific and Technical Information of China (English)

    郑辉烈; 王增珍; 俞慧强

    2011-01-01

    Objective To compare the fitting results of the Poisson regression model and the negative binomial regression model for data of the cytokinesis-block micronucleus test (the number of micronucleated lymphocytes per 1 000 binucleated lymphocytes), and to provide a basis for the statistical analysis of such data. Methods Using the log likelihood function, the deviance, the Pearson χ2 statistic and a cluster index, the fitting results of the Poisson regression model and the negative binomial regression model for the micronucleus test data were evaluated. Results The ratio of the log likelihood function to the degrees of freedom for negative binomial regression (-2.51) was greater than that for Poisson regression (-3.52). The ratios of the deviance and Pearson χ2 goodness-of-fit statistics to the corresponding degrees of freedom for negative binomial regression (1.16 and 1.07) were less than those for Poisson regression. A likelihood ratio test of the cluster index (H0: k = 0) showed that the cluster index was significantly different from zero (χ2 = 1 160.42, P < 0.001). Conclusion For cytokinesis-block micronucleus test data, the negative binomial regression model fits better than the Poisson regression model.
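
    The goodness-of-fit comparison described above can be reproduced in outline with standard GLM software. The sketch below fits Poisson and negative binomial models to synthetic overdispersed counts and reports the likelihood, deviance, and Pearson χ2 ratios; the negative binomial dispersion parameter is fixed here for simplicity rather than estimated by maximum likelihood.

```python
# Goodness-of-fit comparison on synthetic overdispersed counts (statsmodels).
# The negative binomial dispersion alpha is fixed for simplicity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)
y = rng.negative_binomial(n=2, p=2.0 / (2.0 + mu))   # overdispersed counts, mean mu
X = sm.add_constant(x)

for name, family in [("Poisson", sm.families.Poisson()),
                     ("NegBin ", sm.families.NegativeBinomial(alpha=0.5))]:
    res = sm.GLM(y, X, family=family).fit()
    print(f"{name}: llf/df = {res.llf / res.df_resid:.3f}, "
          f"deviance/df = {res.deviance / res.df_resid:.3f}, "
          f"Pearson chi2/df = {res.pearson_chi2 / res.df_resid:.3f}")
```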

  2. Geographically weighted regression as a generalized Wombling to detect barriers to gene flow.

    Science.gov (United States)

    Diniz-Filho, José Alexandre Felizola; Soares, Thannya Nascimento; de Campos Telles, Mariana Pires

    2016-08-01

    Barriers to gene flow play an important role in structuring populations, especially in human-modified landscapes, and several methods have been proposed to detect such barriers. However, most applications of these methods require a relatively large number of individuals or populations distributed in space, connected by vertices from Delaunay or Gabriel networks. Here we show, using both simulated and empirical data, a new application of geographically weighted regression (GWR) to detect such barriers, modeling the genetic variation as a "local" linear function of geographic coordinates (latitude and longitude). In the GWR, standard regression statistics, such as R2 and slopes, are estimated for each sampling unit and thus are mapped. Peaks in these local statistics are then expected close to the barriers if genetic discontinuities exist, capturing a higher rate of population differentiation among neighboring populations. Isolation-by-Distance simulations on a longitudinally warped lattice revealed that higher local slopes from GWR coincide with the barrier detected with the Monmonier algorithm. Even with a relatively small effect of the barrier, the power of local GWR in detecting the east-west barriers was higher than 95%. We also analyzed empirical data of genetic differentiation among tree populations of Dipteryx alata and Eugenia dysenterica from the Brazilian Cerrado. GWR was applied to the principal coordinate of the pairwise FST matrix based on microsatellite loci. In both simulated and empirical data, the GWR results were consistent with discontinuities detected by the Monmonier algorithm, as well as with previous explanations for the spatial patterns of genetic differentiation for the two species. Our analyses reveal how this new application of GWR can be viewed as a generalized Wombling in a continuous space and be a useful approach to detect barriers and discontinuities to gene flow.

  3. Testing the water-energy theory on American palms (Arecaceae) using geographically weighted regression.

    Science.gov (United States)

    Eiserhardt, Wolf L; Bjorholm, Stine; Svenning, Jens-Christian; Rangel, Thiago F; Balslev, Henrik

    2011-01-01

    Water and energy have emerged as the best contemporary environmental correlates of broad-scale species richness patterns. A corollary hypothesis of water-energy dynamics theory is that the influence of water decreases and the influence of energy increases with absolute latitude. We report the first use of geographically weighted regression for testing this hypothesis on a continuous species richness gradient that is entirely located within the tropics and subtropics. The dataset was divided into northern and southern hemispheric portions to test whether predictor shifts are more pronounced in the less oceanic northern hemisphere. American palms (Arecaceae, n = 547 spp.), whose species richness and distributions are known to respond strongly to water and energy, were used as a model group. The ability of water and energy to explain palm species richness was quantified locally at different spatial scales and regressed on latitude. Clear latitudinal trends in agreement with water-energy dynamics theory were found, but the results did not differ qualitatively between hemispheres. Strong inherent spatial autocorrelation in local modeling results and collinearity of water and energy variables were identified as important methodological challenges. We overcame these problems by using simultaneous autoregressive models and variation partitioning. Our results show that the ability of water and energy to explain species richness changes not only across large climatic gradients spanning tropical to temperate or arctic zones but also within megathermal climates, at least for strictly tropical taxa such as palms. This finding suggests that the predictor shifts are related to gradual latitudinal changes in ambient energy (related to solar flux input) rather than to abrupt transitions at specific latitudes, such as the occurrence of frost.

  4. Testing the water-energy theory on American palms (Arecaceae) using geographically weighted regression.

    Directory of Open Access Journals (Sweden)

    Wolf L Eiserhardt

    Full Text Available Water and energy have emerged as the best contemporary environmental correlates of broad-scale species richness patterns. A corollary hypothesis of water-energy dynamics theory is that the influence of water decreases and the influence of energy increases with absolute latitude. We report the first use of geographically weighted regression for testing this hypothesis on a continuous species richness gradient that is entirely located within the tropics and subtropics. The dataset was divided into northern and southern hemispheric portions to test whether predictor shifts are more pronounced in the less oceanic northern hemisphere. American palms (Arecaceae, n = 547 spp.), whose species richness and distributions are known to respond strongly to water and energy, were used as a model group. The ability of water and energy to explain palm species richness was quantified locally at different spatial scales and regressed on latitude. Clear latitudinal trends in agreement with water-energy dynamics theory were found, but the results did not differ qualitatively between hemispheres. Strong inherent spatial autocorrelation in local modeling results and collinearity of water and energy variables were identified as important methodological challenges. We overcame these problems by using simultaneous autoregressive models and variation partitioning. Our results show that the ability of water and energy to explain species richness changes not only across large climatic gradients spanning tropical to temperate or arctic zones but also within megathermal climates, at least for strictly tropical taxa such as palms. This finding suggests that the predictor shifts are related to gradual latitudinal changes in ambient energy (related to solar flux input) rather than to abrupt transitions at specific latitudes, such as the occurrence of frost.

  5. Muscle oxygenation measurement in humans by noninvasive optical spectroscopy and Locally Weighted Regression.

    Science.gov (United States)

    Arakaki, Lorilee S L; Schenkman, Kenneth A; Ciesielski, Wayne A; Shaver, Jeremy M

    2013-06-27

    We have developed a method to make real-time, continuous, noninvasive measurements of muscle oxygenation (Mox) from the surface of the skin. A key development was measurement in both the visible and near infrared (NIR) regions. Measurement of both oxygenated and deoxygenated myoglobin and hemoglobin resulted in a more accurate measurement of Mox than could be achieved with measurement of only the deoxygenated components, as in traditional near-infrared spectroscopy (NIRS). Using the second derivative with respect to wavelength reduced the effects of scattering on the spectra and also made oxygenated and deoxygenated forms more distinguishable from each other. Selecting spectral bands where oxygenated and deoxygenated forms absorb filtered out noise and spectral features unrelated to Mox. NIR and visible bands were scaled relative to each other in order to correct for errors introduced by normalization. Multivariate Curve Resolution (MCR) was used to estimate Mox from spectra within each data set collected from healthy subjects. A Locally Weighted Regression (LWR) model was built from calibration set spectra and associated Mox values from 20 subjects using 2562 spectra. LWR and Partial Least Squares (PLS) allow accurate measurement of Mox despite variations in skin pigment or fat layer thickness in different subjects. The method estimated Mox in five healthy subjects with an RMSE of 5.4%.
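
    A generic locally weighted regression predictor, reduced to one dimension, is sketched below to illustrate the LWR idea: a kernel-weighted linear fit is solved around each query point. This is a schematic only; the calibration model in the paper operates on multichannel second-derivative spectra, not on the synthetic data used here.

```python
# Generic locally weighted regression: a Gaussian-kernel-weighted linear fit
# around each query point (1-D synthetic data; schematic of the idea only).
import numpy as np

rng = np.random.default_rng(5)
x_train = np.sort(rng.uniform(0.0, 10.0, 80))
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, x_train.size)

def lwr_predict(x_query, x_train, y_train, tau=0.8):
    """Predict at each query point with a Gaussian kernel of width tau."""
    X = np.column_stack([np.ones_like(x_train), x_train])
    preds = []
    for xq in np.atleast_1d(x_query):
        w = np.exp(-((x_train - xq) ** 2) / (2.0 * tau ** 2))
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
        preds.append(beta[0] + beta[1] * xq)
    return np.array(preds)

print(np.round(lwr_predict([2.0, 5.0, 8.0], x_train, y_train), 3))
```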

  6. Fisher Scoring Method for Parameter Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Widyaningsih, Purnami; Retno Sari Saputro, Dewi; Nugrahani Putri, Aulia

    2017-06-01

    The GWOLR model combines the geographically weighted regression (GWR) and ordinal logistic regression (OLR) models. Its parameter estimation employs maximum likelihood estimation. Such parameter estimation, however, yields a difficult-to-solve system of nonlinear equations, and therefore a numerical approximation approach is required. The iterative approximation approach, in general, uses the Newton-Raphson (NR) method. The NR method has a disadvantage: its Hessian matrix has to be recomputed from the second derivatives at every iteration, so it does not always produce converging results. With regard to this matter, the NR method is modified by replacing its Hessian matrix with the Fisher information matrix, which is termed Fisher scoring (FS). The present research seeks to derive the GWOLR model parameter estimation using the Fisher scoring method and to apply the estimation to data on the level of vulnerability to Dengue Hemorrhagic Fever (DHF) in Semarang. The research concludes that health facilities give the greatest contribution to the probability of the number of DHF sufferers in both villages. Based on the number of sufferers, the IR category of DHF in both villages can be determined.
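
    For the plain binary logistic model, the Fisher scoring update can be written in a few lines, since the expected and observed information coincide for the canonical logit link. The sketch below uses synthetic data and omits everything that makes GWOLR specific (ordinal categories and geographic weights); it only illustrates the substitution of the Fisher information matrix for the Hessian.

```python
# Fisher scoring for plain binary logistic regression on synthetic data. For the
# canonical logit link the expected and observed information coincide, so this is
# also the Newton-Raphson/IRLS update; GWOLR adds ordinal categories and
# geographic weights on top of this idea.
import numpy as np

rng = np.random.default_rng(6)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([-0.5, 1.2, -0.8])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    score = X.T @ (y - p)                              # gradient of the log-likelihood
    fisher_info = X.T @ (X * (p * (1.0 - p))[:, None]) # expected information matrix
    step = np.linalg.solve(fisher_info, score)
    beta += step
    if np.max(np.abs(step)) < 1e-8:
        break

print(np.round(beta, 3))   # should be close to beta_true
```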

  7. Education-Based Gaps in eHealth: A Weighted Logistic Regression Approach.

    Science.gov (United States)

    Amo, Laura

    2016-10-12

    Persons with a college degree are more likely to engage in eHealth behaviors than persons without a college degree, compounding the health disadvantages of undereducated groups in the United States. However, the extent to which quality of recent eHealth experience reduces the education-based eHealth gap is unexplored. The goal of this study was to examine how eHealth information search experience moderates the relationship between college education and eHealth behaviors. Based on a nationally representative sample of adults who reported using the Internet to conduct the most recent health information search (n=1458), I evaluated eHealth search experience in relation to the likelihood of engaging in different eHealth behaviors. I examined whether Internet health information search experience reduces the eHealth behavior gaps among college-educated and noncollege-educated adults. Weighted logistic regression models were used to estimate the probability of different eHealth behaviors. College education was significantly positively related to the likelihood of 4 eHealth behaviors. In general, eHealth search experience was negatively associated with health care behaviors, health information-seeking behaviors, and user-generated or content sharing behaviors after accounting for other covariates. Whereas Internet health information search experience has narrowed the education gap in terms of likelihood of using email or Internet to communicate with a doctor or health care provider and likelihood of using a website to manage diet, weight, or health, it has widened the education gap in the instances of searching for health information for oneself, searching for health information for someone else, and downloading health information on a mobile device. The relationship between college education and eHealth behaviors is moderated by Internet health information search experience in different ways depending on the type of eHealth behavior. After controlling for college

  8. AN INVESTIGATION OF LOCAL EFFECTS ON SURFACE WARMING WITH GEOGRAPHICALLY WEIGHTED REGRESSION (GWR)

    Directory of Open Access Journals (Sweden)

    Y. Xue

    2012-07-01

    Full Text Available Urban warming is sensitive to the nature (thermal properties, including albedo, water content, heat capacity and thermal conductivity) and the placement (surface geometry or urban topography) of the urban surface. In the literature, the spatial dependence and heterogeneity of the urban thermal landscape are widely observed based on thermal infrared remote sensing within the urban environment. Urban surface warming is conceived as a major contributor to urban warming, so the study of urban surface warming is important for probing the problem of urban warming. Urban thermal landscape studies take advantage of the continuous surface derived from thermal infrared remote sensing at the landscape scale; the detailed variation of local surface temperature can be measured and analyzed through systematic investigation. At the same time, urban environmental factors can be quantified with remote sensing and GIS techniques. This enables a systematic investigation of the urban thermal landscape in which a link is established between the local environmental setting and surface temperature variation. The goal of this research is to utilize Geographically Weighted Regression (GWR) to analyze the spatial relationship between urban form and surface temperature variation in order to clarify the local effects on surface warming, and moreover to reveal possible dynamics in the local influences of environmental indicators on the variation of local surface temperature across space and time. In this research, GWR analysis proved that the spatial variation in the relationships between environmental setting and surface temperature was significant under a Monte Carlo significance test and distinct between day and night. Comparatively, GWR facilitated site-specific investigation based on a local statistical technique. The inference based on the GWR model provided enriched information regarding the spatial variation of local environmental effects on surface temperature variation which

  9. Large deviations for fractional Poisson processes

    CERN Document Server

    Beghin, Luisa

    2012-01-01

    We present large deviation results for two versions of fractional Poisson processes: the main version which is a renewal process, and the alternative version where all the random variables are weighted Poisson distributed. We also present a sample path large deviation result for suitably normalized counting processes; finally we show how this result can be applied to the two versions of fractional Poisson processes considered in this paper.

  10. Robust Principal Component Analysis and Geographically Weighted Regression: Urbanization in the Twin Cities Metropolitan Area of Minnesota

    OpenAIRE

    Ghosh, Debarchana; Manson, Steven M.

    2008-01-01

    In this paper, we present a hybrid approach, robust principal component geographically weighted regression (RPCGWR), in examining urbanization as a function of both extant urban land use and the effect of social and environmental factors in the Twin Cities Metropolitan Area (TCMA) of Minnesota. We used remotely sensed data to treat urbanization via the proxy of impervious surface. We then integrated two different methods, robust principal component analysis (RPCA) and geographically weighted ...

  11. PENERAPAN REGRESI BINOMIAL NEGATIF UNTUK MENGATASI OVERDISPERSI PADA REGRESI POISSON

    Directory of Open Access Journals (Sweden)

    PUTU SUSAN PRADAWATI

    2013-09-01

    Full Text Available Poisson regression is used to analyze count data that are Poisson distributed. Poisson regression analysis requires the state of equidispersion, in which the mean value of the response variable is equal to its variance. However, there are deviations in which the variance of the response variable is greater than the mean; this is called overdispersion. If overdispersion occurs and Poisson regression analysis is used, then underestimated standard errors will be obtained. Negative binomial regression can handle overdispersion because it contains a dispersion parameter. From simulated data exhibiting overdispersion in the Poisson regression model, it was found that the negative binomial regression model was better than the Poisson regression model.

  13. Interpreting Regression Results: beta Weights and Structure Coefficients are Both Important.

    Science.gov (United States)

    Thompson, Bruce

    Various realizations have led to less frequent use of the "OVA" methods (analysis of variance--ANOVA--among others) and to more frequent use of general linear model approaches such as regression. However, too few researchers understand all the various coefficients produced in regression. This paper explains these coefficients and their…

  14. Modeling DNA affinity landscape through two-round support vector regression with weighted degree kernels

    KAUST Repository

    Wang, Xiaolei

    2014-12-12

    Background: A quantitative understanding of interactions between transcription factors (TFs) and their DNA binding sites is key to the rational design of gene regulatory networks. Recent advances in high-throughput technologies have enabled high-resolution measurements of protein-DNA binding affinity. Importantly, such experiments revealed the complex nature of TF-DNA interactions, whereby the effects of nucleotide changes on the binding affinity were observed to be context dependent. A systematic method to give high-quality estimates of such complex affinity landscapes is, thus, essential to the control of gene expression and the advance of synthetic biology. Results: Here, we propose a two-round prediction method that is based on support vector regression (SVR) with weighted degree (WD) kernels. In the first round, a WD kernel with shifts and mismatches is used with SVR to detect the importance of subsequences with different lengths at different positions. The subsequences identified as important in the first round are then fed into a second WD kernel to fit the experimentally measured affinities. To our knowledge, this is the first attempt to increase the accuracy of the affinity prediction by applying two rounds of string kernels and by identifying a small number of crucial k-mers. The proposed method was tested by predicting the binding affinity landscape of Gcn4p in Saccharomyces cerevisiae using datasets from HiTS-FLIP. Our method explicitly identified important subsequences and showed significant performance improvements when compared with other state-of-the-art methods. Based on the identified important subsequences, we discovered two surprisingly stable 10-mers and one sensitive 10-mer which were not reported before. Further test on four other TFs in S. cerevisiae demonstrated the generality of our method. Conclusion: We proposed in this paper a two-round method to quantitatively model the DNA binding affinity landscape. Since the ability to modify
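
    A plain weighted degree kernel between two equal-length sequences can be sketched compactly: matching k-mers at corresponding positions are counted with order-dependent weights. The sketch below omits the shift and mismatch extensions used in the paper, and the order weights are a common default choice rather than the ones tuned by the authors.

```python
# Plain weighted degree (WD) kernel between two equal-length DNA sequences:
# matching k-mers at corresponding positions are counted with order-dependent
# weights. The shift/mismatch extensions used in the paper are omitted, and the
# order weights below are a common default, not the authors' tuned values.
def wd_kernel(s, t, d_max=3):
    assert len(s) == len(t), "sequences must have equal length"
    value = 0.0
    for d in range(1, d_max + 1):
        beta = 2.0 * (d_max - d + 1) / (d_max * (d_max + 1))   # weight for order d
        for i in range(len(s) - d + 1):
            if s[i:i + d] == t[i:i + d]:
                value += beta
    return value

print(wd_kernel("ACGTACGT", "ACGAACGT"))   # more shared k-mers -> larger value
```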

  15. A geographically weighted regression model for geothermal potential assessment in mediterranean cultural landscape

    Science.gov (United States)

    D'Arpa, S.; Zaccarelli, N.; Bruno, D. E.; Leucci, G.; Uricchio, V. F.; Zurlini, G.

    2012-04-01

    Geothermal heat can be used directly in many applications (agro-industrial processes, sanitary hot water production, heating/cooling systems, etc.). These applications respond to energy and environmental sustainability criteria, ensuring substantial energy savings with low environmental impacts. In particular, in Mediterranean cultural landscapes the exploitation of geothermal energy offers a valuable alternative to other, more land-consuming and visually intrusive, exploitation systems. However, low-enthalpy geothermal energy applications at the regional scale require careful design and planning to fully exploit the benefits and reduce the drawbacks. We propose a first example of the application of Geographically Weighted Regression (GWR) to the modeling of geothermal potential in the Apulia Region (South Italy) by integrating hydrological (e.g. depth to water table, water speed and temperature) and geological-geotechnical (e.g. lithology, thermal conductivity) parameters and land-use indicators. The GWR model can effectively cope with data quality, spatial anisotropy, lack of stationarity and the presence of discontinuities in the underlying data maps. The geothermal potential assessment required a good knowledge of the space-time variation of the numerous parameters related to the status of the geothermal resource, a contextual analysis of spatial and environmental features, and consideration of the presence and nature of regulatory or infrastructure constraints. We created an ad hoc geodatabase within ArcGIS 10, collecting the relevant data and performing a quality assessment. Cross-validation shows a high level of consistency in the local spatial models, and error maps depict areas of lower reliability. Based on the low-enthalpy geothermal potential map created, a first zoning of the study area is proposed, considering four levels of possible exploitation. Such zoning is linked to and refined by the actual legal constraints acting at the regional or provincial level as enforced by the regional

  16. Improving EWMA Plans for Detecting Unusual Increases in Poisson Counts

    Directory of Open Access Journals (Sweden)

    R. S. Sparks

    2009-01-01

    An adaptive exponentially weighted moving average (EWMA) plan is developed for signalling unusually high incidence when monitoring a time series of nonhomogeneous daily disease counts. A Poisson transitional regression model is used to fit the background/expected trend in counts and provides “one-day-ahead” forecasts of the next day's count. Departures of the counts from their forecasts are monitored. The paper outlines an approach for improving early outbreak signals by dynamically adjusting the exponential weights to be efficient at signalling local persistent high-side changes. We emphasise outbreak signals in steady-state situations; that is, changes that occur after the EWMA statistic has run through several in-control counts.
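
    A minimal version of the monitoring idea is sketched below: daily counts are compared with their expected values, the standardized departures are smoothed with an EWMA, and an alarm is raised when the statistic exceeds a control limit. The expected counts, the fixed smoothing weight, and the threshold are placeholders; the paper's plan adapts the weights dynamically and derives its limits formally.

```python
# Minimal EWMA monitoring of daily Poisson counts against expected values. The
# expected trend, fixed smoothing weight lam, and control limit L are placeholders;
# the paper's plan adapts the weights dynamically and derives its limits formally.
import numpy as np

rng = np.random.default_rng(7)
days = 120
expected = 5.0 + 2.0 * np.sin(2 * np.pi * np.arange(days) / 7)   # "forecast" trend
counts = rng.poisson(expected)
counts[100:110] += 6                                             # injected outbreak

lam, L = 0.2, 3.0
ewma = 0.0
sigma_ewma = np.sqrt(lam / (2.0 - lam))      # asymptotic sd of the EWMA statistic
for day, (y, mu) in enumerate(zip(counts, expected)):
    z = (y - mu) / np.sqrt(mu)               # standardized departure from forecast
    ewma = lam * z + (1.0 - lam) * ewma
    if ewma > L * sigma_ewma:
        print(f"signal on day {day}: EWMA = {ewma:.2f}")
```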

  17. Predicting Hospital Admissions With Poisson Regression Analysis

    Science.gov (United States)

    2009-06-01

    ... East and Four West. Four East is where bariatric, general, neurologic, otolaryngology (ENT), ophthalmologic, orthopedic, and plastic surgery ... where care is provided for cardiovascular, thoracic, and vascular surgery patients. Figure 1 shows a bar graph for each unit, giving the proportion of ... indicating if the individual is an active duty family member, a binary variable indicating if the individual is retired, or a binary variable indicating if

  18. SU-F-BRD-01: A Logistic Regression Model to Predict Objective Function Weights in Prostate Cancer IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Boutilier, J; Chan, T; Lee, T [University of Toronto, Toronto, Ontario (Canada); Craig, T; Sharpe, M [University of Toronto, Toronto, Ontario (Canada); The Princess Margaret Cancer Centre - UHN, Toronto, ON (Canada)

    2014-06-15

    Purpose: To develop a statistical model that predicts optimization objective function weights from patient geometry for intensity-modulated radiotherapy (IMRT) of prostate cancer. Methods: A previously developed inverse optimization method (IOM) is applied retrospectively to determine optimal weights for 51 treated patients. We use an overlap volume ratio (OVR) of bladder and rectum for different PTV expansions in order to quantify patient geometry in explanatory variables. Using the optimal weights as ground truth, we develop and train a logistic regression (LR) model to predict the rectum weight and thus the bladder weight. Post hoc, we fix the weights of the left femoral head, right femoral head, and an artificial structure that encourages conformity to the population average while normalizing the bladder and rectum weights accordingly. The population average of objective function weights is used for comparison. Results: The OVR at 0.7 cm was found to be the most predictive of the rectum weights. The LR model performance is statistically significant when compared to the population average over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and mean voxel dose to the bladder, rectum, CTV, and PTV. On average, the LR model predicted bladder and rectum weights that are both 63% closer to the optimal weights compared to the population average. The treatment plans resulting from the LR weights have, on average, a rectum V70Gy that is 35% closer to the clinical plan and a bladder V70Gy that is 43% closer. Similar results are seen for bladder V54Gy and rectum V54Gy. Conclusion: Statistical modelling from patient anatomy can be used to determine objective function weights in IMRT for prostate cancer. Our method allows the treatment planners to begin the personalization process from an informed starting point, which may lead to more consistent clinical plans and reduce overall planning time.

  19. Central simple Poisson algebras

    Institute of Scientific and Technical Information of China (English)

    SU Yucai; XU Xiaoping

    2004-01-01

    Poisson algebras are fundamental algebraic structures in physics and symplectic geometry. However, the structure theory of Poisson algebras has not been well developed. In this paper, we determine the structure of the central simple Poisson algebras related to locally finite derivations, over an algebraically closed field of characteristic zero. The Lie algebra structures of these Poisson algebras are in general not finitely-graded.

  20. Estimating the Impact of the PROMISE Scholarship Using Propensity Score Weighted Frontier Fuzzy Regression Discontinuity Design

    Science.gov (United States)

    Shobo, Yetty; Wong, Jen D.; Bell, Angie

    2014-01-01

    Regression discontinuity (RD), an "as good as randomized" research design, has become increasingly prominent in education research in recent years; the design brings eligible quasi-experimental designs as close as possible to experimental designs by using a stated threshold on a continuous baseline variable to assign individuals to a…

  1. Robust Principal Component Analysis and Geographically Weighted Regression: Urbanization in the Twin Cities Metropolitan Area of Minnesota.

    Science.gov (United States)

    Ghosh, Debarchana; Manson, Steven M

    2008-01-01

    In this paper, we present a hybrid approach, robust principal component geographically weighted regression (RPCGWR), in examining urbanization as a function of both extant urban land use and the effect of social and environmental factors in the Twin Cities Metropolitan Area (TCMA) of Minnesota. We used remotely sensed data to treat urbanization via the proxy of impervious surface. We then integrated two different methods, robust principal component analysis (RPCA) and geographically weighted regression (GWR) to create an innovative approach to model urbanization. The RPCGWR results show significant spatial heterogeneity in the relationships between proportion of impervious surface and the explanatory factors in the TCMA. We link this heterogeneity to the "sprawling" nature of urban land use that has moved outward from the core Twin Cities through to their suburbs and exurbs.

  2. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

    DEFF Research Database (Denmark)

    He, Peng; Eriksson, Frank; Scheike, Thomas H.

    2016-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight...

  3. Childhood emotional problems and self-perceptions predict weight gain in a longitudinal regression model

    OpenAIRE

    Collier David; Ternouth Andrew; Maughan Barbara

    2009-01-01

    Abstract Background Obesity and weight gain are correlated with psychological ill health. We predicted that childhood emotional problems and self-perceptions predict weight gain into adulthood. Methods Data on around 6,500 individuals was taken from the 1970 Birth Cohort Study. This sample was a representative sample of individuals born in the UK in one week in 1970. Body mass index was measured by a trained nurse at the age of 10 years, and self-reported at age 30 years. Childhood emotional ...

  4. Poisson Morphisms and Reduced Affine Poisson Group Actions

    Institute of Scientific and Technical Information of China (English)

    YANG Qi Lin

    2002-01-01

    We establish the concept of a quotient affine Poisson group, and study the reduced Poisson action of the quotient of an affine Poisson group G on the quotient of an affine Poisson G-variety V. The Poisson morphisms (including equivariant cases) between Poisson affine varieties are also discussed.

  5. A weighted least squares estimation of the polynomial regression model on paddy production in the area of Kedah and Perlis

    Science.gov (United States)

    Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd

    2017-08-01

    The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its minimum-variance property in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and the amount of mixture fertilizer, the interaction between paddy variety and the NPK fertilizer application cycle, and the interaction between pest and disease and the NPK fertilizer application cycle.
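
    The estimation idea is easy to sketch: observations are weighted by the inverse of their error variance so that noisier points contribute less. The example below compares OLS and WLS fits of a quadratic model on synthetic heteroscedastic data; the variable names are hypothetical and unrelated to the paddy dataset.

```python
# OLS versus WLS with inverse-variance weights on synthetic heteroscedastic data
# with a quadratic term (statsmodels). Variable names are hypothetical, not the
# paddy dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 200
rainfall = rng.uniform(0.0, 10.0, n)
sigma = 0.5 + 0.3 * rainfall                        # error spread grows with rainfall
y = 2.0 + 1.5 * rainfall - 0.1 * rainfall ** 2 + rng.normal(0.0, sigma)

X = sm.add_constant(np.column_stack([rainfall, rainfall ** 2]))
ols = sm.OLS(y, X).fit()
wls = sm.WLS(y, X, weights=1.0 / sigma ** 2).fit()  # weights = inverse error variance
print("OLS coefficients:", np.round(ols.params, 3))
print("WLS coefficients:", np.round(wls.params, 3))
print("WLS std. errors :", np.round(wls.bse, 3))
```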

  6. Mixed geographically weighted regression (MGWR) model with weighted adaptive bi-square for case of dengue hemorrhagic fever (DHF) in Surakarta

    Science.gov (United States)

    Astuti, H. N.; Saputro, D. R. S.; Susanti, Y.

    2017-06-01

    The MGWR model is a combination of the linear regression model and the geographically weighted regression (GWR) model; therefore, the MGWR model can produce parameter estimates that are partly global and partly local, in accordance with the observation locations. The linkage between the observation locations is expressed through a specific weighting scheme, namely the adaptive bi-square kernel. In this research, we applied the MGWR model with adaptive bi-square weighting to the case of DHF in Surakarta based on 10 factors (variables) that are supposed to influence the number of people with DHF. The observation units in the research are 51 urban villages, and the variables are the number of inhabitants, number of houses, house index, number of public places, number of healthy homes, number of Posyandu, area width, population density level, family welfare, and high-region. Based on this research, we obtained 51 MGWR models. The MGWR models were divided into 4 groups, with house index significant as a global variable, area width as a local variable, and the remaining variables varying in each group. Global variables are variables that significantly affect all locations, while local variables are variables that significantly affect a specific location.

  7. Pac-bayesian bounds for sparse regression estimation with exponential weights

    CERN Document Server

    Alquier, Pierre

    2010-01-01

    We consider the sparse regression model where the number of parameters $p$ is larger than the sample size $n$. The difficulty when considering high-dimensional problems is to propose estimators achieving a good compromise between statistical and computational performance. The BIC estimator, for instance, performs well from the statistical point of view [BTW07] but can only be computed for values of $p$ of at most a few tens. The Lasso estimator is the solution of a convex minimization problem, hence computable for large values of $p$; however, stringent conditions on the design are required to establish fast rates of convergence for this estimator. Dalalyan and Tsybakov propose a method achieving a good compromise between the statistical and computational aspects of the problem. Their estimator can be computed for reasonably large $p$ and satisfies nice statistical properties under weak assumptions on the design. However, they propose sparsity oracle inequalities in expectation for the emp...

  8. Feed efficiency and body weight growth throughout growing-furring period in mink using random regression method

    DEFF Research Database (Denmark)

    Shirali, Mahmoud; Nielsen, Vivi Hunnicke; Møller, Steen Henrik

    2014-01-01

    The aim of this study was to determine genetic background of longitudinal residual feed intake (RFI) and body weight (BW) growth in farmed mink using random regression methods considering heterogeneous residual variances. Eight BW measures for each mink was recorded every three weeks from 63 to 210...... days of age for 2139 male mink and the same number of females. Cumulative feed intake was calculated six times with three weeks interval based on daily feed consumption between weighing’s from 105 to 210 days of age. Heritability estimates for RFI increased by age from 0.18 (0.03, standard deviation...... be obtained by only considering RFI estimate and BW at pelting, however, lower genetic correlations than unity indicate that extra genetic gain can be obtained by including estimates of these traits at the growing period. This study suggests random regression methods are suitable for analysing feed efficiency...

  9. Sparse Poisson noisy image deblurring.

    Science.gov (United States)

    Carlavan, Mikael; Blanc-Féraud, Laure

    2012-04-01

    Deblurring noisy Poisson images has recently been the subject of an increasing number of works in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of biological living specimens that gives images with a very good resolution (several hundreds of nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and in this paper, we focus on techniques that promote the introduction of an explicit prior on the solution. One difficulty of these techniques is to set the value of the parameter, which weights the tradeoff between the data term and the regularizing term. Only a few works have been devoted to the automatic selection of this regularizing parameter when considering Poisson noise; therefore, it is often set manually such that it gives the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators, which takes advantage of confocal images. Following these estimators, we secondly propose to express the problem of the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the antilog likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, we especially focus on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.
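
    As a rough companion to this record, the sketch below implements the classical Richardson-Lucy deconvolution, the standard multiplicative baseline for Poisson-noise deblurring. It is not the constrained/variational approach with total-variation or wavelet priors described in the paper; the toy image, PSF and iteration count are arbitrary assumptions for illustration.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
          """Classical Richardson-Lucy deconvolution for Poisson-noise blur:
          multiplicative updates that keep the estimate non-negative."""
          estimate = np.full_like(observed, observed.mean(), dtype=float)
          psf_mirror = psf[::-1, ::-1]
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = observed / (blurred + eps)
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate

      # toy example: blur a bright spot with a Gaussian PSF, add Poisson noise
      rng = np.random.default_rng(12)
      img = np.zeros((64, 64))
      img[20:24, 30:34] = 50.0
      xx, yy = np.mgrid[-7:8, -7:8]
      psf = np.exp(-(xx**2 + yy**2) / 8.0)
      psf /= psf.sum()
      noisy = rng.poisson(fftconvolve(img, psf, mode="same").clip(min=0)).astype(float)
      restored = richardson_lucy(noisy, psf)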

  10. Improving Global Models of Remotely Sensed Ocean Chlorophyll Content Using Partial Least Squares and Geographically Weighted Regression

    Science.gov (United States)

    Gholizadeh, H.; Robeson, S. M.

    2015-12-01

    Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band ratio ocean color (OC) algorithms are in the form of fourth-order polynomials and the parameters of these polynomials (i.e. coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions with different properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches and are based on the spatial stationarity assumption, so they are independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered as a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of spatial stationarity assumption. GWR solves a regression model over each sample point by using the observations within its neighbourhood. PLS results show that the empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of GWR coefficients also shows that the spatial stationarity assumption in empirical models is not likely a valid assumption.
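
    A minimal sketch of the PLS step mentioned above, assuming synthetic, highly collinear polynomial band-ratio terms in place of the NOMAD data; scikit-learn's PLSRegression extracts orthogonal latent components before regressing, which is the multicollinearity remedy the record describes.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      # synthetic stand-in for correlated band-ratio predictors (not NOMAD data)
      rng = np.random.default_rng(1)
      r = rng.normal(size=500)                     # log10 blue/green band ratio
      X = np.column_stack([r, r**2, r**3, r**4])   # highly collinear polynomial terms
      y = 0.3 - 2.5 * r + 1.2 * r**2 + rng.normal(scale=0.05, size=500)  # toy log10 chl-a

      pls = PLSRegression(n_components=2)          # orthogonal latent components
      pls.fit(X, y)
      print(pls.score(X, y))                       # R^2 on the training data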

  11. Extended Poisson Exponential Distribution

    Directory of Open Access Journals (Sweden)

    Anum Fatima

    2015-09-01

    Full Text Available A new mixture of the Modified Exponential (ME) and Poisson distributions has been introduced in this paper. Taking the maximum of Modified Exponential random variables when the sample size follows a zero-truncated Poisson distribution, we have derived the new distribution, named the Extended Poisson Exponential distribution. This distribution possesses increasing and decreasing failure rates. The Poisson-Exponential, Modified Exponential and Exponential distributions are special cases of this distribution. We have also investigated some mathematical properties of the distribution along with information entropies and order statistics of the distribution. The estimation of parameters has been obtained using the Maximum Likelihood Estimation procedure. Finally, we have illustrated a real-data application of our distribution.
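
    The construction described above can be mimicked in a few lines: draw a zero-truncated Poisson sample size, then take the maximum of that many exponential-type variates. In the sketch below a plain exponential stands in for the Modified Exponential component (giving the Poisson-Exponential special case mentioned in the abstract); the function names and parameter values are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(2)

      def zero_truncated_poisson(lam, size, rng):
          """Draw from a Poisson(lam) conditioned on being >= 1 (rejection)."""
          out = np.zeros(size, dtype=int)
          for i in range(size):
              n = 0
              while n == 0:
                  n = rng.poisson(lam)
              out[i] = n
          return out

      def sample_max_exponential(lam, scale, size, rng):
          """X = max(Y_1, ..., Y_N), N ~ zero-truncated Poisson(lam),
          Y_i ~ Exponential(scale): the Poisson-Exponential special case."""
          n = zero_truncated_poisson(lam, size, rng)
          return np.array([rng.exponential(scale, size=k).max() for k in n])

      x = sample_max_exponential(lam=2.0, scale=1.5, size=10_000, rng=rng)
      print(x.mean(), x.std())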

  12. Geographically weighted regression for modelling the accessibility to the public hospital network in Concepción Metropolitan Area, Chile

    Directory of Open Access Journals (Sweden)

    Marcela Martínez Bascuñán

    2016-11-01

    Full Text Available Accessibility models in transport geography based on geographic information systems have proven to be an effective method in determining spatial inequalities associated with public health. This work aims to model the spatial accessibility from populated areas within the Concepción metropolitan area (CMA), the second largest city in Chile. The city’s public hospital network is taken into consideration with special reference to socio-regional inequalities. The use of geographically weighted regression (GWR) and ordinary least squares (OLS) for modelling accessibility with socioeconomic and transport variables is proposed. The explanatory variables investigated are: illiterate population, rural housing, alternative housing, homes with a motorised vehicle, public transport routes, and connectivity. Our results identify that approximately 4.1% of the population have unfavourable or very unfavourable accessibility to public hospitals, corresponding to rural areas located south of the CMA. Application of a local GWR model (adjusted R² of 0.87) helped to improve the fit over the use of traditional OLS methods (multiple regression, adjusted R² of 0.67) and to find the spatial distribution of the coefficients of the explanatory variables, demonstrating the local significance of the model. Thus, accessibility studies have enormous potential to contribute to the development of public health and transport policies and, in turn, to achieve equality in spatial accessibility to specialised health care.

  13. The Local Food Environment and Fruit and Vegetable Intake: A Geographically Weighted Regression Approach in the ORiEL Study.

    Science.gov (United States)

    Clary, Christelle; Lewis, Daniel J; Flint, Ellen; Smith, Neil R; Kestens, Yan; Cummins, Steven

    2016-12-01

    Studies that explore associations between the local food environment and diet routinely use global regression models, which assume that relationships are invariant across space, yet such stationarity assumptions have been little tested. We used global and geographically weighted regression models to explore associations between the residential food environment and fruit and vegetable intake. Analyses were performed in 4 boroughs of London, United Kingdom, using data collected between April 2012 and July 2012 from 969 adults in the Olympic Regeneration in East London Study. Exposures were assessed both as absolute densities of healthy and unhealthy outlets, taken separately, and as a relative measure (proportion of total outlets classified as healthy). Overall, local models performed better than global models (lower Akaike information criterion). Locally estimated coefficients varied across space, regardless of the type of exposure measure, although changes of sign were observed only when absolute measures were used. Despite findings from global models showing significant associations between the relative measure and fruit and vegetable intake (β = 0.022; P environment and diet. It further challenges the idea that a single measure of exposure, whether relative or absolute, can reflect the many ways the food environment may shape health behaviors. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Impacts of land use and population density on seasonal surface water quality using a modified geographically weighted regression.

    Science.gov (United States)

    Chen, Qiang; Mei, Kun; Dahlgren, Randy A; Wang, Ting; Gong, Jian; Zhang, Minghua

    2016-12-01

    As an important regulator of pollutants in overland flow and interflow, land use has become an essential research component for determining the relationships between surface water quality and pollution sources. This study investigated the use of ordinary least squares (OLS) and geographically weighted regression (GWR) models to identify the impact of land use and population density on surface water quality in the Wen-Rui Tang River watershed of eastern China. A manual variable excluding-selecting method was explored to resolve multicollinearity issues. Standard regression coefficient analysis coupled with cluster analysis was introduced to determine which variable had the greatest influence on water quality. Results showed that: (1) Impact of land use on water quality varied with spatial and seasonal scales. Both positive and negative effects for certain land-use indicators were found in different subcatchments. (2) Urban land was the dominant factor influencing N, P and chemical oxygen demand (COD) in highly urbanized regions, but the relationship was weak as the pollutants were mainly from point sources. Agricultural land was the primary factor influencing N and P in suburban and rural areas; the relationship was strong as the pollutants were mainly from agricultural surface runoff. Subcatchments located in suburban areas were identified with urban land as the primary influencing factor during the wet season while agricultural land was identified as a more prevalent influencing factor during the dry season. (3) Adjusted R² values in OLS models using the manual variable excluding-selecting method averaged 14.3% higher than using stepwise multiple linear regressions. However, the corresponding GWR models had adjusted R² ~59.2% higher than the optimal OLS models, confirming that GWR models demonstrated better prediction accuracy. Based on our findings, water resource protection policies should consider site-specific land-use conditions within each watershed to

  15. Jointly Poisson processes

    CERN Document Server

    Johnson, D H

    2009-01-01

    What constitutes jointly Poisson processes remains an unresolved issue. This report reviews the current state of the theory and indicates how the accepted but unproven model equals that resulting from the small time-interval limit of jointly Bernoulli processes. One intriguing consequence of these models is that jointly Poisson processes can only be positively correlated as measured by the correlation coefficient defined by cumulants of the probability generating functional.

  16. Weighted Poisson analyses à la Goodman using SAS-GENMOD. [Paper presented at 15th SAS European Users Group International SEUGI Conference, Madrid, 12-16 May 1997].

    NARCIS (Netherlands)

    Vogelesang, A.W.

    2010-01-01

    In traffic safety analysis, weight factors are often applied, to correct for road length, traffic volume, numbers of kilometers driven, etc. Also, many data are subdivided into classes (road type: freeway, 80 km/hr, two lanes, rural area, etc); combining factors often leads to a cross-classification
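
    Weighting counts by exposure measures such as road length or kilometres driven is commonly expressed as an offset in a Poisson GLM. The sketch below shows that pattern with statsmodels on synthetic data; it is only an analogue of the SAS GENMOD analyses discussed in the paper, and the covariate and exposure names are placeholders.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 200
      road_km = rng.uniform(1, 50, n)              # exposure, e.g. road length
      x = rng.normal(size=n)                       # a covariate, e.g. a road-type score
      mu = np.exp(-3.0 + 0.4 * x) * road_km        # rate per km times exposure
      y = rng.poisson(mu)                          # accident counts

      X = sm.add_constant(x)
      # exposure enters as a log(road_km) offset, i.e. a Poisson rate model
      model = sm.GLM(y, X, family=sm.families.Poisson(), exposure=road_km).fit()
      print(model.summary())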

  17. An improved algorithm for calculating first-order reversal curve (FORC) distributions using locally-weighted regression smoothing

    Science.gov (United States)

    Harrison, R. J.; Feinberg, J. M.

    2007-12-01

    First-order reversal curves (FORCs) are a powerful method for characterizing the magnetic hysteresis properties of natural and synthetic materials, and are rapidly becoming a standard tool in rock magnetic and paleomagnetic investigations. Here we describe a modification to existing algorithms for the calculation of FORC diagrams using locally-weighted regression smoothing (often referred to as loess smoothing). Like conventional algorithms, the FORC distribution is calculated by fitting a second degree polynomial to a region of FORC space defined by a smoothing factor, N. Our method differs from conventional algorithms in two ways. Firstly, rather than a square of side (2N+1) centered on the point of interest, the region of FORC space used for fitting is defined as a span of arbitrary shape encompassing the (2N+1)² data points closest to the point of interest. Secondly, data inside the span are given a weight that depends on their distance from the point being evaluated: data closer to the point being evaluated have higher weights and have a greater effect on the fit. Loess smoothing offers two advantages over current methods. Firstly, it allows the FORC distribution to be calculated using a constant smoothing factor all the way to the Hc = 0 axis. This eliminates possible distortions to the FORC distribution associated with reducing the smoothing factor close to the Hc = 0 axis, and does not require use of the extended FORC formalism and the reversible ridge, which swamps the low-coercivity signal. Secondly, it allows finer control over the degree of smoothing applied to the data, enabling automated selection of the optimum smoothing factor for a given FORC measurement, based on an analysis of the standard deviation of the fit residuals. The new algorithm forms the basis for FORCinel, a new suite of FORC analysis tools for Igor Pro (www.wavemetrics.com), freely available on request from the authors.
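
    A one-dimensional analogue of the locally weighted (loess) smoothing described above is sketched below: keep the nearest points, weight them with a tricube kernel, and fit a low-order polynomial at each focal point. It is not the FORCinel code, and the span, kernel and toy signal are assumptions; the real FORC case works on a two-dimensional grid, but the weighting idea is the same.

      import numpy as np

      def loess_point(x0, x, y, span=25, degree=2):
          """Locally weighted polynomial estimate at x0: keep the `span`
          nearest points, weight them with a tricube kernel, and fit a
          low-order polynomial (1-D analogue of the FORC smoothing)."""
          d = np.abs(x - x0)
          idx = np.argsort(d)[:span]
          dmax = d[idx].max()
          w = (1.0 - (d[idx] / dmax) ** 3) ** 3          # tricube weights
          coeffs = np.polyfit(x[idx] - x0, y[idx], deg=degree, w=w)
          return np.polyval(coeffs, 0.0)                 # smoothed value at x0

      rng = np.random.default_rng(4)
      x = np.linspace(0, 1, 300)
      y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)
      smooth = np.array([loess_point(xi, x, y) for xi in x])
      print(np.abs(smooth - np.sin(2 * np.pi * x)).mean())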

  18. Scale Effects of the Relationships between Urban Heat Islands and Impact Factors Based on a Geographically-Weighted Regression Model

    Directory of Open Access Journals (Sweden)

    Xiaobo Luo

    2016-09-01

    Full Text Available The urban heat island (UHI) effect, a side effect of rapid urbanization, has become an obstacle to the further healthy development of the city. Understanding its relationships with impact factors is important to provide useful information for climate adaptation urban planning strategies. For this purpose, the geographically-weighted regression (GWR) approach is used to explore the scale effects in a mountainous city, namely the change laws and characteristics of the relationships between land surface temperature and impact factors at different spatial resolutions (30–960 m). The impact factors include the Soil-adjusted Vegetation Index (SAVI), the Index-based Built-up Index (IBI), and the Soil Brightness Index (NDSI), which indicate the coverage of vegetation, built-up land, and bare land, respectively. For reference, the ordinary least squares (OLS) model, a global regression technique, is also employed, using the same dependent variable and explanatory variables as in the GWR model. Results from the experiment, exemplified by Chongqing, showed that the GWR approach had better prediction accuracy and a better ability to describe spatial non-stationarity than the OLS approach, judged by the analysis of the local coefficient of determination (R²), Corrected Akaike Information Criterion (AICc), and F-test at small spatial resolutions (< 240 m); however, when the spatial scale was increased to 480 m, this advantage became relatively weak. This indicates that the GWR model becomes increasingly global, revealing the relationships with more generalized geographical patterns, and spatial non-stationarity in the relationship tends to be neglected as spatial resolution increases.

  19. Food security and vulnerability modeling of East Java Province based on Geographically Weighted Ordinal Logistic Regression Semiparametric (GWOLRS model

    Directory of Open Access Journals (Sweden)

    N.W. Surya Wardhani

    2014-10-01

    Full Text Available Modeling of food security based on the characteristics of an area is affected by geographical location, which means that geographical location affects the region’s potential. Therefore, we need a statistical modeling method that takes the geographical location of the observations into account. In this case, some research variables may be global, meaning that location affects the response variable significantly everywhere; when some of the predictor variables are global and the others are local, Geographically Weighted Ordinal Logistic Regression Semiparametric (GWOLRS) can be used to analyze the data. The data used are the 2011 food resilience and insecurity data for East Java Province. The results showed that the three predictor variables influenced by location are the percentage of poor (%), rice production per district (tons), and life expectancy (%). These three predictors are local because they have a significant influence in some districts/cities but no significant effect in others, while the other two variables, clean water and good-quality road length (km), are assumed to be global because they are not significant factors across the districts/towns in East Java.

  20. Spatial Analysis of Severe Fever with Thrombocytopenia Syndrome Virus in China Using a Geographically Weighted Logistic Regression Model

    Directory of Open Access Journals (Sweden)

    Liang Wu

    2016-11-01

    Full Text Available Severe fever with thrombocytopenia syndrome (SFTS) is caused by severe fever with thrombocytopenia syndrome virus (SFTSV), which has had a serious impact on public health in parts of Asia. There is no specific antiviral drug or vaccine for SFTSV and, therefore, it is important to determine the factors that influence the occurrence of SFTSV infections. This study aimed to explore the spatial associations between SFTSV infections and several potential determinants, and to predict the high-risk areas in mainland China. The analysis was carried out at the level of provinces in mainland China. The potential explanatory variables that were investigated consisted of meteorological factors (average temperature, average monthly precipitation and average relative humidity), the average proportion of rural population and the average proportion of primary industries over three years (2010–2012). We constructed a geographically weighted logistic regression (GWLR) model in order to explore the associations between the selected variables and confirmed cases of SFTSV. The study showed that: (1) meteorological factors have a strong influence on the SFTSV cover; (2) a GWLR model is suitable for exploring SFTSV cover in mainland China; (3) our findings can be used for predicting high-risk areas and highlighting when meteorological factors pose a risk in order to aid in the implementation of public health strategies.

  1. Is the current pertussis incidence only the results of testing? A spatial and space-time analysis of pertussis surveillance data using cluster detection methods and geographically weighted regression modelling

    Science.gov (United States)

    Kauhl, Boris; Heil, Jeanne; Hoebe, Christian J. P. A.; Schweikart, Jürgen; Krafft, Thomas; Dukers-Muijrers, Nicole H. T. M.

    2017-01-01

    Background Despite high vaccination coverage, pertussis incidence in the Netherlands is amongst the highest in Europe with a shifting tendency towards adults and elderly. Early detection of outbreaks and preventive actions are necessary to prevent severe complications in infants. Efficient pertussis control requires additional background knowledge about the determinants of testing and possible determinants of the current pertussis incidence. Therefore, the aim of our study is to examine the possibility of locating possible pertussis outbreaks using space-time cluster detection and to examine the determinants of pertussis testing and incidence using geographically weighted regression models. Methods We analysed laboratory registry data including all geocoded pertussis tests in the southern area of the Netherlands between 2007 and 2013. Socio-demographic and infrastructure-related population data were matched to the geo-coded laboratory data. The spatial scan statistic was applied to detect spatial and space-time clusters of testing, incidence and test-positivity. Geographically weighted Poisson regression (GWPR) models were then constructed to model the associations between the age-specific rates of testing and incidence and possible population-based determinants. Results Space-time clusters for pertussis incidence overlapped with space-time clusters for testing, reflecting a strong relationship between testing and incidence, irrespective of the examined age group. Testing for pertussis itself was overall associated with lower socio-economic status, multi-person-households, proximity to primary school and availability of healthcare. The current incidence in contradiction is mainly determined by testing and is not associated with a lower socioeconomic status. Discussion Testing for pertussis follows to an extent the general healthcare seeking behaviour for common respiratory infections, whereas the current pertussis incidence is largely the result of testing. More
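
    One local step of a geographically weighted Poisson regression of the kind used above can be sketched as a kernel-weighted Poisson GLM fitted at each location. The snippet below is a minimal illustration on synthetic data, not the study's model: the Gaussian kernel, the fixed bandwidth, the helper name gwpr_local and the use of statsmodels var_weights are all assumptions made here.

      import numpy as np
      import statsmodels.api as sm

      def gwpr_local(y, X, coords, focal, bandwidth):
          """One local fit of a geographically weighted Poisson regression:
          Gaussian kernel weights by distance to the focal location,
          then a weighted Poisson GLM."""
          d = np.linalg.norm(coords - coords[focal], axis=1)
          w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel
          glm = sm.GLM(y, sm.add_constant(X),
                       family=sm.families.Poisson(), var_weights=w)
          return glm.fit().params

      rng = np.random.default_rng(5)
      coords = rng.uniform(0, 10, size=(80, 2))
      X = rng.normal(size=(80, 1))
      y = rng.poisson(np.exp(0.2 + 0.5 * X[:, 0]))
      local_betas = np.array([gwpr_local(y, X, coords, i, bandwidth=2.0)
                              for i in range(len(y))])
      print(local_betas.mean(axis=0))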

  2. Genetic parameters for body weight, hip height, and the ratio of weight to hip height from random regression analyses of Brahman feedlot cattle.

    Science.gov (United States)

    Riley, D G; Coleman, S W; Chase, C C; Olson, T A; Hammond, A C

    2007-01-01

    The objective of this research was to assess the genetic control of BW, hip height, and the ratio of BW to hip height (n = 5,055) in Brahman cattle through 170 d on feed using covariance function-random regression models. A progeny test of Brahman sires (n = 27) generated records of Brahman steers and heifers (n = 724) over 7 yr. Each year after weaning, calves were assigned to feedlot pens, where they were fed a high-concentrate grain diet. Body weights and hip heights were recorded every 28 d until cattle reached a targeted fatness level. All calves had records through 170 d on feed; subsequent records were excluded. Models included contemporary group (sex-pen-year combinations, n = 63) and age at the beginning of the feeding period as a covariate. The residual error structure was modeled as a random effect, with 2 levels corresponding to two 85-d periods on feed. Information criterion values indicated that linear, random regression coefficients on Legendre polynomials of days on feed were most appropriate to model additive genetic effects for all 3 traits. Cubic (hip height and BW:hip height ratio) or quartic (BW) polynomials best modeled permanent environmental effects. Estimates of heritability across the 170-d feeding period ranged from 0.31 to 0.53 for BW, from 0.37 to 0.53 for hip height, and from 0.23 to 0.6 for BW:hip height ratio. Estimates of the permanent environmental proportion of phenotypic variance ranged from 0.44 to 0.58 for BW, 0.07 to 0.26 for hip height, and 0.30 to 0.48 for BW:hip height ratio. Within-trait estimates of genetic correlation on pairs of days on feed (at 28-d intervals) indicated lower associations of BW:hip height ratio EBV early and late in the feeding period but large positive associations for BW or hip height EBV throughout. Estimates of genetic correlations among the 3 traits indicated almost no association of BW:hip height ratio and hip height EBV. The ratio of BW to hip height in cattle has previously been used as an
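
    The Legendre-polynomial covariates that drive such random regression models are easy to construct: rescale days on feed to [-1, 1] and evaluate the polynomials. The sketch below shows only that design-matrix step with numpy; the actual variance-component estimation requires specialised mixed-model software and is not reproduced here.

      import numpy as np
      from numpy.polynomial.legendre import legvander

      def legendre_covariates(days_on_feed, degree):
          """Rescale days on feed to [-1, 1] and return the Legendre
          polynomial covariates used in random regression models."""
          t = np.asarray(days_on_feed, dtype=float)
          x = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0
          return legvander(x, degree)          # columns: P0(x), P1(x), ..., Pdeg(x)

      days = np.arange(0, 171, 28)             # weigh days 0, 28, ..., 168
      Z = legendre_covariates(days, degree=1)  # linear random-regression basis
      print(Z)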

  3. Generalized Poisson sigma models

    CERN Document Server

    Batalin, I; Batalin, Igor; Marnelius, Robert

    2001-01-01

    A general master action in terms of superfields is given which generates generalized Poisson sigma models by means of a natural ghost number prescription. The simplest representation is the sigma model considered by Cattaneo and Felder. For Dirac brackets considerably more general models are generated.

  4. Cascaded Poisson processes

    Science.gov (United States)

    Matsuo, Kuniaki; Saleh, Bahaa E. A.; Teich, Malvin Carl

    1982-12-01

    We investigate the counting statistics for stationary and nonstationary cascaded Poisson processes. A simple equation is obtained for the variance-to-mean ratio in the limit of long counting times. Explicit expressions for the forward-recurrence and inter-event-time probability density functions are also obtained. The results are expected to be of use in a number of areas of physics.

  5. Poisson Random Variate Generation.

    Science.gov (United States)

    1981-12-01

    Poisson have been proposed. Atkinson [5] includes the approach developed in Marsaglia [15] and Norman and Cannon [16], which is based on composition...34, Naval Research Logistics Quarterly, 26, 3, 403-413. 15. Marsaglia, G. (1963). "Generating Discrete Random Variables in a Computer", Communications
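
    For small means, Poisson variates can be generated with the classical multiplication method (often attributed to Knuth), one of the simple approaches this report compares against faster composition-based generators. The sketch below is a minimal illustration, not the report's recommended algorithm.

      import math
      import random

      def poisson_variate(lam, rng=random):
          """Generate one Poisson(lam) variate by multiplying uniforms
          until the product drops below exp(-lam); suitable for small lam."""
          threshold = math.exp(-lam)
          k, prod = 0, rng.random()
          while prod > threshold:
              k += 1
              prod *= rng.random()
          return k

      sample = [poisson_variate(3.5) for _ in range(10_000)]
      print(sum(sample) / len(sample))   # should be close to 3.5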

  6. Formal equivalence of Poisson structures around Poisson submanifolds

    NARCIS (Netherlands)

    Marcut, I.T.

    2012-01-01

    Let (M,π) be a Poisson manifold. A Poisson submanifold P ⊂ M gives rise to a Lie algebroid AP → P. Formal deformations of π around P are controlled by certain cohomology groups associated to AP. Assuming that these groups vanish, we prove that π is formally rigid around P; that is, any other Poisson

  7. The use of a random regression model on the estimation of genetic parameters for weight at performance test in Appenninica sheep breed

    Directory of Open Access Journals (Sweden)

    Francesca M. Sarti

    2015-07-01

    Full Text Available The Appenninica breed is an Italian meat sheep; the rams are approved according to a phenotypic index that is based on average daily gain at the performance test. The 8546 live weights of 1930 Appenninica male lambs tested in the performance station of the ASSONAPA (National Sheep Breeders Association, Italy) from 1986 to 2010 showed a great variability in age at weighing and in number of records by year. The goal of the study is to verify the feasibility of the estimation of a genetic index for weight in the Appenninica sheep by a mixed model, and to explore the use of random regression to avoid the corrections for weighing at different ages. The heritability and repeatability (mean±SE) of the average live weight were 0.27±0.04 and 0.54±0.08, respectively; the heritabilities of weights recorded at different weighing days ranged from 0.27 to 0.58, while the heritabilities of weights at different ages showed a narrower variability (0.29÷0.41). The estimates of live weight heritability by random regressions ranged between 0.34 at 123 d of age and 0.52 at 411 d. The results proved that the random regression model is the most adequate to analyse the data of the Appenninica breed.

  8. Paretian Poisson Processes

    Science.gov (United States)

    Eliazar, Iddo; Klafter, Joseph

    2008-05-01

    Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin 'Paretian Poisson processes'. This class is elemental in statistical physics, connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; resilience to random perturbations.

  9. SYMPLECTIC STRUCTURE OF POISSON SYSTEM

    Institute of Scientific and Technical Information of China (English)

    SUN Jian-qiang; MA Zhong-qi; TIAN Yi-min; QIN Meng-zhao

    2005-01-01

    When the Poisson matrix of a Poisson system is non-constant, classical symplectic methods, such as the symplectic Runge-Kutta method and the generating function method, cannot preserve the Poisson structure. The non-constant Poisson structure was transformed into a symplectic structure by a nonlinear transform. An arbitrary-order symplectic method was then applied to the transformed Poisson system. The Euler equation of the free rigid body problem was transformed into the symplectic structure and computed by the mid-point scheme. Numerical results show the effectiveness of the nonlinear transform.

  10. Using Regression to Establish Weights for a Set of Composite Equations through a Numerical Analysis Approach: A Case of Admission Criteria to a College

    Directory of Open Access Journals (Sweden)

    Ramzi N. Nasser

    2010-01-01

    Full Text Available Problem statement: Mathematically, little is known about college admission criteria such as school grade point average, admission test scores, or rank in class, and about the weighting of these criteria into a composite equation. Approach: This study presented a method to obtain the weights of a “composite admission” equation. The method uses an iterative procedure to build a prediction equation for an optimally weighted admission composite score. The three predictor variables, high school average, entrance exam scores and rank in class, were regressed on college Grade Point Average (GPA). The weights for the composite equation were determined through regression coefficients and a numerical approach that correlates the composite score with college GPA. Results: A set of composite equations was determined, with the weights on each criterion in a composite equation. Conclusion: This study detailed a substantiated algorithm and, based on an optimal composite score, comes out with an original and unique structured composite score equation for admissions, which can be used by admission officers at colleges and universities.
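
    A stripped-down version of the general idea, regressing college GPA on standardized predictors and reading the coefficients as composite weights, is sketched below on synthetic data. It omits the study's iterative numerical refinement, and all variable names and coefficients are placeholders.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 400
      hs_avg = rng.normal(80, 8, n)            # high school average
      exam = rng.normal(500, 90, n)            # entrance exam score
      rank = rng.uniform(0, 1, n)              # class rank percentile
      gpa = (0.02 * hs_avg + 0.002 * exam + 0.5 * rank
             + rng.normal(scale=0.3, size=n))  # college GPA (synthetic)

      # standardize predictors so coefficients are comparable "weights"
      X = np.column_stack([hs_avg, exam, rank])
      Xz = (X - X.mean(axis=0)) / X.std(axis=0)
      A = np.column_stack([np.ones(n), Xz])
      beta, *_ = np.linalg.lstsq(A, gpa, rcond=None)
      weights = beta[1:] / beta[1:].sum()      # normalize to a composite summing to 1
      print(weights)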

  11. Poisson modules and degeneracy loci

    CERN Document Server

    Gualtieri, Marco

    2012-01-01

    In this paper, we study the interplay between modules and sub-objects in holomorphic Poisson geometry. In particular, we define a new notion of "residue" for a Poisson module, analogous to the Poincaré residue of a meromorphic volume form. Of particular interest is the interaction between the residues of the canonical line bundle of a Poisson manifold and its degeneracy loci, where the rank of the Poisson structure drops. As an application, we provide new evidence in favour of Bondal's conjecture that the rank $\leq 2k$ locus of a Fano Poisson manifold always has dimension $\geq 2k+1$. In particular, we show that the conjecture holds for Fano fourfolds. We also apply our techniques to a family of Poisson structures defined by Feĭgin and Odesskiĭ, where the degeneracy loci are given by the secant varieties of elliptic normal curves.

  12. Fractal Poisson processes

    Science.gov (United States)

    Eliazar, Iddo; Klafter, Joseph

    2008-09-01

    The Central Limit Theorem (CLT) and Extreme Value Theory (EVT) study, respectively, the stochastic limit-laws of sums and maxima of sequences of independent and identically distributed (i.i.d.) random variables via an affine scaling scheme. In this research we study the stochastic limit-laws of populations of i.i.d. random variables via nonlinear scaling schemes. The stochastic population-limits obtained are fractal Poisson processes which are statistically self-similar with respect to the scaling scheme applied, and which are characterized by two elemental structures: (i) a universal power-law structure common to all limits, and independent of the scaling scheme applied; (ii) a specific structure contingent on the scaling scheme applied. The sum-projection and the maximum-projection of the population-limits obtained are generalizations of the classic CLT and EVT results - extending them from affine to general nonlinear scaling schemes.

  13. Nonhomogeneous fractional Poisson processes

    Energy Technology Data Exchange (ETDEWEB)

    Wang Xiaotian [School of Management, Tianjin University, Tianjin 300072 (China)]. E-mail: swa001@126.com; Zhang Shiying [School of Management, Tianjin University, Tianjin 300072 (China); Fan Shen [Computer and Information School, Zhejiang Wanli University, Ningbo 315100 (China)

    2007-01-15

    In this paper, we propose a class of non-Gaussian stationary increment processes, named nonhomogeneous fractional Poisson processes $W_H^{(j)}(t)$, which permit the study of the effects of long-range dependence in a large number of fields including quantum physics and finance. The processes $W_H^{(j)}(t)$ are self-similar in a wide sense, exhibit fatter tails than Gaussian processes, and converge to Gaussian processes in distribution in some cases. In addition, we also show that the intensity function $\lambda(t)$ strongly influences the existence of the highest finite moment of $W_H^{(j)}(t)$ and the behaviour of the tail probability of $W_H^{(j)}(t)$.

  14. The performance of functional methods for correcting non-Gaussian measurement error within Poisson regression: corrected excess risk of lung cancer mortality in relation to radon exposure among French uranium miners.

    Science.gov (United States)

    Allodji, Rodrigue S; Thiébaut, Anne C M; Leuraud, Klervi; Rage, Estelle; Henry, Stéphane; Laurier, Dominique; Bénichou, Jacques

    2012-12-30

    A broad variety of methods for measurement error (ME) correction have been developed, but these methods have rarely been applied possibly because their ability to correct ME is poorly understood. We carried out a simulation study to assess the performance of three error-correction methods: two variants of regression calibration (the substitution method and the estimation calibration method) and the simulation extrapolation (SIMEX) method. Features of the simulated cohorts were borrowed from the French Uranium Miners' Cohort in which exposure to radon had been documented from 1946 to 1999. In the absence of ME correction, we observed a severe attenuation of the true effect of radon exposure, with a negative relative bias of the order of 60% on the excess relative risk of lung cancer death. In the main scenario considered, that is, when ME characteristics previously determined as most plausible from the French Uranium Miners' Cohort were used both to generate exposure data and to correct for ME at the analysis stage, all three error-correction methods showed a noticeable but partial reduction of the attenuation bias, with a slight advantage for the SIMEX method. However, the performance of the three correction methods highly depended on the accurate determination of the characteristics of ME. In particular, we encountered severe overestimation in some scenarios with the SIMEX method, and we observed lack of correction with the three methods in some other scenarios. For illustration, we also applied and compared the proposed methods on the real data set from the French Uranium Miners' Cohort study.
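
    The SIMEX idea evaluated above can be illustrated on a linear toy model: add extra measurement error at increasing multiples of the assumed error variance, refit each time, and extrapolate the estimates back to zero error with a quadratic. The sketch below is only that schematic version, not the cohort's Poisson excess-risk analysis, and the error level and grid of multipliers are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 2000
      x_true = rng.normal(size=n)
      sigma_u = 0.8                                   # assumed measurement-error SD
      x_obs = x_true + rng.normal(scale=sigma_u, size=n)
      y = 1.0 + 2.0 * x_true + rng.normal(scale=0.5, size=n)

      def slope(x, y):
          return np.polyfit(x, y, 1)[0]

      lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # extra-noise multipliers
      b = 50                                          # simulations per lambda
      naive = []
      for lam in lambdas:
          est = [slope(x_obs + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y)
                 for _ in range(b)]
          naive.append(np.mean(est))

      # quadratic extrapolation back to lambda = -1 (zero total measurement error)
      coef = np.polyfit(lambdas, naive, 2)
      print("naive:", naive[0], "SIMEX-corrected:", np.polyval(coef, -1.0))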

  15. Geographically weighted regression for modelling the accessibility to the public hospital network in Concepción Metropolitan Area, Chile

    OpenAIRE

    Marcela Martínez Bascuñán; Carolina Rojas Quezada

    2016-01-01

    Accessibility models in transport geography based on geographic information systems have proven to be an effective method in determining spatial inequalities associated with public health. This work aims to model the spatial accessibility from populated areas within the Concepción metropolitan area (CMA), the second largest city in Chile. The city’s public hospital network is taken into consideration with special reference to socio-regional inequalities. The use of geographically weighted reg...

  16. Poisson Spot with Magnetic Levitation

    Science.gov (United States)

    Hoover, Matthew; Everhart, Michael; D'Arruda, Jose

    2010-01-01

    In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.

  17. Poisson hierarchy of discrete strings

    Energy Technology Data Exchange (ETDEWEB)

    Ioannidou, Theodora, E-mail: ti3@auth.gr [Faculty of Civil Engineering, School of Engineering, Aristotle University of Thessaloniki, 54249, Thessaloniki (Greece); Niemi, Antti J., E-mail: Antti.Niemi@physics.uu.se [Department of Physics and Astronomy, Uppsala University, P.O. Box 803, S-75108, Uppsala (Sweden); Laboratoire de Mathematiques et Physique Theorique CNRS UMR 6083, Fédération Denis Poisson, Université de Tours, Parc de Grandmont, F37200, Tours (France); Department of Physics, Beijing Institute of Technology, Haidian District, Beijing 100081 (China)

    2016-01-28

    The Poisson geometry of a discrete string in three dimensional Euclidean space is investigated. For this the Frenet frames are converted into a spinorial representation, the discrete spinor Frenet equation is interpreted in terms of a transfer matrix formalism, and Poisson brackets are introduced in terms of the spinor components. The construction is then generalised, in a self-similar manner, into an infinite hierarchy of Poisson algebras. As an example, the classical Virasoro (Witt) algebra that determines reparametrisation diffeomorphism along a continuous string, is identified as a particular sub-algebra, in the hierarchy of the discrete string Poisson algebra. - Highlights: • Witt (classical Virasoro) algebra is derived in the case of discrete string. • Infinite dimensional hierarchy of Poisson bracket algebras is constructed for discrete strings. • Spinor representation of discrete Frenet equations is developed.

  18. Simultaneous Measurement of Antenna Gain and Complex Permittivity of Liquid in Near-Field Region Using Weighted Regression

    Science.gov (United States)

    Ishii, Nozomu; Shiga, Hiroki; Ikarashi, Naoto; Sato, Ken-Ichi; Hamada, Lira; Watanabe, Soichi

    As a technique for calibrating electric-field probes used in standardized SAR (Specific Absorption Rate) assessment, we have studied the technique using the Friis transmission formula in the tissue-equivalent liquid. It is difficult to measure power transmission between two reference antennas in the far-field region due to large attenuation in the liquid. This means that the conventional Friis transmission formula cannot be applied to our measurement so that we developed an extension of this formula that is valid in the near-field region. In this paper, the method of weighted least squares is introduced to reduce the effect of the noise in the measurement system when the gain of the antenna operated in the liquid is determined by the curve-fitting technique. And we examine how to choose the fitting range to reduce the uncertainty of the estimated gain.
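
    The weighted least-squares curve fitting described above can be approximated with scipy's curve_fit by passing per-point uncertainties. The sketch below uses a purely hypothetical near-field attenuation model and noise level; it is not the extended Friis formula or the calibration procedure from the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def model(d, g0, a):
          # hypothetical near-field attenuation model (not the paper's formula)
          return g0 * np.exp(-a * d)

      rng = np.random.default_rng(13)
      d = np.linspace(0.02, 0.20, 40)                      # probe separations (m)
      p = model(d, 5.0, 20.0) * (1 + rng.normal(scale=0.05, size=d.size))
      sigma = 0.05 * p + 1e-12                             # assumed noise level per point

      # weighted least squares: points with larger sigma get less influence
      popt, pcov = curve_fit(model, d, p, p0=[1.0, 10.0], sigma=sigma, absolute_sigma=True)
      print(popt, np.sqrt(np.diag(pcov)))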

  19. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  20. Modules Over Color Hom-Poisson Algebras

    OpenAIRE

    2014-01-01

    In this paper we introduce color Hom-Poisson algebras and show that every color Hom-associative algebra has a non-commutative Hom-Poisson algebra structure in which the Hom-Poisson bracket is the commutator bracket. Then we show that color Poisson algebras (respectively morphism of color Poisson algebras) turn to color Hom-Poisson algebras (respectively morphism of Color Hom-Poisson algebras) by twisting the color Poisson structure. Next we prove that modules over color Hom–associative algebr...

  1. Current and Predicted Fertility using Poisson Regression Model ...

    African Journals Online (AJOL)

    AJRH Managing Editor

    Nigeria, with a persistently high growth rate, is among the top ten most populous countries. Monitoring key .... child and maternal health and overall family well-being1,2,22-26. .... large while the expectation remains stable, i.e., the probability of ...

  2. Misspecified poisson regression models for large-scale registry data

    DEFF Research Database (Denmark)

    Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.

    2016-01-01

    working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods...
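
    When the Poisson working model may be misspecified (e.g. overdispersed counts), a common safeguard is to pair the Poisson point estimates with sandwich (robust) standard errors. The sketch below shows this with statsmodels on synthetic overdispersed data; it is only a generic illustration, not the registry analysis or the semi-parametric bootstrap proposed in the paper.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(8)
      n = 5000
      x = rng.normal(size=n)
      # generate overdispersed counts so the Poisson model is misspecified
      mu = np.exp(0.5 + 0.3 * x)
      y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # mean mu, variance > mu

      X = sm.add_constant(x)
      naive = sm.GLM(y, X, family=sm.families.Poisson()).fit()
      robust = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
      print(naive.bse)    # model-based standard errors (too small here)
      print(robust.bse)   # sandwich standard errors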

  3. A hierarchy of Poisson brackets

    CERN Document Server

    Pavelka, Michal; Esen, Ogul; Grmela, Miroslav

    2015-01-01

    The vector field generating reversible time evolution of macroscopic systems involves two ingredients: gradient of a potential (a covector) and a degenerate Poisson structure transforming the covector into a vector. The Poisson structure is conveniently expressed in Poisson brackets, its degeneracy in their Casimirs (i.e. potentials whose gradients produce no vector field). In this paper we investigate in detail hierarchies of Poisson brackets, together with their Casimirs, that arise in passages from more to less detailed (i.e. more macroscopic) descriptions. In particular, we investigate the passage from mechanics of particles (in its Liouville representation) to the reversible kinetic theory and the passage from the reversible kinetic theory to the reversible fluid mechanics. From the physical point of view, the investigation includes binary mixtures and two-point formulations suitable for describing turbulent flows. From the mathematical point of view, we reveal the Lie algebra structure involved in the p...

  4. Gauging the Poisson sigma model

    CERN Document Server

    Zucchini, Roberto

    2008-01-01

    We show how to carry out the gauging of the Poisson sigma model in an AKSZ-inspired formulation by coupling it to a generalization of the Weil model worked out in ref. arXiv:0706.1289 [hep-th]. We call the resulting gauged field theory the Poisson-Weil sigma model. We study the BV cohomology of the model and show its relation to Hamiltonian basic and equivariant Poisson cohomology. As an application, we carry out the gauge fixing of the pure Weil model and of the Poisson-Weil model. In the first case, we obtain the 2-dimensional version of Donaldson-Witten topological gauge theory, describing the moduli space of flat connections on a closed surface. In the second case, we recover the gauged A topological sigma model worked out by Baptista, describing the moduli space of solutions of the so-called vortex equations.

  5. Coordination of Conditional Poisson Samples

    Directory of Open Access Journals (Sweden)

    Grafström Anton

    2015-12-01

    Full Text Available Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP) sampling is a modification of the classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first one uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
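
    The permanent-random-number mechanism behind this kind of coordination is easy to sketch for ordinary (non-conditional) Poisson sampling: each unit keeps one uniform number across occasions and is selected whenever that number falls below the occasion's inclusion probability, which maximises the overlap of the samples. The fixed-size conditional Poisson refinement from the paper is not implemented below; the sizes and probabilities are arbitrary.

      import numpy as np

      rng = np.random.default_rng(9)
      N = 1000
      prn = rng.uniform(size=N)              # permanent random numbers, one per unit

      pik_1 = np.full(N, 0.10)               # inclusion probabilities, occasion 1
      pik_2 = np.full(N, 0.12)               # inclusion probabilities, occasion 2

      sample_1 = prn < pik_1                 # Poisson sample on occasion 1
      sample_2 = prn < pik_2                 # Poisson sample on occasion 2

      # positive coordination: the overlap is as large as the probabilities allow
      print(sample_1.sum(), sample_2.sum(), (sample_1 & sample_2).sum())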

  6. Factors associated with adipocyte size reduction after weight loss interventions for overweight and obesity: a systematic review and meta-regression.

    Science.gov (United States)

    Murphy, Jessica; Moullec, Grégory; Santosa, Sylvia

    2017-02-01

    Enlarged adipocytes are a prime feature of adipose tissue dysfunction, and may be an appropriate target to decrease disease risk in obesity. We aimed to assess the change in adipocyte size in response to lifestyle and surgical weight loss interventions for overweight or obesity; and to explore whether certain participant and intervention characteristics influence this response. We systematically searched MEDLINE, EMBASE, CINAHL and Cochrane electronic databases to identify weight loss studies that quantified adipocyte size before and after the intervention. Using meta-regression analysis, we assessed the independent effects of weight loss, age, sex, adipocyte region, and intervention type (surgical vs. lifestyle) on adipocyte size reduction. We repeated the model as a sensitivity analysis including only the lifestyle interventions. Thirty-five studies met our eligibility criteria. In our main model, every 1.0% weight loss was associated with a 0.64% reduction in adipocyte size (p=0.003); and adipocytes from the upper body decreased 5% more in size than those in the lower body (p=0.009). These relationships were no longer significant when focusing only on lifestyle interventions. Moreover, age, sex and intervention type did not independently affect adipocyte size reduction in either model. Weight loss in obese individuals is consistently associated with a decrease in adipocyte size that is more pronounced in upper-body adipocytes. It remains to be clarified how biological differences and intervention characteristics influence this relationship, and whether it corresponds with reductions in other aspects of adipose tissue dysfunction and disease risk. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. An application of the Autoregressive Conditional Poisson (ACP) model

    CSIR Research Space (South Africa)

    Holloway, Jennifer P

    2010-11-01

    Full Text Available When modelling count data that comes in the form of a time series, the static Poisson regression and standard time series models are often not appropriate. A current study therefore involves the evaluation of several observation-driven and parameter...
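
    A minimal simulation of an autoregressive conditional Poisson (INGARCH(1,1)-type) series, in which the conditional mean depends on the previous count and the previous mean, is sketched below. The parameter values are arbitrary and this is not the model fitted in the CSIR study.

      import numpy as np

      def simulate_acp(T, omega=1.0, alpha=0.3, beta=0.5, seed=0):
          """Autoregressive conditional Poisson (INGARCH(1,1)) simulation:
          y_t ~ Poisson(lam_t), lam_t = omega + alpha * y_{t-1} + beta * lam_{t-1}."""
          rng = np.random.default_rng(seed)
          y = np.zeros(T, dtype=int)
          lam = np.zeros(T)
          lam[0] = omega / (1.0 - alpha - beta)      # start at the stationary mean
          y[0] = rng.poisson(lam[0])
          for t in range(1, T):
              lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
              y[t] = rng.poisson(lam[t])
          return y, lam

      y, lam = simulate_acp(500)
      print(y.mean(), lam.mean())                    # both near omega/(1-alpha-beta) = 5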

  8. Both Sides Now: Interpreting Beta Weights. "Beta" Weights Should Be Used to Interpret Regression Variates and to Assess In-Context Variable Importance. Cautions and Conditions for Interpreting Weighting Coefficients.

    Science.gov (United States)

    Harris, Richard J.; McNeil, Keith

    1993-01-01

    Presents two viewpoints about the use and interpretability of beta weights in educational research: (1) that beta weights should be interpreted as a logical index of the importance of individual predictors within the context of the entire set of predictors; and (2) that interpretation requires certain cautions and conditions. (SV)

  9. Geographically weighted regression and geostatistical techniques to construct the geogenic radon potential map of the Lazio region: A methodological proposal for the European Atlas of Natural Radiation.

    Science.gov (United States)

    Ciotoli, G; Voltaggio, M; Tuccimei, P; Soligo, M; Pasculli, A; Beaubien, S E; Bigi, S

    2017-01-01

    In many countries, assessment programmes are carried out to identify areas where people may be exposed to high radon levels. These programmes often involve detailed mapping, followed by spatial interpolation and extrapolation of the results based on the correlation of indoor radon values with other parameters (e.g., lithology, permeability and airborne total gamma radiation) to optimise the radon hazard maps at the municipal and/or regional scale. In the present work, Geographically Weighted Regression and geostatistics are used to estimate the Geogenic Radon Potential (GRP) of the Lazio Region, assuming that the radon risk only depends on the geological and environmental characteristics of the study area. A wide geodatabase has been organised including about 8000 samples of soil-gas radon, as well as other proxy variables, such as radium and uranium content of homogeneous geological units, rock permeability, and faults and topography often associated with radon production/migration in the shallow environment. All these data have been processed in a Geographic Information System (GIS) using geospatial analysis and geostatistics to produce base thematic maps in a 1000 m × 1000 m grid format. Global Ordinary Least Squares (OLS) regression and local Geographically Weighted Regression (GWR) have been applied and compared assuming that the relationships between radon activities and the environmental variables are not spatially stationary, but vary locally according to the GRP. The spatial regression model has been elaborated considering soil-gas radon concentrations as the response variable and developing proxy variables as predictors through the use of a training dataset. Then a validation procedure was used to predict soil-gas radon values using a test dataset. Finally, the predicted values were interpolated using the kriging algorithm to obtain the GRP map of the Lazio region. The map shows some high GRP areas corresponding to the volcanic terrains (central

  10. A Quasi-Poisson Approach on Modeling Accident Hazard Index for Urban Road Segments

    Directory of Open Access Journals (Sweden)

    Lu Ma

    2014-01-01

    Full Text Available In light of recently emphasized studies on the risk evaluation of crashes, accident counts on specific transportation facilities are adopted to reflect the chance of crash occurrence. The current study introduces a more comprehensive measure that supplements accident counts with information on accident harmfulness in the expression of accident risk, named the Accident Hazard Index (AHI) in the following context. Before the statistical analysis, datasets from various sources are integrated under a GIS platform, and the corresponding procedures are presented as an illustrative example for similar analyses. Then, a quasi-Poisson regression model is suggested for the analysis; the results show that the model is appropriate for dealing with overdispersed count data and that several key explanatory variables have a significant impact on the estimated AHI. In addition, the effect of the weights assigned to different accident severity levels is examined, and the selection of these weights is also discussed.
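
    A quasi-Poisson fit can be approximated in Python as a Poisson GLM whose covariance is rescaled by the Pearson-based dispersion estimate (statsmodels' scale="X2"). The sketch below uses synthetic overdispersed counts; it is not the paper's AHI model and involves none of its GIS preprocessing or severity weights.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(10)
      n = 1500
      x = rng.normal(size=n)
      mu = np.exp(1.0 + 0.4 * x)
      y = rng.negative_binomial(n=3, p=3 / (3 + mu))       # overdispersed counts

      X = sm.add_constant(x)
      # quasi-Poisson: Poisson mean model, dispersion estimated from Pearson chi-square
      fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")
      print("dispersion estimate:", fit.scale)             # > 1 signals overdispersion
      print(fit.bse)                                       # standard errors inflated by sqrt(scale)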

  11. Time-Changed Poisson Processes

    CERN Document Server

    Kumar, A; Vellaisamy, P

    2011-01-01

    We consider time-changed Poisson processes, and derive the governing difference-differential equations (DDE) for these processes. In particular, we consider the time-changed Poisson processes where the time-change is inverse Gaussian, or its hitting time process, and discuss the governing DDE's. The stable subordinator, inverse stable subordinator and their iterated versions are also considered as time-changes. DDE's corresponding to probability mass functions of these time-changed processes are obtained. Finally, we obtain a new governing partial differential equation for the tempered stable subordinator of index $0<\\beta<1,$ when $\\beta $ is a rational number. We then use this result to obtain the governing DDE for the mass function of the Poisson process time-changed by the tempered stable subordinator. Our results extend and complement the results in Baeumer et al. \\cite{B-M-N} and Beghin et al. \\cite{BO-1} in several directions.

  12. Salting-out assisted liquid-liquid extraction and partial least squares regression to assay low molecular weight polycyclic aromatic hydrocarbons leached from soils and sediments

    Science.gov (United States)

    Bressan, Lucas P.; do Nascimento, Paulo Cícero; Schmidt, Marcella E. P.; Faccin, Henrique; de Machado, Leandro Carvalho; Bohrer, Denise

    2017-02-01

    A novel method was developed to determine low molecular weight polycyclic aromatic hydrocarbons in aqueous leachates from soils and sediments using a salting-out assisted liquid-liquid extraction, synchronous fluorescence spectrometry and a multivariate calibration technique. Several experimental parameters were controlled and the optimum conditions were: sodium carbonate as the salting-out agent at a concentration of 2 mol L⁻¹, 3 mL of acetonitrile as extraction solvent, 6 mL of aqueous leachate, vortexing for 5 min and centrifuging at 4000 rpm for 5 min. The partial least squares calibration was optimized to the lowest values of root mean squared error and five latent variables were chosen for each of the targeted compounds. The regression coefficients for the true versus predicted concentrations were higher than 0.99. Figures of merit for the multivariate method were calculated, namely sensitivity, multivariate detection limit and multivariate quantification limit. The selectivity was also evaluated and other polycyclic aromatic hydrocarbons did not interfere in the analysis. Likewise, high performance liquid chromatography was used as a comparative methodology, and the regression analysis between the methods showed no statistical difference (t-test). The proposed methodology was applied to soils and sediments of a Brazilian river and the recoveries ranged from 74.3% to 105.8%. Overall, the proposed methodology was suitable for the targeted compounds, showing that the extraction method can be applied to spectrofluorometric analysis and that the multivariate calibration is also suitable for these compounds in leachates from real samples.

  13. [Spatial Distribution of Type 2 Diabetes Mellitus in Berlin: Application of a Geographically Weighted Regression Analysis to Identify Location-Specific Risk Groups].

    Science.gov (United States)

    Kauhl, Boris; Pieper, Jonas; Schweikart, Jürgen; Keste, Andrea; Moskwyn, Marita

    2017-02-16

    Understanding which population groups in which locations are at higher risk for type 2 diabetes mellitus (T2DM) allows efficient and cost-effective interventions targeting these risk-populations in great need in specific locations. The goal of this study was to analyze the spatial distribution of T2DM and to identify the location-specific, population-based risk factors using global and local spatial regression models. To display the spatial heterogeneity of T2DM, bivariate kernel density estimation was applied. An ordinary least squares regression model (OLS) was applied to identify population-based risk factors of T2DM. A geographically weighted regression model (GWR) was then constructed to analyze the spatially varying association between the identified risk factors and T2DM. T2DM is especially concentrated in the east and outskirts of Berlin. The OLS model identified proportions of persons aged 80 and older, persons without migration background, long-term unemployment, households with children and a negative association with single-parenting households as socio-demographic risk groups. The results of the GWR model point out important local variations of the strength of association between the identified risk factors and T2DM. The risk factors for T2DM depend largely on the socio-demographic composition of the neighborhoods in Berlin and highlight that a one-size-fits-all approach is not appropriate for the prevention of T2DM. Future prevention strategies should be tailored to target location-specific risk-groups. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Anisotropic Poisson Processes of Cylinders

    CERN Document Server

    Spiess, Malte

    2010-01-01

    Main characteristics of stationary anisotropic Poisson processes of cylinders (dilated k-dimensional flats) in d-dimensional Euclidean space are studied. Explicit formulae for the capacity functional, the covariance function, the contact distribution function, the volume fraction, and the intensity of the surface area measure are given which can be used directly in applications.

  15. Berezin integrals and Poisson processes

    Science.gov (United States)

    DeAngelis, G. F.; Jona-Lasinio, G.; Sidoravicius, V.

    1998-01-01

    We show that the calculation of Berezin integrals over anticommuting variables can be reduced to the evaluation of expectations of functionals of Poisson processes via an appropriate Feynman-Kac formula. In this way the tools of ordinary analysis can be applied to Berezin integrals and, as an example, we prove a simple upper bound. Possible applications of our results are briefly mentioned.

  16. Test of Poisson Failure Assumption.

    Science.gov (United States)

    1982-09-01

    TEST OF POISSON FAILURE ASSUMPTION. Chapter 1. INTRODUCTION. 1.1 Background. In stockage models... precipitates a regular failure pattern; it is also possible that the coding of scheduled vs unscheduled does not reflect what we would expect. Data

  17. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China.

    Science.gov (United States)

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-05-11

    In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is first used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%-19.1% higher than the traditional SVM-based models. In addition, the landslide susceptibility map obtained by our model demonstrates a strong correlation between the classified very-high-susceptibility zone and the previously investigated landslides.
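
    A toy sketch of the PSO-SVM coupling, under the assumption of scikit-learn and a synthetic data set: a small particle swarm searches (C, gamma) for an RBF support vector classifier, scoring each candidate by cross-validated accuracy. In the paper this search is run separately inside each GWR-derived prediction region; here a single region is shown.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

def fitness(p):                                    # p = (log10 C, log10 gamma)
    return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean()

rng = np.random.default_rng(0)
pos = rng.uniform([-1, -4], [3, 0], size=(12, 2))  # 12 particles in log-parameter space
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(15):                                # PSO iterations
    r1, r2 = rng.random((2, *pos.shape))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best (C, gamma):", (10 ** gbest).round(3), "CV accuracy:", round(pbest_f.max(), 3))
```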

  18. Assessment of Brown Bear's (Ursus arctos syriacus) Winter Habitat Using Geographically Weighted Regression and Generalized Linear Model in the South of Iran

    Directory of Open Access Journals (Sweden)

    A. A. Zarei

    2016-03-01

    Winter dens are one of the important components of the brown bear's (Ursus arctos syriacus) habitat, affecting reproduction and survival. Identifying the factors that affect den-site selection and suitable denning areas is therefore necessary for the conservation of this large carnivore. We used Geographically Weighted Logistic Regression (GWLR) and a Generalized Linear Model (GLM) to model the suitability of denning habitat in the Kouhkhom region of Fars province. In the present research, 20 dens (presence locations) and 20 caves where signs of bears were not found (absence locations) were used as dependent variables, and six environmental factors were used for each location as independent variables. The results of the GLM showed that distance to settlements, altitude, and distance to water were the most important variables affecting the suitability of the brown bear's denning habitat. The results of the GWLR showed significant local variations in the relationship between the occurrence of brown bear dens and distance to settlements. Based on the results of both models, suitable habitats for denning of the species are impassable areas in the mountains that are inaccessible to humans.
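
    As a hedged illustration of the GLM half of this analysis (the GWLR half would additionally apply spatial kernel weights), a binomial GLM relates den presence/absence to the environmental covariates named above. The data frame below is an invented placeholder with the same structure, not the Kouhkhom field data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "present": np.r_[np.ones(20), np.zeros(20)],      # 20 dens, 20 absence caves
    "dist_settlement": rng.normal(5.0, 2.0, 40),      # km (synthetic)
    "altitude": rng.normal(2200.0, 300.0, 40),        # m (synthetic)
    "dist_water": rng.normal(1.5, 0.5, 40),           # km (synthetic)
})
X = sm.add_constant(df[["dist_settlement", "altitude", "dist_water"]])
fit = sm.GLM(df["present"], X, family=sm.families.Binomial()).fit()
print(fit.params.round(3))
print(fit.pvalues.round(3))
```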

  19. Real-Time Estimation of Satellite-Derived PM2.5 Based on a Semi-Physical Geographically Weighted Regression Model

    Science.gov (United States)

    Zhang, Tianhao; Liu, Gang; Zhu, Zhongmin; Gong, Wei; Ji, Yuxi; Huang, Yusi

    2016-01-01

    The real-time estimation of ambient particulate matter with diameter no greater than 2.5 μm (PM2.5) is currently quite limited in China. A semi-physical geographically weighted regression (GWR) model was adopted to estimate PM2.5 mass concentrations at the national scale using the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) Aerosol Optical Depth product fused by the Dark Target (DT) and Deep Blue (DB) algorithms, combined with meteorological parameters. The fitted model could explain over 80% of the variability in the corresponding PM2.5 mass concentrations, although it tends to overestimate when measured concentrations are low and to underestimate when they are high. Based on World Health Organization standards, the results indicate that most regions in China suffered severe PM2.5 pollution during winter. Seasonal average PM2.5 mass concentrations predicted by the model indicate that residential regions, namely the Jing-Jin-Ji Region and Central China, faced a serious challenge from fine particles. Moreover, estimation deviation, caused primarily by the spatially uneven distribution of monitoring sites and by elevation changes within relatively small regions, is discussed. In summary, real-time PM2.5 was estimated effectively by the satellite-based semi-physical GWR model, and the results could provide reasonable references for assessing health impacts and offer guidance on air quality management in China. PMID:27706054

  20. Spatiotemporal Pattern of PM2.5 Concentrations in Mainland China and Analysis of Its Influencing Factors using Geographically Weighted Regression

    Science.gov (United States)

    Luo, Jieqiong; Du, Peijun; Samat, Alim; Xia, Junshi; Che, Meiqin; Xue, Zhaohui

    2017-01-01

    Based on an annual average PM2.5 gridded dataset, this study first analyzed the spatiotemporal pattern of PM2.5 across Mainland China during 1998–2012. Then, drawing on meteorological site data, land cover data, population and Gross Domestic Product (GDP) data, etc., the contributions of latent geographic factors, including socioeconomic factors (e.g., road, agriculture, population, industry) and natural geographical factors (e.g., topography, climate, vegetation), to PM2.5 were explored through a Geographically Weighted Regression (GWR) model. The results revealed that PM2.5 concentrations increased while the spatial pattern remained stable, and the proportion of areas with PM2.5 concentrations greater than 35 μg/m³ significantly increased from 23.08% to 29.89%. Moreover, road, agriculture, population and vegetation showed the most significant impacts on PM2.5. Additionally, the Moran's I for the residuals of the GWR was 0.025 (not significant at the 0.01 level), indicating that the GWR model was properly specified. The local coefficient estimates of GDP in some cities were negative, suggesting the existence of the inverted-U-shaped Environmental Kuznets Curve (EKC) for PM2.5 in Mainland China. The effects of each latent factor on PM2.5 differed across regions. Therefore, regional measures and strategies for controlling PM2.5 should be formulated in terms of the local impacts of specific factors.
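
    The Moran's I check on GWR residuals reported above can be reproduced in a few lines; the sketch below assumes a row-standardized inverse-distance weight matrix (the study's actual weight specification is not stated here) and uses synthetic coordinates and residuals.

```python
import numpy as np

def morans_i(values, coords):
    """Moran's I with row-standardized inverse-distance spatial weights."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = np.where(d > 0, 1.0 / d, 0.0)
    w /= w.sum(axis=1, keepdims=True)
    z = values - values.mean()
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

rng = np.random.default_rng(3)
coords = rng.random((200, 2))
residuals = rng.normal(size=200)      # spatially random residuals -> I close to 0
print(round(morans_i(residuals, coords), 3))
```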

  1. Exploring the Non-Stationary Effects of Forests and Developed Land within Watersheds on Biological Indicators of Streams Using Geographically-Weighted Regression

    Directory of Open Access Journals (Sweden)

    Kyoung-Jin An

    2016-03-01

    This study examined the non-stationary relationship between the ecological condition of streams and the proportions of forest and developed land in watersheds using geographically-weighted regression (GWR). Most previous studies have adopted the ordinary least squares (OLS) method, which assumes stationarity of the relationship between land use and biological indicators. However, these conventional OLS models cannot provide any insight into local variations in the land use effects within watersheds. Here, we compared the performance of the OLS and GWR statistical models applied to benthic diatom, macroinvertebrate, and fish communities in sub-watershed management areas. We extracted land use datasets from the Ministry of Environment LULC map and data on biological indicators in the Nakdong river system from the National Aquatic Ecological Monitoring Program in Korea. We found that the GWR model had superior performance compared with the OLS model, as assessed based on R², Akaike's Information Criterion, and Moran's I values. Furthermore, the GWR models revealed specific localized effects of land use on biological indicators, which we investigated further. The results of this study can be used to inform more effective policies on watershed management and to enhance ecological integrity by prioritizing sub-watershed management areas.

  2. Poisson vs. Long-Tailed Internet traffic

    OpenAIRE

    2005-01-01

    In this thesis, we reexamine the long-running debate over which model is suitable for studying Internet traffic: Poisson or long-tailed. The Poisson model, adapted from telephone networks, has been used since the beginning of the World Wide Web, while the long-tailed model has gradually taken over as credible evidence has accumulated. Instead of using Superposition of Point Processes to explain why traffic that is not Poisson tends towards Poisson traffic as the load increases, as it is recent...

  3. The Degraded Poisson Wiretap Channel

    CERN Document Server

    Laourine, Amine

    2010-01-01

    Providing security guarantees for wireless communication is critically important for today's applications. While previous work in this area has concentrated on radio frequency (RF) channels, providing security guarantees for RF channels is inherently difficult because they are prone to rapid variations due to small-scale fading. Wireless optical communication, on the other hand, is inherently more secure than RF communication due to the intrinsic aspects of the signal propagation in the optical and near-optical frequency range. In this paper, secure communication over wireless optical links is examined by studying the secrecy capacity of a direct detection system. For the degraded Poisson wiretap channel, a closed-form expression of the secrecy capacity is given. A complete characterization of the general rate-equivocation region is also presented. For achievability, an optimal code is explicitly constructed by using the structured code designed by Wyner for the Poisson channel. The converse is proved in two dif...

  4. Perturbation analysis of Poisson processes

    CERN Document Server

    Last, Günter

    2012-01-01

    We consider a Poisson process $\Phi$ on a general phase space. The expectation of a function of $\Phi$ can be considered as a functional of the intensity measure $\lambda$ of $\Phi$. Extending earlier results of Molchanov and Zuyev (2000) on finite Poisson processes, we study the behaviour of this functional under signed (possibly infinite) perturbations of $\lambda$. In particular we obtain general Margulis–Russo type formulas for the derivative with respect to non-linear transformations of the intensity measure depending on some parameter. As an application we study the behaviour of expectations of functions of multivariate pure jump Lévy processes under perturbations of the Lévy measure. A key ingredient of our approach is the explicit Fock space representation obtained in Last and Penrose (2011).

  5. A generalized gyrokinetic Poisson solver

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Z.; Lee, W.W.

    1995-03-01

    A generalized gyrokinetic Poisson solver has been developed, which employs local operations in the configuration space to compute the polarization density response. The new technique is based on the actual physical process of gyrophase-averaging. It is useful for nonlocal simulations using general geometry equilibrium. Since it utilizes local operations rather than the global ones such as FFT, the new method is most amenable to massively parallel algorithms.

  6. Penggunaan Analisis Regresi Terboboti dalam Penyusunan Model Pertumbuhan Peninggi Acacia mangium Willd. (The Use of Weighted Regression Analysis for Constructing a Top-height Growth Model of Acacia mangium Willd.)

    Directory of Open Access Journals (Sweden)

    Muhdin .

    2011-05-01

    Stand growth models are usually constructed with regression analysis. Homoscedasticity, i.e. homogeneity of the residual variance, is one of the assumptions underlying this analysis; violating it reduces model accuracy, which shows up as a low coefficient of determination and a high standard error. The problem of heteroscedasticity can be addressed with weighted regression analysis. The selected top-height growth model in this research was transformed into the equation ln P = a + b/A, for which there was a significant relationship between top height and age (R² = 55.04%, sb0 = 0.041, sb1 = 0.171). With the weight wi = 1/√Xi, there was no significant relationship between top height and age (R² = 0.55%, sb0 = 0.572, sb1 = 2.560), i.e. this weight gave much lower accuracy than the unweighted fit. However, weighted regression with wi = 1/si², where si² is the residual variance for the i-th group of the independent variable (X), showed a significant relationship between top height and age (R² = 45.46%, sb0 = 0.084, sb1 = 0.205), a clear improvement over the unweighted regression. Furthermore, weighted regression with wi = 1/si², where si² is the residual variance at the i-th value of the independent variable (X) estimated through a second-order polynomial regression model, showed a highly significant relationship between top height and age (R² = 87.22%, sb0 = 0.029, sb1 = 0.072), the best accuracy of all treatments. This research indicates that, with a suitable weight, weighted regression analysis can improve the accuracy of top-height growth models. Keywords: growth model, weighted regression, Acacia mangium, regression analysis
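
    The weighting strategy summarized above translates directly into code; the sketch below (synthetic data, statsmodels assumed) fits the transformed model ln P = a + b/A by ordinary least squares, models the residual variance with a second-order polynomial in age, and refits by weighted least squares with weights equal to the inverse of that fitted variance.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
age = rng.uniform(2.0, 12.0, 120)                                       # stand age A in years (synthetic)
top_height = np.exp(3.0 - 4.0 / age + rng.normal(0, 0.05 * age, 120))  # heteroscedastic top height P

y = np.log(top_height)                                                  # model: ln P = a + b * (1/A)
X = sm.add_constant(1.0 / age)

ols = sm.OLS(y, X).fit()
var_poly = np.polyfit(age, ols.resid ** 2, 2)        # second-order polynomial variance model
var_hat = np.clip(np.polyval(var_poly, age), 1e-6, None)
wls = sm.WLS(y, X, weights=1.0 / var_hat).fit()

print("OLS R2:", round(ols.rsquared, 3), " WLS R2:", round(wls.rsquared, 3))
print("WLS coefficient std. errors:", wls.bse.round(3))
```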

  7. Estimating national-scale ground-level PM2.5 concentration in China using geographically weighted regression based on MODIS and MISR AOD.

    Science.gov (United States)

    You, Wei; Zang, Zengliang; Zhang, Lifeng; Li, Yi; Wang, Weiqi

    2016-05-01

    Taking advantage of their continuous spatial coverage, satellite-derived aerosol optical depth (AOD) products have been widely used to assess the spatial and temporal characteristics of fine particulate matter (PM2.5) on the ground and its effects on human health. However, national-scale ground-level PM2.5 estimation is still very limited because of the lack of ground PM2.5 measurements to calibrate the model in China. In this study, a national-scale geographically weighted regression (GWR) model was developed to estimate ground-level PM2.5 concentration based on satellite AODs, newly released nation-wide hourly PM2.5 concentrations, and meteorological parameters. The results showed good agreement between satellite-retrieved and ground-observed PM2.5 concentrations at 943 stations in China. The overall cross-validation (CV) R² is 0.76 and the root mean squared prediction error (RMSE) is 22.26 μg/m³ for MODIS-derived AOD. The MISR-derived AOD exhibits comparable performance, with a CV R² of 0.81 and an RMSE of 27.46 μg/m³. Annual PM2.5 concentrations retrieved by either MODIS or MISR AOD indicated that most residential community areas exceeded the new annual Chinese PM2.5 National Standard level 2. These results suggest that this approach is useful for estimating large-scale ground-level PM2.5 distributions, especially for regions without PM monitoring sites.

  8. Scrub typhus islands in the Taiwan area and the association between scrub typhus disease and forest land use and farmer population density: geographically weighted regression.

    Science.gov (United States)

    Tsai, Pui-Jen; Yeh, Hsi-Chyi

    2013-04-29

    The Taiwan area comprises the main island of Taiwan and several small islands located off the coast of southern China. The eastern two-thirds of Taiwan are characterized by rugged mountains covered with tropical and subtropical vegetation. The western region of Taiwan is characterized by flat or gently rolling plains. Geographically, the Taiwan area is diverse in ecology and environment, although scrub typhus threatens local human populations. In this study, we investigate the effects of seasonal and meteorological factors on the incidence of scrub typhus infection among 10 local climate regions. The correlation between the spatial distribution of scrub typhus and cultivated forests in Taiwan, as well as the relationship between scrub typhus incidence and the population density of farm workers, is examined. We applied Pearson's product moment correlation to calculate the correlation between the incidence of scrub typhus and meteorological factors among the 10 local climate regions. We used the geographically weighted regression (GWR) method, a type of spatial regression that generates parameters disaggregated by the spatial units of analysis, to detail and map each regression point for the response variable, the district-level standardized incidence ratio (SIR) of scrub typhus. We also applied GWR to examine the explanatory variables of forest-land-use type and farm worker density in Taiwan in 2005. In the Taiwan area, scrub typhus endemic areas are located in the southeastern regions and mountainous townships of Taiwan, as well as the Pescadores, Kinmen, and Matou Islands. Among these islands and the low-incidence areas in the central western and southwestern regions of Taiwan, we observed a significant correlation between scrub typhus incidence and surface temperature. No similar significant correlation was found in the endemic areas (e.g., the southeastern region and the mountainous area of Taiwan). Precipitation correlates positively with scrub typhus incidence in

  9. Combining Alphas via Bounded Regression

    Directory of Open Access Journals (Sweden)

    Zura Kakushadze

    2015-11-01

    We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications there is typically insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or a skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on the alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.
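
    The building block behind this approach is least squares with box constraints on the coefficients, which is what imposing bounds on the alpha weights within the regression amounts to. The sketch below uses scipy's bounded least-squares solver on synthetic data; it is not the paper's actual alpha-combining setup.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(5)
A = rng.normal(size=(250, 20))                       # e.g. 250 observations, 20 alpha streams
b = A @ rng.uniform(0.0, 0.1, 20) + rng.normal(0, 0.5, 250)

unbounded = np.linalg.lstsq(A, b, rcond=None)[0]     # plain regression weights
bounded = lsq_linear(A, b, bounds=(0.0, 0.08)).x     # each weight confined to [0, 0.08]

print("unbounded:", unbounded.round(3))
print("bounded:  ", bounded.round(3))
```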

  10. Poisson Manifolds, Lie Algebroids, Modular Classes: a Survey

    Directory of Open Access Journals (Sweden)

    Yvette Kosmann-Schwarzbach

    2008-01-01

    After a brief summary of the main properties of Poisson manifolds and Lie algebroids in general, we survey recent work on the modular classes of Poisson and twisted Poisson manifolds, of Lie algebroids with a Poisson or twisted Poisson structure, and of Poisson-Nijenhuis manifolds. A review of the spinor approach to the modular class concludes the paper.

  11. Use of Weighted Regressions on Time, Discharge, and Season to Assess Effectiveness of Agricultural and Environmental Best Management Practices in California and Nevada, USA

    Science.gov (United States)

    Domagalski, J. L.; Schlegel, B.; Hutchins, J.

    2014-12-01

    Long-term data sets on stream-water quality and discharge can be used to assess whether best management practices (BMPs) are restoring beneficial uses of impaired water as required under the Clean Water Act. In this study, we evaluated a greater than 20-year record of water quality from selected streams in the Central Valley (CV) of California and Lake Tahoe (California and Nevada, USA). The CV contains a mix of agricultural and urbanized land, while the Lake Tahoe area is mostly forested, with seasonal residents and tourism. Because nutrients and fine sediments cause a reduction in water clarity that impairs Lake Tahoe, BMPs were implemented in the early 1990s to reduce nitrogen and phosphorus loads. The CV does not have a current nutrient management plan, but numerous BMPs exist to reduce pesticide loads, and it was hypothesized that these programs could also reduce nutrient levels. In the CV and Lake Tahoe areas, nutrient concentrations, loads, and trends were estimated by using the recently developed Weighted Regressions on Time, Discharge, and Season (WRTDS) model. Sufficient data were available to compare trends during a voluntary and an enforcement period for seven CV sites within the lower Sacramento and San Joaquin Basins. For six of the seven sites, flow-normalized mean annual concentrations of total phosphorus and nitrate decreased at a faster rate during the enforcement period than during the earlier voluntary period. Concentration changes during similar years and ranges of flow conditions suggest that BMPs designed for pesticides also reduced nutrient loads in the CV. A trend analysis using WRTDS was completed for six streams that enter Lake Tahoe, covering the late 1980s through 2008. The results of the model confirm that nutrient loading is influenced strongly by season, such as by spring runoff from snowmelt. The highest nutrient concentrations in the late 1980s and early 1990s correlate with high flows, followed by statistically significant decreases

  12. Deriving percentage study weights in multi-parameter meta-analysis models: with application to meta-regression, network meta-analysis and one-stage individual participant data models.

    Science.gov (United States)

    Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L

    2017-01-01

    Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
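
    A small numerical sketch of the decomposition described above, under the stated independence assumption: the total Fisher information is the sum of study-specific information matrices I_i, the variance matrix of the pooled estimates is V = (sum_i I_i)^(-1), and the contribution of study i to the variance of parameter j can be read off the diagonal of V I_i V. The matrices below are random placeholders rather than real study data.

```python
import numpy as np

rng = np.random.default_rng(6)
p, n_studies = 3, 5                                   # 3 model parameters, 5 studies
study_info = []
for _ in range(n_studies):
    L = rng.normal(size=(p, p))
    study_info.append(L @ L.T + np.eye(p))            # random positive-definite I_i

V = np.linalg.inv(sum(study_info))                    # variance of the pooled estimates

contrib = np.array([np.diag(V @ Ii @ V) for Ii in study_info])
pct_weights = 100 * contrib / contrib.sum(axis=0)     # rows: studies, columns: parameters
print(pct_weights.round(1))                           # each column sums to 100
```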

  13. Quantile regression

    CERN Document Server

    Hao, Lingxin

    2007-01-01

    Quantile Regression, the first book of Hao and Naiman's two-book series, establishes the seldom recognized link between inequality studies and quantile regression models. Though separate methodological literature exists for each subject, the authors seek to explore the natural connections between this increasingly sought-after tool and research topics in the social sciences. Quantile regression as a method does not rely on assumptions as restrictive as those for the classical linear regression; though more traditional models such as least squares linear regression are more widely utilized, Hao

  14. Are normal-weight adolescents satisfied with their weight?

    Directory of Open Access Journals (Sweden)

    Mariana Contiero San Martini

    ABSTRACT: CONTEXT AND OBJECTIVE: The high prevalence of obesity has led to public policies for combating it. People with normal weight may gain greater awareness of this issue and change their perceptions of their weight. The aim of this study was to evaluate the prevalence of body weight dissatisfaction among normal-weight adolescents, according to demographic and socioeconomic variables, health-related behavior and morbidities. DESIGN AND SETTING: Population-based cross-sectional study that used data from a health survey conducted in the city of Campinas, São Paulo, in 2008-2009. METHODS: The prevalence and prevalence ratios of weight dissatisfaction were estimated according to independent variables, by means of simple and multiple Poisson regression. RESULTS: 573 normal-weight adolescents aged 10 to 19 years (mean age 14.7 years) were analyzed. The prevalence of weight dissatisfaction was 43.7% (95% confidence interval, CI: 37.8-49.8). Higher prevalences of weight dissatisfaction were observed among females, individuals aged 15 to 19 years, those whose households had eight or more domestic appliances, former smokers, individuals who reported alcohol intake and those who had one or more chronic diseases. Lower prevalence of dissatisfaction was observed among adolescents living in substandard housing. Among the normal-weight adolescents, 26.1% wished to lose weight and 17.6% wished to gain weight. CONCLUSION: The results from this study indicate that even when weight is seen to be within the normal range, a high proportion of adolescents express dissatisfaction with their weight, especially females, older adolescents and those of higher socioeconomic level.
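
    Prevalence ratios from Poisson regression, as reported above, are usually obtained by exponentiating the model coefficients; the sketch below assumes statsmodels, a robust ("sandwich") variance estimator, and an entirely synthetic data frame in place of the Campinas survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "dissatisfied": rng.binomial(1, 0.44, 573),       # weight dissatisfaction (synthetic)
    "female": rng.binomial(1, 0.5, 573),
    "age15_19": rng.binomial(1, 0.45, 573),
    "chronic_disease": rng.binomial(1, 0.2, 573),
})
fit = smf.glm("dissatisfied ~ female + age15_19 + chronic_disease",
              data=df, family=sm.families.Poisson()).fit(cov_type="HC1")

print(np.exp(fit.params).round(2))                    # prevalence ratios
print(np.exp(fit.conf_int()).round(2))                # 95% confidence intervals
```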

  15. Localization of Point Sources for Poisson Equation using State Observers

    KAUST Repository

    Majeed, M. U.

    2016-08-09

    A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of the solution profiles of these sub-problems localizes point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

  16. Rigid body dynamics on the Poisson torus

    Science.gov (United States)

    Richter, Peter H.

    2008-11-01

    The theory of rigid body motion with emphasis on the modifications introduced by a Cardan suspension is outlined. The configuration space is no longer SO(3) but a 3-torus; the equivalent of the Poisson sphere, after separation of an angular variable, is a Poisson torus. Iso-energy surfaces and their bifurcations are discussed. A universal Poincaré section method is proposed.

  17. Poisson Geometry from a Dirac perspective

    OpenAIRE

    Meinrenken, Eckhard

    2016-01-01

    We present proofs of classical results in Poisson geometry using techniques from Dirac geometry. This article is based on mini-courses at the Poisson summer school in Geneva, June 2016, and at the workshop "Quantum Groups and Gravity" at the University of Waterloo, April 2016.

  18. Speech parts as Poisson processes.

    Science.gov (United States)

    Badalamenti, A F

    2001-09-01

    This paper presents evidence that six of the seven parts of speech occur in written text as Poisson processes, simple or recurring. The six major parts are nouns, verbs, adjectives, adverbs, prepositions, and conjunctions, with the interjection occurring too infrequently to support a model. The data consist of more than the first 5000 words of works by four major authors, coded to label the parts of speech, as well as periods (sentence terminators). Sentence length is measured via the period and found to be normally distributed, with no stochastic model identified for its occurrence. The models for all six speech parts but the noun significantly distinguish some pairs of authors, and likewise for the joint use of all word types. Any one author is significantly distinguished from any other by at least one word type, and sentence length very significantly distinguishes each from all others. The variety of word type use, measured by Shannon entropy, builds to about 90% of its maximum possible value. The rate constants for nouns are close to the fractions of maximum entropy achieved. This finding, together with the stochastic models and the relations among them, suggests that the noun may be a primitive organizer of written text.

  19. Regression Basics

    CERN Document Server

    Kahane, Leo H

    2007-01-01

    Using a friendly, nontechnical approach, the Second Edition of Regression Basics introduces readers to the fundamentals of regression. Accessible to anyone with an introductory statistics background, this book builds from a simple two-variable model to a model of greater complexity. Author Leo H. Kahane weaves four engaging examples throughout the text to illustrate not only the techniques of regression but also how this empirical tool can be applied in creative ways to consider a broad array of topics. New to the Second Edition Offers greater coverage of simple panel-data estimation:

  20. A method for the selection of a functional form for a thermodynamic equation of state using weighted linear least squares stepwise regression

    Science.gov (United States)

    Jacobsen, R. T.; Stewart, R. B.; Crain, R. W., Jr.; Rose, G. L.; Myers, A. F.

    1976-01-01

    A method was developed for establishing a rational choice of the terms to be included in an equation of state with a large number of adjustable coefficients. The methods presented were developed for use in the determination of an equation of state for oxygen and nitrogen. However, a general application of the methods is possible in studies involving the determination of an optimum polynomial equation for fitting a large number of data points. The data considered in the least squares problem are experimental thermodynamic pressure-density-temperature data. Attention is given to a description of stepwise multiple regression and the use of stepwise regression in the determination of an equation of state for oxygen and nitrogen.

  1. Full characterization of the fractional Poisson process

    CERN Document Server

    Politi, Mauro; Scalas, Enrico

    2011-01-01

    The fractional Poisson process (FPP) is a counting process with independent and identically distributed inter-event times following the Mittag-Leffler distribution. This process is very useful in several fields of applied and theoretical physics including models for anomalous diffusion. Contrary to the well-known Poisson process, the fractional Poisson process does not have stationary and independent increments. It is not a Lévy process and it is not a Markov process. In this letter, we present formulae for its finite-dimensional distribution functions, fully characterizing the process. These exact analytical results are compared to Monte Carlo simulations.

  2. Assessing the influence of traffic-related air pollution on risk of term low birth weight on the basis of land-use-based regression models and measures of air toxics.

    Science.gov (United States)

    Ghosh, Jo Kay C; Wilhelm, Michelle; Su, Jason; Goldberg, Daniel; Cockburn, Myles; Jerrett, Michael; Ritz, Beate

    2012-06-15

    Few studies have examined associations of birth outcomes with toxic air pollutants (air toxics) in traffic exhaust. This study included 8,181 term low birth weight (LBW) children and 370,922 term normal-weight children born between January 1, 1995, and December 31, 2006, to women residing within 5 miles (8 km) of an air toxics monitoring station in Los Angeles County, California. Additionally, land-use-based regression (LUR)-modeled estimates of levels of nitric oxide, nitrogen dioxide, and nitrogen oxides were used to assess the influence of small-area variations in traffic pollution. The authors examined associations with term LBW (≥37 weeks' completed gestation and birth weight <2,500 g) … pollution in epidemiologic birth outcome studies.

  3. Noncommutative Poisson brackets on Loday algebras and related deformation quantization

    CERN Document Server

    UCHINO, Kyousuke

    2010-01-01

    We introduce a new type of algebra which is called a Loday-Poisson algebra. The class of Loday-Poisson algebras forms a special subclass of Aguiar's dual-prePoisson algebras [A]. We will prove that there exists a unique Loday-Poisson algebra over a Loday algebra, like the Lie-Poisson algebra over a Lie algebra. Thus, Loday-Poisson algebras are regarded as noncommutative analogues of Lie-Poisson algebras. We will show that the polynomial Loday-Poisson algebra is deformation quantizable and that the associated quantum algebra is Loday's associative dialgebra.

  4. Transition from Poisson to circular unitary ensemble

    Indian Academy of Sciences (India)

    Vinayak; Akhilesh Pandey

    2009-09-01

    Transitions to universality classes of random matrix ensembles have been useful in the study of weakly-broken symmetries in quantum chaotic systems. Transitions involving Poisson as the initial ensemble have been particularly interesting. The exact two-point correlation function was derived by one of the present authors for the Poisson to circular unitary ensemble (CUE) transition with uniform initial density. This is given in terms of a rescaled symmetry-breaking parameter Λ. The same result was obtained for the Poisson to Gaussian unitary ensemble (GUE) transition by Kunz and Shapiro, using the contour-integral method of Brezin and Hikami. We show that their method is applicable to the Poisson to CUE transition with arbitrary initial density. Their method is also applicable to the more general ℓCUE to CUE transition, where ℓCUE refers to the superposition of ℓ independent CUE spectra in arbitrary ratio.

  5. Negative Poisson's Ratio in Modern Functional Materials.

    Science.gov (United States)

    Huang, Chuanwei; Chen, Lang

    2016-10-01

    Materials with negative Poisson's ratio attract considerable attention due to their intriguing underlying physical properties and numerous promising applications, particularly in stringent environments such as aerospace and defense, because of their unconventional mechanical enhancements. Recent progress in materials with a negative Poisson's ratio is reviewed here, covering the current state of research in both theory and experiment. The inter-relationship between the underlying structure and a negative Poisson's ratio is discussed for functional materials, including macroscopic bulk, low-dimensional nanoscale particles, films, sheets, and tubes. The coexistence of, and correlations with, other negative indexes (such as negative compressibility and negative thermal expansion) are also addressed. Finally, open questions and future research opportunities are proposed for functional materials with negative Poisson's ratios.

  6. Poisson point processes imaging, tracking, and sensing

    CERN Document Server

    Streit, Roy L

    2010-01-01

    This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.

  7. Some Characterizations of Mixed Poisson Processes

    CERN Document Server

    Lyberopoulos, D P

    2011-01-01

    A characterization of mixed Poisson processes in terms of disintegrations is proven. As a consequence some further characterizations of such processes via claim interarrival processes, martingales and claim measures are obtained.

  8. Modeling Events with Cascades of Poisson Processes

    CERN Document Server

    Simma, Aleksandr

    2012-01-01

    We present a probabilistic model of events in continuous time in which each event triggers a Poisson process of successor events. The ensemble of observed events is thereby modeled as a superposition of Poisson processes. Efficient inference is feasible under this model with an EM algorithm. Moreover, the EM algorithm can be implemented as a distributed algorithm, permitting the model to be applied to very large datasets. We apply these techniques to the modeling of Twitter messages and the revision history of Wikipedia.

  9. Poisson's ratio of arterial wall - Inconsistency of constitutive models with experimental data.

    Science.gov (United States)

    Skacel, Pavel; Bursa, Jiri

    2016-02-01

    Poisson's ratio of fibrous soft tissues is analyzed in this paper on the basis of constitutive models and experimental data. Three different up-to-date constitutive models accounting for the dispersion of fibre orientations are analyzed. Their predictions of the anisotropic Poisson's ratios are investigated under finite strain conditions, together with the effects of specific orientation distribution functions and of other parameters. The applied constitutive models predict a tendency towards lower (or even negative) out-of-plane Poisson's ratios. New experimental data on a porcine arterial layer under uniaxial tension in orthogonal directions are also presented and compared with the theoretical predictions and other literature data. The results point out the typical features of recent constitutive models with fibres concentrated in the circumferential-axial plane of arterial layers and their potential inconsistency with some experimental data. The volumetric (in)compressibility of arterial tissues is also discussed as a possible and significant factor influencing this inconsistency.

  10. Autistic Regression

    Science.gov (United States)

    Matson, Johnny L.; Kozlowski, Alison M.

    2010-01-01

    Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

  11. Logistic regression.

    Science.gov (United States)

    Nick, Todd G; Campbell, Kathleen M

    2007-01-01

    The Medical Subject Headings (MeSH) thesaurus used by the National Library of Medicine defines logistic regression models as "statistical models which describe the relationship between a qualitative dependent variable (that is, one which can take only certain discrete values, such as the presence or absence of a disease) and an independent variable." Logistic regression models are used to study effects of predictor variables on categorical outcomes; normally the outcome is binary, such as presence or absence of disease (e.g., non-Hodgkin's lymphoma), in which case the model is called a binary logistic model. When there are multiple predictors (e.g., risk factors and treatments), the model is referred to as a multiple or multivariable logistic regression model and is one of the most frequently used statistical models in medical journals. In this chapter, we examine both simple and multiple binary logistic regression models and present related issues, including interaction, categorical predictor variables, continuous predictor variables, and goodness of fit.

  12. Aortic and Hepatic Contrast Enhancement During Hepatic-Arterial and Portal Venous Phase Computed Tomography Scanning: Multivariate Linear Regression Analysis Using Age, Sex, Total Body Weight, Height, and Cardiac Output.

    Science.gov (United States)

    Masuda, Takanori; Nakaura, Takeshi; Funama, Yoshinori; Higaki, Toru; Kiguchi, Masao; Imada, Naoyuki; Sato, Tomoyasu; Awai, Kazuo

    We evaluated the effect of the age, sex, total body weight (TBW), height (HT) and cardiac output (CO) of patients on aortic and hepatic contrast enhancement during hepatic-arterial phase (HAP) and portal venous phase (PVP) computed tomography (CT) scanning. This prospective study received institutional review board approval; prior informed consent to participate was obtained from all 168 patients. All were examined using our routine protocol; the contrast material was 600 mg/kg iodine. Cardiac output was measured with a portable electrical velocimeter within 5 minutes of starting the CT scan. We calculated contrast enhancement (per gram of iodine: ΔHU/gI) of the abdominal aorta during the HAP and of the liver parenchyma during the PVP. We performed univariate and multivariate linear regression analysis between all patient characteristics and the ΔHU/gI of aortic and liver parenchymal enhancement. Univariate linear regression analysis demonstrated statistically significant correlations between the ΔHU/gI and the age, sex, TBW, HT, and CO (all P values significant). Multivariate linear regression analysis showed that only the TBW and CO were of independent predictive value. In multivariate linear regression analysis only the TBW and CO were significantly correlated with aortic and liver parenchymal enhancement; the age, sex, and HT were not. The CO was the only independent factor affecting aortic and liver parenchymal enhancement at hepatic CT when the protocol was adjusted for the TBW.

  13. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  14. Counting people with low-level features and Bayesian regression.

    Science.gov (United States)

    Chan, Antoni B; Vasconcelos, Nuno

    2012-04-01

    An approach to the problem of estimating the size of inhomogeneous crowds, which are composed of pedestrians that travel in different directions, without using explicit object segmentation or tracking is proposed. Instead, the crowd is segmented into components of homogeneous motion, using the mixture of dynamic-texture motion model. A set of holistic low-level features is extracted from each segmented region, and a function that maps features into estimates of the number of people per segment is learned with Bayesian regression. Two Bayesian regression models are examined. The first is a combination of Gaussian process regression with a compound kernel, which accounts for both the global and local trends of the count mapping but is limited by the real-valued outputs that do not match the discrete counts. We address this limitation with a second model, which is based on a Bayesian treatment of Poisson regression that introduces a prior distribution on the linear weights of the model. Since exact inference is analytically intractable, a closed-form approximation is derived that is computationally efficient and kernelizable, enabling the representation of nonlinear functions. An approximate marginal likelihood is also derived for kernel hyperparameter learning. The two regression-based crowd counting methods are evaluated on a large pedestrian data set, containing very distinct camera views, pedestrian traffic, and outliers, such as bikes or skateboarders. Experimental results show that regression-based counts are accurate regardless of the crowd size, outperforming the count estimates produced by state-of-the-art pedestrian detectors. Results on 2 h of video demonstrate the efficiency and robustness of the regression-based crowd size estimation over long periods of time.
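
    The second regression model above places a Gaussian prior on the linear weights of a Poisson regression; its maximum a posteriori estimate coincides with an L2-penalized Poisson GLM, which the sketch below fits with scikit-learn on synthetic segment features. The actual method is kernelized and learns hyperparameters from an approximate marginal likelihood, which this sketch omits.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(8)
X = rng.random((500, 6))                              # holistic low-level segment features (synthetic)
true_w = np.array([0.5, -0.3, 0.8, 0.0, 0.2, -0.1])
counts = rng.poisson(np.exp(1.0 + X @ true_w))        # people per segment (synthetic)

model = PoissonRegressor(alpha=1.0)                   # alpha plays the role of the prior precision
model.fit(X, counts)
print(model.coef_.round(2), round(model.intercept_, 2))
print(model.predict(X[:5]).round(1))                  # expected crowd counts for five segments
```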

  15. Modeling the potential distribution of shallow-seated landslides using the weights of evidence method and a logistic regression model:a case study of the Sabae Area, Japan

    Institute of Scientific and Technical Information of China (English)

    Ru-Hua SONG; Daimaru HIROMU; Abe KAZUTOKI; Kurokawa USIO; Matsuura SUMIO

    2008-01-01

    A number of statistical methods are typically used to effectively predict potential landslide distributions. In this study two multivariate statistical analysis methods were used (weights of evidence and logistic regression) to predict the potential distribution of shallow-seated landslides in the Kamikawachi area of Sabae City, Fukui Prefecture, Japan. First, the dependent variable (shallow-seated landslides) was divided into presence and absence, and the independent variables (environmental factors such as slope and altitude) were categorized according to their characteristics. Then, using the weights of evidence (WE) method, the weights of the pairs comprising presence (w+(i)) or absence (w-(i)), and the contrast values for each category of independent variable (evidence), were calculated. Using the method that integrated the weights of evidence method and a logistic regression model, score values were calculated for each category of independent variable. Based on these contrast values, three models were selected to sum the score values of every grid cell in the study area. According to a receiver operating characteristic (ROC) curve analysis, model 2 yielded the best fit for predicting the potential distribution of shallow-seated landslide hazards, with 89% correctness and a 54.5% hit ratio when the occurrence probability (OP) of landslides was 70%. The model was tested using data from an area close to the study region, and showed 94% correctness and a hit ratio of 45.7% when the OP of landslides was 70%. Finally, the potential distribution of shallow-seated landslides, based on the OP, was mapped using a geographical information system.
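
    The weights-of-evidence quantities named above have a simple form: for one categorized evidence layer, W+ and W- are log ratios of the conditional probabilities of the evidence given landslide presence and absence, and the contrast is their difference. The raster values below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(9)
landslide = rng.binomial(1, 0.1, 10_000)          # 1 = landslide cell (synthetic)
slope_class = rng.integers(0, 4, 10_000)          # categorized evidence, e.g. slope bins (synthetic)

for c in range(4):
    in_class = slope_class == c
    p_pres = in_class[landslide == 1].mean()      # P(evidence | landslide)
    p_abs = in_class[landslide == 0].mean()       # P(evidence | no landslide)
    w_plus = np.log(p_pres / p_abs)
    w_minus = np.log((1 - p_pres) / (1 - p_abs))
    print(c, round(w_plus, 3), round(w_minus, 3), round(w_plus - w_minus, 3))
```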

  16. Generalized Poisson distribution: the property of mixture of Poisson and comparison with negative binomial distribution.

    Science.gov (United States)

    Joe, Harry; Zhu, Rong

    2005-04-01

    We prove that the generalized Poisson distribution GP(θ, η) (η ≥ 0) is a mixture of Poisson distributions; this is a new property for a distribution which is the topic of the book by Consul (1989). Because we find that the fits to count data of the generalized Poisson and negative binomial distributions are often similar, to understand their differences we compare the probability mass functions and skewnesses of the generalized Poisson and negative binomial distributions with the first two moments fixed. They have slight differences in many situations, but their zero-inflated distributions, with masses at zero, means and variances fixed, can differ more. These probabilistic comparisons are helpful in selecting a better fitting distribution for modelling count data with long right tails. Through a real example of count data with a large zero fraction, we illustrate how the generalized Poisson and negative binomial distributions, as well as their zero-inflated distributions, can be discriminated.

  17. Compressed sensing performance bounds under Poisson noise

    CERN Document Server

    Raginsky, Maxim; Marcia, Roummel F; Willett, Rebecca M

    2009-01-01

    This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is non-additive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical $\ell_2$–$\ell_1$ minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective functi...

  18. The oligarchic structure of Paretian Poisson processes

    Science.gov (United States)

    Eliazar, I.; Klafter, J.

    2008-08-01

    Paretian Poisson processes are a mathematical model of random fractal populations governed by Paretian power law tail statistics, and connect together and underlie elemental issues in statistical physics. Considering Paretian Poisson processes to represent the wealth of individuals in human populations, we explore their oligarchic structure via the analysis of the following random ratios: the aggregate wealth of the oligarchs ranked from m+1 to n, measured relative to the wealth of the m-th oligarch (n> m). A mean analysis and a stochastic-limit analysis (as n→∞) of these ratios are conducted. We obtain closed-form results which turn out to be highly contingent on the fractal exponent of the Paretian Poisson process considered.

  19. The Space-Fractional Poisson Process

    CERN Document Server

    Orsingher, Enzo

    2011-01-01

    In this paper we introduce the space-fractional Poisson process whose state probabilities $p_k^\alpha(t)$, $t>0$, $\alpha \in (0,1]$, are governed by the equations $(\mathrm{d}/\mathrm{d}t)p_k^\alpha(t) = -\lambda^\alpha (1-B)^\alpha p_k^\alpha(t)$, where $(1-B)^\alpha$ is the fractional difference operator found in the study of time series analysis. We explicitly obtain the distributions $p_k^\alpha(t)$ and the probability generating functions $G_\alpha(u,t)$, which are also expressed as distributions of the minimum of i.i.d. uniform random variables. The comparison with the time-fractional Poisson process is investigated and finally, we arrive at the more general space-time fractional Poisson process of which we give the explicit distribution.

  20. The Isolation Time of Poisson Brownian motions

    CERN Document Server

    Peres, Yuval; Stauffer, Alexandre

    2011-01-01

    Let the nodes of a Poisson point process move independently in $\mathbb{R}^d$ according to Brownian motions. We study the isolation time for a target particle that is placed at the origin, namely how long it takes until there is no node of the Poisson point process within distance $r$ of it. We obtain asymptotics for the tail probability which are tight up to constants in the exponent in dimension $d \geq 3$ and tight up to logarithmic factors in the exponent for dimensions $d = 1, 2$.

  1. Software Cost Estimation Method Based on Weighted Partial Least Squares Regression

    Institute of Scientific and Technical Information of China (English)

    刘海涛; 魏汝祥; 蒋国萍

    2012-01-01

    Software cost estimation involves a large number of influencing factors, and the explanatory variables are often strongly inter-correlated. To address this, this paper proposes a software cost estimation method based on weighted Partial Least Squares Regression (PLSR). After the variables are weighted, an integrated index, the weighted analogue deviation, is defined to describe how closely historical data samples resemble one another. Each sample is then assigned an adaptive weight according to this similarity, and the optimal number of partial least squares latent variables and the weight-allocation parameters are determined by exhaustive (traversal) search. Experimental results show that the prediction error is reduced by 73.61% compared with multiple linear regression and by 32.34% compared with global PLSR.

  2. A class of CTRWs: Compound fractional Poisson processes

    CERN Document Server

    Scalas, Enrico

    2011-01-01

    This chapter is an attempt to present a mathematical theory of compound fractional Poisson processes. The chapter begins with the characterization of a well-known Lévy process: the compound Poisson process. The semi-Markov extension of the compound Poisson process naturally leads to the compound fractional Poisson process, where the Poisson counting process is replaced by the Mittag-Leffler counting process, also known as the fractional Poisson process. This process is no longer Markovian and Lévy. However, several analytical results are available and some of them are discussed here. The functional limit of the compound Poisson process is an $\alpha$-stable Lévy process, whereas in the case of the compound fractional Poisson process, one gets an $\alpha$-stable Lévy process subordinated to the fractional Poisson process.

  3. Affine Poisson Groups and WZW Model

    Directory of Open Access Journals (Sweden)

    Ctirad Klimcík

    2008-01-01

    We give a detailed description of a dynamical system which enjoys a Poisson-Lie symmetry with two non-isomorphic dual groups. The system is obtained by taking the q → ∞ limit of the q-deformed WZW model and the understanding of its symmetry structure results in uncovering an interesting duality of its exchange relations.

  4. Transportation inequalities: From Poisson to Gibbs measures

    CERN Document Server

    Ma, Yutao; Wang, Xinyu; Wu, Liming; 10.3150/00-BEJ268

    2011-01-01

    We establish an optimal transportation inequality for the Poisson measure on the configuration space. Furthermore, under the Dobrushin uniqueness condition, we obtain a sharp transportation inequality for the Gibbs measure on $\mathbb{N}^{\Lambda}$ or the continuum Gibbs measure on the configuration space.

  5. Poisson boundaries over locally compact quantum groups

    CERN Document Server

    Kalantar, Mehrdad; Ruan, Zhong-Jin

    2011-01-01

    We present versions of several classical results on harmonic functions and Poisson boundaries in the setting of locally compact quantum groups $\mathbb{G}$. In particular, the Choquet-Deny theorem holds for compact quantum groups; also, the result of Kaimanovich-Vershik and Rosenblatt, which characterizes group amenability in terms of harmonic functions, answering a conjecture by Furstenberg, admits a non-commutative analogue in the separable case. We also explore the relation between classical and quantum Poisson boundaries by investigating the spectrum of the quantum group. We apply this machinery to find a concrete realization of the Poisson boundaries of the compact quantum group $SU_{q}(2)$ arising from measures on its spectrum. We further show that the Poisson boundary of the natural Markov operator extension of the convolution action of a quantum probability measure $\mu$ on $L_\infty(\mathbb{G})$ to $B(L_2(\mathbb{G}))$, as introduced and studied - for general completely bounded multipliers on $L_1(\m...

  6. Continental moisture recycling as a Poisson process

    Directory of Open Access Journals (Sweden)

    H. F. Goessling

    2013-04-01

    Full Text Available On their journey across large land masses, water molecules experience a number of precipitation-evaporation cycles (recycling events). We derive analytically the frequency distributions of recycling events for the water molecules contained in a given air parcel. Given the validity of certain simplifying assumptions, continental moisture recycling is shown to develop either into a Poisson distribution or a geometric distribution. We distinguish two cases: in case (A) recycling events are counted since the water molecules were last advected across the ocean-land boundary. In case (B) recycling events are counted since the water molecules were last evaporated from the ocean. For case (B) we show by means of a simple scale analysis that, given the conditions on Earth, realistic frequency distributions may be regarded as a mixture of a Poisson distribution and a geometric distribution. By contrast, in case (A) the Poisson distribution generally appears as a reasonable approximation. This conclusion is consistent with the simulation results of an earlier study where an atmospheric general circulation model equipped with water vapor tracers was used. Our results demonstrate that continental moisture recycling can be interpreted as a Poisson process.

  7. Bayesian credible interval construction for Poisson statistics

    Institute of Scientific and Technical Information of China (English)

    ZHU Yong-Sheng

    2008-01-01

    The construction of the Bayesian credible (confidence) interval for a Poisson observable including both the signal and background, with and without systematic uncertainties, is presented. Introducing the conditional probability satisfying the requirement that the background is not larger than the observed events to construct the Bayesian credible interval is also discussed. A Fortran routine, BPOCI, has been developed to implement the calculation.
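
    A minimal numerical sketch of the flat-prior case without systematics (not the BPOCI routine itself): the posterior for the signal s, given n observed events and a known mean background b, is proportional to the Poisson likelihood truncated to s >= 0, and a central credible interval is read off the normalized cumulative posterior. Grid limits are arbitrary.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def poisson_credible_interval(n_obs, b, cl=0.90, s_max=50.0, n_grid=20001):
        """Central Bayesian credible interval for the Poisson signal s (flat prior on s >= 0),
        given n_obs observed counts and a known mean background b."""
        s = np.linspace(0.0, s_max, n_grid)
        ds = s[1] - s[0]
        post = poisson.pmf(n_obs, s + b)          # likelihood = un-normalized posterior
        post /= post.sum() * ds                   # normalize on the grid
        cdf = np.cumsum(post) * ds
        lo = s[np.searchsorted(cdf, (1 - cl) / 2)]
        hi = s[np.searchsorted(cdf, 1 - (1 - cl) / 2)]
        return lo, hi

    print(poisson_credible_interval(n_obs=5, b=2.3, cl=0.90))
    ```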

  8. Thinning spatial point processes into Poisson processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Schoenberg, Frederic Paik

    2010-01-01

    are identified, and where we simulate backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and...

  9. Thinning spatial point processes into Poisson processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Schoenberg, Frederic Paik

    , and where one simulates backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and thus can...

  10. Evolutionary inference via the Poisson Indel Process.

    Science.gov (United States)

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.

  11. Computation of confidence intervals for Poisson processes

    Science.gov (United States)

    Aguilar-Saavedra, J. A.

    2000-07-01

    We present an algorithm which allows a fast numerical computation of Feldman-Cousins confidence intervals for Poisson processes, even when the number of background events is relatively large. This algorithm incorporates an appropriate treatment of the singularities that arise as a consequence of the discreteness of the variable.
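
    A compact sketch of the Feldman-Cousins construction for a Poisson signal with a known mean background (a plain grid scan, without the paper's fast treatment of the discreteness-related singularities); the grid ranges and step sizes below are arbitrary and the background is assumed positive.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def fc_interval(n_obs, b, cl=0.90, mu_max=30.0, mu_step=0.01, n_max=200):
        """Feldman-Cousins confidence interval for the Poisson signal mu, known background b > 0."""
        ns = np.arange(n_max + 1)
        accepted = []
        for mu in np.arange(0.0, mu_max + mu_step, mu_step):
            p = poisson.pmf(ns, mu + b)
            mu_best = np.maximum(ns - b, 0.0)            # physically allowed best-fit signal
            r = p / poisson.pmf(ns, mu_best + b)         # likelihood-ratio ordering principle
            order = np.argsort(r)[::-1]
            cum = np.cumsum(p[order])
            k = np.searchsorted(cum, cl) + 1             # smallest acceptance set with coverage >= cl
            if n_obs in ns[order[:k]]:
                accepted.append(mu)
        return min(accepted), max(accepted)

    print(fc_interval(n_obs=4, b=3.0, cl=0.90))          # compare against published FC tables
    ```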

  12. Computation of confidence intervals for Poisson processes

    CERN Document Server

    Aguilar-Saavedra, J A

    2000-01-01

    We present an algorithm which allows a fast numerical computation of Feldman-Cousins confidence intervals for Poisson processes, even when the number of background events is relatively large. This algorithm incorporates an appropriate treatment of the singularities that arise as a consequence of the discreteness of the variable.

  13. Exit problems for oscillating compound Poisson process

    CERN Document Server

    Kadankova, Tetyana

    2011-01-01

    In this article we determine the Laplace transforms of the main boundary functionals of the oscillating compound Poisson process. These are the first passage time of the level, the joint distribution of the first exit time from the interval and the value of the overshoot through the boundary. Under certain conditions we establish the asymptotic behaviour of the mentioned functionals.

  14. Homological unimodularity and Calabi-Yau condition for Poisson algebras

    Science.gov (United States)

    Lü, Jiafeng; Wang, Xingting; Zhuang, Guangbin

    2017-09-01

    In this paper, we show that the twisted Poincaré duality between Poisson homology and cohomology can be derived from the Serre invertible bimodule. This gives another definition of a unimodular Poisson algebra in terms of its Poisson Picard group. We also achieve twisted Poincaré duality for Hochschild (co)homology of Poisson bimodules using rigid dualizing complex. For a smooth Poisson affine variety with the trivial canonical bundle, we prove that its enveloping algebra is a Calabi-Yau algebra if the Poisson structure is unimodular.

  15. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments

    Energy Technology Data Exchange (ETDEWEB)

    Fisicaro, G., E-mail: giuseppe.fisicaro@unibas.ch; Goedecker, S. [Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel (Switzerland); Genovese, L. [University of Grenoble Alpes, CEA, INAC-SP2M, L-Sim, F-38000 Grenoble (France); Andreussi, O. [Institute of Computational Science, Università della Svizzera Italiana, Via Giuseppe Buffi 13, CH-6904 Lugano (Switzerland); Theory and Simulations of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, Station 12, CH-1015 Lausanne (Switzerland); Marzari, N. [Theory and Simulations of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, Station 12, CH-1015 Lausanne (Switzerland)

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
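
    As a much-reduced illustration of the linear-solver aspect only (the ordinary Poisson equation on a box with homogeneous Dirichlet conditions, not the generalized or Poisson-Boltzmann problems, and without the preconditioner described in the paper), one can assemble the 5-point Laplacian as a sparse matrix and hand it to a conjugate-gradient solver:

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    n = 64                                            # interior grid points per dimension
    h = 1.0 / (n + 1)
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    A = (sp.kron(I, T) + sp.kron(T, I)) / h**2        # 2-D negative Laplacian, Dirichlet box

    x = np.linspace(h, 1.0 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    rho = np.exp(-100.0 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))  # localized charge density

    phi, info = cg(A, rho.ravel())                    # info == 0 signals convergence
    print(info, phi.reshape(n, n).max())
    ```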

  16. Generalized master equations for non-Poisson dynamics on networks

    Science.gov (United States)

    Hoffmann, Till; Porter, Mason A.; Lambiotte, Renaud

    2012-10-01

    The traditional way of studying temporal networks is to aggregate the dynamics of the edges to create a static weighted network. This implicitly assumes that the edges are governed by Poisson processes, which is not typically the case in empirical temporal networks. Accordingly, we examine the effects of non-Poisson inter-event statistics on the dynamics of edges, and we apply the concept of a generalized master equation to the study of continuous-time random walks on networks. We show that this equation reduces to the standard rate equations when the underlying process is Poissonian and that its stationary solution is determined by an effective transition matrix whose leading eigenvector is easy to calculate. We conduct numerical simulations and also derive analytical results for the stationary solution under the assumption that all edges have the same waiting-time distribution. We discuss the implications of our work for dynamical processes on temporal networks and for the construction of network diagnostics that take into account their nontrivial stochastic nature.

  17. On higher Poisson and Koszul--Schouten brackets

    CERN Document Server

    Bruce, Andrew James

    2009-01-01

    In this note we show how to construct a homotopy BV-algebra on the algebra of differential forms over a higher Poisson manifold. The Lie derivative along the higher Poisson structure provides the generating operator.

  18. On the Confidence Interval for the parameter of Poisson Distribution

    CERN Document Server

    Bityukov, S I; Taperechkina, V A

    2000-01-01

    In the present paper the possibility of constructing a continuous analogue of the Poisson distribution for the search of the bounds of confidence intervals for the parameter of the Poisson distribution is discussed, and the results of numerical construction of confidence intervals are presented.

  19. Application of a Weighted Regression Model for Reporting Nutrient and Sediment Concentrations, Fluxes, and Trends in Concentration and Flux for the Chesapeake Bay Nontidal Water-Quality Monitoring Network, Results Through Water Year 2012

    Science.gov (United States)

    Chanat, Jeffrey G.; Moyer, Douglas L.; Blomquist, Joel D.; Hyer, Kenneth E.; Langland, Michael J.

    2016-01-13

    In the Chesapeake Bay watershed, estimated fluxes of nutrients and sediment from the bay’s nontidal tributaries into the estuary are the foundation of decision making to meet reductions prescribed by the Chesapeake Bay Total Maximum Daily Load (TMDL) and are often the basis for refining scientific understanding of the watershed-scale processes that influence the delivery of these constituents to the bay. Two regression-based flux and trend estimation models, ESTIMATOR and Weighted Regressions on Time, Discharge, and Season (WRTDS), were compared using data from 80 watersheds in the Chesapeake Bay Nontidal Water-Quality Monitoring Network (CBNTN). The watersheds range in size from 62 to 70,189 square kilometers and record lengths range from 6 to 28 years. ESTIMATOR is a constant-parameter model that estimates trends only in concentration; WRTDS uses variable parameters estimated with weighted regression, and estimates trends in both concentration and flux. WRTDS had greater explanatory power than ESTIMATOR, with the greatest degree of improvement evident for records longer than 25 years (30 stations; improvement in median model R2= 0.06 for total nitrogen, 0.08 for total phosphorus, and 0.05 for sediment) and the least degree of improvement for records of less than 10 years, for which the two models performed nearly equally. Flux bias statistics were comparable or lower (more favorable) for WRTDS for any record length; for 30 stations with records longer than 25 years, the greatest degree of improvement was evident for sediment (decrease of 0.17 in median statistic) and total phosphorus (decrease of 0.05). The overall between-station pattern in concentration trend direction and magnitude for all constituents was roughly similar for both models. A detailed case study revealed that trends in concentration estimated by WRTDS can operationally be viewed as a less-constrained equivalent to trends in concentration estimated by ESTIMATOR. Estimates of annual mean flow
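
    The weighted-regression idea behind WRTDS can be sketched as follows: for each estimation point, observations are weighted by their closeness in time, discharge, and season, and a local linear model for the log of concentration is fitted. This is only a schematic stand-in for the published WRTDS implementation; the half-window widths and the synthetic record below are assumptions.

    ```python
    import numpy as np

    def tricube(d, half_width):
        """Tricube weight, zero outside the half-window."""
        u = np.clip(np.abs(d) / half_width, 0.0, 1.0)
        return (1.0 - u**3) ** 3

    def wrtds_estimate(t0, q0, t, q, conc, h_time=7.0, h_lnq=2.0, h_season=0.5):
        """Estimate ln(concentration) at decimal time t0 and discharge q0 from observations
        (t: decimal years, q: discharge, conc: concentration) via locally weighted regression."""
        d_season = np.minimum(np.abs(t - t0) % 1.0, 1.0 - np.abs(t - t0) % 1.0)
        w = (tricube(t - t0, h_time) *
             tricube(np.log(q) - np.log(q0), h_lnq) *
             tricube(d_season, h_season))
        X = np.column_stack([np.ones_like(t), t, np.log(q),
                             np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], np.log(conc) * sw, rcond=None)
        x0 = np.array([1.0, t0, np.log(q0), np.sin(2 * np.pi * t0), np.cos(2 * np.pi * t0)])
        return x0 @ beta

    rng = np.random.default_rng(0)
    t = 1990 + 25 * rng.random(400)                   # synthetic 1990-2015 record
    q = np.exp(rng.normal(3.0, 0.8, size=400))
    conc = np.exp(1.0 - 0.3 * np.log(q) + 0.01 * (t - 1990) + 0.1 * rng.standard_normal(400))
    print(np.exp(wrtds_estimate(2005.5, 20.0, t, q, conc)))
    ```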

  20. Poisson Bracket on the Space of Histories

    CERN Document Server

    Marolf, D

    1994-01-01

    We extend the Poisson bracket from a Lie bracket of phase space functions to a Lie bracket of functions on the space of canonical histories and investigate the resulting algebras. Typically, such extensions define corresponding Lie algebras on the space of Lagrangian histories via pull back to a space of partial solutions. Such spaces of histories are familiar from path integration and some studies of decoherence. For gauge systems, we extend both the canonical and reduced Poisson brackets to the full space of histories. We then comment on the use of such algebras in time reparameterization invariant systems and systems with a Gribov ambiguity, though our main goal is to introduce concepts and techniques for use in a companion paper.

  1. Model selection for Poisson processes with covariates

    CERN Document Server

    Sart, Mathieu

    2011-01-01

    We observe $n$ inhomogeneous Poisson processes with covariates and aim at estimating their intensities. To handle this problem, we assume that the intensity of each Poisson process is of the form $s(\cdot, x)$ where $x$ is the covariate and where $s$ is an unknown function. We propose a model selection approach where the models are used to approximate the multivariate function $s$. We show that our estimator satisfies an oracle-type inequality under very weak assumptions on both the intensities and the models. By using a Hellinger-type loss, we establish non-asymptotic risk bounds and specify them under various kinds of assumptions on the target function $s$, such as being smooth or composite. Besides, we show that our estimation procedure is robust with respect to these assumptions.

  2. The local Poisson hypothesis for solar flares

    CERN Document Server

    Wheatland, M S

    2001-01-01

    The question of whether flares occur as a Poisson process has important consequences for flare physics. Recently Lepreti et al. presented evidence for local departure from Poisson statistics in the Geostationary Operational Environmental Satellite (GOES) X-ray flare catalog. Here it is argued that this effect arises from a selection effect inherent in the soft X-ray observations; namely that the slow decay of enhanced flux following a large flare makes detection of subsequent flares less likely. It is also shown that the power-law tail of the GOES waiting-time distribution varies with the solar cycle. This counts against any intrinsic significance to the appearance of a power law, or to the value of its index.

  3. Stabilities for nonisentropic Euler-Poisson equations.

    Science.gov (United States)

    Cheung, Ka Luen; Wong, Sen

    2015-01-01

    We establish stability and blowup results for the nonisentropic Euler-Poisson equations by the energy method. By analysing the second inertia, we show that the classical solutions of the system with attractive forces blow up in finite time in some special dimensions when the energy is negative. Moreover, we obtain stability results for the system in the cases of attractive and repulsive forces.

  4. Continental moisture recycling as a Poisson process

    OpenAIRE

    2013-01-01

    On their journey across large land masses, water molecules experience a number of precipitation-evaporation cycles (recycling events). We derive analytically the frequency distributions of recycling events for the water molecules contained in a given air parcel. Given the validity of certain simplifying assumptions, continental moisture recycling is shown to develop either into a Poisson distribution or a geometric distribution. We distinguish two cases: in case (A) recycling events a...

  5. Continental moisture recycling as a Poisson process

    OpenAIRE

    2013-01-01

    On their journey over large land masses, water molecules experience a number of precipitation–evaporation cycles (recycling events). We derive analytically the frequency distributions of recycling events for the water molecules contained in a given air parcel. Given the validity of certain simplifying assumptions, the frequency distribution of recycling events is shown to develop either into a Poisson distribution or a geometric distribution. We distingu...

  6. A New Echeloned Poisson Series Processor (EPSP)

    Science.gov (United States)

    Ivanova, Tamara

    2001-07-01

    A specialized Echeloned Poisson Series Processor (EPSP) is proposed. It is typical software for the implementation of analytical algorithms of Celestial Mechanics. EPSP is designed for manipulating long polynomial-trigonometric series with literal divisors. The coefficients of these echeloned series are rational or floating-point numbers. A Keplerian processor and an analytical generator of special celestial mechanics functions based on the EPSP have also been developed.

  7. Irreversible thermodynamics of Poisson processes with reaction.

    Science.gov (United States)

    Méndez, V; Fort, J

    1999-11-01

    A kinetic model is derived to study the successive movements of particles, described by a Poisson process, as well as their generation. The irreversible thermodynamics of this system is also studied from the kinetic model. This makes it possible to evaluate the differences between thermodynamical quantities computed exactly and up to second-order. Such differences determine the range of validity of the second-order approximation to extended irreversible thermodynamics.

  8. Irreversible thermodynamics of Poisson processes with reaction

    Science.gov (United States)

    Méndez, Vicenç; Fort, Joaquim

    1999-11-01

    A kinetic model is derived to study the successive movements of particles, described by a Poisson process, as well as their generation. The irreversible thermodynamics of this system is also studied from the kinetic model. This makes it possible to evaluate the differences between thermodynamical quantities computed exactly and up to second-order. Such differences determine the range of validity of the second-order approximation to extended irreversible thermodynamics.

  9. An alternative hyper-Poisson distribution

    Directory of Open Access Journals (Sweden)

    C. Satheesh Kumar

    2013-05-01

    Full Text Available An alternative form of the hyper-Poisson distribution is introduced through its probability mass function, and some of its important aspects are studied, such as the mean, variance, expressions for its raw moments, factorial moments, probability generating function and recursion formulae for its probabilities, raw moments and factorial moments. The estimation of the parameters of the distribution by various methods is considered and illustrated using some real life data sets.

  10. Alternative Forms of Compound Fractional Poisson Processes

    Directory of Open Access Journals (Sweden)

    Luisa Beghin

    2012-01-01

    Full Text Available We study here different fractional versions of the compound Poisson process. The fractionality is introduced in the counting process representing the number of jumps as well as in the density of the jumps themselves. The corresponding distributions are obtained explicitly and proved to be solutions of fractional equations of order less than one. Only in the final case treated in this paper, where the number of jumps is given by the fractional-difference Poisson process defined in Orsingher and Polito (2012), do we have a fractional driving equation, with respect to the time argument, with order greater than one. Moreover, in this case, the compound Poisson process is Markovian and this is also true for the corresponding limiting process. All the processes considered here are proved to be compositions of continuous time random walks with stable processes (or inverse stable subordinators). These subordinating relationships hold, not only in the limit, but also in the finite domain. In some cases the densities satisfy master equations which are the fractional analogues of the well-known Kolmogorov one.

  11. Differential Poisson Sigma Models with Extended Supersymmetry

    CERN Document Server

    Arias, Cesar; Torres-Gomez, Alexander

    2016-01-01

    The induced two-dimensional topological N=1 supersymmetric sigma model on a differential Poisson manifold M presented in arXiv:1503.05625 is shown to be a special case of the induced Poisson sigma model on the bi-graded supermanifold T[0,1]M. The bi-degree comprises the standard N-valued target space degree, corresponding to the form degree on the worldsheet, and an additional Z-valued fermion number, corresponding to the degree in the differential graded algebra of forms on M. The N=1 supersymmetry stems from the compatibility between the (extended) differential Poisson bracket and the de Rham differential on M. The latter is mapped to a nilpotent vector field Q of bi-degree (0,1) on T*[1,0](T[0,1]M), and the covariant Hamiltonian action is Q-exact. New extended supersymmetries arise as inner derivatives along special bosonic Killing vectors on M that induce Killing supervector fields of bi-degree (0,-1) on T*[1,0](T[0,1]M).

  12. Connectivity in Sub-Poisson Networks

    CERN Document Server

    Blaszczyszyn, Bartlomiej

    2010-01-01

    We consider a class of point processes (pp), which we call {\\em sub-Poisson}; these are pp that can be directionally-convexly ($dcx$) dominated by some Poisson pp. The $dcx$ order has already been shown useful in comparing various point process characteristics, including Ripley's and correlation functions as well as shot-noise fields generated by pp, indicating in particular that smaller in the $dcx$ order processes exhibit more regularity (less clustering, less voids) in the repartition of their points. Using these results, in this paper we study the impact of the $dcx$ ordering of pp on the properties of two continuum percolation models, which have been proposed in the literature to address macroscopic connectivity properties of large wireless networks. As the first main result of this paper, we extend the classical result on the existence of phase transition in the percolation of the Gilbert's graph (called also the Boolean model), generated by a homogeneous Poisson pp, to the class of homogeneous sub-Pois...

  13. Gauge Poisson representations for birth/death master equations

    CERN Document Server

    Drummond, P D

    2002-01-01

    Poisson representation techniques provide a powerful method for mapping master equations for birth/death processes - found in many fields of physics, chemistry and biology - into more tractable stochastic differential equations. However, the usual expansion is not exact in the presence of boundary terms, which commonly occur when the differential equations are nonlinear. In this paper, a stochastic gauge technique is introduced that eliminates boundary terms, to give an exact representation as a weighted rate equation with stochastic terms. These methods provide novel techniques for calculating and understanding the effects of number correlations in systems that have a master equation description. As examples, correlations induced by strong mutations in genetics, and the astrophysical problem of molecule formation on microscopic grain surfaces are analyzed. Exact analytic results are obtained that can be compared with numerical simulations, demonstrating that stochastic gauge techniques can give exact results...

  14. Estimates of covariance components for body weights from birth to 365 days of age in Guzera cattle, using random regression models

    Directory of Open Access Journals (Sweden)

    Luciele Cristina Pelicioni

    2009-01-01

    Full Text Available A total of 19,770 body weight records of Guzera cattle, measured from birth to 365 days of age and supplied by the Brazilian Zebu Breeders Association (ABCZ), was analyzed with the following objectives: 1) to compare different residual variance structures, through step functions with 1, 18, 28 and 53 classes and through variance functions with orders ranging from two to five using ordinary polynomials; and 2) to estimate covariance functions of different orders for the direct additive genetic, maternal genetic, animal permanent environmental and maternal permanent environmental effects, as well as genetic parameters for the body weights, using random regression models. The random effects were modeled by regressions on Legendre polynomials with orders ranging from linear to quartic. The models were compared by the likelihood ratio test and by the Akaike and Schwarz Bayesian information criteria. According to the statistical tests, the model with 18 heterogeneous residual classes fitted the residual variances best, although the model with a fifth-order variance function was also adequate. The estimated direct heritabilities, ranging from 0.04 to 0.53, were higher than those reported in the literature, but followed the same trend as the estimates from single-trait analyses. Selection for weight at any age would improve weight at all ages in the interval studied.

  15. Leibniz Color Algebra and Leibniz Poisson Color Algebra

    Institute of Scientific and Technical Information of China (English)

    高齐; 王聪; 张庆成

    2014-01-01

    This paper presents the definitions of Leibniz color algebras and Leibniz Poisson color algebras, and a method to construct Leibniz color algebras and Leibniz Poisson color algebras by means of a newly defined product.

  16. Some Simple Computational Formulas for Multiple Regression

    Science.gov (United States)

    Aiken, Lewis R., Jr.

    1974-01-01

    Short-cut formulas are presented for direct computation of the beta weights, the standard errors of the beta weights, and the multiple correlation coefficient for multiple regression problems involving three independent variables and one dependent variable. (Author)

  17. Simulation on Poisson and negative binomial models of count road accident modeling

    Science.gov (United States)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

    Accident count data have often been shown to be overdispersed. In addition, the data may contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, with the dependent variable of the generated data following a given distribution, namely the Poisson or the negative binomial distribution, for sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. The model validation results show that, for each sample size, not every model fits the data well even when the data are generated from its own distribution, especially when the sample size is large. Furthermore, larger sample sizes tend to produce more zero accident counts in the dataset.
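
    A small sketch in the same spirit (not the authors' simulation code): generate counts from a chosen distribution and fit both Poisson and negative binomial GLMs with statsmodels, comparing AIC. The hurdle model is omitted here, and all parameter values are arbitrary.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 300
    x = rng.normal(size=n)                             # a single covariate for the scenario
    mu = np.exp(0.2 + 0.6 * x)

    y_pois = rng.poisson(mu)                           # Poisson-generated counts
    y_nb = rng.negative_binomial(n=2, p=2 / (2 + mu))  # overdispersed counts with the same mean

    X = sm.add_constant(x)
    for name, y in [("Poisson data", y_pois), ("NB data", y_nb)]:
        fit_p = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        fit_nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
        print(name, "AIC Poisson:", round(fit_p.aic, 1), "AIC NB:", round(fit_nb.aic, 1))
    ```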

  18. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes

    Science.gov (United States)

    Orsingher, Enzo; Polito, Federico

    2012-08-01

    In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_{\alpha}(t)$, $N_{\beta}(t)$, $t>0$, we have that $N_{\alpha}(N_{\beta}(t)) \stackrel{d}{=} \sum_{j=1}^{N_{\beta}(t)} X_j$, where the $X_j$'s are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_{\alpha}(\tau_k^{\nu})$, $\nu\in(0,1]$, where $\tau_k^{\nu}$ is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form $\Theta(N(t))$, $t>0$, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
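
    The first identity can be checked numerically: conditioning on $N_{\beta}(t)=m$, the outer Poisson process evaluated at $m$ is Poisson with mean $\alpha m$, i.e. a sum of $m$ i.i.d. Poisson($\alpha$) variables. The sketch below compares the two constructions by simulation; the rates are chosen arbitrarily.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    alpha, beta, t, reps = 1.5, 2.0, 3.0, 50000

    m = rng.poisson(beta * t, size=reps)          # inner process N_beta(t)
    lhs = rng.poisson(alpha * m)                  # N_alpha evaluated at the random time m
    rhs = np.array([rng.poisson(alpha, size=k).sum() for k in m])  # sum of m Poisson(alpha) terms

    # The two constructions agree in distribution; compare a few moments.
    print(lhs.mean(), rhs.mean())                 # both close to alpha * beta * t = 9
    print(lhs.var(), rhs.var())
    ```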

  19. A Nonlocal Poisson-Fermi Model for Ionic Solvent

    CERN Document Server

    Xie, Dexuan; Eisenberg, Bob; Scott, L Ridgway

    2016-01-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-type kernel function. Moreover, the Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Finally, numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

  20. Nonlocal Poisson-Fermi model for ionic solvent.

    Science.gov (United States)

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

  1. Numerical assessment for Poisson image blending problem using MSOR iteration via five-point Laplacian operator

    Science.gov (United States)

    Jeng Hong, Eng; Saudi, Azali; Sulaiman, Jumat

    2017-09-01

    The demand for image editing in the field of image processing has increased throughout the world. One of the best-known equations for solving image editing problems is the Poisson equation. Building on the advantages of the Successive Over-Relaxation (SOR) iterative method with one weighting parameter, this paper examines the efficiency of the Modified Successive Over-Relaxation (MSOR) iterative method for solving the Poisson image blending problem. The MSOR method requires two weighting parameters under the Red-Black ordering strategy, so a comparison of the Jacobi, Gauss-Seidel and MSOR iterative methods for solving the Poisson image blending problem is carried out in this study. The performance of these iterative methods is assessed through the number of iterations and the computational time taken. Based on the numerical assessment over several experiments, the findings show that the MSOR iterative method solves the Poisson image blending problem more effectively than the other two methods, requiring fewer iterations and less computational time.
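
    A minimal red-black SOR sketch with two relaxation parameters (the MSOR idea) for the five-point Laplacian: it solves the Poisson equation with Dirichlet boundary values, which is the core linear system in Poisson image blending, where the source term is typically the Laplacian of the source patch and the boundary comes from the target image. The relaxation parameters and test problem below are arbitrary choices, not the paper's tuned values.

    ```python
    import numpy as np

    def msor_poisson(f, g, w_red=1.85, w_black=1.85, h=1.0, tol=1e-8, max_iter=50000):
        """Solve lap(u) = f on a grid with Dirichlet boundary values taken from g,
        using red-black SOR with separate relaxation parameters for the two colours."""
        u = g.astype(float).copy()                     # boundary entries of g stay fixed
        ny, nx = f.shape
        ii, jj = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
        interior = np.zeros_like(u, dtype=bool)
        interior[1:-1, 1:-1] = True
        masks = (interior & ((ii + jj) % 2 == 0), interior & ((ii + jj) % 2 == 1))
        for it in range(max_iter):
            u_old = u.copy()
            for mask, w in zip(masks, (w_red, w_black)):
                nb = np.zeros_like(u)
                nb[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
                gs = 0.25 * (nb - h * h * f)           # Gauss-Seidel value from current neighbours
                u[mask] = (1.0 - w) * u[mask] + w * gs[mask]
            if np.max(np.abs(u - u_old)) < tol:
                break
        return u, it

    # Tiny test: zero source, boundary equal to 1 on one edge; the solution is harmonic inside.
    g = np.zeros((32, 32)); g[0, :] = 1.0
    u, iters = msor_poisson(np.zeros((32, 32)), g)
    print(iters, u[16, 16])
    ```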

  2. Introduction to the use of regression models in epidemiology.

    Science.gov (United States)

    Bender, Ralf

    2009-01-01

    Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.
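
    For the Poisson-regression case mentioned above, here is a minimal sketch of modelling rates with an exposure offset using a statsmodels GLM; the data are synthetic and the variable names are illustrative only.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    exposure = rng.uniform(1.0, 10.0, size=n)         # person-years at risk
    smoker = rng.integers(0, 2, size=n)               # a binary risk factor
    age = rng.normal(50, 10, size=n)

    rate = np.exp(-6.0 + 0.7 * smoker + 0.03 * age)   # true event rate per person-year
    cases = rng.poisson(rate * exposure)

    X = sm.add_constant(np.column_stack([smoker, age]))
    model = sm.GLM(cases, X, family=sm.families.Poisson(), offset=np.log(exposure)).fit()
    print(np.exp(model.params))                       # incidence-rate ratios
    ```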

  3. On the fractal characterization of Paretian Poisson processes

    Science.gov (United States)

    Eliazar, Iddo I.; Sokolov, Igor M.

    2012-06-01

    Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics, and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that: amongst the realm of Poisson processes which are defined on the positive half-line, and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes-with respect to physical randomness-based measures of statistical heterogeneity-is characterized by exponential Poissonian intensities.

  4. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes

    CERN Document Server

    Orsingher, Enzo

    2011-01-01

    In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_\\alpha(t)$, $N_\\beta(t)$, $t>0$, we show that $N_\\alpha(N_\\beta(t)) \\overset{\\text{d}}{=} \\sum_{j=1}^{N_\\beta(t)} X_j$, where the $X_j$s are Poisson random variables. We present a series of similar cases, the most general of which is the one in which the outer process is Poisson and the inner one is a nonlinear fractional birth process. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_\\alpha(\\tau_k^\

  5. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    Science.gov (United States)

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
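
    The gamma-mixture mechanism is easy to see in a simulation: mixing the Poisson mean over a gamma distribution produces negative-binomial counts whose variance exceeds the mean. The parameter values below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100000

    lam_fixed = 4.0
    y_pois = rng.poisson(lam_fixed, size=n)                 # plain Poisson: variance ~ mean

    lam_mixed = rng.gamma(shape=2.0, scale=2.0, size=n)     # random mean, E[lam] = 4
    y_nb = rng.poisson(lam_mixed)                           # gamma-mixed Poisson = negative binomial

    print("Poisson    mean/var:", y_pois.mean(), y_pois.var())
    print("NB mixture mean/var:", y_nb.mean(), y_nb.var())  # variance ~ mean + mean^2/shape = 12
    ```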

  6. Elastic wave velocities and Poisson's ratio in reservoir rocks; Choryugan no danseiha sokudo to Poisson hi

    Energy Technology Data Exchange (ETDEWEB)

    Fujii, Y. [Japan National Oil Corp., Tokyo (Japan)

    1998-04-01

    This paper discusses the relationship between elastic wave velocities and physical properties of reservoir rocks. For sandstones, the elastic wave velocity decreases with increasing porosity and clay mineral content. For rocks containing heavy oil, the P-wave velocity decreases with increasing temperature. The P-wave velocity under dry conditions is much lower than that under water-saturated conditions. When the pores contain a few percent of gas relative to the water-saturated state, the P-wave velocity decreases rapidly; it is almost constant at lower water saturation. The S-wave velocity is almost constant, independent of the water saturation factor. Accordingly, the water saturation factor cannot be estimated from the elastic wave velocity at water saturations between 0% and 96%. The Poisson's ratio also decreases greatly at water saturations between 96% and 100%, but it is almost constant at lower saturation. The elastic wave velocity increases with increasing pressure or increasing depth. Since closure of cracks by pressure is inhibited by high pore pressure, the degree of increase in the elastic wave velocity is reduced. 14 refs., 6 figs.

  7. A new bivariate negative binomial regression model

    Science.gov (United States)

    Faroughi, Pouya; Ismail, Noriszura

    2014-12-01

    This paper introduces a new form of bivariate negative binomial (BNB-1) regression which can be fitted to bivariate and correlated count data with covariates. The BNB regression discussed in this study can be fitted to bivariate and overdispersed count data with positive, zero or negative correlations. The joint p.m.f. of the BNB-1 distribution is derived from the product of two negative binomial marginals with a multiplicative factor parameter. Several testing methods were used to check overdispersion and goodness-of-fit of the model. Application of BNB-1 regression is illustrated on a Malaysian motor insurance dataset. The results indicated that BNB-1 regression has a better fit than the bivariate Poisson and BNB-2 models with regards to the Akaike information criterion.

  8. Analogues of Euler and Poisson Summation Formulae

    Indian Academy of Sciences (India)

    Vivek V Rane

    2003-08-01

    Euler–Maclaurin and Poisson analogues of the summations $\sum_{a < n \le b} \chi(n) f(n)$, $\sum_{a < n \le b} d(n) f(n)$, $\sum_{a < n \le b} d(n) \chi(n) f(n)$ have been obtained in a unified manner, where $(\chi(n))$ is a periodic complex sequence, $d(n)$ is the divisor function and $f(x)$ is a sufficiently smooth function on $[a, b]$. We also state a generalised Abel's summation formula, a generalised Euler's summation formula and Euler's summation formula in several variables.

  9. The Poisson ratio of crystalline surfaces

    OpenAIRE

    Falcioni, Marco; Bowick, Mark; Guitter, Emmanuel; Thorleifsson, Gudmar

    1996-01-01

    A remarkable theoretical prediction for a crystalline (polymerized) surface is that its Poisson ratio $\sigma$ is negative. Using a large scale Monte Carlo simulation of a simple model of such surfaces we show that this is indeed true. The precise numerical value we find is $\sigma \simeq -0.32$ on a $128^2$ lattice at bending rigidity $\kappa = 1.1$. This is in excellent agreement with the prediction $\sigma = -1/3$ following from the self-consistent screening approximation of Le Doussal and ...

  10. Poisson sigma models and deformation quantization

    CERN Document Server

    Cattaneo, A S; Cattaneo, Alberto S.; Felder, Giovanni

    2001-01-01

    This is a review aimed at a physics audience on the relation between Poisson sigma models on surfaces with boundary and deformation quantization. These models are topological open string theories. In the classical Hamiltonian approach, we describe the reduced phase space and its structures (symplectic groupoid), explaining in particular the classical origin of the non-commutativity of the string end-point coordinates. We also review the perturbative Lagrangian approach and its connection with Kontsevich's star product. Finally we comment on the relation between the two approaches.

  11. Deterministic Thinning of Finite Poisson Processes

    CERN Document Server

    Angel, Omer; Soo, Terry

    2009-01-01

    Let Pi and Gamma be homogeneous Poisson point processes on a fixed set of finite volume. We prove a necessary and sufficient condition on the two intensities for the existence of a coupling of Pi and Gamma such that Gamma is a deterministic function of Pi, and all points of Gamma are points of Pi. The condition exhibits a surprising lack of monotonicity. However, in the limit of large intensities, the coupling exists if and only if the expected number of points is at least one greater in Pi than in Gamma.

  12. Around Poisson--Mehler summation formula

    CERN Document Server

    Szabłowski, Paweł J

    2011-01-01

    We study some simple generalizations of the Poisson-Mehler summation formula (PM). In particular we further exploit the recently obtained equality {\\gamma}_{m,n}(x,y|t,q) = {\\gamma}_{0,0}(x,y|t,q)Q_{m,n}(x,y|t,q), where {\\gamma}_{m,n}(x,y|t,q) = \\Sigma_{i\\geq0}((t^{i})/([i]_{q}!))H_{i+n}(x|q)H_{m+i}(y|q), {H_{n}(x|q)}_{n\\geq-1} are the so called q-Hermite polynomials and {Q_{m,n}(x,y|t,q)}_{n,m\\geq0} are certain polynomials in x,y of order m+n that are rational functions in t and q. We study properties of the polynomials Q_{m,n}(x,y|t,q), expressing them with the help of the so called Al-Salam--Chihara (ASC) polynomials and using them in the expansion of the reciprocal of the right hand side of the Poisson-Mehler formula. We also prove some similar equalities, e.g. the following: \\Sigma_{i\\geq0}((t^{i})/([i]_{q}!))H_{n+i}(x|q) = H_{n}(x|t,q)\\Sigma_{i\\geq0}((t^{i})/([i]_{q}!))H_{i}(x|q), where H_{n}(x|t,q) is the so called big q-Hermite polynomial. We prove similar equalities involving big q-Hermite and ASC poly...

  13. Renewal characterization of Markov modulated Poisson processes

    Directory of Open Access Journals (Sweden)

    Marcel F. Neuts

    1989-01-01

    Full Text Available A Markov Modulated Poisson Process (MMPP) M(t) defined on a Markov chain J(t) is a pure jump process where jumps of M(t) occur according to a Poisson process with intensity λi whenever the Markov chain J(t) is in state i. M(t) is called strongly renewal (SR) if M(t) is a renewal process for an arbitrary initial probability vector of J(t) with full support on P={i: λi>0}. M(t) is called weakly renewal (WR) if there exists an initial probability vector of J(t) such that the resulting MMPP is a renewal process. The purpose of this paper is to develop general characterization theorems for the class SR and some sufficiency theorems for the class WR in terms of the first passage times of the bivariate Markov chain [J(t), M(t)]. Relevance to the lumpability of J(t) is also studied.

  14. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by...

  15. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  16. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  17. Parameter Estimation In Astronomy With Poisson-Distributed Data. I. The $\chi^{2}_{\gamma}$ Statistic

    CERN Document Server

    Mighell, K J

    1999-01-01

    Applying the standard weighted mean formula, $[\sum_i n_i \sigma_i^{-2}]/[\sum_i \sigma_i^{-2}]$, to determine the weighted mean of data, $n_i$, drawn from a Poisson distribution, will, on average, underestimate the true mean by ~1 for all true mean values larger than ~3 when the common assumption is made that the error of the $i$th observation is $\sigma_i = \max(\sqrt{n_i},1)$. This small, but statistically significant offset, explains the long-known observation that chi-square minimization techniques which use the modified Neyman's chi-square statistic, $\chi^2_{\rm N} \equiv \sum_i (n_i-y_i)^2/\max(n_i,1)$, to compare Poisson-distributed data with model values, $y_i$, will typically predict a total number of counts that underestimates the true total by about 1 count per bin. Based on my finding that the weighted mean of data drawn from a Poisson distribution can be determined using the formula $[\sum_i [n_i + \min(n_i,1)](n_i+1)^{-1}]/[\sum_i (n_i+1)^{-1}]$, I propose that a new chi-square statistic, $\chi^2_{\gamma} \equiv \sum_i [...$
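
    The bias and the proposed correction can be reproduced directly from the two formulas quoted above; a quick Monte Carlo check (sample size arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_mean = 10.0
    n = rng.poisson(true_mean, size=200000)

    # Standard weighted mean with sigma_i = max(sqrt(n_i), 1), i.e. sigma_i^2 = max(n_i, 1).
    var = np.maximum(n, 1)
    standard = np.sum(n / var) / np.sum(1.0 / var)

    # Proposed estimator: sum_i [n_i + min(n_i,1)]/(n_i+1) divided by sum_i 1/(n_i+1).
    proposed = np.sum((n + np.minimum(n, 1)) / (n + 1.0)) / np.sum(1.0 / (n + 1.0))

    print(standard)   # underestimates the true mean by roughly 1
    print(proposed)   # close to the true mean of 10
    ```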

  18. A Poisson log-bilinear regression approach to the construction of projected lifetables

    NARCIS (Netherlands)

    Brouhns, N.; Denuit, M.; Vermunt, J.K.

    2002-01-01

    This paper implements Wilmoth's [Computational methods for fitting and extrapolating the Lee¿Carter model of mortality change, Technical report, Department of Demography, University of California, Berkeley] and Alho's [North American Actuarial Journal 4 (2000) 91] recommendation for improving the Le

  19. Choice of second-order response surface designs for logistic and Poisson regression models

    OpenAIRE

    Johnson, Rachel T.; Montgomery, Douglas C.

    2009-01-01

    Response surface methodology is widely used for process development and optimisation, product design, and as part of the modern framework for robust parameter design. For normally distributed responses, the standard second-order designs such as the central composite design and the Box-Behnken design have relatively high D and G efficiencies. In situations where these designs are inappropriate, standard computer software can be used to construct D-optimal and I-optimal designs for fitting s...

  20. Non-Poisson Processes: Regression to Equilibrium Versus Equilibrium Correlation Functions

    Science.gov (United States)

    2007-11-02

    …prescriptions from equilibrium statistical physics is still applicable to complex dynamical phenomena and which are not. Herein we address the breakdown of

  1. Regression Analysis of Whole Length and Body Weight of Giant Salamanders (Andrias davidianus) under Artificial Breeding Conditions

    Institute of Scientific and Technical Information of China (English)

    王启军; 赵虎; 张红星; 吉红

    2012-01-01

    Giant salamanders (Andrias davidianus), belonging to the class Amphibia, order Caudata, family Cryptobranchidae and genus Andrias, are rare and endangered amphibians endemic to China. In recent years a wave of giant salamander farming has arisen in some of the species' main historical distribution areas in China, but mature methods for evaluating the outcome of such breeding are still lacking. In this study, the whole length and body weight of 1,530 giant salamanders from two farms in Hanzhong and Ankang, Shaanxi Province, were measured and analyzed by regression analysis in the SPSS software. The results indicated a highly significant correlation between body weight and whole length, best described by the power function Y = 0.010X^2.867. This work provides a scientific method for evaluating the effect of artificial breeding of giant salamanders.
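
    The reported power-law relation can be reproduced from length-weight pairs by ordinary least squares on the log-log scale; the sketch below uses made-up measurements, not the study's data.

    ```python
    import numpy as np

    # Hypothetical (whole length in cm, body weight in g) measurements.
    length = np.array([25.0, 40.0, 55.0, 70.0, 90.0, 110.0])
    weight = np.array([95.0, 380.0, 980.0, 2050.0, 4200.0, 7600.0])

    # Fit W = a * L^b  <=>  ln W = ln a + b ln L.
    b, ln_a = np.polyfit(np.log(length), np.log(weight), 1)
    a = np.exp(ln_a)
    print(f"W = {a:.3f} * L^{b:.3f}")      # compare with the study's W = 0.010 * L^2.867
    ```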

  2. Poisson brackets of normal-ordered Wilson loops

    Science.gov (United States)

    Lee, C.-W. H.; Rajeev, S. G.

    1999-04-01

    We formulate Yang-Mills theory in terms of the large-N limit, viewed as a classical limit, of gauge-invariant dynamical variables, which are closely related to Wilson loops, via deformation quantization. We obtain a Poisson algebra of these dynamical variables corresponding to normal-ordered quantum (at a finite value of ℏ) operators. Comparing with a Poisson algebra one of us introduced in the past for Weyl-ordered quantum operators, we find, using ideas closely related to topological graph theory, that these two Poisson algebras are, roughly speaking, the same. More precisely speaking, there exists an invertible Poisson morphism between them.

  3. Poisson process Fock space representation, chaos expansion and covariance inequalities

    CERN Document Server

    Last, Guenter

    2009-01-01

    We consider a Poisson process $\\eta$ on an arbitrary measurable space with an arbitrary sigma-finite intensity measure. We establish an explicit Fock space representation of square integrable functions of $\\eta$. As a consequence we identify explicitly, in terms of iterated difference operators, the integrands in the Wiener-Ito chaos expansion. We apply these results to extend well-known variance inequalities for homogeneous Poisson processes on the line to the general Poisson case. The Poincare inequality is a special case. Further applications are covariance identities for Poisson processes on (strictly) ordered spaces and Harris-FKG-inequalities for monotone functions of $\\eta$.

  4. The fractional Poisson process and the inverse stable subordinator

    CERN Document Server

    Meerschaert, Mark M; Vellaisamy, P

    2010-01-01

    The fractional Poisson process is a renewal process with Mittag-Leffler waiting times. Its distributions solve a time-fractional analogue of the Kolmogorov forward equation for a Poisson process. This paper shows that a traditional Poisson process, with the time variable replaced by an independent inverse stable subordinator, is also a fractional Poisson process. This result unifies the two main approaches in the stochastic theory of time-fractional diffusion equations. The equivalence extends to a broad class of renewal processes that include models for tempered fractional diffusion, and distributed-order (e.g., ultraslow) fractional diffusion.

  5. Surface reconstruction through poisson disk sampling.

    Directory of Open Access Journals (Sweden)

    Wenguang Hou

    Full Text Available This paper aims to generate an approximate Voronoi diagram in the geodesic metric for a set of unbiased samples selected from the original points. A mesh model of the seeds is then constructed on the basis of the Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy works around the difficulty that geodesic distances among neighboring points are sensitive to the definition of the nearest neighbor. The reconstructed model is a level-of-detail representation of the original points, so the main motivation is to deal with redundant scattered points. In the implementation, Poisson disk sampling is used to select the seeds and helps to produce the Voronoi diagram. Adaptive reconstructions can be achieved by slightly changing the uniform strategy in selecting seeds. The behavior of this method is investigated and accuracy evaluations are performed. Experimental results show the proposed method is reliable and effective.
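
    The simplest way to obtain a Poisson-disk (blue-noise) point set is dart throwing: accept a candidate only if it keeps a minimum distance to every accepted sample. The sketch below is this naive 2-D version, not the paper's mesh-based sampler; faster grid-accelerated variants exist, and the domain and radius here are arbitrary.

    ```python
    import numpy as np

    def poisson_disk_dart(width, height, r, attempts=10000, seed=None):
        """Naive dart-throwing Poisson disk sampling: accept a random point only if it is
        at least r away from all previously accepted points."""
        rng = np.random.default_rng(seed)
        pts = []
        for _ in range(attempts):
            p = rng.uniform((0.0, 0.0), (width, height))
            if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r for q in pts):
                pts.append(p)
        return np.array(pts)

    samples = poisson_disk_dart(1.0, 1.0, r=0.05)
    print(len(samples), "blue-noise samples")
    ```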

  6. Accuracy analysis of a spectral Poisson solver

    Energy Technology Data Exchange (ETDEWEB)

    Rambaldi, S. [Dipartimento di Fisica Universita di Bologna and INFN, Bologna, Via Irnerio 46, 40126 (Italy)]. E-mail: rambaldi@bo.infn.it; Turchetti, G. [Dipartimento di Fisica Universita di Bologna and INFN, Bologna, Via Irnerio 46, 40126 (Italy); Benedetti, C. [Dipartimento di Fisica Universita di Bologna and INFN, Bologna, Via Irnerio 46, 40126 (Italy); Mattioli, F. [Dipartimento di Fisica Universita di Bologna, Bologna, Via Irnerio 46, 40126 (Italy); Franchi, A. [GSI, Darmstadt, Planckstr. 1, 64291 (Germany)

    2006-06-01

    We solve Poisson's equation in d=2,3 space dimensions by using a spectral method based on Fourier decomposition. The choice of the basis implies that Dirichlet boundary conditions on a box are satisfied. A Green's function-based procedure allows us to impose Dirichlet conditions on any smooth closed boundary, by doubling the computational complexity. The error introduced by the spectral truncation and the discretization of the charge distribution is evaluated by comparison with the exact solution, known in the case of elliptical symmetry. To this end boundary conditions on an equipotential ellipse (ellipsoid) are imposed on the numerical solution. Scaling laws for the error dependence on the number K of Fourier components for each space dimension and the number N of point charges used to simulate the charge distribution are presented and tested. A procedure to increase the accuracy of the method in the beam core region is briefly outlined.
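
    A compact stand-in for a Fourier (sine-basis) Poisson solver with Dirichlet conditions on a box: diagonalize the 5-point Laplacian in the discrete sine basis, divide mode-by-mode, and transform back. Dense matrices replace FFTs for clarity, and the Green's-function correction for general boundaries described in the paper is not included.

    ```python
    import numpy as np

    def poisson_sine_solve(f, L=1.0):
        """Solve -lap(u) = f on a square with homogeneous Dirichlet BCs using the
        eigen-decomposition of the 5-point Laplacian in a discrete sine (DST-I) basis."""
        n = f.shape[0]                                  # interior points per dimension
        h = L / (n + 1)
        k = np.arange(1, n + 1)
        S = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(k, k) / (n + 1))  # orthogonal DST-I
        d = (2.0 - 2.0 * np.cos(np.pi * k / (n + 1))) / h**2                   # 1-D eigenvalues
        denom = d[:, None] + d[None, :]                 # eigenvalues of the 2-D operator
        fhat = S @ f @ S                                # transform the source term
        return S @ (fhat / denom) @ S                   # divide mode-by-mode, transform back

    # Manufactured-solution check: u = sin(pi x) sin(pi y) on the unit square.
    n = 63; h = 1.0 / (n + 1)
    x = np.arange(1, n + 1) * h
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
    u = poisson_sine_solve(f)
    print(np.max(np.abs(u - np.sin(np.pi * X) * np.sin(np.pi * Y))))  # small discretisation error
    ```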

  7. Classical covariant Poisson structures and Deformation Quantization

    CERN Document Server

    Berra-Montiel, Jasel; Palacios-García, César D

    2014-01-01

    Starting with the well-defined product of quantum fields at two spacetime points, we explore an associated Poisson structure for classical field theories within the deformation quantization formalism. We realize that the induced star-product is naturally related to the standard Moyal product through the causal Green functions connecting points in the space of classical solutions to the equations of motion. Our results resemble the Peierls-DeWitt bracket analyzed in the multisymplectic context. Once our star-product is defined we are able to apply the Wigner-Weyl map in order to introduce a generalized version of Wick's theorem. Finally, we include a couple of examples to explicitly test our method: the real scalar field and the bosonic string. For both models we have encountered generalizations of the creation/annihilation relations, and also a generalization of the Virasoro algebra in the bosonic string case.

  8. High order Poisson Solver for unbounded flows

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe

    2015-01-01

    This paper presents a high order method for solving the unbounded Poisson equation on a regular mesh using a Green's function solution. The high order convergence was achieved by formulating mollified integration kernels, that were derived from a filter regularisation of the solution field... or by performing the differentiation as a multiplication of the Fourier coefficients. In this way, differential operators such as the divergence or curl of the solution field could be solved to the same high order convergence without additional computational effort. The method was applied and validated using... As regularisation we document an increased convergence rate up to tenth order. The method, however, can easily be extended well beyond the tenth order. To show the full extent of the method we present the special case of a spectrally ideal regularisation of the velocity formulated integration kernel, which achieves...

  9. Estimation of covariance components and genetic parameters for weights of red-winged tinamou using random regression models Estimação de componentes de covariância e de parâmetros genéticos de pesos de perdizes utilizando-se modelos de regressão aleatória

    Directory of Open Access Journals (Sweden)

    P. Tholon

    2011-04-01

    Full Text Available The objective of this work was to determine genetic parameters for body weight of tinamou in captivity. Random regression models were used in the analyses of the data, considering the direct additive genetic (DA) and animal permanent environmental (PE) effects as random effects. Residual variances were modeled by using a fifth-order variance function. The mean population growth curve was fitted by sixth-order Legendre orthogonal polynomials. Direct additive genetic effects and the animal permanent environmental effect were modeled by using Legendre polynomials of order two to nine. The best results were obtained by models with orders of fit of 6 for the direct additive genetic effect and of order 3 for the permanent effect by the Akaike information criterion, and of order 3 for both the additive genetic effect and the permanent effect by the Schwarz Bayesian information criterion and likelihood ratio test. Heritability estimates ranged from 0.02 to 0.57. The first eigenvalue explained 94% and 90% of the variation from the additive direct and permanent environmental effects, respectively. Selection of tinamou for body weight is more effective after 112 days of age.

  10. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    Science.gov (United States)

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
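
    To make the stated equivalence concrete, here is a toy sketch, assuming gridded presence counts and a single invented covariate, of fitting a loglinear intensity by Poisson regression; the quadrature weights of a full point process fit are deliberately omitted.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Toy loglinear intensity fitted by Poisson regression on gridded counts.
    # "elevation" and the coefficients are invented for illustration.
    rng = np.random.default_rng(0)
    n_cells = 2000
    elevation = rng.normal(size=n_cells)
    intensity = np.exp(-3.0 + 0.8 * elevation)     # expected presences per cell
    counts = rng.poisson(intensity)

    ppm = sm.GLM(counts, sm.add_constant(elevation),
                 family=sm.families.Poisson()).fit()
    print(ppm.params)   # roughly recovers (-3.0, 0.8); the intercept is the
                        # scale-dependent term mentioned above
    ```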

  11. Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.

    Science.gov (United States)

    Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew

    2014-12-26

    Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.

  12. Study of non-Hodgkin's lymphoma mortality associated with industrial pollution in Spain, using Poisson models

    Directory of Open Access Journals (Sweden)

    Lope Virginia

    2009-01-01

    Full Text Available Abstract Background Non-Hodgkin's lymphomas (NHLs) have been linked to proximity to industrial areas, but evidence regarding the health risk posed by residence near pollutant industries is very limited. The European Pollutant Emission Register (EPER) is a public register that furnishes valuable information on industries that release pollutants to air and water, along with their geographical location. This study sought to explore the relationship between NHL mortality in small areas in Spain and environmental exposure to pollutant emissions from EPER-registered industries, using three Poisson-regression-based mathematical models. Methods Observed cases were drawn from mortality registries in Spain for the period 1994–2003. Industries were grouped into the following sectors: energy; metal; mineral; organic chemicals; waste; paper; food; and use of solvents. Populations having an industry within a radius of 1, 1.5, or 2 kilometres from the municipal centroid were deemed to be exposed. Municipalities outside those radii were considered as reference populations. The relative risks (RRs) associated with proximity to pollutant industries were estimated using the following methods: Poisson Regression; mixed Poisson model with random provincial effect; and spatial autoregressive modelling (BYM model). Results Only proximity of paper industries to population centres (>2 km) could be associated with a greater risk of NHL mortality (mixed model: RR:1.24, 95% CI:1.09–1.42; BYM model: RR:1.21, 95% CI:1.01–1.45; Poisson model: RR:1.16, 95% CI:1.06–1.27). Spatial models yielded higher estimates. Conclusion The reported association between exposure to air pollution from the paper, pulp and board industry and NHL mortality is independent of the model used. Inclusion of spatial random effects terms in the risk estimate improves the study of associations between environmental exposures and mortality. The EPER could be of great utility when studying the effects of
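
    For readers unfamiliar with the first of the three approaches, the sketch below shows a plain Poisson regression of observed counts on an exposure indicator with an offset for expected counts; the data and variable names are simulated stand-ins, not the study's registry data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulated stand-in for an exposure analysis: observed deaths vs expected
    # deaths with a binary proximity indicator.  True relative risk = 1.2.
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "expected": rng.uniform(0.5, 20.0, n),       # age-adjusted expected deaths
        "near_industry": rng.integers(0, 2, n),      # 1 if an industry within the radius
    })
    df["observed"] = rng.poisson(df["expected"] * 1.2 ** df["near_industry"])

    fit = sm.GLM(df["observed"],
                 sm.add_constant(df[["near_industry"]]),
                 family=sm.families.Poisson(),
                 offset=np.log(df["expected"])).fit()
    print("estimated RR:", np.exp(fit.params["near_industry"]))
    ```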

  13. Derivation of relativistic wave equation from the Poisson process

    Indian Academy of Sciences (India)

    Tomoshige Kudo; Ichiro Ohba

    2002-08-01

    A Poisson process is one of the fundamental descriptions for relativistic particles: both fermions and bosons. A generalized linear photon wave equation in a dispersive and homogeneous medium with dissipation is derived using the formulation of the Poisson process. This formulation provides a possible interpretation of the passage time of a photon moving in the medium, which never exceeds the speed of light in vacuum.

  14. Canonical derivation of the Vlasov-Coulomb noncanonical Poisson structure

    Energy Technology Data Exchange (ETDEWEB)

    Kaufman, A.N.; Dewar, R.L.

    1983-09-01

    Starting from a Lagrangian formulation of the Vlasov-Coulomb system, canonical methods are used to define a Poisson structure for this system. Successive changes of representation then lead systematically to the noncanonical Lie-Poisson structure for functionals of the Vlasov distribution.

  15. Deformations of log-Lagrangian submanifolds of Poisson manifolds

    OpenAIRE

    2013-01-01

    We consider Lagrangian-like submanifolds in certain even-dimensional 'symplectic-like' Poisson manifolds. We show, under suitable transversality hypotheses, that the pair consisting of the ambient Poisson manifold and the submanifold has unobstructed deformations and that the deformations automatically preserve the Lagrangian-like property.

  16. REGULARITY OF POISSON EQUATION IN SOME LOGARITHMIC SPACE

    Institute of Scientific and Technical Information of China (English)

    Jia Huilian; Li Dongsheng; Wang Lihe

    2007-01-01

    In this note, the regularity of the Poisson equation -Δu = f with f lying in the logarithmic function space L^p(Log L)^a(Ω) (1 < p < ∞, a ∈ R) is studied. The result of the note generalizes the W^{2,p} estimate of the Poisson equation in L^p(Ω).

  17. Topology Optimized Architectures with Programmable Poisson's Ratio over Large Deformations

    DEFF Research Database (Denmark)

    Clausen, Anders; Wang, Fengwen; Jensen, Jakob Søndergaard

    2015-01-01

    Topology optimized architectures are designed and printed with programmable Poisson's ratios ranging from -0.8 to 0.8 over large deformations of 20% or more.

  18. Rate-optimal Bayesian intensity smoothing for inhomogeneous Poisson processes

    NARCIS (Netherlands)

    E. Belitser; P. Serra; H. van Zanten

    2015-01-01

    We apply nonparametric Bayesian methods to study the problem of estimating the intensity function of an inhomogeneous Poisson process. To motivate our results we start by analyzing count data coming from a call center which we model as a Poisson process. This analysis is carried out using a certain

  19. The Survival Probability in Generalized Poisson Risk Model

    Institute of Scientific and Technical Information of China (English)

    GONGRi-zhao

    2003-01-01

    In this paper we generalize the aggregated premium income process from a constant-rate process to a Poisson process for the classical compound Poisson risk model; then, for the generalized model and the classical compound Poisson risk model, we respectively obtain the survival probability over a finite time period in the case of exponential claim amounts.

  20. Poisson-Lie T-Duality and Bianchi Type Algebras

    CERN Document Server

    Jafarizadeh, M A

    1999-01-01

    All Bianchi bialgebras have been obtained. By introducing a non-degenerate adjoint invariant inner product over these bialgebras the associated Drinfeld doubles have been constructed, then by calculating the coupling matrices for these bialgebras several σ-models with Poisson-Lie symmetry have been obtained. Two simple examples as prototypes of Poisson-Lie dual models have been given.

  1. Regression analysis by example

    National Research Council Canada - National Science Library

    Chatterjee, Samprit; Hadi, Ali S

    2012-01-01

    .... The emphasis continues to be on exploratory data analysis rather than statistical theory. The coverage offers in-depth treatment of regression diagnostics, transformation, multicollinearity, logistic regression, and robust regression...

  2. Health literacy and parent attitudes about weight control for children.

    Science.gov (United States)

    Liechty, Janet M; Saltzman, Jaclyn A; Musaad, Salma M

    2015-08-01

    The purpose of this study was to examine associations between parental health literacy and parent attitudes about weight control strategies for young children. Parental low health literacy has been associated with poor child health outcomes, yet little is known about its relationship to child weight control and weight-related health information-seeking preferences. Data were drawn from the STRONG Kids Study, a Midwest panel survey among parents of preschool aged children (n = 497). Parents endorsed an average of 4.3 (SD =2.8) weight loss strategies, 53% endorsed all three recommended weight loss strategies for children, and fewer than 1% of parents endorsed any unsafe strategies. Parents were most likely to seek child weight loss information from healthcare professionals but those with low (vs. adequate) health literacy were significantly less likely to use the Internet or books and more likely to use minister/clergy as sources. Poisson and logistic regressions showed that higher health literacy was associated with endorsement of more strategies overall, more recommended strategies, and greater odds of endorsing each specific recommended strategy for child weight control, after adjusting for parent age, education, race/ethnicity, income, marital status, weight concern, and child BMI percentile. Findings suggest that health literacy impacts parental views about child weight loss strategies and health information-seeking preferences. Pediatric weight loss advice to parents should include assessment of parent attitudes and prior knowledge about child weight control and facilitate parent access to reliable sources of evidence-informed child weight control information.

  3. Deformation mechanisms in negative Poisson's ratio materials - Structural aspects

    Science.gov (United States)

    Lakes, R.

    1991-01-01

    Poisson's ratio in materials is governed by the following aspects of the microstructure: the presence of rotational degrees of freedom, non-affine deformation kinematics, or anisotropic structure. Several structural models are examined. The non-affine kinematics are seen to be essential for the production of negative Poisson's ratios for isotropic materials containing central force linkages of positive stiffness. Non-central forces combined with pre-load can also give rise to a negative Poisson's ratio in isotropic materials. A chiral microstructure with non-central force interaction or non-affine deformation can also exhibit a negative Poisson's ratio. Toughness and damage resistance in these materials may be affected by the Poisson's ratio itself, as well as by generalized continuum aspects associated with the microstructure.

  4. Poisson Hail on a Hot Ground

    CERN Document Server

    Baccelli, Francois

    2011-01-01

    We consider a queue where the server is the Euclidean space, and the customers are random closed sets (RACS) of the Euclidean space. These RACS arrive according to a Poisson rain and each of them has a random service time (in the case of hail falling on the Euclidean plane, this is the height of the hailstone, whereas the RACS is its footprint). The Euclidean space serves customers at speed 1. The service discipline is a hard exclusion rule: no two intersecting RACS can be served simultaneously and service is in the First In First Out order: only the hailstones in contact with the ground melt at speed 1, whereas the other ones are queued; a tagged RACS waits until all RACS arrived before it and intersecting it have fully melted before starting its own melting. We give the evolution equations for this queue. We prove that it is stable for a sufficiently small arrival intensity, provided the typical diameter of the RACS and the typical service time have finite exponential moments. We also discuss the percolatio...

  5. Causal Poisson bracket via deformation quantization

    Science.gov (United States)

    Berra-Montiel, Jasel; Molgado, Alberto; Palacios-García, César D.

    2016-06-01

    Starting with the well-defined product of quantum fields at two spacetime points, we explore an associated Poisson structure for classical field theories within the deformation quantization formalism. We realize that the induced star-product is naturally related to the standard Moyal product through an appropriate causal Green’s functions connecting points in the space of classical solutions to the equations of motion. Our results resemble the Peierls-DeWitt bracket that has been analyzed in the multisymplectic context. Once our star-product is defined, we are able to apply the Wigner-Weyl map in order to introduce a generalized version of Wick’s theorem. Finally, we include some examples to explicitly test our method: the real scalar field, the bosonic string and a physically motivated nonlinear particle model. For the field theoretic models, we have encountered causal generalizations of the creation/annihilation relations, and also a causal generalization of the Virasoro algebra for the bosonic string. For the nonlinear particle case, we use the approximate solution in terms of the Green’s function, in order to construct a well-behaved causal bracket.

  6. Vlasov-Poisson in 1D: waterbags

    CERN Document Server

    Colombi, Stéphane

    2014-01-01

    We revisit in one dimension the waterbag method to solve numerically Vlasov-Poisson equations. In this approach, the phase-space distribution function f(x,v) is initially sampled by an ensemble of patches, the waterbags, where f is assumed to be constant. As a consequence of Liouville theorem it is only needed to follow the evolution of the border of these waterbags, which can be done by employing an orientated, self-adaptive polygon tracing isocontours of f. This method, which is entropy conserving in essence, is very accurate and can trace very well non linear instabilities as illustrated by specific examples. As an application of the method, we generate an ensemble of single waterbag simulations with decreasing thickness, to perform a convergence study to the cold case. Our measurements show that the system relaxes to a steady state where the gravitational potential profile is a power-law of slowly varying index β, with β close to 3/2 as found in the literature. However, detailed analys...

  7. Integer lattice dynamics for Vlasov-Poisson

    Science.gov (United States)

    Mocz, Philip; Succi, Sauro

    2017-03-01

    We revisit the integer lattice (IL) method to numerically solve the Vlasov-Poisson equations, and show that a slight variant of the method is a very easy, viable, and efficient numerical approach to study the dynamics of self-gravitating, collisionless systems. The distribution function lives in a discretized lattice phase-space, and each time-step in the simulation corresponds to a simple permutation of the lattice sites. Hence, the method is Lagrangian, conservative, and fully time-reversible. IL complements other existing methods, such as N-body/particle mesh (computationally efficient, but affected by Monte Carlo sampling noise and two-body relaxation) and finite volume (FV) direct integration schemes (expensive, accurate but diffusive). We also present improvements to the FV scheme, using a moving-mesh approach inspired by IL, to reduce numerical diffusion and the time-step criterion. Being a direct integration scheme like FV, IL is memory limited (memory requirement for a full 3D problem scales as N6, where N is the resolution per linear phase-space dimension). However, we describe a new technique for achieving N4 scaling. The method offers promise for investigating the full 6D phase-space of collisionless systems of stars and dark matter.

  8. Integer Lattice Dynamics for Vlasov-Poisson

    CERN Document Server

    Mocz, Philip

    2016-01-01

    We revisit the integer lattice (IL) method to numerically solve the Vlasov-Poisson equations, and show that a slight variant of the method is a very easy, viable, and efficient numerical approach to study the dynamics of self-gravitating, collisionless systems. The distribution function lives in a discretized lattice phase-space, and each time-step in the simulation corresponds to a simple permutation of the lattice sites. Hence, the method is Lagrangian, conservative, and fully time-reversible. IL complements other existing methods, such as N-body/particle mesh (computationally efficient, but affected by Monte-Carlo sampling noise and two-body relaxation) and finite volume (FV) direct integration schemes (expensive, accurate but diffusive). We also present improvements to the FV scheme, using a moving mesh approach inspired by IL, to reduce numerical diffusion and the time-step criterion. Being a direct integration scheme like FV, IL is memory limited (memory requirement for a full 3D problem scales as N^6, ...

  9. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating e...
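
    A minimal numpy illustration of the rank-reduction idea (my own sketch, not the entry's notation): fit ordinary least squares, then project the coefficient matrix onto the leading right singular vectors of the fitted values.

    ```python
    import numpy as np

    # Reduced rank regression sketch: OLS fit followed by projection of the
    # coefficient matrix onto the top right singular vectors of the fitted values.
    def reduced_rank_regression(X, Y, rank):
        B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
        _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
        P = Vt[:rank].T @ Vt[:rank]          # projector onto the leading directions
        return B_ols @ P

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    B_true = rng.normal(size=(10, 2)) @ rng.normal(size=(2, 6))    # rank 2
    Y = X @ B_true + 0.1 * rng.normal(size=(200, 6))

    B_hat = reduced_rank_regression(X, Y, rank=2)
    print("rank of estimate:", np.linalg.matrix_rank(B_hat))       # 2
    ```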

  10. Improved estimation in a non-Gaussian parametric regression

    CERN Document Server

    Pchelintsev, Evgeny

    2011-01-01

    The paper considers the problem of estimating the parameters in a continuous time regression model with a non-Gaussian noise of pulse type. The noise is specified by the Ornstein-Uhlenbeck process driven by the mixture of a Brownian motion and a compound Poisson process. Improved estimates for the unknown regression parameters, based on a special modification of the James-Stein procedure with smaller quadratic risk than the usual least squares estimates, are proposed. The developed estimation scheme is applied for the improved parameter estimation in the discrete time regression with the autoregressive noise depending on unknown nuisance parameters.
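
    The paper's estimator is a modification of the classical James-Stein procedure; the sketch below shows only that classical (positive-part) baseline for a Gaussian mean problem, not the continuous-time regression setting of the paper.

    ```python
    import numpy as np

    # Classical positive-part James-Stein shrinkage of a p-dimensional Gaussian
    # mean estimate toward zero (requires p >= 3 and a known noise variance).
    def james_stein(y, sigma2=1.0):
        p = y.size
        shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(y @ y))
        return shrink * y

    rng = np.random.default_rng(0)
    theta = np.zeros(10)                      # true mean vector
    y = theta + rng.normal(size=10)           # one noisy observation per coordinate

    print("squared error, raw estimate :", np.sum((y - theta) ** 2))
    print("squared error, James-Stein  :", np.sum((james_stein(y) - theta) ** 2))
    ```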

  11. Poisson structures on affine spaces and flag varieties. I. Matrix affine Poisson space

    OpenAIRE

    K. A. Brown; Goodearl, K. R.; Yakimov, M

    2006-01-01

    The standard Poisson structure on the rectangular matrix variety Mm,n(C) is investigated, via the orbits of symplectic leaves under the action of the maximal torus T ⊂ GLm+n(C). These orbits, finite in number, are shown to be smooth irreducible locally closed subvarieties of Mm,n(C), isomorphic to intersections of dual Schubert cells in the full flag variety of GLm+n(C). Three different presentations of the T-orbits of symplectic leaves in Mm,n(C) are obtained – (a) as pu...

  12. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected... by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...
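
    SPARROW itself is not reproduced here; as a baseline for comparison, the following sketch shows the k-nearest neighbors regression that the method is related to, using scikit-learn on invented data.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    # k-NN regression baseline on a noisy sine curve (invented data).
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(300, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)

    knn = KNeighborsRegressor(n_neighbors=10, weights="distance").fit(X, y)
    print(knn.predict([[0.5], [2.0]]))        # approx. sin(0.5), sin(2.0)
    ```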

  13. How does Poisson kriging compare to the popular BYM model for mapping disease risks?

    Directory of Open Access Journals (Sweden)

    Gebreab Samson

    2008-02-01

    Full Text Available Abstract Background Geostatistical techniques are now available to account for spatially varying population sizes and spatial patterns in the mapping of disease rates. At first glance, Poisson kriging represents an attractive alternative to increasingly popular Bayesian spatial models in that: 1) it is easier to implement and less CPU intensive, and 2) it accounts for the size and shape of geographical units, avoiding the limitations of conditional auto-regressive (CAR) models commonly used in Bayesian algorithms while allowing for the creation of isopleth risk maps. Both approaches, however, have never been compared in simulation studies, and there is a need to better understand their merits in terms of accuracy and precision of disease risk estimates. Results Besag, York and Mollie's (BYM) model and Poisson kriging (point and area-to-area implementations) were applied to age-adjusted lung and cervix cancer mortality rates recorded for white females in two contrasted county geographies: 1) state of Indiana that consists of 92 counties of fairly similar size and shape, and 2) four states in the Western US (Arizona, California, Nevada and Utah) forming a set of 118 counties that are vastly different geographical units. The spatial support (i.e. point versus area) has a much smaller impact on the results than the statistical methodology (i.e. geostatistical versus Bayesian models). Differences between methods are particularly pronounced in the Western US dataset: BYM model yields smoother risk surface and prediction variance that changes mainly as a function of the predicted risk, while the Poisson kriging variance increases in large sparsely populated counties. Simulation studies showed that the geostatistical approach yields smaller prediction errors, more precise and accurate probability intervals, and allows a better discrimination between counties with high and low mortality risks. The benefit of area-to-area Poisson kriging increases as the county

  14. How does Poisson kriging compare to the popular BYM model for mapping disease risks?

    Science.gov (United States)

    Goovaerts, Pierre; Gebreab, Samson

    2008-01-01

    Background Geostatistical techniques are now available to account for spatially varying population sizes and spatial patterns in the mapping of disease rates. At first glance, Poisson kriging represents an attractive alternative to increasingly popular Bayesian spatial models in that: 1) it is easier to implement and less CPU intensive, and 2) it accounts for the size and shape of geographical units, avoiding the limitations of conditional auto-regressive (CAR) models commonly used in Bayesian algorithms while allowing for the creation of isopleth risk maps. Both approaches, however, have never been compared in simulation studies, and there is a need to better understand their merits in terms of accuracy and precision of disease risk estimates. Results Besag, York and Mollie's (BYM) model and Poisson kriging (point and area-to-area implementations) were applied to age-adjusted lung and cervix cancer mortality rates recorded for white females in two contrasted county geographies: 1) state of Indiana that consists of 92 counties of fairly similar size and shape, and 2) four states in the Western US (Arizona, California, Nevada and Utah) forming a set of 118 counties that are vastly different geographical units. The spatial support (i.e. point versus area) has a much smaller impact on the results than the statistical methodology (i.e. geostatistical versus Bayesian models). Differences between methods are particularly pronounced in the Western US dataset: BYM model yields smoother risk surface and prediction variance that changes mainly as a function of the predicted risk, while the Poisson kriging variance increases in large sparsely populated counties. Simulation studies showed that the geostatistical approach yields smaller prediction errors, more precise and accurate probability intervals, and allows a better discrimination between counties with high and low mortality risks. The benefit of area-to-area Poisson kriging increases as the county geography becomes more

  15. On classification of discrete, scalar-valued Poisson Brackets

    CERN Document Server

    Parodi, Emanuele

    2011-01-01

    We address the problem of classifying discrete differential-geometric Poisson brackets (dDGPBs) of any fixed order on target space of dimension 1. It is proved that these Poisson brackets (PBs) are in one-to-one correspondence with the intersection points of certain projective hypersurfaces. In addition, they can be reduced to cubic PB of standard Volterra lattice by discrete Miura-type transformations. Finally, improving a consolidation lattice procedure, we obtain new families of non-degenerate, vector-valued and first order dDGPBs, which can be considered in the framework of admissible Lie-Poisson group theory.

  16. Absolute regularity and ergodicity of Poisson count processes

    CERN Document Server

    Neumann, Michael H

    2012-01-01

    We consider a class of observation-driven Poisson count processes where the current value of the accompanying intensity process depends on previous values of both processes. We show under a contractive condition that the bivariate process has a unique stationary distribution and that a stationary version of the count process is absolutely regular. Moreover, since the intensities can be written as measurable functionals of the count variables, we conclude that the bivariate process is ergodic. As an important application of these results, we show how a test method previously used in the case of independent Poisson data can be used in the case of Poisson count processes.

  17. Algebraic structure and Poisson's theory of mechanico-electrical systems

    Institute of Scientific and Technical Information of China (English)

    Liu Hong-Ji; Tang Yi-Fa; Fu Jing-Li

    2006-01-01

    The algebraic structure and Poisson's integral theory of mechanico-electrical systems are studied. The Hamilton canonical equations and generalized Hamilton canonical equations and their contravariant algebraic forms for mechanico-electrical systems are obtained. The Lie algebraic structure and the Poisson's integral theory of Lagrange mechanico-electrical systems are derived. The Lie algebraic structure admitted and Poisson's integral theory of the Lagrange-Maxwell mechanico-electrical systems are presented. Two examples are presented to illustrate these results.

  18. Intertime jump statistics of state-dependent Poisson processes.

    Science.gov (United States)

    Daly, Edoardo; Porporato, Amilcare

    2007-01-01

    A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated to the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.

  20. Comparison and applicability of landslide susceptibility models based on landslide ratio-based logistic regression, frequency ratio, weight of evidence, and instability index methods in an extreme rainfall event

    Science.gov (United States)

    Wu, Chunhung

    2016-04-01

    Few studies have discussed the applicability of statistical landslide susceptibility (LS) models to extreme rainfall-induced landslide events. This research focuses on the comparison and applicability of LS models based on four methods, including the landslide ratio-based logistic regression (LRBLR), frequency ratio (FR), weight of evidence (WOE), and instability index (II) methods, in an extreme rainfall-induced landslide case. The landslide inventory in the Chishan river watershed, Southwestern Taiwan, after 2009 Typhoon Morakot is the main material in this research. The Chishan river watershed is a tributary watershed of the Kaoping river watershed, which is a landslide- and erosion-prone watershed with an annual average suspended load of 3.6×10⁷ MT/yr (ranking 11th in the world). Typhoon Morakot struck Southern Taiwan from Aug. 6-10 in 2009 and dumped nearly 2,000 mm of rainfall in the Chishan river watershed. The 24-hour, 48-hour, and 72-hour accumulated rainfall in the Chishan river watershed exceeded the 200-year return period accumulated rainfall. 2,389 landslide polygons in the Chishan river watershed were extracted from SPOT 5 images after 2009 Typhoon Morakot. The total landslide area is around 33.5 km², equal to a landslide ratio of 4.1%. The main landslide types based on Varnes' (1978) classification are rotational and translational slides. The two characteristics of this extreme rainfall-induced landslide event are the dense landslide distribution and the large share of downslope landslide areas owing to headward erosion and bank erosion in the flooding processes. The area of downslope landslides in the Chishan river watershed after 2009 Typhoon Morakot is 3.2 times higher than that of upslope landslide areas. The prediction accuracy of the LS models based on the LRBLR, FR, WOE, and II methods has been proven to exceed 70%. The model performance and applicability of the four models in a landslide-prone watershed with dense distribution of rainfall

  1. Numerical methods for realizing nonstationary Poisson processes with piecewise-constant instantaneous-rate functions

    DEFF Research Database (Denmark)

    Harrod, Steven; Kelton, W. David

    2006-01-01

    Nonstationary Poisson processes are appropriate in many applications, including disease studies, transportation, finance, and social policy. The authors review the risks of ignoring nonstationarity in Poisson processes and demonstrate three algorithms for generation of Poisson processes with piecewise-constant instantaneous-rate functions...
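
    The authors' three algorithms are not reproduced here; as one standard alternative for the same task, the sketch below generates a nonstationary Poisson process with a piecewise-constant rate by thinning against the maximum rate (the breakpoints and rates are made up).

    ```python
    import numpy as np

    # Thinning: simulate against the maximum rate, then accept each candidate
    # arrival with probability rate(t) / max rate.
    def piecewise_nhpp(breaks, rates, rng):
        """breaks = [t0, ..., tk]; rates[i] applies on [t_i, t_{i+1})."""
        lam_max = max(rates)
        t, events = breaks[0], []
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t >= breaks[-1]:
                return np.array(events)
            lam_t = rates[np.searchsorted(breaks, t, side="right") - 1]
            if rng.random() < lam_t / lam_max:
                events.append(t)

    rng = np.random.default_rng(1)
    arrivals = piecewise_nhpp([0.0, 2.0, 5.0, 8.0], [1.0, 4.0, 0.5], rng)
    print(len(arrivals), "arrivals")
    ```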

  2. Contravariant Gravity on Poisson Manifolds and Einstein Gravity

    CERN Document Server

    Kaneko, Yukio; Watamura, Satoshi

    2016-01-01

    A relation between a gravity on Poisson manifolds proposed in arXiv:1508.05706 and the Einstein gravity is investigated. The compatibility of the Poisson and Riemann structures defines a unique connection, the contravariant Levi-Civita connection, and leads to the idea of the contravariant gravity. The Einstein-Hilbert-type action includes couplings between the metric and the Poisson tensor. The Weyl transformation is studied to reveal properties of those interactions. It is argued that the theory can have an equivalent description in terms of the Einstein gravity coupled to matter. As an example, it is shown that the contravariant gravity on a two-dimensional Poisson manifold has another description by a real scalar field coupling to the metric in a specific manner.

  3. Doubly stochastic Poisson processes in artificial neural learning.

    Science.gov (United States)

    Card, H C

    1998-01-01

    This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.

  4. Transforming spatial point processes into Poisson processes using random superposition

    DEFF Research Database (Denmark)

    Møller, Jesper; Berthelsen, Kasper Klitgaaard

    with a complementary spatial point process Y to obtain a Poisson process X∪Y with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt,Yt) which converges towards the distribution of (X,Y). We study the joint distribution of X and Y, and their marginal and conditional distributions. In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking.

  5. Modeling laser velocimeter signals as triply stochastic Poisson processes

    Science.gov (United States)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.

  6. Evolution of fermionic systems as an expectation over Poisson processes

    CERN Document Server

    Beccaria, M; De Angelis, G F; Lasinio, G J; Beccaria, Matteo; Presilla, Carlo; Angelis, Gian Fabrizio De; Lasinio, Giovanni Jona

    1999-01-01

    We derive an exact probabilistic representation for the evolution of a Hubbard model with site- and spin-dependent hopping coefficients and site-dependent interactions in terms of an associated stochastic dynamics of a collection of Poisson processes.

  7. Evolution of Fermionic Systems as AN Expectation Over Poisson Processes

    Science.gov (United States)

    Beccaria, M.; Presilla, C.; de Angelis, G. F.; Jona-Lasinio, G.

    We derive an exact probabilistic representation for the evolution of a Hubbard model with site- and spin-dependent hopping coefficients and site-dependent interactions in terms of an associated stochastic dynamics of a collection of Poisson processes.

  8. 2D Poisson sigma models with gauged vectorial supersymmetry

    Science.gov (United States)

    Bonezzi, Roberto; Sundell, Per; Torres-Gomez, Alexander

    2015-08-01

    In this note, we gauge the rigid vectorial supersymmetry of the two-dimensional Poisson sigma model presented in arXiv:1503.05625. We show that the consistency of the construction does not impose any further constraints on the differential Poisson algebra geometry than those required for the ungauged model. We conclude by proposing that the gauged model provides a first-quantized framework for higher spin gravity.

  9. Algebraic structure and Poisson method for a weakly nonholonomic system

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The algebraic structure and the Poisson method for a weakly nonholonomic system are studied. The differential equations of motion of the system can be written in a contravariant algebra form and its algebraic structure is discussed. The Poisson theory for the systems which possess Lie algebra structure is generalized to the weakly nonholonomic system. An example is given to illustrate the application of the result.

  10. 2D Poisson sigma models with gauged vectorial supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Bonezzi, Roberto [Dipartimento di Fisica ed Astronomia, Università di Bologna and INFN, Sezione di Bologna,via Irnerio 46, I-40126 Bologna (Italy); Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Sundell, Per [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Torres-Gomez, Alexander [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Instituto de Ciencias Físicas y Matemáticas, Universidad Austral de Chile-UACh,Valdivia (Chile)

    2015-08-12

    In this note, we gauge the rigid vectorial supersymmetry of the two-dimensional Poisson sigma model presented in arXiv:1503.05625. We show that the consistency of the construction does not impose any further constraints on the differential Poisson algebra geometry than those required for the ungauged model. We conclude by proposing that the gauged model provides a first-quantized framework for higher spin gravity.

  11. Spatial Brownian motion in renormalized Poisson potential: A critical case

    CERN Document Server

    Chen, Xia

    2011-01-01

    Let B_s be a three dimensional Brownian motion and ω(dx) be an independent Poisson field on R^3. It is proved that for any t > 0, conditionally on ω(·), E_0 exp{θ ∫_0^t V̄(B_s) ds} … 1/16, where V̄(x) is the renormalized Poisson potential

  12. 2D Poisson sigma models with gauged vectorial supersymmetry

    OpenAIRE

    2015-01-01

    In this note, we gauge the rigid vectorial supersymmetry of the two-dimensional Poisson sigma model presented in arXiv:1503.05625. We show that the consistency of the construction does not impose any further constraints on the differential Poisson algebra geometry than those required for the ungauged model. We conclude by proposing that the gauged model provides a first-quantized framework for higher spin gravity.

  13. 2D Poisson Sigma Models with Gauged Vectorial Supersymmetry

    CERN Document Server

    Bonezzi, Roberto; Torres-Gomez, Alexander

    2015-01-01

    In this note, we gauge the rigid vectorial supersymmetry of the two-dimensional Poisson sigma model presented in arXiv:1503.05625. We show that the consistency of the construction does not impose any further constraints on the differential Poisson algebra geometry than those required for the ungauged model. We conclude by proposing that the gauged model provides a first-quantized framework for higher spin gravity.

  14. On the One Dimensional Poisson Random Geometric Graph

    Directory of Open Access Journals (Sweden)

    L. Decreusefond

    2011-01-01

    Full Text Available Given a Poisson process on a bounded interval, its random geometric graph is the graph whose vertices are the points of the Poisson process, and edges exist between two points if and only if their distance is less than a fixed given threshold. We compute explicitly the distribution of the number of connected components of this graph. The proof relies on inverting some Laplace transforms.
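
    A quick simulation sketch of the object studied above (with an assumed rate and threshold): in one dimension the number of connected components is simply one plus the number of inter-point gaps exceeding the threshold.

    ```python
    import numpy as np

    # Poisson(rate) points on [0, 1]; components are maximal runs of points
    # whose consecutive gaps do not exceed the threshold.
    def n_components(rate, threshold, rng):
        n = rng.poisson(rate)
        if n == 0:
            return 0
        pts = np.sort(rng.uniform(0.0, 1.0, n))
        return 1 + int(np.sum(np.diff(pts) > threshold))

    rng = np.random.default_rng(3)
    samples = [n_components(rate=50, threshold=0.02, rng=rng) for _ in range(10_000)]
    print("mean number of components:", np.mean(samples))
    ```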

  15. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: ""This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable."" -Journal of the American Statistical Association Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

  16. XRA image segmentation using regression

    Science.gov (United States)

    Jin, Jesse S.

    1996-04-01

    Segmentation is an important step in image analysis. Thresholding is one of the most important approaches. There are several difficulties in segmentation, such as automatically selecting a threshold, dealing with intensity distortion and noise removal. We have developed an adaptive segmentation scheme by applying the Central Limit Theorem in regression. A Gaussian regression is used to separate the distribution of background from foreground in a single peak histogram. The separation will help to automatically determine the threshold. A small 3 by 3 window is applied and the mode of the local histogram is used to overcome noise. Thresholding is based on local weighting, where regression is used again for parameter estimation. A connectivity test is applied to the final results to remove impulse noise. We have applied the algorithm to x-ray angiogram images to extract brain arteries. The algorithm works well for single peak distribution where there is no valley in the histogram. The regression provides a method to apply knowledge in clustering. Extending regression for multiple-level segmentation needs further investigation.

  17. Unitary Response Regression Models

    Science.gov (United States)

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

  18. Flexible survival regression modelling

    DEFF Research Database (Denmark)

    Cortese, Giuliana; Scheike, Thomas H; Martinussen, Torben

    2009-01-01

    Regression analysis of survival data, and more generally event history data, is typically based on Cox's regression model. We here review some recent methodology, focusing on the limitations of Cox's regression model. The key limitation is that the model is not well suited to represent time-varyi...

  19. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by m...... treatment of the topic is based on the perspective of applied researchers using quantile regression in their empirical work....

  20. Applications of some discrete regression models for count data

    Directory of Open Access Journals (Sweden)

    B. M. Golam Kibria

    2006-01-01

    Full Text Available In this paper we have considered several regression models to fit the count data that are encountered in the fields of Biometrical, Environmental, Social Sciences and Transportation Engineering. We have fitted Poisson (PO), Negative Binomial (NB), Zero-Inflated Poisson (ZIP) and Zero-Inflated Negative Binomial (ZINB) regression models to run-off-road (ROR) crash data which were collected on arterial roads in the south (rural) region of Florida State. To compare the performance of these models, we analyzed data with moderate to high percentages of zero counts. Because the variances were almost three times greater than the means, it appeared that both the NB and ZINB models performed better than the PO and ZIP models for the zero-inflated and over-dispersed count data.
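
    An illustrative comparison of the four count models named above, fitted with statsmodels to simulated over-dispersed, zero-inflated data rather than the ROR crash data.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Simulated over-dispersed counts with extra zeros, fitted by the four models.
    rng = np.random.default_rng(42)
    n = 1000
    x = rng.normal(size=n)
    X = sm.add_constant(x)
    mu = np.exp(0.3 + 0.5 * x)
    y = rng.negative_binomial(1, 1.0 / (1.0 + mu))    # over-dispersed counts
    y[rng.random(n) < 0.3] = 0                         # extra zeros

    fits = {
        "Poisson": sm.GLM(y, X, family=sm.families.Poisson()).fit(),
        "NB":      sm.NegativeBinomial(y, X).fit(disp=False),
        "ZIP":     sm.ZeroInflatedPoisson(y, X).fit(disp=False),
        "ZINB":    sm.ZeroInflatedNegativeBinomialP(y, X).fit(disp=False),
    }
    for name, res in fits.items():
        print(f"{name:7s} AIC = {res.aic:9.1f}")
    ```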

  1. Regression for economics

    CERN Document Server

    Naghshpour, Shahdad

    2012-01-01

    Regression analysis is the most commonly used statistical method in the world. Although few would characterize this technique as simple, regression is in fact both simple and elegant. The complexity that many attribute to regression analysis is often a reflection of their lack of familiarity with the language of mathematics. But regression analysis can be understood even without a mastery of sophisticated mathematical concepts. This book provides the foundation and will help demystify regression analysis using examples from economics and with real data to show the applications of the method. T

  2. Outlier detection algorithms for least squares time series regression

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Bent

    We review recent asymptotic results on some robust methods for multiple regression. The regressors include stationary and non-stationary time series as well as polynomial terms. The methods include the Huber-skip M-estimator, 1-step Huber-skip M-estimators, in particular the Impulse Indicator...... theory involves normal distribution results and Poisson distribution results. The theory is applied to a time series data set....

  3. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
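
    A hedged sketch of the two estimators being compared, fitted to simulated binary data with a known risk ratio: a log-binomial GLM and a modified ("robust") Poisson GLM with sandwich standard errors.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Binary outcome with a true risk ratio of 1.5 for the exposed group.
    rng = np.random.default_rng(7)
    n = 2000
    exposure = rng.integers(0, 2, n)
    y = rng.binomial(1, 0.2 * 1.5 ** exposure)
    X = sm.add_constant(exposure.astype(float))

    # Log link spelling as in recent statsmodels releases.
    log_binomial = sm.GLM(y, X, family=sm.families.Binomial(
        link=sm.families.links.Log())).fit()
    robust_poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")

    print("log-binomial RR  :", np.exp(log_binomial.params[1]))
    print("robust Poisson RR:", np.exp(robust_poisson.params[1]))
    ```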

  4. Completely Integrable Hamiltonian Systems Generated by Poisson Structures in R3

    Institute of Scientific and Technical Information of China (English)

    LEI De-Chao; ZHANG Xiang

    2005-01-01

    The completely integrable Hamiltonian systems have been applied to physics and mechanics intensively. We generate a family of completely integrable Hamiltonian systems from some kinds of exact Poisson structures in R3 by the realization of the Poisson algebra. Moreover, we prove that there is a Poisson algebra which cannot be realized by an exact Poisson structure.

  5. Autistic epileptiform regression.

    Science.gov (United States)

    Canitano, Roberto; Zappella, Michele

    2006-01-01

    Autistic regression is a well known condition that occurs in one third of children with pervasive developmental disorders, who, after normal development in the first year of life, undergo a global regression during the second year that encompasses language, social skills and play. In a portion of these subjects, epileptiform abnormalities are present with or without seizures, resembling, in some respects, other epileptiform regressions of language and behaviour such as Landau-Kleffner syndrome. In these cases, for a more accurate definition of the clinical entity, the term autistic epileptiform regression has been suggested. As in other epileptic syndromes with regression, the relationships between EEG abnormalities, language and behaviour, in autism, are still unclear. We describe two cases of autistic epileptiform regression selected from a larger group of children with autistic spectrum disorders, with the aim of discussing the clinical features of the condition, the therapeutic approach and the outcome.

  6. Scaled Sparse Linear Regression

    CERN Document Server

    Sun, Tingni

    2011-01-01

    Scaled sparse linear regression jointly estimates the regression coefficients and noise level in a linear model. It chooses an equilibrium with a sparse regression method by iteratively estimating the noise level via the mean residual squares and scaling the penalty in proportion to the estimated noise level. The iterative algorithm costs nearly nothing beyond the computation of a path of the sparse regression estimator for penalty levels above a threshold. For the scaled Lasso, the algorithm is a gradient descent in a convex minimization of a penalized joint loss function for the regression coefficients and noise level. Under mild regularity conditions, we prove that the method yields simultaneously an estimator for the noise level and an estimated coefficient vector in the Lasso path satisfying certain oracle inequalities for the estimation of the noise level, prediction, and the estimation of regression coefficients. These oracle inequalities provide sufficient conditions for the consistency and asymptotic...

  7. Rolling Regressions with Stata

    OpenAIRE

    Kit Baum

    2004-01-01

    This talk will describe some work underway to add a "rolling regression" capability to Stata's suite of time series features. Although commands such as "statsby" permit analysis of non-overlapping subsamples in the time domain, they are not suited to the analysis of overlapping (e.g. "moving window") samples. Both moving-window and widening-window techniques are often used to judge the stability of time series regression relationships. We will present an implementation of a rolling regression...
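
    Not Stata, but the same moving-window idea can be sketched in Python with statsmodels' RollingOLS; the drifting slope below is simulated purely for illustration.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.regression.rolling import RollingOLS

    # Simulated series with a slope that drifts from 1.0 to 2.0 over time.
    rng = np.random.default_rng(0)
    n = 500
    x = pd.Series(rng.normal(size=n), name="x")
    beta = np.linspace(1.0, 2.0, n)
    y = pd.Series(beta * x.to_numpy() + 0.5 * rng.normal(size=n), name="y")

    res = RollingOLS(y, sm.add_constant(x), window=60).fit()
    print(res.params.tail())        # per-window intercept and slope estimates
    ```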

  8. Semiclassical Limits of Ore Extensions and a Poisson Generalized Weyl Algebra

    Science.gov (United States)

    Cho, Eun-Hee; Oh, Sei-Qwon

    2016-07-01

    We observe [Launois and Lecoutre, Trans. Am. Math. Soc. 368:755-785, 2016, Proposition 4.1] that Poisson polynomial extensions appear as semiclassical limits of a class of Ore extensions. As an application, a Poisson generalized Weyl algebra A_1, considered as a Poisson version of the quantum generalized Weyl algebra, is constructed and its Poisson structures are studied. In particular, a necessary and sufficient condition is obtained for A_1 to be Poisson simple, and it is established that the Poisson endomorphisms of A_1 are Poisson analogues of the endomorphisms of the quantum generalized Weyl algebra.

  9. Unbiased Quasi-regression

    Institute of Scientific and Technical Information of China (English)

    Guijun YANG; Lu LIN; Runchu ZHANG

    2007-01-01

    Quasi-regression, motivated by problems arising in computer experiments, focuses mainly on speeding up evaluation. However, its theoretical properties have not been explored systematically. This paper shows that quasi-regression is unbiased, strongly convergent and asymptotically normal for parameter estimation but is biased for the fitting of the curve. Furthermore, a new method called unbiased quasi-regression is proposed. In addition to retaining the above asymptotic behaviors of the parameter estimations, unbiased quasi-regression is unbiased for the fitting of the curve.

  10. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, using Xlisp-Stat language called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava

  11. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2005-01-01

    Master linear regression techniques with a new edition of a classic text. Reviews of the Second Edition: "I found it enjoyable reading and so full of interesting material that even the well-informed reader will probably find something new . . . a necessity for all of those who do linear regression." -Technometrics, February 1987. "Overall, I feel that the book is a valuable addition to the now considerable list of texts on applied linear regression. It should be a strong contender as the leading text for a first serious course in regression analysis." -American Scientist, May-June 1987

  12. A critical assessment of shrinkage-based regression approaches for estimating the adverse health effects of multiple air pollutants

    Science.gov (United States)

    Roberts, Steven; Martin, Michael

    Most investigations of the adverse health effects of multiple air pollutants analyse the time series involved by simultaneously entering the multiple pollutants into a Poisson log-linear model. Concerns have been raised about this type of analysis, and it has been stated that new methodology or models should be developed for investigating the adverse health effects of multiple air pollutants. In this paper, we introduce the use of the lasso for this purpose and compare its statistical properties to those of ridge regression and the Poisson log-linear model. Ridge regression has been used in time series analyses on the adverse health effects of multiple air pollutants but its properties for this purpose have not been investigated. A series of simulation studies was used to compare the performance of the lasso, ridge regression, and the Poisson log-linear model. In these simulations, realistic mortality time series were generated with known air pollution mortality effects permitting the performance of the three models to be compared. Both the lasso and ridge regression produced more accurate estimates of the adverse health effects of the multiple air pollutants than those produced using the Poisson log-linear model. This increase in accuracy came at the expense of increased bias. Ridge regression produced more accurate estimates than the lasso, but the lasso produced more interpretable models. The lasso and ridge regression offer a flexible way of obtaining more accurate estimation of pollutant effects than that provided by the standard Poisson log-linear model.
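
    The paper's simulated mortality series are not reproduced here; as a hedged sketch of the modelling choices being compared, the fragment below fits a standard Poisson log-linear model and an L2-penalized (ridge-type) Poisson regression to synthetic counts driven by two correlated "pollutant" series. An L1 (lasso-type) penalty would be substituted analogously; all coefficients and settings are made up.

      import numpy as np
      import statsmodels.api as sm
      from sklearn.linear_model import PoissonRegressor

      rng = np.random.default_rng(0)
      n = 365
      z = rng.standard_normal(n)                       # shared component -> correlated pollutants
      X = np.column_stack([z + 0.3 * rng.standard_normal(n),
                           z + 0.3 * rng.standard_normal(n)])
      true_beta = np.array([0.05, 0.00])               # only the first pollutant has an effect
      y = rng.poisson(np.exp(3.0 + X @ true_beta))     # daily mortality counts

      # standard Poisson log-linear model with both pollutants entered simultaneously
      glm = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
      print("Poisson log-linear:", np.round(glm.params[1:], 3))

      # ridge-type (L2-penalized) Poisson regression
      ridge = PoissonRegressor(alpha=1.0).fit(X, y)
      print("penalized Poisson: ", np.round(ridge.coef_, 3))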

  13. Electrostatic forces in the Poisson-Boltzmann systems.

    Science.gov (United States)

    Xiao, Li; Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2013-09-07

    Continuum modeling of electrostatic interactions based upon numerical solutions of the Poisson-Boltzmann equation has been widely used in structural and functional analyses of biomolecules. A limitation of the numerical strategies is that it is conceptually difficult to incorporate these types of models into molecular mechanics simulations, mainly because of the issue in assigning atomic forces. In this theoretical study, we first derived the Maxwell stress tensor for molecular systems obeying the full nonlinear Poisson-Boltzmann equation. We further derived formulations of analytical electrostatic forces given the Maxwell stress tensor and discussed the relations of the formulations with those published in the literature. We showed that the formulations derived from the Maxwell stress tensor require a weaker condition for its validity, applicable to nonlinear Poisson-Boltzmann systems with a finite number of singularities such as atomic point charges and the existence of discontinuous dielectric as in the widely used classical piece-wise constant dielectric models.

  14. Mutation-Periodic Quivers, Integrable Maps and Associated Poisson Algebras

    CERN Document Server

    Fordy, Allan P

    2010-01-01

    We consider a class of maps recently derived in the context of cluster mutation. In this paper we start with a brief review of the quiver context, but then move on to a discussion of a related Poisson bracket, along with the Poisson algebra of a special family of functions associated with these maps. A bi-Hamiltonian structure is derived and used to construct a sequence of Poisson commuting functions and hence show complete integrability. Canonical coordinates are derived, with the map now being a canonical transformation with a sequence of commuting invariant functions. Compatibility of a pair of these functions gives rise to Liouville's equation and the map plays the role of a Bäcklund transformation.

  15. The coupling of Poisson sigma models to topological backgrounds

    CERN Document Server

    Rosa, Dario

    2016-01-01

    We extend the coupling to the topological backgrounds, recently worked out for the 2-dimensional BF-model, to the most general Poisson sigma models. The coupling involves the choice of a Casimir function on the target manifold and modifies the BRST transformations. This in turn induces a change in the BRST cohomology of the resulting theory. The observables of the coupled theory are analyzed and their geometrical interpretation is given. We finally couple the theory to 2-dimensional topological gravity: this is the first step to study a topological string theory in propagation on a Poisson manifold. As an application, we show that the gauge-fixed vectorial supersymmetry of the Poisson sigma models has a natural explanation in terms of the theory coupled to topological gravity.

  16. POISSON LIMIT THEOREM FOR COUNTABLE MARKOV CHAINS IN MARKOVIAN ENVIRONMENTS

    Institute of Scientific and Technical Information of China (English)

    方大凡; 王汉兴; 唐矛宁

    2003-01-01

    A countable Markov chain in a Markovian environment is considered. A Poisson limit theorem for the chain recurring to small cylindrical sets is mainly achieved. In order to prove this theorem, the entropy function h is introduced and the Shannon-McMillan-Breiman theorem for the Markov chain in a Markovian environment is shown. It is well known that a Markov process in a Markovian environment is generally not a standard Markov chain, so an example of Poisson approximation for a process which is not a Markov process is given. On the other hand, when the environmental process degenerates to a constant sequence, a Poisson limit theorem for countable Markov chains, which is a generalization of Pitskel's result for finite Markov chains, is obtained.

  17. The coupling of Poisson sigma models to topological backgrounds

    Energy Technology Data Exchange (ETDEWEB)

    Rosa, Dario [School of Physics, Korea Institute for Advanced Study,Seoul 02455 (Korea, Republic of)

    2016-12-13

    We extend the coupling to the topological backgrounds, recently worked out for the 2-dimensional BF-model, to the most general Poisson sigma models. The coupling involves the choice of a Casimir function on the target manifold and modifies the BRST transformations. This in turn induces a change in the BRST cohomology of the resulting theory. The observables of the coupled theory are analyzed and their geometrical interpretation is given. We finally couple the theory to 2-dimensional topological gravity: this is the first step to study a topological string theory in propagation on a Poisson manifold. As an application, we show that the gauge-fixed vectorial supersymmetry of the Poisson sigma models has a natural explanation in terms of the theory coupled to topological gravity.

  18. Segmentation algorithm for non-stationary compound Poisson processes

    CERN Document Server

    Toth, Bence; Farmer, J Doyne

    2010-01-01

    We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a nonparametric statistical method able to identify the regimes (patches) of the time series. The process is composed of consecutive patches of variable length, each patch being described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galvan et al., Phys. Rev. Lett. 87, 168105 (2001). We show that the new algorithm outperforms the original one for regime switching compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and we observe that our method finds almost three times more patches than the original one.

  19. Blocked Shape Memory Effect in Negative Poisson's Ratio Polymer Metamaterials.

    Science.gov (United States)

    Boba, Katarzyna; Bianchi, Matteo; McCombe, Greg; Gatt, Ruben; Griffin, Anselm C; Richardson, Robert M; Scarpa, Fabrizio; Hamerton, Ian; Grima, Joseph N

    2016-08-10

    We describe a new class of negative Poisson's ratio (NPR) open cell PU-PE foams produced by blocking the shape memory effect in the polymer. Contrary to classical NPR open cell thermoset and thermoplastic foams that return to their auxetic phase after reheating (and therefore limit their use in technological applications), this new class of cellular solids has a permanent negative Poisson's ratio behavior, generated through multiple shape memory (mSM) treatments that lead to a fixity of the topology of the cell foam. The mSM-NPR foams have Poisson's ratio values similar to those of the auxetic foams prior to their return to the conventional phase, but compressive stress-strain curves similar to the ones of conventional foams. The results show that by manipulating the shape memory effect in polymer microstructures it is possible to obtain new classes of materials with unusual deformation mechanisms.

  20. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    Science.gov (United States)

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.

  1. Super-Affine Hierarchies and their Poisson Embeddings

    CERN Document Server

    Toppan, F

    1998-01-01

    The link between (super)-affine Lie algebras as Poisson brackets structures and integrable hierarchies provides both a classification and a tool for obtaining superintegrable hierarchies. The lack of a fully systematic procedure for constructing matrix-type Lax operators, which makes the supersymmetric case essentially different from the bosonic counterpart, is overcome via the notion of Poisson embeddings (P.E.), i.e. Poisson mappings relating affine structures to conformal structures (in their simplest version P.E. coincide with the Sugawara construction). A full class of hierarchies can be recovered by using uniquely Lie-algebraic notions. The group-algebraic properties implicit in the super-affine picture allow a systematic derivation of reduced hierarchies by imposing either coset conditions or hamiltonian constraints (or possibly both).

  2. Poisson Packet Traffic Generation Based on Empirical Data

    Directory of Open Access Journals (Sweden)

    Andrej Kos

    2003-10-01

    An algorithm for generating equivalent Poisson packet traffic based on empirical traffic data is presented in this paper. Two steps are required in order to produce equivalent Poisson packet traffic. The real traffic trace is analyzed in the first step. In the second step, new equivalent synthetic Poisson traffic is generated in such a way that the first-order statistical parameters remain unchanged. New packet inter-arrival time series are produced in a random manner using a negative exponential probability distribution with a known mean. New packet size series are also produced in a random manner. However, due to specified minimum and maximum packet sizes, a truncated exponential probability distribution is applied.
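
    A minimal sketch of the two generation steps described above, assuming the empirical mean inter-arrival time, the scale of the parent exponential for packet sizes, and the size limits are already known (the numbers used here are placeholders, not measurements).

      import numpy as np

      def generate_poisson_traffic(n_packets, mean_iat, size_scale,
                                   min_size=64, max_size=1500, seed=0):
          """Equivalent Poisson traffic: exponential inter-arrival times plus
          truncated-exponential packet sizes (illustrative sketch)."""
          rng = np.random.default_rng(seed)
          # inter-arrival times from a negative exponential with the empirical mean
          iat = rng.exponential(mean_iat, n_packets)
          # packet sizes from an exponential truncated to [min_size, max_size]
          # via inverse-CDF sampling; size_scale is the scale of the parent
          # exponential, so the truncated mean will differ slightly from it
          lo = 1.0 - np.exp(-min_size / size_scale)
          hi = 1.0 - np.exp(-max_size / size_scale)
          u = rng.uniform(lo, hi, n_packets)
          sizes = -size_scale * np.log(1.0 - u)
          return np.cumsum(iat), sizes.astype(int)

      arrivals, sizes = generate_poisson_traffic(10_000, mean_iat=0.001, size_scale=400.0)
      print(arrivals[:3], sizes.min(), sizes.max(), sizes.mean())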

  3. Effect of Poisson noise on adiabatic quantum control

    Science.gov (United States)

    Kiely, A.; Muga, J. G.; Ruschhaupt, A.

    2017-01-01

    We present a detailed derivation of the master equation describing a general time-dependent quantum system with classical Poisson white noise and outline its various properties. We discuss the limiting cases of Poisson white noise and provide approximations for the different noise strength regimes. We show that using the eigenstates of the noise superoperator as a basis can be a useful way of expressing the master equation. Using this, we simulate various settings to illustrate different effects of Poisson noise. In particular, we show a dip in the fidelity as a function of noise strength where high fidelity can occur in the strong-noise regime for some cases. We also investigate recent claims [J. Jing et al., Phys. Rev. A 89, 032110 (2014), 10.1103/PhysRevA.89.032110] that this type of noise may improve rather than destroy adiabaticity.

  4. Suppressing Background Radiation Using Poisson Principal Component Analysis

    CERN Document Server

    Tandon, P; Dubrawski, A; Labov, S; Nelson, K

    2016-01-01

    Performance of nuclear threat detection systems based on gamma-ray spectrometry often strongly depends on the ability to identify the part of measured signal that can be attributed to background radiation. We have successfully applied a method based on Principal Component Analysis (PCA) to obtain a compact null-space model of background spectra using PCA projection residuals to derive a source detection score. We have shown the method's utility in a threat detection system using mobile spectrometers in urban scenes (Tandon et al 2012). While it is commonly assumed that measured photon counts follow a Poisson process, standard PCA makes a Gaussian assumption about the data distribution, which may be a poor approximation when photon counts are low. This paper studies whether and in what conditions PCA with a Poisson-based loss function (Poisson PCA) can outperform standard Gaussian PCA in modeling background radiation to enable more sensitive and specific nuclear threat detection.
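
    Poisson PCA requires a specialised loss function and is not sketched here; for orientation, the fragment below shows the standard Gaussian-PCA baseline the abstract compares against: a compact null-space model is fitted to background spectra and the projection residual serves as a detection score. The synthetic spectra, the number of components, and the injected source line are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      n_channels = 128
      background_rate = 5.0 + 50.0 * np.abs(np.sin(np.linspace(0.0, 3.0, n_channels)))
      train = rng.poisson(background_rate, size=(500, n_channels))   # background spectra

      pca = PCA(n_components=10).fit(train)                          # compact background model

      def detection_score(spectrum):
          """Norm of the PCA projection residual: large values look unlike background."""
          recon = pca.inverse_transform(pca.transform(spectrum[None, :]))
          return float(np.linalg.norm(spectrum - recon))

      background_only = rng.poisson(background_rate)
      with_source = rng.poisson(background_rate + np.where(np.arange(n_channels) == 60, 80.0, 0.0))
      print(detection_score(background_only), detection_score(with_source))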

  5. The coupling of Poisson sigma models to topological backgrounds

    Science.gov (United States)

    Rosa, Dario

    2016-12-01

    We extend the coupling to the topological backgrounds, recently worked out for the 2-dimensional BF-model, to the most general Poisson sigma models. The coupling involves the choice of a Casimir function on the target manifold and modifies the BRST transformations. This in turn induces a change in the BRST cohomology of the resulting theory. The observables of the coupled theory are analyzed and their geometrical interpretation is given. We finally couple the theory to 2-dimensional topological gravity: this is the first step to study a topological string theory in propagation on a Poisson manifold. As an application, we show that the gauge-fixed vectorial supersymmetry of the Poisson sigma models has a natural explanation in terms of the theory coupled to topological gravity.

  6. Morse–Smale Regression

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Samuel [Univ. of Utah, Salt Lake City, UT (United States); Rubel, Oliver [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bremer, Peer -Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, Valerio [Univ. of Utah, Salt Lake City, UT (United States); Whitaker, Ross T. [Univ. of Utah, Salt Lake City, UT (United States)

    2012-01-19

    This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse–Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this article introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to overfitting. The Morse–Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse–Smale regression. Supplementary Materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse–Smale complex approximation, and additional tables for the climate-simulation study.

  7. Regression to Causality

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    Humans are fundamentally primed for making causal attributions based on correlations. This implies that researchers must be careful to present their results in a manner that inhibits unwarranted causal attribution. In this paper, we present the results of an experiment that suggests regression models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results more likely. The experiment compared equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally. Our experiment implies that scholars using regression...

  8. Efficient triangulation of Poisson-disk sampled point sets

    KAUST Repository

    Guo, Jianwei

    2014-05-06

    In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speed up the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.

  9. Poisson-Fermi Model of Single Ion Activities

    CERN Document Server

    Liu, Jinn-Liang

    2015-01-01

    A Poisson-Fermi model is proposed for calculating activity coefficients of single ions in strong electrolyte solutions based on the experimental Born radii and hydration shells of ions in aqueous solutions. The steric effect of water molecules and interstitial voids in the first and second hydration shells play an important role in our model. The screening and polarization effects of water are also included in the model that can thus describe spatial variations of dielectric permittivity, water density, void volume, and ionic concentration. The activity coefficients obtained by the Poisson-Fermi model with only one adjustable parameter are shown to agree with experimental data, which vary nonmonotonically with salt concentrations.

  10. Adaptive maximal poisson-disk sampling on surfaces

    KAUST Repository

    Yan, Dongming

    2012-01-01

    In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which is the key ingredient of the adaptive maximal Poisson-disk sampling framework. Moreover, we adapt the presented sampling framework for remeshing applications. Several novel and efficient operators are developed for improving the sampling/meshing quality over the state-of-the-art. © 2012 ACM.

  11. The energy-momentum of a Poisson structure

    Energy Technology Data Exchange (ETDEWEB)

    Buric, M. [University of Belgrade, Faculty of Physics, P.O. Box 368, Belgrade (RS); Madore, J. [Universite de Paris-Sud, Laboratoire de Physique Theorique, Orsay (France); Zoupanos, G. [National Technical University, Physics Department, Zografou, Athens (Greece)

    2008-06-15

    Consider the quasi-commutative approximation to a noncommutative geometry. It is shown that there is a natural map from the resulting Poisson structure to the Riemann curvature of a metric. This map is applied to the study of high-frequency gravitational radiation. In classical gravity in the WKB approximation there are two results of interest, a dispersion relation and a conservation law. Both of these results can be extended to the noncommutative case, with the difference that they result from a cocycle condition on the high-frequency contribution to the Poisson structure, not from the field equations. (orig.)

  12. Improvement on MLE of the Means of Independent Poisson Distribution

    Institute of Scientific and Technical Information of China (English)

    李排昌

    2000-01-01

    In this paper, we consider the simultaneous estimation of the parameters (means) of independent Poisson distributions under the loss functions $L_0(\theta,T)=\sum_{i=1}^{n}(T_i-\theta_i)^2$ and $L_1(\theta,T)=\sum_{i=1}^{n}(T_i-\theta_i)^2/\theta_i$. We develop an estimator which is better than the maximum likelihood estimator X simultaneously under $L_0(\theta,T)$ and $L_1(\theta,T)$. Our estimator possesses substantially smaller risk than the usual estimator X in estimating the parameters (means) of independent Poisson distributions.

  13. Poisson Brackets of Normal-Ordered Wilson Loops

    OpenAIRE

    Lee, C. -W. H.; Rajeev, S. G.

    1998-01-01

    We formulate Yang-Mills theory in terms of the large-N limit, viewed as a classical limit, of gauge-invariant dynamical variables, which are closely related to Wilson loops, via deformation quantization. We obtain a Poisson algebra of these dynamical variables corresponding to normal-ordered quantum (at a finite value of $\hbar$) operators. Comparing with a Poisson algebra one of us introduced in the past for Weyl-ordered quantum operators, we find, using ideas closely related to topological g...

  14. A high order solver for the unbounded Poisson equation

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe

    In mesh-free particle methods a high order solution to the unbounded Poisson equation is usually achieved by constructing regularised integration kernels for the Biot-Savart law. Here the singular point particles are regularised using smoothed particles to obtain an accurate solution with an order determined by the smoothing function and the integration kernel. In this work we show an implementation of high order regularised integration kernels in the HE algorithm for the unbounded Poisson equation to formally achieve an arbitrarily high order of convergence. We further present a quantitative study of the convergence rate to give further insight.

  15. H∞ filtering for stochastic systems driven by Poisson processes

    Science.gov (United States)

    Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya

    2015-01-01

    This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising martingale theory, in particular the predictable projection operator and the dual predictable projection operator, this paper transforms the expectation of a stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. Then, based on this, an H∞ filter is designed such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.

  16. Robust iterative observer for source localization for Poisson equation

    KAUST Repository

    Majeed, Muhammad Usman

    2017-01-05

    The source localization problem for the Poisson equation with available noisy boundary data is well known to be highly sensitive to noise. The problem is ill posed and fails to fulfill Hadamard's stability criteria for well-posedness. In this work, a robust iterative observer is first presented for the boundary estimation problem for the Laplace equation, and then this algorithm, along with the available noisy boundary data from the Poisson problem, is used to localize point sources inside a rectangular domain. The algorithm is inspired by Kalman filter design, with one of the space variables used as time-like. Numerical implementation along with simulation results is detailed towards the end.

  17. Conditioned Poisson distributions and the concentration of chromatic numbers

    CERN Document Server

    Hartigan, John; Tatikonda, Sekhar

    2011-01-01

    The paper provides a simpler method for proving a delicate inequality that was used by Achlioptas and Naor to establish asymptotic concentration for chromatic numbers of Erdos-Renyi random graphs. The simplifications come from two new ideas. The first involves a sharpened form of a piece of statistical folklore regarding goodness-of-fit tests for two-way tables of Poisson counts under linear conditioning constraints. The second idea takes the form of a new inequality that controls the extreme tails of the distribution of a quadratic form in independent Poisson random variables.

  18. Efficient maximal Poisson-disk sampling and remeshing on surfaces

    KAUST Repository

    Guo, Jianwei

    2015-02-01

    Poisson-disk sampling is one of the fundamental research problems in computer graphics with many applications. In this paper, we study the problem of maximal Poisson-disk sampling on mesh surfaces. We present a simple approach that generalizes the 2D maximal sampling framework to surfaces. The key observation is to use a subdivided mesh as the sampling domain for conflict checking and void detection. Our approach improves on the state-of-the-art approaches in efficiency, quality, and memory consumption.

  19. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
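
    G*Power's analytic power routines are not reproduced here; as a rough cross-check of the kind of answer they give for the Poisson regression case, the following hedged Monte-Carlo sketch estimates the power of the Wald test for a single predictor at an assumed sample size and effect size.

      import numpy as np
      import statsmodels.api as sm

      def poisson_power(n, beta1, beta0=0.5, alpha=0.05, n_sim=500, seed=0):
          """Monte-Carlo power of the Wald test for beta1 in a Poisson regression (sketch)."""
          rng = np.random.default_rng(seed)
          hits = 0
          for _ in range(n_sim):
              x = rng.standard_normal(n)
              y = rng.poisson(np.exp(beta0 + beta1 * x))
              fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Poisson()).fit()
              hits += fit.pvalues[1] < alpha
          return hits / n_sim

      print("approximate power:", poisson_power(n=200, beta1=0.15))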

  20. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    Science.gov (United States)

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
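
    As a hedged sketch of one of the compared strategies (stabilized weights built from normal densities), the fragment below fits a linear model for the exposure, forms the density ratio, and plugs the weights into a weighted logistic regression for the marginal odds ratio; the quantile-binning alternative would replace the density ratio with a ratio of bin frequencies. Variable names, effect sizes, and the use of freq_weights are assumptions, not the authors' specification.

      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      n = 2000
      confounder = rng.standard_normal(n)
      exposure = 0.8 * confounder + rng.standard_normal(n)          # continuous exposure
      lin = -1.0 + 0.3 * exposure + 0.5 * confounder
      outcome = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))         # binary outcome

      # denominator: conditional density of exposure given the confounder
      den_fit = sm.OLS(exposure, sm.add_constant(confounder)).fit()
      den = norm.pdf(exposure, loc=den_fit.fittedvalues, scale=np.sqrt(den_fit.scale))

      # numerator: marginal density of exposure (stabilization)
      num = norm.pdf(exposure, loc=exposure.mean(), scale=exposure.std())

      weights = num / den                             # stabilized inverse probability weights

      # marginal structural model: weighted logistic regression of outcome on exposure
      msm = sm.GLM(outcome, sm.add_constant(exposure),
                   family=sm.families.Binomial(), freq_weights=weights).fit()
      print("marginal OR per unit of exposure:", round(float(np.exp(msm.params[1])), 2))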

  1. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.

  2. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  3. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  4. Transductive Ordinal Regression

    CERN Document Server

    Seah, Chun-Wei; Ong, Yew-Soon

    2011-01-01

    Ordinal regression is commonly formulated as a multi-class problem with ordinal constraints. The challenge of designing accurate classifiers for ordinal regression generally increases with the number of classes involved, due to the large number of labeled patterns that are needed. Ordinal class labels, however, are often costly to calibrate or difficult to obtain. Unlabeled patterns, on the other hand, often exist in much greater abundance and are freely available. To take advantage of the abundance of unlabeled patterns, we present a novel transductive learning paradigm for ordinal regression in this paper, namely Transductive Ordinal Regression (TOR). The key challenge of the present study lies in the precise estimation of both the ordinal class labels of the unlabeled data and the decision functions of the ordinal classes, simultaneously. The core elements of the proposed TOR include an objective function that caters to several commonly used loss functions cast in a transductive setting...

  5. Instability theory of the Navier-Stokes-Poisson equations

    CERN Document Server

    Jang, Juhi

    2011-01-01

    The stability question for the Lane-Emden stationary gaseous star configurations is an interesting problem arising in astrophysics. We establish both linear and nonlinear dynamical instability results for the Lane-Emden solutions in the framework of the Navier-Stokes-Poisson system with adiabatic exponent $6/5 < \gamma < 4/3$.

  6. Large time behavior of Euler-Poisson system for semiconductor

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this note, we present a framework for the large time behavior of general uniformly bounded weak entropy solutions to the Cauchy problem of the Euler-Poisson system for semiconductor devices. It is shown that the solutions converge to the stationary solutions exponentially in time. No smallness or regularity conditions are assumed.

  7. Tailoring graphene to achieve negative Poisson's ratio properties.

    Science.gov (United States)

    Grima, Joseph N; Winczewski, Szymon; Mizzi, Luke; Grech, Michael C; Cauchi, Reuben; Gatt, Ruben; Attard, Daphne; Wojciechowski, Krzysztof W; Rybicki, Jarosław

    2015-02-25

    Graphene can be made auxetic through the introduction of vacancy defects. This results in the thinnest negative Poisson's ratio material at ambient conditions known so far, an effect achieved via a nanoscale de-wrinkling mechanism that mimics the behavior at the macroscale exhibited by a crumpled sheet of paper when stretched.

  8. Stability of Schrödinger-Poisson type equations

    Institute of Scientific and Technical Information of China (English)

    Juan HUANG; Jian ZHANG; Guang-gan CHEN

    2009-01-01

    Variational methods are used to study the nonlinear Schrödinger-Poisson type equations which model electromagnetic waves propagating in a plasma. By analyzing the Hamiltonian property to construct a constrained variational problem, the existence of the ground state of the system is obtained. Furthermore, it is shown that the ground state is orbitally stable.

  9. On supermatrix models, Poisson geometry, and noncommutative supersymmetric gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Klimčík, Ctirad [Aix Marseille Université, CNRS, Centrale Marseille I2M, UMR 7373, 13453 Marseille (France)

    2015-12-15

    We construct a new supermatrix model which represents a manifestly supersymmetric noncommutative regularisation of the UOSp(2|1) supersymmetric Schwinger model on the supersphere. Our construction is much simpler than those already existing in the literature and it was found by using Poisson geometry in a substantial way.

  10. Quantum Poisson-Lie T-duality and WZNW model

    CERN Document Server

    Alekseev, A Yu; Tseytlin, Arkady A

    1996-01-01

    A pair of conformal sigma models related by Poisson-Lie T-duality is constructed by starting with the O(2,2) Drinfeld double. The duality relates the standard SL(2,R) WZNW model to a constrained sigma model defined on SL(2,R) group space. The quantum equivalence of the models is established by using a path integral argument.

  11. Modeling corporate defaults: Poisson autoregressions with exogenous covariates (PARX)

    DEFF Research Database (Denmark)

    Agosto, Arianna; Cavaliere, Guiseppe; Kristensen, Dennis

    We develop a class of Poisson autoregressive models with additional covariates (PARX) that can be used to model and forecast time series of counts. We establish the time series properties of the models, including conditions for stationarity and existence of moments. These results are in turn used...

  12. A Poisson type formula for Hardy classes on Heisenberg's group

    Directory of Open Access Journals (Sweden)

    Lopushansky O.V.

    2010-06-01

    The Hardy type class of complex functions of infinitely many variables defined on the Schrödinger irreducible unitary orbit of the reduced Heisenberg group, generated by the Gauss density, is investigated. A Poisson integral type formula for their analytic extensions on an open ball is established. Taylor coefficients for analytic extensions are described by the associated symmetric Fock space.

  13. Canonical enhancement as a result of Poisson distribution

    CERN Document Server

    Biro, T S

    2002-01-01

    We point out that a certain finite size effect in heavy ion physics, the canonical enhancement, is based on the difference of conservation constrained pair statistics for Poisson and Gauss distributions, respectively. Consequently it should occur in a wide range of phenomena, whenever comparing rare and frequent events.

  14. Optimality of Poisson Processes Intensity Learning with Gaussian Processes

    NARCIS (Netherlands)

    Kirichenko, A.; van Zanten, H.

    2015-01-01

    In this paper we provide theoretical support for the so-called "Sigmoidal Gaussian Cox Process" approach to learning the intensity of an inhomogeneous Poisson process on a d-dimensional domain. This method was proposed by Adams, Murray and MacKay (ICML, 2009), who developed a tractable computational

  15. On global solutions for the Vlasov-Poisson system

    Directory of Open Access Journals (Sweden)

    Peter E. Zhidkov

    2004-04-01

    In this article we show that the Vlasov-Poisson system has a unique weak solution in the space $L_1 \cap L_\infty$. For this purpose, we use the method of characteristics, unlike the approach in [12].

  16. Optimality of Poisson Processes Intensity Learning with Gaussian Processes

    NARCIS (Netherlands)

    A. Kirichenko; H. van Zanten

    2015-01-01

    In this paper we provide theoretical support for the so-called "Sigmoidal Gaussian Cox Process" approach to learning the intensity of an inhomogeneous Poisson process on a d-dimensional domain. This method was proposed by Adams, Murray and MacKay (ICML, 2009), who developed a tractable computational

  17. Nonparametric Bayesian inference for multidimensional compound Poisson processes

    NARCIS (Netherlands)

    S. Gugushvili; F. van der Meulen; P. Spreij

    2015-01-01

    Given a sample from a discretely observed multidimensional compound Poisson process, we study the problem of nonparametric estimation of its jump size density r0 and intensity λ0. We take a nonparametric Bayesian approach to the problem and determine posterior contraction rates in this context, whic

  18. Using the Gamma-Poisson Model to Predict Library Circulations.

    Science.gov (United States)

    Burrell, Quentin L.

    1990-01-01

    Argues that the gamma mixture of Poisson processes, for all its perceived defects, can be used to make predictions regarding future library book circulations of a quality adequate for general management requirements. The use of the model is extensively illustrated with data from two academic libraries. (Nine references) (CLB)
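
    As a hedged, back-of-the-envelope illustration of the gamma-Poisson (negative binomial) idea, the fragment below fits the mixture by the method of moments to made-up circulation counts and reports the implied chance that a randomly chosen title circulates at least once in the next period; the counts and the moment-matching shortcut are assumptions, not the article's procedure.

      import numpy as np

      # hypothetical circulation counts for a sample of titles over one year
      counts = np.array([0, 0, 1, 0, 2, 0, 0, 3, 1, 0, 0, 1, 5, 0, 2, 0, 1, 0, 0, 4])

      m, v = counts.mean(), counts.var(ddof=1)
      # a gamma mixture of Poissons has a negative binomial marginal;
      # method-of-moments shape r and "success" probability p
      r = m**2 / (v - m)
      p = r / (r + m)

      # probability that a title circulates at least once next period: 1 - P(X = 0)
      p_at_least_one = 1.0 - p**r
      print(f"r = {r:.2f}, p = {p:.2f}, P(at least one circulation) = {p_at_least_one:.2f}")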

  19. Poisson processes on groups and Feynman path integrals

    Energy Technology Data Exchange (ETDEWEB)

    Combe, P.; Rodriguez, R.; Sirugue, M.; Sirugue-Collin, M.; Hoegh-Krohn, R.

    1980-10-01

    We give an expression for the perturbed evolution of a free evolution by a gentle, possibly velocity-dependent, potential in terms of the expectation with respect to a Poisson process on a group. Various applications are given, in particular to usual quantum mechanics but also to Fermi and spin systems.

  20. Poisson processes on groups and Feynman path integrals

    Science.gov (United States)

    Combe, Ph.; Høegh-Krohn, R.; Rodriguez, R.; Sirugue, M.; Sirugue-Collin, M.

    1980-10-01

    We give an expression for the perturbed evolution of a free evolution by a gentle, possibly velocity-dependent, potential in terms of the expectation with respect to a Poisson process on a group. Various applications are given, in particular to usual quantum mechanics but also to Fermi and spin systems.

  1. Wide-area traffic: The failure of Poisson modeling

    Energy Technology Data Exchange (ETDEWEB)

    Paxson, V.; Floyd, S.

    1994-08-01

    Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. The authors evaluate 21 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTPDATA connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. The authors find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib[DJCME92] interarrivals preserves burstiness over many time scales; and that FTPDATA connection arrivals within FTP sessions come bunched into ``connection bursts``, the largest of which are so large that they completely dominate FTPDATA traffic. Finally, they offer some preliminary results regarding how the findings relate to the possible self-similarity of wide-area traffic.

  2. Characterization and global analysis of a family of Poisson structures

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Bermejo, Benito [Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Universidad Rey Juan Carlos, Calle Tulipan S/N, 28933 (Mostoles), Madrid (Spain)]. E-mail: benito.hernandez@urjc.es

    2006-06-26

    A three-dimensional family of solutions of the Jacobi equations for Poisson systems is characterized. In spite of its general form, the explicit and global determination of its main features, such as the symplectic structure and the construction of the Darboux canonical form, is possible. Examples are given.

  3. Wavelet-based Poisson rate estimation using the Skellam distribution

    Science.gov (United States)

    Hirakawa, Keigo; Baqai, Farhan; Wolfe, Patrick J.

    2009-02-01

    Owing to the stochastic nature of discrete processes such as photon counts in imaging, real-world data measurements often exhibit heteroscedastic behavior. In particular, time series components and other measurements may frequently be assumed to be non-iid Poisson random variables, whose rate parameter is proportional to the underlying signal of interest; witness the literature in digital communications, signal processing, astronomy, and magnetic resonance imaging applications. In this work, we show that certain wavelet and filterbank transform coefficients corresponding to vector-valued measurements of this type are distributed as sums and differences of independent Poisson counts, taking the so-called Skellam distribution. While exact estimates rarely admit analytical forms, we present Skellam mean estimators under both frequentist and Bayes models, as well as computationally efficient approximations and shrinkage rules, that may be interpreted as Poisson rate estimation performed in certain wavelet/filterbank transform domains. This indicates a promising potential approach for denoising of Poisson counts in the above-mentioned applications.

  4. Nonparametric Predictive Regression

    OpenAIRE

    Ioannis Kasparis; Elena Andreou; Phillips, Peter C.B.

    2012-01-01

    A unifying framework for inference is developed in predictive regressions where the predictor has unknown integration properties and may be stationary or nonstationary. Two easily implemented nonparametric F-tests are proposed. The test statistics are related to those of Kasparis and Phillips (2012) and are obtained by kernel regression. The limit distribution of these predictive tests holds for a wide range of predictors including stationary as well as non-stationary fractional and near unit...

  5. MortalitySmooth: An R Package for Smoothing Poisson Counts with P-Splines

    Directory of Open Access Journals (Sweden)

    Carlo G. Camarda

    2012-07-01

    The MortalitySmooth package provides a framework for smoothing count data in both one- and two-dimensional settings. Although general in its purposes, the package is specifically tailored to demographers, actuaries, epidemiologists, and geneticists who may be interested in using a practical tool for smoothing mortality data over ages and/or years. The total number of deaths over a specified age- and year-interval is assumed to be Poisson-distributed, and P-splines and generalized linear array models are employed as a suitable regression methodology. Extra-Poisson variation can also be accommodated. Structured in an S3 object orientation system, MortalitySmooth has two main functions which fit the data and define two classes of objects: Mort1Dsmooth and Mort2Dsmooth. The methods for these classes (print, summary, plot, predict, and residuals) are also included. These features make it easy for users to extract and manipulate the outputs. In addition, a collection of mortality data is provided. This paper provides an overview of the design, aims, and principles of MortalitySmooth, as well as strategies for applying it and extending its use.

  6. Weight Management

    Science.gov (United States)


  7. Hypotheses testing for fuzzy robust regression parameters

    Energy Technology Data Exchange (ETDEWEB)

    Kula, Kamile Sanli [Ahi Evran University, Department of Mathematics, 40200 Kirsehir (Turkey)], E-mail: sanli2004@hotmail.com; Apaydin, Aysen [Ankara University, Department of Statistics, 06100 Ankara (Turkey)], E-mail: apaydin@science.ankara.edu.tr

    2009-11-30

    The classical least squares (LS) method is widely used in regression analysis because computing its estimate is easy and traditional. However, LS estimators are very sensitive to outliers and to other deviations from the basic assumptions of normal theory [Huynh H. A comparison of four approaches to robust regression. Psychol Bull 1982;92:505-12; Stephenson D. 2000. Available from: (http://folk.uib.no/ngbnk/kurs/notes/node38.html); Xu R, Li C. Multidimensional least-squares fitting with a fuzzy model. Fuzzy Sets and Systems 2001;119:215-23.]. If outliers exist in the data set, robust methods are preferred for estimating parameter values. We proposed a fuzzy robust regression method using fuzzy numbers when x is crisp and Y is a triangular fuzzy number; in case of outliers in the data set, a weight matrix is defined by the membership function of the residuals. In fuzzy robust regression, fuzzy sets and fuzzy regression analysis are used in the ranking of residuals and in the estimation of regression parameters, respectively [Sanli K, Apaydin A. Fuzzy robust regression analysis based on the ranking of fuzzy sets. Internat. J. Uncertainty Fuzziness and Knowledge-Based Syst 2008;16:663-81.]. In this study, standard deviation estimates are obtained for the parameters by the defined weight matrix. Moreover, we propose another point of view in hypothesis testing for parameters.

  8. Do the risk factors for type 2 diabetes mellitus vary by location? A spatial analysis of health insurance claims in Northeastern Germany using kernel density estimation and geographically weighted regression.

    Science.gov (United States)

    Kauhl, Boris; Schweikart, Jürgen; Krafft, Thomas; Keste, Andrea; Moskwyn, Marita

    2016-11-03

    The provision of general practitioners (GPs) in Germany still relies mainly on the ratio of inhabitants to GPs at relatively large scales and barely accounts for an increased prevalence of chronic diseases among the elderly and socially underprivileged populations. Type 2 Diabetes Mellitus (T2DM) is one of the major cost-intensive diseases with high rates of potentially preventable complications. Provision of healthcare and access to preventive measures is necessary to reduce the burden of T2DM. However, current studies on the spatial variation of T2DM in Germany are mostly based on survey data, which not only underestimate the true prevalence of T2DM but are also only available on large spatial scales. The aim of this study is therefore to analyse the spatial distribution of T2DM at fine geographic scales and to assess location-specific risk factors based on data of the AOK health insurance. To display the spatial heterogeneity of T2DM, a bivariate, adaptive kernel density estimation (KDE) was applied. The spatial scan statistic (SaTScan) was used to detect areas of high risk. Global and local spatial regression models were then constructed to analyze socio-demographic risk factors of T2DM. T2DM is especially concentrated in rural areas surrounding Berlin. The risk factors for T2DM consist of the proportions of 65-79 year olds and 80+ year olds, the unemployment rate among 55-65 year olds, the proportion of employees covered by mandatory social security insurance, mean income tax, and the proportion of non-married couples. However, the strength of the association between T2DM and the examined socio-demographic variables displayed strong regional variations. The prevalence of T2DM varies at the very local level. Analyzing point data on T2DM from northeastern Germany's largest health insurance provider thus allows very detailed, location-specific knowledge about increased medical needs. Risk factors associated with T2DM depend largely on the place of residence of the...
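
    The insurance data are of course not available; purely to illustrate the density-ratio ingredient of such an analysis, the hedged sketch below builds a bivariate (non-adaptive) kernel density estimate of synthetic case locations and divides it by the corresponding population density to obtain a crude relative-risk surface. Coordinates, case probabilities, and grid limits are assumptions.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      # synthetic planar coordinates (km east / km north) for insured persons
      pop_xy = rng.normal(loc=[0.0, 0.0], scale=[20.0, 20.0], size=(5000, 2))
      # higher case probability in the eastern part of the study area
      is_case = rng.random(5000) < (0.05 + 0.10 * (pop_xy[:, 0] > 15.0))
      case_xy = pop_xy[is_case]

      kde_cases = gaussian_kde(case_xy.T)
      kde_pop = gaussian_kde(pop_xy.T)

      gx, gy = np.meshgrid(np.linspace(-60, 60, 100), np.linspace(-60, 60, 100))
      grid = np.vstack([gx.ravel(), gy.ravel()])

      # crude relative-risk surface: case density divided by population density
      rel_risk = kde_cases(grid) / np.maximum(kde_pop(grid), 1e-12)
      print("maximum of the relative-risk surface:", round(float(rel_risk.max()), 2))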

  9. Relationship between Multiple Regression and Selected Multivariable Methods.

    Science.gov (United States)

    Schumacker, Randall E.

    The relationship of multiple linear regression to various multivariate statistical techniques is discussed. The importance of the standardized partial regression coefficient (beta weight) in multiple linear regression as it is applied in path, factor, LISREL, and discriminant analyses is emphasized. The multivariate methods discussed in this paper…

  10. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method.

    Science.gov (United States)

    Zhang, Tingting; Kou, S C

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure.

  11. [Understanding logistic regression].

    Science.gov (United States)

    El Sanharawi, M; Naudet, F

    2013-10-01

    Logistic regression is one of the most common multivariate analysis models utilized in epidemiology. It allows the measurement of the association between the occurrence of an event (qualitative dependent variable) and factors susceptible to influence it (explicative variables). The choice of explicative variables that should be included in the logistic regression model is based on prior knowledge of the disease physiopathology and the statistical association between the variable and the event, as measured by the odds ratio. The main steps for the procedure, the conditions of application, and the essential tools for its interpretation are discussed concisely. We also discuss the importance of the choice of variables that must be included and retained in the regression model in order to avoid the omission of important confounding factors. Finally, by way of illustration, we provide an example from the literature, which should help the reader test his or her knowledge.
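
    To make the odds-ratio interpretation concrete, here is a hedged sketch on synthetic data: a logistic regression of a binary event on two explicative variables, with the exponentiated coefficients read as odds ratios. The variables, effect sizes, and sample size are assumptions chosen only for illustration.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 1000
      age = rng.normal(50.0, 10.0, n)
      smoker = rng.binomial(1, 0.3, n)
      lin = -6.0 + 0.08 * age + 0.7 * smoker
      event = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))    # qualitative dependent variable

      X = sm.add_constant(np.column_stack([age, smoker]))
      fit = sm.Logit(event, X).fit(disp=0)

      # exponentiated coefficients are odds ratios for each explicative variable
      odds_ratios = np.exp(fit.params[1:])
      print(dict(zip(["age (per year)", "smoker"], np.round(odds_ratios, 2))))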

  12. Constrained Sparse Galerkin Regression

    CERN Document Server

    Loiseau, Jean-Christophe

    2016-01-01

    In this work, we demonstrate the use of sparse regression techniques from machine learning to identify nonlinear low-order models of a fluid system purely from measurement data. In particular, we extend the sparse identification of nonlinear dynamics (SINDy) algorithm to enforce physical constraints in the regression, leading to energy conservation. The resulting models are closely related to Galerkin projection models, but the present method does not require the use of a full-order or high-fidelity Navier-Stokes solver to project onto basis modes. Instead, the most parsimonious nonlinear model is determined that is consistent with observed measurement data and satisfies necessary constraints. The constrained Galerkin regression algorithm is implemented on the fluid flow past a circular cylinder, demonstrating the ability to accurately construct models from data.

  13. Practical Session: Logistic Regression

    Science.gov (United States)

    Clausel, M.; Grégoire, G.

    2014-12-01

    An exercise is proposed to illustrate logistic regression. One investigates the different risk factors for the occurrence of coronary heart disease. It has been proposed in Chapter 5 of the book by D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science Business Media, LLC (2010) and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.

  14. Minimax Regression Quantiles

    DEFF Research Database (Denmark)

    Bache, Stefan Holst

    A new and alternative quantile regression estimator is developed and it is shown that the estimator is root-n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice, but whether it has theoretical justification is still an open question.

  15. Association of Second and Third Trimester Weight Gain in Pregnancy with Maternal and Fetal Outcomes

    Science.gov (United States)

    Drehmer, Michele; Duncan, Bruce Bartholow; Kac, Gilberto; Schmidt, Maria Inês

    2013-01-01

    Objective To investigate the association between weekly weight gain, during the second and third trimesters, classified according to the 2009 Institute of Medicine (IOM/NRC) recommendations, and maternal and fetal outcomes. Methods Gestational weight gain was evaluated in 2,244 pregnant women of the Brazilian Study of Gestational Diabetes (Estudo Brasileiro do Diabetes Gestacional – EBDG). Outcomes were cesarean delivery, preterm birth and small or large for gestational age birth (SGA, LGA). Associations between inadequate weight gain and outcomes were estimated using robust Poisson regression adjusting for pre-pregnancy body mass index, trimester-specific weight gain, age, height, skin color, parity, education, smoking, alcohol consumption, gestational diabetes and hypertensive disorders in pregnancy. Results In fully adjusted models, in the second trimester, insufficient weight gain was associated with SGA (relative risk [RR] 1.72, 95% confidence interval [CI] 1.26–2.33), and excessive weight gain with LGA (RR 1.64, 95% CI 1.16–2.31); in third trimester, excessive weight gain with preterm birth (RR 1.70, 95% CI 1.08–2.70) and cesarean delivery (RR 1.21, 95% CI 1.03–1.44). Women with less than recommended gestational weight gain in the 2nd trimester had a lesser risk of cesarean deliveries (RR 0.82, 95% CI 0.71–0.96) than women with adequate gestational weight gain in this trimester. Conclusion Though insufficient weight gain in the 3rd trimester was not associated with adverse outcomes, other deviations from recommended weight gain during second and third trimester were associated with adverse pregnancy outcomes. These findings support, in part, the 2009 IOM/NRC recommendations for nutritional monitoring during pregnancy. PMID:23382944
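    A minimal sketch, not the study's code, of the "robust Poisson" (modified Poisson) approach mentioned above: a Poisson GLM on a binary outcome with sandwich standard errors yields adjusted relative risks. The variable names (sga, insufficient_gain, bmi, age) and the simulated data are illustrative assumptions, not the EBDG variables.

    ```python
    # Hedged sketch of modified Poisson regression for relative risks.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "insufficient_gain": rng.integers(0, 2, n),
        "bmi": rng.normal(24, 4, n),
        "age": rng.normal(28, 5, n),
    })
    # Simulate a binary outcome whose risk is raised by insufficient gain.
    p = np.clip(0.06 * (1 + 0.7 * df["insufficient_gain"]), 0, 1)
    df["sga"] = rng.binomial(1, p)

    fit = smf.glm("sga ~ insufficient_gain + bmi + age", data=df,
                  family=sm.families.Poisson()).fit(cov_type="HC0")
    rr = np.exp(fit.params)          # adjusted relative risks
    ci = np.exp(fit.conf_int())      # 95% confidence intervals on the RR scale
    print(pd.concat([rr.rename("RR"), ci], axis=1))
    ```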

  16. Association of second and third trimester weight gain in pregnancy with maternal and fetal outcomes.

    Directory of Open Access Journals (Sweden)

    Michele Drehmer

    Full Text Available OBJECTIVE: To investigate the association between weekly weight gain, during the second and third trimesters, classified according to the 2009 Institute of Medicine (IOM/NRC) recommendations, and maternal and fetal outcomes. METHODS: Gestational weight gain was evaluated in 2,244 pregnant women of the Brazilian Study of Gestational Diabetes (Estudo Brasileiro do Diabetes Gestacional--EBDG). Outcomes were cesarean delivery, preterm birth and small or large for gestational age birth (SGA, LGA). Associations between inadequate weight gain and outcomes were estimated using robust Poisson regression adjusting for pre-pregnancy body mass index, trimester-specific weight gain, age, height, skin color, parity, education, smoking, alcohol consumption, gestational diabetes and hypertensive disorders in pregnancy. RESULTS: In fully adjusted models, in the second trimester, insufficient weight gain was associated with SGA (relative risk [RR] 1.72, 95% confidence interval [CI] 1.26-2.33), and excessive weight gain with LGA (RR 1.64, 95% CI 1.16-2.31); in third trimester, excessive weight gain with preterm birth (RR 1.70, 95% CI 1.08-2.70) and cesarean delivery (RR 1.21, 95% CI 1.03-1.44). Women with less than recommended gestational weight gain in the 2nd trimester had a lesser risk of cesarean deliveries (RR 0.82, 95% CI 0.71-0.96) than women with adequate gestational weight gain in this trimester. CONCLUSION: Though insufficient weight gain in the 3rd trimester was not associated with adverse outcomes, other deviations from recommended weight gain during second and third trimester were associated with adverse pregnancy outcomes. These findings support, in part, the 2009 IOM/NRC recommendations for nutritional monitoring during pregnancy.

  17. Stochastic Poisson equations associated to Lie algebroids and some refinements of a principal bundle

    CERN Document Server

    Ivan, Gheorghe

    2010-01-01

    The aim of this paper is to present the stochastic Poisson equations associated to Lie algebroids. The stochastic Poisson equations associated to a refinement of a concrete principal bundle are determined.

  18. Noncommutative Poisson structures, derived representation schemes and Calabi-Yau algebras

    CERN Document Server

    Berest, Yuri; Eshmatov, Farkhod; Ramadoss, Ajay

    2012-01-01

    Recently, William Crawley-Boevey proposed the definition of a Poisson structure on a noncommutative algebra $A$ based on the Kontsevich principle. His idea was to find the {\it weakest} possible structure on $A$ that induces standard (commutative) Poisson structures on all representation spaces $\Rep_V(A)$. It turns out that such a weak Poisson structure on $A$ is a Lie algebra bracket on the 0-th cyclic homology $\HC_0(A)$ satisfying some extra conditions; it was thus called an {\it $H_0$-Poisson structure}. This paper studies a higher homological extension of this construction. In our more general setting, we show that noncommutative Poisson structures in the above sense behave nicely with respect to homotopy (in the sense that homotopy equivalent NC Poisson structures on $A$ induce (via the derived representation functor) homotopy equivalent Poisson algebra structures on the derived representation schemes $\DRep_V(A)$). For an ordinary algebra $A$, a noncommutative Poisson structure on a semifree (...

  19. Nonlinear Regression with R

    CERN Document Server

    Ritz, Christian; Parmigiani, Giovanni

    2009-01-01

    R is a rapidly evolving lingua franca of graphical display and statistical analysis of experiments from the applied sciences. This book provides a coherent treatment of nonlinear regression with R by means of examples from a diversity of applied sciences such as biology, chemistry, engineering, medicine and toxicology.

  20. Multiple linear regression analysis

    Science.gov (United States)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
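    The program itself is FORTRAN IV; as a hedged Python sketch of the same idea (not the NASA program), the snippet below runs a forward stepwise selection in which a candidate variable is retained only while the most significant remaining candidate meets the user-specified confidence level.

    ```python
    # Hedged sketch of forward stepwise selection driven by a confidence level.
    import numpy as np
    import statsmodels.api as sm

    def forward_stepwise(X, y, confidence=0.95):
        """Return indices of columns of X retained by forward selection."""
        alpha = 1.0 - confidence
        selected, remaining = [], list(range(X.shape[1]))
        while remaining:
            pvals = {}
            for j in remaining:
                cols = selected + [j]
                model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
                pvals[j] = model.pvalues[-1]        # p-value of the newest term
            best = min(pvals, key=pvals.get)
            if pvals[best] > alpha:                 # no candidate is significant enough
                break
            selected.append(best)
            remaining.remove(best)
        return selected

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 5))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=120)
    print(forward_stepwise(X, y))   # expected to pick columns 0 and 3
    ```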

  1. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    regression by minimising a cross-validation estimate of the generalisation error. This allows to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  2. Software Regression Verification

    Science.gov (United States)

    2013-12-11

    of recursive procedures. Acta Informatica, 45(6):403-439, 2008. [GS11] Benny Godlin and Ofer Strichman. Regression verification. Technical Report...functions. Therefore, we need to redefine m-term. – Mutual termination. If either function f or function f′ (or both) is non-deterministic, then their

  3. Linear Regression Analysis

    CERN Document Server

    Seber, George A F

    2012-01-01

    Concise, mathematically clear, and comprehensive treatment of the subject.* Expanded coverage of diagnostics and methods of model fitting.* Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models.* More than 200 problems throughout the book plus outline solutions for the exercises.* This revision has been extensively class-tested.

  4. Regression Models for Count Data in R

    Directory of Open Access Journals (Sweden)

    Christian Kleiber

    2008-06-01

    Full Text Available The classical Poisson, geometric and negative binomial regression models for count data belong to the family of generalized linear models and are available at the core of the statistics toolbox in the R system for statistical computing. After reviewing the conceptual and computational features of these methods, a new implementation of hurdle and zero-inflated regression models in the functions hurdle() and zeroinfl() from the package pscl is introduced. It re-uses design and functionality of the basic R functions just as the underlying conceptual tools extend the classical models. Both hurdle and zero-inflated models are able to incorporate over-dispersion and excess zeros, two problems that typically occur in count data sets in economics and the social sciences, better than their classical counterparts. Using cross-section data on the demand for medical care, it is illustrated how the classical as well as the zero-augmented models can be fitted, inspected and tested in practice.
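    The record describes the R functions hurdle() and zeroinfl() from pscl. As a rough Python analogue (an assumption, not the package itself), statsmodels provides a zero-inflated Poisson model; the sketch below fits it to synthetic counts with excess zeros.

    ```python
    # Hedged sketch: zero-inflated Poisson regression on synthetic count data.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.normal(size=n)
    X = sm.add_constant(x)

    # Simulate excess zeros: a logit zero-inflation part plus a Poisson count part.
    p_zero = 1 / (1 + np.exp(-(-0.5 + 0.8 * x)))
    counts = rng.poisson(np.exp(0.3 + 0.6 * x))
    y = np.where(rng.uniform(size=n) < p_zero, 0, counts)

    fit = ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit").fit(maxiter=200, disp=False)
    print(fit.params)   # count-part and inflation-part coefficients
    ```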

  5. 2D sigma models and differential Poisson algebras

    Science.gov (United States)

    Arias, Cesar; Boulanger, Nicolas; Sundell, Per; Torres-Gomez, Alexander

    2015-08-01

    We construct a two-dimensional topological sigma model whose target space is endowed with a Poisson algebra for differential forms. The model consists of an equal number of bosonic and fermionic fields of worldsheet form degrees zero and one. The action is built using exterior products and derivatives, without any reference to a worldsheet metric, and is of the covariant Hamiltonian form. The equations of motion define a universally Cartan integrable system. In addition to gauge symmetries, the model has one rigid nilpotent supersymmetry corresponding to the target space de Rham operator. The rigid and local symmetries of the action, respectively, are equivalent to the Poisson bracket being compatible with the de Rham operator and obeying graded Jacobi identities. We propose that perturbative quantization of the model yields a covariantized differential star product algebra of Kontsevich type. We comment on the resemblance to the topological A model.

  6. 2D sigma models and differential Poisson algebras

    CERN Document Server

    Arias, Cesar; Sundell, Per; Torres-Gomez, Alexander

    2015-01-01

    We construct a two-dimensional topological sigma model whose target space is endowed with a Poisson algebra for differential forms. The model consists of an equal number of bosonic and fermionic fields of worldsheet form degrees zero and one. The action is built using exterior products and derivatives, without any reference to any worldsheet metric, and is of the covariant Hamiltonian form. The equations of motion define a universally Cartan integrable system. In addition to gauge symmetries, the model has one rigid nilpotent supersymmetry corresponding to the target space de Rham operator. The rigid and local symmetries of the action, respectively, are equivalent to the Poisson bracket being compatible with the de Rham operator and obeying graded Jacobi identities. We propose that perturbative quantization of the model yields a covariantized differential star product algebra of Kontsevich type. We comment on the resemblance to the topological A model.

  7. A COMPOUND POISSON MODEL FOR LEARNING DISCRETE BAYESIAN NETWORKS

    Institute of Scientific and Technical Information of China (English)

    Abdelaziz GHRIBI; Afif MASMOUDI

    2013-01-01

    We introduce here the concept of Bayesian networks, in the compound Poisson model, which provides a graphical modeling framework that encodes the joint probability distribution for a set of random variables within a directed acyclic graph. We suggest an approach that offers a new mixed implicit estimator. We show that the implicit approach applied in the compound Poisson model is very attractive for its ability to understand data and does not require any prior information. A comparative study between learned estimates given by implicit and by standard Bayesian approaches is established. Under some conditions and based on minimal squared error calculations, we show that the mixed implicit estimator is better than the standard Bayesian and the maximum likelihood estimators. We illustrate our approach by considering a simulation study in the context of mobile communication networks.

  8. 2D sigma models and differential Poisson algebras

    Energy Technology Data Exchange (ETDEWEB)

    Arias, Cesar [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Boulanger, Nicolas [Service de Mécanique et Gravitation, Université de Mons - UMONS,20 Place du Parc, 7000 Mons (Belgium); Laboratoire de Mathématiques et Physique Théorique,Unité Mixte de Recherche 7350 du CNRS, Fédération de Recherche 2964 Denis Poisson,Université François Rabelais, Parc de Grandmont, 37200 Tours (France); Sundell, Per [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Torres-Gomez, Alexander [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Instituto de Ciencias Físicas y Matemáticas, Universidad Austral de Chile-UACh,Valdivia (Chile)

    2015-08-18

    We construct a two-dimensional topological sigma model whose target space is endowed with a Poisson algebra for differential forms. The model consists of an equal number of bosonic and fermionic fields of worldsheet form degrees zero and one. The action is built using exterior products and derivatives, without any reference to a worldsheet metric, and is of the covariant Hamiltonian form. The equations of motion define a universally Cartan integrable system. In addition to gauge symmetries, the model has one rigid nilpotent supersymmetry corresponding to the target space de Rham operator. The rigid and local symmetries of the action, respectively, are equivalent to the Poisson bracket being compatible with the de Rham operator and obeying graded Jacobi identities. We propose that perturbative quantization of the model yields a covariantized differential star product algebra of Kontsevich type. We comment on the resemblance to the topological A model.

  9. Reference manual for the POISSON/SUPERFISH Group of Codes

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) by finite differences (ΔX, ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
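    A minimal sketch of the discretization idea described above, not of the POISSON/SUPERFISH codes themselves: the 2D Poisson equation -∇²φ = ρ on a unit square, with the Laplacian replaced by 5-point finite differences on a mesh and relaxed iteratively. Grid size, source placement, and the relaxation scheme are illustrative choices.

    ```python
    # Hedged sketch: 5-point finite-difference Poisson solve by simple relaxation.
    import numpy as np

    n = 41                       # mesh points per side
    h = 1.0 / (n - 1)
    phi = np.zeros((n, n))       # potential, Dirichlet boundary phi = 0
    rho = np.zeros((n, n))
    rho[n // 2, n // 2] = 1.0 / h**2   # unit point-like source in the central cell

    for _ in range(5000):        # Jacobi-style relaxation sweeps
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                  phi[1:-1, 2:] + phi[1:-1, :-2] +
                                  h**2 * rho[1:-1, 1:-1])
    print(phi[n // 2, n // 2])   # potential at the source cell
    ```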

  10. Finite-size effects and percolation properties of Poisson geometries

    Science.gov (United States)

    Larmier, C.; Dumonteil, E.; Malvagi, F.; Mazzolo, A.; Zoia, A.

    2016-07-01

    Random tessellations of the space represent a class of prototype models of heterogeneous media, which are central in several applications in physics, engineering, and life sciences. In this work, we investigate the statistical properties of d-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case d = 3. We first analyze the behavior of the key features of these stochastic geometries as a function of the dimension d and the linear size L of the domain. Then, we consider the case of Poisson binary mixtures, where the polyhedra are assigned two labels with complementary probabilities. For this latter class of random geometries, we numerically characterize the percolation threshold, the strength of the percolating cluster, and the average cluster size.

  11. Poisson Sigma Model with branes and hyperelliptic Riemann surfaces

    CERN Document Server

    Ferrario, Andrea

    2007-01-01

    We derive the explicit form of the superpropagators in presence of general boundary conditions (coisotropic branes) for the Poisson Sigma Model. This generalizes the results presented in Cattaneo and Felder's previous works for the Kontsevich angle function used in the deformation quantization program of Poisson manifolds without branes or with at most two branes. The relevant superpropagators for n branes are defined as gauge fixed homotopy operators of a complex of differential forms on n sided polygons P_n with particular "alternating" boundary conditions. In presence of more than three branes we use first order Riemann theta functions with odd singular characteristics on the Jacobian variety of a hyperelliptic Riemann surface (canonical setting). In genus g the superpropagators present g zero modes contributions.

  12. The Schrödinger-Poisson system on the sphere

    CERN Document Server

    Gérard, Patrick

    2010-01-01

    We study the Schrödinger-Poisson system on the unit sphere $\mathbb{S}^2$ of $\mathbb{R}^3$, modeling the quantum transport of charged particles confined on a sphere by an external potential. Our first results concern the Cauchy problem for this system. We prove that this problem is regularly well-posed on every $H^s(\mathbb{S}^2)$ with $s>0$, and not uniformly well-posed on $L^2(\mathbb{S}^2)$. The proof of well-posedness relies on multilinear Strichartz estimates, the proof of ill-posedness relies on the construction of a counterexample which concentrates exponentially on a closed geodesic. In a second part of the paper, we prove that this model can be obtained as the limit of the three dimensional Schrödinger-Poisson system, singularly perturbed by an external potential that confines the particles in the vicinity of the sphere.

  13. Histogram bin width selection for time-dependent Poisson processes

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Shinsuke; Shinomoto, Shigeru [Department of Physics, Graduate School of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan)

    2004-07-23

    In constructing a time histogram of the event sequences derived from a nonstationary point process, we wish to determine the bin width such that the mean squared error of the histogram from the underlying rate of occurrence is minimized. We find that the optimal bin widths obtained for a doubly stochastic Poisson process and a sinusoidally regulated Poisson process exhibit different scaling relations with respect to the number of sequences, time scale and amplitude of rate modulation, but both diverge under similar parametric conditions. This implies that under these conditions, no determination of the time-dependent rate can be made. We also apply the kernel method to these point processes, and find that the optimal kernels do not exhibit any critical phenomena, unlike the time histogram method.
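    As a hedged companion to this record, the sketch below evaluates the mean-squared-error cost used in the closely related Shimazaki-Shinomoto bin-width selection method on a sinusoidally modulated Poisson process generated by thinning; the scaling analysis of the paper itself is not reproduced, and the rate, modulation, and width grid are illustrative.

    ```python
    # Hedged sketch: MSE-style cost for histogram bin-width selection.
    import numpy as np

    def bin_width_cost(events, width, t_max):
        edges = np.arange(0.0, t_max + width, width)
        counts, _ = np.histogram(events, bins=edges)
        k_mean = counts.mean()
        k_var = counts.var()                  # biased variance, as in the method
        return (2.0 * k_mean - k_var) / width**2

    rng = np.random.default_rng(0)
    t_max = 100.0
    # Sinusoidally modulated Poisson process via thinning of a rate-20 process.
    t = np.cumsum(rng.exponential(1.0 / 20.0, size=4000))
    t = t[t < t_max]
    keep = rng.uniform(size=t.size) < 0.5 * (1 + np.sin(2 * np.pi * t / 10.0))
    events = t[keep]

    widths = np.linspace(0.2, 5.0, 60)
    costs = [bin_width_cost(events, w, t_max) for w in widths]
    print("cost-minimizing bin width ~", widths[int(np.argmin(costs))])
    ```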

  14. Probability distributions for Poisson processes with pile-up

    CERN Document Server

    Sevilla, Diego J R

    2013-01-01

    In this paper, two parametric probability distributions capable of describing the statistics of X-ray photon detection by a CCD are presented. They are formulated from simple models that account for the pile-up phenomenon, in which two or more photons are counted as one. These models are based on the Poisson process, but they have an extra parameter, which encapsulates all the detailed mechanisms of the pile-up process and must be fitted to the data statistics simultaneously with the rate parameter. The new probability distributions, one for the number of counts per time bin (Poisson-like) and the other for waiting times (exponential-like), are tested by fitting them to real data statistics and compared with each other through numerical simulations, and their results are analyzed and compared. The probability distributions presented here can be used as background statistical models to derive likelihood functions for statistical methods in signal analysis.

  15. A fast unbinned test on event clustering in Poisson processes

    CERN Document Server

    Prahl, J

    1999-01-01

    An unbinned statistical test on cluster-like deviations from Poisson processes for point process data is introduced, presented in the context of time variability analysis of astrophysical sources in count rate experiments. The measure of deviation of the actually obtained temporal event distribution from that of a Poisson process is derived from the distribution of time differences between two consecutive events in a natural way. The differential character of the measure suggests this test in particular for the search of irregular burst-like structures in experimental data. The construction allows the application of the test even for very low event numbers. Furthermore, the test can easily be applied in the case of varying acceptance of the detector as well. The simple and direct use of background events simultaneously acquired under the same conditions to account for acceptance variations is possible, allowing for easy application especially in earth-bound gamma-ray experiments. Central features are the fast...

  16. Histogram bin width selection for time-dependent Poisson processes

    Science.gov (United States)

    Koyama, Shinsuke; Shinomoto, Shigeru

    2004-07-01

    In constructing a time histogram of the event sequences derived from a nonstationary point process, we wish to determine the bin width such that the mean squared error of the histogram from the underlying rate of occurrence is minimized. We find that the optimal bin widths obtained for a doubly stochastic Poisson process and a sinusoidally regulated Poisson process exhibit different scaling relations with respect to the number of sequences, time scale and amplitude of rate modulation, but both diverge under similar parametric conditions. This implies that under these conditions, no determination of the time-dependent rate can be made. We also apply the kernel method to these point processes, and find that the optimal kernels do not exhibit any critical phenomena, unlike the time histogram method.

  17. Poisson structures in BRST-antiBRST invariant Lagrangian formalism

    CERN Document Server

    Geyer, B; Nersessian, A P; Geyer, Bodo; Lavrov, Petr; Nersessian, Armen

    2001-01-01

    We show that the specific operators V^a appearing in the triplectic formalism can be viewed as the anti-Hamiltonian vector fields generated by a second rank irreducible Sp(2) tensor. This allows for an explicit realization of the triplectic algebra being constructed from an arbitrary Poisson bracket on the space of the fields only. We show that the whole space of fields and antifields can be equipped with an even supersymplectic structure when this Poisson bracket is non-degenerate. This observation opens the possibility to provide the BRST/antiBRST path integral by a well-defined integration measure, as well as to establish a direct link between the Sp(2) symmetric Lagrangian and Hamiltonian BRST quantization schemes.

  18. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem on fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements on fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on simulation studies conducted, we have shown that to use the Univariate Poisson Model (UPM) estimates as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrated the sensitivity of the specific hyperparameter, which if it is not given extra attention, may affect the final estimates. The last issue is regarding the tuning parameters where they depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily given any data set.

  19. An adaptive fast multipole accelerated Poisson solver for complex geometries

    Science.gov (United States)

    Askham, T.; Cerfon, A. J.

    2017-09-01

    We present a fast, direct and adaptive Poisson solver for complex two-dimensional geometries based on potential theory and fast multipole acceleration. More precisely, the solver relies on the standard decomposition of the solution as the sum of a volume integral to account for the source distribution and a layer potential to enforce the desired boundary condition. The volume integral is computed by applying the FMM on a square box that encloses the domain of interest. For the sake of efficiency and convergence acceleration, we first extend the source distribution (the right-hand side in the Poisson equation) to the enclosing box as a C^0 function using a fast, boundary integral-based method. We demonstrate on multiply connected domains with irregular boundaries that this continuous extension leads to high accuracy without excessive adaptive refinement near the boundary and, as a result, to an extremely efficient "black box" fast solver.

  20. Events in time: Basic analysis of Poisson data

    Energy Technology Data Exchange (ETDEWEB)

    Engelhardt, M.E.

    1994-09-01

    The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model when the rate of occurrence varies randomly. Examples and SAS programs are given.
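    A minimal sketch (the report itself provides SAS programs) of the most basic quantities described above: the point estimate of the occurrence rate and an exact chi-square confidence interval, for an assumed example of 7 events in 3.5 years of exposure.

    ```python
    # Hedged sketch: Poisson rate estimate with an exact (chi-square) interval.
    from scipy.stats import chi2

    def poisson_rate_ci(n_events, exposure, conf=0.95):
        alpha = 1.0 - conf
        lam_hat = n_events / exposure
        lower = 0.5 * chi2.ppf(alpha / 2, 2 * n_events) / exposure if n_events > 0 else 0.0
        upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (n_events + 1)) / exposure
        return lam_hat, lower, upper

    # Example: 7 events observed over 3.5 years of operation.
    print(poisson_rate_ci(7, 3.5))   # rate per year with a 95% interval
    ```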

  1. Statistical Tests of the PTHA Poisson Assumption for Submarine Landslides

    Science.gov (United States)

    Geist, E. L.; Chaytor, J. D.; Parsons, T.; Ten Brink, U. S.

    2012-12-01

    We demonstrate that a sequence of dated mass transport deposits (MTDs) can provide information to statistically test whether or not submarine landslides associated with these deposits conform to a Poisson model of occurrence. Probabilistic tsunami hazard analysis (PTHA) most often assumes Poissonian occurrence for all sources, with an exponential distribution of return times. Using dates that define the bounds of individual MTDs, we first describe likelihood and Monte Carlo methods of parameter estimation for a suite of candidate occurrence models (Poisson, lognormal, gamma, Brownian Passage Time). In addition to age-dating uncertainty, both methods incorporate uncertainty caused by the open time intervals: i.e., before the first and after the last event to the present. Accounting for these open intervals is critical when there are a small number of observed events. The optimal occurrence model is selected according to both the Akaike Information Criteria (AIC) and Akaike's Bayesian Information Criterion (ABIC). In addition, the likelihood ratio test can be performed on occurrence models from the same family: e.g., the gamma model relative to the exponential model of return time distribution. Parameter estimation, model selection, and hypothesis testing are performed on data from two IODP holes in the northern Gulf of Mexico that penetrated a total of 14 MTDs, some of which are correlated between the two holes. Each of these events has been assigned an age based on microfossil zonations and magnetostratigraphic datums. Results from these sites indicate that the Poisson assumption is likely valid. However, parameter estimation results using the likelihood method for one of the sites suggest that the events may have occurred quasi-periodically. Methods developed in this study provide tools with which one can determine both the rate of occurrence and the statistical validity of the Poisson assumption when submarine landslides are included in PTHA.
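    As a hedged sketch of one step in this workflow, the snippet below fits exponential and gamma models to a set of closed inter-event times and compares them by AIC; the open intervals and age-dating uncertainty handled in the study are ignored, and the synthetic waiting times are purely illustrative.

    ```python
    # Hedged sketch: AIC comparison of exponential vs. gamma inter-event times.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    waits = rng.gamma(shape=2.0, scale=5.0, size=13)   # synthetic inter-event times

    # Exponential (Poisson-process) model: one free parameter.
    loc, scale = stats.expon.fit(waits, floc=0)
    aic_exp = 2 * 1 - 2 * stats.expon.logpdf(waits, loc, scale).sum()

    # Gamma model (reduces to the exponential when shape = 1): two free parameters.
    a, loc_g, scale_g = stats.gamma.fit(waits, floc=0)
    aic_gam = 2 * 2 - 2 * stats.gamma.logpdf(waits, a, loc_g, scale_g).sum()

    print({"AIC exponential": aic_exp, "AIC gamma": aic_gam})
    ```

    Because the exponential is nested in the gamma family, a likelihood ratio test could be applied to the same fits, as the record notes.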

  2. Dimension free and infinite variance tail estimates on Poisson space

    OpenAIRE

    Breton, J. C.; Houdré, C.; Privault, N.

    2004-01-01

    Concentration inequalities are obtained on Poisson space, for random functionals with finite or infinite variance. In particular, dimension free tail estimates and exponential integrability results are given for the Euclidean norm of vectors of independent functionals. In the finite variance case these results are applied to infinitely divisible random variables such as quadratic Wiener functionals, including L\\'evy's stochastic area and the square norm of Brownian paths. In the infinite vari...

  3. Translated Poisson Mixture Model for Stratification Learning (PREPRINT)

    Science.gov (United States)

    2007-09-01

    Translated Poisson Mixture Model for Stratification Learning. Gloria Haro, Dept. Teoria ..., Pless. Figure 1 shows, for each algorithm, the point cloud with each point colored and marked differently according to its classification (Figure 1: clustering of a spiral and a plane; results with different algorithms; color figure). Due to the statistical nature of the R-TPMM

  4. On terminating Poisson processes in some shock models

    Energy Technology Data Exchange (ETDEWEB)

    Finkelstein, Maxim, E-mail: FinkelMI@ufs.ac.z [Department of Mathematical Statistics, University of the Free State, Bloemfontein (South Africa); Max Planck Institute for Demographic Research, Rostock (Germany); Marais, Francois, E-mail: fmarais@csc.co [CSC, Cape Town (South Africa)

    2010-08-15

    A system subject to a point process of shocks is considered. Shocks occur in accordance with the homogeneous Poisson process. Different criteria of system failure (termination) are discussed and the corresponding probabilities of failure (accident)-free performance are derived. The described analytical approach is based on deriving integral equations for each setting and solving these equations through the Laplace transform. Some approximations are analyzed and further generalizations and applications are discussed.

  5. Generalized Poisson processes in quantum mechanics and field theory

    Energy Technology Data Exchange (ETDEWEB)

    Combe, P.; Rodriguez, R. (Centre National de la Recherche Scientifique, 13 - Marseille (France). Faculte des Sciences de Luminy); Hoegh-Krohn, R.; Sirugue, M.; Sirugue-Collin, M.

    1981-11-01

    In section 2 we describe more carefully the generalized Poisson processes, giving a realization of the underlying probability space, and we characterize these processes by their characteristic functionals. Section 3 is devoted to the proof of the previous formula for quantum mechanical systems, with possibly velocity dependent potentials and in section 4 we give an application of the previous theory to some relativistic Bose field models.

  6. Experimental dead-time distortions of poisson processes

    Science.gov (United States)

    Faraci, G.; Pennisi, A. R.

    1983-07-01

    In order to check the distortions, introduced by a non-extended dead time on the Poisson statistics, accurate experiments have been made in single channel counting. At a given measuring time, the dependence on the choice of the time origin and on the width of the dead time has been verified. An excellent agreement has been found between the theoretical expressions and the experimental curves.
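    A minimal simulation sketch of the effect under study: a non-extended (non-paralyzable) dead time applied to a Poisson event train, with the recorded rate compared against the classical correction m = r / (1 + rτ). The rate, dead time, and measuring time are illustrative values, not the experimental settings.

    ```python
    # Hedged sketch: non-extended dead-time distortion of a Poisson count rate.
    import numpy as np

    rng = np.random.default_rng(0)
    true_rate, tau, t_total = 1.0e4, 5.0e-6, 10.0      # Hz, s, s

    arrivals = np.cumsum(rng.exponential(1.0 / true_rate, size=int(2 * true_rate * t_total)))
    arrivals = arrivals[arrivals < t_total]

    recorded, last = 0, -np.inf
    for t in arrivals:                 # keep an event only if the counter is live
        if t - last >= tau:
            recorded += 1
            last = t

    measured_rate = recorded / t_total
    predicted_rate = true_rate / (1.0 + true_rate * tau)
    print(measured_rate, predicted_rate)
    ```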

  7. Experimental dead-time distortions of Poisson processes

    Energy Technology Data Exchange (ETDEWEB)

    Faraci, G.; Pennisi, A.R. (Catania Univ. (Italy). Ist. di Fisica; Consiglio Nazionale delle Ricerche, Catania (Italy). Gruppo Nazionale di Struttura della Materia)

    1983-07-01

    In order to check the distortions, introduced by a non-extended dead time on the Poisson statistics, accurate experiments have been made in single channel counting. At a given measuring time, the dependence on the choice of the time origin and on the width of the dead time has been verified. An excellent agreement has been found between the theoretical expressions and the experimental curves.

  8. Comparing Poisson Sigma Model with A-model

    Science.gov (United States)

    Bonechi, F.; Cattaneo, A. S.; Iraso, R.

    2016-10-01

    We discuss the A-model as a gauge fixing of the Poisson Sigma Model with target a symplectic structure. We complete the discussion in [4], where a gauge fixing defined by a compatible complex structure was introduced, by showing how to recover the A-model hierarchy of observables in terms of the AKSZ observables. Moreover, we discuss the off-shell supersymmetry of the A-model as a residual BV symmetry of the gauge fixed PSM action.

  9. Daisy models: Semi-Poisson statistics and beyond

    CERN Document Server

    Hernández-Saldaña, H; Seligman, T H

    1999-01-01

    Semi-Poisson statistics are shown to be obtained by removing every other number from a random sequence. Retaining every (r+1)th level, we obtain a family of sequences which we call daisy models. Their statistical properties coincide with those of Bogomolny's nearest-neighbour interaction Coulomb gas if the inverse temperature coincides with the integer r. In particular, the case r=2 reproduces closely the statistics of quasi-optimal solutions of the traveling salesman problem.
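    A minimal sketch of the construction described above: thin a Poisson (uncorrelated) spectrum by keeping every (r+1)th level and inspect the nearest-neighbour spacings of the thinned sequence; for r = 1 the normalized spacings follow the semi-Poisson form P(s) = 4s·exp(-2s). The sequence length is an arbitrary choice.

    ```python
    # Hedged sketch: daisy-model thinning of a Poisson level sequence.
    import numpy as np

    rng = np.random.default_rng(0)
    r = 1                                     # r = 1 gives semi-Poisson statistics
    levels = np.cumsum(rng.exponential(1.0, size=200000))
    daisy = levels[::r + 1]                   # keep every (r+1)-th level

    s = np.diff(daisy)
    s /= s.mean()                             # unfold to unit mean spacing
    # Semi-Poisson spacings have variance 1/2, versus 1 for pure Poisson spacings.
    print(round(s.var(), 3))
    ```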

  10. Group-buying inventory policy with demand under Poisson process

    Directory of Open Access Journals (Sweden)

    Tammarat Kleebmek

    2016-02-01

    Full Text Available Group buying is a modern way of selling in an uncertain market. With the objective of minimizing the sellers' ordering and reordering costs, we present in this paper a group-buying inventory model in which demand is governed by a Poisson process and product sales follow a binomial distribution. The inventory level is under continuous review, while the lead time is fixed. A numerical example is given.

  11. Heteroscedasticity checks for regression models

    Institute of Scientific and Technical Information of China (English)

    ZHU; Lixing

    2001-01-01


  12. Zero-modified Poisson model: Bayesian approach, influence diagnostics, and an application to a Brazilian leptospirosis notification data.

    Science.gov (United States)

    Conceição, Katiane S; Andrade, Marinho G; Louzada, Francisco

    2013-09-01

    In this paper, a Bayesian method for inference is developed for the zero-modified Poisson (ZMP) regression model. This model is very flexible for analyzing count data without requiring any information about inflation or deflation of zeros in the sample. A general class of prior densities based on an information matrix is considered for the model parameters. A sensitivity study to detect influential cases that can change the results is performed based on the Kullback-Leibler divergence. Simulation studies are presented in order to illustrate the performance of the developed methodology. Two real datasets on leptospirosis notification in Bahia State (Brazil) are analyzed using the proposed methodology for the ZMP model.

  13. Decomposition of almost Poisson structure of non-self-adjoint dynamical systems

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Non-self-adjoint dynamical systems, e.g., nonholonomic systems, can admit an almost Poisson structure, which is formulated by a kind of Poisson bracket satisfying the usual properties except for the Jacobi identity. A general theory of the almost Poisson structure is investigated based on a decomposition of the bracket into a sum of a Poisson one and an almost Poisson one. The corresponding relation between Poisson structure and symplectic structure is proved, making use of Jacobiizer and symplecticizer. Based on analysis of the pseudo-symplectic structure of the constraint submanifold of Chaplygin's nonholonomic systems, an almost Poisson bracket for the systems is constructed and decomposed into a sum of a canonical Poisson one and an almost Poisson one. Similarly, an almost Poisson structure, which can be decomposed into a sum of a canonical one and an almost "Lie-Poisson" one, is also constructed on an affine space with torsion whose autoparallels are utilized to describe the free motion of some non-self-adjoint systems. The decomposition of the almost Poisson bracket directly leads to a decomposition of a dynamical vector field into a sum of the usual Hamiltonian vector field and an almost Hamiltonian one, which is useful for simplifying the integration of vector fields.

  14. Brain, music, and non-Poisson renewal processes

    Science.gov (United States)

    Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S.; Ross, Mary J.; Winsor, Phil; Grigolini, Paolo

    2007-06-01

    In this paper we show that both music composition and brain function, as revealed by the electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials, Ψ(t) ∝ exp(-(γt)^α) with 0.5 < α < 1; both the EEG data and the music compositions yield renewal indices μ in the nonergodic range, suggesting an effect of music composition (μ_music) on the human brain.

  15. Differential expression analysis for RNAseq using Poisson mixed models.

    Science.gov (United States)

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Modeling environmental noise exceedances using non-homogeneous Poisson processes.

    Science.gov (United States)

    Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R

    2014-10-01

    In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of a Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. After the estimation and tuning are made, a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval is presented. This estimation can be very useful in the study of noise exposure of a population and also to predict, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which need to be treated separately when using usual models.
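    As a hedged sketch of the probability computation implied above, the snippet below uses the defining property of a non-homogeneous Poisson process: the number of exceedances in (t1, t2] is Poisson with mean Λ(t2) - Λ(t1). The Weibull-type cumulative rate Λ(t) = (t/α)^β and its parameter values are illustrative assumptions, not the fitted Messina parameters.

    ```python
    # Hedged sketch: exceedance probabilities for an NHPP with a Weibull-type rate.
    from scipy.stats import poisson

    def weibull_cumulative_rate(t, alpha, beta):
        """Integrated rate Lambda(t) = (t / alpha) ** beta (illustrative form)."""
        return (t / alpha) ** beta

    def prob_exceedances(k, t1, t2, alpha, beta):
        mean = weibull_cumulative_rate(t2, alpha, beta) - weibull_cumulative_rate(t1, alpha, beta)
        return poisson.pmf(k, mean)

    # Probability of exactly 3 threshold exceedances between hours 24 and 48,
    # for illustrative parameters alpha = 10 h, beta = 1.3.
    print(prob_exceedances(3, 24.0, 48.0, alpha=10.0, beta=1.3))
    ```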

  17. Probability Measure of Navigation pattern predition using Poisson Distribution Analysis

    Directory of Open Access Journals (Sweden)

    Dr.V.Valli Mayil

    2012-06-01

    Full Text Available The World Wide Web has become one of the most important media to store, share and distribute information. The rapid expansion of the web has provided a great opportunity to study user and system behavior by exploring web access logs. Web Usage Mining is the application of data mining techniques to large web data repositories in order to extract usage patterns. Every web server keeps a log of all transactions between the server and the clients. The log data collected by web servers contain information about every click a user makes on the web documents of the site. The useful log information needs to be analyzed and interpreted in order to obtain knowledge about actual user preferences in accessing web pages. In recent years several methods have been proposed for mining web log data. This paper applies the statistical method of Poisson distribution analysis to find the higher-probability session sequences, which are then used to test web application performance. The analysis of large volumes of clickstream data demands the employment of data mining methods. Conducting data mining on web server logs involves the determination of frequently occurring access sequences. The Poisson distribution gives the frequency probability of specific events when the average rate of a single occurrence is known. It is a discrete distribution, used in this paper to find the probability that a particular page is visited a given number of times by the user.
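    A minimal sketch of the stated use of the Poisson distribution: given an assumed average visit rate for a page within a session window, the probability of observing k visits is the Poisson probability mass function. The rate value is purely illustrative.

    ```python
    # Hedged sketch: Poisson probabilities for page-visit counts in a window.
    from scipy.stats import poisson

    mean_visits = 3.2                      # illustrative average visits per window
    for k in range(6):
        print(k, round(poisson.pmf(k, mean_visits), 4))
    ```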

  18. Free Appearance-Editing with Improved Poisson Image Cloning

    Institute of Scientific and Technical Information of China (English)

    Xiao-Hui Bie; Hao-Da Huang; Wen-Cheng Wang

    2011-01-01

    In this paper, we present a new edit tool for the user to conveniently preserve or freely edit the object appearance during seamless image composition. We observe that, though Poisson image editing is effective for seamless image composition, its color bleeding (the color of the target image is propagated into the source image) is not always desired in applications, and it provides no way to allow the user to edit the appearance of the source image. To make it more flexible and practical, we introduce new energy terms to control the appearance change and integrate them into the Poisson image editing framework. The new energy function can still be minimized using efficient sparse linear solvers, and the user can interactively refine the constraints. With the new tool, the user can enjoy not only seamless image composition but also the flexibility to preserve or manipulate the appearance of the source image at the same time. This provides more potential for creating new images. Experimental results demonstrate the effectiveness of our new edit tool, with a time cost similar to the original Poisson image editing.

  19. Poisson Downward Continuation Solution by the Jacobi Method

    Science.gov (United States)

    Kingdon, R.; Vaníček, P.

    2011-03-01

    Downward continuation is a continuing problem in geodesy and geophysics. Inversion of the discrete form of the Poisson integration process provides a numerical solution to the problem, but because the B matrix that defines the discrete Poisson integration is not always well conditioned, the solution may be noisy in situations where the discretization step is small and in areas containing large heights. We provide two remedies, both in the context of the Jacobi iterative solution to the Poisson downward continuation problem. First, we suggest testing against the upward-continued result from each solution, rather than testing between successive solutions on the geoid, so that the choice of a tolerance for the convergence of the iterative method is more meaningful and intuitive. Second, we show how a tolerance that reflects the conditioning of the B matrix can regularize the solution, and suggest an approximate way of choosing such a tolerance. Using these methods, we are able to calculate a solution that appears regular in an area of Papua New Guinea having heights over 3200 m, over a grid with 1 arc-minute spacing, based on a very poorly conditioned B matrix.
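    A minimal sketch of a Jacobi iteration on a generic linear system Bx = b (a small diagonally dominant example, not an actual downward-continuation matrix); the stopping rule here is a simple change-between-iterates check rather than the upward-continuation test proposed in the record.

    ```python
    # Hedged sketch: Jacobi iteration for B x = b with a convergence tolerance.
    import numpy as np

    def jacobi(B, b, tol=1e-10, max_iter=10000):
        d = np.diag(B)
        R = B - np.diag(d)                 # off-diagonal part
        x = np.zeros_like(b)
        for _ in range(max_iter):
            x_new = (b - R @ x) / d
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                return x_new
            x = x_new
        return x

    B = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(jacobi(B, b), np.linalg.solve(B, b))   # should agree closely
    ```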

  20. Lattice Metamaterials with Mechanically Tunable Poisson's Ratio for Vibration Control

    Science.gov (United States)

    Chen, Yanyu; Li, Tiantian; Scarpa, Fabrizio; Wang, Lifeng

    2017-02-01

    Metamaterials with artificially designed architectures are increasingly considered as new paradigmatic material systems with unusual physical properties. Here, we report a class of architected lattice metamaterials with mechanically tunable negative Poisson's ratios and vibration-mitigation capability. The proposed lattice metamaterials are built by replacing regular straight beams with sinusoidally shaped ones, which are highly stretchable under uniaxial tension. Our experimental and numerical results indicate that the proposed lattices exhibit extreme Poisson's-ratio variations between -0.7 and 0.5 over large tensile deformations up to 50%. This large variation of Poisson's-ratio values is attributed to the deformation pattern switching from bending to stretching within the sinusoidally shaped beams. The interplay between the multiscale (ligament and cell) architecture and wave propagation also enables remarkable broadband vibration-mitigation capability of the lattice metamaterials, which can be dynamically tuned by an external mechanical stimulus. The material design strategy provides insights into the development of classes of architected metamaterials with potential applications including energy absorption, tunable acoustics, vibration control, responsive devices, soft robotics, and stretchable electronics.

  1. Magnetic alignment and the Poisson alignment reference system

    Science.gov (United States)

    Griffith, L. V.; Schenz, R. F.; Sommargren, G. E.

    1990-08-01

    Three distinct metrological operations are necessary to align a free-electron laser (FEL): the magnetic axis must be located, a straight line reference (SLR) must be generated, and the magnetic axis must be related to the SLR. This article begins with a review of the motivation for developing an alignment system that will assure better than 100-μm accuracy in the alignment of the magnetic axis throughout an FEL. The 100-μm accuracy is an error circle about an ideal axis for 300 m or more. The article describes techniques for identifying the magnetic axes of solenoids, quadrupoles, and wiggler poles. Propagation of a laser beam is described to the extent of revealing sources of nonlinearity in the beam. Development of a straight-line reference based on the Poisson line, a diffraction effect, is described in detail. Spheres in a large-diameter laser beam create Poisson lines and thus provide a necessary mechanism for gauging between the magnetic axis and the SLR. Procedures for installing FEL components and calibrating alignment fiducials to the magnetic axes of the components are also described. The Poisson alignment reference system should be accurate to 25 μm over 300 m, which is believed to be a factor-of-4 improvement over earlier techniques. An error budget shows that only 25% of the total budgeted tolerance is used for the alignment reference system, so the remaining tolerances should fall within the allowable range for FEL alignment.

  2. Magnetic axis alignment and the Poisson alignment reference system

    Science.gov (United States)

    Griffith, Lee V.; Schenz, Richard F.; Sommargren, Gary E.

    1989-01-01

    Three distinct metrological operations are necessary to align a free-electron laser (FEL): the magnetic axis must be located, a straight line reference (SLR) must be generated, and the magnetic axis must be related to the SLR. This paper begins with a review of the motivation for developing an alignment system that will assure better than 100 micrometer accuracy in the alignment of the magnetic axis throughout an FEL. The paper describes techniques for identifying the magnetic axis of solenoids, quadrupoles, and wiggler poles. Propagation of a laser beam is described to the extent of revealing sources of nonlinearity in the beam. Development and use of the Poisson line, a diffraction effect, is described in detail. Spheres in a large-diameter laser beam create Poisson lines and thus provide a necessary mechanism for gauging between the magnetic axis and the SLR. Procedures for installing FEL components and calibrating alignment fiducials to the magnetic axes of the components are also described. An error budget shows that the Poisson alignment reference system will make it possible to meet the alignment tolerances for an FEL.

  3. Breast segmentation in MRI using Poisson surface reconstruction initialized with random forest edge detection

    Science.gov (United States)

    Martel, Anne L.; Gallego-Ortiz, Cristina; Lu, YingLi

    2016-03-01

    Segmentation of breast tissue in MRI images is an important pre-processing step for many applications. We present a new method that uses a random forest classifier to identify candidate edges in the image and then applies a Poisson reconstruction step to define a 3D surface based on the detected edge points. Using a leave one patient out cross validation we achieve a Dice overlap score of 0.96 +/- 0.02 for T1 weighted non-fat suppressed images in 8 patients. In a second dataset of 332 images acquired using a Dixon sequence, which was not used in training the random classifier, the mean Dice score was 0.90 +/- 0.03. Using this approach we have achieved accurate, robust segmentation results using a very small training set.

  4. The Importance of Structure Coefficients in Interpreting Regression Research.

    Science.gov (United States)

    Heidgerken, Amanda D.

    The paper stresses the importance of consulting beta weights and structure coefficients in the interpretation of regression results. The effects of multicollinearity and suppressor variables on the interpretation of beta weights are discussed. It is concluded that interpretations based on beta weights only can lead the unwary researcher to…
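    A minimal sketch contrasting the two quantities discussed above on synthetic correlated predictors: standardized beta weights from the regression versus structure coefficients, i.e. the correlations between each predictor and the predicted criterion. The data-generating values are illustrative.

    ```python
    # Hedged sketch: beta weights vs. structure coefficients.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 300
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)     # correlated predictor
    y = 1.0 * x1 + 0.2 * x2 + rng.normal(size=n)

    Z = np.column_stack([x1, x2])
    Zs = (Z - Z.mean(0)) / Z.std(0)              # standardize predictors and criterion
    ys = (y - y.mean()) / y.std()

    beta = np.linalg.lstsq(Zs, ys, rcond=None)[0]    # standardized beta weights
    y_hat = Zs @ beta
    structure = [np.corrcoef(Z[:, j], y_hat)[0, 1] for j in range(Z.shape[1])]
    print("beta weights:     ", beta)
    print("structure coeffs: ", structure)
    ```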

  5. Low rank Multivariate regression

    CERN Document Server

    Giraud, Christophe

    2010-01-01

    We consider in this paper the multivariate regression problem, when the target regression matrix $A$ is close to a low rank matrix. Our primary interest is in the practical case where the variance of the noise is unknown. Our main contribution is to propose in this setting a criterion to select among a family of low rank estimators and to prove a non-asymptotic oracle inequality for the resulting estimator. We also investigate the easier case where the variance of the noise is known and point out that the penalties appearing in our criteria are minimal (in some sense). These penalties involve the expected value of the Ky-Fan quasi-norm of some random matrices. These quantities can be evaluated easily in practice and upper bounds can be derived from recent results in random matrix theory.

  6. Subset selection in regression

    CERN Document Server

    Miller, Alan

    2002-01-01

    Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the Second Edition:A separate chapter on Bayesian methodsComplete revision of the chapter on estimationA major example from the field of near infrared spectroscopyMore emphasis on cross-validationGreater focus on bootstrappingStochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible Software available on the Internet for implementing many of the algorithms presentedMore examplesSubset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...

  7. Classification and regression trees

    CERN Document Server

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

  8. Aid and growth regressions

    DEFF Research Database (Denmark)

    Hansen, Henrik; Tarp, Finn

    2001-01-01

    . There are, however, decreasing returns to aid, and the estimated effectiveness of aid is highly sensitive to the choice of estimator and the set of control variables. When investment and human capital are controlled for, no positive effect of aid is found. Yet, aid continues to impact on growth via...... investment. We conclude by stressing the need for more theoretical work before this kind of cross-country regressions are used for policy purposes....

  9. Robust Nonstationary Regression

    OpenAIRE

    1993-01-01

    This paper provides a robust statistical approach to nonstationary time series regression and inference. Fully modified extensions of traditional robust statistical procedures are developed which allow for endogeneities in the nonstationary regressors and serial dependence in the shocks that drive the regressors and the errors that appear in the equation being estimated. The suggested estimators involve semiparametric corrections to accommodate these possibilities and they belong to the same ...

  10. TWO REGRESSION CREDIBILITY MODELS

    Directory of Open Access Journals (Sweden)

    Constanţa-Nicoleta BODEA

    2010-03-01

    Full Text Available In this communication we will discuss two regression credibility models from Non-Life Insurance Mathematics that can be solved by means of matrix theory. In the first regression credibility model, starting from a well-known representation formula of the inverse for a special class of matrices, a risk premium will be calculated for a contract with risk parameter θ. In the next regression credibility model, we will obtain a credibility solution in the form of a linear combination of the individual estimate (based on the data of a particular state) and the collective estimate (based on aggregate USA data). To illustrate the solution with the properties mentioned above, we shall need the well-known representation theorem for a special class of matrices, the properties of the trace for a square matrix, the scalar product of two vectors, the norm with respect to a positive definite matrix given in advance and the complicated mathematical properties of conditional expectations and of conditional covariances.

  11. Geostatistical analysis of disease data: estimation of cancer mortality risk from empirical frequencies using Poisson kriging

    Directory of Open Access Journals (Sweden)

    Goovaerts Pierre

    2005-12-01

    Full Text Available Abstract Background Cancer mortality maps are used by public health officials to identify areas of excess and to guide surveillance and control activities. Quality of decision-making thus relies on an accurate quantification of risks from observed rates which can be very unreliable when computed from sparsely populated geographical units or recorded for minority populations. This paper presents a geostatistical methodology that accounts for spatially varying population sizes and spatial patterns in the processing of cancer mortality data. Simulation studies are conducted to compare the performances of Poisson kriging to a few simple smoothers (i.e. population-weighted estimators and empirical Bayes smoothers) under different scenarios for the disease frequency, the population size, and the spatial pattern of risk. A public-domain executable with example datasets is provided. Results The analysis of age-adjusted mortality rates for breast and cervix cancers illustrated some key features of commonly used smoothing techniques. Because of the small weight assigned to the rate observed over the entity being smoothed (kernel weight), the population-weighted average leads to risk maps that show little variability. Other techniques assign larger and similar kernel weights but they use a different piece of auxiliary information in the prediction: global or local means for global or local empirical Bayes smoothers, and spatial combination of surrounding rates for the geostatistical estimator. Simulation studies indicated that Poisson kriging outperforms other approaches for most scenarios, with a clear benefit when the risk values are spatially correlated. Global empirical Bayes smoothers provide more accurate predictions under the least frequent scenario of spatially random risk. Conclusion The approach presented in this paper enables researchers to incorporate the pattern of spatial dependence of mortality rates into the mapping of risk values and the
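
    A minimal sketch of one of the simple smoothers mentioned above, a global empirical Bayes (Marshall-type) estimator that shrinks area-level rates toward the global mean; the data are invented and this is not the paper's Poisson kriging:

      # Minimal sketch: global empirical Bayes smoothing of area-level mortality rates.
      import numpy as np

      def global_eb_smooth(deaths, population):
          r = deaths / population                       # raw area rates
          m = deaths.sum() / population.sum()           # global mean rate
          nbar = population.mean()
          v = np.average((r - m) ** 2, weights=population) - m / nbar
          v = max(v, 0.0)                               # between-area variance estimate
          w = v / (v + m / population)                  # shrinkage weights (larger for big areas)
          return w * r + (1 - w) * m

      deaths = np.array([2, 15, 1, 40, 0])
      population = np.array([500, 12000, 800, 30000, 300])
      print(global_eb_smooth(deaths, population).round(5))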

  12. Healthy gestational weight gain prevalence and associated risk factors: A population-based study in the far South of Brazil

    Directory of Open Access Journals (Sweden)

    Luana Patricia MARMITT

    Full Text Available ABSTRACT Objective To measure and identify the factors associated with healthy weight gain during pregnancy in the municipality of Rio Grande, Rio Grande do Sul, Brazil. Methods This was a population-based, cross-sectional study that included all parturient women from the municipality who gave birth at its maternity hospitals in 2013. Information was collected by interview with the mothers in the first 48 hours following parturition and from the prenatal care cards. Healthy weight gain was evaluated according to the Institute of Medicine guidelines. Data analysis used Poisson regression with robust variance, following a hierarchical model. Results Among the 1,784 pregnant participants, 89% attended at least six prenatal care visits, and 32% had healthy weight gain during pregnancy. Higher education level and fewer children resulted in a higher prevalence ratio for healthy weight gain (p=0.003 and p=0.029, respectively). Underweight women at conception had a higher proportion of healthy weight gain (p<0.001). Despite extensive coverage, prenatal care did not affect healthy weight gain during pregnancy (p=0.104). Conclusion The low proportion of women with healthy gestational weight gain suggests a need for better prenatal care services. Women who are overweight at conception, have lower education levels, and have had multiple pregnancies need special attention.
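
    A minimal sketch of Poisson regression with robust (sandwich) variance for prevalence ratios, as implemented in statsmodels; the variable names and data below are hypothetical, not the study's:

      # Minimal sketch: Poisson regression with robust variance for a binary outcome,
      # so that exponentiated coefficients can be read as prevalence ratios.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 1784
      df = pd.DataFrame({
          "healthy_gain": rng.integers(0, 2, n),        # 1 = healthy gestational weight gain
          "higher_education": rng.integers(0, 2, n),
          "parity": rng.integers(0, 5, n),
      })
      X = sm.add_constant(df[["higher_education", "parity"]])
      fit = sm.GLM(df["healthy_gain"], X, family=sm.families.Poisson()).fit(cov_type="HC0")
      print(np.exp(fit.params))        # prevalence ratios
      print(fit.bse)                   # robust standard errors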

  13. [Study on the relationship between frequencies of prenatal care and neonatal low birth weight in women of childbearing age from rural areas of Shaanxi province].

    Science.gov (United States)

    Li, J M; Liu, D M; Zhang, X F; Qu, P F; Yan, H

    2017-04-10

    Objective: To investigate the relationship between the frequency of prenatal care and neonatal low birth weight (LBW) among women of childbearing age from the rural areas of Shaanxi province. Methods: A questionnaire survey was conducted among childbearing-aged women from the rural areas, selected through a multi-stage stratified random sampling method; the women were either pregnant or had a definite pregnancy outcome. Data were described as median ± standard deviation, and the chi-square test was used to compare rates. Neonatal LBW and the frequency of prenatal care entered a generalized Poisson regression model as the dependent and independent variables, respectively, with confounding factors controlled for. Results: The overall incidence of LBW was 3.75% among 18 911 rural women of childbearing age during 2010-2013. The number of prenatal care visits ranged from 0 (0.70%) to 15 (0.70%), with an average of 5.65±2.74 visits (≥10 visits accounted for 12.37%, ≥7 visits for 28.52%, and ≥5 visits for 62.80%). After controlling for confounding factors, the generalized Poisson regression analysis revealed a statistically significant difference from the reference group: the incidence of neonatal LBW in the 7 group differed (OR=1.61, 95%CI: 1.31-2.00) with respect to prenatal care among women of childbearing age.

  14. Maternal encouragement and discouragement: Differences by food type and child weight status.

    Science.gov (United States)

    Pesch, Megan H; Appugliese, Danielle P; Kaciroti, Niko; Rosenblum, Katherine L; Miller, Alison L; Lumeng, Julie C

    2016-06-01

    Childhood obesity prevention practice guidelines recommend that parents encourage the intake of certain types of foods and discourage the intake of others. It is unknown if parents of children of different weight statuses encourage or discourage their child's intake differently based on food type. The objective of this study was to determine the association of child weight status with maternal encouragement and discouragement of four different types of food. A total of 222 mother-child dyads were video-taped during the standardized, sequential presentation of four foods to both participants: cupcakes (familiar dessert), green beans (familiar vegetable), halva (unfamiliar dessert) and artichoke (unfamiliar vegetable). Mothers' encouragements and discouragements of child intake were reliably coded for each food type. Poisson regression models were used to test the independent association of child weight status (normal weight, overweight and obese) with encouragement and discouragement for each food type. Mothers of an obese, vs. normal or overweight child, had lower rates of encouragement for a familiar dessert (p = 0.02), and higher rates of discouragement for a familiar dessert (p = 0.001), a familiar vegetable (p = 0.01), and an unfamiliar vegetable (p = 0.001). There were no differences in encouragements or discouragements between mothers of an overweight, vs. obese child, for any of the 4 food types. Mothers of obese children may alter their feeding behavior differentially based on food type. Future work should examine how interventions promoting maternal encouragement or discouragement of different food types impact child weight status.

  15. POISSON COUNT MODELS TO EXPLAIN THE ADOPTION OF AGRICULTURAL AND NATURAL RESOURCE MANAGEMENT TECHNOLOGIES BY SMALL FARMERS IN CENTRAL AMERICAN COUNTRIES

    OpenAIRE

    Ramirez, Octavio A.; Schultz, Steven D.

    2000-01-01

    Evaluations of the factors influencing the adoption of agricultural and natural resource management technologies among small farmers in developing countries have been mostly limited to qualitative discussions or simple descriptive statistics resulting in superficial and inconclusive findings. This study introduces the use of Poisson Count Regressions as a statistically appropriate procedure to analyze certain common types of adoption data. It uses them to assess the impact of key socio-econom...

  16. Long-term exposure to residential traffic noise and changes in body weight and waist circumference

    DEFF Research Database (Denmark)

    Christensen, Jeppe S; Raaschou-Nielsen, Ole; Tjønneland, Anne

    2015-01-01

    We aimed to investigate the association between long-term residential traffic noise and changes in adiposity. MATERIALS AND METHODS: The study was based on 39,720 middle-aged Danish men and women from a cohort, with information on weight and waist circumference at two points in time. Residential exposure to traffic noise was calculated for all participants' present and historical addresses using the Nordic prediction method. The associations between traffic noise and changes in adiposity measures after a mean follow-up of 5.3 years were analyzed by linear and logistic regression with adjustments … circumference. For example, time-weighted mean exposure 5 years preceding follow-up was associated with a yearly weight gain of 15.4 g (95% confidence interval (CI): 2.14; 28.7) and a yearly increase in waist circumference of 0.22 mm (95% CI: 0.018; 0.43) per 10 dB. Similarly, in Poisson regression models we found …

  17. Association between Weight Change, Health Outcomes, and Mortality in Older Residents in Long-Term Care.

    Science.gov (United States)

    Zhou, Wei; Kozikowski, Andrzej; Pekmezaris, Renee; Lolis, James; Tommasulo, Barbara; Fishbein, Joanna; Wolf-Klein, Gisele

    2017-07-01

    Despite the numerous health risks associated with being overweight, the effect of weight loss on health and longevity remains controversial, particularly in older adults. We explored the association among weight changes, health outcomes, and mortality in older residents of a skilled nursing facility. A 6-year retrospective chart review of residents of a long-term care facility was conducted, collecting monthly weights in addition to the clinical and demographic data of all residents for at least 1 year. Weight changes of 5% from baseline month 1 through month 12 were classified as stable, loss, or gain. Demographics, body mass index (BMI), comorbidities, number of hospitalizations, and mortality were analyzed. The association between weight change (and other demographic and clinical variables) and mortality outcomes, as well as number of hospitalizations, was assessed using the χ² test, the Fisher exact test, Poisson regression, or negative binomial regression, as appropriate. A total of 116 residents met the inclusion criteria; the median age was 84 years, with 71.6% being women and 88.7% white. The median length of stay was 877.5 days. Median body weight at baseline was 137.3 lb with a BMI of 23.5. More than one-third (36.2%) of residents had stable weight, 37.9% gained weight, and 25.9% lost weight during their stay. Neither weight change category nor baseline BMI was significantly associated with mortality (P = 0.056 and P = 0.518, respectively). Multivariable models showed that receiving supplementation (P = 0.04) and having hypertension (P = 0.04) were significant predictors of mortality after adjusting for the other factors. Losing >5% body weight (compared with maintaining stable weight; P = 0.0097), being a man (P = 0.0104), receiving a supplement (P = 0.0171), and being fed by tube (P = 0.0004) were associated with an increased number of hospitalizations after covariate adjustment. Weight fluctuation and baseline BMI do not appear to be associated with

  18. Area-to-point parameter estimation with geographically weighted regression

    Science.gov (United States)

    Murakami, Daisuke; Tsutsumi, Morito

    2015-07-01

    The modifiable areal unit problem (MAUP) arises when the aggregation of spatial data into areal units influences the results of spatial data analysis. Standard geographically weighted regression (GWR), which ignores aggregation mechanisms, cannot be considered an effective countermeasure to the MAUP. Accordingly, this study proposes a type of GWR with aggregation mechanisms, termed area-to-point (ATP) GWR herein. ATP GWR, which is closely related to geostatistical approaches, estimates the disaggregate-level local trend parameters by using aggregated variables. We examine the effectiveness of ATP GWR for mitigating the MAUP through a simulation study and an empirical study. The simulation study indicates that the proposed method is robust to the MAUP when the spatial scales of aggregation are not too coarse compared with the scale of the underlying spatial variations. The empirical study demonstrates that the method provides intuitively consistent estimates.
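
    For orientation, a minimal sketch of standard GWR at a single target location, using a Gaussian distance kernel and weighted least squares; this is the baseline method only, not the paper's area-to-point extension:

      # Minimal sketch: local (geographically weighted) regression coefficients at one
      # target location via Gaussian kernel weights and weighted least squares.
      import numpy as np

      def gwr_at_point(coords, X, y, target, bandwidth):
          d = np.linalg.norm(coords - target, axis=1)
          w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
          W = np.diag(w)
          Xc = np.column_stack([np.ones(len(X)), X])       # add intercept
          beta = np.linalg.solve(Xc.T @ W @ Xc, Xc.T @ W @ y)
          return beta                                      # local intercept and slopes

      rng = np.random.default_rng(5)
      coords = rng.uniform(0, 10, size=(200, 2))
      X = rng.normal(size=(200, 1))
      y = (1 + 0.3 * coords[:, 0]) * X[:, 0] + rng.normal(scale=0.2, size=200)  # spatially varying slope
      print(gwr_at_point(coords, X, y, target=np.array([2.0, 5.0]), bandwidth=2.0))
      print(gwr_at_point(coords, X, y, target=np.array([8.0, 5.0]), bandwidth=2.0))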

  19. Weight loss history as a predictor of weight loss: results from Phase I of the weight loss maintenance trial

    OpenAIRE

    Myers, Valerie H.; McVay, Megan A.; Champagne, Catherine M.; Hollis, Jack F.; Coughlin, Janelle W.; Funk, Kristine L.; Gullion, Christina M.; Jerome, Gerald J.; Loria, Catherine M.; Samuel-Hodge, Carmen D; Stevens, Victor J; Svetkey, Laura P; Brantley, Phillip J.

    2012-01-01

    Past studies have suggested that weight loss history is associated with subsequent weight loss. However, questions remain whether method and amount of weight lost in previous attempts impacts current weight loss efforts. This study utilized data from the Weight Loss Maintenance Trial to examine the association between weight loss history and weight loss outcomes in a diverse sample of high-risk individuals. Multivariate regression analysis was conducted to determine which specific aspects of ...

  20. The Application of the Weighted Logistic Regression Model in the Prediction of Porphyry Copper Deposits: A Case Study of the Zharma-Sawur Metallogenic Belt, China-Kazakhstan Border Area [加权Logistic回归模型在斑岩铜矿预测中的应用]

    Institute of Scientific and Technical Information of China (English)

    努丽曼古·阿不都克力木; 张晓帆; 陈川; 徐仕琪; 赵同阳

    2012-01-01

    Weighted logistic regression is one of the main GIS-based methods for mineral potential mapping, and its model differs from a linear model. Because of its powerful spatial analysis capability, strong adaptability, freedom from independence assumptions, and more reliable prediction results, weighted logistic regression is widely used by geologists in mineral resources assessment. Based on mineral deposit models and metallogenic theory, its application to mineral potential mapping consists of three parts: (1) establishment and application of the weighted logistic regression model; (2) comprehensive evaluation of metallogenic favorability; and (3) delineation of prospective areas in the study area. Taking the porphyry copper deposits of the Zharma-Sawur metallogenic belt, which crosses the border region of China and Kazakhstan, as an example, this paper discusses the application of the GIS-based weighted logistic regression model to mineral potential mapping.
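
    A minimal sketch of a weighted logistic regression using scikit-learn's sample_weight argument; the evidence layers, weights, and cell data below are hypothetical and this is not the paper's GIS workflow:

      # Minimal sketch: weighted logistic regression for prospectivity-style data,
      # where each training cell carries its own weight.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(6)
      n = 1000
      # hypothetical binary evidence layers per grid cell (e.g., fault proximity, geochemical anomaly)
      evidence = rng.integers(0, 2, size=(n, 3))
      logit = -2.0 + 1.5 * evidence[:, 0] + 1.0 * evidence[:, 1]
      known_deposit = rng.binomial(1, 1 / (1 + np.exp(-logit)))
      cell_weight = rng.uniform(0.5, 2.0, size=n)          # e.g., reliability of each cell

      model = LogisticRegression().fit(evidence, known_deposit, sample_weight=cell_weight)
      favourability = model.predict_proba(evidence)[:, 1]   # posterior favourability per cell
      print(model.coef_, model.intercept_)
      print("cells above 0.5 favourability:", int((favourability > 0.5).sum()))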

  1. Chemosensory bases of feeding behavior in fish [Bases chimiosensorielles du comportement alimentaire chez les poissons]

    Directory of Open Access Journals (Sweden)

    SAGLIO Ph.

    1981-07-01

    Full Text Available Feeding behavior, which is essential to the survival of the individual and therefore of the species, occupies a position of prime importance in the hierarchy of fundamental behaviors, all of which depend on it very closely. In fish, this pre-eminence is illustrated by the extreme diversity of the sensory channels involved and of the behavioral expressions linked to them. Following a number of neurophysiological and ethological demonstrations of the importance of the chemical senses (olfaction, gustation) in the feeding behavior of fish, major programs of electrophysiological studies and physico-chemical analyses aimed at determining their exact nature (in terms of active substances) have developed over the last twenty years. From all this work, the most advanced of which is presented here, it emerges that L-series amino acids, more or less associated with other compounds of molecular weight < 1000, constitute chemical compounds that play a decisive role in the feeding behavior of many carnivorous fish species.

  2. Polarizable Atomic Multipole Solutes in a Poisson-Boltzmann Continuum

    Science.gov (United States)

    Schnieders, Michael J.; Baker, Nathan A.; Ren, Pengyu; Ponder, Jay W.

    2008-01-01

    Modeling the change in the electrostatics of organic molecules upon moving from vacuum into solvent, due to polarization, has long been an interesting problem. In vacuum, experimental values for the dipole moments and polarizabilities of small, rigid molecules are known to high accuracy; however, it has generally been difficult to determine these quantities for a polar molecule in water. A theoretical approach introduced by Onsager used vacuum properties of small molecules, including polarizability, dipole moment and size, to predict experimentally known permittivities of neat liquids via the Poisson equation. Since this important advance in understanding the condensed phase, a large number of computational methods have been developed to study solutes embedded in a continuum via numerical solutions to the Poisson-Boltzmann equation (PBE). Only recently have the classical force fields used for studying biomolecules begun to include explicit polarization in their functional forms. Here we describe the theory underlying a newly developed Polarizable Multipole Poisson-Boltzmann (PMPB) continuum electrostatics model, which builds on the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) force field. As an application of the PMPB methodology, results are presented for several small folded proteins studied by molecular dynamics in explicit water as well as embedded in the PMPB continuum. The dipole moment of each protein increased on average by a factor of 1.27 in explicit water and 1.26 in continuum solvent. The essentially identical electrostatic response in both models suggests that PMPB electrostatics offers an efficient alternative to sampling explicit solvent molecules for a variety of interesting applications, including binding energies, conformational analysis, and pKa prediction. Introduction of 150 mM salt lowered the electrostatic solvation energy between 2–13 kcal/mole, depending on the formal charge of the protein, but had only a

  3. Maslov indices, Poisson brackets, and singular differential forms

    Science.gov (United States)

    Esterlis, I.; Haggard, H. M.; Hedeman, A.; Littlejohn, R. G.

    2014-06-01

    Maslov indices are integers that appear in semiclassical wave functions and quantization conditions. They are often notoriously difficult to compute. We present methods of computing the Maslov index that rely only on typically elementary Poisson brackets and simple linear algebra. We also present a singular differential form, whose integral along a curve gives the Maslov index of that curve. The form is closed but not exact, and transforms by an exact differential under canonical transformations. We illustrate the method with the 6j-symbol, which is important in angular-momentum theory and in quantum gravity.
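
    A minimal sketch of the elementary ingredient the method relies on, a canonical Poisson bracket, computed symbolically with SymPy:

      # Minimal sketch: the canonical Poisson bracket {f, g} = sum_i (df/dq_i dg/dp_i - df/dp_i dg/dq_i).
      import sympy as sp

      q1, p1, q2, p2 = sp.symbols("q1 p1 q2 p2")

      def poisson_bracket(f, g, coords=((q1, p1), (q2, p2))):
          return sp.simplify(sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
                                 for q, p in coords))

      Lz = q1 * p2 - q2 * p1                            # angular momentum in 2 degrees of freedom
      H = (p1**2 + p2**2) / 2 + (q1**2 + q2**2) / 2     # isotropic oscillator
      print(poisson_bracket(H, Lz))    # 0: Lz is conserved
      print(poisson_bracket(q1, p1))   # 1: canonical pair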

  4. Energy flow lines and the spot of Poisson-Arago

    CERN Document Server

    Gondran, Michel

    2009-01-01

    We show how energy flow lines answer the question about diffraction phenomena presented in 1818 by the French Academy: "deduce by mathematical induction, the movements of the rays during their crossing near the bodies". This provides a complementary answer to Fresnel's wave theory of light. A numerical simulation of these energy flow lines proves that they can reach the bright spot of Poisson-Arago in the shadow center of a circular opaque disc. For a monochromatic wave in vacuum, these energy flow lines correspond to the diffracted rays of Newton's Opticks.

  5. Gap processing for adaptive maximal Poisson-disk sampling

    KAUST Repository

    Yan, Dongming

    2013-09-01

    In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed.We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM.
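
    For contrast with the adaptive, gap-driven algorithm described above, a minimal sketch of naive fixed-radius dart throwing for Poisson-disk sampling in the unit square:

      # Minimal sketch: rejection-based ("dart throwing") Poisson-disk sampling with a
      # fixed radius; not the article's adaptive gap-processing algorithm.
      import numpy as np

      def poisson_disk_darts(radius, n_attempts=10000, seed=7):
          rng = np.random.default_rng(seed)
          points = []
          for _ in range(n_attempts):
              cand = rng.uniform(0.0, 1.0, size=2)
              # accept only if the candidate keeps the minimum-distance property
              if all(np.linalg.norm(cand - p) >= radius for p in points):
                  points.append(cand)
          return np.array(points)

      pts = poisson_disk_darts(radius=0.05)
      print("accepted samples:", len(pts))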

  6. Monitoring Poisson time series using multi-process models

    DEFF Research Database (Denmark)

    Engebjerg, Malene Dahl Skov; Lundbye-Christensen, Søren; Kjær, Birgitte B.

    Surveillance of infectious diseases based on routinely collected public health data is important for at least three reasons: the early detection of an epidemic may facilitate prompt interventions, and the seasonal variations and long-term trend may be of general epidemiological interest. Furthermore, aspects of health resource management may also be addressed. In this paper we center on the detection of outbreaks of infectious diseases. This is achieved by a multi-process Poisson state space model taking autocorrelation and overdispersion into account, which has been applied to a data set concerning …

  7. Homogeneous Poisson Structures on Loop Spaces of Symmetric Spaces

    Directory of Open Access Journals (Sweden)

    Doug Pickrell

    2008-10-01

    Full Text Available This paper is a sequel to [Caine A., Pickrell D., Int. Math. Res. Not., to appear, arXiv:0710.4484], where we studied the Hamiltonian systems which arise from the Evens-Lu construction of homogeneous Poisson structures on both compact and noncompact type symmetric spaces. In this paper we consider loop space analogues. Many of the results extend in a relatively routine way to the loop space setting, but new issues emerge. The main point of this paper is to spell out the meaning of the results, especially in the SU(2) case. Applications include integral formulas and factorizations for Toeplitz determinants.

  8. Theoretical analysis of radiographic images by nonstationary Poisson processes

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, K.; Uchida, S. (Gifu Univ. (Japan)); Yamada, I.

    1980-12-01

    This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples of the one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by the additive process.

  9. Finite Horizon Decision Timing with Partially Observable Poisson Processes

    CERN Document Server

    Ludkovski, Michael

    2011-01-01

    We study decision timing problems on finite horizon with Poissonian information arrivals. In our model, a decision maker wishes to optimally time her action in order to maximize her expected reward. The reward depends on an unobservable Markovian environment, and information about the environment is collected through a (compound) Poisson observation process. Examples of such systems arise in investment timing, reliability theory, Bayesian regime detection and technology adoption models. We solve the problem by studying an optimal stopping problem for a piecewise-deterministic process which gives the posterior likelihoods of the unobservable environment. Our method lends itself to simple numerical implementation and we present several illustrative numerical examples.

  10. Concave Majorants of Random Walks and Related Poisson Processes

    CERN Document Server

    Abramson, Josh

    2010-01-01

    We offer a unified approach to the theory of concave majorants of random walks by providing a path transformation for a walk of finite length that leaves the law of the walk unchanged whilst providing complete information about the concave majorant. This leads to a description of a walk of random geometric length as a Poisson point process of excursions away from its concave majorant, which is then used to find a complete description of the concave majorant for a walk of infinite length. In the case where subsets of increments may have the same arithmetic mean, we investigate three nested compositions that naturally arise from our construction of the concave majorant.
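
    A minimal sketch of the basic object in question, the concave majorant of a finite walk, computed as the upper convex hull of the points (k, S_k):

      # Minimal sketch: vertices of the concave majorant of a finite random walk,
      # via an upper-hull monotone-chain scan over the points (k, S_k).
      import numpy as np

      def concave_majorant(S):
          """Return the indices of the vertices of the concave majorant of S_0..S_n."""
          hull = []
          for x, y in enumerate(S):
              while len(hull) >= 2:
                  (x1, y1), (x2, y2) = hull[-2], hull[-1]
                  # pop while the last vertex lies on or below the chord (keeps the chain concave)
                  if (y2 - y1) * (x - x2) <= (y - y2) * (x2 - x1):
                      hull.pop()
                  else:
                      break
              hull.append((x, y))
          return [x for x, _ in hull]

      rng = np.random.default_rng(11)
      S = np.concatenate([[0.0], np.cumsum(rng.normal(size=50))])
      print("majorant vertex indices:", concave_majorant(S))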

  11. Theoretical Analysis of Radiographic Images by Nonstationary Poisson Processes

    Science.gov (United States)

    Tanaka, Kazuo; Yamada, Isao; Uchida, Suguru

    1980-12-01

    This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples of the one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by the additive process.
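
    A minimal sketch of the underlying noise model, a nonstationary Poisson process whose pixel-wise mean depends on the object; the exposure values are invented and this is not the paper's analysis:

      # Minimal sketch: simulate a 1D "radiograph" as pixel counts drawn from Poisson
      # distributions whose mean varies with an absorbing object.
      import numpy as np

      rng = np.random.default_rng(8)
      x = np.linspace(-1.0, 1.0, 512)
      exposure = 200.0 * np.ones_like(x)              # mean quanta per pixel, no object
      exposure[np.abs(x) < 0.3] *= 0.4                # attenuation by an absorbing object

      counts = rng.poisson(exposure)                  # nonstationary Poisson realization
      snr = exposure / np.sqrt(exposure)              # local signal-to-noise ratio
      print("mean counts in/out of object:",
            counts[np.abs(x) < 0.3].mean(), counts[np.abs(x) >= 0.3].mean())
      print("local SNR in/out of object:",
            snr[np.abs(x) < 0.3].mean(), snr[np.abs(x) >= 0.3].mean())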

  12. Comparing Poisson Sigma Model with A-model

    CERN Document Server

    Bonechi, Francesco; Iraso, Riccardo

    2016-01-01

    We discuss the A-model as a gauge fixing of the Poisson Sigma Model with target a symplectic structure. We complete the discussion in [arXiv:0706.3164], where a gauge fixing defined by a compatible complex structure was introduced, by showing how to recover the A-model hierarchy of observables in terms of the AKSZ observables. Moreover, we discuss the off-shell supersymmetry of the A-model as a residual BV symmetry of the gauge-fixed PSM action.

  13. On space enrichment estimator for nonlinear Poisson-Boltzmann

    Science.gov (United States)

    Randrianarivony, Maharavo

    2013-10-01

    We consider the mathematical aspect of the nonlinear Poisson-Boltzmann equation which physically governs the ionic interaction between solute and solvent media. The presented a-posteriori estimates can be computed locally in a very efficient manner. The a-posteriori error is based upon hierarchical space enrichment which ensures its efficiency and reliability. A brief survey of the solving of the nonlinear system resulting from the FEM discretization is reported. To corroborate the analysis, we report on a few numerical results for illustrations. We numerically examine some values of the constants encountered in the theoretical study.

  14. Numerical Poisson-Boltzmann Model for Continuum Membrane Systems.

    Science.gov (United States)

    Botello-Smith, Wesley M; Liu, Xingping; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2013-01-01

    Membrane protein systems are important computational research topics due to their roles in rational drug design. In this study, we developed a continuum membrane model utilizing a level set formulation under the numerical Poisson-Boltzmann framework within the AMBER molecular mechanics suite for applications such as protein-ligand binding affinity and docking pose predictions. Two numerical solvers were adapted for periodic systems to alleviate possible edge effects. Validation on systems ranging from organic molecules to membrane proteins up to 200 residues, demonstrated good numerical properties. This lays foundations for sophisticated models with variable dielectric treatments and second-order accurate modeling of solvation interactions.
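
    A minimal sketch, far simpler than the paper's 3D membrane model, of a finite-difference solution to the 1D linearized (Debye-Hueckel) Poisson-Boltzmann equation:

      # Minimal sketch: solve phi'' = kappa^2 * phi on [0, L] with phi(0) = phi0, phi(L) = 0,
      # by a second-order finite-difference discretization and a dense linear solve.
      import numpy as np

      def linear_pb_1d(phi0, kappa, L=5.0, n=200):
          h = L / (n + 1)
          main = -(2.0 + (kappa * h) ** 2) * np.ones(n)
          off = np.ones(n - 1)
          A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
          b = np.zeros(n)
          b[0] = -phi0                      # left boundary condition enters the first row
          phi = np.linalg.solve(A, b)
          return np.concatenate([[phi0], phi, [0.0]])

      phi = linear_pb_1d(phi0=1.0, kappa=1.0)
      print(phi[:5])                        # decays roughly like exp(-kappa * x)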

  15. Fission meter and neutron detection using poisson distribution comparison

    Energy Technology Data Exchange (ETDEWEB)

    Rowland, Mark S; Snyderman, Neal J

    2014-11-18

    A neutron detector system and method for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution is performed to distinguish fissile material from non-fissile material.
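
    A minimal sketch of the underlying idea, comparing the gated neutron count distribution with the Poisson expectation through its variance-to-mean ratio; the simulated sources are invented and this is not the patented system:

      # Minimal sketch: gate arrival times into fixed windows and compare the count
      # distribution's variance-to-mean ratio with the Poisson value of 1; correlated
      # (fission-chain-like) sources show excess variance.
      import numpy as np

      rng = np.random.default_rng(9)
      t_max, rate, gate = 1000.0, 50.0, 0.1

      def variance_to_mean(times):
          edges = np.arange(0.0, t_max + gate, gate)
          counts, _ = np.histogram(times, bins=edges)
          return counts.var() / counts.mean()

      # uncorrelated (Poisson-like) source: uniform arrival times
      poisson_times = np.sort(rng.uniform(0.0, t_max, size=int(rate * t_max)))

      # correlated source: each trigger spawns a small burst of nearly simultaneous counts
      triggers = rng.uniform(0.0, t_max, size=int(rate * t_max / 4))
      bursty_times = np.sort((triggers[:, None] + rng.exponential(0.005, size=(len(triggers), 4))).ravel())

      print("Poisson-like source, variance/mean:", round(variance_to_mean(poisson_times), 3))
      print("correlated source,   variance/mean:", round(variance_to_mean(bursty_times), 3))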

  16. Weight Management

    Science.gov (United States)

    Obesity is a chronic condition that affects more …

  17. Climate variations and salmonellosis transmission in Adelaide, South Australia: a comparison between regression models

    Science.gov (United States)

    Zhang, Ying; Bi, Peng; Hiller, Janet

    2008-01-01

    This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data in Adelaide, South Australia. By using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis. Rainfall was also inversely related to the number of cases. The comparison of the goodness-of-fit and forecasting ability suggest that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could, thus, be adopted to quantify the relationship between climate variations and salmonellosis transmission.
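
    A minimal sketch of the kind of comparison described, fitting a Poisson GLM with lagged climate covariates and a SARIMA model to the same count series with statsmodels; the weekly data below are simulated, not the study's surveillance series:

      # Minimal sketch: Poisson regression vs. SARIMA for a simulated weekly count series.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(10)
      weeks = 260
      t = np.arange(weeks)
      temperature = 20 + 8 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 1, weeks)
      rainfall = rng.gamma(2.0, 5.0, weeks)
      lam = np.exp(1.0 + 0.05 * np.roll(temperature, 2) - 0.01 * rainfall)   # 2-week temperature lag
      cases = rng.poisson(lam)

      df = pd.DataFrame({"cases": cases,
                         "temp_lag2": np.roll(temperature, 2),
                         "rain": rainfall}).iloc[2:]

      X = sm.add_constant(df[["temp_lag2", "rain"]])
      poisson_fit = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit()

      sarima_fit = SARIMAX(df["cases"], exog=df[["temp_lag2", "rain"]],
                           order=(1, 0, 0), seasonal_order=(1, 0, 0, 52)).fit(disp=False)

      print("Poisson GLM AIC:", round(poisson_fit.aic, 1))
      print("SARIMA AIC:     ", round(sarima_fit.aic, 1))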

  18. Caudal Regression Syndrome

    Directory of Open Access Journals (Sweden)

    Karim Hardani*

    2012-05-01

    Full Text Available A 10-month-old baby presented with developmental delay. He had flaccid paralysis on physical examination. An MRI of the spine revealed malformation of the ninth and tenth thoracic vertebral bodies with complete agenesis of the rest of the spine below that level. The thoracic spinal cord ended at the level of the fifth thoracic vertebra, with agenesis of the posterior arches of the eighth, ninth and tenth thoracic vertebral bodies. The roots of the cauda equina appeared tethered downward and backward and ended in subdermal fibrous fatty tissue at the level of the ninth and tenth thoracic vertebral bodies (closed meningocele). These findings are consistent with caudal regression syndrome.

  19. Deformation Quantization of Poisson Structures Associated to Lie Algebroids

    Directory of Open Access Journals (Sweden)

    Nikolai Neumaier

    2009-09-01

    Full Text Available In the present paper we explicitly construct deformation quantizations of certain Poisson structures on E*, where E → M is a Lie algebroid. Although the considered Poisson structures in general are far from being regular or even symplectic, our construction gets along without Kontsevich's formality theorem but is based on a generalized Fedosov construction. As the whole construction merely uses geometric structures of E, we also succeed in determining the dependence of the resulting star products on these data by finding appropriate equivalence transformations between them. Furthermore, the concreteness of the construction allows one to obtain explicit formulas even for a wide class of derivations and self-equivalences of the products. Moreover, we can show that some of our products are in direct relation to the universal enveloping algebra associated to the Lie algebroid. Finally, we show that for a certain class of star products on E* the integration with respect to a density with vanishing modular vector field defines a trace functional.

  20. Polyelectrolyte Microcapsules: Ion Distributions from a Poisson-Boltzmann Model

    Science.gov (United States)

    Tang, Qiyun; Denton, Alan R.; Rozairo, Damith; Croll, Andrew B.

    2014-03-01

    Recent experiments have shown that polystyrene-polyacrylic-acid-polystyrene (PS-PAA-PS) triblock copolymers in a solvent mixture of water and toluene can self-assemble into spherical microcapsules. Suspended in water, the microcapsules have a toluene core surrounded by an elastomer triblock shell. The longer, hydrophilic PAA blocks remain near the outer surface of the shell, becoming charged through dissociation of OH functional groups in water, while the shorter, hydrophobic PS blocks form a networked (glass or gel) structure. Within a mean-field Poisson-Boltzmann theory, we model these polyelectrolyte microcapsules as spherical charged shells, assuming different dielectric constants inside and outside the capsule. By numerically solving the nonlinear Poisson-Boltzmann equation, we calculate the radial distribution of anions and cations and the osmotic pressure within the shell as a function of salt concentration. Our predictions, which can be tested by comparison with experiments, may guide the design of microcapsules for practical applications, such as drug delivery. This work was supported by the National Science Foundation under Grant No. DMR-1106331.