WorldWideScience

Sample records for poisson regression results

  1. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study examines indicators of predictive power for the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables, E(Y|X), for the Poisson regression model, where the dependent variable follows a Poisson distribution. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables with multicollinearity among them. The results show that the proposed regression correlation coefficient outperforms the traditional regression correlation coefficient in terms of bias and root mean square error (RMSE).
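
    The abstract does not give formulas or code, so the following is only a minimal sketch of the traditional regression correlation coefficient it discusses, computed as the Pearson correlation between Y and the fitted E(Y|X) of a Poisson GLM; the data, the collinear covariates, and all coefficient values are invented, and the paper's modified estimator is not reproduced.

```python
# Traditional regression correlation coefficient for a Poisson GLM:
# corr(Y, E[Y|X]) computed from fitted values on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=n)   # deliberately collinear with x1
mu = np.exp(0.3 + 0.4 * x1 - 0.2 * x2)
y = rng.poisson(mu)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# Correlation between the observed counts and the model's expected values
r = np.corrcoef(y, fit.fittedvalues)[0, 1]
print(f"regression correlation coefficient: {r:.3f}")
```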

  2. Bayesian regression of piecewise homogeneous Poisson processes

    Directory of Open Access Journals (Sweden)

    Diego Sevilla

    2015-12-01

    In this paper, a Bayesian method for piecewise regression is adapted to handle counting-process data distributed as Poisson. A numerical code in Mathematica is developed and tested by analyzing simulated data. The resulting method is valuable for detecting break points in the count rate of time series for Poisson processes. Received: 2 November 2015; Accepted: 27 November 2015; Edited by: R. Dickman; Reviewed by: M. Hutter, Australian National University, Canberra, Australia. DOI: http://dx.doi.org/10.4279/PIP.070018. Cite as: D J R Sevilla, Papers in Physics 7, 070018 (2015).
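
    The paper's Bayesian Mathematica code is not reproduced here; as a rough frequentist stand-in for the idea, the sketch below locates a single break point in a simulated Poisson count series by profiling the two-segment log-likelihood over candidate break positions. The series length, rates, and break location are all invented.

```python
# Single change-point search in a Poisson count series by scanning the
# two-segment log-likelihood over all admissible break positions.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
counts = np.concatenate([rng.poisson(4.0, 60), rng.poisson(9.0, 40)])

def loglik(segment):
    lam = segment.mean()                     # MLE of the segment rate
    return poisson.logpmf(segment, lam).sum()

breaks = list(range(5, len(counts) - 5))     # keep a few points in each segment
ll = [loglik(counts[:k]) + loglik(counts[k:]) for k in breaks]
k_hat = breaks[int(np.argmax(ll))]
print("estimated break point:", k_hat)
```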

  3. A Seemingly Unrelated Poisson Regression Model

    OpenAIRE

    King, Gary

    1989-01-01

    This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

  4. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early control of heart disease can be achieved through efficient prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease for each component given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611
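
    To illustrate the mixture idea behind this record, the sketch below fits a simplified two-component Poisson mixture by EM with intercept-only components (a mixture of Poisson means rather than the paper's full mixture regressions with covariates or concomitant variables). The synthetic "low-risk"/"high-risk" counts and starting values are invented.

```python
# Two-component Poisson mixture fitted by EM (intercept-only components).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
y = np.concatenate([rng.poisson(1.5, 300), rng.poisson(6.0, 200)])  # low/high risk

pi, lam = 0.5, np.array([1.0, 5.0])          # initial mixing weight and means
for _ in range(200):
    # E-step: responsibility of component 1 for each observation
    p0 = (1 - pi) * poisson.pmf(y, lam[0])
    p1 = pi * poisson.pmf(y, lam[1])
    r1 = p1 / (p0 + p1)
    # M-step: update the mixing proportion and the component means
    pi = r1.mean()
    lam = np.array([np.average(y, weights=1 - r1), np.average(y, weights=r1)])

print(f"mixing weight: {pi:.2f}, component means: {lam.round(2)}")
```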

  5. Poisson Regression Analysis of Illness and Injury Surveillance Data

    Energy Technology Data Exchange (ETDEWEB)

    Frome E.L., Watkins J.P., Ellis E.D.

    2012-12-12

    The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson variation.
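
    A minimal sketch of the workflow this record describes, assuming stratified event counts with person-time at risk: a log-linear Poisson main-effects model with a log person-time offset, a Pearson chi-square check for over-dispersion, and a quasi-likelihood (moment) rescaling of the standard errors. The strata, rates, and person-time values are invented, not DOE data.

```python
# Poisson rate regression with a person-time offset, dispersion check,
# and quasi-Poisson rescaling of standard errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
tab = pd.DataFrame({
    "age":    rng.choice(["<40", "40-55", ">55"], 600),
    "gender": rng.choice(["F", "M"], 600),
    "group":  rng.choice(["office", "craft", "service"], 600),
    "pt":     rng.uniform(50, 500, 600),               # person-years at risk
})
rate = 0.02 * np.where(tab["group"] == "craft", 2.0, 1.0)
tab["events"] = rng.poisson(rate * tab["pt"])

model = smf.glm("events ~ age + gender + group", data=tab,
                family=sm.families.Poisson(), offset=np.log(tab["pt"]))
fit = model.fit()
dispersion = fit.pearson_chi2 / fit.df_resid           # ~1 if Poisson is adequate
quasi = model.fit(scale="X2")                           # SEs inflated by sqrt(dispersion)
print(f"dispersion = {dispersion:.2f}")
print(np.exp(quasi.params))                             # rate ratios
```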

  6. Background stratified Poisson regression analysis of cohort data.

    Science.gov (United States)

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.

  7. Background stratified Poisson regression analysis of cohort data

    International Nuclear Information System (INIS)

    Richardson, David B.; Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)

  8. Background stratified Poisson regression analysis of cohort data

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, David B. [University of North Carolina at Chapel Hill, Department of Epidemiology, School of Public Health, Chapel Hill, NC (United States); Langholz, Bryan [Keck School of Medicine, University of Southern California, Division of Biostatistics, Department of Preventive Medicine, Los Angeles, CA (United States)

    2012-03-15

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)

  9. Modeling the number of car theft using Poisson regression

    Science.gov (United States)

    Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura

    2016-10-01

    Regression analysis is the most popular statistical method used to express the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model. This paper focuses on the number of car thefts that occurred in districts in Peninsular Malaysia. Two groups of factors have been considered, namely district descriptive factors and socio-demographic factors. The results of the study showed that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, number of residents aged between 25 and 64, number of employed persons and number of unemployed persons are the factors that most influence car theft cases. This information is very useful for law enforcement departments, insurance companies and car owners in order to reduce and limit car theft cases in Peninsular Malaysia.

  10. Collision prediction models using multivariate Poisson-lognormal regression.

    Science.gov (United States)

    El-Basyouny, Karim; Sayed, Tarek

    2009-07-01

    This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit compared with the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models.

  11. [Application of detecting and taking overdispersion into account in Poisson regression model].

    Science.gov (United States)

    Bouche, G; Lepage, B; Migeot, V; Ingrand, P

    2009-08-01

    Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of the variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact with an illustration based on the data of a study investigating the relationship between use of the Internet to seek health information and the number of primary care consultations. Three methods, overdispersed Poisson, a robust estimator, and negative binomial regression, were performed to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested overdispersion in the Poisson regression model using the ratio of the sum of Pearson residuals over the number of degrees of freedom (chi(2)/df). We then fitted the three models and compared parameter estimation to the estimations given by the Poisson regression model. Variance of the number of primary care consultations (Var[Y]=21.03) was greater than the mean (E[Y]=5.93) and the chi(2)/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. Interpretation of estimates from two variables (using the Internet to seek health information and single parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (use of a robust estimator) and 0.45 and 0.13 (negative binomial) respectively. Different methods exist to solve the problem of underestimating variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems to be particularly accurate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages.
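
    The sketch below illustrates the comparison this record describes on simulated over-dispersed counts: the naive Poisson fit versus quasi-Poisson scaling, a robust (sandwich) covariance, and negative binomial regression. The covariate, effect sizes, and dispersion are invented; it is not the consultation data analysed in the paper.

```python
# Compare standard errors under Poisson, quasi-Poisson, robust (sandwich),
# and negative binomial fits on over-dispersed simulated counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
x = rng.binomial(1, 0.4, n)                       # e.g. "uses the Internet"
mu = np.exp(1.5 + 0.2 * x)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))    # over-dispersed, mean mu
X = sm.add_constant(x)

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
quasi_fit   = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")
robust_fit  = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
nb_fit      = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()

for name, f in [("Poisson", poisson_fit), ("quasi-Poisson", quasi_fit),
                ("robust", robust_fit), ("neg. binomial", nb_fit)]:
    print(f"{name:14s} SE(x) = {f.bse[1]:.4f}")
```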

  12. [Application of negative binomial regression and modified Poisson regression in the research of risk factors for injury frequency].

    Science.gov (United States)

    Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan

    2011-11-01

    To explore the application of negative binomial regression and modified Poisson regression in analyzing the factors influencing injury frequency and the risk factors leading to increased injury frequency. A total of 2917 primary and secondary school students were selected from Hefei by a cluster random sampling method and surveyed by questionnaire. The count data on injury events were used to fit modified Poisson regression and negative binomial regression models. The risk factors associated with increased unintentional injury frequency among juvenile students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model showed over-dispersion, and the negative binomial regression and modified Poisson regression models fitted the data better. Both showed that male gender, younger age, a father working outside the hometown, a guardian's educational level above junior high school, and smoking might be associated with higher injury frequencies. For clustered frequency data on injury events, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, based on our data, the modified Poisson regression fitted better, and this model could give a more accurate interpretation of the relevant factors affecting the frequency of injury.

  13. Poisson regression for modeling count and frequency outcomes in trauma research.

    Science.gov (United States)

    Gagnon, David R; Doron-LaMarca, Susan; Bell, Margret; O'Farrell, Timothy J; Taft, Casey T

    2008-10-01

    The authors describe how the Poisson regression method for analyzing count or frequency outcome variables can be applied in trauma studies. The outcome of interest in trauma research may represent a count of the number of incidents of behavior occurring in a given time interval, such as acts of physical aggression or substance abuse. Traditional linear regression approaches assume a normally distributed outcome variable with equal variances over the range of predictor variables, and may not be optimal for modeling count outcomes. An application of Poisson regression is presented using data from a study of intimate partner aggression among male patients in an alcohol treatment program and their female partners. Results of Poisson regression and linear regression models are compared.

  14. Comparison of Efficiency in Generalized Poisson Regression Model and the Standard Poisson Regression Model in analyzing Fertility Behavior among Women, Kashan, 2012

    Directory of Open Access Journals (Sweden)

    Hossein Fallahzadeh

    2017-05-01

    Introduction: Different statistical methods can be used to analyze fertility data. When the response variable is a count, the Poisson model is applied; if the conditions for the Poisson model do not hold, its generalized form is applied instead. The goal of this study was to compare the efficiency of the generalized Poisson regression model with the standard Poisson regression model in estimating the coefficients of factors affecting the current number of children. Methods: This is a cross-sectional study carried out on a population of married women aged 15-49 years in Kashan, Iran. The cluster sampling method was used for data collection; clusters consisted of urban blocks determined by the municipality. A total of 10 clusters, each containing 30 households, was selected according to the health center's framework. The necessary data were then collected through a self-made questionnaire and direct interviews with the women under study. Data analysis was performed with the standard and generalized Poisson regression models using the R software. Results: The average number of children per woman was 1.45 with a variance of 1.073. A significant relationship was observed between the husband's age, the number of unwanted pregnancies, and the average duration of breastfeeding and the present number of children in both the standard and generalized Poisson regression models (p < 0.05). The mean age of the women participating in this study was 33.1 ± 7.57 years (from 25.53 to 40.67), the mean age at marriage was 20.09 ± 3.82 (from 16.27 to 23.91), and the mean age of their husbands was 37.9 ± 8.4 years (from 29.5 to 46.3). In the current study, the majority of women were in the age range of 30-35 years with a median of 32 years, whereas most of the men were in the age range of 35-40 years with a median of 37 years. While 236 of the women did not have unwanted pregnancies, most participants of the present study had one unwanted pregnancy.

  15. Poisson Regresyon Uygulaması: Türkiye'deki Grevlerin Belirleyicileri 1964-1998 = An Application of Poisson Regression to the Strikes in Turkey: 1964-1998

    Directory of Open Access Journals (Sweden)

    Hasan ŞAHİN

    2002-01-01

    This study applies a Poisson regression model to annual Turkish strike data for the period 1964-1998. The Poisson regression model is preferable when the dependent variable is count data. Economic and social variables are used as determinants of the number of strikes. Empirical results show that the unemployment rate and a dummy variable that takes the value 0 before 1980 and 1 otherwise significantly affect the number of strikes.

  16. Development of planning level transportation safety tools using Geographically Weighted Poisson Regression.

    Science.gov (United States)

    Hadayeghi, Alireza; Shalaby, Amer S; Persaud, Bhagwant N

    2010-03-01

    A common technique used for the calibration of collision prediction models is the Generalized Linear Modeling (GLM) procedure with the assumption of Negative Binomial or Poisson error distribution. In this technique, fixed coefficients that represent the average relationship between the dependent variable and each explanatory variable are estimated. However, the stationary relationship assumed may hide some important spatial factors of the number of collisions at a particular traffic analysis zone. Consequently, the accuracy of such models for explaining the relationship between the dependent variable and the explanatory variables may be suspected since collision frequency is likely influenced by many spatially defined factors such as land use, demographic characteristics, and traffic volume patterns. The primary objective of this study is to investigate the spatial variations in the relationship between the number of zonal collisions and potential transportation planning predictors, using the Geographically Weighted Poisson Regression modeling technique. The secondary objective is to build on knowledge comparing the accuracy of Geographically Weighted Poisson Regression models to that of Generalized Linear Models. The results show that the Geographically Weighted Poisson Regression models are useful for capturing spatially dependent relationships and generally perform better than the conventional Generalized Linear Models. Copyright 2009 Elsevier Ltd. All rights reserved.
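
    As a rough illustration of the Geographically Weighted Poisson Regression idea in this record, the sketch below refits a Poisson GLM at each zone with Gaussian kernel weights based on distance to that zone, yielding location-specific coefficients. Bandwidth selection (e.g. by AICc) is omitted, and the coordinates, exposure variable, and 10 km bandwidth are invented.

```python
# Geographically weighted Poisson regression: one kernel-weighted local
# Poisson fit per zone, giving spatially varying coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200
coords = rng.uniform(0, 50, size=(n, 2))                  # zone centroids (km)
exposure = rng.uniform(1, 10, n)                          # planning-level predictor
y = rng.poisson(np.exp(0.2 + 0.15 * exposure))
X = sm.add_constant(exposure)

bandwidth = 10.0
local_coefs = np.empty((n, 2))
for i in range(n):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)               # Gaussian kernel weights
    fit = sm.GLM(y, X, family=sm.families.Poisson(), var_weights=w).fit()
    local_coefs[i] = fit.params

print(f"exposure coefficient ranges from {local_coefs[:, 1].min():.3f} "
      f"to {local_coefs[:, 1].max():.3f} across zones")
```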

  17. A poisson regression approach for modelling spatial autocorrelation between geographically referenced observations

    Directory of Open Access Journals (Sweden)

    Jolley Damien

    2011-10-01

    Background: Analytic methods commonly used in epidemiology do not account for spatial correlation between observations. In regression analyses, omission of that autocorrelation can bias parameter estimates and yield incorrect standard error estimates. Methods: We used age standardised incidence ratios (SIRs) of esophageal cancer (EC) from the Babol cancer registry from 2001 to 2005, and extracted socioeconomic indices from the Statistical Centre of Iran. The following models for SIR were used: (1) Poisson regression with agglomeration-specific nonspatial random effects; (2) Poisson regression with agglomeration-specific spatial random effects. Distance-based and neighbourhood-based autocorrelation structures were used for defining the spatial random effects, and a pseudolikelihood approach was applied to estimate model parameters. The Bayesian information criterion (BIC), Akaike's information criterion (AIC) and adjusted pseudo R2 were used for model comparison. Results: A Gaussian semivariogram with an effective range of 225 km best fit spatial autocorrelation in agglomeration-level EC incidence. The Moran's I index was greater than its expected value, indicating systematic geographical clustering of EC. The distance-based and neighbourhood-based Poisson regression estimates were generally similar. When residual spatial dependence was modelled, point and interval estimates of covariate effects were different to those obtained from the nonspatial Poisson model. Conclusions: The spatial pattern evident in the EC SIR and the observation that point estimates and standard errors differed depending on the modelling approach indicate the importance of accounting for residual spatial correlation in analyses of EC incidence in the Caspian region of Iran. Our results also illustrate that spatial smoothing must be applied with care.

  18. A Poisson regression approach for modelling spatial autocorrelation between geographically referenced observations.

    Science.gov (United States)

    Mohebbi, Mohammadreza; Wolfe, Rory; Jolley, Damien

    2011-10-03

    Analytic methods commonly used in epidemiology do not account for spatial correlation between observations. In regression analyses, omission of that autocorrelation can bias parameter estimates and yield incorrect standard error estimates. We used age standardised incidence ratios (SIRs) of esophageal cancer (EC) from the Babol cancer registry from 2001 to 2005, and extracted socioeconomic indices from the Statistical Centre of Iran. The following models for SIR were used: (1) Poisson regression with agglomeration-specific nonspatial random effects; (2) Poisson regression with agglomeration-specific spatial random effects. Distance-based and neighbourhood-based autocorrelation structures were used for defining the spatial random effects and a pseudolikelihood approach was applied to estimate model parameters. The Bayesian information criterion (BIC), Akaike's information criterion (AIC) and adjusted pseudo R2, were used for model comparison. A Gaussian semivariogram with an effective range of 225 km best fit spatial autocorrelation in agglomeration-level EC incidence. The Moran's I index was greater than its expected value indicating systematic geographical clustering of EC. The distance-based and neighbourhood-based Poisson regression estimates were generally similar. When residual spatial dependence was modelled, point and interval estimates of covariate effects were different to those obtained from the nonspatial Poisson model. The spatial pattern evident in the EC SIR and the observation that point estimates and standard errors differed depending on the modelling approach indicate the importance of accounting for residual spatial correlation in analyses of EC incidence in the Caspian region of Iran. Our results also illustrate that spatial smoothing must be applied with care.
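
    This record and the previous one report a Moran's I index above its expectation as evidence of spatial clustering; the sketch below shows that statistic computed for area-level SIRs with a simple inverse-distance weight matrix. The coordinates, SIR values, and weighting scheme are invented, and the full spatial random-effects models of the paper are not reproduced.

```python
# Moran's I for area-level SIRs with inverse-distance spatial weights;
# values above E[I] = -1/(n-1) suggest geographical clustering.
import numpy as np

rng = np.random.default_rng(6)
n = 80
coords = rng.uniform(0, 100, size=(n, 2))
sir = rng.gamma(shape=4, scale=0.25, size=n)              # stand-in for SIRs

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
w = np.where(d > 0, 1.0 / d, 0.0)                         # inverse-distance weights

z = sir - sir.mean()
moran_i = (n / w.sum()) * (z @ w @ z) / (z @ z)
print(f"Moran's I = {moran_i:.3f}, E[I] = {-1 / (n - 1):.3f}")
```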

  19. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    Poisson distribution is a discrete distribution with count data as the random variables and it has one parameter defines both mean and variance. Poisson regression assumes mean and variance should be same (equidispersion). Nonetheless, some case of the count data unsatisfied this assumption because variance exceeds mean (over-dispersion). The ignorance of over-dispersion causes underestimates in standard error. Furthermore, it causes incorrect decision in the statistical test. Previously, paired count data has a correlation and it has bivariate Poisson distribution. If there is over-dispersion, modeling paired count data is not sufficient with simple bivariate Poisson regression. Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is mix Poisson regression for modeling paired count data within over-dispersion. BPIGR model produces a global model for all locations. In another hand, each location has different geographic conditions, social, cultural and economic so that Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to solve over-dispersion and to generate local models. Parameter estimation of GWBPIGR model obtained by Maximum Likelihood Estimation (MLE) method. Meanwhile, hypothesis testing of GWBPIGR model acquired by Maximum Likelihood Ratio Test (MLRT) method.

  20. Analysing count data of Butterflies communities in Jasin, Melaka: A Poisson regression analysis

    Science.gov (United States)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Nor, Maria Elena; Mohamed, Maryati; Ismail, Norradihah

    2017-09-01

    Count outcomes are typically highly skewed to the right and often characterized by a large number of zeros. The butterfly community data were collected in Jasin, Melaka and consist of 131 subject visits. In this paper, Poisson regression analysis is applied to these count data, as it is assumed to be better suited to the counting process. This paper analyses count data, including zero observations, from an ecological survey of butterfly communities in Jasin, Melaka using Poisson regression analysis. Software for Poisson regression is readily available and is becoming more widely used in many fields of research; the data were analysed using SAS software. The purpose of the analysis was to provide a framework for identifying the concerns. In addition, by using Poisson regression analysis, the study determines the fit of the data and assesses the reliability of using the count data. The findings indicate that the highest and lowest numbers of subjects come from the third family (Nymphalidae) and the fifth family (Hesperiidae), respectively, and that the Poisson distribution seems to fit the zero values.

  1. Detecting overdispersion in count data: A zero-inflated Poisson regression analysis

    Science.gov (United States)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Nor, Maria Elena; Mohamed, Maryati; Ismail, Norradihah

    2017-09-01

    This study focuses on analysing count data of butterfly communities in Jasin, Melaka. For a count dependent variable, the Poisson regression model has been known as a benchmark model for regression analysis. Continuing from the previous literature that used Poisson regression analysis, this study applies zero-inflated Poisson (ZIP) regression analysis to gain greater precision in analysing the count data of butterfly communities in Jasin, Melaka. When excess zeros are present, Poisson regression should be abandoned in favour of count data models that take the extra zeros into account explicitly; by far one of the most popular is the ZIP regression model. The data on butterfly communities, referred to in this study as the number of subjects, were taken in Jasin, Melaka and consist of 131 subject visits. Since the researchers are considering the number of subjects, the data set covers five families of butterflies, which represent the five variables involved in the analysis, namely the types of subjects. The ZIP analysis used the SAS overdispersion procedure to analyse the zero values, and the main purpose of continuing the previous study is to compare which model performs better when zero values exist among the observed count data. The analysis used AIC, BIC and the Vuong test at the 5% significance level in order to achieve the objectives. The findings indicate that over-dispersion is present when analysing the zero values, and that the ZIP regression model is better than the Poisson regression model when zero values exist.
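
    A minimal sketch of the comparison this record describes: a zero-inflated Poisson model versus an ordinary Poisson model on zero-heavy simulated counts, compared by AIC/BIC. The data, covariate, and zero-inflation probability are invented, and the Vuong test (not built into statsmodels) is omitted.

```python
# Zero-inflated Poisson vs ordinary Poisson on counts with excess zeros.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(7)
n = 400
x = rng.normal(size=n)
structural_zero = rng.binomial(1, 0.35, n)                # excess-zero process
y = np.where(structural_zero == 1, 0, rng.poisson(np.exp(0.8 + 0.3 * x)))
X = sm.add_constant(x)

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=False)

print("Poisson AIC/BIC:", round(poisson_fit.aic, 1), round(poisson_fit.bic, 1))
print("ZIP     AIC/BIC:", round(zip_fit.aic, 1), round(zip_fit.bic, 1))
```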

  2. A LATENT CLASS POISSON REGRESSION-MODEL FOR HETEROGENEOUS COUNT DATA

    NARCIS (Netherlands)

    WEDEL, M; DESARBO, WS; BULT; RAMASWAMY

    1993-01-01

    In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing

  3. Poisson regression approach for modeling fatal injury rates amongst Malaysian workers

    International Nuclear Information System (INIS)

    Kamarulzaman Ibrahim; Heng Khai Theng

    2005-01-01

    Many safety studies are based on the analysis of injury surveillance data. The injury surveillance data gathered for the analysis include information on the number of employees at risk of injury in each of several strata, where the strata are defined in terms of a series of important predictor variables. Further insight into the relationship between fatal injury rates and predictor variables may be obtained by the Poisson regression approach. Poisson regression is widely used in analyzing count data. In this study, Poisson regression is used to model the relationship between fatal injury rates and the predictor variables year (1995-2002), gender, recording system and industry type. Data for the analysis were obtained from PERKESO and Jabatan Perangkaan Malaysia. It is found that the assumption that the data follow a Poisson distribution has been violated. After correction for the problem of over-dispersion, the predictor variables found to be significant in the model are gender, system of recording, industry type, and two interaction effects (between recording system and industry type, and between year and industry type).

  4. Exploring factors associated with traumatic dental injuries in preschool children: a Poisson regression analysis.

    Science.gov (United States)

    Feldens, Carlos Alberto; Kramer, Paulo Floriani; Ferreira, Simone Helena; Spiguel, Mônica Hermann; Marquezan, Marcela

    2010-04-01

    This cross-sectional study aimed to investigate the factors associated with dental trauma in preschool children using Poisson regression analysis with robust variance. The study population comprised 888 children aged 3 to 5 years attending public nurseries in Canoas, southern Brazil. Questionnaires assessing information related to the independent variables (age, gender, race, mother's educational level and family income) were completed by the parents. Clinical examinations were carried out by five trained examiners in order to assess traumatic dental injuries (TDI) according to Andreasen's classification. One of the five examiners was calibrated to assess orthodontic characteristics (open bite and overjet). Multivariable Poisson regression analysis with robust variance was used to determine the factors associated with dental trauma as well as the strengths of association. Traditional logistic regression was also performed in order to compare the estimates obtained by both methods of statistical analysis. 36.4% (323/888) of the children suffered dental trauma, and there was no difference in prevalence rates from 3 to 5 years of age. Poisson regression analysis showed that the probability of the outcome was almost 30% higher for children whose mothers had more than 8 years of education (Prevalence Ratio = 1.28; 95% CI = 1.03-1.60) and 63% higher for children with an overjet greater than 2 mm (Prevalence Ratio = 1.63; 95% CI = 1.31-2.03). Odds ratios clearly overestimated the size of the effect when compared with prevalence ratios. These findings indicate the need for preventive orientation regarding TDI, in order to educate parents and caregivers about supervising infants, particularly those with increased overjet and whose mothers have a higher level of education. Poisson regression with robust variance represents a better alternative than logistic regression to estimate the risk of dental trauma in preschool children.
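
    A minimal sketch of "Poisson regression with robust variance" on a binary outcome, the approach this record uses to obtain prevalence ratios rather than odds ratios: a Poisson-family GLM with a log link and a sandwich (HC0) covariance. The variable names, prevalence, and effect sizes below are invented for illustration, not the Canoas data.

```python
# Modified Poisson regression: prevalence ratios for a binary outcome
# via a Poisson GLM with robust (sandwich) standard errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 888
df = pd.DataFrame({
    "mother_educ_gt8": rng.binomial(1, 0.5, n),
    "overjet_gt2mm":   rng.binomial(1, 0.3, n),
})
p = 0.25 * 1.3 ** df["mother_educ_gt8"] * 1.6 ** df["overjet_gt2mm"]
df["trauma"] = rng.binomial(1, np.clip(p, 0, 1))

fit = smf.glm("trauma ~ mother_educ_gt8 + overjet_gt2mm", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params))        # prevalence ratios
print(np.exp(fit.conf_int()))    # 95% CIs on the ratio scale
```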

  5. Modeling both of the number of pausibacillary and multibacillary leprosy patients by using bivariate poisson regression

    Science.gov (United States)

    Winahju, W. S.; Mukarromah, A.; Putri, S.

    2015-03-01

    Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy has become an important issue in Indonesia because its morbidity is quite high. Based on WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, with a contribution of 18,994 people (8.7% of the world). This number automatically places Indonesia as the ASEAN country with the highest leprosy morbidity. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: pausibacillary and multibacillary, and the morbidity of multibacillary leprosy is higher than that of pausibacillary leprosy. This paper discusses modeling both the number of multibacillary and the number of pausibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The experimental units are located in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the result indicates that all predictors have a significant influence.

  6. Zero inflated Poisson and negative binomial regression models: application in education.

    Science.gov (United States)

    Salehi, Masoud; Roudbari, Masoud

    2015-01-01

    The number of failed courses and semesters of students are indicators of their performance. These counts have zero-inflated (ZI) distributions. Using ZI Poisson and negative binomial distributions we can model these count data to find the associated factors and estimate the parameters. This study aims to investigate the important factors related to the educational performance of students. This cross-sectional study was performed in 2008-2009 at Iran University of Medical Sciences (IUMS) with a population of almost 6000 students, from which 670 students were selected using stratified random sampling. The educational and demographic data were collected from the University records. The study design was approved at IUMS and the students' data were kept confidential. Descriptive statistics and ZI Poisson and negative binomial regressions were used to analyze the data, which were analyzed using STATA. For the number of failed semesters, in both the ZI Poisson and ZI negative binomial models, the students' total average and the quota system played the largest roles. For the number of failed courses, the total average and being at the undergraduate or master's level had the largest effects in both models. In all models the total average has the most effect on the number of failed courses or semesters. The next most important factors are the quota system for failed semesters and the undergraduate and master's levels for failed courses. Therefore, the average has an important inverse effect on the numbers of failed courses and semesters.

  7. Use of Poisson spatiotemporal regression models for the Brazilian Amazon Forest: malaria count data.

    Science.gov (United States)

    Achcar, Jorge Alberto; Martinez, Edson Zangiacomi; Souza, Aparecida Doniseti Pires de; Tachibana, Vilma Mayumi; Flores, Edilson Ferreira

    2011-01-01

    Malaria is a serious problem in the Brazilian Amazon region, and the detection of possible risk factors could be of great interest for public health authorities. The objective of this article was to investigate the association between environmental variables and the yearly registers of malaria in the Amazon region using bayesian spatiotemporal methods. We used Poisson spatiotemporal regression models to analyze the Brazilian Amazon forest malaria count for the period from 1999 to 2008. In this study, we included some covariates that could be important in the yearly prediction of malaria, such as deforestation rate. We obtained the inferences using a bayesian approach and Markov Chain Monte Carlo (MCMC) methods to simulate samples for the joint posterior distribution of interest. The discrimination of different models was also discussed. The model proposed here suggests that deforestation rate, the number of inhabitants per km², and the human development index (HDI) are important in the prediction of malaria cases. It is possible to conclude that human development, population growth, deforestation, and their associated ecological alterations are conducive to increasing malaria risk. We conclude that the use of Poisson regression models that capture the spatial and temporal effects under the bayesian paradigm is a good strategy for modeling malaria counts.

  8. A Spline-Based Lack-Of-Fit Test for Independent Variable Effect in Poisson Regression.

    Science.gov (United States)

    Li, Chin-Shang; Tu, Wanzhu

    2007-05-01

    In regression analysis of count data, independent variables are often modeled by their linear effects under the assumption of log-linearity. In reality, the validity of such an assumption is rarely tested, and its use is at times unjustifiable. A lack-of-fit test is proposed for the adequacy of a postulated functional form of an independent variable within the framework of semiparametric Poisson regression models based on penalized splines. It offers added flexibility in accommodating the potentially non-loglinear effect of the independent variable. A likelihood ratio test is constructed for the adequacy of the postulated parametric form, for example log-linearity, of the independent variable effect. Simulations indicate that the proposed model performs well, and that a misspecified parametric model has much reduced power. An example is given.
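
    To make the idea of the test concrete, the sketch below compares a log-linear Poisson model with a regression-spline alternative for one covariate via a likelihood-ratio statistic; a plain B-spline basis stands in for the penalized splines of the paper, so the chi-square reference distribution is only approximate. Data and the nonlinear truth are invented.

```python
# Approximate lack-of-fit check: log-linear Poisson model vs a B-spline
# alternative for covariate x, compared with a likelihood-ratio statistic.
import numpy as np
import pandas as pd
from scipy.stats import chi2
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 600
x = rng.uniform(0, 3, n)
y = rng.poisson(np.exp(0.5 + np.sin(x)))          # truly non-log-linear in x
df = pd.DataFrame({"x": x, "y": y})

linear = smf.glm("y ~ x", data=df, family=sm.families.Poisson()).fit()
spline = smf.glm("y ~ bs(x, df=5)", data=df, family=sm.families.Poisson()).fit()

lr = 2 * (spline.llf - linear.llf)
df_diff = spline.df_model - linear.df_model
print(f"LR = {lr:.1f}, p = {chi2.sf(lr, df_diff):.4f}")
```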

  9. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep

    Directory of Open Access Journals (Sweden)

    Rodrigues-Motta Mariana

    2008-07-01

    Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep.

  10. Methods for estimating disease transmission rates: Evaluating the precision of Poisson regression and two novel methods

    DEFF Research Database (Denmark)

    Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin

    2017-01-01

    Precise estimates of disease transmission rates are critical for epidemiological simulation models. Most often these rates must be estimated from longitudinal field data, which are costly and time-consuming to conduct. Consequently, measures to reduce cost like increased sampling intervals or subsampling of the population are implemented. To assess the impact of such measures we implement two different SIS models to simulate disease transmission: a simple closed population model and a realistic dairy herd including population dynamics. We analyze the accuracy of different methods for estimating the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine transmission rate, and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most…
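
    The record does not spell out the Poisson regression estimator it compares against, so the following is only a plausible sketch under common assumptions: new infections per interval are treated as Poisson with mean beta * S * I / N * dt, so an intercept-only log-link GLM with that quantity as an offset returns an estimate of log(beta). The SIS parameters, herd size, and time step are invented.

```python
# Transmission-rate estimation from simulated discrete-time SIS data via a
# Poisson GLM with offset log(S * I / N * dt); exp(intercept) estimates beta.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
N, beta, gamma, dt, T = 200, 0.4, 0.1, 1.0, 150
S, I = N - 5, 5
new_inf, offsets = [], []
for _ in range(T):
    lam = beta * S * I / N * dt
    k = min(rng.poisson(lam), S)                  # new infections this interval
    r = rng.binomial(I, 1 - np.exp(-gamma * dt))  # recoveries back to susceptible
    new_inf.append(k)
    offsets.append(np.log(max(S * I / N * dt, 1e-12)))
    S, I = S - k + r, I + k - r

fit = sm.GLM(np.array(new_inf), np.ones((T, 1)),
             family=sm.families.Poisson(), offset=np.array(offsets)).fit()
print(f"estimated beta = {np.exp(fit.params[0]):.3f} (true 0.4)")
```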

  11. A coregionalization model can assist specification of Geographically Weighted Poisson Regression: Application to an ecological study.

    Science.gov (United States)

    Ribeiro, Manuel Castro; Sousa, António Jorge; Pereira, Maria João

    2016-05-01

    The geographical distribution of health outcomes is influenced by socio-economic and environmental factors operating on different spatial scales. Geographical variations in relationships can be revealed with semi-parametric Geographically Weighted Poisson Regression (sGWPR), a model that can combine both geographically varying and geographically constant parameters. To decide whether a parameter should vary geographically, two models are compared: one in which all parameters are allowed to vary geographically and one in which all except the parameter being evaluated are allowed to vary geographically. The model with the lower corrected Akaike Information Criterion (AICc) is selected. Delivering model selection exclusively according to the AICc might hide important details in spatial variations of associations. We propose assisting the decision by using a Linear Model of Coregionalization (LMC). Here we show how LMC can refine sGWPR on ecological associations between socio-economic and environmental variables and low birth weight outcomes in the west-north-central region of Portugal. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Gráficos de controle baseado nos resíduos do modelo de regressão Poisson

    OpenAIRE

    Russo, Suzana; Camargo, Maria Emilia; Samohyl, Robert Wayne

    2008-01-01

    Control charts based on Poisson residuals have been useful for monitoring the number of nonconformities in an industrial process. The Poisson regression model is the most popular of the generalized linear models and is used to model count data. The Poisson regression model assumes that the variance equals the mean, but this does not always hold; in many situations the variance has been found to be greater than the mean, and this phenomenon is known as…

  13. Poisson regression analysis of the mortality among a cohort of World War II nuclear industry workers

    International Nuclear Information System (INIS)

    Frome, E.L.; Cragle, D.L.; McLain, R.W.

    1990-01-01

    A historical cohort mortality study was conducted among 28,008 white male employees who had worked for at least 1 month in Oak Ridge, Tennessee, during World War II. The workers were employed at two plants that were producing enriched uranium and a research and development laboratory. Vital status was ascertained through 1980 for 98.1% of the cohort members and death certificates were obtained for 96.8% of the 11,671 decedents. A modified version of the traditional standardized mortality ratio (SMR) analysis was used to compare the cause-specific mortality experience of the World War II workers with the U.S. white male population. An SMR and a trend statistic were computed for each cause-of-death category for the 30-year interval from 1950 to 1980. The SMR for all causes was 1.11, and there was a significant upward trend of 0.74% per year. The excess mortality was primarily due to lung cancer and diseases of the respiratory system. Poisson regression methods were used to evaluate the influence of duration of employment, facility of employment, socioeconomic status, birth year, period of follow-up, and radiation exposure on cause-specific mortality. Maximum likelihood estimates of the parameters in a main-effects model were obtained to describe the joint effects of these six factors on cause-specific mortality of the World War II workers. We show that these multivariate regression techniques provide a useful extension of conventional SMR analysis and illustrate their effective use in a large occupational cohort study

  14. Using poisson regression to compare rates of unsatisfactory pap smears among gynecologists and to evaluate a performance improvement plan.

    Science.gov (United States)

    Wachtel, Mitchell S; Hatley, Warren G; de Riese, Cornelia

    2009-01-01

    To evaluate the impact of a performance improvement (PI) plan implemented after an initial analysis, comparing 7 gynecologists working in 2 clinics. From January to October 2005, unsatisfactory rates for gynecologists and clinics were calculated. A PI plan was instituted at the end of the first quarter of 2006. Unsatisfactory rates for each quarter of 2006 and the first quarter of 2007 were calculated. Poisson regression was used to analyze the results. A total of 100 ThinPrep Pap smears initially deemed unsatisfactory underwent reprocessing and re-evaluation. The study's first part evaluated 2890 smears. Clinic unsatisfactory rates, 2.7% and 2.6%, were similar (p > 0.05). Gynecologists' unsatisfactory rates ranged from 0.6% to 4.8%; the differences between each of the two highest rates and the lowest rate were significant, and rates after implementation of the PI plan showed improvement. Reprocessing ThinPrep smears is an important means of reducing unsatisfactory rates but should not be a substitute for attention to quality in sampling.

  15. Gráficos de controle baseado nos resíduos do modelo de regressão Poisson

    Directory of Open Access Journals (Sweden)

    Suzana Russo

    2008-11-01

    Control charts based on Poisson residuals have been useful for monitoring the number of nonconformities in an industrial process. The Poisson regression model is the most popular of the generalized linear models and is used to model count data. The Poisson regression model assumes that the variance equals the mean, but this does not always hold; in many situations the variance has been found to be greater than the mean, a phenomenon known as overdispersion. The data used in this study are the numbers of nonconformities from the weaving section of Indústria Têxtil Oeste Ltda. These data show large variability and exhibit overdispersion. It was therefore necessary to apply Poisson regression models before using control chart techniques.

  16. Are all quantitative postmarketing signal detection methods equal? Performance characteristics of logistic regression and Multi-item Gamma Poisson Shrinker.

    Science.gov (United States)

    Berlin, Conny; Blanch, Carles; Lewis, David J; Maladorno, Dionigi D; Michel, Christiane; Petrin, Michael; Sarp, Severine; Close, Philippe

    2012-06-01

    The detection of safety signals with medicines is an essential activity to protect public health. Despite widespread acceptance, it is unclear whether recently applied statistical algorithms provide enhanced performance characteristics when compared with traditional systems. Novartis has adopted a novel system for automated signal detection on the basis of disproportionality methods within a safety data mining application (Empirica™ Signal System [ESS]). ESS uses two algorithms for routine analyses: empirical Bayes Multi-item Gamma Poisson Shrinker and logistic regression (LR). A model was developed comprising 14 medicines, categorized as "new" or "established." A standard was prepared on the basis of safety findings selected from traditional sources. ESS results were compared with the standard to calculate the positive predictive value (PPV), specificity, and sensitivity. PPVs of the lower one-sided 5% and 0.05% confidence limits of the Bayes geometric mean (EB05) and of the LR odds ratio (LR0005) almost coincided for all the drug-event combinations studied. There was no obvious difference comparing the PPV of the leading Medical Dictionary for Regulatory Activities (MedDRA) terms to the PPV for all terms. The PPV of narrow MedDRA query searches was higher than that for broad searches. The widely used threshold value of EB05 = 2.0 or LR0005 = 2.0 together with more than three spontaneous reports of the drug-event combination produced balanced results for PPV, sensitivity, and specificity. Consequently, performance characteristics were best for leading terms with narrow MedDRA query searches irrespective of applying Multi-item Gamma Poisson Shrinker or LR at a threshold value of 2.0. This research formed the basis for the configuration of ESS for signal detection at Novartis. Copyright © 2011 John Wiley & Sons, Ltd.

  17. FOOD INSECURITY AND EDUCATIONAL ACHIEVEMENT: A MULTI-LEVEL GENERALIZATION OF POISSON REGRESSION

    Directory of Open Access Journals (Sweden)

    Allison Jennifer Ames

    2016-01-01

    This research examined the relationship between food insecurity, the National School Lunch Program (NSLP), and academic achievement in Georgia's public school system. Georgia is located in the southern United States, where food insecurity has been particularly prevalent. A multilevel Poisson generalized linear model was used to examine the relationship between food insecurity and academic achievement. Findings confirm a strong inverse relationship between food insecurity, as exhibited by participation in the National School Lunch Program, and academic achievement for elementary-age children. The strength of the relationship between food insecurity and academic achievement was different for the younger, elementary-age students (fifth grade) than for the older, middle school-age (eighth grade) students, a key distinction between this study and other research.

  18. Multilevel poisson regression modelling for determining factors of dengue fever cases in bandung

    Science.gov (United States)

    Arundina, Davila Rubianti; Tantular, Bertho; Pontoh, Resa Septiani

    2017-03-01

    Dengue fever is caused by a virus of the genus Flavivirus, known as dengue virus, which occurs as several serotypes, and is transmitted through the bites of Aedes aegypti mosquitoes infected with the virus. The study was conducted in 151 villages in Bandung. Health analysts believe that two kinds of factors affect dengue cases: internal (individual) factors and external (environmental) factors. The data used in this research are hierarchical, and the method used for hierarchical data modelling is the multilevel method, in which level 1 is the village and level 2 is the sub-district. According to the exploratory data analysis, the suitable multilevel method is the random intercept model. The penalized quasi-likelihood (PQL) approach to multilevel Poisson regression is a proper analysis to determine the factors affecting dengue cases in the city of Bandung. The clean and healthy behavior factor at the village level has an effect on the number of dengue fever cases in the city of Bandung, whereas no factor at the sub-district level has an effect.

  19. Predictors of the number of under-five malnourished children in Bangladesh: application of the generalized poisson regression model.

    Science.gov (United States)

    Islam, Mohammad Mafijul; Alam, Morshed; Tariquzaman, Md; Kabir, Mohammad Alamgir; Pervin, Rokhsona; Begum, Munni; Khan, Md Mobarak Hossain

    2013-01-08

    Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. According to our knowledge, most of the available studies, that addressed the issue of malnutrition among under-five children, considered the categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition variable (i.e. outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find some predictors of this outcome variable. The data are extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample which is based on a two-stage stratified sample of households. A total of 4,460 under-five children were analysed using various statistical techniques, namely the Chi-square test and the GPR model. The GPR model (as compared to the standard Poisson regression and negative binomial regression) is found to be justified for studying the above-mentioned outcome variable because of its under-dispersion (variance less than the mean). The model identified several significant predictors of the outcome variable, namely mother's education, father's education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Consistencies of our findings in light of many other studies suggest that the GPR model is an ideal alternative to other statistical models to analyse the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh.

  20. Comparing the cancer in Ninawa during three periods (1980-1990, 1991-2000, 2001-2010) using Poisson regression

    Directory of Open Access Journals (Sweden)

    Muzahem Mohammed Yahya AL-Hashimi

    2013-01-01

    Full Text Available Background: Iraq fought three wars in three consecutive decades: the Iran-Iraq war (1980-1988), the Persian Gulf War in 1991, and the Iraq war in 2003. From the nineties of the last century up to the present time, there have been anecdotal reports of an increase in cancer in Ninawa, as in all provinces of Iraq, possibly as a result of exposure to depleted uranium used by American troops in the last two wars. This paper deals with cancer incidence in Ninawa, the most important province in Iraq, where many of her sons were soldiers in the Iraqi army and participated in the wars. Materials and Methods: The data were derived from the Directorate of Health in Ninawa and were divided into three sub-periods: 1980-1990, 1991-2000, and 2001-2010. The analyses are performed using Poisson regressions. The response variable is the cancer incidence number. Cancer cases, age, sex, and years were considered as the explanatory variables. The logarithm of the population of Ninawa is used as an offset. The aim of this paper is to model the cancer incidence data and estimate the cancer incidence rate ratio (IRR) to illustrate the changes in cancer incidence in Ninawa over these three periods. Results: There is evidence of a reduction in the cancer IRR in Ninawa in the third period as well as in the second period. Our analyses found that breast cancer remained the most common cancer, while cancer of the lung, trachea, and bronchus remained the second most common despite decreasing dramatically. There were modest increases in the incidence of cancers of the prostate, penis, and other male genitals over the study period, and stability in the incidence of colon cancer in the second and third periods. There were also modest increases in the incidence of placenta and metastatic tumors, while the highest increase was in leukemia in the third period relative to the second period, but not to the first period. The cancer IRR in men exceeded that in women by more than 33% in the first period and more than 39% in the second period, and this excess regressed to 9.56% in the third period.

  1. Comparing the cancer in Ninawa during three periods (1980-1990, 1991-2000, 2001-2010) using Poisson regression.

    Science.gov (United States)

    Al-Hashimi, Muzahem Mohammed Yahya; Wang, Xiangjun

    2013-12-01

    Iraq fought three wars in three consecutive decades: the Iran-Iraq war (1980-1988), the Persian Gulf War in 1991, and the Iraq war in 2003. From the nineties of the last century up to the present time, there have been anecdotal reports of an increase in cancer in Ninawa, as in all provinces of Iraq, possibly as a result of exposure to depleted uranium used by American troops in the last two wars. This paper deals with cancer incidence in Ninawa, the most important province in Iraq, where many of her sons were soldiers in the Iraqi army and participated in the wars. The data were derived from the Directorate of Health in Ninawa and were divided into three sub-periods: 1980-1990, 1991-2000, and 2001-2010. The analyses are performed using Poisson regressions. The response variable is the cancer incidence number. Cancer cases, age, sex, and years were considered as the explanatory variables. The logarithm of the population of Ninawa is used as an offset. The aim of this paper is to model the cancer incidence data and estimate the cancer incidence rate ratio (IRR) to illustrate the changes in cancer incidence in Ninawa over these three periods. There is evidence of a reduction in the cancer IRR in Ninawa in the third period as well as in the second period. Our analyses found that breast cancer remained the most common cancer, while cancer of the lung, trachea, and bronchus remained the second most common despite decreasing dramatically. There were modest increases in the incidence of cancers of the prostate, penis, and other male genitals over the study period, and stability in the incidence of colon cancer in the second and third periods. There were also modest increases in the incidence of placenta and metastatic tumors, while the highest increase was in leukemia in the third period relative to the second period, but not to the first period. The cancer IRR in men exceeded that in women by more than 33% in the first period and more than 39% in the second period, and this excess regressed to 9.56% in the third period.
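
    A minimal sketch of the modelling strategy described here, a Poisson regression for incidence counts with the log population as an offset so that exponentiated coefficients are incidence rate ratios, is given below. The file and column names are hypothetical and the code is not taken from the study.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical incidence table: cancer counts by period, sex and age group,
    # with the population at risk supplying the (log-scale) offset.
    df = pd.read_csv("ninawa_incidence.csv")  # columns: cases, period, sex, age_group, population

    model = smf.glm(
        "cases ~ C(period) + C(sex) + C(age_group)",
        data=df,
        family=sm.families.Poisson(),
        offset=np.log(df["population"]),
    )
    result = model.fit()

    # Exponentiated coefficients are incidence rate ratios (IRRs) relative to the reference level.
    print(np.exp(result.params))
    print(np.exp(result.conf_int()))
    ```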

  2. Modeling urban coastal flood severity from crowd-sourced flood reports using Poisson regression and Random Forest

    Science.gov (United States)

    Sadler, J. M.; Goodall, J. L.; Morsy, M. M.; Spencer, K.

    2018-04-01

    Sea level rise has already caused more frequent and severe coastal flooding and this trend will likely continue. Flood prediction is an essential part of a coastal city's capacity to adapt to and mitigate this growing problem. Complex coastal urban hydrological systems, however, do not always lend themselves easily to physically-based flood prediction approaches. This paper presents a method for using a data-driven approach to estimate flood severity in an urban coastal setting using crowd-sourced data, a non-traditional but growing data source, along with environmental observation data. Two data-driven models, Poisson regression and Random Forest regression, are trained to predict the number of flood reports per storm event as a proxy for flood severity, given extensive environmental data (i.e., rainfall, tide, groundwater table level, and wind conditions) as input. The method is demonstrated using data from Norfolk, Virginia USA from September 2010 to October 2016. Quality-controlled, crowd-sourced street flooding reports ranging from 1 to 159 per storm event for 45 storm events are used to train and evaluate the models. Random Forest performed better than Poisson regression at predicting the number of flood reports and had a lower false negative rate. From the Random Forest model, total cumulative rainfall was by far the most dominant input variable in predicting flood severity, followed by low tide and lower low tide. These methods serve as a first step toward using data-driven methods for spatially and temporally detailed coastal urban flood prediction.
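
    The comparison described above can be sketched with scikit-learn, which provides both a Poisson regression estimator and a random forest regressor for count outcomes. The data below are simulated stand-ins for the storm-event features and crowd-sourced report counts; nothing here reproduces the Norfolk data set.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import PoissonRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Simulated storm-event features (rainfall, tide, groundwater, wind) and flood report counts.
    rng = np.random.default_rng(0)
    X = rng.random((45, 4))
    y = rng.poisson(lam=5 + 40 * X[:, 0])      # counts loosely driven by cumulative rainfall

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    poisson = PoissonRegressor(alpha=1e-3, max_iter=300).fit(X_train, y_train)
    forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

    print("Poisson MAE:", mean_absolute_error(y_test, poisson.predict(X_test)))
    print("Forest MAE: ", mean_absolute_error(y_test, forest.predict(X_test)))
    print("Forest feature importances:", forest.feature_importances_)
    ```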

  3. The analysis of incontinence episodes and other count data in patients with overactive bladder by Poisson and negative binomial regression.

    Science.gov (United States)

    Martina, R; Kay, R; van Maanen, R; Ridder, A

    2015-01-01

    Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
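
    A minimal sketch of the count-data models discussed above is given below: Poisson and negative binomial regressions fitted to episode counts, with exponentiated coefficients read as rate ratios. The file and column names are hypothetical and do not come from the solifenacin or mirabegron trials.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical trial data: incontinence episode counts by treatment arm with a baseline covariate.
    df = pd.read_csv("oab_episodes.csv")   # columns: episodes, treatment, baseline_episodes

    poisson_fit = smf.poisson("episodes ~ treatment + baseline_episodes", data=df).fit()
    negbin_fit = smf.negativebinomial("episodes ~ treatment + baseline_episodes", data=df).fit()

    # Exponentiated treatment coefficients are rate ratios, the effect measure discussed above.
    print("Poisson rate ratios:", np.exp(poisson_fit.params))
    print("Negative binomial rate ratios:", np.exp(negbin_fit.params))
    print("AIC (Poisson vs NB):", poisson_fit.aic, negbin_fit.aic)
    ```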

  4. Poisson integrators for Lie-Poisson structures on R3

    International Nuclear Information System (INIS)

    Song Lina

    2011-01-01

    This paper is concerned with the study of Poisson integrators. We are interested in Lie-Poisson systems on R^3. First, we focus on Poisson integrators for constant Poisson systems and the transformations used for transforming Lie-Poisson structures to constant Poisson structures. Then, we construct local Poisson integrators for Lie-Poisson systems on R^3. Finally, we present the results of numerical experiments for two Lie-Poisson systems and compare our Poisson integrators with other known methods.

  5. Misspecified poisson regression models for large-scale registry data: inference for 'large n and small p'.

    Science.gov (United States)

    Grøn, Randi; Gerds, Thomas A; Andersen, Per K

    2016-03-30

    Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have large power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population. Copyright © 2015 John Wiley & Sons, Ltd.

  6. A Poisson regression approach to model monthly hail occurrence in Northern Switzerland using large-scale environmental variables

    Science.gov (United States)

    Madonna, Erica; Ginsbourger, David; Martius, Olivia

    2018-05-01

    In Switzerland, hail regularly causes substantial damage to agriculture, cars and infrastructure; however, little is known about its long-term variability. To study the variability, the monthly number of days with hail in northern Switzerland is modeled in a regression framework using large-scale predictors derived from ERA-Interim reanalysis. The model is developed and verified using radar-based hail observations for the extended summer season (April-September) in the period 2002-2014. The seasonality of hail is explicitly modeled with a categorical predictor (month) and monthly anomalies of several large-scale predictors are used to capture the year-to-year variability. Several regression models are applied and their performance tested with respect to standard scores and cross-validation. The chosen model includes four predictors: the monthly anomaly of the two meter temperature, the monthly anomaly of the logarithm of the convective available potential energy (CAPE), the monthly anomaly of the wind shear and the month. This model captures the intra-annual variability well and slightly underestimates the inter-annual variability. The regression model is applied to the reanalysis data back in time to 1980. The resulting hail day time series shows an increase in the number of hail days per month, which is (in the model) related to an increase in temperature and CAPE. The trend corresponds to approximately 0.5 days per month per decade. The results of the regression model have been compared to two independent data sets. All data sets agree on the sign of the trend, but the trend is weaker in the other data sets.

  7. Comparison of In Vitro Fertilization/Intracytoplasmic Sperm Injection Cycle Outcome in Patients with and without Polycystic Ovary Syndrome: A Modified Poisson Regression Model.

    Science.gov (United States)

    Almasi-Hashiani, Amir; Mansournia, Mohammad Ali; Sepidarkish, Mahdi; Vesali, Samira; Ghaheri, Azadeh; Esmailzadeh, Arezoo; Omani-Samani, Reza

    2018-01-01

    Polycystic ovary syndrome (PCOS) is a frequent condition in reproductive age women with a prevalence rate of 5-10%. This study intends to determine the relationship between PCOS and the outcome of assisted reproductive treatment (ART) in Tehran, Iran. In this historical cohort study, we included 996 infertile women who referred to Royan Institute (Tehran, Iran) between January 2012 and December 2013. PCOS, as the main variable, and other potential confounder variables were gathered. Modified Poisson Regression was used for data analysis. Stata software, version 13 was used for all statistical analyses. Unadjusted analysis showed a significantly lower risk for failure in PCOS cases compared to cases without PCOS [risk ratio (RR): 0.79, 95% confidence intervals (CI): 0.66-0.95, P=0.014]. After adjusting for the confounder variables, there was no difference between risk of non-pregnancy in women with and without PCOS (RR: 0.87, 95% CI: 0.72-1.05, P=0.15). Significant predictors of the ART outcome included the treatment protocol type, numbers of embryos transferred (grades A and AB), numbers of injected ampules, and age. The results obtained from this model showed no difference between patients with and without PCOS according to the risk for non-pregnancy. Therefore, other factors might affect conception in PCOS patients. Copyright© by Royan Institute. All rights reserved.
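
    The "modified Poisson regression" referred to here is commonly implemented as a Poisson model for a binary outcome with robust (sandwich) standard errors, so that exponentiated coefficients are risk ratios. The sketch below follows that general recipe with hypothetical file and column names; it is not the authors' code or analysis.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical cohort data: binary non-pregnancy outcome (1 = ART failure), PCOS status
    # and confounders. The outcome is binary, so a Poisson model with robust variance
    # yields risk ratios rather than odds ratios.
    df = pd.read_csv("art_cohort.csv")   # columns: failure, pcos, age, protocol, n_embryos

    model = smf.glm(
        "failure ~ pcos + age + C(protocol) + n_embryos",
        data=df,
        family=sm.families.Poisson(),
    )
    result = model.fit(cov_type="HC0")   # robust (sandwich) variance is the "modification"

    print(np.exp(result.params))         # adjusted risk ratios
    print(np.exp(result.conf_int()))     # 95% confidence intervals
    ```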

  8. A study of the dengue epidemic and meteorological factors in Guangzhou, China, by using a zero-inflated Poisson regression model.

    Science.gov (United States)

    Wang, Chenggang; Jiang, Baofa; Fan, Jingchun; Wang, Furong; Liu, Qiyong

    2014-01-01

    The aim of this study is to develop a model that correctly identifies and quantifies the relationship between dengue and meteorological factors in Guangzhou, China. By cross-correlation analysis, meteorological variables and their lag effects were determined. According to the epidemic characteristics of dengue in Guangzhou, those statistically significant variables were modeled by a zero-inflated Poisson regression model. The number of dengue cases and minimum temperature at 1-month lag, along with average relative humidity at 0- to 1-month lag were all positively correlated with the prevalence of dengue fever, whereas wind velocity and temperature in the same month along with rainfall at 2 months' lag showed negative association with dengue incidence. Minimum temperature at 1-month lag and wind velocity in the same month had a greater impact on the dengue epidemic than other variables in Guangzhou.
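
    A zero-inflated Poisson regression with lagged meteorological covariates, in the spirit of the model described above, can be sketched with statsmodels. The file name, column names and lag structure below are hypothetical and chosen only for illustration.

    ```python
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    # Hypothetical monthly series: dengue case counts with current and lagged weather covariates.
    df = pd.read_csv("guangzhou_dengue.csv")   # columns: cases, tmin, humidity, rain, wind
    df["tmin_lag1"] = df["tmin"].shift(1)
    df["humidity_lag1"] = df["humidity"].shift(1)
    df["rain_lag2"] = df["rain"].shift(2)
    df = df.dropna()

    exog = sm.add_constant(df[["tmin_lag1", "humidity_lag1", "rain_lag2", "wind"]])

    # Zero-inflated Poisson: a logit model for excess zeros plus a Poisson model for the counts.
    model = ZeroInflatedPoisson(df["cases"], exog, inflation="logit")
    result = model.fit(method="bfgs", maxiter=500)
    print(result.summary())
    ```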

  9. Método de regressão de Poisson: metodologia para avaliação do impacto da poluição atmosférica na saúde populacional Methodology to assess air pollution impact on the population's health using the Poisson regression method

    Directory of Open Access Journals (Sweden)

    Yara de Souza Tadano

    2009-12-01

    Full Text Available Regression models are the statistical models most commonly used to assess the impact of air pollution on population health, since they are able to relate one or more explanatory variables to a single response variable. The aim of this study was to present the Poisson regression model from the family of generalized linear models. All steps of the assessment are presented, from data collection and analysis to verification of the fit of the chosen model.

  10. Augmenting Data with Published Results in Bayesian Linear Regression

    Science.gov (United States)

    de Leeuw, Christiaan; Klugkist, Irene

    2012-01-01

    In most research, linear regression analyses are performed without taking into account published results (i.e., reported summary statistics) of similar previous studies. Although the prior density in Bayesian linear regression could accommodate such prior knowledge, formal models for doing so are absent from the literature. The goal of this…

  11. Reprint of "Modelling the influence of temperature and rainfall on malaria incidence in four endemic provinces of Zambia using semiparametric Poisson regression".

    Science.gov (United States)

    Shimaponda-Mataa, Nzooma M; Tembo-Mwase, Enala; Gebreslasie, Michael; Achia, Thomas N O; Mukaratirwa, Samson

    2017-11-01

    Although malaria morbidity and mortality have been greatly reduced globally owing to intense control efforts, the disease remains a main contributor to the disease burden. In Zambia, all provinces are malaria endemic. However, the transmission intensities vary mainly depending on environmental factors as they interact with the vectors. Generally in Africa, possibly due to the varying perspectives and methods used, there is variation in the relative importance of malaria risk determinants. In Zambia, the role climatic factors play in malaria case rates has not been determined across space and time using robust modelling methods. This is critical considering the reversal in malaria reduction after the year 2010 and the variation by transmission zones. Using a geoadditive or structured additive semiparametric Poisson regression model, we determined the influence of climatic factors on malaria incidence in four endemic provinces of Zambia. We demonstrate a strong positive association between malaria incidence and precipitation as well as minimum temperature. The risk of malaria was 95% lower in Lusaka (ARR=0.05, 95% CI=0.04-0.06) and 68% lower in the Western Province (ARR=0.31, 95% CI=0.25-0.41) compared to Luapula Province. North-western Province did not vary from Luapula Province. The effects of geographical region are clearly demonstrated by the unique behaviour and effects of minimum and maximum temperatures in the four provinces. Environmental factors such as landscape in urbanised places may also be playing a role. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. A zero-inflated occupancy distribution: exact results and Poisson convergence

    Directory of Open Access Journals (Sweden)

    Ljuben Mutafchiev

    2003-05-01

    Full Text Available We introduce the generalized zero-inflated allocation scheme of placing n labeled balls into N labeled cells. We study the asymptotic behavior of the number of empty cells when (n, N) belongs to the “right” and “left” domain of attraction. An application to the estimation of characteristics of agreement among a set of raters which independently classify subjects into one of two categories is also indicated. The case when a large number of raters acts following the zero-inflated binomial law with small probability for positive diagnosis is treated using the zero-inflated Poisson approximation.

  13. Poisson Autoregression

    DEFF Research Database (Denmark)

    Fokianos, Konstantinos; Rahbek, Anders Christian; Tjøstheim, Dag

    This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional variance, implying an interpretation as an integer valued GARCH process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and a nonlinear function of past observations. As a particular example an exponential autoregressive Poisson model for time series is considered. Under geometric ergodicity the maximum likelihood estimators of the parameters are shown to be asymptotically Gaussian in the linear model. In addition we provide a consistent estimator of the asymptotic covariance, which is used in the simulations and the analysis of some...

  14. Short-Term Effects of Climatic Variables on Hand, Foot, and Mouth Disease in Mainland China, 2008–2013: A Multilevel Spatial Poisson Regression Model Accounting for Overdispersion

    Science.gov (United States)

    Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying

    2016-01-01

    Background Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the south and southwest provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between various geographical units. Methods The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period of 2008-2013 using a polynomial distributed lag model. The extra-Poisson multilevel spatial polynomial model was used to model the exact relationship between weekly HFMD incidence and climatic variables after considering cluster effects, the provincial correlated structure of HFMD incidence and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. Results HFMD incidence was spatially heterogeneous among provinces, and the scale measurement of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity and overdispersion, temperature was highly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse "V"-shaped and "V"-shaped relationships, respectively, with HFMD incidence. The lag effects for weekly average temperature and weekly temperature difference were 3 weeks and 2 weeks. Highly spatially correlated HFMD incidence was detected in northern, central and southern provinces. Temperature explained most of the variation in HFMD incidence in southern and northeastern provinces. After adjustment for temperature, eastern and northern provinces still had high variation in HFMD incidence. Conclusion We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between the HFMD incidence and climatic

  15. Poisson Autoregression

    DEFF Research Database (Denmark)

    Fokianos, Konstantinos; Rahbek, Anders Christian; Tjøstheim, Dag

    2009-01-01

    In this article we consider geometric ergodicity and likelihood-based inference for linear and nonlinear Poisson autoregression. In the linear case, the conditional mean is linked linearly to its past values, as well as to the observed values of the Poisson process. This also applies to the conditional variance, making possible interpretation as an integer-valued generalized autoregressive conditional heteroscedasticity process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and past observations. As a particular example, we consider an exponential autoregressive Poisson model for time series. The proof of geometric ergodicity proceeds via Markov theory and irreducibility. Finding transparent conditions for proving ergodicity turns out to be a delicate problem in the original model formulation. This problem is circumvented by allowing a perturbation of the model. We show that as the perturbations can be chosen...

  16. Poisson Autoregression

    DEFF Research Database (Denmark)

    Fokianos, Konstantinos; Rahbæk, Anders; Tjøstheim, Dag

    This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional variance, implying an interpretation as an integer valued GARCH process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and a nonlinear function of past observations. As a particular example an exponential autoregressive Poisson model for time series is considered. The proof of geometric ergodicity proceeds via Markov theory and irreducibility. Finding transparent conditions for proving ergodicity turns out to be a delicate problem in the original model formulation. This problem is circumvented by allowing a perturbation of the model. We show that as the perturbations can be chosen to be arbitrarily...

  17. Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr x Holstein F2 population

    Directory of Open Access Journals (Sweden)

    Fabyano Fonseca Silva

    2011-01-01

    Full Text Available Nowadays, an important and interesting alternative in the control of tick-infestation in cattle is to select resistant animals, and identify the respective quantitative trait loci (QTLs) and DNA markers, for posterior use in breeding programs. The number of ticks/animal is characterized as a discrete-counting trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches, for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr x Holstein) population. It was concluded that, when working with zero-inflated data, it is recommended to use the generalized and simple ZIP models for analysis. On the other hand, when working with data with zeros, but not zero-inflated, the Poisson model or a data transformation approach, such as square-root or Box-Cox transformation, is applicable.

  18. Polygraph Test Results Assessment by Regression Analysis Methods

    Directory of Open Access Journals (Sweden)

    K. A. Leontiev

    2014-01-01

    Full Text Available The paper considers the problem of defining the importance of the questions asked of the examinee under judicial and psychophysiological polygraph examination, using methods of mathematical statistics. It offers a classification algorithm based on logistic regression as an optimum Bayesian classifier, considering weight coefficients of information for the polygraph-recorded physiological parameters, with no assumption of independence of the measured signs. Binary classification is performed on the results of the polygraph examination, with preliminary normalization and standardization of the primary results, with a check of the hypothesis that the distribution of the obtained data is normal, and with calculation of the coefficients of linear regression between input values and responses by the method of maximum likelihood. The logistic curve then divides the signs into two classes, the "significant" and the "insignificant" type. The efficiency of the model is estimated by means of ROC analysis (Receiver Operating Characteristic). It is shown that the necessary minimum sample has to contain the results of at least 45 measurements. This approach ensures a reliable result provided that the expert polygraphologist possesses sufficient qualification and follows the testing techniques.

  19. Fractional Poisson Fields and Martingales

    Science.gov (United States)

    Aletti, Giacomo; Leonenko, Nikolai; Merzbach, Ely

    2018-01-01

    We present new properties for the Fractional Poisson process (FPP) and the Fractional Poisson field on the plane. A martingale characterization for FPPs is given. We extend this result to Fractional Poisson fields, obtaining some other characterizations. The fractional differential equations are studied. We consider a more general Mixed-Fractional Poisson process and show that this process is the stochastic solution of a system of fractional differential-difference equations. Finally, we give some simulations of the Fractional Poisson field on the plane.

  20. Fractional Poisson Fields and Martingales

    Science.gov (United States)

    Aletti, Giacomo; Leonenko, Nikolai; Merzbach, Ely

    2018-02-01

    We present new properties for the Fractional Poisson process (FPP) and the Fractional Poisson field on the plane. A martingale characterization for FPPs is given. We extend this result to Fractional Poisson fields, obtaining some other characterizations. The fractional differential equations are studied. We consider a more general Mixed-Fractional Poisson process and show that this process is the stochastic solution of a system of fractional differential-difference equations. Finally, we give some simulations of the Fractional Poisson field on the plane.

  1. Mapping the results of local statistics: Using geographically weighted regression

    Directory of Open Access Journals (Sweden)

    Stephen A. Matthews

    2012-03-01

    Full Text Available BACKGROUND The application of geographically weighted regression (GWR) - a local spatial statistical technique used to test for spatial nonstationarity - has grown rapidly in the social, health, and demographic sciences. GWR is a useful exploratory analytical tool that generates a set of location-specific parameter estimates which can be mapped and analysed to provide information on spatial nonstationarity in the relationships between predictors and the outcome variable. OBJECTIVE A major challenge to users of GWR methods is how best to present and synthesize the large number of mappable results, specifically the local parameter estimates and local t-values, generated from local GWR models. We offer an elegant solution. METHODS This paper introduces a mapping technique to simultaneously display local parameter estimates and local t-values on one map based on the use of data selection and transparency techniques. We integrate GWR software and a GIS software package (ArcGIS) and adapt earlier work in cartography on bivariate mapping. We compare traditional mapping strategies (i.e., side-by-side comparison and isoline overlay maps) with our method using an illustration focusing on US county infant mortality data. CONCLUSIONS The resultant map design is more elegant than methods used to date. This type of map presentation can facilitate the exploration and interpretation of nonstationarity, focusing map reader attention on the areas of primary interest.

  2. Two SPSS programs for interpreting multiple regression results.

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Ferrando, Pere J; Chico, Eliseo

    2010-02-01

    When multiple regression is used in explanation-oriented designs, it is very important to determine both the usefulness of the predictor variables and their relative importance. Standardized regression coefficients are routinely provided by commercial programs. However, they generally function rather poorly as indicators of relative importance, especially in the presence of substantially correlated predictors. We provide two user-friendly SPSS programs that implement currently recommended techniques and recent developments for assessing the relevance of the predictors. The programs also allow the user to take into account the effects of measurement error. The first program, MIMR-Corr.sps, uses a correlation matrix as input, whereas the second program, MIMR-Raw.sps, uses the raw data and computes bootstrap confidence intervals of different statistics. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from http://brm.psychonomic-journals.org/content/supplemental.

  3. Short-Term Effects of Climatic Variables on Hand, Foot, and Mouth Disease in Mainland China, 2008-2013: A Multilevel Spatial Poisson Regression Model Accounting for Overdispersion.

    Science.gov (United States)

    Liao, Jiaqiang; Yu, Shicheng; Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying

    2016-01-01

    Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the south and southwest provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between various geographical units. The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period of 2008-2013 using a polynomial distributed lag model. The extra-Poisson multilevel spatial polynomial model was used to model the exact relationship between weekly HFMD incidence and climatic variables after considering cluster effects, the provincial correlated structure of HFMD incidence and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. HFMD incidence was spatially heterogeneous among provinces, and the scale measurement of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity and overdispersion, temperature was highly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse "V"-shaped and "V"-shaped relationships, respectively, with HFMD incidence. The lag effects for weekly average temperature and weekly temperature difference were 3 weeks and 2 weeks. Highly spatially correlated HFMD incidence was detected in northern, central and southern provinces. Temperature explained most of the variation in HFMD incidence in southern and northeastern provinces. After adjustment for temperature, eastern and northern provinces still had high variation in HFMD incidence. We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between the HFMD incidence and climatic variables spatial heterogeneity distributed across

  4. APPLICATION OF NEGATIVE BINOMIAL REGRESSION TO OVERCOME OVERDISPERSION IN POISSON REGRESSION

    Directory of Open Access Journals (Sweden)

    PUTU SUSAN PRADAWATI

    2013-09-01

    Full Text Available Poisson regression is used to analyze count data that are Poisson distributed. Poisson regression analysis requires equidispersion, a state in which the mean of the response variable is equal to its variance. However, deviations occur in which the variance of the response variable is greater than the mean; this is called overdispersion. If overdispersion is present and Poisson regression analysis is still used, the standard errors will be underestimated. Negative binomial regression can handle overdispersion because it contains a dispersion parameter. From simulated data exhibiting overdispersion in the Poisson regression model, it was found that the negative binomial regression model performed better than the Poisson regression model.
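
    A common practical workflow matching this abstract is to fit the Poisson model first, check the dispersion statistic, and switch to negative binomial regression when overdispersion is evident. The sketch below assumes a hypothetical data file and an arbitrary cut-off chosen purely for illustration.

    ```python
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical count data set with a single response and two predictors.
    df = pd.read_csv("counts.csv")               # columns: y, x1, x2

    poisson_fit = smf.glm("y ~ x1 + x2", data=df, family=sm.families.Poisson()).fit()

    # Pearson chi-square divided by residual degrees of freedom; values well above 1
    # indicate overdispersion and hence underestimated Poisson standard errors.
    dispersion = poisson_fit.pearson_chi2 / poisson_fit.df_resid
    print("Estimated dispersion:", dispersion)

    if dispersion > 1.5:                         # illustrative cut-off, not a formal test
        negbin_fit = smf.negativebinomial("y ~ x1 + x2", data=df).fit()
        print(negbin_fit.summary())
    ```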

  5. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    Science.gov (United States)

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.

  6. The Poisson aggregation process

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2016-01-01

    In this paper we introduce and analyze the Poisson Aggregation Process (PAP): a stochastic model in which a random collection of random balls is stacked over a general metric space. The scattering of the balls’ centers follows a general Poisson process over the metric space, and the balls’ radii are independent and identically distributed random variables governed by a general distribution. For each point of the metric space, the PAP counts the number of balls that are stacked over it. The PAP model is a highly versatile spatial counterpart of the temporal M/G/∞ model in queueing theory. The surface of the moon, scarred by circular meteor-impact craters, exemplifies the PAP model in two dimensions: the PAP counts the number of meteor-impacts that any given moon-surface point sustained. A comprehensive analysis of the PAP is presented, and the closed-form results established include: general statistics, stationary statistics, short-range and long-range dependencies, a Central Limit Theorem, an Extreme Limit Theorem, and fractality.

  7. Homogeneous Poisson structures

    International Nuclear Information System (INIS)

    Shafei Deh Abad, A.; Malek, F.

    1993-09-01

    We provide an algebraic definition for the Schouten product and give a decomposition for any homogeneous Poisson structure in any n-dimensional vector space. A large class of n-homogeneous Poisson structures in R^k is also characterized. (author). 4 refs

  8. Adding bias to reduce variance in psychological results: A tutorial on penalized regression

    Directory of Open Access Journals (Sweden)

    Helwig, Nathaniel E.

    2017-01-01

    Full Text Available Regression models are commonly used in psychological research. In most studies, regression coefficients are estimated via maximum likelihood (ML) estimation. It is well-known that ML estimates have desirable large sample properties, but are prone to overfitting in small to moderate sized samples. In this paper, we discuss the benefits of using penalized regression, which is a form of penalized likelihood (PL) estimation. Informally, PL estimation can be understood as introducing bias to estimators for the purpose of reducing their variance, with the ultimate goal of providing better solutions. We focus on the Gaussian regression model, where ML and PL estimation reduce to ordinary least squares (OLS) and penalized least squares (PLS) estimation, respectively. We cover classic OLS and stepwise regression, as well as three popular penalized regression approaches: ridge regression, the lasso, and the elastic net. We compare the different penalties (or biases) imposed by each method, and discuss the resulting features each penalty encourages in the solution. To demonstrate the methods, we use an example where the goal is to predict a student's math exam performance from 30 potential predictors. Using a step-by-step tutorial with R code, we demonstrate how to (i) load and prepare the data for analysis, (ii) fit the OLS, stepwise, ridge, lasso, and elastic net models, (iii) extract and compare the model fitting results, and (iv) evaluate the performance of each method. Our example reveals that penalized regression methods can produce more accurate and more interpretable results than the classic OLS and stepwise regression solutions.
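
    The original tutorial uses R; the sketch below is a rough Python analogue of the same comparison (OLS versus ridge, lasso and elastic net) on simulated data with 30 predictors. The simulated data and the chosen penalty strengths are illustrative only and are not taken from the paper.

    ```python
    from sklearn.datasets import make_regression
    from sklearn.linear_model import ElasticNet, Lasso, LinearRegression, Ridge
    from sklearn.model_selection import cross_val_score

    # Simulated data standing in for the "30 potential predictors" example in the tutorial.
    X, y = make_regression(n_samples=80, n_features=30, n_informative=5,
                           noise=10.0, random_state=0)

    models = {
        "OLS": LinearRegression(),
        "Ridge": Ridge(alpha=1.0),
        "Lasso": Lasso(alpha=0.5),
        "ElasticNet": ElasticNet(alpha=0.5, l1_ratio=0.5),
    }

    # Penalized estimators trade a little bias for lower variance, which often improves
    # out-of-sample prediction in small samples with many correlated predictors.
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(f"{name:10s} mean CV R^2 = {scores.mean():.3f}")
    ```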

  9. Quasi-Poisson versus negative binomial regression models in identifying factors affecting initial CD4 cell count change due to antiretroviral therapy administered to HIV-positive adults in North-West Ethiopia (Amhara region).

    Science.gov (United States)

    Seyoum, Awoke; Ndlovu, Principal; Zewotir, Temesgen

    2016-01-01

    CD4 cells are a type of white blood cell that plays a significant role in protecting humans from infectious diseases. Lack of information on factors associated with CD4 cell count reduction is an obstacle to improving CD4 counts in HIV-positive adults. Therefore, the main objective of this study was to investigate baseline factors that could affect the initial CD4 cell count change after highly active antiretroviral therapy had been given to adult patients in North West Ethiopia. A retrospective cross-sectional study was conducted among 792 HIV-positive adult patients who had already been on antiretroviral therapy for 1 month. A Chi-square test of association was used to assess the relationship of predictor covariates with the variable of interest. The data came from a secondary source and were modeled using generalized linear models, especially quasi-Poisson regression. The patients' CD4 cell count change within a month ranged from 0 to 109 cells/mm^3, with a mean of 15.9 cells/mm^3 and a standard deviation of 18.44 cells/mm^3. The first-month CD4 cell count change was significantly affected by poor adherence to highly active antiretroviral therapy (aRR = 0.506, P value = 2e-16), fair adherence (aRR = 0.592, P value = 0.0120), initial CD4 cell count (aRR = 1.0212, P value = 1.54e-15), low household income (aRR = 0.63, P value = 0.671e-14), middle income (aRR = 0.74, P value = 0.629e-12), patients without a cell phone (aRR = 0.67, P value = 0.615e-16), WHO stage 2 (aRR = 0.91, P value = 0.0078), WHO stage 3 (aRR = 0.91, P value = 0.0058), WHO stage 4 (aRR = 0.876, P value = 0.0214), age (aRR = 0.987, P value = 0.000) and weight (aRR = 1.0216, P value = 3.98e-14). Adherence to antiretroviral therapy, initial CD4 cell count, household income, WHO stage, age, weight and ownership of a cell phone played a major role in the variation of CD4 cell count in our data. Hence, we recommend a close follow-up of patients to adhere to the prescribed medication.
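
    Quasi-Poisson regression as used in this study can be approximated in Python by fitting a Poisson GLM and rescaling the standard errors by a Pearson-based dispersion estimate. The sketch below assumes hypothetical file and column names and is not the authors' analysis.

    ```python
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical patient data: one-month CD4 count change with baseline covariates.
    df = pd.read_csv("cd4_change.csv")   # columns: cd4_change, adherence, baseline_cd4, age, weight

    # statsmodels has no explicit quasi-Poisson family; fitting a Poisson GLM with
    # scale="X2" rescales the standard errors by the Pearson-based dispersion estimate,
    # which reproduces quasi-Poisson inference.
    model = smf.glm(
        "cd4_change ~ C(adherence) + baseline_cd4 + age + weight",
        data=df,
        family=sm.families.Poisson(),
    )
    result = model.fit(scale="X2")
    print(result.summary())
    ```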

  10. Graded geometry and Poisson reduction

    OpenAIRE

    Cattaneo, A S; Zambon, M

    2009-01-01

    The main result of [2] extends the Marsden-Ratiu reduction theorem [4] in Poisson geometry, and is proven by means of graded geometry. In this note we provide the background material about graded geometry necessary for the proof in [2]. Further, we provide an alternative algebraic proof for the main result. ©2009 American Institute of Physics

  11. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results.

    Science.gov (United States)

    Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T

    2016-02-01

    The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.

  12. Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds

    Science.gov (United States)

    Martínez-Torres, David; Miranda, Eva

    2018-01-01

    We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.

  13. Modifications to POISSON

    International Nuclear Information System (INIS)

    Harwood, L.H.

    1981-01-01

    At MSU we have used the POISSON family of programs extensively for magnetic field calculations. In the presently super-saturated computer situation, reducing the run time for the program is imperative. Thus, a series of modifications have been made to POISSON to speed up convergence. Two of the modifications aim at having the first guess solution as close as possible to the final solution. The other two aim at increasing the convergence rate. In this discussion, a working knowledge of POISSON is assumed. The amount of new code and expected time saving for each modification is discussed

  14. Bias in logistic regression due to imperfect diagnostic test results and practical correction approaches.

    Science.gov (United States)

    Valle, Denis; Lima, Joanna M Tucker; Millar, Justin; Amratia, Punam; Haque, Ubydul

    2015-11-04

    Logistic regression is a statistical model widely used in cross-sectional and cohort studies to identify and quantify the effects of potential disease risk factors. However, the impact of imperfect tests on adjusted odds ratios (and thus on the identification of risk factors) is under-appreciated. The purpose of this article is to draw attention to the problem associated with modelling imperfect diagnostic tests, and propose simple Bayesian models to adequately address this issue. A systematic literature review was conducted to determine the proportion of malaria studies that appropriately accounted for false-negatives/false-positives in a logistic regression setting. Inference from the standard logistic regression was also compared with that from three proposed Bayesian models using simulations and malaria data from the western Brazilian Amazon. A systematic literature review suggests that malaria epidemiologists are largely unaware of the problem of using logistic regression to model imperfect diagnostic test results. Simulation results reveal that statistical inference can be substantially improved when using the proposed Bayesian models versus the standard logistic regression. Finally, analysis of original malaria data with one of the proposed Bayesian models reveals that microscopy sensitivity is strongly influenced by how long people have lived in the study region, and an important risk factor (i.e., participation in forest extractivism) is identified that would have been missed by standard logistic regression. Given the numerous diagnostic methods employed by malaria researchers and the ubiquitous use of logistic regression to model the results of these diagnostic tests, this paper provides critical guidelines to improve data analysis practice in the presence of misclassification error. Easy-to-use code that can be readily adapted to WinBUGS is provided, enabling straightforward implementation of the proposed Bayesian models.

  15. Scaling the Poisson Distribution

    Science.gov (United States)

    Farnsworth, David L.

    2014-01-01

    We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
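
    For reference, the additive property mentioned in this abstract follows from convolving the two probability mass functions; a standard version of the argument (sketched here, not quoted from the paper) applies to independent X ~ Poisson(lambda) and Y ~ Poisson(mu).

    ```latex
    % Additive property of independent Poisson variables, derived by convolving the mass functions.
    \[
    P(X+Y=k)
      = \sum_{j=0}^{k} \frac{e^{-\lambda}\lambda^{j}}{j!}\cdot\frac{e^{-\mu}\mu^{k-j}}{(k-j)!}
      = \frac{e^{-(\lambda+\mu)}}{k!}\sum_{j=0}^{k}\binom{k}{j}\lambda^{j}\mu^{k-j}
      = \frac{e^{-(\lambda+\mu)}(\lambda+\mu)^{k}}{k!},
    \]
    % hence X + Y is Poisson distributed with parameter \lambda + \mu whenever
    % X ~ Poisson(\lambda) and Y ~ Poisson(\mu) are independent.
    ```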

  16. On Poisson Nonlinear Transformations

    Directory of Open Access Journals (Sweden)

    Nasir Ganikhodjaev

    2014-01-01

    Full Text Available We construct the family of Poisson nonlinear transformations defined on the countable sample space of nonnegative integers and investigate their trajectory behavior. We have proved that these nonlinear transformations are regular.

  17. Extended Poisson Exponential Distribution

    Directory of Open Access Journals (Sweden)

    Anum Fatima

    2015-09-01

    Full Text Available A new mixture of the Modified Exponential (ME) and Poisson distributions is introduced in this paper. Taking the maximum of Modified Exponential random variables when the sample size follows a zero-truncated Poisson distribution, we derive the new distribution, named the Extended Poisson Exponential distribution. This distribution possesses increasing and decreasing failure rates. The Poisson-Exponential, Modified Exponential and Exponential distributions are special cases of this distribution. We also investigate some mathematical properties of the distribution along with information entropies and order statistics of the distribution. The estimation of parameters has been obtained using the maximum likelihood estimation procedure. Finally, we illustrate a real data application of our distribution.

  18. Poisson branching point processes

    International Nuclear Information System (INIS)

    Matsuo, K.; Teich, M.C.; Saleh, B.E.A.

    1984-01-01

    We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule--Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers

  19. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.

    Science.gov (United States)

    Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the number of blood donation and blood deferral has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donation" and "number of blood deferral": as the number of return for donation increases, so does the number of blood deferral. On the other hand, due to the fact that many donors never return to donate, there is an extra zero frequency for both of the above-mentioned variables. In this study, in order to apply the correlation and to explain the frequency of the excessive zero, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donation and number of blood deferral. The data was analyzed using the Bayesian approach applying noninformative priors at the presence and absence of covariates. Estimating the parameters of the model, that is, correlation, zero-inflation parameter, and regression coefficients, was done through MCMC simulation. Eventually double-Poisson model, bivariate Poisson model, and bivariate zero-inflated Poisson model were fitted on the data and were compared using the deviance information criteria (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.

  20. Constructions and classifications of projective Poisson varieties

    Science.gov (United States)

    Pym, Brent

    2018-03-01

    This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds.

  1. Posterior consistency for Bayesian inverse problems through stability and regression results

    International Nuclear Information System (INIS)

    Vollmer, Sebastian J

    2013-01-01

    In the Bayesian approach, the a priori knowledge about the input of a mathematical model is described via a probability measure. The joint distribution of the unknown input and the data is then conditioned, using Bayes’ formula, giving rise to the posterior distribution on the unknown input. In this setting we prove posterior consistency for nonlinear inverse problems: a sequence of data is considered, with diminishing fluctuations around a single truth and it is then of interest to show that the resulting sequence of posterior measures arising from this sequence of data concentrates around the truth used to generate the data. Posterior consistency justifies the use of the Bayesian approach very much in the same way as error bounds and convergence results for regularization techniques do. As a guiding example, we consider the inverse problem of reconstructing the diffusion coefficient from noisy observations of the solution to an elliptic PDE in divergence form. This problem is approached by splitting the forward operator into the underlying continuum model and a simpler observation operator based on the output of the model. In general, these splittings allow us to conclude posterior consistency provided a deterministic stability result for the underlying inverse problem and a posterior consistency result for the Bayesian regression problem with the push-forward prior. Moreover, we prove posterior consistency for the Bayesian regression problem based on the regularity, the tail behaviour and the small ball probabilities of the prior. (paper)

  2. Seasonally adjusted birth frequencies follow the Poisson distribution.

    Science.gov (United States)

    Barra, Mathias; Lindstrøm, Jonas C; Adams, Samantha S; Augestad, Liv A

    2015-12-15

    Variations in birth frequencies have an impact on activity planning in maternity wards. Previous studies of this phenomenon have commonly included elective births. A Danish study of spontaneous births found that birth frequencies were well modelled by a Poisson process. Somewhat unexpectedly, there were also weekly variations in the frequency of spontaneous births. Another study claimed that birth frequencies follow the Benford distribution. Our objective was to test these results. We analysed 50,017 spontaneous births at Akershus University Hospital in the period 1999-2014. To investigate the Poisson distribution of these births, we plotted their variance over a sliding average. We specified various Poisson regression models, with the number of births on a given day as the outcome variable. The explanatory variables included various combinations of years, months, days of the week and the digit sum of the date. The relationship between the variance and the average fits well with an underlying Poisson process. A Benford distribution was disproved by a goodness-of-fit test. Model fit is significantly improved when month and day of the week are included as explanatory variables. Altogether 7.5% more children are born on Tuesdays than on Sundays. The digit sum of the date is non-significant as an explanatory variable (p = 0.23), nor does it increase the explained variance. INTERPRETATION: Spontaneous births are well modelled by a time-dependent Poisson process when monthly and day-of-the-week variation is included. The frequency is highest in summer towards June and July, Friday and Tuesday stand out as particularly busy days, and the activity level is at its lowest during weekends.
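
    As an illustration of the kind of model described (not the study's data or code), the sketch below fits a Poisson regression with month and day-of-week factors to simulated daily counts; the baseline rate, seasonality and Tuesday effect are invented values.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        days = pd.date_range("1999-01-01", "2014-12-31", freq="D")
        # Simulated daily birth counts: mild seasonality plus a small Tuesday bump.
        log_mu = (np.log(8.5)
                  + 0.04 * np.sin(2 * np.pi * days.dayofyear / 365.25)
                  + 0.05 * (days.dayofweek == 1))
        births = rng.poisson(np.exp(log_mu))

        df = pd.DataFrame({"births": births,
                           "month": days.month.astype(str),
                           "dow": days.dayofweek.astype(str)})
        fit = smf.glm("births ~ C(month) + C(dow)", data=df,
                      family=sm.families.Poisson()).fit()
        print(fit.summary())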

  3. Paretian Poisson Processes

    Science.gov (United States)

    Eliazar, Iddo; Klafter, Joseph

    2008-05-01

    Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of random such populations we coin ` Paretian Poisson processes'. This class is elemental in statistical physics—connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; resilience to random perturbations.

  4. Financial analysis and forecasting of the results of small businesses performance based on regression model

    Directory of Open Access Journals (Sweden)

    Svetlana O. Musienko

    2017-03-01

    Full Text Available Objective to develop the economicmathematical model of the dependence of revenue on other balance sheet items taking into account the sectoral affiliation of the companies. Methods using comparative analysis the article studies the existing approaches to the construction of the company management models. Applying the regression analysis and the least squares method which is widely used for financial management of enterprises in Russia and abroad the author builds a model of the dependence of revenue on other balance sheet items taking into account the sectoral affiliation of the companies which can be used in the financial analysis and prediction of small enterprisesrsquo performance. Results the article states the need to identify factors affecting the financial management efficiency. The author analyzed scientific research and revealed the lack of comprehensive studies on the methodology for assessing the small enterprisesrsquo management while the methods used for large companies are not always suitable for the task. The systematized approaches of various authors to the formation of regression models describe the influence of certain factors on the company activity. It is revealed that the resulting indicators in the studies were revenue profit or the company relative profitability. The main drawback of most models is the mathematical not economic approach to the definition of the dependent and independent variables. Basing on the analysis it was determined that the most correct is the model of dependence between revenues and total assets of the company using the decimal logarithm. The model was built using data on the activities of the 507 small businesses operating in three spheres of economic activity. Using the presented model it was proved that there is direct dependence between the sales proceeds and the main items of the asset balance as well as differences in the degree of this effect depending on the economic activity of small

  5. Minimum Hellinger distance estimation for k-component poisson mixture with random effects.

    Science.gov (United States)

    Xiang, Liming; Yau, Kelvin K W; Van Hui, Yer; Lee, Andy H

    2008-06-01

    The k-component Poisson regression mixture with random effects is an effective model in describing the heterogeneity for clustered count data arising from several latent subpopulations. However, the residual maximum likelihood estimation (REML) of regression coefficients and variance component parameters tend to be unstable and may result in misleading inferences in the presence of outliers or extreme contamination. In the literature, the minimum Hellinger distance (MHD) estimation has been investigated to obtain robust estimation for finite Poisson mixtures. This article aims to develop a robust MHD estimation approach for k-component Poisson mixtures with normally distributed random effects. By applying the Gaussian quadrature technique to approximate the integrals involved in the marginal distribution, the marginal probability function of the k-component Poisson mixture with random effects can be approximated by the summation of a set of finite Poisson mixtures. Simulation study shows that the MHD estimates perform satisfactorily for data without outlying observation(s), and outperform the REML estimates when data are contaminated. Application to a data set of recurrent urinary tract infections (UTI) with random institution effects demonstrates the practical use of the robust MHD estimation method.
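
    A minimal sketch of minimum Hellinger distance estimation for a plain two-component Poisson mixture (without the random-effects and Gaussian-quadrature layer of the article); the simulated data, contamination and starting values are assumptions made for illustration.

        import numpy as np
        from scipy.stats import poisson
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        # Two-component Poisson mixture contaminated with a few extreme counts.
        y = np.concatenate([rng.poisson(2, 700), rng.poisson(8, 300), np.repeat(40, 5)])
        K = y.max() + 1
        emp = np.bincount(y, minlength=K) / y.size          # empirical pmf

        def hellinger(params):
            w = 1 / (1 + np.exp(-params[0]))                # mixing weight in (0, 1)
            lam1, lam2 = np.exp(params[1:])                 # component means > 0
            k = np.arange(K)
            model = w * poisson.pmf(k, lam1) + (1 - w) * poisson.pmf(k, lam2)
            return np.sum((np.sqrt(emp) - np.sqrt(model)) ** 2)   # 2 x squared Hellinger distance

        res = minimize(hellinger, x0=[0.0, np.log(1.0), np.log(5.0)], method="Nelder-Mead")
        w_hat = 1 / (1 + np.exp(-res.x[0])); lam_hat = np.exp(res.x[1:])
        print(w_hat, lam_hat)   # should stay close to (0.7, 2, 8) despite the outliers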

  6. Regression mixture models : Does modeling the covariance between independent variables and latent classes improve the results?

    NARCIS (Netherlands)

    Lamont, A.E.; Vermunt, J.K.; Van Horn, M.L.

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we tested the effects of violating an implicit assumption often made in these models; that is, independent variables in the…

  7. MMR-Vaccine and Regression in Autism Spectrum Disorders: Negative Results Presented from Japan

    Science.gov (United States)

    Uchiyama, Tokio; Kurosawa, Michiko; Inaba, Yutaka

    2007-01-01

    It has been suggested that the measles, mumps, and rubella vaccine (MMR) is a cause of regressive autism. As MMR was used in Japan only between 1989 and 1993, this time period affords a natural experiment to examine this hypothesis. Data on 904 patients with autism spectrum disorders (ASD) were analyzed. During the period of MMR usage no…

  8. Variances in the projections, resulting from CLIMEX, Boosted Regression Trees and Random Forests techniques

    Science.gov (United States)

    Shabani, Farzin; Kumar, Lalit; Solhjouy-fard, Samaneh

    2017-08-01

    The aim of this study was to have a comparative investigation and evaluation of the capabilities of correlative and mechanistic modeling processes, applied to the projection of future distributions of date palm in novel environments and to establish a method of minimizing uncertainty in the projections of differing techniques. The location of this study on a global scale is in Middle Eastern Countries. We compared the mechanistic model CLIMEX (CL) with the correlative models MaxEnt (MX), Boosted Regression Trees (BRT), and Random Forests (RF) to project current and future distributions of date palm ( Phoenix dactylifera L.). The Global Climate Model (GCM), the CSIRO-Mk3.0 (CS) using the A2 emissions scenario, was selected for making projections. Both indigenous and alien distribution data of the species were utilized in the modeling process. The common areas predicted by MX, BRT, RF, and CL from the CS GCM were extracted and compared to ascertain projection uncertainty levels of each individual technique. The common areas identified by all four modeling techniques were used to produce a map indicating suitable and unsuitable areas for date palm cultivation for Middle Eastern countries, for the present and the year 2100. The four different modeling approaches predict fairly different distributions. Projections from CL were more conservative than from MX. The BRT and RF were the most conservative methods in terms of projections for the current time. The combination of the final CL and MX projections for the present and 2100 provide higher certainty concerning those areas that will become highly suitable for future date palm cultivation. According to the four models, cold, hot, and wet stress, with differences on a regional basis, appears to be the major restrictions on future date palm distribution. The results demonstrate variances in the projections, resulting from different techniques. The assessment and interpretation of model projections requires reservations

  9. Poisson hierarchy of discrete strings

    Energy Technology Data Exchange (ETDEWEB)

    Ioannidou, Theodora, E-mail: ti3@auth.gr [Faculty of Civil Engineering, School of Engineering, Aristotle University of Thessaloniki, 54249, Thessaloniki (Greece); Niemi, Antti J., E-mail: Antti.Niemi@physics.uu.se [Department of Physics and Astronomy, Uppsala University, P.O. Box 803, S-75108, Uppsala (Sweden); Laboratoire de Mathematiques et Physique Theorique CNRS UMR 6083, Fédération Denis Poisson, Université de Tours, Parc de Grandmont, F37200, Tours (France); Department of Physics, Beijing Institute of Technology, Haidian District, Beijing 100081 (China)

    2016-01-28

    The Poisson geometry of a discrete string in three dimensional Euclidean space is investigated. For this the Frenet frames are converted into a spinorial representation, the discrete spinor Frenet equation is interpreted in terms of a transfer matrix formalism, and Poisson brackets are introduced in terms of the spinor components. The construction is then generalised, in a self-similar manner, into an infinite hierarchy of Poisson algebras. As an example, the classical Virasoro (Witt) algebra that determines reparametrisation diffeomorphism along a continuous string, is identified as a particular sub-algebra, in the hierarchy of the discrete string Poisson algebra. - Highlights: • Witt (classical Virasoro) algebra is derived in the case of discrete string. • Infinite dimensional hierarchy of Poisson bracket algebras is constructed for discrete strings. • Spinor representation of discrete Frenet equations is developed.

  10. Poisson hierarchy of discrete strings

    International Nuclear Information System (INIS)

    Ioannidou, Theodora; Niemi, Antti J.

    2016-01-01

    The Poisson geometry of a discrete string in three dimensional Euclidean space is investigated. For this the Frenet frames are converted into a spinorial representation, the discrete spinor Frenet equation is interpreted in terms of a transfer matrix formalism, and Poisson brackets are introduced in terms of the spinor components. The construction is then generalised, in a self-similar manner, into an infinite hierarchy of Poisson algebras. As an example, the classical Virasoro (Witt) algebra that determines reparametrisation diffeomorphism along a continuous string, is identified as a particular sub-algebra, in the hierarchy of the discrete string Poisson algebra. - Highlights: • Witt (classical Virasoro) algebra is derived in the case of discrete string. • Infinite dimensional hierarchy of Poisson bracket algebras is constructed for discrete strings. • Spinor representation of discrete Frenet equations is developed.

  11. Estimation of Poisson noise in spatial domain

    Science.gov (United States)

    Švihlík, Jan; Fliegel, Karel; Vítek, Stanislav; Kukal, Jaromír.; Krbcová, Zuzana

    2017-09-01

    This paper deals with modeling of astronomical images in the spatial domain. We consider astronomical light images contaminated by the dark current which is modeled by Poisson random process. Dark frame image maps the thermally generated charge of the CCD sensor. In this paper, we solve the problem of an addition of two Poisson random variables. At first, the noise analysis of images obtained from the astronomical camera is performed. It allows estimating parameters of the Poisson probability mass functions in every pixel of the acquired dark frame. Then the resulting distributions of the light image can be found. If the distributions of the light image pixels are identified, then the denoising algorithm can be applied. The performance of the Bayesian approach in the spatial domain is compared with the direct approach based on the method of moments and the dark frame subtraction.

  12. Evaluating the double Poisson generalized linear model.

    Science.gov (United States)

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Affine Poisson Groups and WZW Model

    Directory of Open Access Journals (Sweden)

    Ctirad Klimcík

    2008-01-01

    Full Text Available We give a detailed description of a dynamical system which enjoys a Poisson-Lie symmetry with two non-isomorphic dual groups. The system is obtained by taking the q → ∞ limit of the q-deformed WZW model and the understanding of its symmetry structure results in uncovering an interesting duality of its exchange relations.

  14. Analysis on Poisson and Gamma spaces

    OpenAIRE

    Kondratiev, Yuri; Silva, Jose Luis; Streit, Ludwig; Us, Georgi

    1999-01-01

    We study the spaces of Poisson, compound Poisson and Gamma noises as special cases of a general approach to non-Gaussian white noise calculus, see \\cite{KSS96}. We use a known unitary isomorphism between Poisson and compound Poisson spaces in order to transport analytic structures from Poisson space to compound Poisson space. Finally we study a Fock type structure of chaos decomposition on Gamma space.

  15. Simulation on Poisson and negative binomial models of count road accident modeling

    Science.gov (United States)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

    Accident count data have often been shown to be overdispersed. On the other hand, the data might contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, with the assumption that the dependent variable of the generated data follows a given distribution, namely the Poisson or the negative binomial distribution, with sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting the Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. Comparing the fitted models shows that, for each sample size, not every model fits the data well even when the data were generated from that model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes produce more zero accident counts in the dataset.
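
    A minimal sketch of this kind of simulation exercise (simulated covariate, invented coefficients, and the hurdle step omitted): counts are generated from a negative binomial model and then fitted by both Poisson and negative binomial regressions.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 200                                       # sample size (vary, e.g. 30..500)
        x = rng.binomial(1, 0.5, size=n)              # a hypothetical junction characteristic
        mu = np.exp(0.2 + 0.6 * x)
        alpha = 1.0                                   # overdispersion parameter (assumed known here)
        y = rng.negative_binomial(1 / alpha, 1 / (1 + alpha * mu))   # NB counts with mean mu

        X = sm.add_constant(x)
        pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=alpha)).fit()
        # In practice alpha would be estimated rather than fixed.
        print("Poisson AIC:", pois.aic, " NB AIC:", nb.aic)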

  16. Almost Poisson integration of rigid body systems

    International Nuclear Information System (INIS)

    Austin, M.A.; Krishnaprasad, P.S.; Li-Sheng Wang

    1993-01-01

    In this paper we discuss the numerical integration of Lie-Poisson systems using the mid-point rule. Since such systems result from the reduction of hamiltonian systems with symmetry by lie group actions, we also present examples of reconstruction rules for the full dynamics. A primary motivation is to preserve in the integration process, various conserved quantities of the original dynamics. A main result of this paper is an O(h 3 ) error estimate for the Lie-Poisson structure, where h is the integration step-size. We note that Lie-Poisson systems appear naturally in many areas of physical science and engineering, including theoretical mechanics of fluids and plasmas, satellite dynamics, and polarization dynamics. In the present paper we consider a series of progressively complicated examples related to rigid body systems. We also consider a dissipative example associated to a Lie-Poisson system. The behavior of the mid-point rule and an associated reconstruction rule is numerically explored. 24 refs., 9 figs

  17. Coordination of Conditional Poisson Samples

    Directory of Open Access Journals (Sweden)

    Grafström Anton

    2015-12-01

    Full Text Available Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP) sampling is a modification of the classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first one uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
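
    For context, the building block the article modifies can be sketched in a few lines: (unconditional) Poisson sampling positively coordinated through permanent random numbers. The fixed-size conditioning of CP sampling is not shown, and the inclusion probabilities are invented.

        import numpy as np

        rng = np.random.default_rng(4)
        N = 1000
        prn = rng.uniform(size=N)                  # permanent random numbers, stored per unit
        pi_t1 = np.full(N, 0.10)                   # inclusion probabilities at occasion 1
        pi_t2 = np.full(N, 0.12)                   # inclusion probabilities at occasion 2

        sample1 = prn < pi_t1                      # Poisson sample at occasion 1
        sample2 = prn < pi_t2                      # positive coordination: reuse the same PRNs
        overlap = np.sum(sample1 & sample2)        # expected overlap is N * min(pi_t1, pi_t2) here
        print(sample1.sum(), sample2.sum(), overlap)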

  18. Misspecified poisson regression models for large-scale registry data

    DEFF Research Database (Denmark)

    Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.

    2016-01-01

    working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods…

  19. Current and Predicted Fertility using Poisson Regression Model ...

    African Journals Online (AJOL)

    AJRH Managing Editor

    marriage, modern contraceptive use, paid employment status, marital status, marital duration, educational attainment, husband's educational attainment, residence, zones, and wealth quintiles. Children ever born, in the context of this study, refers to the number of children a woman has ever borne alive as at the time of the study.

  20. Unimodularity criteria for Poisson structures on foliated manifolds

    Science.gov (United States)

    Pedroza, Andrés; Velasco-Barreras, Eduardo; Vorobiev, Yury

    2018-03-01

    We study the behavior of the modular class of an orientable Poisson manifold and formulate some unimodularity criteria in the semilocal context, around a (singular) symplectic leaf. Our results generalize some known unimodularity criteria for regular Poisson manifolds related to the notion of the Reeb class. In particular, we show that the unimodularity of the transverse Poisson structure of the leaf is a necessary condition for the semilocal unimodular property. Our main tool is an explicit formula for a bigraded decomposition of modular vector fields of a coupling Poisson structure on a foliated manifold. Moreover, we also exploit the notion of the modular class of a Poisson foliation and its relationship with the Reeb class.

  1. An application of the Autoregressive Conditional Poisson (ACP) model

    CSIR Research Space (South Africa)

    Holloway, Jennifer P

    2010-11-01

    Full Text Available When modelling count data that comes in the form of a time series, the static Poisson regression and standard time series models are often not appropriate. A current study therefore involves the evaluation of several observation-driven and parameter...

  2. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
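
    The efficiency comparison described in the abstract can be illustrated with a hedged sketch (simulated counts, invented effect sizes, and no hurdle machinery): the same data are analyzed once as counts and once after dichotomization, and the standard errors of the covariate effect are compared.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 5000
        x = rng.binomial(1, 0.5, size=n)               # hypothetical treatment indicator
        y_count = rng.poisson(np.exp(-0.5 + 0.3 * x))  # underlying counts, e.g. caries increments
        y_bin = (y_count > 0).astype(int)              # dichotomized outcome: any event vs none

        X = sm.add_constant(x)
        logit = sm.GLM(y_bin, X, family=sm.families.Binomial()).fit()
        pois = sm.GLM(y_count, X, family=sm.families.Poisson()).fit()
        # The count-based analysis typically yields a smaller standard error.
        print("logistic SE:", logit.bse[1], " Poisson SE:", pois.bse[1])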

  3. Poisson Plus Quantification for Digital PCR Systems.

    Science.gov (United States)

    Majumdar, Nivedita; Banerjee, Swapnonil; Pallas, Michael; Wessel, Thomas; Hegerich, Patricia

    2017-08-29

    Digital PCR, a state-of-the-art nucleic acid quantification technique, works by spreading the target material across a large number of partitions. The average number of molecules per partition is estimated using Poisson statistics, and then converted into concentration by dividing by partition volume. In this standard approach, identical partition sizing is assumed. Violations of this assumption result in underestimation of target quantity, when using Poisson modeling, especially at higher concentrations. The Poisson-Plus Model accommodates for this underestimation, if statistics of the volume variation are well characterized. The volume variation was measured on the chip array based QuantStudio 3D Digital PCR System using the ROX fluorescence level as a proxy for effective load volume per through-hole. Monte Carlo simulations demonstrate the efficacy of the proposed correction. Empirical measurement of model parameters characterizing the effective load volume on QuantStudio 3D Digital PCR chips is presented. The model was used to analyze digital PCR experiments and showed improved accuracy in quantification. At the higher concentrations, the modeling must take effective fill volume variation into account to produce accurate estimates. The extent of the difference from the standard to the new modeling is positively correlated to the extent of fill volume variation in the effective load of your reactions.
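
    The standard Poisson step that the Poisson-Plus model corrects can be sketched as follows; the partition counts and nominal partition volume are invented values, and the volume-variation correction itself is not reproduced here.

        import numpy as np

        n_partitions = 20000            # through-holes on the chip (hypothetical)
        n_negative = 4000               # partitions with no amplification (hypothetical)
        v_partition_nl = 0.809          # nominal partition volume in nanolitres (assumed)

        # Standard Poisson estimate: average copies per partition from the negative fraction.
        lam = -np.log(n_negative / n_partitions)
        copies_per_ul = lam / (v_partition_nl * 1e-3)   # convert nL to uL
        print(lam, copies_per_ul)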

  4. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  5. Independent production and Poisson distribution

    International Nuclear Information System (INIS)

    Golokhvastov, A.I.

    1994-01-01

    The well-known statement that inclusive cross-sections factorize in the case of independent production of particles (or clusters, jets, etc.), and the Poisson multiplicity distribution usually concluded from it, do not follow from probability theory in any way. When the theorem on the product of independent probabilities is applied accurately, quite different equations are obtained, and no conclusions about multiplicity distributions follow. 11 refs

  6. Regression of microalbuminuria in type 1 diabetic patients: results of a sequential intervention with improved metabolic control and ACE inhibitors.

    Science.gov (United States)

    Vilarrasa, N; Soler, J; Montanya, E

    2005-06-01

    The objective was to evaluate the effect of improved metabolic control and ACE inhibition used sequentially in the treatment of type 1 diabetic patients with microalbuminuria. We studied 44 consecutive type 1 diabetic patients with microalbuminuria not previously treated with ACE inhibitors. Improved metabolic control (optimisation period) was attempted for 6-12 months and patients with persistent microalbuminuria were subsequently treated with ACE inhibitors. Stepwise logistic regression analysis included the variables age, age at diabetes onset, duration of diabetes, HbA1c, initial albumin excretion rate (AER) and mean blood pressure as predictors of final AER. Thirty per cent of patients regressed to normoalbuminuria after the optimisation period, and 58% of them maintained normal AER 4.5+/-1.3 years later (3-7 years). Patients achieving normoalbuminuria had lower baseline AER (53+/-22 vs. 94+/-63 mg/24 h, p=0.012). The initial AER level was the only factor associated with final AER (r=0.58, p=0.021). Thirty patients with persistent microalbuminuria were treated with ACE inhibitors for two years, 35.5% of whom regressed to normal AER. Patients achieving normoalbuminuria after ACE inhibitor treatment had lower baseline AER (55+/-24 vs. 132+/-75 mg/24 h, p=0.03). The initial AER was the sole predictor of final AER (r=0.51). Overall, the sequential treatment resulted in long-term normalisation of AER in 47.4% of patients. The sequential implementation of improved metabolic control and ACE inhibitor therapy had a long-term beneficial effect in type 1 diabetic patients with microalbuminuria. We propose that type 1 diabetic patients with microalbuminuria could benefit from a period of metabolic improvement before the initiation of ACE inhibitor therapy.

  7. Logistic Regression

    Science.gov (United States)

    Grégoire, G.

    2014-12-01

    Logistic regression is originally intended to explain the relationship between the probability of an event and a set of covariables. The model's coefficients can be interpreted via the odds and the odds ratio, which are presented in the introduction of the chapter. When the observations are obtained individually, we speak of binary logistic regression; when they are grouped, the logistic regression is said to be binomial. In our presentation we mainly focus on the binary case. For statistical inference the main tool is the maximum likelihood methodology: we present the Wald, Rao (score) and likelihood ratio results and their use to compare nested models. The problems we intend to deal with are essentially the same as in multiple linear regression: testing a global effect, testing individual effects, selecting variables to build a model, measuring the fit of the model, and predicting new values. The methods are demonstrated on data sets using R. Finally we briefly consider the binomial case and the situation where we are interested in several events, that is, polytomous (multinomial) logistic regression and the particular case of ordinal logistic regression.

  8. Study of non-Hodgkin's lymphoma mortality associated with industrial pollution in Spain, using Poisson models

    Directory of Open Access Journals (Sweden)

    Lope Virginia

    2009-01-01

    Full Text Available Abstract Background Non-Hodgkin's lymphomas (NHLs) have been linked to proximity to industrial areas, but evidence regarding the health risk posed by residence near pollutant industries is very limited. The European Pollutant Emission Register (EPER) is a public register that furnishes valuable information on industries that release pollutants to air and water, along with their geographical location. This study sought to explore the relationship between NHL mortality in small areas in Spain and environmental exposure to pollutant emissions from EPER-registered industries, using three Poisson-regression-based mathematical models. Methods Observed cases were drawn from mortality registries in Spain for the period 1994–2003. Industries were grouped into the following sectors: energy; metal; mineral; organic chemicals; waste; paper; food; and use of solvents. Populations having an industry within a radius of 1, 1.5, or 2 kilometres from the municipal centroid were deemed to be exposed. Municipalities outside those radii were considered as reference populations. The relative risks (RRs) associated with proximity to pollutant industries were estimated using the following methods: Poisson regression; mixed Poisson model with random provincial effect; and spatial autoregressive modelling (BYM model). Results Only proximity of paper industries to population centres (>2 km) could be associated with a greater risk of NHL mortality (mixed model: RR:1.24, 95% CI:1.09–1.42; BYM model: RR:1.21, 95% CI:1.01–1.45; Poisson model: RR:1.16, 95% CI:1.06–1.27). Spatial models yielded higher estimates. Conclusion The reported association between exposure to air pollution from the paper, pulp and board industry and NHL mortality is independent of the model used. Inclusion of spatial random effects terms in the risk estimate improves the study of associations between environmental exposures and mortality. The EPER could be of great utility when studying the effects of…

  9. Impact of performance grading on annual numbers of acute myocardial infarction-associated emergency department visits in Taiwan: Results of segmented regression analysis.

    Science.gov (United States)

    Tzeng, I-Shiang; Liu, Su-Hsun; Chen, Kuan-Fu; Wu, Chin-Chieh; Chen, Jih-Chang

    2016-10-01

    To reduce patient boarding time at the emergency department (ED) and to improve the overall quality of the emergent care system in Taiwan, the Ministry of Health and Welfare of Taiwan (MOHW) piloted the Grading Responsible Hospitals for Acute Care (GRHAC) audit program in 2007-2009. The aim of the study was to evaluate the impact of the GRHAC audit program on the identification and management of acute myocardial infarction (AMI)-associated ED visits by describing and comparing the incidence of AMI-associated ED visits before (2003-2007), during (2007-2009), and after (2009-2012) the initial audit program implementation. Using aggregated data from the MOHW of Taiwan, we estimated the annual incidence of AMI-associated ED visits by Poisson regression models. We used segmented regression techniques to evaluate differences in the annual rates and in the year-to-year changes in AMI-associated ED visits between 2003 and 2012. Medical comorbidities such as diabetes mellitus, hyperlipidemia, and hypertensive disease were considered as potential confounders. Overall, the number of AMI-associated patient visits increased from 8130 visits in 2003 to 12,695 visits in 2012 (significant P-value for trend). The findings are discussed in relation to hospitals' capacity for timely and correctly diagnosing and managing patients presenting with AMI-associated symptoms or signs at the ED.

  10. Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.

    Science.gov (United States)

    Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio

    2014-11-24

    The time stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations. In particular adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overheads in estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying to some real data and using simulations, we demonstrate that conditional Poisson models were simpler to code and shorter to run than are conditional logistic analyses and can be fitted to larger data sets than possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages. The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
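
    The stratum-indicator equivalence mentioned above can be illustrated with an unconditional Poisson fit containing time-stratified fixed effects; the exposure series, strata and coefficients below are simulated assumptions, and the memory-saving conditional implementation (e.g. gnm in R) is not reproduced.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(6)
        days = pd.date_range("2012-01-01", "2013-12-31", freq="D")
        pollution = rng.normal(50, 10, size=len(days))          # simulated daily exposure
        stratum = [f"{d.year}-{d.month:02d}-{d.dayofweek}" for d in days]  # year-month-dow strata
        deaths = rng.poisson(np.exp(2.0 + 0.004 * pollution))

        df = pd.DataFrame({"deaths": deaths, "pollution": pollution, "stratum": stratum})
        # Unconditional Poisson with stratum indicators; a conditional Poisson fit avoids
        # estimating the many stratum parameters but yields the same exposure estimate.
        fit = smf.glm("deaths ~ pollution + C(stratum)", data=df,
                      family=sm.families.Poisson()).fit()
        print(fit.params["pollution"])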

  11. Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.

    Science.gov (United States)

    Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew

    2014-12-26

    Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.

  12. How does Poisson kriging compare to the popular BYM model for mapping disease risks?

    Directory of Open Access Journals (Sweden)

    Gebreab Samson

    2008-02-01

    Full Text Available Abstract Background Geostatistical techniques are now available to account for spatially varying population sizes and spatial patterns in the mapping of disease rates. At first glance, Poisson kriging represents an attractive alternative to increasingly popular Bayesian spatial models in that: 1 it is easier to implement and less CPU intensive, and 2 it accounts for the size and shape of geographical units, avoiding the limitations of conditional auto-regressive (CAR models commonly used in Bayesian algorithms while allowing for the creation of isopleth risk maps. Both approaches, however, have never been compared in simulation studies, and there is a need to better understand their merits in terms of accuracy and precision of disease risk estimates. Results Besag, York and Mollie's (BYM model and Poisson kriging (point and area-to-area implementations were applied to age-adjusted lung and cervix cancer mortality rates recorded for white females in two contrasted county geographies: 1 state of Indiana that consists of 92 counties of fairly similar size and shape, and 2 four states in the Western US (Arizona, California, Nevada and Utah forming a set of 118 counties that are vastly different geographical units. The spatial support (i.e. point versus area has a much smaller impact on the results than the statistical methodology (i.e. geostatistical versus Bayesian models. Differences between methods are particularly pronounced in the Western US dataset: BYM model yields smoother risk surface and prediction variance that changes mainly as a function of the predicted risk, while the Poisson kriging variance increases in large sparsely populated counties. Simulation studies showed that the geostatistical approach yields smaller prediction errors, more precise and accurate probability intervals, and allows a better discrimination between counties with high and low mortality risks. The benefit of area-to-area Poisson kriging increases as the county

  13. A Local Poisson Graphical Model for inferring networks from sequencing data.

    Science.gov (United States)

    Allen, Genevera I; Liu, Zhandong

    2013-09-01

    Gaussian graphical models, a class of undirected graphs or Markov Networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next generation sequencing to measure gene expression. As the resulting data consists of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for this discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed. We develop a neighborhood selection algorithm to fit our model locally by performing a series of l1 penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research.
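
    The neighborhood-selection idea can be sketched as a loop of L1-penalized Poisson regressions of each node's counts on all other nodes; statsmodels' elastic-net fit_regularized is used here as an assumed stand-in for the authors' solver, the data are independent simulated counts (so few or no edges should be selected), and the penalty level is arbitrary.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n, p = 300, 6
        counts = rng.poisson(2.0, size=(n, p)).astype(float)   # stand-in for sequencing counts

        edges = set()
        for j in range(p):
            y = counts[:, j]
            X = sm.add_constant(np.delete(counts, j, axis=1))
            fit = sm.GLM(y, X, family=sm.families.Poisson()).fit_regularized(
                method="elastic_net", alpha=0.05, L1_wt=1.0)    # pure L1 penalty
            others = [k for k in range(p) if k != j]
            for coef, k in zip(fit.params[1:], others):
                if abs(coef) > 1e-8:
                    edges.add(tuple(sorted((j, k))))             # neighborhood union rule
        print(edges)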

  14. Fraktal Regress

    Directory of Open Access Journals (Sweden)

    Igor K. Kochanenko

    2013-01-01

    Full Text Available Procedures for constructing a regression curve by the criterion of least fractals, i.e. the greatest probability of the sums of powers of the smallest deviations of the measured intensities from their model values, are substantiated. The exponent is defined as the fractal dimension of the time series. The difference between the results of the proposed method and the method of least squares is quantitatively estimated.

  15. Parasites et parasitoses des poissons

    OpenAIRE

    De Kinkelin, Pierre; Morand, Marc; Hedrick, Ronald; Michel, Christian

    2014-01-01

    This richly illustrated book offers a representative overview of the parasitic agents encountered in fish. Drawing on recent conceptions of phylogenetic classification, it emphasises the biological properties, epidemiology and clinical consequences of the groups of organisms involved, in the light of the advances in knowledge made possible by the new tools of biology. It is intended for a broad readership, ranging from the world of aquaculture to those of health…

  16. Dualizing the Poisson summation formula.

    Science.gov (United States)

    Duffin, R J; Weinberger, H F

    1991-01-01

    If f(x) and g(x) are a Fourier cosine transform pair, then the Poisson summation formula can be written as 2∑_{n=1}^∞ g(n) + g(0) = 2∑_{n=1}^∞ f(n) + f(0). The concepts of linear transformation theory lead to the following dual of this classical relation. Let φ(x) and γ(x) = φ(1/x)/x have absolutely convergent integrals over the positive real line. Let F(x) = ∑_{n=1}^∞ φ(n/x)/x − ∫₀^∞ φ(t)dt and G(x) = ∑_{n=1}^∞ γ(n/x)/x − ∫₀^∞ γ(t)dt. Then F(x) and G(x) are a Fourier cosine transform pair. We term F(x) the "discrepancy" of φ because it is the error in estimating the integral of φ by its Riemann sum with the constant mesh spacing 1/x. PMID:11607208
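
    A quick numerical check of the classical identity quoted above, using f(x) = exp(-ax) and its Fourier cosine transform g(x) = 2a/(a² + 4π²x²) under the assumed convention g(x) = 2∫₀^∞ f(t)cos(2πxt)dt; the dual "discrepancy" result itself is not verified here.

        import numpy as np

        a = 1.0
        n = np.arange(1, 2_000_000, dtype=float)
        f = np.exp(-a * n)                              # f(n) = exp(-a n)
        g = 2 * a / (a**2 + 4 * np.pi**2 * n**2)        # cosine transform evaluated at n
        lhs = 2 * g.sum() + 2 / a                       # 2*sum g(n) + g(0)
        rhs = 2 * f.sum() + 1.0                         # 2*sum f(n) + f(0)
        print(lhs, rhs, 1 / np.tanh(a / 2))             # both sides equal coth(a/2)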

  17. Singular reduction of Nambu-Poisson manifolds

    Science.gov (United States)

    Das, Apurba

    The version of the Marsden-Ratiu Poisson reduction theorem for Nambu-Poisson manifolds by a regular foliation has been studied by Ibáñez et al. In this paper, we show that this reduction procedure can be extended to the singular case. Under a suitable notion of Hamiltonian flow on the reduced space, we show that a set of Hamiltonians on a Nambu-Poisson manifold can also be reduced.

  18. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Percentile-Based ETCCDI Temperature Extremes Indices for CMIP5 Model Output: New Results through Semiparametric Quantile Regression Approach

    Science.gov (United States)

    Li, L.; Yang, C.

    2017-12-01

    Climate extremes often manifest as rare events in terms of surface air temperature and precipitation with an annual reoccurrence period. In order to represent the manifold characteristics of climate extremes for monitoring and analysis, the Expert Team on Climate Change Detection and Indices (ETCCDI) had worked out a set of 27 core indices based on daily temperature and precipitation data, describing extreme weather and climate events on an annual basis. The CLIMDEX project (http://www.climdex.org) had produced public domain datasets of such indices for data from a variety of sources, including output from global climate models (GCM) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Among the 27 ETCCDI indices, there are six percentile-based temperature extremes indices that may fall into two groups: exceedance rates (ER) (TN10p, TN90p, TX10p and TX90p) and durations (CSDI and WSDI). Percentiles must be estimated prior to the calculation of the indices, and could more or less be biased by the adopted algorithm. Such biases will in turn be propagated to the final results of indices. The CLIMDEX used an empirical quantile estimator combined with a bootstrap resampling procedure to reduce the inhomogeneity in the annual series of the ER indices. However, there are still some problems remained in the CLIMDEX datasets, namely the overestimated climate variability due to unaccounted autocorrelation in the daily temperature data, seasonally varying biases and inconsistency between algorithms applied to the ER indices and to the duration indices. We now present new results of the six indices through a semiparametric quantile regression approach for the CMIP5 model output. By using the base-period data as a whole and taking seasonality and autocorrelation into account, this approach successfully addressed the aforementioned issues and came out with consistent results. The new datasets cover the historical and three projected (RCP2.6, RCP4.5 and RCP

  20. Diet influenced tooth erosion prevalence in children and adolescents: Results of a meta-analysis and meta-regression

    NARCIS (Netherlands)

    Salas, M.M.; Nascimento, G.G.; Vargas-Ferreira, F.; Tarquinio, S.B.; Huysmans, M.C.D.N.J.M.; Demarco, F.F.

    2015-01-01

    OBJECTIVE: The aim of the present study was to assess the influence of diet on the presence of tooth erosion in children and adolescents by meta-analysis and meta-regression. DATA: Two reviewers independently performed the selection process and the quality of the studies was assessed. SOURCES: Studies…

  1. Poisson-Fermi Formulation of Nonlocal Electrostatics in Electrolyte Solutions

    Directory of Open Access Journals (Sweden)

    Liu Jinn-Liang

    2017-10-01

    Full Text Available We present a nonlocal electrostatic formulation of nonuniform ions and water molecules with interstitial voids that uses a Fermi-like distribution to account for steric and correlation effects in electrolyte solutions. The formulation is based on the volume exclusion of hard spheres leading to a steric potential and Maxwell's displacement field with Yukawa-type interactions resulting in a nonlocal electric potential. The classical Poisson-Boltzmann model fails to describe steric and correlation effects important in a variety of chemical and biological systems, especially in high field or large concentration conditions found in and near binding sites, ion channels, and electrodes. Steric effects and correlations are apparent when we compare nonlocal Poisson-Fermi results to Poisson-Boltzmann calculations in the electric double layer and to experimental measurements on the selectivity of potassium channels for K+ over Na+.

  2. Poisson sigma model with branes and hyperelliptic Riemann surfaces

    International Nuclear Information System (INIS)

    Ferrario, Andrea

    2008-01-01

    We derive the explicit form of the superpropagators in the presence of general boundary conditions (coisotropic branes) for the Poisson sigma model. This generalizes the results presented by Cattaneo and Felder [''A path integral approach to the Kontsevich quantization formula,'' Commun. Math. Phys. 212, 591 (2000)] and Cattaneo and Felder ['Coisotropic submanifolds in Poisson geometry and branes in the Poisson sigma model', Lett. Math. Phys. 69, 157 (2004)] for Kontsevich's angle function [Kontsevich, M., 'Deformation quantization of Poisson manifolds I', e-print arXiv:hep.th/0101170] used in the deformation quantization program of Poisson manifolds. The relevant superpropagators for n branes are defined as gauge fixed homotopy operators of a complex of differential forms on n sided polygons P n with particular ''alternating'' boundary conditions. In the presence of more than three branes we use first order Riemann theta functions with odd singular characteristics on the Jacobian variety of a hyperelliptic Riemann surface (canonical setting). In genus g the superpropagators present g zero mode contributions

  3. Associative and Lie deformations of Poisson algebras

    OpenAIRE

    Remm, Elisabeth

    2011-01-01

    Considering a Poisson algebra as a non associative algebra satisfying the Markl-Remm identity, we study deformations of Poisson algebras as deformations of this non associative algebra. This gives a natural interpretation of deformations which preserves the underlying associative structure and we study deformations which preserve the underlying Lie algebra.

  4. Retinal microaneurysm count predicts progression and regression of diabetic retinopathy. Post-hoc results from the DIRECT Programme.

    Science.gov (United States)

    Sjølie, A K; Klein, R; Porta, M; Orchard, T; Fuller, J; Parving, H H; Bilous, R; Aldington, S; Chaturvedi, N

    2011-03-01

    To study the association between baseline retinal microaneurysm score and progression and regression of diabetic retinopathy, and response to treatment with candesartan in people with diabetes. This was a multicenter randomized clinical trial. The progression analysis included 893 patients with Type 1 diabetes and 526 patients with Type 2 diabetes with retinal microaneurysms only at baseline. For regression, 438 with Type 1 and 216 with Type 2 diabetes qualified. Microaneurysms were scored from yearly retinal photographs according to the Early Treatment Diabetic Retinopathy Study (ETDRS) protocol. Retinopathy progression and regression was defined as two or more step change on the ETDRS scale from baseline. Patients were normoalbuminuric, and normotensive with Type 1 and Type 2 diabetes or treated hypertensive with Type 2 diabetes. They were randomized to treatment with candesartan 32 mg daily or placebo and followed for 4.6 years. A higher microaneurysm score at baseline predicted an increased risk of retinopathy progression (HR per microaneurysm score 1.08 in Type 1 diabetes; HR 1.07, P = 0.0174 in Type 2 diabetes) and reduced the likelihood of regression (HR 0.79 in Type 1 diabetes; HR 0.85, P = 0.0009 in Type 2 diabetes), all adjusted for baseline variables and treatment. Candesartan reduced the risk of microaneurysm score progression. Microaneurysm counts are important prognostic indicators for worsening of retinopathy, thus microaneurysms are not benign. Treatment with renin-angiotensin system inhibitors is effective in the early stages and may improve mild diabetic retinopathy. Microaneurysm scores may be useful surrogate endpoints in clinical trials. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.

  5. The coupling of Poisson sigma models to topological backgrounds

    Energy Technology Data Exchange (ETDEWEB)

    Rosa, Dario [School of Physics, Korea Institute for Advanced Study,Seoul 02455 (Korea, Republic of)

    2016-12-13

    We extend the coupling to the topological backgrounds, recently worked out for the 2-dimensional BF-model, to the most general Poisson sigma models. The coupling involves the choice of a Casimir function on the target manifold and modifies the BRST transformations. This in turn induces a change in the BRST cohomology of the resulting theory. The observables of the coupled theory are analyzed and their geometrical interpretation is given. We finally couple the theory to 2-dimensional topological gravity: this is the first step to study a topological string theory in propagation on a Poisson manifold. As an application, we show that the gauge-fixed vectorial supersymmetry of the Poisson sigma models has a natural explanation in terms of the theory coupled to topological gravity.

  6. Thinning spatial point processes into Poisson processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Schoenberg, Frederic Paik

    This paper describes methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified......, and where one simulates backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and thus can...... be used as a diagnostic for assessing the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered....

  7. Thinning spatial point processes into Poisson processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Schoenberg, Frederic Paik

    2010-01-01

    In this paper we describe methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points...... are identified, and where we simulate backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and......, thus, can be used as a graphical exploratory tool for inspecting the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered....
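
    To make the thinning idea concrete, the following Python sketch (not taken from the paper; the intensity function and constants are invented) thins a simulated inhomogeneous Poisson process on the unit square down to a homogeneous one by retaining each point x with probability c/λ(x), where c is no larger than the minimum intensity. For a Poisson process the Papangelou conditional intensity coincides with the intensity function, so this is the simplest instance of the construction described above.

        import numpy as np

        # Illustrative sketch (not from the paper): thin an inhomogeneous Poisson
        # process on the unit square into a homogeneous one.  Retaining a point at x
        # with probability c / lambda(x), with c <= min lambda, leaves a Poisson
        # process of rate c.
        rng = np.random.default_rng(3)

        def lam(x, y):
            return 50.0 + 200.0 * x              # hypothetical intensity on [0, 1]^2

        lam_max = 250.0

        # simulate the inhomogeneous process by thinning a dominating homogeneous one
        n = rng.poisson(lam_max)
        pts = rng.random((n, 2))
        keep = rng.random(n) < lam(pts[:, 0], pts[:, 1]) / lam_max
        pts = pts[keep]

        # thin again, towards a homogeneous Poisson process of rate c = min lambda
        c = 50.0
        retain = rng.random(len(pts)) < c / lam(pts[:, 0], pts[:, 1])
        thinned = pts[retain]
        print(len(pts), len(thinned))            # thinned count is roughly Poisson(c)

    Repeating such a thinning and inspecting the retained patterns for complete spatial randomness is the kind of graphical goodness-of-fit check the abstract refers to.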

  8. Efficient triangulation of Poisson-disk sampled point sets

    KAUST Repository

    Guo, Jianwei

    2014-05-06

    In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.

  9. Robust iterative observer for source localization for Poisson equation

    KAUST Repository

    Majeed, Muhammad Usman

    2017-01-05

    The source localization problem for the Poisson equation with available noisy boundary data is well known to be highly sensitive to noise. The problem is ill posed and fails to satisfy Hadamard's stability criterion for well-posedness. In this work, a robust iterative observer is first presented for the boundary estimation problem for the Laplace equation, and this algorithm, together with the available noisy boundary data from the Poisson problem, is then used to localize point sources inside a rectangular domain. The algorithm is inspired by Kalman filter design, with one of the space variables treated as time-like. Numerical implementation and simulation results are detailed towards the end.

  10. Modeling corporate defaults: Poisson autoregressions with exogenous covariates (PARX)

    DEFF Research Database (Denmark)

    Agosto, Arianna; Cavaliere, Guiseppe; Kristensen, Dennis

    We develop a class of Poisson autoregressive models with additional covariates (PARX) that can be used to model and forecast time series of counts. We establish the time series properties of the models, including conditions for stationarity and existence of moments. These results are in turn used...
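
    As a rough illustration of what a Poisson autoregression with exogenous covariates looks like, the Python sketch below simulates a PARX(1,1)-type recursion. The additive intensity and the exponential covariate term are assumptions made for the example and may differ from the specification in the paper.

        import numpy as np

        # Minimal simulation sketch of a PARX(1,1)-type recursion (illustrative only;
        # the covariate link and lag structure in the paper may differ).
        rng = np.random.default_rng(0)

        T = 500
        omega, alpha, beta, gamma = 0.5, 0.3, 0.5, 0.4
        x = rng.normal(size=T)                  # exogenous covariate (hypothetical)
        lam = np.empty(T)
        y = np.empty(T, dtype=np.int64)

        lam[0] = omega / (1.0 - alpha - beta)   # rough starting level
        y[0] = rng.poisson(lam[0])
        for t in range(1, T):
            # conditional intensity driven by past counts, past intensity and covariate
            lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1] + np.exp(gamma * x[t - 1])
            y[t] = rng.poisson(lam[t])

        print(y.mean(), y.var())                # feedback inflates the variance of the counts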

  11. Poisson sampling - The adjusted and unadjusted estimator revisited

    Science.gov (United States)

    Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas

    1998-01-01

    The prevailing assumption, that for Poisson sampling the adjusted estimator "Y-hat a" is always substantially more efficient than the unadjusted estimator "Y-hat u" , is shown to be incorrect. Some well known theoretical results are applicable since "Y-hat a" is a ratio-of-means estimator and "Y-hat u" a simple unbiased estimator...
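
    The contrast between the two estimators is easy to reproduce by simulation. The Python sketch below draws Poisson samples from a synthetic population and compares the unadjusted estimator with the ratio-adjusted estimator; the population values and inclusion probabilities are invented, so the relative efficiency seen here only illustrates the kind of comparison discussed in the paper.

        import numpy as np

        # Simulation sketch comparing the unadjusted estimator with the ratio-adjusted
        # estimator under Poisson sampling (population and inclusion probabilities are
        # invented for illustration).
        rng = np.random.default_rng(4)

        N = 1000
        y = rng.lognormal(mean=3.0, sigma=0.8, size=N)      # synthetic population values
        pi = np.clip(0.1 * y / y.mean(), 0.01, 0.9)         # inclusion probabilities
        Y_true = y.sum()

        def one_sample():
            s = rng.random(N) < pi                          # independent Bernoulli inclusions
            y_u = np.sum(y[s] / pi[s])                      # unadjusted estimator "Y-hat u"
            n_hat = np.sum(1.0 / pi[s])                     # estimated population size
            y_a = y_u * N / n_hat if n_hat > 0 else 0.0     # adjusted (ratio-of-means) "Y-hat a"
            return y_u, y_a

        est = np.array([one_sample() for _ in range(5000)])
        print("true total:", round(Y_true))
        print("unadjusted: mean %.0f, sd %.0f" % (est[:, 0].mean(), est[:, 0].std()))
        print("adjusted:   mean %.0f, sd %.0f" % (est[:, 1].mean(), est[:, 1].std()))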

  12. The Poisson equation on Klein surfaces

    Directory of Open Access Journals (Sweden)

    Monica Rosiu

    2016-04-01

    We obtain a formula for the solution of the Poisson equation with Dirichlet boundary condition on a region of a Klein surface. This formula reveals the symmetric character of the solution.

  13. Poisson point processes imaging, tracking, and sensing

    CERN Document Server

    Streit, Roy L

    2010-01-01

    This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.

  14. Improving EWMA Plans for Detecting Unusual Increases in Poisson Counts

    Directory of Open Access Journals (Sweden)

    R. S. Sparks

    2009-01-01

    An adaptive exponentially weighted moving average (EWMA) plan is developed for signalling unusually high incidence when monitoring a time series of nonhomogeneous daily disease counts. A Poisson transitional regression model is used to fit the background/expected trend in counts and provides “one-day-ahead” forecasts of the next day's count. Departures of counts from their forecasts are monitored. The paper outlines an approach for improving early outbreak data signals by dynamically adjusting the exponential weights to be efficient at signalling local persistent high-side changes. We emphasise outbreak signals in steady-state situations; that is, changes that occur after the EWMA statistic had run through several in-control counts.
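
    A stripped-down version of such a monitoring scheme can be sketched as follows: daily counts are compared with model-based forecasts, the standardized departures are smoothed with an EWMA, and an alarm is raised when the statistic crosses a control limit. The adaptive weighting that is the paper's contribution is not reproduced here; the forecasts, weight and limit below are placeholders.

        import numpy as np

        # Stripped-down monitoring sketch: EWMA of standardized departures of daily
        # counts from model-based forecasts.  The forecasts, smoothing weight and
        # control limit are placeholders; the paper's adaptive weights are not used.
        rng = np.random.default_rng(5)

        days = 365
        expected = 10 + 4 * np.sin(2 * np.pi * np.arange(days) / 7)   # stand-in for regression forecasts
        counts = rng.poisson(expected)
        counts[300:310] += rng.poisson(6, size=10)                    # injected outbreak

        w = 0.2                                                       # EWMA smoothing weight
        z = (counts - expected) / np.sqrt(expected)                   # Pearson-type residuals
        ewma = np.zeros(days)
        for t in range(1, days):
            ewma[t] = w * z[t] + (1 - w) * ewma[t - 1]

        L = 3.0                                                       # control limit multiplier
        sigma_ewma = np.sqrt(w / (2 - w))                             # asymptotic EWMA standard deviation
        print(np.where(ewma > L * sigma_ewma)[0])                     # days flagged as unusually high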

  15. Diet influenced tooth erosion prevalence in children and adolescents: Results of a meta-analysis and meta-regression.

    Science.gov (United States)

    Salas, M M S; Nascimento, G G; Vargas-Ferreira, F; Tarquinio, S B C; Huysmans, M C D N J M; Demarco, F F

    2015-08-01

    The aim of the present study was to assess the influence of diet on the presence of tooth erosion in children and adolescents by meta-analysis and meta-regression. Two reviewers independently performed the selection process and the quality of studies was assessed. Studies published until May 2014 were identified in electronic databases: Pubmed, EBSHost, Scopus, Science direct, Web of Science and Scielo, using keywords. Criteria used included: observational studies, tooth erosion and diet, subject age range 8-19 years old, permanent dentition and index. Meta-analysis was performed and, in case of heterogeneity, a random-effects model was used. Thirteen studies that fulfilled the inclusion criteria were selected. Higher consumption of carbonated drinks (p=0.001), acid snacks/sweets (p=0.01) or acidic fruit juices (p=0.03) increased the odds for tooth erosion, while higher intake of milk (p=0.028) and yogurt (p=0.002) reduced the erosion occurrence. Heterogeneity was observed in the soft drinks, confectionery and snacks, and acidic fruit juices models. Methodological issues regarding the administration of the questionnaires and the inclusion of other variables, such as food groups and tooth brushing, partially explained the observed heterogeneity. Some dietary components (carbonated drinks, acid snacks/sweets and natural acidic fruit juices) increased erosion occurrence while milk and yogurt had a protective effect. Methods to assess diet could influence the homogeneity of the studies and should be considered during the study design. The method to assess diet should be carefully considered and well conducted as part of the clinical assessment of tooth erosion, since diet could influence the occurrence of tooth erosion. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. INVESTIGATION OF E-MAIL TRAFFIC BY USING ZERO-INFLATED REGRESSION MODELS

    Directory of Open Access Journals (Sweden)

    Yılmaz KAYA

    2012-06-01

    In count data, the number of observations with a value of zero may be greater than anticipated. Such data sets should be analyzed by regression methods that take zero values into account. Zero-Inflated Poisson (ZIP), Zero-Inflated Negative Binomial (ZINB), Poisson Hurdle (PH) and Negative Binomial Hurdle (NBH) regression are common approaches for modeling dependent variables with more zero values than expected. In the present study, the e-mail traffic of Yüzüncü Yıl University in the 2009 spring semester was investigated. ZIP, ZINB, PH and NBH regression methods were applied to the data set because more zeros (78.9%) were found in the data set than expected. ZINB and NBH regression, which account for excess zeros and overdispersion, gave more accurate results because of the overdispersion and excess zeros in e-mail sending. ZINB was determined to be the best model according to the Vuong statistic and information criteria.
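
    For readers unfamiliar with zero-inflated models, the following Python sketch fits a ZIP model with a constant zero-inflation probability by direct maximum likelihood on simulated data. It is a minimal illustration of the model class, not a reproduction of the analysis in the paper.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        # Minimal sketch: fit a zero-inflated Poisson (ZIP) model with a constant
        # zero-inflation probability by maximum likelihood on simulated data
        # (variable names and parameter values are hypothetical).
        rng = np.random.default_rng(1)
        n = 2000
        x = rng.normal(size=n)
        lam_true = np.exp(0.2 + 0.6 * x)                   # Poisson part of the mixture
        pi_true = 0.7                                      # probability of a structural zero
        y = np.where(rng.random(n) < pi_true, 0, rng.poisson(lam_true))

        def zip_negloglik(params):
            b0, b1, logit_pi = params
            lam = np.exp(b0 + b1 * x)
            pi = 1.0 / (1.0 + np.exp(-logit_pi))
            ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))                   # mixture mass at zero
            ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
            return -np.sum(np.where(y == 0, ll_zero, ll_pos))

        fit = minimize(zip_negloglik, x0=[0.0, 0.0, 0.0], method="BFGS")
        print(fit.x)   # estimates of (b0, b1, logit of the zero-inflation probability)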

  17. Wide-area traffic: The failure of Poisson modeling

    Energy Technology Data Exchange (ETDEWEB)

    Paxson, V.; Floyd, S.

    1994-08-01

    Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. The authors evaluate 21 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTPDATA connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. The authors find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib[DJCME92] interarrivals preserves burstiness over many time scales; and that FTPDATA connection arrivals within FTP sessions come bunched into ``connection bursts``, the largest of which are so large that they completely dominate FTPDATA traffic. Finally, they offer some preliminary results regarding how the findings relate to the possible self-similarity of wide-area traffic.

  18. Extension of the application of conway-maxwell-poisson models: analyzing traffic crash data exhibiting underdispersion.

    Science.gov (United States)

    Lord, Dominique; Geedipally, Srinivas Reddy; Guikema, Seth D

    2010-08-01

    The objective of this article is to evaluate the performance of the COM-Poisson GLM for analyzing crash data exhibiting underdispersion (when conditional on the mean). The COM-Poisson distribution, originally developed in 1962, has recently been reintroduced by statisticians for analyzing count data subjected to either over- or underdispersion. Over the last year, the COM-Poisson GLM has been evaluated in the context of crash data analysis and it has been shown that the model performs as well as the Poisson-gamma model for crash data exhibiting overdispersion. To accomplish the objective of this study, several COM-Poisson models were estimated using crash data collected at 162 railway-highway crossings in South Korea between 1998 and 2002. This data set has been shown to exhibit underdispersion when models linking crash data to various explanatory variables are estimated. The modeling results were compared to those produced from the Poisson and gamma probability models documented in a previous published study. The results of this research show that the COM-Poisson GLM can handle crash data when the modeling output shows signs of underdispersion. Finally, they also show that the model proposed in this study provides better statistical performance than the gamma probability and the traditional Poisson models, at least for this data set.
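
    The property exploited in the paper is that the COM-Poisson adds a dispersion parameter ν to the Poisson pmf, P(Y=y) ∝ λ^y/(y!)^ν, with ν > 1 producing underdispersion and ν < 1 overdispersion. A small Python sketch of the pmf, with a truncated normalizing constant and arbitrary parameter values, makes this visible.

        import numpy as np
        from scipy.special import gammaln

        # Sketch of the COM-Poisson pmf, P(Y=y) proportional to lam^y / (y!)^nu,
        # with the normalizing constant truncated at ymax (parameter values arbitrary).
        def com_poisson_pmf(y, lam, nu, ymax=200):
            support = np.arange(ymax + 1)
            log_terms = support * np.log(lam) - nu * gammaln(support + 1)
            log_z = np.logaddexp.reduce(log_terms)             # normalizing constant Z(lam, nu)
            return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - log_z)

        ys = np.arange(15)
        p = com_poisson_pmf(ys, lam=5.0, nu=1.5)               # nu > 1: underdispersed
        m = np.sum(ys * p)
        v = np.sum((ys - m) ** 2 * p)
        print(m, v)                                            # variance smaller than the mean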

  19. High order Poisson Solver for unbounded flows

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe

    2015-01-01

    This paper presents a high order method for solving the unbounded Poisson equation on a regular mesh using a Green’s function solution. The high order convergence was achieved by formulating mollified integration kernels, that were derived from a filter regularisation of the solution field...... the equations of fluid mechanics as an example, but can be used in many physical problems to solve the Poisson equation on a rectangular unbounded domain. For the two-dimensional case we propose an infinitely smooth test function which allows for arbitrary high order convergence. Using Gaussian smoothing....... The method was implemented on a rectangular domain using fast Fourier transforms (FFT) to increase computational efficiency. The Poisson solver was extended to directly solve the derivatives of the solution. This is achieved either by including the differential operator in the integration kernel...

  20. Selective Contrast Adjustment by Poisson Equation

    Directory of Open Access Journals (Sweden)

    Ana-Belen Petro

    2013-09-01

    Poisson Image Editing is a new technique that permits modifying the gradient vector field of an image and then recovering an image with a gradient approaching this modified gradient field. This amounts to solving a Poisson equation, an operation which can be efficiently performed by the Fast Fourier Transform (FFT). This paper describes an algorithm applying this technique, with two different variants. The first variant enhances the contrast by increasing the gradient in the dark regions of the image. This method is well adapted to images with back light or strong shadows, and reveals details in the shadows. The second variant of the same Poisson technique enhances all small gradients in the image, thus also sometimes revealing details and texture.
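
    The core numerical step, recovering an image from a modified gradient field by solving a discrete Poisson equation in the Fourier domain, can be sketched in a few lines of Python. Periodic boundary conditions and a uniform gradient amplification are assumed here purely for illustration; the published algorithm's boundary handling and selective gradient boost differ.

        import numpy as np

        # Sketch of the core step: recover an image from a modified gradient field by
        # solving a discrete Poisson equation with the FFT.  Periodic boundaries and a
        # uniform gradient amplification are assumptions made for this illustration.
        def solve_poisson_periodic(f):
            """Return u with discrete Laplacian(u) = f (periodic BC, zero-mean u)."""
            ny, nx = f.shape
            wx = 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(nx)) - 2.0
            wy = 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(ny)) - 2.0
            denom = wx[None, :] + wy[:, None]      # symbol of the 5-point Laplacian
            denom[0, 0] = 1.0                      # avoid dividing the zero-frequency mode
            u_hat = np.fft.fft2(f) / denom
            u_hat[0, 0] = 0.0                      # fix the free additive constant
            return np.real(np.fft.ifft2(u_hat))

        rng = np.random.default_rng(6)
        img = rng.random((64, 64))

        # forward differences, uniformly amplified as a stand-in for the selective boost
        gx = 1.5 * (np.roll(img, -1, axis=1) - img)
        gy = 1.5 * (np.roll(img, -1, axis=0) - img)

        # divergence of the modified field, then a Poisson solve to recover the image
        div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
        enhanced = solve_poisson_periodic(div) + img.mean()
        print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))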

  1. Poisson-Jacobi reduction of homogeneous tensors

    International Nuclear Information System (INIS)

    Grabowski, J; Iglesias, D; Marrero, J C; Padron, E; Urbanski, P

    2004-01-01

    The notion of homogeneous tensors is discussed. We show that there is a one-to-one correspondence between multivector fields on a manifold M, homogeneous with respect to a vector field Δ on M, and first-order polydifferential operators on a closed submanifold N of codimension 1 such that Δ is transversal to N. This correspondence relates the Schouten-Nijenhuis bracket of multivector fields on M to the Schouten-Jacobi bracket of first-order polydifferential operators on N and generalizes the Poissonization of Jacobi manifolds. Actually, it can be viewed as a super-Poissonization. This procedure of passing from a homogeneous multivector field to a first-order polydifferential operator can also be understood as a sort of reduction; in the standard case-a half of a Poisson reduction. A dual version of the above correspondence yields in particular the correspondence between Δ-homogeneous symplectic structures on M and contact structures on N

  2. Equilibrium stochastic dynamics of Poisson cluster ensembles

    Directory of Open Access Journals (Sweden)

    L.Bogachev

    2008-06-01

    The distribution μ of a Poisson cluster process in X = R^d (with n-point clusters) is studied via the projection of an auxiliary Poisson measure in the space of configurations in X^n, with the intensity measure being the convolution of the background intensity (of cluster centres) with the probability distribution of a generic cluster. We show that μ is quasi-invariant with respect to the group of compactly supported diffeomorphisms of X, and prove an integration by parts formula for μ. The corresponding equilibrium stochastic dynamics is then constructed using the method of Dirichlet forms.

  3. White Noise of Poisson Random Measures

    OpenAIRE

    Proske, Frank; Øksendal, Bernt

    2002-01-01

    We develop a white noise theory for Poisson random measures associated with a Lévy process. The starting point of this theory is a chaos expansion with kernels of polynomial type. We use this to construct the white noise of a Poisson random measure, which takes values in a certain distribution space. Then we show, how a Skorohod/Itô integral for point processes can be represented by a Bochner integral in terms of white noise of the random measure and a Wick product. Further, we apply these co...

  4. Climate change scenarios of temperature extremes evaluated using extreme value models based on homogeneous and non-homogeneous Poisson process

    Science.gov (United States)

    Kysely, Jan; Picek, Jan; Beranova, Romana; Plavcova, Eva

    2014-05-01

    The study compares statistical models for estimating high quantiles of daily temperatures based on the homogeneous and non-homogeneous Poisson process, and their applications in climate model simulations. Both types of the models make use of non-stationary peaks-over-threshold method and the Generalized Pareto distribution (GPD) for modelling extremes, but they differ in how the dependence of the model parameters on time index is captured. The homogeneous Poisson process model assumes that the intensity of the process is constant and the threshold used to delimit extremes changes with time; the non-homogeneous Poisson process assumes that the intensity of the process depends on time while the threshold is kept constant (Coles 2001). The model for time-dependency of the GPD parameters is selected according to the likelihood ratio test. Statistical arguments are provided to support the homogeneous Poisson process model, in which temporal dependence of the threshold is modelled in terms of regression quantiles (Kysely et al. 2010). Dependence of the results on the quantile chosen for the threshold (95-99%) is evaluated. The extreme value models are applied to analyse scenarios of changes in high quantiles of daily temperatures (20-yr and 100-yr return values) in transient simulations of several GCMs and RCMs for the 21st century. References: Coles S. (2001) An Introduction to Statistical Modeling of Extreme Values. Springer, 208 pp. Kysely J., Picek J., Beranova R. (2010) Estimating extremes in climate change simulations using the peaks-over-threshold method with a non-stationary threshold. Global and Planetary Change, 72, 55-68.
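
    The peaks-over-threshold machinery referred to above can be illustrated with a few lines of Python: excesses over a high fixed threshold are fitted with a Generalized Pareto distribution and converted into a return level. The synthetic data, the 97% threshold and the 20-year horizon are arbitrary choices for the example, and the stationary setup ignores the time-dependent thresholds that are the focus of the study.

        import numpy as np
        from scipy.stats import genpareto

        # Peaks-over-threshold sketch on synthetic daily temperatures: fit a GPD to
        # excesses over a fixed high threshold and convert it into a 20-year return
        # level.  Data, threshold and horizon are arbitrary.
        rng = np.random.default_rng(2)
        temps = rng.normal(25, 5, size=30 * 365)          # 30 years of synthetic daily values

        u = np.quantile(temps, 0.97)                      # fixed threshold
        exc = temps[temps > u] - u
        c, loc, scale = genpareto.fit(exc, floc=0.0)      # GPD fit to the excesses

        zeta_u = exc.size / temps.size                    # exceedance probability of u
        m = 20 * 365                                      # observations in 20 years
        ret_level = u + genpareto.ppf(1.0 - 1.0 / (m * zeta_u), c, loc=0.0, scale=scale)
        print(u, ret_level)                               # threshold and 20-year return value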

  5. "Logits and Tigers and Bears, Oh My! A Brief Look at the Simple Math of Logistic Regression and How It Can Improve Dissemination of Results"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2012-06-01

    Logistic regression is slowly gaining acceptance in the social sciences, and fills an important niche in the researcher's toolkit: being able to predict important outcomes that are not continuous in nature. While OLS regression is a valuable tool, it cannot routinely be used to predict outcomes that are binary or categorical in nature. These outcomes represent important social science lines of research: retention in, or dropout from school, using illicit drugs, underage alcohol consumption, antisocial behavior, purchasing decisions, voting patterns, risky behavior, and so on. The goal of this paper is to briefly lead the reader through the surprisingly simple mathematics that underpins logistic regression: probabilities, odds, odds ratios, and logits. Anyone with spreadsheet software or a scientific calculator can follow along, and in turn, this knowledge can be used to make much more interesting, clear, and accurate presentations of results (especially to non-technical audiences). In particular, I will share an example of an interaction in logistic regression, how it was originally graphed, and how the graph was made substantially more user-friendly by converting the original metric (logits) to a more readily interpretable metric (probability) through three simple steps.
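
    The probability-odds-logit chain the paper walks through is indeed simple enough to verify with a calculator or a few lines of code; the Python sketch below converts in both directions and evaluates a hypothetical fitted model on the probability scale. The coefficients are invented for illustration.

        import numpy as np

        # The probability -> odds -> logit chain, and its inverse for turning fitted
        # logistic-regression coefficients back into probabilities (coefficients are
        # invented for illustration).
        def prob_to_logit(p):
            odds = p / (1.0 - p)
            return np.log(odds)

        def logit_to_prob(logit):
            return 1.0 / (1.0 + np.exp(-logit))

        print(prob_to_logit(0.75))                 # odds = 3, logit = log(3) ~ 1.10

        b0, b1 = -1.2, 0.8                         # hypothetical fitted model: logit(p) = b0 + b1*x
        for x in (0.0, 1.0, 2.0):
            print(x, logit_to_prob(b0 + b1 * x))   # predictions back on the probability scale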

  6. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: "This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable." -Journal of the American Statistical Association. Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

  7. A dictionary learning approach for Poisson image deblurring.

    Science.gov (United States)

    Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong

    2013-07-01

    The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, recently sparse representations of images have shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, the pixel-based total variation regularization term and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio value and the method noise, the proposed algorithm outperforms state-of-the-art methods.

  8. Exponential Stability of Stochastic Systems with Delay and Poisson Jumps

    Directory of Open Access Journals (Sweden)

    Wenli Zhu

    2014-01-01

    This paper focuses on a class of nonlinear stochastic delay systems with Poisson jumps, studied using Lyapunov stability theory, stochastic analysis, and inequality techniques. The existence and uniqueness of the adapted solution to such systems are proved by applying the fixed point theorem. By constructing a Lyapunov function and using Doob's martingale inequality and the Borel-Cantelli lemma, sufficient conditions are given to establish the exponential stability in the mean square of such systems, and we prove that exponential stability in the mean square of such systems implies almost sure exponential stability. The obtained results show that if a stochastic system is exponentially stable and the time delay is sufficiently small, then the corresponding stochastic delay system with Poisson jumps will remain exponentially stable; an upper limit on the time delay is obtained from these results when the system is exponentially stable, and the conditions are easily verified and applied in practice.

  9. A hybrid sampler for Poisson-Kingman mixture models

    OpenAIRE

    Lomeli, M.; Favaro, S.; Teh, Y. W.

    2015-01-01

    This paper concerns the introduction of a new Markov Chain Monte Carlo scheme for posterior sampling in Bayesian nonparametric mixture models with priors that belong to the general Poisson-Kingman class. We present a novel compact way of representing the infinite dimensional component of the model such that while explicitly representing this infinite component it has less memory and storage requirements than previous MCMC schemes. We describe comparative simulation results demonstrating the e...

  10. Spatial Nonhomogeneous Poisson Process in Corrosion Management

    NARCIS (Netherlands)

    López De La Cruz, J.; Kuniewski, S.P.; Van Noortwijk, J.M.; Guriérrez, M.A.

    2008-01-01

    A method to test the assumption of nonhomogeneous Poisson point processes is implemented to analyze corrosion pit patterns. The method is calibrated with three artificially generated patterns and manages to accurately assess whether a pattern distribution is random, regular, or clustered. The

  11. Efficient information transfer by Poisson neurons

    Czech Academy of Sciences Publication Activity Database

    Košťál, Lubomír; Shinomoto, S.

    2016-01-01

    Roč. 13, č. 3 (2016), s. 509-520 ISSN 1547-1063 R&D Projects: GA ČR(CZ) GA15-08066S Institutional support: RVO:67985823 Keywords : information capacity * Poisson neuron * metabolic cost * decoding error Subject RIV: BD - Theory of Information Impact factor: 1.035, year: 2016

  12. Natural Poisson structures of nonlinear plasma dynamics

    International Nuclear Information System (INIS)

    Kaufman, A.N.

    1982-06-01

    Hamiltonian field theories, for models of nonlinear plasma dynamics, require a Poisson bracket structure for functionals of the field variables. These are presented, applied, and derived for several sets of field variables: coherent waves, incoherent waves, particle distributions, and multifluid electrodynamics. Parametric coupling of waves and plasma yields concise expressions for ponderomotive effects (in kinetic and fluid models) and for induced scattering

  13. Natural Poisson structures of nonlinear plasma dynamics

    International Nuclear Information System (INIS)

    Kaufman, A.N.

    1982-01-01

    Hamiltonian field theories, for models of nonlinear plasma dynamics, require a Poisson bracket structure for functionals of the field variables. These are presented, applied, and derived for several sets of field variables: coherent waves, incoherent waves, particle distributions, and multifluid electrodynamics. Parametric coupling of waves and plasma yields concise expressions for ponderomotive effects (in kinetic and fluid models) and for induced scattering. (Auth.)

  14. Poisson brackets for fluids and plasmas

    International Nuclear Information System (INIS)

    Morrison, P.J.

    1982-01-01

    Noncanonical yet Hamiltonian descriptions are presented of many of the non-dissipative field equations that govern fluids and plasmas. The dynamical variables are the usually encountered physical variables. These descriptions have the advantage that gauge conditions are absent, but at the expense of introducing peculiar Poisson brackets. Clebsch-like potential descriptions that reverse this situation are also introduced

  15. Dimensional reduction for generalized Poisson brackets

    Science.gov (United States)

    Acatrinei, Ciprian Sorin

    2008-02-01

    We discuss dimensional reduction for Hamiltonian systems which possess nonconstant Poisson brackets between pairs of coordinates and between pairs of momenta. The associated Jacobi identities imply that the dimensionally reduced brackets are always constant. Some examples are given alongside the general theory.

  16. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  17. Improved mesh generator for the POISSON Group Codes

    International Nuclear Information System (INIS)

    Gupta, R.C.

    1987-01-01

    This paper describes the improved mesh generator of the POISSON Group Codes. These improvements enable one to have full control over the way the mesh is generated and in particular the way the mesh density is distributed throughout this model. A higher mesh density in certain regions coupled with a successively lower mesh density in others keeps the accuracy of the field computation high and the requirements on the computer time and computer memory low. The mesh is generated with the help of codes AUTOMESH and LATTICE; both have gone through a major upgrade. Modifications have also been made in the POISSON part of these codes. We shall present an example of a superconducting dipole magnet to explain how to use this code. The results of field computations are found to be reliable within a few parts in a hundred thousand even in such complex geometries

  18. An adaptive fast multipole accelerated Poisson solver for complex geometries

    Science.gov (United States)

    Askham, T.; Cerfon, A. J.

    2017-09-01

    We present a fast, direct and adaptive Poisson solver for complex two-dimensional geometries based on potential theory and fast multipole acceleration. More precisely, the solver relies on the standard decomposition of the solution as the sum of a volume integral to account for the source distribution and a layer potential to enforce the desired boundary condition. The volume integral is computed by applying the FMM on a square box that encloses the domain of interest. For the sake of efficiency and convergence acceleration, we first extend the source distribution (the right-hand side in the Poisson equation) to the enclosing box as a C0 function using a fast, boundary integral-based method. We demonstrate on multiply connected domains with irregular boundaries that this continuous extension leads to high accuracy without excessive adaptive refinement near the boundary and, as a result, to an extremely efficient "black box" fast solver.

  19. A Poisson-lognormal conditional-autoregressive model for multivariate spatial analysis of pedestrian crash counts across neighborhoods.

    Science.gov (United States)

    Wang, Yiyi; Kockelman, Kara M

    2013-11-01

    This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. DEFINING THE EFFECTIVENESS OF FACTORS IN PROCESS OF DRYING INDUSTRIAL BAKERS YEAST BY USING TAGUCHI METHOD AND REGRESSION ANALYSIS, AND COMPARING THE RESULTS

    Directory of Open Access Journals (Sweden)

    Semra Boran

    2007-09-01

    The Taguchi Method and Regression Analysis have widespread applications in statistical research. It can be said that the Taguchi Method is one of the most frequently used methods, especially in optimization problems, but applications of this method are not common in the food industry. In this study, optimal operating parameters were determined for an industrial-size fluidized bed dryer by using the Taguchi Method. Then the effects of the operating parameters on activity value (the quality characteristic of this problem) were calculated by regression analysis. Finally, the results of the two methods were compared. To summarise, the average activity value was found to be 660 for the 400 kg loading, with an average drying time of 26 minutes, by using the factors and levels taken from the application of the Taguchi Method, whereas in normal conditions (with 600 kg loading) the average activity value was found to be 630 and the drying time 28 minutes. Application of the Taguchi Method caused a 15% rise in activity value.

  1. Val-boroPro accelerates T cell priming via modulation of dendritic cell trafficking resulting in complete regression of established murine tumors.

    Directory of Open Access Journals (Sweden)

    Meghaan P Walsh

    Although tumors naturally prime adaptive immune responses, tolerance may limit the capacity to control progression and can compromise effectiveness of immune-based therapies for cancer. Post-proline cleaving enzymes (PPCE) modulate protein function through N-terminal dipeptide cleavage, and inhibition of these enzymes has been shown to have anti-tumor activity. We investigated the mechanism by which Val-boroPro, a boronic dipeptide that inhibits post-proline cleaving enzymes, mediates tumor regression and tested whether this agent could serve as a novel immune adjuvant to dendritic cell vaccines in two different syngeneic murine tumors. In mice challenged with MB49, which expresses the HY antigen complex, T cell responses primed by the tumor with and without Val-boroPro were measured using interferon gamma ELISPOT. Antibody depletion and gene-deficient mice were used to establish the immune cell subsets required for tumor regression. We demonstrate that Val-boroPro mediates tumor eradication by accelerating the expansion of tumor-specific T cells. Interestingly, T cells primed by tumor during Val-boroPro treatment demonstrate increased capacity to reject tumors following adoptive transfer without further treatment of the recipient. Val-boroPro-mediated tumor regression requires dendritic cells and is associated with enhanced trafficking of dendritic cells to tumor-draining lymph nodes. Finally, dendritic cell vaccination combined with Val-boroPro treatment results in complete regression of established tumors. Our findings demonstrate that Val-boroPro has antitumor activity and a novel mechanism of action that involves more robust DC trafficking with earlier priming of T cells. Finally, we show that Val-boroPro has potent adjuvant properties resulting in an effective therapeutic vaccine.

  2. Obtaining adjusted prevalence ratios from logistic regression models in cross-sectional studies.

    Science.gov (United States)

    Bastos, Leonardo Soares; Oliveira, Raquel de Vasconcellos Carvalhaes de; Velasque, Luciane de Souza

    2015-03-01

    In the last decades, the use of the epidemiological prevalence ratio (PR) instead of the odds ratio has been debated as a measure of association in cross-sectional studies. This article addresses the main difficulties in the use of statistical models for the calculation of the PR: convergence problems, availability of tools and inappropriate assumptions. We implement the direct approach to estimate the PR from binary regression models based on two methods proposed by Wilcosky & Chambless and compare it with different methods. We used three examples and compared the crude and adjusted estimates of the PR with the estimates obtained by use of log-binomial and Poisson regression and with the prevalence odds ratio (POR). PRs obtained from the direct approach resulted in values close enough to those obtained by log-binomial and Poisson regression, while the POR overestimated the PR. The model implemented here showed the following advantages: no numerical instability; it assumes an adequate probability distribution; and it is available through the R statistical package.
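
    A common concrete implementation of the ideas compared in this paper is Poisson regression with a robust (sandwich) variance applied to binary cross-sectional data, with exponentiated coefficients read as adjusted prevalence ratios. The Python sketch below illustrates this on simulated data; the variables are hypothetical, and the robust-covariance option is assumed to be available in the installed statsmodels version.

        import numpy as np
        import statsmodels.api as sm

        # Sketch of the "Poisson regression with robust variance" route to adjusted
        # prevalence ratios from cross-sectional binary data (simulated, hypothetical
        # variables; not the analysis from the paper).
        rng = np.random.default_rng(7)
        n = 5000
        exposed = rng.integers(0, 2, size=n)
        age = rng.normal(40, 10, size=n)
        p = 0.2 * np.where(exposed == 1, 1.5, 1.0) * np.exp(0.01 * (age - 40))
        outcome = rng.binomial(1, np.clip(p, 0, 1))

        X = sm.add_constant(np.column_stack([exposed, age]))
        fit = sm.GLM(outcome, X, family=sm.families.Poisson()).fit(cov_type="HC1")
        print(np.exp(fit.params[1]))   # adjusted prevalence ratio for exposure (about 1.5 here)

    The log-binomial model mentioned in the abstract is the other standard option, at the cost of the convergence problems the authors discuss.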

  3. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    Science.gov (United States)

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.

  4. A generalized Poisson solver for first-principles device simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch [Nanoscale Simulations, ETH Zürich, 8093 Zürich (Switzerland); Brück, Sascha; Luisier, Mathieu [Integrated Systems Laboratory, ETH Zürich, 8092 Zürich (Switzerland)

    2016-01-28

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.

  5. Poisson-Like Spiking in Circuits with Probabilistic Synapses

    Science.gov (United States)

    Moreno-Bote, Rubén

    2014-01-01

    Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates covering from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705

  6. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments

    Energy Technology Data Exchange (ETDEWEB)

    Fisicaro, G., E-mail: giuseppe.fisicaro@unibas.ch; Goedecker, S. [Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel (Switzerland); Genovese, L. [University of Grenoble Alpes, CEA, INAC-SP2M, L-Sim, F-38000 Grenoble (France); Andreussi, O. [Institute of Computational Science, Università della Svizzera Italiana, Via Giuseppe Buffi 13, CH-6904 Lugano (Switzerland); Theory and Simulations of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, Station 12, CH-1015 Lausanne (Switzerland); Marzari, N. [Theory and Simulations of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, Station 12, CH-1015 Lausanne (Switzerland)

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann, allowing to solve iteratively the minimization problem with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.

  7. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments

    International Nuclear Information System (INIS)

    Fisicaro, G.; Goedecker, S.; Genovese, L.; Andreussi, O.; Marzari, N.

    2016-01-01

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann, allowing to solve iteratively the minimization problem with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes

  8. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    Science.gov (United States)

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
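
    The flexibility described above comes from the hyper-Poisson pmf P(Y=y) = λ^y / [(γ)_y · ₁F₁(1; γ; λ)], where (γ)_y is the Pochhammer symbol: γ = 1 recovers the Poisson, while values above and below 1 give over- and underdispersion. A short Python sketch with arbitrary parameter values shows the effect on the mean-variance relationship.

        import numpy as np
        from scipy.special import hyp1f1, poch

        # Sketch of the hyper-Poisson pmf P(Y=y) = lam^y / ((gamma)_y * 1F1(1; gamma; lam)):
        # gamma = 1 is the Poisson, gamma > 1 gives overdispersion, gamma < 1
        # underdispersion (parameter values arbitrary).
        def hyper_poisson_pmf(y, lam, gamma):
            return lam ** y / (poch(gamma, y) * hyp1f1(1.0, gamma, lam))

        ys = np.arange(30)
        for g in (0.5, 1.0, 2.0):
            p = hyper_poisson_pmf(ys, lam=4.0, gamma=g)
            m = np.sum(ys * p)
            v = np.sum((ys - m) ** 2 * p)
            print(g, round(m, 3), round(v, 3))     # variance below, equal to, above the mean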

  9. Linear odd Poisson bracket on Grassmann variables

    International Nuclear Information System (INIS)

    Soroka, V.A.

    1999-01-01

    A linear odd Poisson bracket (antibracket) realized solely in terms of Grassmann variables is suggested. It is revealed that the bracket, which corresponds to a semi-simple Lie group, has at once three Grassmann-odd nilpotent Δ-like differential operators of the first, the second and the third orders with respect to Grassmann derivatives, in contrast with the canonical odd Poisson bracket having the only Grassmann-odd nilpotent differential Δ-operator of the second order. It is shown that these Δ-like operators together with a Grassmann-odd nilpotent Casimir function of this bracket form a finite-dimensional Lie superalgebra. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  10. Degenerate odd Poisson bracket on Grassmann variables

    International Nuclear Information System (INIS)

    Soroka, V.A.

    2000-01-01

    A linear degenerate odd Poisson bracket (antibracket) realized solely on Grassmann variables is proposed. It is revealed that this bracket has at once three Grassmann-odd nilpotent Δ-like differential operators of the first, second and third orders with respect to the Grassmann derivatives. It is shown that these Δ-like operators, together with the Grassmann-odd nilpotent Casimir function of this bracket, form a finite-dimensional Lie superalgebra

  11. Poisson/Superfish codes for personal computers

    International Nuclear Information System (INIS)

    Humphries, S.

    1992-01-01

    The Poisson/Superfish codes calculate static E or B fields in two-dimensions and electromagnetic fields in resonant structures. New versions for 386/486 PCs and Macintosh computers have capabilities that exceed the mainframe versions. Notable improvements are interactive graphical post-processors, improved field calculation routines, and a new program for charged particle orbit tracking. (author). 4 refs., 1 tab., figs

  12. Elementary derivation of Poisson structures for fluid dynamics and electrodynamics

    International Nuclear Information System (INIS)

    Kaufman, A.N.

    1982-01-01

    The canonical Poisson structure of the microscopic Lagrangian is used to deduce the noncanonical Poisson structure for the macroscopic Hamiltonian dynamics of a compressible neutral fluid and of fluid electrodynamics

  13. Reduction of Nambu-Poisson Manifolds by Regular Distributions

    Science.gov (United States)

    Das, Apurba

    2018-03-01

    The version of Marsden-Ratiu reduction theorem for Nambu-Poisson manifolds by a regular distribution has been studied by Ibáñez et al. In this paper we show that the reduction is always ensured unless the distribution is zero. Next we extend the more general Falceto-Zambon Poisson reduction theorem for Nambu-Poisson manifolds. Finally, we define gauge transformations of Nambu-Poisson structures and show that these transformations commute with the reduction procedure.

  14. An improved FMM Algorithm of the 3d-linearized Poisson-Boltzmann Equation

    Directory of Open Access Journals (Sweden)

    Mehrez issa

    2015-06-01

    This paper presents a new FMM algorithm for the linearized Poisson-Boltzmann equation in three dimensions. The performance of the proposed algorithm is assessed on an example in three dimensions and compared with the direct method. The numerical results show the power of the new method, which reduces the time spent on particle interactions by using the diagonal form of the translation operators for the linearized Poisson-Boltzmann equation.

  15. Stochastic Averaging of Strongly Nonlinear Oscillators under Poisson White Noise Excitation

    Science.gov (United States)

    Zeng, Y.; Zhu, W. Q.

    A stochastic averaging method for single-degree-of-freedom (SDOF) strongly nonlinear oscillators under Poisson white noise excitation is proposed by using the so-called generalized harmonic functions. The stationary averaged generalized Fokker-Planck-Kolmogorov (GFPK) equation is solved by using the classical perturbation method. Then the procedure is applied to estimate the stationary probability density of response of a Duffing-van der Pol oscillator under Poisson white noise excitation. Theoretical results agree well with Monte Carlo simulations.

  16. Stability of the trivial solution for linear stochastic differential equations with Poisson white noise

    International Nuclear Information System (INIS)

    Grigoriu, Mircea; Samorodnitsky, Gennady

    2004-01-01

    Two methods are considered for assessing the asymptotic stability of the trivial solution of linear stochastic differential equations driven by Poisson white noise, interpreted as the formal derivative of a compound Poisson process. The first method attempts to extend a result for diffusion processes satisfying linear stochastic differential equations to the case of linear equations with Poisson white noise. The developments for the method are based on Ito's formula for semimartingales and Lyapunov exponents. The second method is based on a geometric ergodic theorem for Markov chains providing a criterion for the asymptotic stability of the solution of linear stochastic differential equations with Poisson white noise. Two examples are presented to illustrate the use and evaluate the potential of the two methods. The examples demonstrate limitations of the first method and the generality of the second method

  17. Modeling Tetanus Neonatorum case using the regression of negative binomial and zero-inflated negative binomial

    Science.gov (United States)

    Amaliana, Luthfatul; Sa'adah, Umu; Wayan Surya Wardhani, Ni

    2017-12-01

    Tetanus Neonatorum is an infectious disease that can be prevented by immunization. The number of Tetanus Neonatorum cases in East Java Province was the highest in Indonesia up to 2015. Tetanus Neonatorum data contain overdispersion and a large proportion of zero counts. Negative Binomial (NB) regression is an alternative method when overdispersion occurs in Poisson regression. However, data containing both overdispersion and zero-inflation are more appropriately analyzed using Zero-Inflated Negative Binomial (ZINB) regression. The purposes of this study are: (1) to model Tetanus Neonatorum cases in East Java Province, which have a 71.05 percent proportion of zeros, by using NB and ZINB regression, and (2) to obtain the best model. The result of this study indicates that ZINB is better than NB regression, with a smaller AIC.

  18. Application of Negative Binomial Regression for Assessing Public ...

    African Journals Online (AJOL)

    Because the variance was nearly two times greater than the mean, the negative binomial regression model provided an improved fit to the data and accounted better for overdispersion than the Poisson regression model, which assumed that the mean and variance are the same. The level of education and race were found

  19. Characterization of stochastic resonance in a bistable system with Poisson white noise using statistical complexity measures

    Science.gov (United States)

    He, Meijuan; Xu, Wei; Sun, Zhongkui; Du, Lin

    2015-11-01

    This paper mainly investigates the phenomenon of stochastic resonance (SR) in a bistable system subjected to Poisson white noise. Statistical complexity measures, as new tools, are first employed to quantify SR phenomenon of given system with Poisson white noise. To begin with, the effect of Poisson white noise on SR phenomenon is studied. The results demonstrate that the curves of statistical complexity measures as a function of Poisson white noise intensity exhibit non-monotonous structure, revealing the existence of SR phenomenon. Besides, it should be noted that small mean arrival rate of Poisson white noise can promote the occurrence of SR. In order to verify the effectiveness of statistical complexity measures, signal-to-noise ratio (SNR) is also calculated. A good agreement among these results obtained by statistical complexity measures and SNR is achieved, which reveals that statistical complexity measures are suitable tools for characterizing SR phenomenon in the presence of Poisson white noise. Then, the effects of amplitude and frequency of different periodic signals, including cosine, rectangular and triangular signal, on SR behavior are investigated, respectively. One can observe that, in the case of same amplitude or frequency of signal, the influence of rectangular signal on SR phenomenon is the most significant among these three signals.

  20. Solving (2 + 1)-dimensional sine-Poisson equation by a modified variable separated ordinary differential equation method

    International Nuclear Information System (INIS)

    Ka-Lin, Su; Yuan-Xi, Xie

    2010-01-01

    By introducing a more general auxiliary ordinary differential equation (ODE), a modified variable separated ordinary differential equation method is presented for solving the (2 + 1)-dimensional sine-Poisson equation. As a result, many explicit and exact solutions of the (2 + 1)-dimensional sine-Poisson equation are derived in a simple manner by this technique. (general)

  1. Correlation of results obtained by in-vivo optical spectroscopy with measured blood oxygen saturation using a positive linear regression fit

    Science.gov (United States)

    McCormick, Patrick W.; Lewis, Gary D.; Dujovny, Manuel; Ausman, James I.; Stewart, Mick; Widman, Ronald A.

    1992-05-01

    Near infrared light generated by specialized instrumentation was passed through artificially oxygenated human blood during simultaneous sampling by a co-oximeter. Characteristic absorption spectra were analyzed to calculate the ratio of oxygenated to reduced hemoglobin. A positive linear regression fit was obtained between diffuse transmission oximetry and measured blood oxygenation over the range 23% to 99% (r2 = .98); a signal was observed in the patient over time. The procedure was able to be performed clinically without difficulty; rSO2 values recorded continuously demonstrate the usefulness of the technique. Using the same instrumentation, arterial input and cerebral response functions, generated by IV tracer bolus, were deconvoluted to measure mean cerebral transit time. Data collected over time provided a sensitive index of changes in cerebral blood flow as a result of therapeutic maneuvers.

  2. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    Science.gov (United States)

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
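
    The gamma-mixed Poisson construction described above can be checked directly by simulation; a minimal sketch with illustrative parameters (not the seizure data) follows.

```python
# Minimal sketch: a gamma-mixed Poisson sample is negative binomial, the
# classic construction of overdispersed counts mentioned above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
shape, scale = 2.0, 1.5            # gamma random effect for the mean
lam = rng.gamma(shape, scale, size=100_000)
counts = rng.poisson(lam)

# Matching negative binomial: n = shape, p = 1 / (1 + scale).
nb = stats.nbinom(n=shape, p=1.0 / (1.0 + scale))
print(counts.mean(), nb.mean())     # both ~ shape * scale = 3.0
print(counts.var(), nb.var())       # both ~ shape * scale * (1 + scale) = 7.5
```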

  3. Algebraic properties of compatible Poisson brackets

    Science.gov (United States)

    Zhang, Pumei

    2014-05-01

    We discuss algebraic properties of a pencil generated by two compatible Poisson tensors A(x) and B(x). From the algebraic viewpoint this amounts to studying the properties of a pair of skew-symmetric bilinear forms A and B defined on a finite-dimensional vector space. We describe the Lie group G_P of linear automorphisms of the pencil P = {A + λB}. In particular, we obtain an explicit formula for the dimension of G_P and discuss some other algebraic properties such as solvability and Levi-Malcev decomposition.

  4. Localization of Point Sources for Poisson Equation using State Observers

    KAUST Repository

    Majeed, Muhammad Usman

    2016-08-09

    A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of these sub-problem solution profiles localizes point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
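
    For context, a minimal sketch of the generic finite-difference forward solve of the Poisson equation on a rectangle is given below; it assumes a synthetic near-point source and standard scipy sparse tools, and it does not implement the iterative observer scheme of the record.

```python
# Minimal sketch: 5-point finite-difference Poisson solve on the unit square
# with homogeneous Dirichlet boundary data (generic forward problem only).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 50                                # interior grid points per direction
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# Narrow Gaussian standing in for a point-like source (invented location).
f = np.exp(-((X - 0.3) ** 2 + (Y - 0.7) ** 2) / (2 * 0.02 ** 2))

# 2D discrete Laplacian as a Kronecker sum of 1D second-difference matrices.
T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = sp.kronsum(T, T, format="csr")

u = spsolve(A, -f.ravel())            # solves Laplacian(u) = -f, u = 0 on the boundary
print(u.reshape(n, n).max())
```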

  5. Combined application of information theory on laboratory results with classification and regression tree analysis: analysis of unnecessary biopsy for prostate cancer.

    Science.gov (United States)

    Hwang, Sang-Hyun; Pyo, Tina; Oh, Heung-Bum; Park, Hyun Jun; Lee, Kwan-Jeh

    2013-01-16

    The probability of a prostate cancer-positive biopsy result varies with PSA concentration. Thus, we applied information theory to classification and regression tree (CART) analysis for decision making, predicting the probability of a biopsy result at various PSA concentrations. From 2007 to 2009, prostate biopsies were performed in 664 referred patients in a tertiary hospital. We created 2 CART models based on information theory: one for moderate uncertainty (PSA concentration: 2.5-10 ng/ml) and the other for high uncertainty (PSA concentration: 10-25 ng/ml). The CART model for moderate uncertainty (n=321) had 3 splits based on PSA density (PSAD), hypoechoic nodules, and age, and the other CART for high uncertainty (n=160) had 2 splits based on prostate volume and percent-free PSA. In this validation set, the patients (14.3% and 14.0% for the moderate and high uncertainty groups, respectively) could avoid unnecessary biopsies without false-negative results. Using these CART models based on the uncertainty information of PSA, the overall reduction in unnecessary prostate biopsies was 14.0-14.3% and the CART models were simplified. Using the uncertainty of laboratory results from an information-theoretic approach can provide additional information for decision analysis such as CART. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Logits and Tigers and Bears, Oh My! A Brief Look at the Simple Math of Logistic Regression and How It Can Improve Dissemination of Results

    Science.gov (United States)

    Osborne, Jason W.

    2012-01-01

    Logistic regression is slowly gaining acceptance in the social sciences, and fills an important niche in the researcher's toolkit: being able to predict important outcomes that are not continuous in nature. While OLS regression is a valuable tool, it cannot routinely be used to predict outcomes that are binary or categorical in nature. These…

  7. Binomial vs Poisson statistics in radiation studies

    International Nuclear Information System (INIS)

    Foster, J.; Kouris, K.; Spyrou, N.M.; Matthews, I.P.; Welsh National School of Medicine, Cardiff

    1983-01-01

    The processes of radioactive decay, decay and growth of radioactive species in a radioactive chain, prompt emission(s) from nuclear reactions, conventional activation and cyclic activation are discussed with respect to their underlying statistical density function. By considering the transformation(s) that each nucleus may undergo it is shown that all these processes are fundamentally binomial. Formally, when the number of experiments N is large and the probability of success p is close to zero, the binomial is closely approximated by the Poisson density function. In radiation and nuclear physics, N is always large: each experiment can be conceived of as the observation of the fate of each of the N nuclei initially present. Whether p, the probability that a given nucleus undergoes a prescribed transformation, is close to zero depends on the process and nuclide(s) concerned. Hence, although a binomial description is always valid, the Poisson approximation is not always adequate. Therefore further clarification is provided as to when the binomial distribution must be used in the statistical treatment of detected events. (orig.)
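
    The binomial-to-Poisson approximation discussed above is easy to quantify numerically; the sketch below (assumed parameter values) compares the two probability mass functions by total variation distance.

```python
# Minimal sketch: when N is large and p is small the binomial pmf is close to
# the Poisson pmf with mean N*p; for p that is not small the two differ.
from scipy import stats
import numpy as np

def tv_distance(N, p):
    k = np.arange(0, N + 1)
    binom_pmf = stats.binom.pmf(k, N, p)
    pois_pmf = stats.poisson.pmf(k, N * p)
    return 0.5 * np.sum(np.abs(binom_pmf - pois_pmf))

print(tv_distance(10_000, 1e-4))   # p close to zero: tiny distance
print(tv_distance(10_000, 0.3))    # p not small: Poisson approximation is poor
```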

  8. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia

    2018-04-10

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite the fact that this leads to a proper posterior for the regression coefficients, the resulting posterior variance is however affected by an unidentifiable parameter, hence any inferential procedure besides point estimation is unreliable. We propose a model-based approach for quantile regression that considers quantiles of the generating distribution directly, and thus allows for a proper uncertainty quantification. We then create a link between quantile regression and generalised linear models by mapping the quantiles to the parameter of the response variable, and we exploit it to fit the model with R-INLA. We also extend it to the case of discrete responses, where there is no 1-to-1 relationship between quantiles and the distribution's parameter, by introducing continuous generalisations of the most common discrete variables (Poisson, Binomial and Negative Binomial) to be exploited in the fitting.

  9. Sparsity-based Poisson denoising with dictionary learning.

    Science.gov (United States)

    Giryes, Raja; Elad, Michael

    2014-12-01

    The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist so as to convert the Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR and achieves state-of-the-art results in cases of low SNR.
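
    One widely used transformation of the kind mentioned above is the Anscombe transform; the sketch below illustrates how it stabilises Poisson variance at high counts and degrades at low counts (it is not the sparse-coding scheme proposed in the paper).

```python
# Minimal sketch of a variance-stabilising transformation for Poisson data:
# the Anscombe transform 2*sqrt(x + 3/8) makes the noise approximately
# Gaussian with unit variance at high counts.
import numpy as np

rng = np.random.default_rng(2)

for lam in (0.5, 5.0, 50.0):
    x = rng.poisson(lam, size=200_000)
    a = 2.0 * np.sqrt(x + 3.0 / 8.0)
    # Variance of the raw counts grows with lam; after the transform it
    # approaches 1, but the stabilisation degrades for small lam (low SNR).
    print(lam, x.var(), a.var())
```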

  10. Prescription-induced jump distributions in multiplicative Poisson processes

    Science.gov (United States)

    Suweis, Samir; Porporato, Amilcare; Rinaldo, Andrea; Maritan, Amos

    2011-06-01

    Generalized Langevin equations (GLE) with multiplicative white Poisson noise pose the usual prescription dilemma leading to different evolution equations (master equations) for the probability distribution. Contrary to the case of multiplicative Gaussian white noise, the Stratonovich prescription does not correspond to the well-known midpoint (or any other intermediate) prescription. By introducing an inertial term in the GLE, we show that the Itô and Stratonovich prescriptions naturally arise depending on two time scales, one induced by the inertial term and the other determined by the jump event. We also show that, when the multiplicative noise is linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We apply these results to a recently proposed stochastic model describing the dynamics of primary soil salinization, in which the salt mass balance within the soil root zone requires the analysis of different prescriptions arising from the resulting stochastic differential equation forced by multiplicative white Poisson noise, the features of which are tailored to the characters of the daily precipitation. A method is finally suggested to infer the most appropriate prescription from the data.

  11. A new inverse regression model applied to radiation biodosimetry

    Science.gov (United States)

    Higueras, Manuel; Puig, Pedro; Ainsbury, Elizabeth A.; Rothkamm, Kai

    2015-01-01

    Biological dosimetry based on chromosome aberration scoring in peripheral blood lymphocytes enables timely assessment of the ionizing radiation dose absorbed by an individual. Here, new Bayesian-type count data inverse regression methods are introduced for situations where responses are Poisson or two-parameter compound Poisson distributed. Our Poisson models are calculated in a closed form, by means of Hermite and negative binomial (NB) distributions. For compound Poisson responses, complete and simplified models are provided. The simplified models are also expressible in a closed form and involve the use of compound Hermite and compound NB distributions. Three examples of applications are given that demonstrate the usefulness of these methodologies in cytogenetic radiation biodosimetry and in radiotherapy. We provide R and SAS codes which reproduce these examples. PMID:25663804

  12. On a Poisson homogeneous space of bilinear forms with a Poisson-Lie action

    Science.gov (United States)

    Chekhov, L. O.; Mazzocco, M.

    2017-12-01

    Let \\mathscr A be the space of bilinear forms on C^N with defining matrices A endowed with a quadratic Poisson structure of reflection equation type. The paper begins with a short description of previous studies of the structure, and then this structure is extended to systems of bilinear forms whose dynamics is governed by the natural action A\\mapsto B ABT} of the {GL}_N Poisson-Lie group on \\mathscr A. A classification is given of all possible quadratic brackets on (B, A)\\in {GL}_N× \\mathscr A preserving the Poisson property of the action, thus endowing \\mathscr A with the structure of a Poisson homogeneous space. Besides the product Poisson structure on {GL}_N× \\mathscr A, there are two other (mutually dual) structures, which (unlike the product Poisson structure) admit reductions by the Dirac procedure to a space of bilinear forms with block upper triangular defining matrices. Further generalisations of this construction are considered, to triples (B,C, A)\\in {GL}_N× {GL}_N× \\mathscr A with the Poisson action A\\mapsto B ACT}, and it is shown that \\mathscr A then acquires the structure of a Poisson symmetric space. Generalisations to chains of transformations and to the quantum and quantum affine algebras are investigated, as well as the relations between constructions of Poisson symmetric spaces and the Poisson groupoid. Bibliography: 30 titles.

  13. Predictors of adherence with self-care guidelines among persons with type 2 diabetes: results from a logistic regression tree analysis.

    Science.gov (United States)

    Yamashita, Takashi; Kart, Cary S; Noe, Douglas A

    2012-12-01

    Type 2 diabetes is known to contribute to health disparities in the U.S. and failure to adhere to recommended self-care behaviors is a contributing factor. Intervention programs face difficulties as a result of patient diversity and limited resources. With data from the 2005 Behavioral Risk Factor Surveillance System, this study employs a logistic regression tree algorithm to identify characteristics of sub-populations with type 2 diabetes according to their reported frequency of adherence to four recommended diabetes self-care behaviors including blood glucose monitoring, foot examination, eye examination and HbA1c testing. Using Andersen's health behavior model, need factors appear to dominate the definition of which sub-groups were at greatest risk for low as well as high adherence. Findings demonstrate the utility of easily interpreted tree diagrams to design specific culturally appropriate intervention programs targeting sub-populations of diabetes patients who need to improve their self-care behaviors. Limitations and contributions of the study are discussed.

  14. Periodic Poisson Solver for Particle Tracking

    International Nuclear Information System (INIS)

    Dohlus, M.; Henning, C.

    2015-05-01

    A method is described to solve the Poisson problem for a three dimensional source distribution that is periodic in one direction. Perpendicular to the direction of periodicity a free space (or open) boundary is realized. In beam physics, this approach makes it possible to calculate the space charge field of a continualized charged particle distribution with periodic pattern. The method is based on a particle mesh approach with equidistant grid and fast convolution with a Green's function. The periodic approach uses only one period of the source distribution, but a periodic extension of the Green's function. The approach is numerically efficient and allows the investigation of periodic and pseudo-periodic structures with period lengths that are small compared to the source dimensions, for instance of laser modulated beams or of the evolution of micro bunch structures. Applications for laser modulated beams are given.

  15. Compound Poisson Approximations for Sums of Random Variables

    OpenAIRE

    Serfozo, Richard F.

    1986-01-01

    We show that a sum of dependent random variables is approximately compound Poisson when the variables are rarely nonzero and, given they are nonzero, their conditional distributions are nearly identical. We give several upper bounds on the total-variation distance between the distribution of such a sum and a compound Poisson distribution. Included is an example for Markovian occurrences of a rare event. Our bounds are consistent with those that are known for Poisson approximations for sums of...
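
    A compound Poisson variable of the type considered above is straightforward to simulate; the following sketch uses exponential jump sizes purely as an illustrative choice.

```python
# Minimal sketch: simulating a compound Poisson random variable, i.e. a
# Poisson-distributed number of i.i.d. jumps summed together.
import numpy as np

rng = np.random.default_rng(3)

def compound_poisson(rate, n_samples):
    counts = rng.poisson(rate, size=n_samples)
    return np.array([rng.exponential(1.0, size=c).sum() for c in counts])

s = compound_poisson(rate=2.0, n_samples=50_000)
# Mean = rate * E[jump] = 2.0 and variance = rate * E[jump^2] = 4.0 for Exp(1) jumps.
print(s.mean(), s.var())
```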

  16. Inexact Bregman iteration with an application to Poisson data reconstruction

    Science.gov (United States)

    Benfenati, A.; Ruggiero, V.

    2013-06-01

    This work deals with the solution of image restoration problems by an iterative regularization method based on the Bregman iteration. Any iteration of this scheme requires the exact computation of the minimizer of a function. However, in some image reconstruction applications, it is either impossible or extremely expensive to obtain exact solutions of these subproblems. In this paper, we propose an inexact version of the iterative procedure, where the inexactness in the inner subproblem solution is controlled by a criterion that preserves the convergence of the Bregman iteration and its features in image restoration problems. In particular, the method allows us to obtain accurate reconstructions also when only an overestimation of the regularization parameter is known. The introduction of the inexactness in the iterative scheme allows us to address image reconstruction problems from data corrupted by Poisson noise, exploiting the recent advances about specialized algorithms for the numerical minimization of the generalized Kullback-Leibler divergence combined with a regularization term. The results of several numerical experiments enable us to evaluate the proposed scheme for image deblurring or denoising in the presence of Poisson noise.

  17. A Tubular Biomaterial Construct Exhibiting a Negative Poisson's Ratio.

    Directory of Open Access Journals (Sweden)

    Jin Woo Lee

    Developing functional small-diameter vascular grafts is an important objective in tissue engineering research. In this study, we address the problem of compliance mismatch by designing and developing a 3D tubular construct that has a negative Poisson's ratio νxy (NPR). NPR constructs have the unique ability to expand transversely when pulled axially, thereby resulting in a highly-compliant tubular construct. In this work, we used projection stereolithography to 3D-print a planar NPR sheet composed of photosensitive poly(ethylene glycol) diacrylate biomaterial. We used a step-lithography exposure and a stitch process to scale up the projection printing process, and used the cut-missing rib unit design to develop a centimeter-scale NPR sheet, which was rolled up to form a tubular construct. The constructs had Poisson's ratios of -0.6 ≤ νxy ≤ -0.1. The NPR construct also supports higher cellular adhesion than does the construct that has positive νxy. Our NPR design offers a significant advance in the development of highly-compliant vascular grafts.

  18. A Poisson log-bilinear regression approach to the construction of projected lifetables

    NARCIS (Netherlands)

    Brouhns, N.; Denuit, M.; Vermunt, J.K.

    2002-01-01

    This paper implements Wilmoth's [Computational methods for fitting and extrapolating the Lee-Carter model of mortality change, Technical report, Department of Demography, University of California, Berkeley] and Alho's [North American Actuarial Journal 4 (2000) 91] recommendation for improving the

  19. Perturbation-induced emergence of Poisson-like behavior in non-Poisson systems

    International Nuclear Information System (INIS)

    Akin, Osman C; Grigolini, Paolo; Paradisi, Paolo

    2009-01-01

    The response of a system with ON–OFF intermittency to an external harmonic perturbation is discussed. ON–OFF intermittency is described by means of a sequence of random events, i.e., the transitions from the ON to the OFF state and vice versa. The unperturbed waiting times (WTs) between two events are assumed to satisfy a renewal condition, i.e., the WTs are statistically independent random variables. The response of a renewal model with non-Poisson ON–OFF intermittency, associated with non-exponential WT distribution, is analyzed by looking at the changes induced in the WT statistical distribution by the harmonic perturbation. The scaling properties are also studied by means of diffusion entropy analysis. It is found that, in the range of fast and relatively strong perturbation, the non-Poisson system displays a Poisson-like behavior in both WT distribution and scaling. In particular, the histogram of perturbed WTs becomes a sequence of equally spaced peaks, with intensity decaying exponentially in time. Further, the diffusion entropy detects an ordinary scaling (related to normal diffusion) instead of the expected unperturbed anomalous scaling related to the inverse power-law decay. Thus, an analysis based on the WT histogram and/or on scaling methods has to be considered with some care when dealing with perturbed intermittent systems

  20. Tutorial on Using Regression Models with Count Outcomes Using R

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2016-02-01

    Education researchers often study count variables, such as times a student reached a goal, discipline referrals, and absences. Most researchers that study these variables use typical regression methods (i.e., ordinary least squares), either with or without transforming the count variables. In either case, using typical regression for count data can produce parameter estimates that are biased, thus diminishing any inferences made from such data. As count-variable regression models are seldom taught in training programs, we present a tutorial to help educational researchers use such methods in their own research. We demonstrate analyzing and interpreting count data using Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial regression models. The count regression methods are introduced through an example using the number of times students skipped class. The data for this example are freely available, and the R syntax used to run the example analyses is included in the Appendix.
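
    The tutorial above works in R; as a language-agnostic illustration of why zero-inflated models are needed, the following Python sketch (synthetic data) shows that zero-inflated counts contain far more zeros than a plain Poisson model with the same mean predicts.

```python
# Minimal sketch: zero-inflated count data have an excess of zeros relative to
# a Poisson model with the same mean, which motivates ZIP/ZINB regressions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, pi_zero, lam = 10_000, 0.3, 2.5     # 30% structural zeros (e.g. never skips class)
y = rng.poisson(lam, size=n) * (rng.random(n) > pi_zero)

observed_zero_rate = np.mean(y == 0)
poisson_zero_rate = stats.poisson.pmf(0, y.mean())   # what a plain Poisson expects
print(observed_zero_rate, poisson_zero_rate)
```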

  1. Pansharpening via sparse regression

    Science.gov (United States)

    Tang, Songze; Xiao, Liang; Liu, Pengfei; Huang, Lili; Zhou, Nan; Xu, Yang

    2017-09-01

    Pansharpening is an effective way to enhance the spatial resolution of a multispectral (MS) image by fusing it with a provided panchromatic image. Instead of restricting the coding coefficients of low-resolution (LR) and high-resolution (HR) images to be equal, we propose a pansharpening approach via sparse regression in which the relationship between sparse coefficients of HR and LR MS images is modeled by ridge regression and elastic-net regression simultaneously learning the corresponding dictionaries. The compact dictionaries are learned based on the sampled patch pairs from the high- and low-resolution images, which can greatly characterize the structural information of the LR MS and HR MS images. Later, taking the complex relationship between the coding coefficients of LR MS and HR MS images into account, the ridge regression is used to characterize the relationship of intrapatches. The elastic-net regression is employed to describe the relationship of interpatches. Thus, the HR MS image can be almost identically reconstructed by multiplying the HR dictionary and the calculated sparse coefficient vector with the learned regression relationship. The simulated and real experimental results illustrate that the proposed method outperforms several well-known methods, both quantitatively and perceptually.
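
    The two regression ingredients named above, ridge and elastic-net, are shown in isolation in the sketch below on synthetic data with scikit-learn; the dictionary-learning and patch-extraction stages of the pansharpening pipeline are not reproduced.

```python
# Minimal sketch: ridge (L2) and elastic-net (mixed L1/L2) regression on
# synthetic data standing in for sparse codes of LR and HR patches.
import numpy as np
from sklearn.linear_model import Ridge, ElasticNet

rng = np.random.default_rng(5)
A = rng.normal(size=(200, 50))                  # e.g. coefficients of LR patches
w_true = np.zeros(50)
w_true[:5] = 1.0
b = A @ w_true + 0.05 * rng.normal(size=200)    # e.g. coefficients of HR patches

ridge = Ridge(alpha=1.0).fit(A, b)                      # L2 penalty only
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(A, b)    # mixed L1/L2 penalty
print(np.count_nonzero(ridge.coef_), np.count_nonzero(enet.coef_))
```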

  2. Linear stability of stationary solutions of the Vlasov-Poisson system in three dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Batt, J.; Rein, G. [Muenchen Univ. (Germany). Mathematisches Inst.]; Morrison, P.J. [Texas Univ., Austin, TX (United States)]

    1993-03-01

    Rigorous results on the stability of stationary solutions of the Vlasov-Poisson system are obtained in both the plasma physics and stellar dynamics contexts. It is proven that stationary solutions in the plasma physics (stellar dynamics) case are linearly stable if they are decreasing (increasing) functions of the local, i.e. particle, energy. The main tool in the analysis is the free energy of the system, a conserved quantity. In addition, an appropriate global existence result is proven for the linearized Vlasov-Poisson system and the existence of stationary solutions that satisfy the above stability condition is established.

  3. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem since information on both the desired target and the acquisition system has to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of nondense stellar clusters.

  4. Poisson cohomology of scalar multidimensional Dubrovin-Novikov brackets

    Science.gov (United States)

    Carlet, Guido; Casati, Matteo; Shadrin, Sergey

    2017-04-01

    We compute the Poisson cohomology of a scalar Poisson bracket of Dubrovin-Novikov type with D independent variables. We find that the second and third cohomology groups are generically non-vanishing in D > 1. Hence, in contrast with the D = 1 case, the deformation theory in the multivariable case is non-trivial.

  5. Avoiding negative populations in explicit Poisson tau-leaping.

    Science.gov (United States)

    Cao, Yang; Gillespie, Daniel T; Petzold, Linda R

    2005-08-01

    The explicit tau-leaping procedure attempts to speed up the stochastic simulation of a chemically reacting system by approximating the number of firings of each reaction channel during a chosen time increment tau as a Poisson random variable. Since the Poisson random variable can have arbitrarily large sample values, there is always the possibility that this procedure will cause one or more reaction channels to fire so many times during tau that the population of some reactant species will be driven negative. Two recent papers have shown how that unacceptable occurrence can be avoided by replacing the Poisson random variables with binomial random variables, whose values are naturally bounded. This paper describes a modified Poisson tau-leaping procedure that also avoids negative populations, but is easier to implement than the binomial procedure. The new Poisson procedure also introduces a second control parameter, whose value essentially dials the procedure from the original Poisson tau-leaping at one extreme to the exact stochastic simulation algorithm at the other; therefore, the modified Poisson procedure will generally be more accurate than the original Poisson procedure.
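
    A minimal sketch of plain explicit Poisson tau-leaping for a simple birth-death system follows; the reaction rates are invented, and the point is to show where the unbounded Poisson draw can drive a population negative, which is exactly the issue the modified procedures address.

```python
# Minimal sketch of explicit Poisson tau-leaping for a birth-death system:
# A -> A+1 at rate c1, and A -> A-1 at rate c2*A.
import numpy as np

rng = np.random.default_rng(6)

def tau_leap(x0, c1, c2, tau, n_steps):
    x = x0
    for _ in range(n_steps):
        k_birth = rng.poisson(c1 * tau)               # firings of A -> A+1
        k_death = rng.poisson(c2 * max(x, 0) * tau)   # firings of A -> A-1 (unbounded draw)
        # Because k_death is Poisson (unbounded), x can overshoot below zero
        # for a large tau; the propensity is clamped at zero here only to keep
        # the sketch running after such an event.
        x = x + k_birth - k_death
    return x

print(tau_leap(x0=20, c1=5.0, c2=0.5, tau=0.1, n_steps=200))
```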

  6. Estimation of a Non-homogeneous Poisson Model: An Empirical ...

    African Journals Online (AJOL)

    This article aims at applying the nonhomogeneous Poisson process to trends of economic development. For this purpose, a modified nonhomogeneous Poisson process is derived in which the intensity rate is considered as the solution of a stochastic differential equation that satisfies geometric Brownian motion. The mean ...

  7. Formulation of Hamiltonian mechanics with even and odd Poisson brackets

    International Nuclear Information System (INIS)

    Khudaverdyan, O.M.; Nersesyan, A.P.

    1987-01-01

    The possibility is studied of constructing the odd Poisson bracket and odd Hamiltonian from a given dynamics in phase superspace (the even Poisson bracket and even Hamiltonian) so that the transition to the new structure does not change the equations of motion. 9 refs

  8. Cluster X-varieties, amalgamation, and Poisson-Lie groups

    DEFF Research Database (Denmark)

    Fock, V. V.; Goncharov, A. B.

    2006-01-01

    In this paper, starting from a split semisimple real Lie group G with trivial center, we define a family of varieties with additional structures. We describe them as cluster χ-varieties, as defined in [FG2]. In particular they are Poisson varieties. We define canonical Poisson maps of these varie...

  9. Derivation of relativistic wave equation from the Poisson process

    Indian Academy of Sciences (India)

    A Poisson process is one of the fundamental descriptions for relativistic particles: both fermions and bosons. A generalized linear photon wave equation in a dispersive and homogeneous medium with dissipation is derived using the formulation of the Poisson process. This formulation provides a possible ...

  10. Introduction to the use of regression models in epidemiology.

    Science.gov (United States)

    Bender, Ralf

    2009-01-01

    Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.
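
    A common epidemiological use of Poisson regression is the modelling of rates with person-time as an exposure offset; the sketch below uses synthetic data and the Python statsmodels library (the chapter itself is not tied to any particular software).

```python
# Minimal sketch: Poisson regression for rates, with person-years entering as
# an exposure (log offset), so the fitted coefficient is a log rate ratio.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1_000
exposed = rng.integers(0, 2, size=n)              # binary risk factor
person_years = rng.uniform(1.0, 10.0, size=n)     # follow-up time per subject
rate = 0.02 * np.exp(0.7 * exposed)               # true rate ratio exp(0.7) ~ 2
events = rng.poisson(rate * person_years)

X = sm.add_constant(exposed)
fit = sm.GLM(events, X, family=sm.families.Poisson(), exposure=person_years).fit()
print(np.exp(fit.params[1]))                      # estimated rate ratio, ~2
```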

  11. Surgery for the correction of hallux valgus: minimum five-year results with a validated patient-reported outcome tool and regression analysis.

    Science.gov (United States)

    Chong, A; Nazarian, N; Chandrananth, J; Tacey, M; Shepherd, D; Tran, P

    2015-02-01

    This study sought to determine the medium-term patient-reported and radiographic outcomes in patients undergoing surgery for hallux valgus. A total of 118 patients (162 feet) underwent surgery for hallux valgus between January 2008 and June 2009. The Manchester-Oxford Foot Questionnaire (MOXFQ), a validated tool for the assessment of outcome after surgery for hallux valgus, was used and patient satisfaction was sought. The medical records and radiographs were reviewed retrospectively. At a mean of 5.2 years (4.7 to 6.0) post-operatively, the median combined MOXFQ score was 7.8 (IQR:0 to 32.8). The median domain scores for pain, walking/standing, and social interaction were 10 (IQR: 0 to 45), 0 (IQR: 0 to 32.1) and 6.3 (IQR: 0 to 25) respectively. A total of 119 procedures (73.9%, in 90 patients) were reported as satisfactory but only 53 feet (32.7%, in 43 patients) were completely asymptomatic. The mean (SD) correction of hallux valgus, intermetatarsal, and distal metatarsal articular angles was 18.5° (8.8°), 5.7° (3.3°), and 16.6° (8.8°), respectively. Multivariable regression analysis identified that an American Association of Anesthesiologists grade of >1 (Incident Rate Ratio (IRR) = 1.67, p-value = 0.011) and recurrent deformity (IRR = 1.77, p-value = 0.003) were associated with significantly worse MOXFQ scores. No correlation was found between the severity of deformity, the type, or degree of surgical correction and the outcome. When using a validated outcome score for the assessment of outcome after surgery for hallux valgus, the long-term results are worse than expected when compared with the short- and mid-term outcomes, with 25.9% of patients dissatisfied at a mean follow-up of 5.2 years. ©2015 The British Editorial Society of Bone & Joint Surgery.

  12. Stationary response of multi-degree-of-freedom vibro-impact systems to Poisson white noises

    International Nuclear Information System (INIS)

    Wu, Y.; Zhu, W.Q.

    2008-01-01

    The stationary response of multi-degree-of-freedom (MDOF) vibro-impact (VI) systems to random pulse trains is studied. The system is formulated as a stochastically excited and dissipated Hamiltonian system. The constraints are modeled as non-linear springs according to the Hertz contact law. The random pulse trains are modeled as Poisson white noises. The approximate stationary probability density function (PDF) for the response of MDOF dissipated Hamiltonian systems to Poisson white noises is obtained by solving the fourth-order generalized Fokker-Planck-Kolmogorov (FPK) equation using perturbation approach. As examples, two-degree-of-freedom (2DOF) VI systems under external and parametric Poisson white noise excitations, respectively, are investigated. The validity of the proposed approach is confirmed by using the results obtained from Monte Carlo simulation. It is shown that the non-Gaussian behaviour depends on the product of the mean arrival rate of the impulses and the relaxation time of the oscillator

  13. Mean-square state and parameter estimation for stochastic linear systems with Gaussian and Poisson noises

    Science.gov (United States)

    Basin, M.; Maldonado, J. J.; Zendejo, O.

    2016-07-01

    This paper proposes new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows effectiveness of the proposed mean-square filter and parameter estimator.

  14. Random walk in dynamically disordered chains: Poisson white noise disorder

    International Nuclear Information System (INIS)

    Hernandez-Garcia, E.; Pesquera, L.; Rodriguez, M.A.; San Miguel, M.

    1989-01-01

    Exact solutions are given for a variety of models of random walks in a chain with time-dependent disorder. Dynamic disorder is modeled by white Poisson noise. Models with site-independent (global) and site-dependent (local) disorder are considered. Results are described in terms of an effective random walk in a nondisordered medium. In the cases of global disorder the effective random walk contains multistep transitions, so that the continuous limit is not a diffusion process. In the cases of local disorder the effective process is equivalent to usual random walk in the absence of disorder but with slower diffusion. Difficulties associated with the continuous-limit representation of random walk in a disordered chain are discussed. In particular, the authors consider explicit cases in which taking the continuous limit and averaging over disorder sources do not commute.

  15. Numerical solution of dynamic equilibrium models under Poisson uncertainty

    DEFF Research Database (Denmark)

    Posch, Olaf; Trimborn, Timo

    2013-01-01

    We propose a simple and powerful numerical algorithm to compute the transition process in continuous-time dynamic equilibrium models with rare events. In this paper we transform the dynamic system of stochastic differential equations into a system of functional differential equations of the retarded type. We apply the Waveform Relaxation algorithm, i.e., we provide a guess of the policy function and solve the resulting system of (deterministic) ordinary differential equations by standard techniques. For parametric restrictions, analytical solutions to the stochastic growth model and a novel solution to Lucas' endogenous growth model under Poisson uncertainty are used to compute the exact numerical error. We show how (potential) catastrophic events such as rare natural disasters substantially affect the economic decisions of households.

  16. On the FACR(l) algorithm for the discrete Poisson equation

    Science.gov (United States)

    Temperton, Clive

    1980-03-01

    Direct methods for the solution of the discrete Poisson equation over a rectangle are commonly based either on Fourier transforms or on block-cyclic reduction. The relationship between these two approaches is demonstrated explicitly, and used to derive the FACR(l) algorithm in which the Fourier transform approach is combined with l preliminary steps of cyclic reduction. It is shown that the optimum choice of l leads to an algorithm for which the operation count per mesh point is almost independent of the mesh size. Numerical results concerning timing and round-off error are presented for the N × N Dirichlet problem for various values of N and l. Extensions to more general problems, and to implementation on parallel or vector computers are briefly discussed.

  17. On the Magnetic Shield for a Vlasov-Poisson Plasma

    Science.gov (United States)

    Caprino, Silvia; Cavallaro, Guido; Marchioro, Carlo

    2017-12-01

    We study the screening of a bounded body Γ against the effect of a wind of charged particles, by means of a shield produced by a magnetic field which becomes infinite on the border of Γ . The charged wind is modeled by a Vlasov-Poisson plasma, the bounded body by a torus, and the external magnetic field is taken close to the border of Γ . We study two models: a plasma composed by different species with positive or negative charges, and finite total mass of each species, and another made of many species of the same sign, each having infinite mass. We investigate the time evolution of both systems, showing in particular that the plasma particles cannot reach the body. Finally we discuss possible extensions to more general initial data. We show also that when the magnetic lines are straight lines, (that imposes an unbounded body), the previous results can be improved.

  18. Tetrahedral meshing via maximal Poisson-disk sampling

    KAUST Repository

    Guo, Jianwei

    2016-02-15

    In this paper, we propose a simple yet effective method to generate 3D-conforming tetrahedral meshes from closed 2-manifold surfaces. Our approach is inspired by recent work on maximal Poisson-disk sampling (MPS), which can generate well-distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform or adaptive sampling, respectively. We also propose an efficient optimization strategy to protect the domain boundaries and to remove slivers to improve the meshing quality. We present various experimental results to illustrate the efficiency and the robustness of our proposed approach. We demonstrate that the performance and quality (e.g., minimal dihedral angle) of our approach are superior to current state-of-the-art optimization-based approaches.
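
    The underlying idea of Poisson-disk sampling can be conveyed by naive dart throwing, sketched below for the unit square; the maximal Poisson-disk sampling and tetrahedralisation machinery of the paper are far more sophisticated and are not reproduced.

```python
# Minimal sketch of Poisson-disk sampling by naive dart throwing: candidate
# points are accepted only if they keep a minimum distance r to all points
# accepted so far.
import numpy as np

rng = np.random.default_rng(8)

def dart_throwing(r, n_trials=20_000):
    pts = []
    for _ in range(n_trials):
        p = rng.random(2)
        if all(np.hypot(*(p - q)) >= r for q in pts):
            pts.append(p)
    return np.array(pts)

samples = dart_throwing(r=0.05)
print(len(samples))        # well-separated points covering the unit square
```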

  19. Is neutron evaporation from highly excited nuclei a Poisson random process

    International Nuclear Information System (INIS)

    Simbel, M.H.

    1982-01-01

    It is suggested that neutron emission from highly excited nuclei follows a Poisson random process. The continuous variable of the process is the excitation energy excess over the binding energy of the emitted neutrons and the discrete variable is the number of emitted neutrons. Cross sections for (HI,xn) reactions are analyzed using a formula containing a Poisson distribution function. The post- and pre-equilibrium components of the cross section are treated separately. The agreement between the predictions of this formula and the experimental results is very good. (orig.)

  20. Random vibrations of Rayleigh vibroimpact oscillator under Parametric Poisson white noise

    Science.gov (United States)

    Yang, Guidong; Xu, Wei; Jia, Wantao; He, Meijuan

    2016-04-01

    Random vibration problems for a single-degree-of-freedom (SDOF) Rayleigh vibroimpact system with a rigid barrier under parametric Poisson white noise are considered. The averaged generalized Fokker-Planck-Kolmogorov (FPK) equations with parametric Poisson white noise are derived after using the nonsmooth variable transformation and the approximate stationary solutions for the system's response are obtained by perturbation method. The results are validated numerically by using Monte Carlo simulations from original vibroimpact system. Effects on the response for different damping coefficients, restitution coefficients and noise intensities are discussed. Furthermore, stochastic bifurcations are also explored.

  1. Non-isothermal Smoluchowski-Poisson equation as a singular limit of the Navier-Stokes-Fourier-Poisson system

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Laurençot, P.

    2007-01-01

    Roč. 88, - (2007), s. 325-349 ISSN 0021-7824 R&D Projects: GA ČR GA201/05/0164 Institutional research plan: CEZ:AV0Z10190503 Keywords: Navier-Stokes-Fourier-Poisson system * Smoluchowski-Poisson system * singular limit Subject RIV: BA - General Mathematics Impact factor: 1.118, year: 2007

  2. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  3. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    KAUST Repository

    Sepúlveda, Nuno

    2013-02-26

    Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. 2013 Sepúlveda et al.; licensee BioMed Central Ltd.

  4. Boundary Lax pairs from non-ultra-local Poisson algebras

    International Nuclear Information System (INIS)

    Avan, Jean; Doikou, Anastasia

    2009-01-01

    We consider non-ultra-local linear Poisson algebras on a continuous line. Suitable combinations of representations of these algebras yield representations of novel generalized linear Poisson algebras or 'boundary' extensions. They are parametrized by a boundary scalar matrix and depend, in addition, on the choice of an antiautomorphism. The new algebras are the classical-linear counterparts of the known quadratic quantum boundary algebras. For any choice of parameters, the non-ultra-local contribution of the original Poisson algebra disappears. We also systematically construct the associated classical Lax pair. The classical boundary principal chiral model is examined as a physical example.

  5. Poisson noise reduction from X-ray images by region classification ...

    Indian Academy of Sciences (India)

    Medical imaging is perturbed by inherent noise such as speckle noise in ultrasound, Poisson noise in X-ray and Rician noise in MRI imaging. This paper focuses on the X-ray image denoising problem. X-ray image quality could be improved by increasing the dose value; however, this may result in cell death or similar kinds of ...

  6. The Analysis of Corporate Bond Valuation under an Infinite Dimensional Compound Poisson Framework

    Directory of Open Access Journals (Sweden)

    Sheng Fan

    2014-01-01

    This paper analyzes the firm bond valuation and credit spread with an endogenous model for the pure default and callable default corporate bond. Regarding the stochastic instantaneous forward rates and the firm value as an infinite dimensional Poisson process, we provide some analytical results for the embedded American options and firm bond valuations.

  7. Dynamic inventory rationing strategies for inventory systems with two demand classes, Poisson demand and backordering

    NARCIS (Netherlands)

    Teunter, Ruud H.; Haneveld, Willem K. Klein

    2008-01-01

    We study inventory systems with two demand classes (critical and non-critical), Poisson demand and backordering. We analyze dynamic rationing strategies where the number of items reserved for critical demand depends on the remaining time until the next order arrives. Different from results in the

  8. A high order solver for the unbounded Poisson equation

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe

    2013-01-01

    A high order converging Poisson solver is presented, based on the Green's function solution to Poisson's equation subject to free-space boundary conditions. The high order convergence is achieved by formulating regularised integration kernels, analogous to a smoothing of the solution field. The method is extended to directly solve the derivatives of the solution to Poisson's equation. In this way differential operators such as the divergence or curl of the solution field can be solved to the same high order convergence without additional computational effort. The method is applied and validated, however not restricted, to the equations of fluid mechanics, and can be used in many applications to solve Poisson's equation on a rectangular unbounded domain.

  9. On the Poisson's ratio of the nucleus pulposus.

    Science.gov (United States)

    Farrell, M D; Riches, P E

    2013-10-01

    Existing experimental data on the Poisson's ratio of nucleus pulposus (NP) tissue is limited. This study aims to determine whether the Poisson's ratio of NP tissue is strain-dependent, strain-rate-dependent, or varies with axial location in the disk. Thirty-two cylindrical plugs of bovine tail NP tissue were subjected to ramp-hold unconfined compression to 20% axial strain in 5% increments, at either 30 μm/s or 0.3 μm/s ramp speeds and the radial displacement determined using biaxial video extensometry. Following radial recoil, the true Poisson's ratio of the solid phase of NP tissue increased linearly with increasing strain and demonstrated strain-rate dependency. The latter finding suggests that the solid matrix undergoes stress relaxation during the test. For small strains, we suggest a Poisson's ratio of 0.125 to be used in biphasic models of the intervertebral disk.

  10. Spatial organisation of the fish community in the Bandama ...

    African Journals Online (AJOL)

    The evolution of the fish communities of the Bandama was studied by considering four sampling zones: upstream of Lake Kossou, in Lakes Kossou and Taabo, between Lakes Kossou and Taabo, and downstream of Lake Taabo. In total, 74 fish species distributed among 49 genera and 28 families ...

  11. Formality theory from Poisson structures to deformation quantization

    CERN Document Server

    Esposito, Chiara

    2015-01-01

    This book is a survey of the theory of formal deformation quantization of Poisson manifolds, in the formalism developed by Kontsevich. It is intended as an educational introduction for mathematical physicists who are dealing with the subject for the first time. The main topics covered are the theory of Poisson manifolds, star products and their classification, deformations of associative algebras and the formality theorem. Readers will also be familiarized with the relevant physical motivations underlying the purely mathematical construction.

  12. Poisson structure of the equations of ideal multispecies fluid electrodynamics

    International Nuclear Information System (INIS)

    Spencer, R.G.

    1984-01-01

    The equations of the two- (or multi-) fluid model of plasma physics are recast in Hamiltonian form, following general methods of symplectic geometry. The dynamical variables are the fields of physical interest, but are noncanonical, so that the Poisson bracket in the theory is not the standard one. However, it is a skew-symmetric bilinear form which, from the method of derivation, automatically satisfies the Jacobi identity; therefore, this noncanonical structure has all the essential properties of a canonical Poisson bracket

  13. On the Fedosov deformation quantization beyond the regular Poisson manifolds

    International Nuclear Information System (INIS)

    Dolgushev, V.A.; Isaev, A.P.; Lyakhovich, S.L.; Sharapov, A.A.

    2002-01-01

    A simple iterative procedure is suggested for the deformation quantization of (irregular) Poisson brackets associated to the classical Yang-Baxter equation. The construction is shown to admit a pure algebraic reformulation giving the Universal Deformation Formula (UDF) for any triangular Lie bialgebra. A simple proof of classification theorem for inequivalent UDF's is given. As an example the explicit quantization formula is presented for the quasi-homogeneous Poisson brackets on two-plane

  14. A Note On the Estimation of the Poisson Parameter

    Directory of Open Access Journals (Sweden)

    S. S. Chitgopekar

    1985-01-01

    distribution when there are errors in observing the zeros and ones and obtains both the maximum likelihood and moments estimates of the Poisson mean and the error probabilities. It is interesting to note that either method fails to give unique estimates of these parameters unless the error probabilities are functionally related. However, it is equally interesting to observe that the estimate of the Poisson mean does not depend on the functional relationship between the error probabilities.

  15. Ridge Regression: A Panacea?

    Science.gov (United States)

    Walton, Joseph M.; And Others

    1978-01-01

    Ridge regression is an approach to the problem of large standard errors of regression estimates of intercorrelated regressors. The effect of ridge regression on the estimated squared multiple correlation coefficient is discussed and illustrated. (JKS)

  16. Prognostic value of pathological lymph node status and primary tumour regression grading following neoadjuvant chemotherapy - results from the MRC OE02 oesophageal cancer trial.

    Science.gov (United States)

    Davarzani, Nasser; Hutchins, Gordon G A; West, Nicholas P; Hewitt, Lindsay C; Nankivell, Matthew; Cunningham, David; Allum, William H; Smyth, Elizabeth; Valeri, Nicola; Langley, Ruth E; Grabsch, Heike I

    2018-02-21

    Neoadjuvant chemotherapy (NAC) remains an important therapeutic option for advanced oesophageal cancer (OC). Pathological tumour regression grade (TRG) may offer additional information by directing adjuvant treatment and/or follow-up but its clinical value remains unclear. We analysed the prognostic value of TRG and associated pathological factors in OC patients enrolled in the Medical Research Council (MRC) OE02 trial. Histopathology was reviewed in 497 resections from OE02 trial participants randomised to surgery (S group; n = 244) or NAC followed by surgery [chemotherapy plus surgery (CS) group; n = 253]. The association between TRG groups [responders (TRG1-3) versus non-responders (TRG4-5)], pathological lymph node (LN) status and overall survival (OS) was analysed. One hundred and ninety-five of 253 (77%) CS patients were classified as 'non-responders', with a significantly higher mortality risk compared to responders [hazard ratio (HR) = 1.53, 95% confidence interval (CI) = 1.05-2.24, P = 0.026]. OS was significantly better in patients without LN metastases irrespective of TRG [non-responders HR = 1.87, 95% CI = 1.33-2.63, P < 0.001 versus responders HR = 2.21, 95% CI = 1.11-4.10, P = 0.024]. In multivariate analyses, LN status was the only independent factor predictive of OS in CS patients (HR = 1.93, 95% CI = 1.42-2.62, P < 0.001). Exploratory subgroup analyses excluding radiotherapy-exposed patients (n = 48) showed similar prognostic outcomes. Lymph node status post-NAC is the most important prognostic factor in patients with resectable oesophageal cancer, irrespective of TRG. Potential clinical implications, e.g. adjuvant treatment or intensified follow-up, reinforce the importance of LN dissection for staging and prognostication. © 2018 The Authors. Histopathology published by John Wiley & Sons Ltd.

  17. Using meta-regression analyses in addition to conventional systematic review methods to examine the variation in cost-effectiveness results : A case study

    NARCIS (Netherlands)

    L.T. Burgers (Laura); F.T. van de Wetering (Fleur); J.L. Severens (Hans); W.K. Redekop (Ken)

    2016-01-01

    Background: Systematic reviews of cost-effectiveness analyses summarize results and describe study characteristics. Variability in the study results is often explained qualitatively or based on sensitivity analyses of individual studies. However, variability due to input parameters and

  18. Better Autologistic Regression

    Directory of Open Access Journals (Sweden)

    Mark A. Wolters

    2017-11-01

    Autologistic regression is an important probability model for dichotomous random variables observed along with covariate information. It has been used in various fields for analyzing binary data possessing spatial or network structure. The model can be viewed as an extension of the autologistic model (also known as the Ising model, quadratic exponential binary distribution, or Boltzmann machine) to include covariates. It can also be viewed as an extension of logistic regression to handle responses that are not independent. Not all authors use exactly the same form of the autologistic regression model. Variations of the model differ in two respects. First, the variable coding—the two numbers used to represent the two possible states of the variables—might differ. Common coding choices are (zero, one) and (minus one, plus one). Second, the model might appear in either of two algebraic forms: a standard form, or a recently proposed centered form. Little attention has been paid to the effect of these differences, and the literature shows ambiguity about their importance. It is shown here that changes to either coding or centering in fact produce distinct, non-nested probability models. Theoretical results, numerical studies, and analysis of an ecological data set all show that the differences among the models can be large and practically significant. Understanding the nature of the differences and making appropriate modeling choices can lead to significantly improved autologistic regression analyses. The results strongly suggest that the standard model with plus/minus coding, which we call the symmetric autologistic model, is the most natural choice among the autologistic variants.

  19. Confidence limits for parameters of Poisson and binomial distributions

    International Nuclear Information System (INIS)

    Arnett, L.M.

    1976-04-01

    The confidence limits for the frequency in a Poisson process and for the proportion of successes in a binomial process were calculated and tabulated for the situations in which the observed values of the frequency or proportion and an a priori distribution of these parameters are available. Methods are used that produce limits with exactly the stated confidence levels. The confidence interval [a, b] is calculated so that Pr[a ≤ λ ≤ b | c, μ] equals the stated confidence level, where c is the observed value of the parameter and μ is the a priori hypothesis of the distribution of this parameter. A Bayesian type analysis is used. The intervals calculated are narrower and appreciably different from results, known to be conservative, that are often used in problems of this type. Pearson and Hartley recognized the characteristics of their methods and contemplated that exact methods could someday be used. The calculation of the exact intervals requires involved numerical analyses readily implemented only on digital computers not available to Pearson and Hartley. A Monte Carlo experiment was conducted to verify a selected interval from those calculated. This numerical experiment confirmed the results of the analytical methods and the prediction of Pearson and Hartley that their published tables give conservative results.
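
    As a rough illustration of the Bayesian-type interval described above, the sketch below computes an equal-tail credible interval for a Poisson rate with a conjugate gamma prior. The prior shape/rate values and the observed count are invented; the report's actual priors and exact tabulation method may differ.

    ```python
    # Equal-tail interval for a Poisson rate lambda with a Gamma(a0, b0) prior
    # (shape a0, rate b0) and an observed count c over unit exposure.
    from scipy.stats import gamma

    a0, b0 = 2.0, 1.0      # assumed prior shape and rate (illustrative only)
    c = 7                  # observed count (illustrative only)
    post_shape = a0 + c    # conjugate posterior: Gamma(a0 + c, rate b0 + 1)
    post_rate = b0 + 1.0

    lo = gamma.ppf(0.025, post_shape, scale=1.0 / post_rate)
    hi = gamma.ppf(0.975, post_shape, scale=1.0 / post_rate)
    print(f"95% interval for lambda: ({lo:.2f}, {hi:.2f})")
    ```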

  20. Microscopic dynamics perspective on the relationship between Poisson's ratio and ductility of metallic glasses

    Science.gov (United States)

    Ngai, K. L.; Wang, Li-Min; Liu, Riping; Wang, W. H.

    2014-01-01

    In metallic glasses a clear correlation has been established between plasticity or ductility and Poisson's ratio ν_Poisson or, alternatively, the ratio of the elastic bulk modulus to the shear modulus, K/G. Such a correlation between these two macroscopic mechanical properties is intriguing and is challenging to explain from the dynamics on a microscopic level. A recent experimental study has found a connection of ductility to the secondary β-relaxation in metallic glasses. The strain rate and temperature dependencies of the ductile-brittle transition are similar to those of the reciprocal of the secondary β-relaxation time, τβ. Moreover, a metallic glass is more ductile if the relaxation strength of the β-relaxation is larger and τβ is shorter. The findings indicate the β-relaxation is related to and instrumental for ductility. On the other hand, K/G or ν_Poisson is related to the effective Debye-Waller factor (i.e., the non-ergodicity parameter), f0, characterizing the dynamics of a structural unit inside a cage formed by other units, and manifested as the nearly constant loss shown in the frequency dependent susceptibility. We make the connection of f0 to the non-exponentiality parameter n in the Kohlrausch stretched exponential correlation function of the structural α-relaxation, φ(t) = exp[-(t/τα)^(1-n)]. This connection follows from the fact that both f0 and n are determined by the inter-particle potential, and 1/f0 or (1 - f0) and n both increase with anharmonicity of the potential. A well tested result from the Coupling Model is used to show that τβ is completely determined by τα and n. From the string of relations, (i) K/G or ν_Poisson with 1/f0 or (1 - f0), (ii) 1/f0 or (1 - f0) with n, and (iii) τα and n with τβ, we arrive at the desired relation between K/G or ν_Poisson and τβ. On combining this relation with that between ductility and τβ, we have finally an explanation of the empirical correlation between
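
    The Coupling Model relation invoked above is not spelled out in the abstract; it is commonly quoted as τβ ≈ τ0 = (tc)^n (τα)^(1-n), with tc a crossover time of the order of picoseconds. That form and the numbers below are assumptions used purely for a numerical sketch.

    ```python
    # Numerical sketch of the commonly quoted Coupling Model relation
    # tau_beta ~ tau_0 = t_c**n * tau_alpha**(1 - n); all values assumed.
    t_c = 2e-12          # s, assumed crossover time
    tau_alpha = 100.0    # s, assumed structural relaxation time near Tg
    for n in (0.3, 0.4, 0.5):   # Kohlrausch non-exponentiality parameter
        tau_beta = t_c**n * tau_alpha**(1 - n)
        print(f"n = {n:.1f}: tau_beta ~ {tau_beta:.3e} s")
    ```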

  1. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...

  2. Regression to Causality

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    Humans are fundamentally primed for making causal attributions based on correlations. This implies that researchers must be careful to present their results in a manner that inhibits unwarranted causal attribution. In this paper, we present the results of an experiment that suggests regression...... more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity...

  3. Advantages and Limitations of Anticipating Laboratory Test Results from Regression- and Tree-Based Rules Derived from Electronic Health-Record Data

    OpenAIRE

    Mohammad, Fahim; Theisen-Toupal, Jesse C.; Arnaout, Ramy

    2014-01-01

    Laboratory testing is the single highest-volume medical activity, making it useful to ask how well one can anticipate whether a given test result will be high, low, or within the reference interval ("normal"). We analyzed 10 years of electronic health records--a total of 69.4 million blood tests--to see how well standard rule-mining techniques can anticipate test results based on patient age and gender, recent diagnoses, and recent laboratory test results. We evaluated rules according to thei...

  4. Cooperative HARQ with Poisson Interference and Opportunistic Routing

    KAUST Repository

    Kaveh, Mostafa

    2014-01-06

    This presentation considers reliable transmission of data from a source to a destination, aided cooperatively by wireless relays selected opportunistically and utilizing hybrid forward error correction/detection, and automatic repeat request (Hybrid ARQ, or HARQ). Specifically, we present a performance analysis of the cooperative HARQ protocol in a wireless ad hoc multihop network employing spatial ALOHA. We model the nodes in such a network by a homogeneous 2-D Poisson point process. We study the tradeoff between the per-hop rate, spatial density and range of transmissions inherent in the network by optimizing the transport capacity with respect to the network design parameters, HARQ coding rate and medium access probability. We obtain an approximate analytic expression for the expected progress of opportunistic routing and optimize the capacity approximation by convex optimization. By way of numerical results, we show that the network design parameters obtained by optimizing the analytic approximation of transport capacity closely follow those of Monte Carlo based exact transport capacity optimization. As a result of the analysis, we argue that the optimal HARQ coding rate and medium access probability are independent of the node density in the network.

  5. METHOD OF FOREST FIRES PROBABILITY ASSESSMENT WITH POISSON LAW

    Directory of Open Access Journals (Sweden)

    A. S. Plotnikova

    2016-01-01

    Full Text Available The article describes a method for estimating forest fire burn probability on the basis of the Poisson distribution. The λ parameter is taken to be the mean daily number of fires detected for each Forest Fire Danger Index class within a specific period of time; λ was calculated separately for the spring, summer and autumn seasons. Multi-annual daily Forest Fire Danger Index values, together with an EO-derived hot spot map, were the input data for the statistical analysis. The major result of the study is the generation of a database of forest fire burn probabilities. Results were validated against EO daily data on forest fires detected over Irkutsk oblast in 2013. The daily weighted average probability was shown to be linked with the daily number of detected forest fires. At the same time, a number of fires were found to have developed when the estimated probability was low; a possible explanation of this phenomenon is provided.
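
    A minimal sketch of the Poisson burn-probability idea described above: with λ taken as the mean daily number of detected fires for a given danger class and season, the daily probability of at least one fire is P = 1 - exp(-λ). The λ values below are invented for illustration; they are not the study's estimates.

    ```python
    import math

    lam_by_class = {"low": 0.02, "moderate": 0.15, "high": 0.6}   # assumed rates
    for danger_class, lam in lam_by_class.items():
        p_fire = 1.0 - math.exp(-lam)
        print(f"{danger_class:>8}: P(>=1 fire per day) = {p_fire:.3f}")
    ```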

  6. Predicting Engineering Student Attrition Risk Using a Probabilistic Neural Network and Comparing Results with a Backpropagation Neural Network and Logistic Regression

    Science.gov (United States)

    Mason, Cindi; Twomey, Janet; Wright, David; Whitman, Lawrence

    2018-01-01

    As the need for engineers continues to increase, a growing focus has been placed on recruiting students into the field of engineering and retaining the students who select engineering as their field of study. As a result of this concentration on student retention, numerous studies have been conducted to identify, understand, and confirm…

  7. Advantages and limitations of anticipating laboratory test results from regression- and tree-based rules derived from electronic health-record data.

    Directory of Open Access Journals (Sweden)

    Fahim Mohammad

    Full Text Available Laboratory testing is the single highest-volume medical activity, making it useful to ask how well one can anticipate whether a given test result will be high, low, or within the reference interval ("normal"). We analyzed 10 years of electronic health records--a total of 69.4 million blood tests--to see how well standard rule-mining techniques can anticipate test results based on patient age and gender, recent diagnoses, and recent laboratory test results. We evaluated rules according to their positive and negative predictive value (PPV and NPV) and area under the receiver-operator characteristic curve (ROC AUCs). Using a stringent cutoff of PPV and/or NPV≥0.95, standard techniques yield few rules for sendout tests but several for in-house tests, mostly for repeat laboratory tests that are part of the complete blood count and basic metabolic panel. Most rules were clinically and pathophysiologically plausible, and several seemed clinically useful for informing pre-test probability of a given result. But overall, rules were unlikely to be able to function as a general substitute for actually ordering a test. Improving laboratory utilization will likely require different input data and/or alternative methods.

  8. Advantages and limitations of anticipating laboratory test results from regression- and tree-based rules derived from electronic health-record data.

    Science.gov (United States)

    Mohammad, Fahim; Theisen-Toupal, Jesse C; Arnaout, Ramy

    2014-01-01

    Laboratory testing is the single highest-volume medical activity, making it useful to ask how well one can anticipate whether a given test result will be high, low, or within the reference interval ("normal"). We analyzed 10 years of electronic health records--a total of 69.4 million blood tests--to see how well standard rule-mining techniques can anticipate test results based on patient age and gender, recent diagnoses, and recent laboratory test results. We evaluated rules according to their positive and negative predictive value (PPV and NPV) and area under the receiver-operator characteristic curve (ROC AUCs). Using a stringent cutoff of PPV and/or NPV≥0.95, standard techniques yield few rules for sendout tests but several for in-house tests, mostly for repeat laboratory tests that are part of the complete blood count and basic metabolic panel. Most rules were clinically and pathophysiologically plausible, and several seemed clinically useful for informing pre-test probability of a given result. But overall, rules were unlikely to be able to function as a general substitute for actually ordering a test. Improving laboratory utilization will likely require different input data and/or alternative methods.
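
    The rule-screening step described in the two records above boils down to computing PPV and NPV for each candidate rule and keeping it only if PPV ≥ 0.95 and/or NPV ≥ 0.95. The sketch below uses hypothetical confusion-matrix counts; the rule names are invented, not taken from the study.

    ```python
    # Screen candidate rules by positive/negative predictive value.
    def ppv_npv(tp, fp, tn, fn):
        ppv = tp / (tp + fp) if (tp + fp) else float("nan")
        npv = tn / (tn + fn) if (tn + fn) else float("nan")
        return ppv, npv

    candidate_rules = {          # rule name -> (TP, FP, TN, FN), all invented
        "repeat_sodium_normal": (960, 30, 400, 45),
        "rare_sendout_high":    (12, 20, 800, 15),
    }
    for name, counts in candidate_rules.items():
        ppv, npv = ppv_npv(*counts)
        keep = ppv >= 0.95 or npv >= 0.95
        print(f"{name}: PPV={ppv:.3f} NPV={npv:.3f} keep={keep}")
    ```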

  9. Poisson process approximation for sequence repeats, and sequencing by hybridization.

    Science.gov (United States)

    Arratia, R; Martin, D; Reinert, G; Waterman, M S

    1996-01-01

    Sequencing by hybridization is a tool to determine a DNA sequence from the unordered list of all l-tuples contained in this sequence; typical numbers for l are l = 8, 10, 12. For theoretical purposes we assume that the multiset of all l-tuples is known. This multiset determines the DNA sequence uniquely if none of the so-called Ukkonen transformations are possible. These transformations require repeats of (l-1)-tuples in the sequence, with these repeats occurring in certain spatial patterns. We model DNA as an i.i.d. sequence. We first prove Poisson process approximations for the process of indicators of all leftmost long repeats allowing self-overlap and for the process of indicators of all left-most long repeats without self-overlap. Using the Chen-Stein method, we get bounds on the error of these approximations. As a corollary, we approximate the distribution of longest repeats. In the second step we analyze the spatial patterns of the repeats. Finally we combine these two steps to prove an approximation for the probability that a random sequence is uniquely recoverable from its list of l-tuples. For all our results we give some numerical examples including error bounds.

  10. Downlink Non-Orthogonal Multiple Access (NOMA) in Poisson Networks

    KAUST Repository

    Ali, Konpal S.

    2018-03-21

    A network model is considered where Poisson distributed base stations transmit to $N$ power-domain non-orthogonal multiple access (NOMA) users (UEs) each that employ successive interference cancellation (SIC) for decoding. We propose three models for the clustering of NOMA UEs and consider two different ordering techniques for the NOMA UEs: mean signal power-based and instantaneous signal-to-intercell-interference-and-noise-ratio-based. For each technique, we present a signal-to-interference-and-noise ratio analysis for the coverage of the typical UE. We plot the rate region for the two-user case and show that neither ordering technique is consistently superior to the other. We propose two efficient algorithms for finding a feasible resource allocation that maximize the cell sum rate $\\\\mathcal{R}_{\\ m tot}$, for general $N$, constrained to: 1) a minimum rate $\\\\mathcal{T}$ for each UE, 2) identical rates for all UEs. We show the existence of: 1) an optimum $N$ that maximizes the constrained $\\\\mathcal{R}_{\\ m tot}$ given a set of network parameters, 2) a critical SIC level necessary for NOMA to outperform orthogonal multiple access. The results highlight the importance in choosing the network parameters $N$, the constraints, and the ordering technique to balance the $\\\\mathcal{R}_{\\ m tot}$ and fairness requirements. We also show that interference-aware UE clustering can significantly improve performance.

  11. Use of a combination of routine hematologic and biochemical test results in a logistic regression model as a diagnostic aid for the diagnosis of hypoadrenocorticism in dogs.

    Science.gov (United States)

    Borin-Crivellenti, Sofia; Garabed, Rebecca B; Moreno-Torres, Karla I; Wellman, Maxey L; Gilor, Chen

    2017-10-01

    OBJECTIVE To assess the discriminatory value for corticosteroid-induced alkaline phosphatase (CiALP) activity and other variables that can be measured routinely on a CBC and biochemical analysis for the diagnosis of hypoadrenocorticism in dogs. SAMPLE Medical records of 57 dogs with confirmed hypoadrenocorticism and 57 control dogs in which hypoadrenocorticism was suspected but ruled out. PROCEDURES A retrospective case-control study was conducted. Dogs were included if a CBC and complete biochemical analysis had been performed. Dogs with iatrogenic hypoadrenocorticism and dogs treated previously with glucocorticoids were excluded. Cortisol concentration for dogs with hypoadrenocorticism was ≤ 2 μg/dL both before and after ACTH administration. Cortisol concentration for control dogs was > 4 μg/dL before or after ACTH administration. RESULTS Area under the receiver operating characteristic (ROC) curve for CiALP activity was low (0.646; 95% confidence interval, 0.494 to 0.798). Area under the ROC curve for a model that combined the CiALP activity, Na-to-K ratio, eosinophil count, activity of creatine kinase, and concentrations of SUN and albumin was high (0.994; 95% confidence interval, 0.982 to 1.000). Results for this model could be used to correctly classify all dogs, except for 1 dog with hypoadrenocorticism and no electrolyte abnormalities. CONCLUSIONS AND CLINICAL RELEVANCE CiALP activity alone cannot be used as a reliable diagnostic test for hypoadrenocorticism in dogs. Combined results for CiALP activity, Na-to-K ratio, eosinophil count, creatine kinase activity, and concentrations of SUN and albumin provided an excellent means to discriminate between hypoadrenocorticism and diseases that mimic hypoadrenocorticism.
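
    For readers unfamiliar with the workflow, the sketch below shows the general pattern of pooling several routine analytes into one logistic regression and judging it by ROC AUC. The data frame, feature stand-ins and outcome are entirely synthetic; this is not the study's model or dataset.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 200
    # stand-ins for CiALP, Na:K ratio, eosinophils, CK, SUN, albumin (synthetic)
    X = rng.normal(size=(n, 6))
    y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=n) > 0).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"in-sample ROC AUC: {auc:.3f}")
    ```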

  12. A Generalized FDM for solving the Poisson's Equation on 3D Irregular Domains

    Directory of Open Access Journals (Sweden)

    J. Izadian

    2014-01-01

    Full Text Available In this paper a new method for solving the Poisson's equation with Dirichlet conditions on irregular domains is presented. For this purpose a generalized finite differences method is applied for numerical differentiation on irregular meshes. Three examples on cylindrical and spherical domains are considered. The numerical results are compared with analytical solution. These results show the performance and efficiency of the proposed method.
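
    For contrast with the generalized scheme on irregular meshes described above, here is a minimal standard finite-difference solve of the 1D Poisson problem -u'' = f on (0, 1) with homogeneous Dirichlet conditions on a regular grid; it does not attempt the paper's 3D irregular-domain method.

    ```python
    import numpy as np

    n = 50                              # interior grid points
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)    # exact solution is sin(pi x)

    # Tridiagonal second-difference operator (2u_i - u_{i-1} - u_{i+1}) / h^2
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f)
    print("max error vs sin(pi x):", np.max(np.abs(u - np.sin(np.pi * x))))
    ```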

  13. A spectral Poisson solver for kinetic plasma simulation

    Science.gov (United States)

    Szeremley, Daniel; Obberath, Jens; Brinkmann, Ralf

    2011-10-01

    Plasma resonance spectroscopy is a well established plasma diagnostic method, realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized - geometrically simplified - version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development which is capable of simulating the dynamics of the plasma surrounding the MRP in electrostatic approximation. In this contribution we concentrate on the specialized Poisson solver for that tool. The plasma is represented by an ensemble of point charges. By expanding both the charge density and the potential into spherical harmonics, a largely analytical solution of the Poisson problem can be employed. For a practical implementation, the expansion must be appropriately truncated. With this spectral solver we are able to efficiently solve the Poisson equation in a kinetic plasma simulation without the need of introducing a spatial discretization.

  14. A high order solver for the unbounded Poisson equation

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe

    In mesh-free particle methods a high order solution to the unbounded Poisson equation is usually achieved by constructing regularised integration kernels for the Biot-Savart law. Here the singular, point particles are regularised using smoothed particles to obtain an accurate solution with an order...... of convergence consistent with the moments conserved by the applied smoothing function. In the hybrid particle-mesh method of Hockney and Eastwood (HE) the particles are interpolated onto a regular mesh where the unbounded Poisson equation is solved by a discrete non-cyclic convolution of the mesh values...... and the integration kernel. In this work we show an implementation of high order regularised integration kernels in the HE algorithm for the unbounded Poisson equation to formally achieve an arbitrary high order convergence. We further present a quantitative study of the convergence rate to give further insight...

  15. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    Science.gov (United States)

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.

  16. Effect of Poisson noise on adiabatic quantum control

    Science.gov (United States)

    Kiely, A.; Muga, J. G.; Ruschhaupt, A.

    2017-01-01

    We present a detailed derivation of the master equation describing a general time-dependent quantum system with classical Poisson white noise and outline its various properties. We discuss the limiting cases of Poisson white noise and provide approximations for the different noise strength regimes. We show that using the eigenstates of the noise superoperator as a basis can be a useful way of expressing the master equation. Using this, we simulate various settings to illustrate different effects of Poisson noise. In particular, we show a dip in the fidelity as a function of noise strength where high fidelity can occur in the strong-noise regime for some cases. We also investigate recent claims [J. Jing et al., Phys. Rev. A 89, 032110 (2014), 10.1103/PhysRevA.89.032110] that this type of noise may improve rather than destroy adiabaticity.

  17. A Criterium for the Strict Positivity of the Density of the Law of a Poisson Process

    Directory of Open Access Journals (Sweden)

    Léandre Rémi

    2011-01-01

    Full Text Available We translate in semigroup theory our result (Léandre, 1990 giving a necessary condition so that the law of a Markov process with jumps could have a strictly positive density. This result express, that we have to jump in a finite number of jumps in a "submersive" way from the starting point to the end point if the density of the jump process is strictly positive in . We use the Malliavin Calculus of Bismut type of (Léandre, (2008;2010 translated in semi-group theory as a tool, and the interpretation in semi-group theory of some classical results of the stochastic analysis for Poisson process as, for instance, the formula giving the law of a compound Poisson process.

  18. Correction for Poisson's effect in an elastic analysis of low cycle fatigue

    International Nuclear Information System (INIS)

    Roche, R.; Moulin, D.

    1984-05-01

    Fatigue behaviour is essentially dependent on the real strain range, but the current practice is the use of elastic analysis. In low cycle fatigue conditions where inelastic strains predominate, elastic analysis never gives the real value of the strain range. In order to use these results some corrections are necessary. One of these corrections is due to the Poisson's effect (the Poisson ratio in inelastic behaviour is higher than in elastic behaviour). In this paper a method of correction of this effect is proposed. It consists in multiplying the results of the elastic analysis by a coefficient called Kν. A method to draw curves giving this coefficient Kν as a function of results of elastic analysis is developped. Only simple analytical computations using the unixial cyclic curve are needed to draw these curves. Examples are given. The proposed method is very convenient and low cost effective [fr

  19. Detecting Randomness: the Sensitivity of Statistical Tests to Deviations from a Constant Rate Poisson Process

    Science.gov (United States)

    Michael, A. J.

    2012-12-01

    Detecting trends in the rate of sporadic events is a problem for earthquakes and other natural hazards such as storms, floods, or landslides. I use synthetic events to judge the tests used to address this problem in seismology and consider their application to other hazards. Recent papers have analyzed the record of magnitude ≥7 earthquakes since 1900 and concluded that the events are consistent with a constant rate Poisson process plus localized aftershocks (Michael, GRL, 2011; Shearer and Stark, PNAS, 2012; Daub et al., GRL, 2012; Parsons and Geist, BSSA, 2012). Each paper removed localized aftershocks and then used a different suite of statistical tests to test the null hypothesis that the remaining data could be drawn from a constant rate Poisson process. The methods include KS tests between event times or inter-event times and predictions from a Poisson process, the autocorrelation function on inter-event times, and two tests on the number of events in time bins: the Poisson dispersion test and the multinomial chi-square test. The range of statistical tests gives us confidence in the conclusions, which are robust with respect to the choice of tests and parameters. But which tests are optimal and how sensitive are they to deviations from the null hypothesis? The latter point was raised by Dimer (arXiv, 2012), who suggested that the lack of consideration of Type 2 errors prevents these papers from being able to place limits on the degree of clustering and rate changes that could be present in the global seismogenic process. I produce synthetic sets of events that deviate from a constant rate Poisson process using a variety of statistical simulation methods including Gamma distributed inter-event times and random walks. The sets of synthetic events are examined with the statistical tests described above. Preliminary results suggest that with 100 to 1000 events, a data set that does not reject the Poisson null hypothesis could have a variability that is 30% to
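
    One of the tests mentioned above can be sketched in a few lines: a KS test comparing observed inter-event times against the exponential distribution implied by a constant-rate Poisson process. The event catalog below is simulated, and note that estimating the rate from the same data makes the nominal p-value somewhat optimistic.

    ```python
    import numpy as np
    from scipy.stats import kstest

    rng = np.random.default_rng(1)
    event_times = np.cumsum(rng.exponential(scale=2.0, size=500))  # synthetic catalog
    iet = np.diff(event_times)                                     # inter-event times

    stat, p = kstest(iet, "expon", args=(0, iet.mean()))
    print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
    ```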

  20. Poisson cluster analysis of cardiac arrest incidence in Columbus, Ohio.

    Science.gov (United States)

    Warden, Craig; Cudnik, Michael T; Sasson, Comilla; Schwartz, Greg; Semple, Hugh

    2012-01-01

    Scarce resources in disease prevention and emergency medical services (EMS) need to be focused on high-risk areas of out-of-hospital cardiac arrest (OHCA). Cluster analysis using geographic information systems (GISs) was used to find these high-risk areas and test potential predictive variables. This was a retrospective cohort analysis of EMS-treated adults with OHCAs occurring in Columbus, Ohio, from April 1, 2004, through March 31, 2009. The OHCAs were aggregated to census tracts and incidence rates were calculated based on their adult populations. Poisson cluster analysis determined significant clusters of high-risk census tracts. Both census tract-level and case-level characteristics were tested for association with high-risk areas by multivariate logistic regression. A total of 2,037 eligible OHCAs occurred within the city limits during the study period. The mean incidence rate was 0.85 OHCAs/1,000 population/year. There were five significant geographic clusters with 76 high-risk census tracts out of the total of 245 census tracts. In the case-level analysis, being in a high-risk cluster was associated with a slightly younger age (-3 years, adjusted odds ratio [OR] 0.99, 95% confidence interval [CI] 0.99-1.00), not being white, non-Hispanic (OR 0.54, 95% CI 0.45-0.64), cardiac arrest occurring at home (OR 1.53, 95% CI 1.23-1.71), and not receiving bystander cardiopulmonary resuscitation (CPR) (OR 0.77, 95% CI 0.62-0.96), but with higher survival to hospital discharge (OR 1.78, 95% CI 1.30-2.46). In the census tract-level analysis, high-risk census tracts were also associated with a slightly lower average age (-0.1 years, OR 1.14, 95% CI 1.06-1.22) and a lower proportion of white, non-Hispanic patients (-0.298, OR 0.04, 95% CI 0.01-0.19), but also a lower proportion of high-school graduates (-0.184, OR 0.00, 95% CI 0.00-0.00). This analysis identified high-risk census tracts and associated census tract-level and case-level characteristics that can be used to

  1. Double generalized linear compound poisson models to insurance claims data

    DEFF Research Database (Denmark)

    Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo

    2017-01-01

    This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed by a degenerate distribution...... in a finite sample framework. The simulation studies are also used to validate the fitting algorithms and check the computational implementation. Furthermore, we investigate the impact of an unsuitable choice for the response variable distribution on both mean and dispersion parameter estimates. We provide R...... implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurances....
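
    As a loose illustration of the model family above, the sketch fits a compound Poisson (Tweedie) GLM to simulated claim amounts with statsmodels; the paper's double GLM additionally models the dispersion, which this sketch does not, and the data are not the insurance set used by the authors.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 500
    x = rng.normal(size=n)
    mu = np.exp(0.5 + 0.3 * x)
    # crude compound-Poisson response: Poisson claim counts times gamma severities
    claims = rng.poisson(mu)
    y = np.array([rng.gamma(2.0, 1.0, k).sum() if k else 0.0 for k in claims])

    X = sm.add_constant(x)
    fit = sm.GLM(y, X, family=sm.families.Tweedie(var_power=1.5)).fit()
    print(fit.params)
    ```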

  2. Quadratic Hamiltonians on non-symmetric Poisson structures

    International Nuclear Information System (INIS)

    Arribas, M.; Blesa, F.; Elipe, A.

    2007-01-01

    Many dynamical systems may be represented in a set of non-canonical coordinates that generate an su(2) algebraic structure. The topology of the phase space is the one of the S 2 sphere, the Poisson structure is the one of the rigid body, and the Hamiltonian is a parametric quadratic form in these 'spherical' coordinates. However, there are other problems in which the Poisson structure losses its symmetry. In this paper we analyze this case and, we show how the loss of the spherical symmetry affects the phase flow and parametric bifurcations for the bi-parametric cases

  3. Gyrokinetic energy conservation and Poisson-bracket formulation

    International Nuclear Information System (INIS)

    Brizard, A.

    1988-11-01

    An integral expression for the gyrokinetic total energy of a magnetized plasma with general magnetic field configuration perturbed by fully electromagnetic fields was recently derived through the use of a gyro-center Lie transformation. We show that the gyrokinetic energy is conserved by the gyrokinetic Hamiltonian flow to all orders in perturbed fields. This paper is concerned with the explicit demonstration that a gyrokinetic Hamiltonian containing quadratic nonlinearities preserves the gyrokinetic energy up to third order. The Poisson-bracket formulation greatly facilitates this demonstration with the help of the Jacobi identity and other properties of the Poisson brackets. 18 refs

  4. Adaptive maximal poisson-disk sampling on surfaces

    KAUST Repository

    Yan, Dongming

    2012-01-01

    In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which is the key ingredient of the adaptive maximal Poisson-disk sampling framework. Moreover, we adapt the presented sampling framework for remeshing applications. Several novel and efficient operators are developed for improving the sampling/meshing quality over the state-of-the-art. © 2012 ACM.
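
    For orientation, a toy dart-throwing sketch of Poisson-disk sampling in the unit square with a single radius is given below; the paper's contribution (maximal sets with varying radii on surfaces, with gap analysis via power diagrams) goes well beyond this illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    r, max_tries = 0.05, 20000
    samples = []
    for _ in range(max_tries):
        p = rng.random(2)
        # accept the dart only if it keeps distance r from all accepted samples
        if all(np.hypot(*(p - q)) >= r for q in samples):
            samples.append(p)
    print(f"accepted {len(samples)} samples with min spacing {r}")
    ```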

  5. Efficient maximal Poisson-disk sampling and remeshing on surfaces

    KAUST Repository

    Guo, Jianwei

    2015-02-01

    Poisson-disk sampling is one of the fundamental research problems in computer graphics that has many applications. In this paper, we study the problem of maximal Poisson-disk sampling on mesh surfaces. We present a simple approach that generalizes the 2D maximal sampling framework to surfaces. The key observation is to use a subdivided mesh as the sampling domain for conflict checking and void detection. Our approach improves the state-of-the-art approach in efficiency, quality and the memory consumption.

  6. Nonparametric modal regression

    OpenAIRE

    Chen, Yen-Chi; Genovese, Christopher R.; Tibshirani, Ryan J.; Wasserman, Larry

    2016-01-01

    Modal regression estimates the local modes of the distribution of $Y$ given $X=x$, instead of the mean, as in the usual regression sense, and can hence reveal important structure missed by usual regression methods. We study a simple nonparametric method for modal regression, based on a kernel density estimate (KDE) of the joint distribution of $Y$ and $X$. We derive asymptotic error bounds for this method, and propose techniques for constructing confidence sets and prediction sets. The latter...
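
    A rough sketch of the KDE-based idea: for each x0, weight the y observations by a Gaussian kernel in x and take the mode of the resulting conditional density estimate over a y grid. The paper's method targets local modes (e.g., via mean-shift); this sketch keeps only the global conditional mode, and all data and bandwidths are assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 400
    x = rng.uniform(-3, 3, n)
    # bimodal conditional distribution so the mode differs from the mean
    y = np.where(rng.random(n) < 0.5, np.sin(x), np.sin(x) + 2) + 0.2 * rng.normal(size=n)

    hx, hy = 0.4, 0.3                       # assumed bandwidths
    y_grid = np.linspace(y.min(), y.max(), 200)

    def conditional_mode(x0):
        wx = np.exp(-0.5 * ((x - x0) / hx) ** 2)          # kernel weights in x
        dens = (wx[:, None]
                * np.exp(-0.5 * ((y_grid[None, :] - y[:, None]) / hy) ** 2)).sum(axis=0)
        return y_grid[np.argmax(dens)]

    for x0 in (-2.0, 0.0, 2.0):
        print(f"x0 = {x0:+.1f}: estimated conditional mode ~ {conditional_mode(x0):.2f}")
    ```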

  7. Flexible survival regression modelling

    DEFF Research Database (Denmark)

    Cortese, Giuliana; Scheike, Thomas H; Martinussen, Torben

    2009-01-01

    Regression analysis of survival data, and more generally event history data, is typically based on Cox's regression model. We here review some recent methodology, focusing on the limitations of Cox's regression model. The key limitation is that the model is not well suited to represent time-varyi...

  8. Transmission tomography under Poisson noise using the Anscombe transformation and Wiener filtering of the projections

    CERN Document Server

    Mascarenhas, N D A; Cruvinel, P E

    1999-01-01

    A minitomograph scanner for soil science was developed by the National Center for Research and Development of Agricultural Instrumentation (EMBRAPA/CNPDIA). The purpose of this paper is twofold. First, a statistical characterization of the noise affecting the projection measurements of this scanner is presented. Second, having determined the Poisson nature of this noise, a new method of filtering the projection data prior to the reconstruction is proposed. It is based on transforming the Poisson noise into Gaussian additive noise, filtering the projections in blocks through the Wiener filter and performing the inverse transformation. Results with real data indicate that this method gives superior results, as compared to conventional backprojection with the ramp filter, by taking into consideration both resolution and noise, through a mean square error criterion.

  9. Transmission tomography under Poisson noise using the Anscombe transformation and Wiener filtering of the projections

    International Nuclear Information System (INIS)

    Mascarenhas, Nelson D.A.; Santos, Cid A.N.; Cruvinel, Paulo E.

    1999-01-01

    A minitomograph scanner for soil science was developed by the National Center for Research and Development of Agricultural Instrumentation (EMBRAPA/CNPDIA). The purpose of this paper is twofold. First, a statistical characterization of the noise affecting the projection measurements of this scanner is presented. Second, having determined the Poisson nature of this noise, a new method of filtering the projection data prior to the reconstruction is proposed. It is based on transforming the Poisson noise into Gaussian additive noise, filtering the projections in blocks through the Wiener filter and performing the inverse transformation. Results with real data indicate that this method gives superior results, as compared to conventional backprojection with the ramp filter, by taking into consideration both resolution and noise, through a mean square error criterion.
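
    The variance-stabilizing step described in the two records above can be sketched with the Anscombe transform, which maps (approximately) Poisson counts to roughly unit-variance Gaussian data before linear filtering. The filtering step itself is omitted here, and the simple algebraic inverse is used rather than a bias-corrected one.

    ```python
    import numpy as np

    def anscombe(x):
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    def inverse_anscombe(z):
        return (z / 2.0) ** 2 - 3.0 / 8.0      # algebraic inverse, no bias correction

    rng = np.random.default_rng(5)
    projections = rng.poisson(lam=30.0, size=8)   # synthetic projection counts
    z = anscombe(projections)
    # ... Wiener filtering of z would go here ...
    recovered = inverse_anscombe(z)
    print(projections, np.round(recovered, 2))
    ```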

  10. Adiabatic elimination for systems with inertia driven by compound Poisson colored noise

    Science.gov (United States)

    Li, Tiejun; Min, Bin; Wang, Zhiming

    2014-02-01

    We consider the dynamics of systems driven by compound Poisson colored noise in the presence of inertia. We study the limit when the frictional relaxation time and the noise autocorrelation time both tend to zero. We show that the Itô and Marcus stochastic calculuses naturally arise depending on these two time scales, and an extra intermediate type occurs when the two time scales are comparable. This leads to three different limiting regimes which are supported by numerical simulations. Furthermore, we establish that when the resulting compound Poisson process tends to the Wiener process in the frequent jump limit the Itô and Marcus calculuses, respectively, tend to the classical Itô and Stratonovich calculuses for Gaussian white noise, and the crossover type calculus tends to a crossover between the Itô and Stratonovich calculuses. Our results would be very helpful for understanding relevant experiments when jump type noise is involved.

  11. Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP

    Science.gov (United States)

    Chan, Tony F.; Fatoohi, Rod A.

    1990-01-01

    The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.

  12. Use of a Poisson log-linear model for the study of homicides against young Nicaraguan immigrants in Costa Rica

    Directory of Open Access Journals (Sweden)

    Roger E. Bonilla

    2017-01-01

    Full Text Available Objective: To describe the homicide rate for young Nicaraguan immigrants in Costa Rica. Methods: We used a Poisson log-linear regression model at the small administrative area level to describe the homicide rate for young Nicaraguan immigrants in Costa Rica, given the covariables (Agresti, 2002; Cameron and Trivedi, 1998; Neter, Wasserman, Kutner & Nachtsheim, 1996). Results: The incidence rates for the percentage of poor households, the percentage of adults 35 years old and older, and the percentage of the population in the service sector of the economy are 1.04, 1.05 and 1.05, respectively. Conclusions: Poverty, population structure and economic activity in the service sector of the economy are the covariables that best describe the homicide rate for young Nicaraguan immigrants in Costa Rica.
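
    A generic sketch of a Poisson log-linear model for area-level counts with population as exposure, in the spirit of the model described above, is given below. The variable names and data are invented; this is not the Costa Rican dataset or the authors' exact specification.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n = 80
    df = pd.DataFrame({
        "poor_pct": rng.uniform(5, 60, n),         # hypothetical area covariates
        "adult35_pct": rng.uniform(20, 55, n),
        "service_pct": rng.uniform(10, 70, n),
        "population": rng.integers(2_000, 50_000, n),
    })
    true_rate = np.exp(-9 + 0.04 * df.poor_pct + 0.05 * df.service_pct)
    df["homicides"] = rng.poisson(true_rate * df.population)

    fit = smf.glm("homicides ~ poor_pct + adult35_pct + service_pct", data=df,
                  family=sm.families.Poisson(), offset=np.log(df.population)).fit()
    print(np.exp(fit.params))   # incidence rate ratios per unit increase
    ```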

  13. Retro-regression--another important multivariate regression improvement.

    Science.gov (United States)

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.

  14. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
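
    A minimal sketch of the "robust (modified) Poisson" approach for a common binary outcome: a Poisson GLM with a log link and a robust sandwich covariance gives relative-risk estimates directly. The data below are synthetic and the simple two-group setting is chosen only for illustration.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 1000
    exposed = rng.integers(0, 2, n)
    p = np.where(exposed == 1, 0.30, 0.15)          # true relative risk = 2.0
    y = rng.binomial(1, p)

    X = sm.add_constant(exposed)
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
    rr = np.exp(fit.params[1])
    ci = np.exp(fit.conf_int()[1])
    print(f"estimated RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
    ```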

  15. Applications of some discrete regression models for count data

    Directory of Open Access Journals (Sweden)

    B. M. Golam Kibria

    2006-01-01

    Full Text Available In this paper we have considered several regression models to fit the count data encountered in the fields of biometrical, environmental and social sciences and transportation engineering. We have fitted Poisson (PO), Negative Binomial (NB), Zero-Inflated Poisson (ZIP) and Zero-Inflated Negative Binomial (ZINB) regression models to run-off-road (ROR) crash data collected on arterial roads in the south (rural) region of Florida State. To compare the performance of these models, we analyzed data with a moderate to high percentage of zero counts. Because the variances were almost three times greater than the means, both the NB and ZINB models appeared to perform better than the PO and ZIP models for these zero-inflated and overdispersed count data.
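
    The comparison above can be sketched by fitting an ordinary Poisson model and a zero-inflated Poisson model to a zero-heavy, overdispersed count outcome and comparing information criteria. The crash-like data below are simulated, not the Florida ROR dataset, and the default intercept-only inflation component is assumed.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(8)
    n = 600
    x = rng.normal(size=n)
    mu = np.exp(0.2 + 0.5 * x)
    counts = np.where(rng.random(n) < 0.4, 0, rng.poisson(mu))   # extra zeros

    X = sm.add_constant(x)
    poisson_fit = sm.Poisson(counts, X).fit(disp=False)
    zip_fit = ZeroInflatedPoisson(counts, X).fit(disp=False)
    print("Poisson AIC:", round(poisson_fit.aic, 1), " ZIP AIC:", round(zip_fit.aic, 1))
    ```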

  16. Normal forms of dispersive scalar Poisson brackets with two independent variables

    Science.gov (United States)

    Carlet, Guido; Casati, Matteo; Shadrin, Sergey

    2018-03-01

    We classify the dispersive Poisson brackets with one dependent variable and two independent variables, with leading order of hydrodynamic type, up to Miura transformations. We show that, in contrast to the case of a single independent variable for which a well-known triviality result exists, the Miura equivalence classes are parametrised by an infinite number of constants, which we call numerical invariants of the brackets. We obtain explicit formulas for the first few numerical invariants.

  17. Path integral quantization of the Symplectic Leaves of the SU(2)*Poisson-Lie Group

    International Nuclear Information System (INIS)

    Morariu, B.

    1997-01-01

    The Feynman path integral is used to quantize the symplectic leaves of the Poisson-Lie group SU(2)*. In this way we obtain the unitary representations of Uq(su(2)). This is achieved by finding explicit Darboux coordinates and then using a phase space path integral. I discuss the *-structure of SU(2)* and give a detailed description of its leaves using various parameterizations and also compare the results with the path integral quantization of spin

  18. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based...
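
    A short sketch of linear quantile regression with statsmodels, estimating the conditional median and an upper quantile of a toy wage-style outcome; the data are simulated, not the labor-market application referenced above.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(9)
    n = 500
    df = pd.DataFrame({"experience": rng.uniform(0, 30, n)})
    # skewed, heteroscedastic noise so quantile slopes differ across quantiles
    df["wage"] = 10 + 0.5 * df.experience + rng.gamma(2.0, 2.0, n) * (1 + 0.05 * df.experience)

    for q in (0.5, 0.9):
        fit = smf.quantreg("wage ~ experience", df).fit(q=q)
        print(f"q = {q}: slope = {fit.params['experience']:.3f}")
    ```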

  19. Poisson simulation for high voltage terminal of test stand for 1MV electrostatic accelerator

    International Nuclear Information System (INIS)

    Park, Sae-Hoon; Kim, Jeong-Tae; Kwon, Hyeok-Jung; Cho, Yong-Sub; Kim, Yu-Seok

    2014-01-01

    KOMAC provides ion beams to users; the beam energy range needs to be extended to the MeV range, and a 1 MV electrostatic accelerator is being developed. The specifications of the electrostatic accelerator are 1 MV acceleration voltage, 10 mA peak current and variable gas ion species. A test stand is being developed before the 1 MV electrostatic accelerator is set up. The test stand voltage is 300 kV and the operating time is 8 hours. The test stand consists of a 300 kV high voltage terminal, a DC-AC-DC inverter, power supply devices inside the terminal, a 200 MHz RF power supply, a 5 kV extraction power supply, a 300 kV accelerating tube and a vacuum system. The beam measurement system and beam dump will be installed next to the accelerating tube. Poisson code simulation results for the high voltage terminal are presented in this paper. The Poisson code has been used to calculate the electric field of the high voltage terminal, and the simulation was found to give reasonable results. The resulting structure could be applied to the high voltage terminal of the test stand.

  20. Multi-parameter full waveform inversion using Poisson

    KAUST Repository

    Oh, Juwon

    2016-07-21

    In multi-parameter full waveform inversion (FWI), the success of recovering each parameter is dependent on characteristics of the partial derivative wavefields (or virtual sources), which differ according to parameterisation. Elastic FWIs based on the two conventional parameterisations (one uses Lamé constants and density; the other employs P- and S-wave velocities and density) have low resolution of gradients for P-wave velocities (or λ). Limitations occur because the virtual sources for P-wave velocity or λ (one of the Lamé constants) are related only to P-P diffracted waves, and generate isotropic explosions, which reduce the spatial resolution of the FWI for these parameters. To increase the spatial resolution, we propose a new parameterisation using P-wave velocity, Poisson's ratio, and density for frequency-domain multi-parameter FWI for isotropic elastic media. By introducing Poisson's ratio instead of S-wave velocity, the virtual source for the P-wave velocity generates P-S and S-S diffracted waves as well as P-P diffracted waves in the partial derivative wavefields for the P-wave velocity. Numerical examples of the cross-triangle-square (CTS) model indicate that the new parameterisation provides highly resolved descent directions for the P-wave velocity. Numerical examples of noise-free and noisy data synthesised for the elastic Marmousi-II model support the fact that the new parameterisation is more robust to noise than the two conventional parameterisations.

  1. On covariant Poisson brackets in classical field theory

    International Nuclear Information System (INIS)

    Forger, Michael; Salles, Mário O.

    2015-01-01

    How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra

  2. Poisson processes on groups and Feynman path integrals

    International Nuclear Information System (INIS)

    Combe, P.; Rodriguez, R.; Sirugue-Collin, M.; Centre National de la Recherche Scientifique, 13 - Marseille; Sirugue, M.

    1979-09-01

    An expression is given for the perturbation of a free evolution by a gentle, possibly velocity-dependent, potential, in terms of the expectation with respect to a Poisson process on a group. Various applications are given, in particular to usual quantum mechanics but also to Fermi and spin systems.

  3. The Quantum Poisson Bracket and Transformation Theory in ...

    Indian Academy of Sciences (India)

    The Quantum Poisson Bracket and Transformation Theory in Quantum Mechanics: Dirac's Early Work in Quantum Theory. Kamal Datta. General Article, Resonance – Journal of Science Education, Volume 8, Issue 8, August 2003, pp. 75-85.

  4. A high order solver for the unbounded Poisson equation

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe

    2012-01-01

    This work improves upon Hockney and Eastwood's Fourier-based algorithm for the unbounded Poisson equation to formally achieve arbitrary high order of convergence without any additional computational cost. We assess the methodology on the kinematic relations between the velocity and vorticity fields....

  5. Coefficient Inverse Problem for Poisson's Equation in a Cylinder

    NARCIS (Netherlands)

    Solov'ev, V. V.

    2011-01-01

    The inverse problem of determining the coefficient on the right-hand side of Poisson's equation in a cylindrical domain is considered. The Dirichlet boundary value problem is studied. Two types of additional information (overdetermination) can be specified: (i) the trace of the solution to the

  6. Is it safe to use Poisson statistics in nuclear spectrometry?

    International Nuclear Information System (INIS)

    Pomme, S.; Robouch, P.; Arana, G.; Eguskiza, M.; Maguregui, M.I.

    2000-01-01

    The boundary conditions in which Poisson statistics can be applied in nuclear spectrometry are investigated. Improved formulas for the uncertainty of nuclear counting with deadtime and pulse pileup are presented. A comparison is made between the expected statistical uncertainty for loss-free counting, fixed live-time and fixed real-time measurements. (author)

  7. Nambu-Poisson reformulation of the finite dimensional dynamical systems

    International Nuclear Information System (INIS)

    Baleanu, D.; Makhaldiani, N.

    1998-01-01

    A system of nonlinear ordinary differential equations which in a particular case reduces to Volterra's system is introduced. We found in two simplest cases the complete sets of the integrals of motion using Nambu-Poisson reformulation of the Hamiltonian dynamics. In these cases we have solved the systems by quadratures

  8. A Poisson type formula for Hardy classes on Heisenberg's group

    Directory of Open Access Journals (Sweden)

    Lopushansky O.V.

    2010-06-01

    Full Text Available The Hardy type class of complex functions of infinitely many variables defined on the Schrödinger irreducible unitary orbit of the reduced Heisenberg group, generated by the Gauss density, is investigated. A Poisson integral type formula for their analytic extensions on an open ball is established. Taylor coefficients of the analytic extensions are described via the associated symmetric Fock space.

  9. Subsonic Flow for the Multidimensional Euler-Poisson System

    Science.gov (United States)

    Bae, Myoungjean; Duan, Ben; Xie, Chunjing

    2016-04-01

    We establish the existence and stability of subsonic potential flow for the steady Euler-Poisson system in a multidimensional nozzle of a finite length when prescribing the electric potential difference on a non-insulated boundary from a fixed point at the exit, and prescribing the pressure at the exit of the nozzle. The Euler-Poisson system for subsonic potential flow can be reduced to a nonlinear elliptic system of second order. In this paper, we develop a technique to achieve a priori C^{1,α} estimates of solutions to a quasi-linear second order elliptic system with mixed boundary conditions in a multidimensional domain enclosed by a Lipschitz continuous boundary. In particular, we discovered a special structure of the Euler-Poisson system which enables us to obtain C^{1,α} estimates of the velocity potential and the electric potential functions, and this leads us to establish structural stability of subsonic flows for the Euler-Poisson system under perturbations of various data.

  10. Poisson-generalized gamma empirical Bayes model for disease ...

    African Journals Online (AJOL)

    In spatial disease mapping, the use of Bayesian models of estimation technique is becoming popular for smoothing relative risks estimates for disease mapping. The most common Bayesian conjugate model for disease mapping is the Poisson-Gamma Model (PG). To explore further the activity of smoothing of relative risk ...

  11. Inhibition in speed and concentration tests: The Poisson inhibition model

    NARCIS (Netherlands)

    Smit, J.C.; Ven, A.H.G.S. van der

    1995-01-01

    A new model is presented to account for the reaction time fluctuations in concentration tests. The model is a natural generalization of an earlier model, the so-called Poisson-Erlang model, published by Pieters & van der Ven (1982). First, a description is given of the type of tasks for which the

  12. Boundary singularity of Poisson and harmonic Bergman kernels

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2015-01-01

    Roč. 429, č. 1 (2015), s. 233-272 ISSN 0022-247X R&D Projects: GA AV ČR IAA100190802 Institutional support: RVO:67985840 Keywords : harmonic Bergman kernel * Poisson kernel * pseudodifferential boundary operators Subject RIV: BA - General Mathematics Impact factor: 1.014, year: 2015 http://www.sciencedirect.com/science/article/pii/S0022247X15003170

  13. Characterization and global analysis of a family of Poisson structures

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Bermejo, Benito [Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Universidad Rey Juan Carlos, Calle Tulipan S/N, 28933 (Mostoles), Madrid (Spain)]. E-mail: benito.hernandez@urjc.es

    2006-06-26

    A three-dimensional family of solutions of the Jacobi equations for Poisson systems is characterized. In spite of its general form, its main features, such as the symplectic structure and the construction of the Darboux canonical form, can be determined explicitly and globally. Examples are given.

  14. Comparison between two bivariate Poisson distributions through the ...

    African Journals Online (AJOL)

    To remedy this problem, Berkhout and Plug proposed a bivariate Poisson distribution that allows the correlation to be negative, zero, or positive. In this paper, we show that these models are almost everywhere asymptotically equal. Since the φ-divergence converges toward zero, both models are ...

  15. On covariant Poisson brackets in classical field theory

    Energy Technology Data Exchange (ETDEWEB)

    Forger, Michael [Instituto de Matemática e Estatística, Universidade de São Paulo, Caixa Postal 66281, BR–05315-970 São Paulo, SP (Brazil); Salles, Mário O. [Instituto de Matemática e Estatística, Universidade de São Paulo, Caixa Postal 66281, BR–05315-970 São Paulo, SP (Brazil); Centro de Ciências Exatas e da Terra, Universidade Federal do Rio Grande do Norte, Campus Universitário – Lagoa Nova, BR–59078-970 Natal, RN (Brazil)

    2015-10-15

    How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra.

  16. Electroneutral models for dynamic Poisson-Nernst-Planck systems

    Science.gov (United States)

    Song, Zilong; Cao, Xiulei; Huang, Huaxiong

    2018-01-01

    The Poisson-Nernst-Planck (PNP) system is a standard model for describing ion transport. In many applications, e.g., ions in biological tissues, the presence of thin boundary layers poses both modeling and computational challenges. In this paper, we derive simplified electroneutral (EN) models where the thin boundary layers are replaced by effective boundary conditions. There are two major advantages of EN models. First, it is much cheaper to solve them numerically. Second, EN models are easier to deal with compared to the original PNP system; therefore, it would also be easier to derive macroscopic models for cellular structures using EN models. Even though the approach used here is applicable to higher-dimensional cases, this paper mainly focuses on the one-dimensional system, including the general multi-ion case. Using systematic asymptotic analysis, we derive a variety of effective boundary conditions directly applicable to the EN system for the bulk region. This EN system can be solved directly and efficiently without computing the solution in the boundary layer. The derivation is based on matched asymptotics, and the key idea is to bring back higher-order contributions into the effective boundary conditions. For Dirichlet boundary conditions, the higher-order terms can be neglected and the classical results (continuity of electrochemical potential) are recovered. For flux boundary conditions, higher-order terms account for the accumulation of ions in the boundary layer, and neglecting them leads to physically incorrect solutions. To validate the EN model, numerical computations are carried out for several examples. Our results show that solving the EN model is much more efficient than solving the original PNP system. When implemented with the Hodgkin-Huxley model, the computational time for solving the EN model is significantly reduced without sacrificing the accuracy of the solution, due to the fact that it allows for relatively large mesh and time-step sizes.

  17. A Poisson-Fault Model for Testing Power Transformers in Service

    Directory of Open Access Journals (Sweden)

    Dengfu Zhao

    2014-01-01

    Full Text Available This paper presents a method for assessing the instant failure rate of a power transformer under different working conditions. The method can be applied to a dataset of a power transformer under periodic inspections and maintenance. We use a Poisson-fault model to describe failures of a power transformer. When investigating a Bayes estimate of the instant failure rate under the model, we find that the complexities of a classical method and of a Monte Carlo simulation are unacceptable. By establishing a new filtered estimate of Poisson process observations, we propose a quick algorithm for the Bayes estimate of the instant failure rate. The proposed algorithm is tested on simulated datasets of a power transformer. For these datasets, the proposed estimators of the model parameters perform better than other estimators. The simulation results reveal that the suggested algorithm is the quickest among the three candidates.

  18. A finite element Poisson solver for gyrokinetic particle simulations in a global field aligned mesh

    International Nuclear Information System (INIS)

    Nishimura, Y.; Lin, Z.; Lewandowski, J.L.V.; Ethier, S.

    2006-01-01

    A new finite element Poisson solver is developed and applied to a global gyrokinetic toroidal code (GTC) which employs the field aligned mesh and thus a logically non-rectangular grid in a general geometry. Employing test cases where the analytical solutions are known, the finite element solver has been verified. The CPU time scaling versus the matrix size employing the Portable, Extensible Toolkit for Scientific Computation (PETSc) to solve the sparse matrix is promising. Taking the ion temperature gradient modes (ITG) as an example, the solution from the new finite element solver has been compared to the solution from the original GTC's iterative solver, which is only efficient for adiabatic electrons. Linear and nonlinear simulation results from the two different forms of the gyrokinetic Poisson equation (the integral form and the differential form) coincide with each other. The new finite element solver enables the implementation of advanced kinetic electron models for global electromagnetic simulations.

  19. Poisson and negative binomial item count techniques for surveys with sensitive question.

    Science.gov (United States)

    Tian, Guo-Liang; Tang, Man-Lai; Wu, Qin; Liu, Yin

    2017-04-01

    Although the item count technique is useful in surveys with sensitive questions, privacy of those respondents who possess the sensitive characteristic of interest may not be well protected due to a defect in its original design. In this article, we propose two new survey designs (namely the Poisson item count technique and negative binomial item count technique) which replace several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide a closed-form variance estimate and a confidence interval within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.
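
    As a rough illustration of the idea behind the Poisson item count design (not the authors' exact estimator or variance formula), the Python sketch below simulates a control group that reports only an innocuous Poisson count and a treatment group that adds one to the count when the sensitive trait is present; a simple difference of group means then recovers the sensitive proportion. All parameter values are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative (hypothetical) parameters: sensitive proportion, Poisson mean, group size
        pi_true, lam, n = 0.15, 3.0, 2000

        # Control group answers only the innocuous Poisson count
        y_control = rng.poisson(lam, size=n)
        # Treatment group adds 1 to the count if the respondent has the sensitive trait
        sensitive = rng.random(n) < pi_true
        y_treat = rng.poisson(lam, size=n) + sensitive

        # Method-of-moments estimate: difference of group means recovers the proportion
        pi_hat = y_treat.mean() - y_control.mean()
        se_hat = np.sqrt(y_treat.var(ddof=1) / n + y_control.var(ddof=1) / n)
        print(f"pi_hat = {pi_hat:.3f}  (95% CI {pi_hat - 1.96*se_hat:.3f}..{pi_hat + 1.96*se_hat:.3f})")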

  20. Numerical solution of continuous-time DSGE models under Poisson uncertainty

    DEFF Research Database (Denmark)

    Posch, Olaf; Trimborn, Timo

    We propose a simple and powerful method for determining the transition process in continuous-time DSGE models under Poisson uncertainty numerically. The idea is to transform the system of stochastic differential equations into a system of functional differential equations of the retarded type. We then use the Waveform Relaxation algorithm to provide a guess of the policy function and solve the resulting system of ordinary differential equations by standard methods and fix-point iteration. Analytical solutions are provided as a benchmark from which our numerical method can be used to explore broader classes of models. We illustrate the algorithm simulating both the stochastic neoclassical growth model and the Lucas model under Poisson uncertainty which is motivated by the Barro-Rietz rare disaster hypothesis. We find that, even for non-linear policy functions, the maximum (absolute) error is very...

  1. Bayesian Estimation Of Shift Point In Poisson Model Under Asymmetric Loss Functions

    Directory of Open Access Journals (Sweden)

    uma srivastava

    2012-01-01

    Full Text Available The paper deals with estimating the shift point which occurs in a sequence of independent observations from a Poisson model in statistical process control. This shift point occurs in the sequence when m life data are observed. The Bayes estimators of the shift point 'm' and of the process means before and after the shift are derived for symmetric and asymmetric loss functions under informative and non-informative priors. The sensitivity analysis of the Bayes estimators is carried out by simulation and numerical comparisons with R-programming. The results show the effectiveness of the shift in a sequence of Poisson distributions.
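
    A minimal sketch of the shift-point idea, assuming conjugate Gamma priors on the two Poisson means and a uniform prior on the shift point; this is not the paper's asymmetric-loss analysis, only the posterior over the shift point 'm' with the means integrated out. All parameter values are illustrative.

        import numpy as np
        from scipy.special import gammaln

        rng = np.random.default_rng(1)

        # Simulated Poisson sequence with a shift at m_true (illustrative values)
        m_true, theta1, theta2, n = 40, 2.0, 5.0, 100
        x = np.concatenate([rng.poisson(theta1, m_true), rng.poisson(theta2, n - m_true)])

        a, b = 0.5, 0.5          # Gamma(a, b) priors on both Poisson means
        S = np.cumsum(x)         # partial sums S_m = x_1 + ... + x_m
        S_n = S[-1]

        def log_marginal(m):
            """Log posterior of shift point m (up to a constant), Gamma priors integrated out."""
            s1, s2 = S[m - 1], S_n - S[m - 1]
            return (gammaln(a + s1) - (a + s1) * np.log(b + m)
                    + gammaln(a + s2) - (a + s2) * np.log(b + n - m))

        ms = np.arange(1, n)                      # the shift can occur after observation 1..n-1
        logpost = np.array([log_marginal(m) for m in ms])
        post = np.exp(logpost - logpost.max())
        post /= post.sum()
        print("posterior mode of m:", ms[post.argmax()], " posterior mean:", (ms * post).sum())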

  2. Semiconductor device simulation by a new method of solving Poisson, Laplace and Schrodinger equations

    International Nuclear Information System (INIS)

    Sharifi, M. J.; Adibi, A.

    2000-01-01

    In this paper, we have extended and completed our previous work, which introduced a new method for finite differentiation. We show the applicability of the method for solving a wide variety of equations such as the Poisson, Laplace and Schrodinger equations. These equations are fundamental to most semiconductor device simulators. In one section, we solve the Schrodinger equation by this method in several cases, including the problem of finding the electron concentration profile in the channel of a HEMT. In another section, we solve the Poisson equation by this method, choosing the problem of an SBD as an example. Finally, we solve the Laplace equation in two dimensions and, as an example, focus on the VED. We have shown that the method gives stable and precise results in solving all of these problems. Also, the programs written based on this method are considerably faster, clearer, and more abstract.
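
    For reference, a standard second-order finite-difference solve of the one-dimensional Poisson equation -u''(x) = f(x) with homogeneous Dirichlet boundaries is sketched below; it is not the authors' differentiation method, only the conventional scheme such simulators typically start from. The source term and grid size are illustrative.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import spsolve

        # Standard second-order finite differences for -u''(x) = f(x), u(0) = u(1) = 0
        N = 200
        x = np.linspace(0.0, 1.0, N + 2)          # grid including the boundary points
        h = x[1] - x[0]
        f = np.sin(np.pi * x[1:-1])               # example source term

        # Tridiagonal matrix (-1, 2, -1) / h^2 on the interior points
        A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N)) / h**2
        u_inner = spsolve(A.tocsr(), f)

        u = np.zeros(N + 2)
        u[1:-1] = u_inner
        # The exact solution of this test problem is sin(pi x) / pi^2; check the error
        print("max error:", np.abs(u - np.sin(np.pi * x) / np.pi**2).max())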

  3. Stochastic Interest Model Based on Compound Poisson Process and Applications in Actuarial Science

    Directory of Open Access Journals (Sweden)

    Shilong Li

    2017-01-01

    Full Text Available Considering the stochastic behavior of interest rates in the financial market, we construct a new class of interest models based on a compound Poisson process. Different from the references, this paper describes the randomness of interest rates by modeling the force of interest with Poisson random jumps directly. To solve the problem of calculating the accumulated interest force function, an important integral technique is employed. A concept called the critical value is introduced to investigate the validity condition of this new model. We also discuss actuarial present values of several life annuities under this new interest model. Simulations are done to illustrate the theoretical results, and the effect of the parameters in the interest model on actuarial present values is also analyzed.
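
    One plausible reading of a force of interest with Poisson random jumps is sketched below: the force starts at a base level and receives permanent upward jumps at Poisson epochs, and the accumulation factor is the exponential of its integral. This is only an illustrative Monte Carlo sketch, not the paper's specification or its integral technique; all parameters are hypothetical.

        import numpy as np

        rng = np.random.default_rng(2)

        # Illustrative (hypothetical) parameters: base force of interest, jump rate, mean jump size
        delta0, jump_rate, jump_mean, T = 0.03, 0.5, 0.01, 10.0

        def accumulated_force(T):
            """Integral of the force of interest over [0, T] when the force jumps at Poisson epochs."""
            n_jumps = rng.poisson(jump_rate * T)
            times = np.sort(rng.uniform(0.0, T, n_jumps))   # jump epochs, uniform given the count
            sizes = rng.exponential(jump_mean, n_jumps)     # permanent upward jumps of the force
            return delta0 * T + np.sum(sizes * (T - times))

        # Monte Carlo estimate of the expected accumulation factor exp(integral of delta)
        samples = np.array([np.exp(accumulated_force(T)) for _ in range(20_000)])
        print("E[accumulation factor over 10 years] ≈", samples.mean())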

  4. Dynamics of a prey-predator system under Poisson white noise excitation

    Science.gov (United States)

    Pan, Shan-Shan; Zhu, Wei-Qiu

    2014-10-01

    The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of the stochastic LV model, in which the effect of a random natural environment has been modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for the prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution for the averaged generalized FPK equation is obtained by using the perturbation method. The effect of the prey self-competition parameter ɛ²s on ecosystem behavior is evaluated. The analytical result is confirmed by the corresponding Monte Carlo (MC) simulation.

  5. Comparison of density functional and modified Poisson-Boltzmann structural properties for a spherical double layer

    Directory of Open Access Journals (Sweden)

    L.B.Bhuiyan

    2005-01-01

    Full Text Available The density functional and modified Poisson-Boltzmann descriptions of a spherical electric double layer are compared and contrasted vis-à-vis existing Monte Carlo simulation data (for a small ion diameter of 4.25·10⁻¹⁰ m) from the literature for a range of physical parameters such as macroion surface charge, macroion radius, valencies of the small ions, and electrolyte concentration. Overall, the theoretical predictions are seen to be remarkably consistent with each other, and also in very good agreement with the simulations. Some modified Poisson-Boltzmann results for the zeta potential at small ion diameters of 3·10⁻¹⁰ and 2·10⁻¹⁰ m are also reported.

  6. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    Energy Technology Data Exchange (ETDEWEB)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
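
    A minimal sketch of the fitting objective discussed above: the Poisson negative log-likelihood of an exponential-decay histogram is minimized with a generic bounded quasi-Newton routine rather than the authors' Levenberg-Marquardt extension. The data, model form, and starting values are simulated and illustrative.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)

        # Simulated fluorescence-decay-like histogram (illustrative numbers)
        t = np.linspace(0.0, 10.0, 100)
        mu_true = 80.0 * np.exp(-t / 2.0) + 5.0
        counts = rng.poisson(mu_true)

        def poisson_nll(params):
            """Negative Poisson log-likelihood (the constant log n! term is dropped)."""
            A, tau, B = params
            mu = A * np.exp(-t / tau) + B
            return np.sum(mu - counts * np.log(mu))

        # Generic bounded quasi-Newton minimization of the Poisson NLL
        # (the paper extends Levenberg-Marquardt to this objective; this sketch just
        #  applies scipy's L-BFGS-B to the same MLE criterion)
        fit = minimize(poisson_nll, x0=[50.0, 1.0, 1.0], method="L-BFGS-B",
                       bounds=[(1e-6, None)] * 3)
        print("A, tau, B =", fit.x)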

  7. Screened Poisson Equation for Image Contrast Enhancement

    Directory of Open Access Journals (Sweden)

    Jean-Michel Morel

    2014-03-01

    Full Text Available In this work we propose a discussion and detailed implementation of a very simple gradient domain method that tries to eliminate the effect of nonuniform illumination and at the same time preserves the image details. This model, which to the best of our knowledge has not been explored in spite of its simplicity, acts as a high-pass filter. We show that with a single contrast parameter (which keeps the same value in most experiments), the model delivers state-of-the-art results. They compare favorably to results obtained with more complex algorithms. Our algorithm is designed for all kinds of images, but with the specific aim of minimal image detail alteration, thanks to a first-order fidelity term instead of the usual zero-order term. Experiments on non-uniform medical images and on hazy images illustrate a significant perception gain.
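
    In the spirit of the method (the exact variational model and parameter conventions in the paper may differ), a screened Poisson high-pass filter can be applied in the Fourier domain: solve (λ − Δ)u = −Δf, i.e. multiply the spectrum by |ξ|²/(λ + |ξ|²), and rescale for display. The sketch below assumes a single-channel image array; the parameter value is illustrative.

        import numpy as np

        def screened_poisson_enhance(img, lam=0.02):
            """High-pass contrast enhancement by solving (lam - Laplacian) u = -Laplacian f
            in the Fourier domain; a sketch in the spirit of the screened Poisson method
            (parameter conventions may differ from the published algorithm)."""
            f = img.astype(float)
            H, W = f.shape
            fy = np.fft.fftfreq(H)[:, None]
            fx = np.fft.fftfreq(W)[None, :]
            # Squared frequency magnitude |xi|^2 on the FFT grid
            xi2 = (2 * np.pi * fx) ** 2 + (2 * np.pi * fy) ** 2
            F = np.fft.fft2(f)
            U = xi2 / (lam + xi2) * F          # DC component is suppressed (xi2 = 0 there)
            u = np.real(np.fft.ifft2(U))
            # Rescale to the original dynamic range for display
            u = (u - u.min()) / (u.max() - u.min() + 1e-12)
            return (255 * u).astype(np.uint8)

        # Usage (hypothetical image array): enhanced = screened_poisson_enhance(gray_image, lam=0.02)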

  8. Poisson traces, D-modules, and symplectic resolutions

    Science.gov (United States)

    Etingof, Pavel; Schedler, Travis

    2018-03-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  9. Poisson structure of dynamical systems with three degrees of freedom

    Science.gov (United States)

    Gümral, Hasan; Nutku, Yavuz

    1993-12-01

    It is shown that the Poisson structure of dynamical systems with three degrees of freedom can be defined in terms of an integrable one-form in three dimensions. Advantage is taken of this fact and the theory of foliations is used in discussing the geometrical structure underlying complete and partial integrability. Techniques for finding Poisson structures are presented and applied to various examples such as the Halphen system which has been studied as the two-monopole problem by Atiyah and Hitchin. It is shown that the Halphen system can be formulated in terms of a flat SL(2,R)-valued connection and belongs to a nontrivial Godbillon-Vey class. On the other hand, for the Euler top and a special case of three-species Lotka-Volterra equations which are contained in the Halphen system as limiting cases, this structure degenerates into the form of globally integrable bi-Hamiltonian structures. The globally integrable bi-Hamiltonian case is a linear and the SL(2,R) structure is a quadratic unfolding of an integrable one-form in 3+1 dimensions. It is shown that the existence of a vector field compatible with the flow is a powerful tool in the investigation of Poisson structure and some new techniques for incorporating arbitrary constants into the Poisson one-form are presented herein. This leads to some extensions, analogous to q extensions, of Poisson structure. The Kermack-McKendrick model and some of its generalizations describing the spread of epidemics, as well as the integrable cases of the Lorenz, Lotka-Volterra, May-Leonard, and Maxwell-Bloch systems admit globally integrable bi-Hamiltonian structure.

  10. Influence of Poisson's ratio variation on lateral spring constant of atomic force microscopy cantilevers

    International Nuclear Information System (INIS)

    Yeh, M.-K.; Tai, N.-Ha; Chen, B.-Y.

    2008-01-01

    Atomic force microscopy (AFM) can be used to measure the surface morphologies and the mechanical properties of nanostructures. The force acting on the AFM cantilever can be obtained by multiplying the spring constant of the AFM cantilever by the corresponding deformation. To improve the accuracy of force experiments, the spring constant of the AFM cantilever must be calibrated carefully. Many methods, such as theoretical equations, the finite element method, and the use of a reference cantilever, were reported to obtain the spring constant of AFM cantilevers. For a cantilever made of a single crystal, the Poisson's ratio varies with different cantilever-crystal angles. In this paper, the influences of Poisson's ratio variation on the lateral spring constant and axial spring constant of rectangular and V-shaped AFM cantilevers, with different tilt angles and normal forces, were investigated by finite element analysis. When the cantilever's tilt angle is 20 deg. and the Poisson's ratio varies from 0.02 to 0.4, the finite element results show that the lateral spring constants decrease by 11.75% for the rectangular cantilever with 1 μN landing force and by 18.60% for the V-shaped cantilever with 50 nN landing force, respectively. The influence of Poisson's ratio variation on the axial spring constant is less than 3% for both rectangular and V-shaped cantilevers. As the tilt angle increases, the axial spring constants for rectangular and V-shaped cantilevers decrease substantially. The results obtained can be used to improve the accuracy of the lateral force measurement when using atomic force microscopy.

  11. Pricing Zero-Coupon Catastrophe Bonds Using EVT with Doubly Stochastic Poisson Arrivals

    Directory of Open Access Journals (Sweden)

    Zonggang Ma

    2017-01-01

    Full Text Available The frequency and severity of abnormal climate change display an irregular upward cycle as global warming intensifies. Therefore, this paper employs a doubly stochastic Poisson process with Black Derman Toy (BDT) intensity to describe the catastrophic characteristics. By using the Property Claim Services (PCS) loss index data from 2001 to 2010 provided by the US Insurance Services Office (ISO), the empirical result reveals that the BDT arrival rate process is superior to the nonhomogeneous Poisson and lognormal intensity processes due to its smaller RMSE, MAE, MRPE, and U and larger E and d. Secondly, to depict extreme features of catastrophic risks, this paper adopts the Peak Over Threshold (POT) approach in extreme value theory (EVT) to characterize the tail characteristics of the catastrophic loss distribution. The loss distribution is then analyzed and assessed using a quantile-quantile (QQ) plot to visually check whether the PCS index observations meet the generalized Pareto distribution (GPD) assumption. Furthermore, this paper derives a pricing formula for zero-coupon catastrophe bonds with a stochastic interest rate environment and aggregate losses generated by a compound doubly stochastic Poisson process under the forward measure. Finally, simulation results verify pricing model predictions and show how catastrophic risks and interest rate risk affect the prices of zero-coupon catastrophe bonds.

  12. Stochastic Dynamics of a Time-Delayed Ecosystem Driven by Poisson White Noise Excitation

    Directory of Open Access Journals (Sweden)

    Wantao Jia

    2018-02-01

    Full Text Available We investigate the stochastic dynamics of a prey-predator type ecosystem with time delay and discrete random environmental fluctuations. In this model, the delay effect is represented by a time delay parameter and the effect of the environmental randomness is modeled as Poisson white noise. The stochastic averaging method and the perturbation method are applied to calculate the approximate stationary probability density functions for both predator and prey populations. The influences of the system parameters and the Poisson white noises are investigated in detail based on the approximate stationary probability density functions. It is found that increasing the time delay parameter, as well as the mean arrival rate and the variance of the amplitude of the Poisson white noise, enhances the fluctuations of the prey and predator populations, while a larger value of the self-competition parameter reduces the fluctuation of the system. Furthermore, results from Monte Carlo simulation are also obtained to show the effectiveness of the results from the averaging method.

  13. Tumor regression patterns in retinoblastoma

    International Nuclear Information System (INIS)

    Zafar, S.N.; Siddique, S.N.; Zaheer, N.

    2016-01-01

    To observe the types of tumor regression after treatment, and identify the common pattern of regression in our patients. Study Design: Descriptive study. Place and Duration of Study: Department of Pediatric Ophthalmology and Strabismus, Al-Shifa Trust Eye Hospital, Rawalpindi, Pakistan, from October 2011 to October 2014. Methodology: Children with unilateral and bilateral retinoblastoma were included in the study. Patients were referred to Pakistan Institute of Medical Sciences, Islamabad, for chemotherapy. After every cycle of chemotherapy, dilated fundus examination under anesthesia was performed to record the response to treatment. Regression patterns were recorded on RetCam II. Results: Seventy-four tumors were included in the study. Out of 74 tumors, 3 were ICRB group A tumors, 43 were ICRB group B tumors, 14 tumors belonged to ICRB group C, and the remaining 14 were ICRB group D tumors. Type IV regression was seen in 39.1% (n=29) tumors, type II in 29.7% (n=22), type III in 25.6% (n=19), and type I in 5.4% (n=4). All group A tumors (100%) showed type IV regression. Seventeen (39.5%) group B tumors showed type IV regression. In group C, 5 tumors (35.7%) showed type II regression and 5 tumors (35.7%) showed type IV regression. In group D, 6 tumors (42.9%) regressed to type II non-calcified remnants. Conclusion: The response and success of the focal and systemic treatment, as judged by the appearance of different patterns of tumor regression, varies with the ICRB grouping of the tumor. (author)

  14. Model building in nonproportional hazard regression.

    Science.gov (United States)

    Rodríguez-Girondo, Mar; Kneib, Thomas; Cadarso-Suárez, Carmen; Abu-Assi, Emad

    2013-12-30

    Recent developments of statistical methods allow for a very flexible modeling of covariates affecting survival times via the hazard rate, including also the inspection of possible time-dependent associations. Despite their immediate appeal in terms of flexibility, these models typically introduce additional difficulties when a subset of covariates and the corresponding modeling alternatives have to be chosen, that is, for building the most suitable model for given data. This is particularly true when potentially time-varying associations are given. We propose to use a piecewise exponential representation of the original survival data to link hazard regression with estimation schemes based on the Poisson likelihood, to make recent advances for model building in exponential family regression accessible also in the nonproportional hazard regression context. A two-stage stepwise selection approach, an approach based on doubly penalized likelihood, and a componentwise functional gradient descent approach are adapted to the piecewise exponential regression problem. These three techniques were compared via an intensive simulation study. An application to prognosis after discharge for patients who suffered a myocardial infarction supplements the simulation to demonstrate the pros and cons of the approaches in real data analyses. Copyright © 2013 John Wiley & Sons, Ltd.
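
    A minimal sketch of the piecewise exponential representation mentioned above: follow-up time is split at a few cutpoints, each subject-interval record carries an event indicator and its person-time, and a Poisson GLM with a log person-time offset recovers the (piecewise constant) hazard regression. The data, cutpoints, and covariate effect are simulated and illustrative; this is not the authors' model-building machinery.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)

        # Simulated survival data (illustrative): a binary covariate doubling the hazard
        n = 500
        x = rng.integers(0, 2, n)
        time = rng.exponential(1.0 / (0.1 * 2.0 ** x))
        event = (time < 5.0).astype(int)          # administrative censoring at t = 5
        time = np.minimum(time, 5.0)

        # Piecewise exponential expansion: one record per subject and time interval
        cuts = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
        rows = []
        for ti, ei, xi in zip(time, event, x):
            for lo, hi in zip(cuts[:-1], cuts[1:]):
                if ti <= lo:
                    break
                exposure = min(ti, hi) - lo                 # person-time spent in this interval
                died = int(ei == 1 and lo < ti <= hi)       # does the event fall in this interval?
                rows.append((died, exposure, xi, lo))
        rows = np.array(rows)
        y, exposure, xcov, interval = rows[:, 0], rows[:, 1], rows[:, 2], rows[:, 3]

        # Poisson likelihood with log person-time offset; interval dummies give the baseline hazard
        X = np.column_stack([xcov] + [(interval == c).astype(float) for c in cuts[:-1]])
        fit = sm.GLM(y, X, family=sm.families.Poisson(), offset=np.log(exposure)).fit()
        print("log hazard ratio for x:", fit.params[0])      # expect roughly log(2)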

  15. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, using Xlisp-Stat language called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava

  16. Alternative Methods of Regression

    CERN Document Server

    Birkes, David

    2011-01-01

    Of related interest: Nonlinear Regression Analysis and its Applications, Douglas M. Bates and Donald G. Watts. "...an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models...highly recommend[ed]...for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s

  17. Transient finite element analysis of electric double layer using Nernst-Planck-Poisson equations with a modified Stern layer.

    Science.gov (United States)

    Lim, Jongil; Whitcomb, John; Boyd, James; Varghese, Julian

    2007-01-01

    A finite element implementation of the transient nonlinear Nernst-Planck-Poisson (NPP) and Nernst-Planck-Poisson-modified Stern (NPPMS) models is presented. The NPPMS model uses multipoint constraints to account for finite ion size, resulting in realistic ion concentrations even at high surface potential. The Poisson-Boltzmann equation is used to provide a limited check of the transient models for low surface potential and dilute bulk solutions. The effects of the surface potential and bulk molarity on the electric potential and ion concentrations as functions of space and time are studied. The ability of the models to predict realistic energy storage capacity is investigated. The predicted energy is much more sensitive to surface potential than to bulk solution molarity.

  18. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.

  19. Error propagation dynamics of PIV-based pressure field calculations: How well does the pressure Poisson solver perform inherently?

    International Nuclear Information System (INIS)

    Pan, Zhao; Thomson, Scott; Whitehead, Jared; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. (paper)

  20. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. PMID:27499587

  1. Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.

    Science.gov (United States)

    Mao, Tianqi; Wang, Zhaocheng; Wang, Qi

    2017-01-23

    The single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, the existing literature only deals with a simplified channel model, which only considers the effects of Poisson noise introduced by the SPAD, but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude-modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.

  2. Extremal Properties of an Intermittent Poisson Process Generating 1/f Noise

    Science.gov (United States)

    Grüneis, Ferdinand

    2016-08-01

    It is well-known that the total power of a signal exhibiting a pure 1/f shape is divergent. This phenomenon is also called the infrared catastrophe. Mandelbrot claims that the infrared catastrophe can be overcome by stochastic processes which alternate between active and quiescent states. We investigate an intermittent Poisson process (IPP) which belongs to the family of stochastic processes suggested by Mandelbrot. During the intermission δ (quiescent period) the signal is zero. The active period is divided into random intervals of mean length τ0 consisting of a fluctuating number of events; this is giving rise to so-called clusters. The advantage of our treatment is that the spectral features of the IPP can be derived analytically. Our considerations are focused on the case that intermission is only a small disturbance of the Poisson process, i.e., to the case that δ ≤ τ0. This makes it difficult or even impossible to discriminate a spike train of such an IPP from that of a Poisson process. We investigate the conditions under which a 1/f spectrum can be observed. It is shown that 1/f noise generated by the IPP is accompanied with extreme variance. In agreement with the considerations of Mandelbrot, the IPP avoids the infrared catastrophe. Spectral analysis of the simulated IPP confirms our theoretical results. The IPP is a model for an almost random walk generating both white and 1/f noise and can be applied for an interpretation of 1/f noise in metallic resistors.

  3. Ridge regression revisited

    NARCIS (Netherlands)

    P.M.C. de Boer (Paul); C.M. Hafner (Christian)

    2005-01-01

    We argue in this paper that general ridge (GR) regression implies no major complication compared with simple ridge regression. We introduce a generalization of an explicit GR estimator derived by Hemmerle and by Teekens and de Boer and show that this estimator, which is more

  4. 2D sigma models and differential Poisson algebras

    Energy Technology Data Exchange (ETDEWEB)

    Arias, Cesar [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Boulanger, Nicolas [Service de Mécanique et Gravitation, Université de Mons - UMONS,20 Place du Parc, 7000 Mons (Belgium); Laboratoire de Mathématiques et Physique Théorique,Unité Mixte de Recherche 7350 du CNRS, Fédération de Recherche 2964 Denis Poisson,Université François Rabelais, Parc de Grandmont, 37200 Tours (France); Sundell, Per [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Torres-Gomez, Alexander [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Instituto de Ciencias Físicas y Matemáticas, Universidad Austral de Chile-UACh,Valdivia (Chile)

    2015-08-18

    We construct a two-dimensional topological sigma model whose target space is endowed with a Poisson algebra for differential forms. The model consists of an equal number of bosonic and fermionic fields of worldsheet form degrees zero and one. The action is built using exterior products and derivatives, without any reference to a worldsheet metric, and is of the covariant Hamiltonian form. The equations of motion define a universally Cartan integrable system. In addition to gauge symmetries, the model has one rigid nilpotent supersymmetry corresponding to the target space de Rham operator. The rigid and local symmetries of the action, respectively, are equivalent to the Poisson bracket being compatible with the de Rham operator and obeying graded Jacobi identities. We propose that perturbative quantization of the model yields a covariantized differential star product algebra of Kontsevich type. We comment on the resemblance to the topological A model.

  5. Invariants and labels for Lie-Poisson Systems

    International Nuclear Information System (INIS)

    Thiffeault, J.L.; Morrison, P.J.

    1998-04-01

    Reduction is a process that uses symmetry to lower the order of a Hamiltonian system. The new variables in the reduced picture are often not canonical: there are no clear variables representing positions and momenta, and the Poisson bracket obtained is not of the canonical type. Specifically, we give two examples that give rise to brackets of the noncanonical Lie-Poisson form: the rigid body and the two-dimensional ideal fluid. From these simple cases, we then use the semidirect product extension of algebras to describe more complex physical systems. The Casimir invariants in these systems are examined, and some are shown to be linked to the recovery of information about the configuration of the system. We discuss a case in which the extension is not a semidirect product, namely compressible reduced MHD, and find for this case that the Casimir invariants lend partial information about the configuration of the system

  6. Reference manual for the POISSON/SUPERFISH Group of Codes

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx,dy) by finite differences (ΔX,ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.

  7. Bering's proposal for boundary contribution to the Poisson bracket

    International Nuclear Information System (INIS)

    Soloviev, V.O.

    1998-11-01

    It is shown that the Poisson bracket with boundary terms recently proposed by Bering can be deduced from the Poisson bracket proposed by the present author if one omits terms free of Euler-Lagrange derivatives ("annihilation principle"). This corresponds to another definition of the formal product of distributions (or, in other words, to another definition of the pairing between 1-forms and 1-vectors in the formal variational calculus). We extend the formula initially suggested by Bering only for the ultralocal case with constant coefficients onto the general non-ultralocal brackets with coefficients depending on fields and their spatial derivatives. The lack of invariance under changes of dependent variables (field redefinitions) seems a drawback of this proposal. (author)

  8. Abstract Expression Grammar Symbolic Regression

    Science.gov (United States)

    Korns, Michael F.

    This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, and which is faster and more accurate than the engines produced in our previous papers using Genetic Programming. The genome is an all-vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, aged layered populations, plus discrete and continuous differential evolution are used to produce an improved symbolic regression system. Nine base test cases, from the literature, are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial-strength symbolic regression systems.

  9. Regression analysis with categorized regression calibrated exposure: some interesting findings

    Directory of Open Access Journals (Sweden)

    Hjartåker Anette

    2006-07-01

    Full Text Available Abstract Background Regression calibration as a method for handling measurement error is becoming increasingly well-known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g., quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on a quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is however vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a

  10. Investigation of Random Switching Driven by a Poisson Point Process

    DEFF Research Database (Denmark)

    Simonsen, Maria; Schiøler, Henrik; Leth, John-Josef

    2015-01-01

    This paper investigates the switching mechanism of a two-dimensional switched system, when the switching events are generated by a Poisson point process. A model, in the shape of a stochastic process, for such a system is derived and the distribution of the trajectory's position is developed together with marginal density functions for the coordinate functions. Furthermore, the joint probability distribution is given explicitly.

  11. Estimating small signals by using maximum likelihood and Poisson statistics

    CERN Document Server

    Hannam, M D

    1999-01-01

    Estimation of small signals from counting experiments with backgrounds larger than signals is solved using maximum likelihood estimation for situations in which both signal and background statistics are Poissonian. Confidence levels are discussed, and Poisson, Gauss and least-squares fitting methods are compared. Efficient algorithms that estimate signal strengths and confidence levels are devised for computer implementation. Examples from simulated data and a low count rate experiment in nuclear physics are given. (author)
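
    A compact sketch of the basic setting: n counts are observed over a known expected background b, the signal MLE under λ = s + b with the physical constraint s ≥ 0 is max(0, n − b), and an approximate upper limit can be read off the profile likelihood ratio. This is only an illustration of the estimation problem, not the paper's algorithms or confidence-level construction; the numbers are made up.

        import numpy as np
        from scipy.stats import poisson, chi2

        # Observed counts and a known expected background (illustrative numbers)
        n_obs, b = 5, 3.2

        # MLE of the signal under lambda = s + b with the physical constraint s >= 0
        s_hat = max(0.0, n_obs - b)

        def loglik(s):
            return poisson.logpmf(n_obs, s + b)

        # Approximate 90% upper limit from the (one-sided) profile likelihood ratio
        threshold = chi2.ppf(0.90, df=1) / 2.0
        grid = np.linspace(0.0, 30.0, 3001)
        allowed = grid[loglik(grid) >= loglik(s_hat) - threshold]
        print(f"s_hat = {s_hat:.2f}, 90% upper limit ≈ {allowed.max():.2f}")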

  12. Events in time: Basic analysis of Poisson data

    Energy Technology Data Exchange (ETDEWEB)

    Engelhardt, M.E.

    1994-09-01

    The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model when the rate of occurrence varies randomly. Examples and SAS programs are given.
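
    For example, the standard point estimate and exact (chi-square based) confidence interval for an occurrence rate can be computed as below; the event count and exposure are illustrative.

        from scipy.stats import chi2

        def poisson_rate_ci(n_events, exposure, conf=0.95):
            """Point estimate and exact (chi-square based) confidence interval for a Poisson rate."""
            alpha = 1.0 - conf
            rate = n_events / exposure
            lower = chi2.ppf(alpha / 2, 2 * n_events) / (2 * exposure) if n_events > 0 else 0.0
            upper = chi2.ppf(1 - alpha / 2, 2 * (n_events + 1)) / (2 * exposure)
            return rate, lower, upper

        # Example: 12 events observed over 300 unit-hours of exposure (illustrative numbers)
        print(poisson_rate_ci(12, 300.0))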

  13. Zero-Inflated Poisson Modeling of Fall Risk Factors in Community-Dwelling Older Adults.

    Science.gov (United States)

    Jung, Dukyoo; Kang, Younhee; Kim, Mi Young; Ma, Rye-Won; Bhandari, Pratibha

    2016-02-01

    The aim of this study was to identify risk factors for falls among community-dwelling older adults. The study used a cross-sectional descriptive design. Self-report questionnaires were used to collect data from 658 community-dwelling older adults and were analyzed using logistic and zero-inflated Poisson (ZIP) regression. Perceived health status was a significant factor in the count model, and fall efficacy emerged as a significant predictor in the logistic models. The findings suggest that fall efficacy is important for predicting not only faller and nonfaller status but also fall counts in older adults who may or may not have experienced a previous fall. The fall predictors identified in this study--perceived health status and fall efficacy--indicate the need for fall-prevention programs tailored to address both the physical and psychological issues unique to older adults. © The Author(s) 2014.
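
    As an illustration of the ZIP model used above (not the study's actual data or covariates), the sketch below fits a zero-inflated Poisson by direct maximization of its log-likelihood, with a single simulated covariate standing in for perceived health status and a constant zero-inflation probability standing in for the logistic part.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln, expit

        rng = np.random.default_rng(5)

        # Simulated fall counts (illustrative): structural zeros plus a Poisson count process
        n = 600
        health = rng.normal(size=n)                        # e.g., a perceived health score
        lam = np.exp(0.3 - 0.5 * health)                   # Poisson mean depends on health
        p_zero = expit(-0.8)                               # constant structural-zero probability
        y = np.where(rng.random(n) < p_zero, 0, rng.poisson(lam))

        X = np.column_stack([np.ones(n), health])

        def zip_nll(theta):
            """Negative log-likelihood of a zero-inflated Poisson with a constant inflation term."""
            beta, gamma0 = theta[:2], theta[2]
            mu = np.exp(X @ beta)
            p = expit(gamma0)
            logpois = -mu + y * np.log(mu) - gammaln(y + 1)
            ll = np.where(y == 0,
                          np.log(p + (1 - p) * np.exp(-mu)),
                          np.log(1 - p) + logpois)
            return -ll.sum()

        fit = minimize(zip_nll, x0=np.zeros(3), method="BFGS")
        print("count-model coefficients:", fit.x[:2], " zero-inflation prob:", expit(fit.x[2]))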

  14. Rgbp: An R Package for Gaussian, Poisson, and Binomial Random Effects Models with Frequency Coverage Evaluations

    Directory of Open Access Journals (Sweden)

    Hyungsuk Tak

    2017-06-01

    Full Text Available Rgbp is an R package that provides estimates and verifiable confidence intervals for random effects in two-level conjugate hierarchical models for overdispersed Gaussian, Poisson, and binomial data. Rgbp models aggregate data from k independent groups summarized by observed sufficient statistics for each random effect, such as sample means, possibly with covariates. Rgbp uses approximate Bayesian machinery with unique improper priors for the hyper-parameters, which leads to good repeated sampling coverage properties for random effects. A special feature of Rgbp is an option that generates synthetic data sets to check whether the interval estimates for random effects actually meet the nominal confidence levels. Additionally, Rgbp provides inference statistics for the hyper-parameters, e.g., regression coefficients.

  15. Brain, music, and non-Poisson renewal processes

    Science.gov (United States)

    Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S.; Ross, Mary J.; Winsor, Phil; Grigolini, Paolo

    2007-06-01

    In this paper we show that both music composition and brain function, as revealed by the electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials [Ψ(t) ∝ exp(−(γt)^α), with 0.5 < α < 1]. ... music composition yield μmusic ... on the human brain.

  16. Optimal smoothing of poisson degraded nuclear medicine image data

    International Nuclear Information System (INIS)

    Hull, D.M.

    1985-01-01

    The development of a method that removes Poisson noise from nuclear medicine studies will have significant impact on the quantitative analysis and clinical reliability of these data. The primary objective of the work described in this thesis was to develop a linear, non-stationary optimal filter to reduce Poisson noise. The derived filter is automatically calculated from a large group (library) of similar patient studies representing all similarly acquired studies (the ensemble). The filter design was evaluated under controlled conditions using two computer simulated ensembles, devised to represent selected properties of real patient gated blood pool studies. Fortran programs were developed to generate libraries of Poisson degraded simulated studies for each ensemble. These libraries then were used to estimate optimal filters specific to the ensemble. Libraries of previously acquired patient gated blood pool studies then were used to estimate the optimal filters for an ensemble of similarly acquired gated blood pool studies. These filters were applied to studies of 13 patients who received multiple repeat studies at one time. Comparisons of both the filtered and raw data to averages of the repeat studies demonstrated that the optimal filters, calculated from a library of 800 studies, reduce the mean square error in the patient data by 60%. It is expected that optimally filtered gated blood pool studies will improve quantitative analysis of the data

  17. Blind beam-hardening correction from Poisson measurements

    Science.gov (United States)

    Gu, Renliang; Dogandžić, Aleksandar

    2016-02-01

    We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.

  18. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  19. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  20. Impacts of floods on dysentery in Xinxiang city, China, during 2004–2010: a time-series Poisson analysis

    Science.gov (United States)

    Ni, Wei; Ding, Guoyong; Li, Yifei; Li, Hongkai; Jiang, Baofa

    2014-01-01

    Background Xinxiang, a city in Henan Province, suffered from frequent floods due to persistent and heavy precipitation from 2004 to 2010. In the same period, dysentery was a common public health problem in Xinxiang, with the proportion of reported cases being the third highest among all the notified infectious diseases. Objectives We focused on the dysentery consequences of different degrees of floods and examined the association between floods and the morbidity of dysentery on the basis of longitudinal data during the study period. Design A time-series Poisson regression model was fitted to examine the relationship between 10 floods of different degrees and the monthly morbidity of dysentery from 2004 to 2010 in Xinxiang. Relative risks (RRs) of moderate and severe floods on the morbidity of dysentery were calculated in this paper. In addition, we estimated the attributable contributions of moderate and severe floods to the morbidity of dysentery. Results A total of 7591 cases of dysentery were notified in Xinxiang during the study period. The effect of floods on dysentery was shown with a 0-month lag. Regression analysis showed that the risk of moderate and severe floods on the morbidity of dysentery was 1.55 (95% CI: 1.42–1.670) and 1.74 (95% CI: 1.56–1.94), respectively. The attributable risk proportions (ARPs) of moderate and severe floods to the morbidity of dysentery were 35.53 and 42.48%, respectively. Conclusions This study confirms that floods have significantly increased the risk of dysentery in the study area. In addition, severe floods have a higher proportional contribution to the morbidity of dysentery than moderate floods. Public health action should be taken to avoid and control a potential risk of dysentery epidemics after floods. PMID:25098726
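
    A minimal sketch of a time-series Poisson regression of this kind (simulated monthly counts, hypothetical flood indicators and seasonal harmonics, not the Xinxiang data): exponentiated coefficients of the flood indicators play the role of the relative risks reported above.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)

        # Simulated monthly dysentery counts (illustrative), with indicators for flood months
        months = 84                                    # seven years of monthly data
        t = np.arange(months)
        flood_moderate = (rng.random(months) < 0.08).astype(float)
        flood_severe = (rng.random(months) < 0.04).astype(float)
        season = np.column_stack([np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])

        log_mu = 3.0 + 0.4 * flood_moderate + 0.55 * flood_severe + season @ [0.3, 0.1]
        cases = rng.poisson(np.exp(log_mu))

        # Time-series Poisson regression: flood indicators plus seasonal harmonics
        X = sm.add_constant(np.column_stack([flood_moderate, flood_severe, season]))
        fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
        print("RR moderate flood:", np.exp(fit.params[1]), " RR severe flood:", np.exp(fit.params[2]))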

  1. Two-part zero-inflated negative binomial regression model for quantitative trait loci mapping with count trait.

    Science.gov (United States)

    Moghimbeigi, Abbas

    2015-05-07

    Poisson regression models provide a standard framework for quantitative trait locus (QTL) mapping of count traits. In practice, however, count traits are often over-dispersed relative to the Poisson distribution. In these situations, the zero-inflated Poisson (ZIP), zero-inflated generalized Poisson (ZIGP) and zero-inflated negative binomial (ZINB) regression may be useful for QTL mapping of count traits. Genetic variables added to the negative binomial part of the model may also affect the extra-zero data. In this study, to overcome these challenges, I apply a two-part ZINB model. An EM algorithm with a Newton-Raphson method in the M-step is used for estimating the parameters. An application of the two-part ZINB model for QTL mapping is considered to detect associations between gallstone formation and the genotypes of markers. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Multilevel Methods for the Poisson-Boltzmann Equation

    Science.gov (United States)

    Holst, Michael Jay

    We consider the numerical solution of the Poisson -Boltzmann equation (PBE), a three-dimensional second order nonlinear elliptic partial differential equation arising in biophysics. This problem has several interesting features impacting numerical algorithms, including discontinuous coefficients representing material interfaces, rapid nonlinearities, and three spatial dimensions. Similar equations occur in various applications, including nuclear physics, semiconductor physics, population genetics, astrophysics, and combustion. In this thesis, we study the PBE, discretizations, and develop multilevel-based methods for approximating the solutions of these types of equations. We first outline the physical model and derive the PBE, which describes the electrostatic potential of a large complex biomolecule lying in a solvent. We next study the theoretical properties of the linearized and nonlinear PBE using standard function space methods; since this equation has not been previously studied theoretically, we provide existence and uniqueness proofs in both the linearized and nonlinear cases. We also analyze box-method discretizations of the PBE, establishing several properties of the discrete equations which are produced. In particular, we show that the discrete nonlinear problem is well-posed. We study and develop linear multilevel methods for interface problems, based on algebraic enforcement of Galerkin or variational conditions, and on coefficient averaging procedures. Using a stencil calculus, we show that in certain simplified cases the two approaches are equivalent, with different averaging procedures corresponding to different prolongation operators. We also develop methods for nonlinear problems based on a nonlinear multilevel method, and on linear multilevel methods combined with a globally convergent damped-inexact-Newton method. We derive a necessary and sufficient descent condition for the inexact-Newton direction, enabling the development of extremely

  3. Multiple Linear Regression

    Science.gov (United States)

    Grégoire, G.

    2014-12-01

    This chapter deals with multiple linear regression; that is, we investigate the situation where the mean of a variable depends linearly on a set of covariables. The noise is assumed to be Gaussian. We develop the least squares method to obtain the parameter estimators and estimates of their precision. This leads to the design of confidence intervals, prediction intervals, global tests, individual tests and, more generally, tests of submodels defined by linear constraints. Methods for model choice and variable selection, measures of the quality of the fit, residual analysis, and diagnostic methods are presented. Finally, identification of departures from the model's assumptions and ways to deal with these problems are addressed. A real data set is used to illustrate the methodology with the software R. Note that this chapter is intended to serve as a guide for other regression methods, such as logistic regression, AFT models and Cox regression.

  4. Glyph: Symbolic Regression Tools

    OpenAIRE

    Quade, Markus; Gout, Julien; Abel, Markus

    2018-01-01

    We present Glyph - a Python package for genetic programming based symbolic regression. Glyph is designed for use both with numerical simulations and with real-world experiments. For experimentalists, glyph-remote provides a separation of tasks: a ZeroMQ interface splits the genetic programming optimization task from the evaluation of an experimental (or numerical) run. Glyph can be accessed at http://github.com/ambrosys/glyph . Domain experts are able to employ symbolic regression in their ex...

  5. Action-angle variables and a KAM theorem for b-Poisson manifolds

    OpenAIRE

    Kiesenhofer, Anna; Miranda Galcerán, Eva; Scott, Geoffrey

    2015-01-01

    In this article we prove an action-angle theorem for b-integrable systems on b-Poisson manifolds improving the action-angle theorem contained in [14] for general Poisson manifolds in this setting. As an application, we prove a KAM-type theorem for b-Poisson manifolds. (C) 2015 Elsevier Masson SAS. All rights reserved.

  6. A Raikov-Type Theorem for Radial Poisson Distributions: A Proof of Kingman's Conjecture

    OpenAIRE

    Van Nguyen, Thu

    2011-01-01

    In the present paper we prove the following conjecture from Kingman, J.F.C., Random walks with spherical symmetry, Acta Math., 109 (1963), 11-53, concerning Raikov's famous theorem on the decomposition of Poisson random variables: "If a radial sum of two independent random variables X and Y is radial Poisson, then each of them must be radial Poisson."

  7. A comparison of Poisson-one-inflated power series distributions for ...

    African Journals Online (AJOL)

    A class of Poisson-one-inflated power series distributions (the binomial, the Poisson, the negative binomial, the geometric, the log-series and the misrecorded Poisson) are proposed for modeling rural out-migration at the household level. The probability mass functions of the mixture distributions are derived and fitted to the ...

  8. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
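    For readers without access to G*Power, the power of a Poisson-regression slope test can also be approximated by simulation, as in the hedged sketch below; the slope, sample size and alpha are arbitrary illustrative choices, not values from the paper.

```python
# Simulation-based power estimate for a Poisson-regression Wald test on the slope.
import numpy as np
import statsmodels.api as sm

def poisson_power(beta1=0.3, n=200, alpha=0.05, n_sim=500, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        y = rng.poisson(np.exp(0.5 + beta1 * x))
        fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Poisson()).fit()
        rejections += fit.pvalues[1] < alpha      # reject H0: beta1 = 0 ?
    return rejections / n_sim

print("estimated power:", poisson_power())
```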

  9. Poisson's ratio analysis (Vp/Vs) on volcanoes and geothermal potential areas in Central Java using tomography travel time method of grid search relocation hypocenter

    International Nuclear Information System (INIS)

    Raharjo, W.; Palupi, I. R.; Nurdian, S. W.; Giamboro, W. S.; Soesilo, J.

    2016-01-01

    Poisson's ratio characterizes the elastic properties of a rock. Its value is determined by the ratio between the P and S wave velocities, where a high ratio is associated with partial melting while a low ratio is associated with gas-saturated rock. Java, which has many volcanoes as a result of the collision between the Australian and Eurasian plates, also experiences earthquakes that generate P and S waves. Tomography techniques allow the distribution of Poisson's ratio to be mapped. Western Java is dominated by high Poisson's ratio up to Mount Slamet and Dieng in Central Java, while the eastern part of Java is dominated by low Poisson's ratio. The transition in Poisson's ratio is located in Central Java, which is also supported by the differing characteristics of hot-water manifestations in geothermal potential areas in the west and east of Central Java Province. Poisson's ratio also decreases with increasing depth, indicating the entrance of the cold oceanic plate under the continental plate. (paper)

  10. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
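    The key idea, representing 2-D vectors as complex numbers so that the regression coefficients themselves come out as vectors, can be sketched in a few lines. The example below uses simulated data and solves the complex least-squares problem directly rather than through the real isomorphism described in the abstract.

```python
# Complex-valued (vector) regression on simulated data; illustration only.
import numpy as np

rng = np.random.default_rng(2)
n = 50
u = rng.normal(size=n) + 1j * rng.normal(size=n)          # vector predictor 1
v = rng.normal(size=n) + 1j * rng.normal(size=n)          # vector predictor 2
true_b = np.array([1.0 + 0.5j, 0.3 - 0.2j, -0.7 + 1.1j])  # complex (vector) coefficients

Z = np.column_stack([np.ones(n), u, v])                    # design matrix with intercept
w = Z @ true_b + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))

b_hat, *_ = np.linalg.lstsq(Z, w, rcond=None)              # complex least squares
print("estimated vector coefficients:", np.round(b_hat, 3))
```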

  11. Minimax Regression Quantiles

    DEFF Research Database (Denmark)

    Bache, Stefan Holst

    A new and alternative quantile regression estimator is developed and it is shown that the estimator is root n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice, but whether it has theoretical justification is still an open question.

  12. Practical Session: Logistic Regression

    Science.gov (United States)

    Clausel, M.; Grégoire, G.

    2014-12-01

    An exercise is proposed to illustrate the logistic regression. One investigates the different risk factors in the apparition of coronary heart disease. It has been proposed in Chapter 5 of the book of D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science Business Media, LLC (2010) and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.

  13. Bases chimiosensorielles du comportement alimentaire chez les poissons

    Directory of Open Access Journals (Sweden)

    SAGLIO Ph.

    1981-07-01

    Full Text Available Feeding behavior, which is indispensable to the survival of the individual and hence of the species, occupies a position of prime importance in the hierarchy of fundamental behaviors, all of which depend on it very closely. In fish, this pre-eminence is illustrated by the extreme diversity of the sensory channels involved and of the behavioral expressions linked to them. Following a number of neurophysiological and ethological demonstrations of the importance of the chemical senses (olfaction, gustation) in the feeding behavior of fish, very substantial fields of electrophysiological study and physico-chemical analysis aimed at determining its exact nature (in terms of active substances) have developed over the last twenty years. From this work, the most advanced of which is presented here, it emerges that L-series amino acids, more or less associated with other compounds of molecular weight < 1000, are chemical compounds that play a determining role in the feeding behavior of many carnivorous fish species.

  14. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'

    DEFF Research Database (Denmark)

    de Nijs, Robin

    2015-01-01

    In this comment, the results of the original note are examined by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images. It simulates correctly the statistical properties...
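    The distinction discussed here is easy to reproduce numerically: thinning the measured counts with a binomial draw keeps them Poisson, whereas redrawing from a Poisson with half the measured mean adds variance. The numpy sketch below is my own illustration of that point, not the Matlab code of the comment.

```python
# Binomial thinning ("Poisson resampling") versus Poisson redrawing.
import numpy as np

rng = np.random.default_rng(3)
full = rng.poisson(lam=50.0, size=(128, 128))          # simulated full-count image

half_resampled = rng.binomial(full, 0.5)               # thinning: result stays Poisson
half_redrawn = rng.poisson(full / 2.0)                 # redrawing from half the measured mean

for name, img in [("resampled", half_resampled), ("redrawn", half_redrawn)]:
    print(name,
          "mean ratio:", round(img.mean() / full.mean(), 3),
          "variance ratio:", round(img.var() / full.var(), 3))
```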

  15. Reduction of Poisson noise in measured time-resolved data for time-domain diffuse optical tomography.

    Science.gov (United States)

    Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y

    2012-01-01

    A method to reduce noise for time-domain diffuse optical tomography (DOT) is proposed. Poisson noise which contaminates time-resolved photon counting data is reduced by use of maximum a posteriori estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed as Poisson distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing the probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performances of the Poisson noise reduction are demonstrated in some experiments of the image reconstruction of time-domain DOT. In simulations, the proposed method reduces the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image was smoothed by the proposed noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening etc., is improved by the proposed noise reduction.

  16. A Finite Element Procedure with Poisson Iteration Method Adopting Pattern Approach Technique for Near-Incompressible Rubber Problems

    Directory of Open Access Journals (Sweden)

    Young-Doo Kwon

    2014-08-01

    Full Text Available A finite element procedure is presented for the analysis of rubber-like hyperelastic materials. The volumetric incompressibility condition of rubber deformation is included in the formulation using the penalty method, while the principle of virtual work is used to derive a nonlinear finite element equation for the large displacement problem that is presented in a total-Lagrangian description. The behavior of rubber deformation is represented by hyperelastic constitutive relations based on a generalized Mooney-Rivlin model. The proposed finite element procedure using analytic differentiation exhibited results that matched very well with those from the well-known commercial packages NISA II and ABAQUS. Furthermore, the convergence of the equilibrium iteration is quite slow or frequently fails in the case of near-incompressible rubber. To prevent this phenomenon even when Poisson's ratio is very close to 0.5, a Poisson's ratio of 0.49000 is used first to obtain an approximate solution without any difficulty; the applied load is then maintained and Poisson's ratio is increased to 0.49999, following a proposed pattern and adopting a relaxation technique that monitors the convergence rate. For a given Poisson's ratio near 0.5, this approach considerably reduces the number of substeps.

  17. On population size estimators in the Poisson mixture model.

    Science.gov (United States)

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
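    The Chao lower bound mentioned in the abstract has a particularly simple form, N-hat = S_obs + f1^2 / (2*f2), where f1 and f2 are the numbers of individuals seen exactly once and exactly twice. The sketch below computes it on made-up single-list count data; the bias-corrected fallback for f2 = 0 is one common convention, not necessarily the variant compared in the paper.

```python
# Chao lower-bound estimator of population size from a single list of counts.
import numpy as np

counts = np.array([1, 1, 1, 1, 2, 1, 3, 2, 1, 1, 2, 1, 4, 1, 1])  # toy data
n_observed = len(counts)
f1 = int(np.sum(counts == 1))     # individuals observed exactly once
f2 = int(np.sum(counts == 2))     # individuals observed exactly twice

if f2 > 0:
    chao = n_observed + f1 ** 2 / (2 * f2)
else:
    chao = n_observed + f1 * (f1 - 1) / 2       # bias-corrected form when f2 = 0
print("observed:", n_observed, "Chao lower bound:", round(chao, 1))
```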

  18. Team behaviour analysis in sports using the poisson equation

    OpenAIRE

    Direkoglu, Cem; O'Connor, Noel E.

    2012-01-01

    We propose a novel physics-based model for analysing team players’ positions and movements on a sports playing field. The goal is to detect for each frame the region with the highest population of a given team’s players and the region towards which the team is moving as they press for territorial advancement, termed the region of intent. Given the positions of team players from a plan view of the playing field at any given time, we solve a particular Poisson equation to generate a smooth di...

  19. An approach to numerically solving the Poisson equation

    Science.gov (United States)

    Feng, Zhichen; Sheng, Zheng-Mao

    2015-06-01

    We introduce an approach for numerically solving the Poisson equation by using a physical model, which is a way to solve a partial differential equation without the finite difference method. This method is especially useful for obtaining the solutions in very many free-charge neutral systems with open boundary conditions. It can be used for arbitrary geometry and mesh style and is more efficient comparing with the widely-used iterative algorithm with multigrid methods. It is especially suitable for parallel computing. This method can also be applied to numerically solving other partial differential equations whose Green functions exist in analytic expression.
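    The abstract does not spell out the algorithm, but the open-boundary setting it targets is the one where the free-space Green's function applies, so the potential of a neutral charge distribution is a superposition of 1/(4*pi*|r - r'|) terms (in units where the dielectric prefactor is absorbed). The sketch below only illustrates that textbook superposition, not the authors' physical-model method.

```python
# Superposition of free-space Green's functions for an open-boundary Poisson problem.
import numpy as np

charges = np.array([1.0, -1.0])                          # a neutral pair (toy units)
positions = np.array([[0.0, 0.0, 0.5],
                      [0.0, 0.0, -0.5]])

def potential(r, charges, positions):
    """Potential at point r from point charges, using G(r, r') = 1/(4*pi*|r - r'|)."""
    d = np.linalg.norm(r - positions, axis=1)
    return float(np.sum(charges / (4.0 * np.pi * d)))

print(potential(np.array([1.0, 0.0, 0.0]), charges, positions))
```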

  20. Large Time Behavior of the Vlasov-Poisson-Boltzmann System

    Directory of Open Access Journals (Sweden)

    Li Li

    2013-01-01

    Full Text Available The motion of dilute charged particles can be modeled by the Vlasov-Poisson-Boltzmann (VPB) system. We study the large time stability of the VPB system. To be precise, we prove that when time goes to infinity, the solution of the VPB system tends to the global Maxwellian state at a rate O(t^{-∞}), by using a method developed for the Boltzmann equation without force in the work of Desvillettes and Villani (2005). The improvement of the present paper is the removal of the condition on the parameter λ imposed in the work of Li (2008).

  1. Supersymmetric quantum corrections and Poisson-Lie T-duality

    International Nuclear Information System (INIS)

    Assaoui, F.; Lhallabi, T.; Abdus Salam International Centre for Theoretical Physics, Trieste

    2000-07-01

    The quantum actions of the (4,4) supersymmetric non-linear sigma model and its dual in the Abelian case are constructed by using the background superfield method. The propagators of the quantum superfield and its dual and the gauge fixing actions of the original and dual (4,4) supersymmetric sigma models are determined. On the other hand, the BRST transformations are used to obtain the quantum dual action of the (4,4) supersymmetric nonlinear sigma model in the sense of Poisson-Lie T-duality. (author)

  2. Ruin probabilities for a regenerative Poisson gap generated risk process

    DEFF Research Database (Denmark)

    Asmussen, Søren; Biard, Romain

    A risk process with constant premium rate c and Poisson arrivals of claims is considered. A threshold r is defined for claim interarrival times, such that if k consecutive interarrival times are larger than r, then the next claim has distribution G; otherwise, the claim size distribution is F. Asymptotic expressions for the infinite-horizon ruin probabilities are given for both the light- and the heavy-tailed case. A basic observation is that the process regenerates at each G-claim. An approach via Markov additive processes is also outlined, and heuristics are given for the distribution of the time...

  3. Standard Test Method for Determining Poisson's Ratio of Honeycomb Cores

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 This test method covers the determination of the honeycomb Poisson's ratio from the anticlastic curvature radii. 1.2 The values stated in SI units are to be regarded as the standard. The inch-pound units given may be approximate. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  4. Maslov indices, Poisson brackets, and singular differential forms

    Science.gov (United States)

    Esterlis, I.; Haggard, H. M.; Hedeman, A.; Littlejohn, R. G.

    2014-06-01

    Maslov indices are integers that appear in semiclassical wave functions and quantization conditions. They are often notoriously difficult to compute. We present methods of computing the Maslov index that rely only on typically elementary Poisson brackets and simple linear algebra. We also present a singular differential form, whose integral along a curve gives the Maslov index of that curve. The form is closed but not exact, and transforms by an exact differential under canonical transformations. We illustrate the method with the 6j-symbol, which is important in angular-momentum theory and in quantum gravity.

  5. Gap processing for adaptive maximal poisson-disk sampling

    KAUST Repository

    Yan, Dongming

    2013-10-17

    In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed. We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM.

  6. Accounting for Zero Inflation of Mussel Parasite Counts Using Discrete Regression Models

    Directory of Open Access Journals (Sweden)

    Emel Çankaya

    2017-06-01

    Full Text Available In many ecological applications, the absences of species are inevitable due to either detection faults in samples or uninhabitable conditions for their existence, resulting in a high number of zero counts or abundances. The usual practice for modelling such data is regression modelling of log(abundance+1), and it is well known that the resulting model is inadequate for prediction purposes. Newer discrete models accounting for zero abundances, namely zero-inflated regression (ZIP and ZINB), Hurdle-Poisson (HP) and Hurdle-Negative Binomial (HNB) models, amongst others, are widely preferred to the classical regression models. Because mussels are one of the most economically important aquatic products of Turkey, the purpose of this study is to examine the performances of these four models in determining the significant biotic and abiotic factors affecting the occurrence of the Nematopsis legeri parasite, which harms Mediterranean mussels (Mytilus galloprovincialis L.). The data collected from three coastal regions of Sinop city in Turkey showed that, on average, more than 50% of parasite counts are zero-valued; model comparisons were based on information criteria. The results showed that the probability of occurrence of this parasite is best formulated by the ZINB or HNB models, and the influential factors of the models were found to correspond with the ecological differences of the regions.
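    The model-comparison step described above can be mimicked with statsmodels, which provides Poisson, ZIP and ZINB count models (hurdle models are less universally available, so they are omitted here). The data below are simulated with excess zeros purely to show how information criteria are compared; none of it reflects the Sinop mussel data.

```python
# Comparing plain Poisson, ZIP and ZINB fits by AIC on simulated zero-heavy counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
temp = rng.normal(20, 3, n)                     # hypothetical abiotic covariate
X = sm.add_constant(temp)

y = rng.poisson(np.exp(-1.0 + 0.08 * temp))
y[rng.random(n) < 0.5] = 0                      # inject excess zeros

models = {
    "Poisson": sm.Poisson(y, X).fit(disp=False),
    "ZIP": sm.ZeroInflatedPoisson(y, X, exog_infl=X).fit(disp=False),
    "ZINB": sm.ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, p=2).fit(disp=False),
}
for name, res in models.items():
    print(name, "AIC:", round(res.aic, 1))      # lower AIC = preferred model
```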

  7. Poisson Stochastic Process and Basic Schauder and Sobolev Estimates in the Theory of Parabolic Equations

    Science.gov (United States)

    Krylov, N. V.; Priola, E.

    2017-09-01

    We show, among other things, how knowing Schauder or Sobolev-space estimates for the one-dimensional heat equation allows one to derive their multidimensional analogs for equations with coefficients depending only on the time variable with the same constants as in the case of the one-dimensional heat equation. The method is quite general and is based on using the Poisson stochastic process. It also applies to equations involving non-local operators. It looks like no other methods are available at this time and it is a very challenging problem to find a purely analytical approach to proving such results.

  8. The Stochastic stability of a Logistic model with Poisson white noise

    International Nuclear Information System (INIS)

    Duan Dong-Hai; Xu Wei; Zhou Bing-Chang; Su Jun

    2011-01-01

    The stochastic stability of a logistic model subjected to the effect of a random natural environment, modeled as Poisson white noise process, is investigated. The properties of the stochastic response are discussed for calculating the Lyapunov exponent, which had proven to be the most useful diagnostic tool for the stability of dynamical systems. The generalised Itô differentiation formula is used to analyse the stochastic stability of the response. The results indicate that the stability of the response is related to the intensity and amplitude distribution of the environment noise and the growth rate of the species. (general)

  9. The Stochastic stability of a Logistic model with Poisson white noise

    Science.gov (United States)

    Duan, Dong-Hai; Xu, Wei; Su, Jun; Zhou, Bing-Chang

    2011-03-01

    The stochastic stability of a logistic model subjected to the effect of a random natural environment, modeled as Poisson white noise process, is investigated. The properties of the stochastic response are discussed for calculating the Lyapunov exponent, which had proven to be the most useful diagnostic tool for the stability of dynamical systems. The generalised Itô differentiation formula is used to analyse the stochastic stability of the response. The results indicate that the stability of the response is related to the intensity and amplitude distribution of the environment noise and the growth rate of the species. Project supported by the National Natural Science Foundation of China (Grant Nos. 10872165 and 10932009).

  10. Mean-square filter design for stochastic polynomial systems with Gaussian and Poisson noises

    Science.gov (United States)

    Basin, Michael; Rodriguez-Ramirez, Pablo

    2014-07-01

    This paper addresses the mean-square finite-dimensional filtering problem for polynomial system states with both Gaussian and Poisson white noises over linear observations. A constructive procedure is established to design the mean-square filtering equations for system states described by polynomial equations of an arbitrary finite degree. An explicit closed form of the designed filter is obtained in the case of a third-order polynomial system. The theoretical result is complemented with an illustrative example verifying the performance of the designed filter.

  11. Software Regression Verification

    Science.gov (United States)

    2013-12-11

    of recursive procedures. Acta Informatica, 45(6):403–439, 2008. [GS11] Benny Godlin and Ofer Strichman. Regression verification. Technical Report... functions. Therefore, we need to redefine m-term. – Mutual termination. If either function f or function f′ (or both) is non-deterministic, then their

  12. Multiple linear regression analysis

    Science.gov (United States)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.

  13. Mechanisms of neuroblastoma regression

    Science.gov (United States)

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  14. Bounded Gaussian process regression

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan

    2013-01-01

    We extend the Gaussian process (GP) framework for bounded regression by introducing two bounded likelihood functions that model the noise on the dependent variable explicitly. This is fundamentally different from the implicit noise assumption in the previously suggested warped GP framework. We...

  15. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an

  16. Application of a Weighted Regression Model for Reporting Nutrient and Sediment Concentrations, Fluxes, and Trends in Concentration and Flux for the Chesapeake Bay Nontidal Water-Quality Monitoring Network, Results Through Water Year 2012

    Science.gov (United States)

    Chanat, Jeffrey G.; Moyer, Douglas L.; Blomquist, Joel D.; Hyer, Kenneth E.; Langland, Michael J.

    2016-01-13

    In the Chesapeake Bay watershed, estimated fluxes of nutrients and sediment from the bay’s nontidal tributaries into the estuary are the foundation of decision making to meet reductions prescribed by the Chesapeake Bay Total Maximum Daily Load (TMDL) and are often the basis for refining scientific understanding of the watershed-scale processes that influence the delivery of these constituents to the bay. Two regression-based flux and trend estimation models, ESTIMATOR and Weighted Regressions on Time, Discharge, and Season (WRTDS), were compared using data from 80 watersheds in the Chesapeake Bay Nontidal Water-Quality Monitoring Network (CBNTN). The watersheds range in size from 62 to 70,189 square kilometers and record lengths range from 6 to 28 years. ESTIMATOR is a constant-parameter model that estimates trends only in concentration; WRTDS uses variable parameters estimated with weighted regression, and estimates trends in both concentration and flux. WRTDS had greater explanatory power than ESTIMATOR, with the greatest degree of improvement evident for records longer than 25 years (30 stations; improvement in median model R2= 0.06 for total nitrogen, 0.08 for total phosphorus, and 0.05 for sediment) and the least degree of improvement for records of less than 10 years, for which the two models performed nearly equally. Flux bias statistics were comparable or lower (more favorable) for WRTDS for any record length; for 30 stations with records longer than 25 years, the greatest degree of improvement was evident for sediment (decrease of 0.17 in median statistic) and total phosphorus (decrease of 0.05). The overall between-station pattern in concentration trend direction and magnitude for all constituents was roughly similar for both models. A detailed case study revealed that trends in concentration estimated by WRTDS can operationally be viewed as a less-constrained equivalent to trends in concentration estimated by ESTIMATOR. Estimates of annual mean flow

  17. A modified Poisson-Boltzmann equation applied to protein adsorption.

    Science.gov (United States)

    Gama, Marlon de Souza; Santos, Mirella Simões; Lima, Eduardo Rocha de Almeida; Tavares, Frederico Wanderley; Barreto, Amaro Gomes Barreto

    2018-01-05

    Ion-exchange chromatography has been widely used as a standard process in the purification and analysis of proteins, based on the electrostatic interaction between the protein and the stationary phase. Over the years, several approaches have been used to improve the thermodynamic description of colloidal particle-surface interaction systems; however, there are still many gaps, specifically in describing the behavior of protein adsorption. Here, we present an improved methodology for predicting the adsorption equilibrium constant by solving the modified Poisson-Boltzmann (PB) equation in bispherical coordinates. By including dispersion interactions between ions and protein, and between ions and surface, the modified PB equation used can describe the Hofmeister effects. We solve the modified Poisson-Boltzmann equation to calculate the protein-surface potential of mean force, treated as a spherical colloid-plate system, as a function of process variables. From the potential of mean force, the Henry constants of adsorption, for different proteins and surfaces, are calculated as a function of pH, salt concentration, salt type, and temperature. The obtained Henry constants are compared with experimental data for several isotherms, showing excellent agreement. We have also performed a sensitivity analysis to verify the behavior of different kinds of salts and the Hofmeister effects. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Polyelectrolyte Microcapsules: Ion Distributions from a Poisson-Boltzmann Model

    Science.gov (United States)

    Tang, Qiyun; Denton, Alan R.; Rozairo, Damith; Croll, Andrew B.

    2014-03-01

    Recent experiments have shown that polystyrene-polyacrylic-acid-polystyrene (PS-PAA-PS) triblock copolymers in a solvent mixture of water and toluene can self-assemble into spherical microcapsules. Suspended in water, the microcapsules have a toluene core surrounded by an elastomer triblock shell. The longer, hydrophilic PAA blocks remain near the outer surface of the shell, becoming charged through dissociation of OH functional groups in water, while the shorter, hydrophobic PS blocks form a networked (glass or gel) structure. Within a mean-field Poisson-Boltzmann theory, we model these polyelectrolyte microcapsules as spherical charged shells, assuming different dielectric constants inside and outside the capsule. By numerically solving the nonlinear Poisson-Boltzmann equation, we calculate the radial distribution of anions and cations and the osmotic pressure within the shell as a function of salt concentration. Our predictions, which can be tested by comparison with experiments, may guide the design of microcapsules for practical applications, such as drug delivery. This work was supported by the National Science Foundation under Grant No. DMR-1106331.

  19. Long-term response of total ozone content at different latitudes of the Northern and Southern Hemispheres caused by solar activity during 1958-2006 (results of regression analysis)

    Science.gov (United States)

    Krivolutsky, Alexei A.; Nazarova, Margarita; Knyazeva, Galina

    Solar activity influences the atmospheric photochemical system via its changeable electromagnetic flux with an eleven-year period and also by energetic particles during solar proton events (SPEs). Energetic particles penetrate mostly into polar regions and induce additional production of NOx and HOx chemical compounds, which can destroy ozone in photochemical catalytic cycles. Solar irradiance variations cause in-phase variability of ozone in accordance with photochemical theory. However, the real ozone response caused by these two factors, which have different physical natures, is not so clear on a long-term time scale. In order to understand the situation, a multiple linear regression statistical method was used. Three data series, which covered the period 1958-2006, have been used for this analysis: yearly averaged total ozone at different latitudes (World Ozone Data Centre, Canada, WMO); yearly averaged proton fluxes with E ≥ 10 MeV (IMP, GOES, METEOR satellites); and yearly averaged numbers of solar spots (Solar Data). Then, before the analysis, the data sets of ozone deviations from the mean values for the whole period (1958-2006) at each latitudinal belt were prepared. The results of the multiple regression analysis (two factors) revealed rather complicated time-dependent behavior of the ozone response, with clear negative peaks for the years of strong SPEs. The magnitudes of such peaks on an annual mean basis are not greater than 10 DU. An unusual effect, a positive response of ozone to solar proton activity near both poles, was discovered by the statistical analysis. The possible photochemical nature of the found effect is discussed. This work was supported by Russian Science Foundation for Basic Research (grant 09-05-009949) and by the contract 1-6-08 under Russian Sub-Program "Research and Investigation of Antarctica".

  20. Subset selection in regression

    CERN Document Server

    Miller, Alan

    2002-01-01

    Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the Second Edition: a separate chapter on Bayesian methods; a complete revision of the chapter on estimation; a major example from the field of near infrared spectroscopy; more emphasis on cross-validation; greater focus on bootstrapping; stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible; software available on the Internet for implementing many of the algorithms presented; and more examples. Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...

  1. Ridge Regression Signal Processing

    Science.gov (United States)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.

  2. Regression in organizational leadership.

    Science.gov (United States)

    Kernberg, O F

    1979-02-01

    The choice of good leaders is a major task for all organizations. Information regarding the prospective administrator's personality should complement questions regarding his previous experience, his general conceptual skills, his technical knowledge, and the specific skills in the area for which he is being selected. The growing psychoanalytic knowledge about the crucial importance of internal, in contrast to external, object relations, and about the mutual relationships of regression in individuals and in groups, constitutes an important practical tool for the selection of leaders.

  3. Classification and regression trees

    CERN Document Server

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

  4. Aid and growth regressions

    DEFF Research Database (Denmark)

    Hansen, Henrik; Tarp, Finn

    2001-01-01

    There are, however, decreasing returns to aid, and the estimated effectiveness of aid is highly sensitive to the choice of estimator and the set of control variables. When investment and human capital are controlled for, no positive effect of aid is found. Yet, aid continues to impact on growth via investment. We conclude by stressing the need for more theoretical work before this kind of cross-country regression is used for policy purposes.

  5. Logistic regression models

    CERN Document Server

    Hilbe, Joseph M

    2009-01-01

    This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect-great clarity.The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted so that the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … .-Annette J. Dobson, Biometric...

  6. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by... functionals. The software presented here is implemented in the riskRegression package.

  7. Data Qualification and Data Summary Report: Intact Rock Properties Data on Poisson's Ratio and Young's Modulus

    International Nuclear Information System (INIS)

    Cikanek, E.M.; Safley, L.E.; Grant, T.A.

    2003-01-01

    This report reviews all potentially available Yucca Mountain Project (YMP) data in the Technical Data Management System and compiles all relevant qualified data, including data qualified by this report, on elastic properties, Poisson's ratio and Young's modulus, into a single summary Data Tracking Number (DTN) MO0304DQRIRPPR.002. Since DTN MO0304DQRIRPPR.002 was compiled from both qualified and unqualified sources, this report qualifies the DTN in accordance with AP-SIII.2Q. This report also summarizes the individual test results in MO0304DQRIRPPR.002 and provides summary values using descriptive statistics for Poisson's ratio and Young's modulus in a Reference Information Base Data Item. This report found that test conditions such as temperature, saturation, and sample size could influence test results. The largest influence, however, is the lithologic variation within the tuffs themselves. Even though the summary DTN divided the results by lithostratigraphic units within each formation, there was still substantial variation in elastic properties within individual units. This variation was attributed primarily to the presence or absence of lithophysae, fractures, alteration, pumice fragments, and other lithic clasts within the test specimens as well as changes in porosity within the units. As a secondary cause, substantial variations can also be attributed to test conditions such as the type of test (static or dynamic), size of the test specimen, degree of saturation, temperature, and strain rate conditions. This variation is characteristic of the tuffs and the testing methods, and should be considered when using the data summarized in this report.

  8. Filling of a Poisson trap by a population of random intermittent searchers

    KAUST Repository

    Bressloff, Paul C.

    2012-03-01

    We extend the continuum theory of random intermittent search processes to the case of N independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to n successive particles can find the target and deliver its cargo. Assuming that the rate of target detection scales as 1/N, we show that there exists a well-defined mean-field limit N→∞ in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling λ(t) depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with n particles in terms of the waiting time density f_n(t). The latter is determined by the integrated Poisson rate μ(t) = ∫_0^t λ(s) ds, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasisteady-state analysis. We compare our analytical results for the

  9. Ridge Regression: A Regression Procedure for Analyzing correlated Independent Variables

    Science.gov (United States)

    Rakow, Ernest A.

    1978-01-01

    Ridge regression is a technique used to ameliorate the problem of highly correlated independent variables in multiple regression analysis. This paper explains the fundamentals of ridge regression and illustrates its use. (JKS)
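    The mechanics of ridge regression are compact enough to show directly: adding a multiple of the identity to X'X before solving shrinks and stabilises the coefficients when predictors are nearly collinear. The example below uses simulated data and an arbitrary penalty, and for brevity it penalises the intercept as well, which a careful analysis would usually avoid.

```python
# Ordinary least squares versus ridge regression with two collinear predictors.
import numpy as np

rng = np.random.default_rng(5)
n, lam = 100, 10.0
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)              # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])
y = 2.0 + 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

ols = np.linalg.solve(X.T @ X, X.T @ y)
ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
print("OLS coefficients:  ", np.round(ols, 2))   # unstable under collinearity
print("ridge coefficients:", np.round(ridge, 2)) # shrunken, more stable
```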

  10. results

    Directory of Open Access Journals (Sweden)

    Salabura Piotr

    2017-01-01

    Full Text Available The HADES experiment at GSI is the only high-precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes to diagnose the properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.

  11. Estimation of adjusted rate differences using additive negative binomial regression.

    Science.gov (United States)

    Donoghoe, Mark W; Marschner, Ian C

    2016-08-15

    Rate differences are an important effect measure in biostatistics and provide an alternative perspective to rate ratios. When the data are event counts observed during an exposure period, adjusted rate differences may be estimated using an identity-link Poisson generalised linear model, also known as additive Poisson regression. A problem with this approach is that the assumption of equality of mean and variance rarely holds in real data, which often show overdispersion. An additive negative binomial model is the natural alternative to account for this; however, standard model-fitting methods are often unable to cope with the constrained parameter space arising from the non-negativity restrictions of the additive model. In this paper, we propose a novel solution to this problem using a variant of the expectation-conditional maximisation-either algorithm. Our method provides a reliable way to fit an additive negative binomial regression model and also permits flexible generalisations using semi-parametric regression functions. We illustrate the method using a placebo-controlled clinical trial of fenofibrate treatment in patients with type II diabetes, where the outcome is the number of laser therapy courses administered to treat diabetic retinopathy. An R package is available that implements the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
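    The identity-link Poisson model that the paper generalises can be written down directly; the sketch below fits one to simulated two-group count data so that the treatment coefficient is an adjusted rate difference. It assumes a reasonably recent statsmodels (where the link class is spelled links.Identity) and does not reproduce the paper's negative binomial extension or its ECME-type fitting algorithm.

```python
# Additive (identity-link) Poisson regression: the coefficient is a rate difference.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500
treated = rng.binomial(1, 0.5, n)
rate = 3.0 + 1.5 * treated                       # additive rates; true difference = 1.5
y = rng.poisson(rate)

X = sm.add_constant(treated.astype(float))
fam = sm.families.Poisson(link=sm.families.links.Identity())
fit = sm.GLM(y, X, family=fam).fit()
print("estimated rate difference:", round(fit.params[1], 3))
```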

  12. Stability of nonlinear Vlasov-Poisson equilibria through spectral deformation and Fourier-Hermite expansion.

    Science.gov (United States)

    Siminos, Evangelos; Bénisti, Didier; Gremillet, Laurent

    2011-05-01

    We study the stability of spatially periodic, nonlinear Vlasov-Poisson equilibria as an eigenproblem in a Fourier-Hermite basis (in the space and velocity variables, respectively) of finite dimension, N. When the advection term in the Vlasov equation is dominant, the convergence with N of the eigenvalues is rather slow, limiting the applicability of the method. We use the method of spectral deformation introduced by Crawford and Hislop [Ann. Phys. (NY) 189, 265 (1989)] to selectively damp the continuum of neutral modes associated with the advection term, thus accelerating convergence. We validate and benchmark the performance of our method by reproducing the kinetic dispersion relation results for linear (spatially homogeneous) equilibria. Finally, we study the stability of a periodic Bernstein-Greene-Kruskal mode with multiple phase-space vortices, compare our results with numerical simulations of the Vlasov-Poisson system, and show that the initial unstable equilibrium may evolve to different asymptotic states depending on the way it was perturbed. © 2011 American Physical Society

  13. Generating clustered scale-free networks using Poisson based localization of edges

    Science.gov (United States)

    Türker, İlker

    2018-05-01

    We introduce a variety of network models using a Poisson-based edge localization strategy, which result in clustered scale-free topologies. We first verify the success of our localization strategy by realizing a variant of the well-known Watts-Strogatz model with an inverse approach, implying a small-world regime of rewiring from a random network through a regular one. We then apply the rewiring strategy to a pure Barabasi-Albert model and successfully achieve a small-world regime, with a limited capacity of scale-free property. To imitate the high clustering property of scale-free networks with higher accuracy, we adapted the Poisson-based wiring strategy to a growing network with the ingredients of both preferential attachment and local connectivity. To achieve the collocation of these properties, we used a routine of flattening the edges array, sorting it, and applying a mixing procedure to assemble both global connections with preferential attachment and local clusters. As a result, we achieved clustered scale-free networks with a computational fashion, diverging from the recent studies by following a simple but efficient approach.

  14. Stress Calculation of a TRISO Coated Particle Fuel by Using a Poisson's Ratio in Creep Condition

    International Nuclear Information System (INIS)

    Cho, Moon-Sung; Kim, Y. M.; Lee, Y. W.; Jeong, K. C.; Kim, Y. K.; Oh, S. C.; Kim, W. K.

    2007-01-01

    KAERI, which has been carrying out the Korean VHTR (Very High Temperature modular gas cooled Reactor) project since 2004, has been developing a performance analysis code for the TRISO coated particle fuel named COPA (COated Particle fuel Analysis). COPA predicts temperatures, stresses, fission gas release and failure probabilities of a coated particle fuel in normal operating conditions. KAERI, on the other hand, is developing an ABAQUS based finite element (FE) model to cover the non-linear behaviors of a coated particle fuel such as cracking or debonding of the TRISO coating layers. Using the ABAQUS based FE model, verification calculations were carried out for the IAEA CRP-6 benchmark problems involving creep, swelling, and pressure. However, in this model the Poisson's ratio for the elastic solution was used for the creep strain calculation. In this study, an improvement is made to the ABAQUS based finite element model by using the Poisson's ratio in the creep condition for the calculation of the creep strain rate. As a direct input of the coefficient in a creep condition is impossible, a user subroutine for the ABAQUS solution is prepared in FORTRAN for use in the calculations of the creep strain of the coating layers in the radial and hoop directions of the spherical fuel. This paper shows the calculation results of a TRISO coated particle fuel subject to an irradiation condition assumed as in Miller's publication, in comparison with the results obtained from the old FE model used in the CRP-6 benchmark calculations.

  15. Error Covariance Penalized Regression: A novel multivariate model combining penalized regression with multivariate error structure.

    Science.gov (United States)

    Allegrini, Franco; Braga, Jez W B; Moreira, Alessandro C O; Olivieri, Alejandro C

    2018-06-29

    A new multivariate regression model, named Error Covariance Penalized Regression (ECPR) is presented. Following a penalized regression strategy, the proposed model incorporates information about the measurement error structure of the system, using the error covariance matrix (ECM) as a penalization term. Results are reported from both simulations and experimental data based on replicate mid and near infrared (MIR and NIR) spectral measurements. The results for ECPR are better under non-iid conditions when compared with traditional first-order multivariate methods such as ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLS). Copyright © 2018 Elsevier B.V. All rights reserved.
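    The published ECPR algorithm is not reproduced here, but its general shape, penalised least squares with a matrix penalty in place of the usual scalar ridge term, can be sketched as below; the penalty matrix P is only a placeholder where ECPR would insert information derived from the error covariance matrix.

```python
# Generic penalised least squares with a matrix penalty: b = (X'X + k*P)^(-1) X'y.
import numpy as np

rng = np.random.default_rng(7)
n, p, k = 60, 5, 1.0
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.0, -2.0, 0.5, 0.0])
y = X @ beta + rng.normal(scale=0.3, size=n)

P = np.eye(p)                      # placeholder; ECPR would build this from the ECM
b = np.linalg.solve(X.T @ X + k * P, X.T @ y)
print("penalised coefficients:", np.round(b, 2))
```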

  16. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate...... regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
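
    A minimal sketch of the idea behind adaptive metric kernel regression: a Nadaraya-Watson estimator with one length scale per input dimension, where the scales are chosen by minimising a leave-one-out cross-validation error. The optimiser and the toy data are assumptions for illustration; the cited papers use their own gradient-based minimisation of a cross-validation estimate.

```python
# Minimal sketch of adapting a per-dimension kernel metric by minimising
# leave-one-out cross-validation error of a Nadaraya-Watson estimator.
# (Illustrative only; the cited papers use their own gradient-based scheme.)
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))            # only the first input is relevant
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)

def loo_cv_error(log_scales):
    scales = np.exp(log_scales)                  # one length scale per dimension
    d2 = ((X[:, None, :] - X[None, :, :]) / scales) ** 2
    K = np.exp(-0.5 * d2.sum(axis=-1))
    np.fill_diagonal(K, 0.0)                     # leave-one-out: drop self-weight
    y_hat = K @ y / K.sum(axis=1)
    return np.mean((y - y_hat) ** 2)

res = minimize(loo_cv_error, x0=np.zeros(3), method="Nelder-Mead")
print("adapted length scales:", np.exp(res.x))   # irrelevant dims get large scales
```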

  17. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  18. Polynomial Regressions and Nonsense Inference

    Directory of Open Access Journals (Sweden)

    Daniel Ventosa-Santaulària

    2013-11-01

    Polynomial specifications are widely used, not only in applied economics, but also in epidemiology, physics, political analysis and psychology, just to mention a few examples. In many cases, the data employed to estimate such specifications are time series that may exhibit stochastic nonstationary behavior. We extend Phillips’ results (Phillips, P. Understanding spurious regressions in econometrics. J. Econom. 1986, 33, 311–340) by proving that an inference drawn from polynomial specifications, under stochastic nonstationarity, is misleading unless the variables cointegrate. We use a generalized polynomial specification as a vehicle to study its asymptotic and finite-sample properties. Our results, therefore, lead to a call to be cautious whenever practitioners estimate polynomial regressions.
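
    A toy simulation, not taken from the paper, that illustrates the nonsense-inference point: regressing one random walk on a polynomial of an independent random walk produces "significant" coefficients far more often than the nominal 5% level. The sample size, polynomial order and replication count are arbitrary choices.

```python
# Toy simulation of "nonsense inference": two independent random walks,
# yet a polynomial regression of one on the other looks highly significant.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
rejections = 0
for _ in range(500):
    T = 200
    y = np.cumsum(rng.standard_normal(T))        # independent I(1) series
    x = np.cumsum(rng.standard_normal(T))
    X = sm.add_constant(np.column_stack([x, x**2]))
    fit = sm.OLS(y, X).fit()
    # count experiments where the quadratic term appears "significant"
    rejections += fit.pvalues[2] < 0.05
print("spurious rejection rate:", rejections / 500)  # far above the nominal 5%
```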

  19. Beatification: Flattening Poisson brackets for plasma theory and computation

    Science.gov (United States)

    Morrison, P. J.; Viscondi, T. F.; Caldas, I.

    2017-10-01

    A perturbative method called beatification is presented for producing nonlinear Hamiltonian fluid and plasma theories. Plasma Hamiltonian theories, fluid and kinetic, are naturally described in terms of noncanonical variables. The beatification procedure amounts to finding a transformation that removes the explicit variable dependence from a noncanonical Poisson bracket and replaces it with a fixed dependence on a chosen state in the phase space. As such, beatification is a major step toward casting the Hamiltonian system in its canonical form, thus enabling or facilitating the use of analytical and numerical techniques that require or favor a representation in terms of canonical, or beatified, Hamiltonian variables. Examples will be given. U.S. D.O.E No. #DE-FG02-04ER-54742.

  20. Particular solutions of generalized Euler-Poisson-Darboux equation

    Directory of Open Access Journals (Sweden)

    Rakhila B. Seilkhanova

    2015-01-01

    In this article we consider the generalized Euler-Poisson-Darboux equation $$ u_{tt}+\frac{2\gamma}{t}u_{t}=u_{xx}+u_{yy}+\frac{2\alpha}{x}u_{x}+\frac{2\beta}{y}u_{y},\quad x>0,\;y>0,\;t>0. $$ We construct particular solutions in an explicit form expressed by the Lauricella hypergeometric function of three variables. Properties of each constructed solution have been investigated in sections of surfaces of the characteristic cone. Precisely, we prove that the constructed solutions have a singularity $1/r$ as $r\to 0$, where $r^{2}=(x-x_0)^{2}+(y-y_0)^{2}-(t-t_0)^{2}$.

  1. Recent advances in the Poisson/superfish codes

    International Nuclear Information System (INIS)

    Ryne, R.; Barts, T.; Chan, K.C.D.; Cooper, R.; Deaven, H.; Merson, J.; Rodenz, G.

    1992-01-01

    We report on advances in the POISSON/SUPERFISH family of codes used in the design and analysis of magnets and rf cavities. The codes include preprocessors for mesh generation and postprocessors for graphical display of output and calculation of auxiliary quantities. Release 3 became available in January 1992; it contains many code corrections and physics enhancements, and it also includes support for PostScript, DISSPLA, GKS and PLOT10 graphical output. Release 4 will be available in September 1992; it is free of all bit packing, making the codes more portable and able to treat very large numbers of mesh points. Release 4 includes the preprocessor FRONT and a new menu-driven graphical postprocessor that runs on workstations under X-Windows and that is capable of producing arrow plots. We will present examples that illustrate the new capabilities of the codes. (author). 6 refs., 3 figs

  2. Statistical modelling of Poisson/log-normal data

    International Nuclear Information System (INIS)

    Miller, G.

    2007-01-01

    In statistical data fitting, self consistency is checked by examining the closeness of the quantity χ²/NDF to 1, where χ² is the sum of the squared residuals (data minus fit, divided by the standard deviation) and NDF is the number of data points minus the number of fit parameters. In order to calculate χ² one needs an expression for the standard deviation. In this note several alternative expressions for the standard deviation of data distributed according to a Poisson/log-normal distribution are proposed and evaluated by Monte Carlo simulation. Two preferred alternatives are identified. The use of replicate data to obtain uncertainty is problematic for a small number of replicates. A method to correct this problem is proposed. The log-normal approximation is good for sufficiently positive data. A modification of the log-normal approximation is proposed, which allows it to be used to test the hypothesis that the true value is zero. (authors)
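
    A hedged numerical illustration of why the choice of standard deviation matters when judging χ²/NDF for Poisson data. The two expressions compared below (square root of the fitted mean versus square root of the observed count) are common textbook choices, not necessarily the alternatives evaluated in the note.

```python
# Toy Monte Carlo: chi^2/NDF for Poisson counts, using two different
# standard-deviation choices (sqrt of the fitted mean vs. sqrt of the
# observed count).  Illustrative only; the note evaluates its own alternatives.
import numpy as np

rng = np.random.default_rng(1)
mu_true = 5.0
n_points, n_trials = 50, 2000
chi2_fit, chi2_obs = [], []
for _ in range(n_trials):
    data = rng.poisson(mu_true, n_points)
    fit = data.mean()                              # one fitted parameter
    ndf = n_points - 1
    chi2_fit.append(np.sum((data - fit) ** 2 / fit) / ndf)
    sigma_obs = np.sqrt(np.maximum(data, 1))       # avoid division by zero counts
    chi2_obs.append(np.sum(((data - fit) / sigma_obs) ** 2) / ndf)

print("mean chi2/NDF with sigma = sqrt(fitted mean):   ", np.mean(chi2_fit))
print("mean chi2/NDF with sigma = sqrt(observed count):", np.mean(chi2_obs))
```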

  3. Cryoconservation du sperme et des embryons de poissons

    OpenAIRE

    Maisse, Gérard; Labbé, Catherine; Ogier de Baulny, Bénédicte; Leveroni Calvi, Sylvia; Haffray, Pierrick

    1998-01-01

    The development of genetic selection programmes in fish farming and the protection of the biodiversity of wild fish fauna justify the creation of cryobanks of fish sperm and embryos. Work on the formulation of freezing extenders shows that the target species, the cell type concerned and the interactions between the different components of the extender must all be taken into account. The suitability of sperm for cryopreservation varies widely with the ...

  4. Bases chimiosensorielles du comportement alimentaire chez les poissons

    OpenAIRE

    Saglio, P.

    1981-01-01

    Feeding behaviour, indispensable to the survival of the individual and hence of the species, occupies for that reason a position of prime importance in the hierarchy of fundamental behaviours, all of which depend on it very closely. In fish, this pre-eminence is illustrated by the extreme diversity of the sensory modalities involved and of the behavioural expressions linked to them. Following a number of neurophysiological and ethological demonstrations of ...

  5. Radio pulsar glitches as a state-dependent Poisson process

    Science.gov (United States)

    Fulgenzi, W.; Melatos, A.; Hughes, B. D.

    2017-10-01

    Gross-Pitaevskii simulations of vortex avalanches in a neutron star superfluid are limited computationally to ≲10² vortices and ≲10² avalanches, making it hard to study the long-term statistics of radio pulsar glitches in realistically sized systems. Here, an idealized, mean-field model of the observed Gross-Pitaevskii dynamics is presented, in which vortex unpinning is approximated as a state-dependent, compound Poisson process in a single random variable, the spatially averaged crust-superfluid lag. Both the lag-dependent Poisson rate and the conditional distribution of avalanche-driven lag decrements are inputs into the model, which is solved numerically (via Monte Carlo simulations) and analytically (via a master equation). The output statistics are controlled by two dimensionless free parameters: α, the glitch rate at a reference lag, multiplied by the critical lag for unpinning, divided by the spin-down rate; and β, the minimum fraction of the lag that can be restored by a glitch. The system evolves naturally to a self-regulated stationary state, whose properties are determined by α/α_c(β), where α_c(β) ≈ β^(-1/2) is a transition value. In the regime α ≳ α_c(β), one recovers qualitatively the power-law size and exponential waiting-time distributions observed in many radio pulsars and Gross-Pitaevskii simulations. For α ≪ α_c(β), the size and waiting-time distributions are both power-law-like, and a correlation emerges between size and waiting time until the next glitch, contrary to what is observed in most pulsars. Comparisons with astrophysical data are restricted by the small sample sizes available at present, with ≤35 events observed per pulsar.
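
    The following toy Monte Carlo mimics the flavour of a state-dependent Poisson glitch process: the lag grows at a constant spin-down rate, the glitch rate increases as the lag approaches a critical value, and each glitch removes at least a fraction β of the accumulated lag. The specific rate function, size distribution and parameter values are illustrative stand-ins, not the paper's.

```python
# Toy Monte Carlo of a state-dependent Poisson glitch process (illustrative only).
import numpy as np

rng = np.random.default_rng(7)
x_crit, alpha, beta = 1.0, 2.0, 0.2     # critical lag, reference rate, min. fraction
dt = 1e-3                               # time step (spin-down rate set to 1)
x, since_last = 0.5 * x_crit, 0.0
sizes, waits = [], []
while len(sizes) < 2000:
    x = min(x + dt, 0.99 * x_crit)      # lag grows between glitches
    since_last += dt
    rate = alpha / (1.0 - x / x_crit)   # unpinning rate rises near the critical lag
    if rng.uniform() < rate * dt:       # small-step Bernoulli approximation of Poisson
        dx = rng.uniform(beta * x, x)   # a glitch restores at least a fraction beta
        sizes.append(dx)
        waits.append(since_last)
        x -= dx
        since_last = 0.0

print("mean glitch size:", np.mean(sizes), " mean waiting time:", np.mean(waits))
```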

  6. A geometric multigrid Poisson solver for domains containing solid inclusions

    Science.gov (United States)

    Botto, Lorenzo

    2013-03-01

    A Cartesian grid method for the fast solution of the Poisson equation in three-dimensional domains with embedded solid inclusions is presented and its performance analyzed. The efficiency of the method, which assumes Neumann conditions at the immersed boundaries, is comparable to that of a multigrid method for regular domains. The method is light in terms of memory usage, and easily adaptable to parallel architectures. Tests with random and ordered arrays of solid inclusions, including spheres and ellipsoids, demonstrate smooth convergence of the residual for small separation between the inclusion surfaces. This feature is important, for instance, in simulations of nearly-touching finite-size particles. The implementation of the method, “MG-Inc”, is available online. Catalogue identifier: AEOE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 19068 No. of bytes in distributed program, including test data, etc.: 215118 Distribution format: tar.gz Programming language: C++ (fully tested with GNU GCC compiler). Computer: Any machine supporting standard C++ compiler. Operating system: Any OS supporting standard C++ compiler. RAM: About 150MB for 128³ resolution Classification: 4.3. Nature of problem: Poisson equation in domains containing inclusions; Neumann boundary conditions at immersed boundaries. Solution method: Geometric multigrid with finite-volume discretization. Restrictions: Stair-case representation of the immersed boundaries. Running time: Typically a fraction of a minute for 128³ resolution.

  7. On the Fractional Poisson Process and the Discretized Stable Subordinator

    Directory of Open Access Journals (Sweden)

    Rudolf Gorenflo

    2015-08-01

    We consider the renewal counting number process N = N(t) as a forward march over the non-negative integers with independent identically distributed waiting times. We embed the values of the counting numbers N in a “pseudo-spatial” non-negative half-line x ≥ 0 and observe that for physical time likewise we have t ≥ 0. Thus we apply the Laplace transform with respect to both variables x and t. Applying then a modification of the Montroll-Weiss-Cox formalism of continuous time random walk we obtain the essential characteristics of a renewal process in the transform domain and, if we are lucky, also in the physical domain. The process t = t(N) of accumulation of waiting times is inverse to the counting number process; in honour of the Danish mathematician and telecommunication engineer A.K. Erlang we call it the Erlang process. It yields the probability of exactly n renewal events in the interval (0, t]. We apply our Laplace-Laplace formalism to the fractional Poisson process, whose waiting times are of Mittag-Leffler type, and to a renewal process whose waiting times are of Wright type. The process of Mittag-Leffler type includes as a limiting case the classical Poisson process; the process of Wright type represents the discretized stable subordinator, and a re-scaled version of it was used in our method of parametric subordination of time-space fractional diffusion processes. Properly rescaling the counting number process N(t) and the Erlang process t(N) yields as diffusion limits the inverse stable and the stable subordinator, respectively.

  8. A multiresolution method for solving the Poisson equation using high order regularization

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Walther, Jens Honore

    2016-01-01

    We present a novel high order multiresolution Poisson solver based on regularized Green's function solutions to obtain exact free-space boundary conditions while using fast Fourier transforms for computational efficiency. Multiresolution is achieved through local refinement patches ... and regularized Green's functions corresponding to the difference in the spatial resolution between the patches. The full solution is obtained utilizing the linearity of the Poisson equation enabling super-position of solutions. We show that the multiresolution Poisson solver produces convergence rates......

  9. Probabilistic results for a mobile service scenario

    DEFF Research Database (Denmark)

    Møller, Jesper; Yiu, Man Lung

    from the origin, asks for the location of the first Poisson point and keeps asking for the location of the next Poisson point until the first time that he can be completely certain that he knows which Poisson point is his nearest neighbour. This waiting time is the communication cost, while...... are established for any dimension d ≥ 1. Furthermore, special results when d = 1 and particularly when d = 2 are derived....

  10. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    Science.gov (United States)

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011

  11. A Combined MPI-CUDA Parallel Solution of Linear and Nonlinear Poisson-Boltzmann Equation

    Directory of Open Access Journals (Sweden)

    José Colmenares

    2014-01-01

    The Poisson-Boltzmann equation models the electrostatic potential generated by fixed charges on a polarizable solute immersed in an ionic solution. This approach is often used in computational structural biology to estimate the electrostatic energetic component of the assembly of molecular biological systems. In the last decades, the amount of data concerning proteins and other biological macromolecules has remarkably increased. To fruitfully exploit these data, a huge computational power is needed as well as software tools capable of exploiting it. It is therefore necessary to move towards high performance computing and to develop proper parallel implementations of already existing and of novel algorithms. Nowadays, workstations can provide an amazing computational power: up to 10 TFLOPS on a single machine equipped with multiple CPUs and accelerators such as Intel Xeon Phi or GPU devices. The actual obstacle to the full exploitation of modern heterogeneous resources is efficient parallel coding and porting of software on such architectures. In this paper, we propose the implementation of a full Poisson-Boltzmann solver based on a finite-difference scheme using different and combined parallel schemes and in particular a mixed MPI-CUDA implementation. Results show great speedups when using the two schemes, achieving an 18.9x speedup using three GPUs.

  12. The Allan variance in the presence of a compound Poisson process modelling clock frequency jumps

    Science.gov (United States)

    Formichella, Valerio

    2016-12-01

    Atomic clocks can be affected by frequency jumps occurring at random times and with a random amplitude. The frequency jumps degrade the clock stability and this is captured by the Allan variance. In this work we assume that the random jumps can be modelled by a compound Poisson process, independent of the other stochastic and deterministic processes affecting the clock stability. Then, we derive the analytical expression of the Allan variance of a jumping clock. We find that the analytical Allan variance does not depend on the actual shape of the jumps amplitude distribution, but only on its first and second moments, and its final form is the same as for a clock with a random walk of frequency and a frequency drift. We conclude that the Allan variance cannot distinguish between a compound Poisson process and a Wiener process, hence it may not be sufficient to correctly identify the fundamental noise processes affecting a clock. The result is general and applicable to any oscillator, whose frequency is affected by a jump process with the described statistics.
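
    A small simulation, offered as an illustration rather than a reproduction of the paper's analytical result: fractional-frequency data are generated as white noise plus approximately compound-Poisson frequency jumps, and a plain non-overlapping Allan variance estimate shows how the jumps dominate at long averaging times, much like a random walk of frequency would.

```python
# Sketch: simulate clock fractional-frequency data as white noise plus a
# (approximately) compound Poisson jump process, then estimate the Allan variance.
# Numerical illustration of the effect; the paper derives it analytically.
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y,
    averaged over blocks of m samples."""
    n = len(y) // m
    yb = y[: n * m].reshape(n, m).mean(axis=1)           # averaged frequencies
    return 0.5 * np.mean(np.diff(yb) ** 2)

rng = np.random.default_rng(3)
N = 200_000
y = 1e-12 * rng.standard_normal(N)                       # white frequency noise
jumps = rng.poisson(1e-4, N) * rng.normal(0, 1e-12, N)   # rare jumps, Gaussian amplitude
y_jumpy = y + np.cumsum(jumps)                           # frequency jumps persist

for m in (1, 10, 100, 1000):
    print(f"tau={m:5d}: AVAR white only = {allan_variance(y, m):.3e}, "
          f"with jumps = {allan_variance(y_jumpy, m):.3e}")
```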

  13. Optimal inversion of the Anscombe transformation in low-count Poisson image denoising.

    Science.gov (United States)

    Mäkitalo, Markku; Foi, Alessandro

    2011-01-01

    The removal of Poisson noise is often performed through the following three-step procedure. First, the noise variance is stabilized by applying the Anscombe root transformation to the data, producing a signal in which the noise can be treated as additive Gaussian with unitary variance. Second, the noise is removed using a conventional denoising algorithm for additive white Gaussian noise. Third, an inverse transformation is applied to the denoised signal, obtaining the estimate of the signal of interest. The choice of the proper inverse transformation is crucial in order to minimize the bias error which arises when the nonlinear forward transformation is applied. We introduce optimal inverses for the Anscombe transformation, in particular the exact unbiased inverse, a maximum likelihood (ML) inverse, and a more sophisticated minimum mean square error (MMSE) inverse. We then present an experimental analysis using a few state-of-the-art denoising algorithms and show that the estimation can be consistently improved by applying the exact unbiased inverse, particularly at the low-count regime. This results in a very efficient filtering solution that is competitive with some of the best existing methods for Poisson image denoising.
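
    A sketch of the three-step procedure with two inverse transformations. The Gaussian-domain denoiser here is a plain Gaussian blur used as a stand-in for a proper AWGN denoiser, and the closed-form approximation of the exact unbiased inverse is quoted from the Mäkitalo-Foi line of work; treat its coefficients as an assumption to verify against the original publication.

```python
# Sketch of the three-step Poisson denoising procedure around the Anscombe
# transformation.  The Gaussian blur is a stand-in AWGN denoiser, and the
# closed-form approximation of the exact unbiased inverse is an assumption
# to check against the published formula.
import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_asymptotic(d):
    return (d / 2.0) ** 2 - 1.0 / 8.0             # simple algebraic inverse

def inverse_exact_unbiased_approx(d):
    return (0.25 * d**2 + 0.25 * np.sqrt(1.5) / d
            - 1.375 / d**2 + 0.625 * np.sqrt(1.5) / d**3 - 0.125)

rng = np.random.default_rng(0)
clean = 2.0 + 8.0 * rng.random((64, 64))           # low-count intensities
noisy = rng.poisson(clean).astype(float)

denoised = gaussian_filter(anscombe(noisy), sigma=1.5)   # AWGN-domain denoising
for name, inv in [("asymptotic inverse", inverse_asymptotic),
                  ("exact-unbiased inverse (approx.)", inverse_exact_unbiased_approx)]:
    est = inv(denoised)
    print(name, "RMSE:", np.sqrt(np.mean((est - clean) ** 2)))
```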

  14. Exploring a charge-central strategy in the solution of Poisson's equation for biomolecular applications.

    Science.gov (United States)

    Liu, Xingping; Wang, Changhao; Wang, Jun; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2013-01-07

    Continuum solvent treatments based on the Poisson-Boltzmann equation have been widely accepted for energetic analysis of biomolecular systems. In these approaches, the molecular solute is treated as a low dielectric region and the solvent is treated as a high dielectric continuum. The existence of a sharp dielectric jump at the solute-solvent interface poses a challenge to model the solvation energetics accurately with such a simple mathematical model. In this study, we explored and evaluated a strategy based on the "induced surface charge" to eliminate the dielectric jump within the finite-difference discretization scheme. In addition to the use of the induced surface charges in solving the equation, the second-order accurate immersed interface method is also incorporated to discretize the equation. The resultant linear system is solved with the GMRES algorithm to explicitly impose the flux conservation condition across the solvent-solute interface. The new strategy was evaluated on both analytical and realistic biomolecular systems. The numerical tests demonstrate the feasibility of utilizing induced surface charge in the finite-difference solution of the Poisson-Boltzmann equation. The analysis data further show that the strategy is consistent with theory and the classical finite-difference method on the tested systems. Limitations of the current implementations and further improvements are also analyzed and discussed to fully bring out its potential of achieving higher numerical accuracy.

  15. The Lie–Poisson structure of the reduced n-body problem

    International Nuclear Information System (INIS)

    Dullin, Holger R

    2013-01-01

    The classical n-body problem in d-dimensional space is invariant under the Galilean symmetry group. We reduce by this symmetry group using the method of polynomial invariants. One novelty of our approach is that we do not fix the centre of mass but rather use a momentum shifting trick to change the kinetic part of the Hamiltonian to arrive at a new, dynamically equivalent Hamiltonian which is easier to reduce. As a result we obtain a reduced system with a Lie–Poisson structure which is isomorphic to sp(2n-2), independently of d. The reduction preserves the natural form of the Hamiltonian as a sum of kinetic energy that depends on velocities only and a potential that depends on positions only. This splitting allows us to construct a Poisson integrator for the reduced n-body problem which is efficient away from collisions for n = 3. In particular, we could integrate the figure eight orbit in 18 time steps. (paper)

  16. Lecture notes on ridge regression

    OpenAIRE

    van Wieringen, Wessel N.

    2015-01-01

    The linear regression model cannot be fitted to high-dimensional data, as the high-dimensionality brings about empirical non-identifiability. Penalized regression overcomes this non-identifiability by augmentation of the loss function by a penalty (i.e. a function of regression coefficients). The ridge penalty is the sum of squared regression coefficients, giving rise to ridge regression. Here many aspects of ridge regression are reviewed e.g. moments, mean squared error, its equivalence to co...
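
    A minimal sketch of the ridge estimator discussed in these notes, in its usual closed form; the simulated high-dimensional data are only for illustration.

```python
# Minimal sketch of the ridge estimator: beta_hat = (X'X + lambda*I)^{-1} X'y,
# the minimiser of ||y - X beta||^2 + lambda ||beta||^2 (penalty on coefficients).
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 200                                    # high-dimensional: p > n
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:5] = 2.0
y = X @ beta_true + rng.standard_normal(n)

lam = 10.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print("ridge shrinks coefficients; first five estimates:", beta_ridge[:5].round(2))
```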

  17. Assumptions of Multiple Regression: Correcting Two Misconceptions

    Directory of Open Access Journals (Sweden)

    Matt N. Williams

    2013-09-01

    In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in PARE. This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression assumptions". While Osborne and Waters' efforts in raising awareness of the need to check assumptions when using regression are laudable, we note that the original article contained at least two fairly important misconceptions about the assumptions of multiple regression: firstly, that multiple regression requires the assumption of normally distributed variables; and secondly, that measurement errors necessarily cause underestimation of simple regression coefficients. In this article, we clarify that multiple regression models estimated using ordinary least squares require the assumption of normally distributed errors in order for trustworthy inferences, at least in small samples, but not the assumption of normally distributed response or predictor variables. Secondly, we point out that regression coefficients in simple regression models will be biased (toward zero) estimates of the relationships between variables of interest when measurement error is uncorrelated across those variables, but that when correlated measurement error is present, regression coefficients may be either upwardly or downwardly biased. We conclude with a brief corrected summary of the assumptions of multiple regression when using ordinary least squares.
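
    A short simulation of the article's second point: classical measurement error in the predictor, uncorrelated with everything else, attenuates the simple regression slope toward zero by the reliability ratio. The numbers below are arbitrary.

```python
# Toy illustration of attenuation: measurement error in the predictor that is
# uncorrelated with the outcome's error biases the simple regression slope
# toward zero by the reliability factor var(x) / (var(x) + var(error)).
import numpy as np

rng = np.random.default_rng(0)
n, true_slope = 100_000, 1.0
x = rng.standard_normal(n)
y = true_slope * x + rng.standard_normal(n)
x_obs = x + rng.normal(0, 1.0, n)                 # uncorrelated measurement error

slope_obs = np.polyfit(x_obs, y, 1)[0]
print("estimated slope with error-laden predictor:", round(slope_obs, 3))  # ~0.5
```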

  18. Iterative observer based method for source localization problem for Poisson equation in 3D

    KAUST Repository

    Majeed, Muhammad Usman

    2017-07-10

    A state-observer based method is developed to solve point source localization problem for Poisson equation in a 3D rectangular prism with available boundary data. The technique requires a weighted sum of solutions of multiple boundary data estimation problems for Laplace equation over the 3D domain. The solution of each of these boundary estimation problems involves writing down the mathematical problem in state-space-like representation using one of the space variables as time-like. First, system observability result for 3D boundary estimation problem is recalled in an infinite dimensional setting. Then, based on the observability result, the boundary estimation problem is decomposed into a set of independent 2D sub-problems. These 2D problems are then solved using an iterative observer to obtain the solution. Theoretical results are provided. The method is implemented numerically using finite difference discretization schemes. Numerical illustrations along with simulation results are provided.

  19. Non-Zero Mean PDF Solution of Nonlinear Oscillators Due to Poisson White Noise

    Science.gov (United States)

    Er, G. K.; Iu, V. P.; Zhu, H. T.; Kou, K. P.

    2010-05-01

    This paper presents a solution procedure for the PDF solution of the response of nonlinear oscillators under Poisson white noise. The exponential-polynomial closure (EPC) method is employed to fulfill this task. A van der Pol oscillator and a Duffing oscillator are further investigated in the case of nonzero mean response, respectively. When the polynomial order n increases to 6, the result of the EPC method is in good agreement with the simulation, particularly in the tail region of the PDF. The analysis shows that the non-zero mean PDF is not symmetrically distributed about its mean, unlike the case of the zero-mean PDF. The numerical analysis also shows that the result obtained with the EPC method (n = 2) is the same as that from the equivalent linearization method, whose result differs significantly from the simulation result.

  20. Ridge regression processing

    Science.gov (United States)

    Kuhl, Mark R.

    1990-01-01

    Current navigation requirements depend on a geometric dilution of precision (GDOP) criterion. As long as the GDOP stays below a specific value, navigation requirements are met. The GDOP will exceed the specified value when the measurement geometry becomes too collinear. A new signal processing technique, called Ridge Regression Processing, can reduce the effects of nearly collinear measurement geometry; thereby reducing the inflation of the measurement errors. It is shown that the Ridge signal processor gives a consistently better mean squared error (MSE) in position than the Ordinary Least Mean Squares (OLS) estimator. The applicability of this technique is currently being investigated to improve the following areas: receiver autonomous integrity monitoring (RAIM), coverage requirements, availability requirements, and precision approaches.

  1. AUTISTIC EPILEPTIFORM REGRESSION (A REVIEW

    Directory of Open Access Journals (Sweden)

    L. Yu. Glukhova

    2012-01-01

    The author presents a review of the current scientific literature on autistic epileptiform regression, a special form of autistic disorder characterized by the development of severe communicative disorders in children as a result of continuous prolonged epileptiform activity on EEG. This condition was described by R.F. Tuchman and I. Rapin in 1997. The author describes aspects of the pathogenesis, clinical picture and diagnostics of this disorder, including the characteristic anomalies on EEG (benign epileptiform patterns of childhood) with a high index of epileptiform activity, especially during sleep. Particular attention is given to approaches to the treatment of autistic epileptiform regression. The efficacy of valproates, corticosteroid hormones and antiepileptic drugs of other groups is considered.

  2. Calculating a Stepwise Ridge Regression.

    Science.gov (United States)

    Morris, John D.

    1986-01-01

    Although methods for using ordinary least squares regression computer programs to calculate a ridge regression are available, the calculation of a stepwise ridge regression requires a special purpose algorithm and computer program. The correct stepwise ridge regression procedure is given, and a parallel FORTRAN computer program is described.…

  3. An analysis of the fluctuation potential in the modified Poisson-Boltzmann theory for restricted primitive model electrolytes

    Directory of Open Access Journals (Sweden)

    E.O. Ulloa-Dávila

    2017-12-01

    An approximate analytical solution to the fluctuation potential problem in the modified Poisson-Boltzmann theory of electrolyte solutions in the restricted primitive model is presented. The solution is valid for all inter-ionic distances, including contact values. The fluctuation potential solution is implemented in the theory to describe the structure of the electrolyte in terms of the radial distribution functions, and to calculate some aspects of thermodynamics, viz., configurational reduced energies, and osmotic coefficients. The calculations have been made for symmetric valence 1:1 systems at the physical parameters of ionic diameter 4.25·10^{-10} m, relative permittivity 78.5, absolute temperature 298 K, and molar concentrations 0.1038, 0.425, 1.00, and 1.968. Radial distribution functions are compared with the corresponding results from the symmetric Poisson-Boltzmann, and the conventional and modified Poisson-Boltzmann theories. Comparisons have also been done for the contact values of the radial distributions, reduced configurational energies, and osmotic coefficients as functions of electrolyte concentration. Some Monte Carlo simulation data from the literature are also included in the assessment of the thermodynamic predictions. Results show a very good agreement with the Monte Carlo results and some improvement for osmotic coefficients and radial distribution functions contact values relative to these theories. The reduced energy curve shows excellent agreement with Monte Carlo data for molarities up to 1 mol/dm^3.

  4. Eulerian method for computing multivalued solutions of the Euler-Poisson equations and applications to wave breaking in klystrons

    International Nuclear Information System (INIS)

    Li Xiantao; Woehlbier, John G.; Booske, John H.; Jin Shi

    2004-01-01

    We provide methods of computing multivalued solutions to the Euler-Poisson system and test them in the context of a klystron amplifier. An Eulerian formulation capable of computing multivalued solutions is derived from a kinetic description of the Euler-Poisson system and a moment closure. The system of the moment equations may be closed due to the special structure of the solution in phase space. The Eulerian moment equations are computed for a velocity modulated electron beam, which has been shown by prior Lagrangian theories to break in a finite time and form multivalued solutions. The results of the Eulerian moment equations are compared to direct computation of the kinetic equations and a Lagrangian method also developed in the paper. We use the Lagrangian formulation for the explicit computation of wave breaking time and location for typical velocity modulation boundary conditions

  5. Geometric discretization of the multidimensional Dirac delta distribution - Application to the Poisson equation with singular source terms

    Science.gov (United States)

    Egan, Raphael; Gibou, Frédéric

    2017-10-01

    We present a discretization method for the multidimensional Dirac distribution. We show its applicability in the context of integration problems, and for discretizing Dirac-distributed source terms in Poisson equations with constant or variable diffusion coefficients. The discretization is cell-based and can thus be applied in a straightforward fashion to Quadtree/Octree grids. The method produces second-order accurate results for integration. Superlinear convergence is observed when it is used to model Dirac-distributed source terms in Poisson equations: the observed order of convergence is 2 or slightly smaller. The method is consistent with the discretization of Dirac delta distribution for codimension one surfaces presented in [1,2]. We present Quadtree/Octree construction procedures to preserve convergence and present various numerical examples, including multi-scale problems that are intractable with uniform grids.

  6. Symplectic and Poisson Geometry in Interaction with Analysis, Algebra and Topology & Symplectic Geometry, Noncommutative Geometry and Physics

    CERN Document Server

    Eliashberg, Yakov; Maeda, Yoshiaki; Symplectic, Poisson, and Noncommutative geometry

    2014-01-01

    Symplectic geometry originated in physics, but it has flourished as an independent subject in mathematics, together with its offspring, symplectic topology. Symplectic methods have even been applied back to mathematical physics. Noncommutative geometry has developed an alternative mathematical quantization scheme based on a geometric approach to operator algebras. Deformation quantization, a blend of symplectic methods and noncommutative geometry, approaches quantum mechanics from a more algebraic viewpoint, as it addresses quantization as a deformation of Poisson structures. This volume contains seven chapters based on lectures given by invited speakers at two May 2010 workshops held at the Mathematical Sciences Research Institute: Symplectic and Poisson Geometry in Interaction with Analysis, Algebra and Topology (honoring Alan Weinstein, one of the key figures in the field) and Symplectic Geometry, Noncommutative Geometry and Physics. The chapters include presentations of previously unpublished results and ...

  7. Global Analysis of Response in the Piezomagnetoelastic Energy Harvester System under Harmonic and Poisson White Noise Excitations

    International Nuclear Information System (INIS)

    Yue Xiao-Le; Xu Wei; Zhang Ying; Wang Liang

    2015-01-01

    The piezomagnetoelastic energy harvester system subjected to harmonic and Poisson white noise excitations is studied by using the generalized cell mapping method. The transient and stationary probability density functions (PDFs) of response based on the global viewpoint are obtained by the matrix analysis method. Monte Carlo simulation results verify the accuracy of this method. It can be observed that evolutionary direction of transient and stationary PDFs is in accordance with the unstable manifold for this system, and a stochastic P-bifurcation occurs as the intensity of Poisson white noise increases. This study presents an efficient numerical tool to solve the stochastic response of a three-dimensional dynamical system and provides a new idea to analyze the energy harvester system. (paper)

  8. Global Analysis of Response in the Piezomagnetoelastic Energy Harvester System under Harmonic and Poisson White Noise Excitations

    Science.gov (United States)

    Yue, Xiao-Le; Xu, Wei; Zhang, Ying; Wang, Liang

    2015-10-01

    The piezomagnetoelastic energy harvester system subjected to harmonic and Poisson white noise excitations is studied by using the generalized cell mapping method. The transient and stationary probability density functions (PDFs) of response based on the global viewpoint are obtained by the matrix analysis method. Monte Carlo simulation results verify the accuracy of this method. It can be observed that evolutionary direction of transient and stationary PDFs is in accordance with the unstable manifold for this system, and a stochastic P-bifurcation occurs as the intensity of Poisson white noise increases. This study presents an efficient numerical tool to solve the stochastic response of a three-dimensional dynamical system and provides a new idea to analyze the energy harvester system. Supported by the National Natural Science Foundation of China under Grant Nos. 11302170, 11202160, 11302171, and the Fundamental Research Funds for the Central Universities under Grant No. 3102014JCQ01079

  9. Evaluation of Tensile Young's Modulus and Poisson's Ratio of a Bi-modular Rock from the Displacement Measurements in a Brazilian Test

    Science.gov (United States)

    Patel, Shantanu; Martin, C. Derek

    2018-02-01

    Unlike metals, rocks show bi-modularity (different Young's moduli and Poisson's ratios in compression and tension). Displacements monitored during the Brazilian test are used in this study to obtain the Young's modulus and Poisson's ratio in tension. New equations for the displacements in a Brazilian test are derived considering the bi-modularity in the stress-strain relations. The digital image correlation technique was used to monitor the displacements of the Brazilian disk flat surface. To validate the Young's modulus and Poisson's ratio obtained from the Brazilian test, the results were compared with the values from the direct tension tests. The results obtained from the Brazilian test were repeatable and within 3.5% of the value obtained from the direct tension test for the rock tested.

  10. Multinomial logistic regression ensembles.

    Science.gov (United States)

    Lee, Kyewon; Ahn, Hongshik; Moon, Hojin; Kodell, Ralph L; Chen, James J

    2013-05-01

    This article proposes a method for multiclass classification problems using ensembles of multinomial logistic regression models. A multinomial logit model is used as a base classifier in ensembles from random partitions of predictors. The multinomial logit model can be applied to each mutually exclusive subset of the feature space without variable selection. By combining multiple models the proposed method can handle a huge database without a constraint needed for analyzing high-dimensional data, and the random partition can improve the prediction accuracy by reducing the correlation among base classifiers. The proposed method is implemented using R, and the performance including overall prediction accuracy, sensitivity, and specificity for each category is evaluated on two real data sets and simulation data sets. To investigate the quality of prediction in terms of sensitivity and specificity, the area under the receiver operating characteristic (ROC) curve (AUC) is also examined. The performance of the proposed model is compared to a single multinomial logit model and it shows a substantial improvement in overall prediction accuracy. The proposed method is also compared with other classification methods such as the random forest, support vector machines, and random multinomial logit model.
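
    A rough sketch of the ensemble idea: several multinomial logistic regression base classifiers, each fit on a random disjoint partition of the predictors, with class probabilities averaged. It uses scikit-learn and the iris data purely for illustration; the paper's implementation is in R and differs in detail.

```python
# Sketch of an ensemble of multinomial logit base classifiers, each fit on a
# random disjoint partition of the predictors; class probabilities are averaged.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_models, proba = 20, 0.0
for _ in range(n_models):
    cols = rng.permutation(X.shape[1])
    halves = np.array_split(cols, 2)              # random partition of predictors
    for idx in halves:
        clf = LogisticRegression(max_iter=1000).fit(X_tr[:, idx], y_tr)
        proba = proba + clf.predict_proba(X_te[:, idx]) / (2 * n_models)

accuracy = np.mean(np.argmax(proba, axis=1) == y_te)
print("ensemble accuracy on held-out data:", round(accuracy, 3))
```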

  11. Canonical variate regression.

    Science.gov (United States)

    Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun

    2016-07-01

    In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an [Formula: see text] intercross mice study and an alcohol dependence study. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. The Rasch Poisson counts model for incomplete data : An application of the EM algorithm

    NARCIS (Netherlands)

    Jansen, G.G.H.

    Rasch's Poisson counts model is a latent trait model for the situation in which K tests are administered to N examinees and the test score is a count [e.g., the repeated occurrence of some event, such as the number of items completed or the number of items answered (in)correctly]. The Rasch Poisson

  13. Modeling Repeated Count Data : Some Extensions of the Rasch Poisson Counts Model

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Jansen, Margo

    1995-01-01

    We consider data that can be summarized as an N X K table of counts-for example, test data obtained by administering K tests to N subjects. The cell entries y(ij) are assumed to be conditionally independent Poisson-distributed random variables, given the NK Poisson intensity parameters mu(ij). The

  14. Dynamic Response of Non-Linear Inelsatic Systems to Poisson-Driven Stochastic Excitations

    DEFF Research Database (Denmark)

    Nielsen, Søren R. K.; Iwankiewicz, R.

    A single-degree-of-freedom inelastic system subject to a stochastic excitation in the form of a Poisson-distributed train of impulses is considered. The state variables of the system form a non-diffusive, Poisson-driven Markov process. Two approximate analytical techniques are developed: modification...

  15. A relation between Liapunov stability, non-wanderingness and Poisson stability

    International Nuclear Information System (INIS)

    Ahmad, K.H.

    1985-07-01

    In this work, some of the relations among Liapunov stability, non-wanderingness and Poisson stability are considered. In particular it is shown that for a non-wandering point in a set, positive (resp. negative) Liapunov stability in that set implies positive (resp. negative) Poisson stability in the same set. (author)

  16. Approximation by some combinations of Poisson integrals for Hermite and Laguerre expansions

    Directory of Open Access Journals (Sweden)

    Grażyna Krech

    2013-02-01

    The aim of this paper is to study the rate of convergence of some combinations of Poisson integrals for Hermite and Laguerre expansions. We are able to achieve faster convergence for our modified operators than for the Poisson integrals. We also prove a Voronovskaya-type theorem for these new operators.

  17. Spontaneous Regression of Lumbar Herniated Disc

    Directory of Open Access Journals (Sweden)

    Chun-Wei Chang

    2009-12-01

    Intervertebral disc herniation of the lumbar spine is a common disease presenting with low back pain and nerve root radiculopathy. In the majority of patients, neurological symptoms frequently improve after a period of conservative treatment. This has been regarded as the result of a decrease in the pressure exerted by the herniated disc on neighboring neural structures and a gradual regression of inflammation. Recently, with advances in magnetic resonance imaging, many reports have demonstrated that a herniated disc has the potential for spontaneous regression. Regression coincides with the improvement of associated symptoms. However, the exact regression mechanism remains unclear. Here, we present 2 cases of lumbar intervertebral disc herniation with spontaneous regression. We review the literature and discuss the possible mechanisms, the precipitating factors of spontaneous disc regression and the proper timing of surgical intervention.

  18. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, leads to misleading statistical inferences and analysis. Therefore, our goal is to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying the Bayesian formulation to the EIV model. We shall extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We shall then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
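
    A minimal random-walk Metropolis sketch of a Bayesian errors-in-variables Poisson regression, in the spirit of the approach described (but not the Hossain-Gustafson method itself): the outcome is Poisson with a log link, the true covariate is latent, and only a noisy surrogate is observed. Priors, proposal scales and the assumed known error variance are illustrative choices.

```python
# Minimal random-walk Metropolis sketch for a Bayesian errors-in-variables
# Poisson regression: y_i ~ Poisson(exp(b0 + b1*x_i)), the true x_i are latent,
# and only w_i = x_i + N(0, tau^2) is observed.  All tuning choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, b0_true, b1_true, tau = 200, 0.5, 1.0, 0.5
x_true = rng.standard_normal(n)
w = x_true + rng.normal(0, tau, n)                   # mismeasured covariate
y = rng.poisson(np.exp(b0_true + b1_true * x_true))

def site_logpost(b0, b1, x):
    """Per-observation log-posterior terms involving the latent x_i."""
    eta = b0 + b1 * x
    return (y * eta - np.exp(eta)                    # Poisson log-likelihood
            - 0.5 * ((w - x) / tau) ** 2             # measurement model
            - 0.5 * x ** 2)                          # N(0,1) prior on true x

b0, b1, x = 0.0, 0.0, w.copy()
draws = []
for it in range(6000):
    # (1) random-walk Metropolis update of the regression coefficients
    p0, p1 = b0 + 0.05 * rng.standard_normal(), b1 + 0.05 * rng.standard_normal()
    log_ratio = (site_logpost(p0, p1, x).sum() - site_logpost(b0, b1, x).sum()
                 - 0.5 * (p0**2 + p1**2 - b0**2 - b1**2) / 100.0)  # vague priors
    if np.log(rng.uniform()) < log_ratio:
        b0, b1 = p0, p1
    # (2) vectorised single-site Metropolis updates of the latent covariates
    xp = x + 0.3 * rng.standard_normal(n)
    accept = np.log(rng.uniform(size=n)) < site_logpost(b0, b1, xp) - site_logpost(b0, b1, x)
    x = np.where(accept, xp, x)
    if it >= 1000:
        draws.append((b0, b1))

print("posterior means (b0, b1):", np.mean(draws, axis=0).round(2))  # compare with (0.5, 1.0)
```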

  19. Solving the Poisson partial differential equation using vector space projection methods

    Science.gov (United States)

    Marendic, Boris

    This research presents a new approach to solving the Poisson partial differential equation using Vector Space Projection (VSP) methods. The work attacks the Poisson equation as encountered in two-dimensional phase unwrapping problems and in two-dimensional electrostatic problems. Algorithms are developed by first considering simple one-dimensional cases and then extending them to two-dimensional problems. In the context of phase unwrapping of two-dimensional phase functions, we explore an approach to the unwrapping using a robust extrapolation-projection algorithm. The unwrapping is done iteratively by modification of the Gerchberg-Papoulis (GP) extrapolation algorithm, and the solution is refined by projecting onto the available global data. An important contribution to the extrapolation algorithm is the formulation of the algorithm with the relaxed bandwidth constraint, and the proof that such a modified GP extrapolation algorithm still converges. It is also shown that the unwrapping problem is ill-posed in the VSP setting, and that the modified GP algorithm is the missing link to pushing the iterative algorithm out of the trap solution under certain conditions. Robustness of the algorithm is demonstrated through its performance in a noisy environment. Performance is demonstrated by applying it to phantom phase functions as well as to real phase functions. Results are compared to well known algorithms in the literature. Unlike many existing unwrapping methods, which perform unwrapping locally, this work approaches the unwrapping problem globally and eliminates the need for guiding instruments, like quality maps. The VSP algorithm also very effectively battles problems of shadowing and holes, where data are not available or are heavily corrupted. In solving the classical Poisson problems in electrostatics, we demonstrate the effectiveness and ease of implementation of the VSP methodology to solving the equation, as well as imposing the boundary conditions.

  20. Poisson's ratio and Young's modulus of lipid bilayers in different phases

    Directory of Open Access Journals (Sweden)

    Tayebeh eJadidi

    2014-04-01

    A general computational method is introduced to estimate the Poisson's ratio for membranes with small thickness. In this method, the Poisson's ratio is calculated by utilizing a rescaling of inter-particle distances in one lateral direction under periodic boundary conditions. As an example, for the coarse-grained lipid model introduced by Lenz and Schmid, we calculate the Poisson's ratio in the gel, fluid, and interdigitated phases. Having the Poisson's ratio enables us to obtain the Young's modulus for the membranes in different phases. The approach may be applied to other membranes, such as graphene and tethered membranes, in order to predict the temperature dependence of their Poisson's ratio and Young's modulus.

  1. The Lie-Poisson structure of integrable classical non-linear sigma models

    International Nuclear Information System (INIS)

    Bordemann, M.; Forger, M.; Schaeper, U.; Laartz, J.

    1993-01-01

    The canonical structure of classical non-linear sigma models on Riemannian symmetric spaces, which constitute the most general class of classical non-linear sigma models known to be integrable, is shown to be governed by a fundamental Poisson bracket relation that fits into the r-s-matrix formalism for non-ultralocal integrable models first discussed by Maillet. The matrices r and s are computed explicitly and, being field dependent, satisfy fundamental Poisson bracket relations of their own, which can be expressed in terms of a new numerical matrix c. It is proposed that all these Poisson brackets taken together are representation conditions for a new kind of algebra which, for this class of models, replaces the classical Yang-Baxter algebra governing the canonical structure of ultralocal models. The Poisson brackets for the transition matrices are also computed, and the notorious regularization problem associated with the definition of the Poisson brackets for the monodromy matrices is discussed. (orig.)

  2. Comment on 'On higher order corrections to gyrokinetic Vlasov-Poisson equations in the long wavelength limit' [Phys. Plasmas 16, 044506 (2009)

    International Nuclear Information System (INIS)

    Parra, Felix I.; Catto, Peter J.

    2009-01-01

    A recent publication [F. I. Parra and P. J. Catto, Plasma Phys. Controlled Fusion 50, 065014 (2008)] warned against the use of the lower order gyrokinetic Poisson equation at long wavelengths because the long wavelength, radial electric field must remain undetermined to the order the equation is obtained. Another reference [W. W. Lee and R. A. Kolesnikov, Phys. Plasmas 16, 044506 (2009)] criticizes these results by arguing that the higher order terms neglected in the most common gyrokinetic Poisson equation are formally smaller than the terms that are retained. This argument is flawed and ignores that the lower order terms, although formally larger, must cancel without determining the long wavelength, radial electric field. The reason for this cancellation is discussed. In addition, the origin of a nonlinear term present in the gyrokinetic Poisson equation [F. I. Parra and P. J. Catto, Plasma Phys. Controlled Fusion 50, 065014 (2008)] is explained.

  3. A Poisson Cluster Stochastic Rainfall Generator That Accounts for the Interannual Variability of Rainfall Statistics: Validation at Various Geographic Locations across the United States

    Directory of Open Access Journals (Sweden)

    Dongkyun Kim

    2014-01-01

    A novel approach for a Poisson cluster stochastic rainfall generator was validated in its ability to reproduce important rainfall and watershed response characteristics at 104 locations in the United States. The suggested novel approach, The Hybrid Model (THM), as compared to the traditional Poisson cluster rainfall modeling approaches, has an additional capability to account for the interannual variability of rainfall statistics. THM and a traditional Poisson cluster rainfall model (the modified Bartlett-Lewis rectangular pulse model) were compared in their ability to reproduce the characteristics of extreme rainfall and watershed response variables such as runoff and peak flow. The results of the comparison indicate that THM generally outperforms the traditional approach in reproducing the distributions of peak rainfall, peak flow, and runoff volume. In addition, THM significantly outperformed the traditional approach in reproducing extreme rainfall by 2.3% to 66% and extreme flow values by 32% to 71%.
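
    For readers unfamiliar with Poisson cluster rainfall generators, the sketch below implements a toy Neyman-Scott rectangular-pulse model, a standard member of the family that is related to, but not the same as, either THM or the modified Bartlett-Lewis model compared in the study. All parameter values are invented for illustration.

```python
# Toy Neyman-Scott rectangular-pulse generator, a standard Poisson cluster
# rainfall model (illustrative parameters, coarse hourly aggregation).
import numpy as np

rng = np.random.default_rng(0)
T_hours = 24 * 30                                  # one month of hourly rainfall
lam = 1 / 40.0                                     # storm arrival rate [1/h]
mean_cells, beta = 5.0, 1 / 2.0                    # cells per storm, cell lag rate [1/h]
eta, mu_x = 1 / 3.0, 2.0                           # cell duration rate [1/h], mean intensity [mm/h]

rain = np.zeros(T_hours)
storm_times = np.cumsum(rng.exponential(1 / lam, size=int(3 * lam * T_hours) + 5))
for t_storm in storm_times[storm_times < T_hours]:
    n_cells = rng.poisson(mean_cells)              # Poisson number of rain cells per storm
    for _ in range(n_cells):
        start = t_storm + rng.exponential(1 / beta)      # cell displacement from storm origin
        dur = rng.exponential(1 / eta)                   # cell duration
        intensity = rng.exponential(mu_x)                # cell intensity
        i0, i1 = int(start), int(min(start + dur, T_hours))
        rain[i0:i1] += intensity                         # rectangular pulse

print("mean hourly rainfall [mm]:", round(float(rain.mean()), 2),
      " max hourly rainfall [mm]:", round(float(rain.max()), 1))
```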

  4. Comparison among Wavelet filters and others in the frequency domain for reducing Poisson noise in head CT

    International Nuclear Information System (INIS)

    Perez Diaz, M.; Ruiz Gonzalez, Y.; Lorenzo Ginori, J. V.

    2015-01-01

    This paper describes a comparison between some wavelet filters and other, more traditional filters in the frequency domain, such as the Median, Wiener and Butterworth filters, for reducing Poisson noise in Computed Tomography (CT) scans. Five CT slices containing the posterior fossa, from an anthropomorphic phantom and from patients, were selected. As their original projections contain noise from the acquisition process, some simulated noise-free lesions were added to the images. After that, the whole images were artificially contaminated with Poisson noise over the sinogram space. Configurations using wavelets drawn from four wavelet families, with various decomposition levels and different thresholds, were tested in order to determine the de-noising performance, as were the rest of the traditional filters. The quality of the resulting images was evaluated using Contrast to Noise Ratio (CNR), the HVS absolute norm (H1), and the Structural Similarity Index (SSIM) as quantitative metrics. We observed that wavelet filtering is an alternative to be considered for Poisson noise reduction in image processing of posterior fossa images in head CT, with behavior similar to the Butterworth filter and better than the Median or Wiener filters for the experiment performed. (Author)

  5. Determination of oral mucosal Poisson's ratio and coefficient of friction from in-vivo contact pressure measurements.

    Science.gov (United States)

    Chen, Junning; Suenaga, Hanako; Hogg, Michael; Li, Wei; Swain, Michael; Li, Qing

    2016-01-01

    Despite their considerable importance to biomechanics, there are no existing methods available to directly measure the apparent Poisson's ratio and friction coefficient of oral mucosa. This study aimed to develop an inverse procedure to determine these two biomechanical parameters by utilizing an in vivo measurement of the contact pressure between a partial denture and the underlying mucosa, through nonlinear finite element (FE) analysis and a surrogate response surface (RS) modelling technique. First, the in vivo denture-mucosa contact pressure was measured by a tactile electronic sensing sheet. Second, a 3D FE model was constructed based on the patient CT images. Third, a range of apparent Poisson's ratios and coefficients of friction from the literature was considered as the design variables in a series of FE runs for constructing an RS surrogate model. Finally, the discrepancy between computed in silico and measured in vivo results was minimized to identify the best matching Poisson's ratio and coefficient of friction. The established non-invasive methodology was demonstrated to be effective in identifying such biomechanical parameters of oral mucosa and can potentially be used for determining the biomaterial properties of other soft biological tissues.

  6. A note on asymptotic expansions for sums over a weakly dependent random field with application to the Poisson and Strauss processes

    DEFF Research Database (Denmark)

    Jensen, J.L.

    1993-01-01

    Previous results on Edgeworth expansions for sums over a random field are extended to the case where the strong mixing coefficient depends not only on the distance between two sets of random variables, but also on the size of the two sets. The results are applied to the Poisson and the Strauss...

  7. Comparison of three-dimensional Poisson solution methods for particle-based simulation and inhomogeneous dielectrics

    Science.gov (United States)

    Berti, Claudio; Gillespie, Dirk; Bardhan, Jaydeep P.; Eisenberg, Robert S.; Fiegna, Claudio

    2012-07-01

    Particle-based simulation represents a powerful approach to modeling physical systems in electronics, molecular biology, and chemical physics. Accounting for the interactions occurring among charged particles requires an accurate and efficient solution of Poisson's equation. For a system of discrete charges with inhomogeneous dielectrics, i.e., a system with discontinuities in the permittivity, the boundary element method (BEM) is frequently adopted. It provides the solution of Poisson's equation, accounting for polarization effects due to the discontinuity in the permittivity by computing the induced charges at the dielectric boundaries. In this framework, the total electrostatic potential is then found by superimposing the elemental contributions from both source and induced charges. In this paper, we present a comparison between two BEMs to solve a boundary-integral formulation of Poisson's equation, with emphasis on the BEMs' suitability for particle-based simulations in terms of solution accuracy and computation speed. The two approaches are the collocation and qualocation methods. Collocation is implemented following the induced-charge computation method of D. Boda et al. [J. Chem. Phys. 125, 034901 (2006)]. The qualocation method is described by J. Tausch et al. [IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, 1398 (2001)]. These approaches are studied using both flat and curved surface elements to discretize the dielectric boundary, using two challenging test cases: a dielectric sphere embedded in a different dielectric medium and a toy model of an ion channel. Earlier comparisons of the two BEM approaches did not address curved surface elements or semiatomistic models of ion channels. Our results support the earlier findings that for flat-element calculations, qualocation is always significantly more accurate than collocation. On the other hand, when the dielectric boundary

  8. Comparison of three-dimensional poisson solution methods for particle-based simulation and inhomogeneous dielectrics.

    Science.gov (United States)

    Berti, Claudio; Gillespie, Dirk; Bardhan, Jaydeep P; Eisenberg, Robert S; Fiegna, Claudio

    2012-07-01

    Particle-based simulation represents a powerful approach to modeling physical systems in electronics, molecular biology, and chemical physics. Accounting for the interactions occurring among charged particles requires an accurate and efficient solution of Poisson's equation. For a system of discrete charges with inhomogeneous dielectrics, i.e., a system with discontinuities in the permittivity, the boundary element method (BEM) is frequently adopted. It provides the solution of Poisson's equation, accounting for polarization effects due to the discontinuity in the permittivity by computing the induced charges at the dielectric boundaries. In this framework, the total electrostatic potential is then found by superimposing the elemental contributions from both source and induced charges. In this paper, we present a comparison between two BEMs to solve a boundary-integral formulation of Poisson's equation, with emphasis on the BEMs' suitability for particle-based simulations in terms of solution accuracy and computation speed. The two approaches are the collocation and qualocation methods. Collocation is implemented following the induced-charge computation method of D. Boda et al. [J. Chem. Phys. 125, 034901 (2006)]. The qualocation method is described by J. Tausch et al. [IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, 1398 (2001)]. These approaches are studied using both flat and curved surface elements to discretize the dielectric boundary, using two challenging test cases: a dielectric sphere embedded in a different dielectric medium and a toy model of an ion channel. Earlier comparisons of the two BEM approaches did not address curved surface elements or semiatomistic models of ion channels. Our results support the earlier findings that for flat-element calculations, qualocation is always significantly more accurate than collocation. On the other hand, when the dielectric boundary is discretized with curved surface elements, the

  9. Zero-truncated panel Poisson mixture models: Estimating the impact on tourism benefits in Fukushima Prefecture.

    Science.gov (United States)

    Narukawa, Masaki; Nohara, Katsuhito

    2018-04-01

    This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.

    Science.gov (United States)

    Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos

    2013-11-04

    In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise-limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance and the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving a two-order-of-magnitude BER improvement and a 0.1 nat capacity improvement over conventional chip-level OCDMA systems at a coding rate of 1/10.

  11. Local existence of solutions to the Euler-Poisson system, including densities without compact support

    Science.gov (United States)

    Brauer, Uwe; Karp, Lavi

    2018-01-01

    Local existence and well-posedness for a class of solutions of the Euler-Poisson system are shown. These solutions have a density ρ which either falls off at infinity or has compact support. The solutions have finite mass and finite energy functional, and include the static spherical solutions for γ = 6/5. The result is achieved by using weighted Sobolev spaces of fractional order and a new non-linear estimate which allows the physical density to be estimated by the regularised non-linear matter variable. Gamblin has also studied this setting, but using very different functional spaces. However, we believe that the functional setting we use is more appropriate for describing a physically isolated body and more suitable for studying the Newtonian limit.

  12. Stability of periodic steady-state solutions to a non-isentropic Euler-Poisson system

    Science.gov (United States)

    Liu, Cunming; Peng, Yue-Jun

    2017-06-01

    We study the stability of periodic smooth solutions near non-constant steady-states for a non-isentropic Euler-Poisson system without a temperature damping term. The system arises in the theory of semiconductors, for which the doping profile is a given smooth function. In this stability problem, there are no special restrictions on the size of the doping profile, but only on the size of the perturbation. We prove that small perturbations of periodic steady-states are exponentially stable for large time. For this purpose, we introduce new variables and choose a non-diagonal symmetrizer of the full Euler equations to recover dissipation estimates. This also makes the proof of the stability result very simple and concise.

  13. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model

    Science.gov (United States)

    Coley, Rebecca Yates; Brown, Elizabeth R.

    2016-01-01

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051

  14. General form of the Euler-Poisson-Darboux equation and application of the transmutation method

    Directory of Open Access Journals (Sweden)

    Elina L. Shishkina

    2017-07-01

    Full Text Available In this article, we find solution representations in compact integral form for the Cauchy problem for a general form of the Euler-Poisson-Darboux equation with Bessel operators, via generalized translation and spherical mean operators, for all values of the parameter k, including the exceptional odd negative values not studied before. We use a Hankel transform method to prove the results in a unified way. Under additional conditions we prove that a distributional solution is also a classical one. A transmutation property for the connected generalized spherical mean is proved, and the importance of applying transmutation methods to differential equations with Bessel operators is emphasized. The paper also contains a short historical introduction on differential equations with Bessel operators and a rather detailed reference list of monographs and papers on the mathematical theory and applications of this class of differential equations.

  15. Site-Specific Study of In-Building Wireless Solutions with Poisson Traffic

    DEFF Research Database (Denmark)

    Liu, Zhen; Sørensen, Troels Bundgaard; Mogensen, Preben

    2011-01-01

    … system - together with another multi-cell system using our proposed centralized scheduling scheme. In our previous work, their performance is evaluated and compared in the LTE downlink context with full buffer traffic. Compared to real mobile networks, the full buffer traffic model is usually a worst-case estimation of traffic load which causes severe interference conditions. Especially for Femto cells with universal frequency reuse, it degrades system performance and may lead to biased conclusions on the relative performance of the different in-building solutions. In this study, we use a more realistic traffic model with fixed buffer size and Poisson arrivals. Our new results show better performance for Femto cells with frequency reuse 1 at light to medium load, although the intelligent distributed system still obtains considerably better cell-edge user throughput for the same number of access points.

  16. Non-Poisson counting statistics of a hybrid G-M counter dead time model

    International Nuclear Information System (INIS)

    Lee, Sang Hoon; Jae, Moosung; Gardner, Robin P.

    2007-01-01

    The counting statistics of a G-M counter with a considerable dead time event rate deviate from Poisson statistics. Important characteristics such as observed counting rates as a function of true counting rates, variances and interval distributions were analyzed for three dead time models, non-paralyzable, paralyzable and hybrid, with the help of GMSIM, a Monte Carlo dead time effect simulator. The simulation results showed good agreement with the models in observed counting rates and variances. It was found through GMSIM simulations that the interval distribution for the hybrid model shows three distinctive regions: a complete cutoff region for the duration of the total dead time, a degraded exponential region and an enhanced exponential region. By measuring the cutoff and the duration of the degraded exponential region from the pulse interval distribution, it is possible to evaluate the two dead times in the hybrid model
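
    As an illustration of how dead time distorts observed counting rates, the sketch below evaluates the two classical limiting models; the hybrid model of the paper combines both and is not reproduced here. The formulas m = n/(1 + nτ) and m = n·exp(−nτ) are the standard non-paralyzable and paralyzable corrections, and the dead time and rate values are hypothetical.

```python
import numpy as np

def observed_rate_nonparalyzable(n, tau):
    """Observed count rate for a non-paralyzable dead time tau (s) at true rate n (cps)."""
    return n / (1.0 + n * tau)

def observed_rate_paralyzable(n, tau):
    """Observed count rate for a paralyzable dead time tau (s) at true rate n (cps)."""
    return n * np.exp(-n * tau)

true_rates = np.logspace(2, 6, 5)   # 1e2 ... 1e6 counts per second
tau = 100e-6                        # hypothetical 100 microsecond dead time
for n in true_rates:
    print(f"n = {n:9.0f} cps  non-paralyzable: {observed_rate_nonparalyzable(n, tau):9.0f}"
          f"  paralyzable: {observed_rate_paralyzable(n, tau):9.0f}")
```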

  17. An implicit meshless scheme for the solution of transient non-linear Poisson-type equations

    KAUST Repository

    Bourantas, Georgios

    2013-07-01

    A meshfree point collocation method is used for the numerical simulation of both transient and steady state non-linear Poisson-type partial differential equations. Particular emphasis is placed on the application of the linearization method with special attention to the lagging of coefficients method and the Newton linearization method. The localized form of the Moving Least Squares (MLS) approximation is employed for the construction of the shape functions, in conjunction with the general framework of the point collocation method. Computations are performed for regular nodal distributions, stressing the positivity conditions that make the resulting system stable and convergent. The accuracy and the stability of the proposed scheme are demonstrated through representative and well-established benchmark problems. © 2013 Elsevier Ltd.

  18. Probabilistic solution of nonlinear oscillators excited by combined Gaussian and Poisson white noises

    Science.gov (United States)

    Zhu, H. T.; Er, G. K.; Iu, V. P.; Kou, K. P.

    2011-06-01

    The stationary probability density function (PDF) solution of the stochastic response of nonlinear oscillators is investigated in this paper. The external excitation is assumed to be a combination of Gaussian and Poisson white noises. The PDF solution is governed by the generalized Kolmogorov equation which is solved by the exponential-polynomial closure (EPC) method. In order to evaluate the effectiveness of the EPC method, different nonlinear oscillators are considered in numerical analysis. Nonlinearity exists either in displacement or in velocity for these nonlinear oscillators. The impulse arrival rate, mono-modal PDF and bi-modal PDF are also considered in this study. Compared to the PDF given by Monte Carlo simulation, the EPC method presents good agreement with the simulated result, which can also be observed in the tail region of the PDF solution.

  19. Monovalent counterion distributions at highly charged water interfaces: Proton-transfer and Poisson-Boltzmann theory

    Energy Technology Data Exchange (ETDEWEB)

    Bu, W.; Vaknin, D.; Travesset, A. (Iowa State)

    2010-07-13

    Surface sensitive synchrotron-x-ray scattering studies reveal the distributions of monovalent ions next to highly charged interfaces. A lipid phosphate (dihexadecyl hydrogen phosphate) was spread as a monolayer at the air-water interface, containing CsI at various concentrations. Using anomalous reflectivity off and at the L3 Cs+ resonance, we provide spatial counterion distributions (Cs+) next to the negatively charged interface over a wide range of ionic concentrations. We argue that at low salt concentrations and for pure water the enhanced concentration of hydroniums H3O+ at the interface leads to proton transfer back to the phosphate group by a high contact potential, whereas high salt concentrations lower the contact potential resulting in proton release and increased surface charge density. The experimental ionic distributions are in excellent agreement with a renormalized-surface-charge Poisson-Boltzmann theory without fitting parameters or additional assumptions.

  20. Monovalent counterion distributions at highly charged water interfaces: proton-transfer and Poisson-Boltzmann theory.

    Science.gov (United States)

    Bu, Wei; Vaknin, David; Travesset, Alex

    2005-12-01

    Surface sensitive synchrotron-x-ray scattering studies reveal the distributions of monovalent ions next to highly charged interfaces. A lipid phosphate (dihexadecyl hydrogen phosphate) was spread as a monolayer at the air-water interface, containing CsI at various concentrations. Using anomalous reflectivity off and at the L3 Cs+ resonance, we provide spatial counterion distributions (Cs+) next to the negatively charged interface over a wide range of ionic concentrations. We argue that at low salt concentrations and for pure water the enhanced concentration of hydroniums H3O+ at the interface leads to proton transfer back to the phosphate group by a high contact potential, whereas high salt concentrations lower the contact potential resulting in proton release and increased surface charge density. The experimental ionic distributions are in excellent agreement with a renormalized-surface-charge Poisson-Boltzmann theory without fitting parameters or additional assumptions.

  1. Wavelet-Based Poisson Solver for Use in Particle-in-Cell Simulations

    CERN Document Server

    Terzic, Balsa; Mihalcea, Daniel; Pogorelov, Ilya V

    2005-01-01

    We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell simulations. One new aspect of our algorithm is its ability to treat the general (inhomogeneous) Dirichlet boundary conditions. The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability simultaneously to remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of application of the new code to the modelling of the Fermilab/NICADD and AES/JLab photoinjectors.

  2. Linear response in aging glassy systems, intermittency and the Poisson statistics of record fluctuations

    DEFF Research Database (Denmark)

    Sibani, Paolo

    2007-01-01

    We study the intermittent behavior of the energy decay and the linear magnetic response of a glassy system during isothermal aging after a deep thermal quench, using the Edwards-Anderson spin glass model as a paradigmatic example. The large intermittent changes in the two observables occur in a correlated fashion and through irreversible bursts, `quakes', which punctuate reversible and equilibrium-like fluctuations of zero average. The temporal distribution of the quakes is a Poisson distribution with an average growing logarithmically in time, indicating that the quakes are triggered by record… … to capture the time dependencies of the EA simulation results. Finally, we argue that whenever the changes of the linear response function and of its conjugate autocorrelation function follow from the same intermittent events, a fluctuation-dissipation-like relation can arise between the two in off…

  3. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power per kilogram of a certain radioactive isotope with the polynomial regression method, the paper first demonstrates the broad usage of the polynomial function and estimates its parameters by ordinary least squares. A significance test method for the polynomial regression function is then derived, exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)
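
    A minimal sketch of the procedure described above, assuming synthetic decay-heat data: the polynomial coefficients are estimated by ordinary least squares and the overall regression is tested with a standard F statistic. The data values, polynomial degree, and variable names are illustrative only.

```python
import numpy as np
from scipy import stats

# Hypothetical decay-heat data: time (days) vs. heating power (W/kg)
t = np.array([1, 2, 4, 8, 16, 32, 64, 128], dtype=float)
p = np.array([95.0, 71.0, 52.0, 38.0, 27.0, 19.0, 13.0, 9.0])

degree = 2
X = np.vander(t, degree + 1)                 # design matrix with columns [t^2, t, 1]
beta, _, _, _ = np.linalg.lstsq(X, p, rcond=None)
fitted = X @ beta

# F-test for the significance of the regression as a whole
n, k = len(p), degree                        # k regressors besides the intercept
ss_reg = np.sum((fitted - p.mean()) ** 2)
ss_res = np.sum((p - fitted) ** 2)
F = (ss_reg / k) / (ss_res / (n - k - 1))
p_value = stats.f.sf(F, k, n - k - 1)
print("coefficients:", beta)
print(f"F = {F:.2f}, p = {p_value:.4g}")
```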

  4. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.

  5. POISSON project. III. Investigating the evolution of the mass accretion rate

    Science.gov (United States)

    Antoniucci, S.; García López, R.; Nisini, B.; Caratti o Garatti, A.; Giannini, T.; Lorenzetti, D.

    2014-12-01

    Context. As part of the Protostellar Optical-Infrared Spectral Survey On NTT (POISSON) project, we present the results of the analysis of low-resolution near-IR spectroscopic data (0.9-2.4 μm) of two samples of young stellar objects in the Lupus (52 objects) and Serpens (17 objects) star-forming clouds, with masses in the range of 0.1 to 2.0 M⊙ and ages spanning from 10^5 to a few 10^7 yr. Aims: After determining the accretion parameters of the targets by analysing their H i near-IR emission features, we added the results from the Lupus and Serpens clouds to those from previous regions (investigated in POISSON with the same methodology) to obtain a final catalogue (143 objects) of mass accretion rate values (Ṁacc) derived in a homogeneous and consistent fashion. Our final goal is to analyse how Ṁacc correlates with the stellar mass (M∗) and how it evolves in time in the whole POISSON sample. Methods: We derived the accretion luminosity (Lacc) and Ṁacc for Lupus and Serpens objects from the Brγ (Paβ in a few cases) line by using relevant empirical relationships available in the literature that connect the H i line luminosity and Lacc. To minimise the biases that arise from adopting literature data that are based on different evolutionary models and also for self-consistency, we re-derived mass and age for each source of the POISSON samples using the same set of evolutionary tracks. Results: We observe a correlation Ṁacc ∝ M∗^2.2 between mass accretion rate and stellar mass, similarly to what has previously been observed in several star-forming regions. We find that the time variation of Ṁacc is roughly consistent with the expected evolution of the accretion rate in viscous disks, with an asymptotic decay that behaves as t^-1.6. However, Ṁacc values are characterised by a large scatter at similar ages and are on average higher than the predictions of viscous models. Conclusions: Although part of the scattering may be related to systematics due to the

  6. Incompressible SPH (ISPH) with fast Poisson solver on a GPU

    Science.gov (United States)

    Chow, Alex D.; Rogers, Benedict D.; Lind, Steven J.; Stansby, Peter K.

    2018-05-01

    This paper presents a fast incompressible SPH (ISPH) solver implemented to run entirely on a graphics processing unit (GPU) capable of simulating several millions of particles in three dimensions on a single GPU. The ISPH algorithm is implemented by converting the highly optimised open-source weakly-compressible SPH (WCSPH) code DualSPHysics to run ISPH on the GPU, combining it with the open-source linear algebra library ViennaCL for fast solutions of the pressure Poisson equation (PPE). Several challenges are addressed with this research: constructing a PPE matrix every timestep on the GPU for moving particles, optimising the limited GPU memory, and exploiting fast matrix solvers. The ISPH pressure projection algorithm is implemented as 4 separate stages, each with a particle sweep, including an algorithm for the population of the PPE matrix suitable for the GPU, and mixed precision storage methods. An accurate and robust ISPH boundary condition ideal for parallel processing is also established by adapting an existing WCSPH boundary condition for ISPH. A variety of validation cases are presented: an impulsively started plate, incompressible flow around a moving square in a box, and dambreaks (2-D and 3-D) which demonstrate the accuracy, flexibility, and speed of the methodology. Fragmentation of the free surface is shown to influence the performance of matrix preconditioners and therefore the PPE matrix solution time. The Jacobi preconditioner demonstrates robustness and reliability in the presence of fragmented flows. For a dambreak simulation, GPU speed-ups of up to 10-18 times and 1.1-4.5 times are demonstrated compared to single-threaded and 16-threaded CPU run times, respectively.

  7. Projections of Temperature-Attributable Premature Deaths in 209 U.S. Cities Using a Cluster-Based Poisson Approach

    Science.gov (United States)

    Schwartz, Joel D.; Lee, Mihye; Kinney, Patrick L.; Yang, Suijia; Mills, David; Sarofim, Marcus C.; Jones, Russell; Streeter, Richard; St. Juliana, Alexis; Peers, Jennifer

    2015-01-01

    Background: A warming climate will affect future temperature-attributable premature deaths. This analysis is the first to project these deaths at a near national scale for the United States using city and month-specific temperature-mortality relationships. Methods: We used Poisson regressions to model temperature-attributable premature mortality as a function of daily average temperature in 209 U.S. cities by month. We used climate data to group cities into clusters and applied an Empirical Bayes adjustment to improve model stability and calculate cluster-based month-specific temperature-mortality functions. Using data from two climate models, we calculated future daily average temperatures in each city under Representative Concentration Pathway 6.0. Holding population constant at 2010 levels, we combined the temperature data and cluster-based temperature-mortality functions to project city-specific temperature-attributable premature deaths for multiple future years which correspond to a single reporting year. Results within the reporting periods are then averaged to account for potential climate variability and reported as a change from a 1990 baseline in the future reporting years of 2030, 2050 and 2100. Results: We found temperature-mortality relationships that vary by location and time of year. In general, the largest mortality response during hotter months (April - September) was in July in cities with cooler average conditions. The largest mortality response during colder months (October-March) was at the beginning (October) and end (March) of the period. Using data from two global climate models, we projected a net increase in premature deaths, aggregated across all 209 cities, in all future periods compared to 1990. However, the magnitude and sign of the change varied by cluster and city. Conclusions: We found increasing future premature deaths across the 209 modeled U.S. cities using two climate model projections, based on constant temperature
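
    A minimal sketch of the core modelling step, a Poisson regression of daily death counts on daily mean temperature, using statsmodels; the city clustering, Empirical Bayes adjustment, and climate projections of the study are not reproduced, and the data below are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Synthetic daily series for one city and one month: temperature (C) and death counts
temp = rng.normal(28, 4, size=300)
deaths = rng.poisson(np.exp(1.5 + 0.02 * (temp - 28)))

df = pd.DataFrame({"deaths": deaths, "temp": temp})
X = sm.add_constant(df[["temp"]])
model = sm.GLM(df["deaths"], X, family=sm.families.Poisson()).fit()
print(model.summary())
# exp(coefficient) is the mortality rate ratio per 1 degree C increase in daily mean temperature
print("rate ratio per degree:", np.exp(model.params["temp"]))
```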

  8. Leptospirosis disease mapping with standardized morbidity ratio and Poisson-Gamma model: An analysis of Leptospirosis disease in Kelantan, Malaysia

    Science.gov (United States)

    Che Awang, Aznida; Azah Samat, Nor

    2017-09-01

    Leptospirosis is a disease caused by infection with pathogenic species of the genus Leptospira. Humans can be infected with leptospirosis through direct or indirect exposure to the urine of infected animals. The excretion of urine from animal hosts carrying pathogenic Leptospira contaminates soil and water. People can therefore become infected when they are exposed to contaminated soil or water through cuts or open wounds on the skin. The bacteria can also enter the human body through mucous membranes such as the nose, eyes and mouth, for example by splashing contaminated water or urine into the eyes or by swallowing contaminated water or food. Currently, there is no vaccine available for the prevention or treatment of leptospirosis, but the disease can be treated if it is diagnosed early, avoiding complications. Disease risk mapping is important for disease control and prevention, and a good choice of statistical model will produce a good disease risk map. Therefore, the aim of this study is to estimate the relative risk of leptospirosis based initially on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and then on the Poisson-gamma model. This paper begins by providing a review of the SMR method and the Poisson-gamma model, which we then apply to leptospirosis data from Kelantan, Malaysia. Both results are displayed and compared using graphs, tables and maps. The results show that the second method, the Poisson-gamma model, produces better relative risk estimates than the SMR method. This is because the Poisson-gamma model can overcome the drawback of the SMR, where the relative risk becomes zero when there is no observed leptospirosis case in certain regions. However, the Poisson-gamma model also has its problems: covariate adjustment for this model is difficult, and there is no possibility of allowing for spatial correlation between risks in neighbouring areas. The problems of this model have
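
    The two estimators compared in the abstract can be illustrated in a few lines: the SMR is the ratio of observed to expected counts, and the Poisson-gamma model shrinks it towards a prior mean, so regions with zero observed cases no longer get a relative risk of exactly zero. The counts and prior hyperparameters below are hypothetical.

```python
import numpy as np

# Hypothetical district-level data: observed cases and expected cases
observed = np.array([0, 3, 7, 12])
expected = np.array([1.8, 2.5, 6.1, 9.4])

smr = observed / expected          # SMR is zero wherever no case was observed

# Poisson-gamma (empirical Bayes) smoothing: with a Gamma(a, b) prior on the
# relative risk, the posterior mean is (observed + a) / (expected + b)
a, b = 2.0, 2.0                    # hypothetical prior hyperparameters
posterior_risk = (observed + a) / (expected + b)

for o, e, s, r in zip(observed, expected, smr, posterior_risk):
    print(f"obs={o:2d} exp={e:4.1f}  SMR={s:4.2f}  Poisson-gamma risk={r:4.2f}")
```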

  9. Soft network materials with isotropic negative Poisson's ratios over large strains.

    Science.gov (United States)

    Liu, Jianxing; Zhang, Yihui

    2018-01-31

    Auxetic materials with negative Poisson's ratios have important applications across a broad range of engineering areas, such as biomedical devices, aerospace engineering and automotive engineering. A variety of design strategies have been developed to achieve artificial auxetic materials with controllable responses in the Poisson's ratio. The development of designs that can offer isotropic negative Poisson's ratios over large strains can open up new opportunities in emerging biomedical applications, which, however, remains a challenge. Here, we introduce deterministic routes to soft architected materials that can be tailored precisely to yield the values of Poisson's ratio in the range from -1 to 1, in an isotropic manner, with a tunable strain range from 0% to ∼90%. The designs rely on a network construction in a periodic lattice topology, which incorporates zigzag microstructures as building blocks to connect lattice nodes. Combined experimental and theoretical studies on broad classes of network topologies illustrate the wide-ranging utility of these concepts. Quantitative mechanics modeling under both infinitesimal and finite deformations allows the development of a rigorous design algorithm that determines the necessary network geometries to yield target Poisson ratios over desired strain ranges. Demonstrative examples in artificial skin with both the negative Poisson's ratio and the nonlinear stress-strain curve precisely matching those of the cat's skin and in unusual cylindrical structures with engineered Poisson effect and shape memory effect suggest potential applications of these network materials.

  10. Identification of temporal patterns in the seismicity of Sumatra using Poisson Hidden Markov models

    Directory of Open Access Journals (Sweden)

    Katerina Orfanogiannaki

    2014-05-01

    Full Text Available On 26 December 2004 and 28 March 2005 two large earthquakes occurred between the Indo-Australian and the southeastern Eurasian plates with moment magnitudes Mw=9.1 and Mw=8.6, respectively. Complete data (mb≥4.2) of the post-1993 time interval have been used to apply Poisson Hidden Markov models (PHMMs) for identifying temporal patterns in the time series of the two earthquake sequences. Each time series consists of earthquake counts, in given and constant time units, in the regions determined by the aftershock zones of the two mainshocks. In PHMMs each count is generated by one of m different Poisson processes that are called states. The series of states is unobserved and is in fact a Markov chain. The model incorporates a varying seismicity rate: it assigns a different rate to each state and it detects the changes in the rate over time. In PHMMs, unobserved factors related to the local properties of the region are considered to affect the earthquake occurrence rate. Estimation and interpretation of the unobserved sequence of states that underlie the data contribute to better understanding of the geophysical processes that take place in the region. We applied PHMMs to the time series of the two mainshocks and we estimated the unobserved sequences of states that underlie the data. The results obtained showed that the region of the 26 December 2004 earthquake was in a state of low seismicity during almost the entire observation period. On the contrary, in the region of the 28 March 2005 earthquake the seismic activity is attributed to triggered seismicity, due to stress transfer from the region of the 2004 mainshock.
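
    To make the model structure concrete, the sketch below simulates monthly earthquake counts from a two-state Poisson hidden Markov model: an unobserved Markov chain switches between a quiet and an active state, and each month's count is drawn from the Poisson rate of the current state. The rates and transition probabilities are hypothetical, not those estimated for Sumatra.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hidden states: quiet (low rate) and active (high rate), driven by a Markov chain
rates = np.array([2.0, 9.0])                       # hypothetical Poisson rates per month
transition = np.array([[0.95, 0.05],
                       [0.20, 0.80]])              # hypothetical transition matrix

n_months = 120
states = np.empty(n_months, dtype=int)
states[0] = 0
for t in range(1, n_months):
    states[t] = rng.choice(2, p=transition[states[t - 1]])

counts = rng.poisson(rates[states])                # one count per month, rate set by the state
print("state sequence :", states[:20])
print("monthly counts :", counts[:20])
```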

  11. Evaluation of ion binding to DNA duplexes using a size-modified Poisson-Boltzmann theory.

    Science.gov (United States)

    Chu, Vincent B; Bai, Yu; Lipfert, Jan; Herschlag, Daniel; Doniach, Sebastian

    2007-11-01

    Poisson-Boltzmann (PB) theory is among the most widely applied electrostatic theories in biological and chemical science. Despite its reasonable success in explaining a wide variety of phenomena, it fails to incorporate two basic physical effects, ion size and ion-ion correlations, into its theoretical treatment. Recent experimental work has shown significant deviations from PB theory in competitive monovalent and divalent ion binding to a DNA duplex. The experimental data for monovalent binding are consistent with a hypothesis that attributes these deviations to counterion size. To model the observed differences, we have generalized an existing size-modified Poisson-Boltzmann (SMPB) theory and developed a new numerical implementation that solves the generalized theory around complex, atomistic representations of biological molecules. The results of our analysis show that good agreement to data at monovalent ion concentrations up to approximately 150 mM can be attained by adjusting the ion-size parameters in the new size-modified theory. SMPB calculations employing calibrated ion-size parameters predict experimental observations for other nucleic acid structures and salt conditions, demonstrating that the theory is predictive. We are, however, unable to model the observed deviations in the divalent competition data with a theory that only accounts for size but neglects ion-ion correlations, highlighting the need for theoretical descriptions that further incorporate ion-ion correlations. The accompanying numerical solver has been released publicly, providing the general scientific community the ability to compute SMPB solutions around a variety of different biological structures with only modest computational resources.

  12. Random regression models

    African Journals Online (AJOL)

    zlukovi

    The eigenvalues of covariance functions showed that between 10 and 15% of genetic variability was explained by the individual genetic curve of sows in the DS2. This proportion was mainly covered by linear and quadratic coefficients. Results suggest that RRM could be used for genetic analysis of litter size.

  13. Combining Alphas via Bounded Regression

    Directory of Open Access Journals (Sweden)

    Zura Kakushadze

    2015-11-01

    Full Text Available We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications there is typically insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or a skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on the alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.
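
    A minimal sketch of a bounded regression using SciPy's bounded least-squares solver: capping each alpha weight forces a more diversified allocation than unconstrained regression would give. The return matrix, bounds, and true weights are hypothetical, and the SCM principal-component and weighting steps of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)
n_days, n_alphas = 60, 5
A = rng.normal(size=(n_days, n_alphas))        # hypothetical alpha-stream returns
y = A @ np.array([0.4, 0.3, 0.1, 0.15, 0.05]) + 0.01 * rng.normal(size=n_days)

# Ordinary regression weights can be concentrated or negative;
# bounding each weight to [0, 0.35] forces diversification.
res = lsq_linear(A, y, bounds=(0.0, 0.35))
print("bounded weights:", np.round(res.x, 3))
```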

  14. Linear regression in astronomy. I

    Science.gov (United States)

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
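
    For the symmetric-treatment case discussed above, the sketch below computes the OLS(Y|X) and OLS(X|Y) slopes and their bisector on synthetic data; the bisector-slope expression is written from memory of Isobe et al. (1990) and should be checked against the paper before use.

```python
import numpy as np

def ols_bisector(x, y):
    """Slopes of OLS(Y|X), OLS(X|Y) and their bisector (notation as I recall from Isobe et al. 1990)."""
    xm, ym = x - x.mean(), y - y.mean()
    sxx, syy, sxy = np.sum(xm * xm), np.sum(ym * ym), np.sum(xm * ym)
    b1 = sxy / sxx                      # OLS regression of Y on X
    b2 = syy / sxy                      # OLS regression of X on Y, expressed as dY/dX
    b3 = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1 ** 2) * (1.0 + b2 ** 2))) / (b1 + b2)
    return b1, b2, b3

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 1.5 * x + rng.normal(scale=0.8, size=200)
print(ols_bisector(x, y))
```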

  15. Review and Recommendations for Zero-inflated Count Regression Modeling of Dental Caries Indices in Epidemiological Studies

    Science.gov (United States)

    Stamm, John W.; Long, D. Leann; Kincade, Megan E.

    2012-01-01

    Over the past five to ten years, zero-inflated count regression models have been increasingly applied to the analysis of dental caries indices (e.g., DMFT, dfms, etc). The main reason for that is linked to the broad decline in children’s caries experience, such that dmf and DMF indices more frequently generate low or even zero counts. This article specifically reviews the application of zero-inflated Poisson and zero-inflated negative binomial regression models to dental caries, with emphasis on the description of the models and the interpretation of fitted model results given the study goals. The review finds that interpretations provided in the published caries research are often imprecise or inadvertently misleading, particularly with respect to failing to discriminate between inference for the class of susceptible persons defined by such models and inference for the sampled population in terms of overall exposure effects. Recommendations are provided to enhance the use as well as the interpretation and reporting of results of count regression models when applied to epidemiological studies of dental caries. PMID:22710271
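
    As I recall the statsmodels API, a zero-inflated Poisson regression of the kind reviewed above can be fitted with sm.ZeroInflatedPoisson; the sketch below uses synthetic DMFT-like counts with a hypothetical exposure covariate and a constant-only inflation part.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
sugar = rng.normal(size=n)                              # hypothetical exposure covariate
susceptible = rng.random(n) < 0.6                       # the rest contribute structural zeros
counts = np.where(susceptible, rng.poisson(np.exp(0.5 + 0.4 * sugar)), 0)

X = sm.add_constant(sugar)
zip_model = sm.ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1)), inflation='logit')
fit = zip_model.fit(disp=False)
print(fit.summary())
# Count-model coefficients describe the susceptible class; the inflation part
# describes the probability of a structural zero - the distinction the review stresses.
```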

  16. Control Multivariante Estadístico de Variables Discretas tipo Poisson

    OpenAIRE

    GARCIA BUSTOS, SANDRA LORENA

    2016-01-01

    In some cases, when the number of defects in a production process has to be controlled, the Poisson distribution is used to model the frequency of these defects and to develop a control chart. This work analyses the control of p > 1 Poisson quality characteristics. When such control is needed, there are two main approaches: 1 - one chart for each Poisson variable (the multiple scheme); 2 - a single chart for all the variables (the syst...

  17. Hamiltonian field description of the one-dimensional Poisson-Vlasov equations

    International Nuclear Information System (INIS)

    Morrison, P.J.

    1981-07-01

    The one-dimensional Poisson-Vlasov equations are cast into Hamiltonian form. A Poisson Bracket in terms of the phase space density, as sole dynamical variable, is presented. This Poisson bracket is not of the usual form, but possesses the commutator properties of antisymmetry, bilinearity, and nonassociativity by virtue of the Jacobi requirement. Clebsch potentials are seen to yield a conventional (canonical) formulation. This formulation is discretized by expansion in terms of an arbitrary complete set of basis functions. In particular, a wave field representation is obtained

  18. A regularization method for solving the Poisson equation for mixed unbounded-periodic domains

    DEFF Research Database (Denmark)

    Spietz, Henrik Juul; Mølholm Hejlesen, Mads; Walther, Jens Honoré

    2018-01-01

    the regularized unbounded-periodic Green's functions can be implemented in an FFT-based Poisson solver to obtain a convergence rate corresponding to the regularization order of the Green's function. The high order is achieved without any additional computational cost from the conventional FFT-based Poisson solver...... and enables the calculation of the derivative of the solution to the same high order by direct spectral differentiation. We illustrate an application of the FFT-based Poisson solver by using it with a vortex particle mesh method for the approximation of incompressible flow for a problem with a single periodic...

  19. Numerical methods for realizing nonstationary Poisson processes with piecewise-constant instantaneous-rate functions

    DEFF Research Database (Denmark)

    Harrod, Steven; Kelton, W. David

    2006-01-01

    Nonstationary Poisson processes are appropriate in many applications, including disease studies, transportation, finance, and social policy. The authors review the risks of ignoring nonstationarity in Poisson processes and demonstrate three algorithms for generation of Poisson processes with piecewise-constant instantaneous rate functions, a capability that has been implemented in commercial simulation software. They test these algorithms in C programs and make comparisons of accuracy, speed, and variability across disparate rate functions and microprocessor architectures. Choice of optimal algorithm could not be predicted without knowledge of microprocessor architecture.
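
    One standard way (not necessarily one of the paper's three algorithms) to generate such a process is thinning: simulate a homogeneous Poisson process at the maximum rate and accept each event with probability λ(t)/λ_max. The breakpoints and rates below are hypothetical.

```python
import numpy as np

def thinning_piecewise(rate_breaks, rates, t_end, rng):
    """Generate event times on [0, t_end) for a piecewise-constant rate function.

    rate_breaks: left endpoints of the rate pieces (rate_breaks[0] == 0.0)
    rates:       rate value on each piece
    """
    lam_max = max(rates)
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)          # candidate from a homogeneous process
        if t >= t_end:
            return np.array(events)
        lam_t = rates[np.searchsorted(rate_breaks, t, side='right') - 1]
        if rng.random() < lam_t / lam_max:           # thinning: accept with prob lambda(t)/lam_max
            events.append(t)

rng = np.random.default_rng(5)
times = thinning_piecewise([0.0, 4.0, 8.0], [2.0, 10.0, 1.0], 12.0, rng)
print(len(times), "events;", np.round(times[:10], 2))
```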

  20. The Effect of the Poisson Approximation on the Wartime Assessment and Requirements System Assessment Results.

    Science.gov (United States)

    1983-03-01


  1. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  2. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.
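
    The static comparison model mentioned above can be reproduced, in spirit, with statsmodels' quantile regression; the sketch below fits two quantile lines to synthetic wind-power-like data. The time-adaptive simplex updating itself is not shown, and the variable names and error structure are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 400
forecast = rng.uniform(0, 1, n)                                  # hypothetical wind power forecast
actual = forecast + rng.normal(0, 0.05 + 0.15 * forecast, n)     # heteroscedastic errors

df = pd.DataFrame({"actual": actual, "forecast": forecast})
q75 = smf.quantreg("actual ~ forecast", df).fit(q=0.75)
q25 = smf.quantreg("actual ~ forecast", df).fit(q=0.25)
print(q75.params, q25.params, sep="\n")   # the two quantile lines bound a 50% interval
```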

  3. Bias-corrected quantile regression estimation of censored regression models

    NARCIS (Netherlands)

    Cizek, Pavel; Sadikoglu, Serhan

    2018-01-01

    In this paper, an extension of the indirect inference methodology to semiparametric estimation is explored in the context of censored regression. Motivated by weak small-sample performance of the censored regression quantile estimator proposed by Powell (J Econom 32:143–155, 1986a), two- and

  4. A Hands-on Activity for Teaching the Poisson Distribution Using the Stock Market

    Science.gov (United States)

    Dunlap, Mickey; Studstill, Sharyn

    2014-01-01

    The number of increases a particular stock makes over a fixed period follows a Poisson distribution. This article discusses using this easily-found data as an opportunity to let students become involved in the data collection and analysis process.

  5. Optimized thick-wall cylinders by virtue of Poisson's ratio selection

    International Nuclear Information System (INIS)

    Whitty, J.P.M.; Henderson, B.; Francis, J.; Lloyd, N.

    2011-01-01

    The principal stress distributions in thick-wall cylinders due to variation in the Poisson's ratio are predicted using analytical and finite element methods. Analyses of appropriate brittle and ductile failure criteria show that, under the isochoric pressure conditions investigated, auxetic materials (i.e. those possessing a negative Poisson's ratio) act as stress concentrators; hence they are predicted to fail before their conventional (i.e. positive Poisson's ratio) material counterparts. The key finding of the work presented shows that for constrained thick-wall cylinders the maximum tensile principal stress can vanish at a particular Poisson's ratio and aspect ratio. This phenomenon is exploited in order to present an optimized design criterion for thick-wall cylinders. Moreover, via the use of a cogent finite element model, this criterion is also shown to be applicable for the design of micro-porous materials.

  6. On bounds in Poisson approximation for distributions of independent negative-binomial distributed random variables.

    Science.gov (United States)

    Hung, Tran Loc; Giang, Le Truong

    2016-01-01

    Using the Stein-Chen method some upper bounds in Poisson approximation for distributions of row-wise triangular arrays of independent negative-binomial distributed random variables are established in this note.

  7. Equal-Time and Equal-Space Poisson Brackets of the N -Component Coupled NLS Equation

    International Nuclear Information System (INIS)

    Zhou Ru-Guang; Li Pei-Yao; Gao Yuan

    2017-01-01

    Two Poisson brackets for the N-component coupled nonlinear Schrödinger (NLS) equation are derived by using the variational principle. The first one is called the equal-time Poisson bracket, which does not depend on time but only on the space variable. Actually, it is just the usual one describing the time evolution of the system in the traditional theory of integrable Hamiltonian systems. The second one is equal-space and new. It is shown that the spatial part of the Lax pair with respect to the equal-time Poisson bracket and the temporal part of the Lax pair with respect to the equal-space Poisson bracket share the same r-matrix formulation. These properties are similar to those of the NLS equation. (paper)

  8. Ship-Track Models Based on Poisson-Distributed Port-Departure Times

    National Research Council Canada - National Science Library

    Heitmeyer, Richard

    2006-01-01

    ... of those ships, and their nominal speeds. The probability law assumes that the ship departure times are Poisson-distributed with a time-varying departure rate and that the ship speeds and ship routes are statistically independent...

  9. Remarks on 'Poisson ratio beyond the limits of the elasticity theory'

    International Nuclear Information System (INIS)

    Wojciechowski, K.W.

    2002-12-01

    The non-chiral, elastically isotropic model exhibits Poisson ratios in the range -1 ≤ σ ≤ 1 without any molecular rotation. The centres of the disc-atoms are placed at the vertices of a perfect triangle with side length equal to σ. The positive sign of the Lame constant λ is not necessary for the stability of an isotropic system in any dimensionality. As the upper limit for the Poisson ratio in 2D isotropic systems is 1, crystalline or polycrystalline 2D systems can be obtained with a Poisson ratio exceeding 1/2. Both the traditional theory of elasticity and the Cosserat one exclude Poisson ratios exceeding 1/2 in 3D isotropic systems. Neither anisotropy nor rotation is necessary to obtain extreme values of the Poisson ratio (author)

  10. Developing an economical and reliable test for measuring the resilient modulus and Poisson's ratio of subgrade.

    Science.gov (United States)

    2010-11-01

    The resilient modulus and Poisson's ratio of base and sublayers in highway use are important parameters in the design and quality control process. The currently used techniques include the CBR (California Bearing Ratio) test, resilient modulus test,...

  11. Solution of the Kolmogorov-Nikol'skii problem for the Poisson integrals of continuous functions

    International Nuclear Information System (INIS)

    Stepanets, A I

    2001-01-01

    Asymptotic equalities are obtained for upper bounds of the deviations of Fourier sums in the classes of convolutions of Poisson kernels and continuous functions with moduli of continuity not exceeding fixed majorants

  12. Appearance of eigen modes for the linearized Vlasov-Poisson equation

    International Nuclear Information System (INIS)

    Degond, P.

    1983-01-01

    In order to determine the asymptotic behaviour, as time goes to infinity, of the solution of the linearized Vlasov-Poisson equation, we use eigen modes associated with continuous linear functionals on a Banach space of analytic functions.

  13. Quantile regression theory and applications

    CERN Document Server

    Davino, Cristina; Vistocco, Domenico

    2013-01-01

    A guide to the implementation and interpretation of Quantile Regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues on the validity of the model and diagnostic tools. Each methodological aspect is explored and

  14. Variable and subset selection in PLS regression

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2001-01-01

    The purpose of this paper is to present some useful methods for introductory analysis of variables and subsets in relation to PLS regression. We present here methods that are efficient in finding the appropriate variables or subset to use in the PLS regression. The general conclusion...... is that variable selection is important for successful analysis of chemometric data. An important aspect of the results presented is that lack of variable selection can spoil the PLS regression, and that cross-validation measures using a test set can show larger variation, when we use different subsets of X, than...

  15. Semiclassical limit and well-posedness of nonlinear Schrodinger-Poisson systems

    Directory of Open Access Journals (Sweden)

    Hailiang Li

    2003-09-01

    Full Text Available This paper concerns the well-posedness and semiclassical limit of nonlinear Schrodinger-Poisson systems. We show the local well-posedness and the existence of the semiclassical limit of the two models for initial data with Sobolev regularity, before shocks appear in the limit system. We establish the existence of a global solution and show the time-asymptotic behavior of classical solutions of the Schrodinger-Poisson system for a fixed re-scaled Planck constant.

  16. Stability analysis for neutral stochastic differential equation of second order driven by Poisson jumps

    Science.gov (United States)

    Chadha, Alka; Bora, Swaroop Nandan

    2017-11-01

    This paper studies the existence, uniqueness, and exponential stability in mean square for the mild solution of neutral second order stochastic partial differential equations with infinite delay and Poisson jumps. By utilizing the Banach fixed point theorem, first the existence and uniqueness of the mild solution of neutral second order stochastic differential equations is established. Then, the mean square exponential stability for the mild solution of the stochastic system with Poisson jumps is obtained with the help of an established integral inequality.

  17. A spatially-explicit count data regression for modeling the density of forest cockchafer (Melolontha hippocastani) larvae in the Hessian Ried (Germany)

    Directory of Open Access Journals (Sweden)

    Matthias Schmidt

    2014-10-01

    Full Text Available Background In this paper, a regression model for predicting the spatial distribution of forest cockchafer larvae in the Hessian Ried region (Germany) is presented. The forest cockchafer, a native biotic pest, is a major cause of damage in forests in this region, particularly during the regeneration phase. The model developed in this study is based on a systematic sample inventory of forest cockchafer larvae by excavation across the Hessian Ried. These forest cockchafer larvae data were characterized by excess zeros and overdispersion. Methods Using specific generalized additive regression models, different discrete distributions, including the Poisson, negative binomial and zero-inflated Poisson distributions, were compared. The methodology employed allowed the simultaneous estimation of non-linear model effects of causal covariates and, to account for spatial autocorrelation, of a 2-dimensional spatial trend function. In the validation of the models, both the Akaike information criterion (AIC) and more detailed graphical procedures based on randomized quantile residuals were used. Results The negative binomial distribution was superior to the Poisson and the zero-inflated Poisson distributions, providing a near perfect fit to the data, which was proven in an extensive validation process. The causal predictors found to affect the density of larvae significantly were distance to water table and percentage of pure clay layer in the soil to a depth of 1 m. Model predictions showed that larva density increased with an increase in distance to the water table up to almost 4 m, after which it remained constant, and with a reduction in the percentage of pure clay layer. However, this latter correlation was weak and requires further investigation. The 2-dimensional trend function indicated a strong spatial effect, and thus explained by far the highest proportion of variation in larva density. Conclusions As such the model can be used to support forest
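
    A minimal sketch of the distributional comparison described above, assuming synthetic overdispersed, zero-heavy counts and a single covariate: Poisson, negative binomial, and zero-inflated Poisson fits are ranked by AIC. The smooth covariate effects and the 2-dimensional spatial trend of the actual generalized additive models are not reproduced, and the covariate name is illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 600
depth = rng.uniform(0, 4, n)                         # hypothetical distance-to-water-table covariate
mu = np.exp(-0.5 + 0.6 * depth)
# Overdispersed, zero-heavy counts: gamma-mixed Poisson with extra structural zeros
counts = rng.poisson(mu * rng.gamma(1.5, 1 / 1.5, n)) * (rng.random(n) < 0.8)

X = sm.add_constant(depth)
poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.NegativeBinomial(counts, X).fit(disp=False)
zip_fit = sm.ZeroInflatedPoisson(counts, X, inflation='logit').fit(disp=False)

for name, fit in [("Poisson", poisson_fit), ("NegBin", negbin_fit), ("ZIP", zip_fit)]:
    print(f"{name:8s} AIC = {fit.aic:8.1f}")   # lower AIC indicates the better-fitting distribution
```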

  18. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    Science.gov (United States)

    HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme

  19. Lumbar herniated disc: spontaneous regression.

    Science.gov (United States)

    Altun, Idiris; Yüksel, Kasım Zafer

    2017-01-01

    Low back pain is a frequent condition that results in substantial disability and causes admission of patients to neurosurgery clinics. To evaluate and present the therapeutic outcomes in lumbar disc hernia (LDH) patients treated by means of a conservative approach, consisting of bed rest and medical therapy. This retrospective cohort was carried out in the neurosurgery departments of hospitals in Kahramanmaraş city, and 23 patients diagnosed with LDH at the levels of L3-L4, L4-L5 or L5-S1 were enrolled. The average age was 38.4 ± 8.0 years and the chief complaint was low back pain and sciatica radiating to one or both lower extremities. Conservative treatment was administered. Neurological examination findings, durations of treatment and intervals until symptomatic recovery were recorded. Lasègue tests and neurosensory examination revealed that mild neurological deficits existed in 16 of our patients. Previously, 5 patients had received physiotherapy and 7 patients had been on medical treatment. The numbers of patients with LDH at the levels of L3-L4, L4-L5, and L5-S1 were 1, 13, and 9, respectively. All patients reported that they had benefited from medical treatment and bed rest, and radiologic improvement was observed simultaneously on MRI scans. The average duration until symptomatic recovery and/or regression of LDH symptoms was 13.6 ± 5.4 months (range: 5-22). It should be kept in mind that lumbar disc hernias could regress with medical treatment and rest without surgery, and there should be an awareness that these patients could recover radiologically. This condition must be taken into account during decision making for surgical intervention in LDH patients devoid of indications for emergent surgery.

  20. Flexible regression models with cubic splines.

    Science.gov (United States)

    Durrleman, S; Simon, R

    1989-05-01

    We describe the use of cubic splines in regression models to represent the relationship between the response variable and a vector of covariates. This simple method can help prevent the problems that result from inappropriate linearity assumptions. We compare restricted cubic spline regression to non-parametric procedures for characterizing the relationship between age and survival in the Stanford Heart Transplant data. We also provide an illustrative example in cancer therapeutics.
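
    A minimal sketch of a restricted cubic spline regression, assuming patsy's natural cubic spline basis cr() inside a statsmodels formula; the age-outcome data are synthetic and the degrees of freedom (knot count) are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 300
age = rng.uniform(10, 65, n)
# Hypothetical non-linear relationship between age and an outcome
outcome = 2.0 + 0.08 * age - 0.001 * (age - 40) ** 2 + rng.normal(0, 0.3, n)

df = pd.DataFrame({"age": age, "outcome": outcome})
# cr(age, df=4) expands age into a natural (restricted) cubic spline basis,
# avoiding the linearity assumption discussed in the abstract
fit = smf.ols("outcome ~ cr(age, df=4)", data=df).fit()
print(fit.summary())
```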