WorldWideScience

Sample records for generalized empirical likelihood

  1. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
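
    To make the basic construction concrete, here is a minimal sketch (not code from the book; the function names are illustrative) of the empirical likelihood ratio for a univariate mean in its standard Lagrange-multiplier form, calibrated against the chi-square(1) limit of Wilks-type EL theory.

```python
# Minimal sketch of empirical likelihood (EL) for a univariate mean,
# assuming i.i.d. data. Illustration only, not from Owen's book.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """Return -2 log R(mu), the empirical log-likelihood ratio for the mean."""
    z = np.asarray(x, dtype=float) - mu
    if z.min() >= 0 or z.max() <= 0:          # mu outside the convex hull: R(mu) = 0
        return np.inf
    # EL weights are w_i = 1 / (n (1 + lam z_i)); the multiplier lam solves
    # sum_i z_i / (1 + lam z_i) = 0 while keeping every 1 + lam z_i > 0.
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.exponential(size=50)                  # skewed data: EL region is asymmetric
stat = el_log_ratio(x, mu=1.0)
print(stat, stat < chi2.ppf(0.95, df=1))      # accept mu = 1.0 at the 5% level?
```

    The data-determined shape of the region {mu : el_log_ratio(x, mu) <= chi2.ppf(0.95, 1)} is exactly the feature the blurb above highlights.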

  2. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.

  3. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  4. Generalized Empirical Likelihood Inference in Semiparametric Regression Model for Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    Gao Rong LI; Ping TIAN; Liu Gen XUE

    2008-01-01

    In this paper, we consider the semiparametric regression model for longitudinal data. Due to the correlation within groups, a generalized empirical log-likelihood ratio statistic for the unknown parameters in the model is suggested by introducing the working covariance matrix. It is proved that the proposed statistic is asymptotically standard chi-squared under some suitable conditions, and hence it can be used to construct the confidence regions of the parameters. A simulation study is conducted to compare the proposed method with the generalized least squares method in terms of coverage accuracy and average lengths of the confidence intervals.

  5. Empirical likelihood method in survival analysis

    CERN Document Server

    Zhou, Mai

    2015-01-01

    Add the Empirical Likelihood to Your Nonparametric Toolbox. Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method for right censored survival data. The author uses R for calculating empirical likelihood and includes many worked-out examples with the associated R code. The datasets and code are available for download on his website and CRAN. The book focuses on all the standard survival analysis topics treated with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empirical likelihood...

  6. Empirical likelihood estimation of discretely sampled processes of OU type

    Institute of Scientific and Technical Information of China (English)

    SUN ShuGuang; ZHANG XinSheng

    2009-01-01

    This paper presents an empirical likelihood estimation procedure for parameters of the discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal. Moreover, this estimator is shown to be asymptotically efficient under some mild conditions. When the background driving Lévy process is of type A or B, we show that the intensity parameter can be exactly recovered, and we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of the proposed estimators.

  7. EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS

    Institute of Scientific and Technical Information of China (English)

    Qin Yongsong; Jiang Bo; Li Yufang

    2005-01-01

    In this paper, the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.
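
    As a sketch of how the blockwise device works (assuming m-dependence with block length L > m; this reuses the el_log_ratio helper from the first record's example and is not the authors' code):

```python
# Blockwise EL sketch: average the residuals (or scores) in non-overlapping
# blocks of length L > m, then apply ordinary EL to the block means, which
# are approximately independent under m-dependence.
import numpy as np

def block_means(z, L):
    """Non-overlapping block means; a partial last block is dropped."""
    z = np.asarray(z, dtype=float)
    n = (len(z) // L) * L
    return z[:n].reshape(-1, L).mean(axis=1)

# usage: el_log_ratio(block_means(residuals, L=5), mu=0.0) ~ chi-square(1)
```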

  8. Empirical likelihood inference for diffusion processes with jumps

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we consider the empirical likelihood inference for the jump-diffusion model. We construct the confidence intervals based on the empirical likelihood for the infinitesimal moments in the jump-diffusion models. They are better than the confidence intervals which are based on the asymptotic normality of point estimates.

  9. Generalized Empirical Likelihood Inference for Single-index Models with Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    Yang Suigen; Xue Liugen

    2015-01-01

    Based on the generalized estimating equations (GEE) and quadratic inference functions (QIF) methods, a bias-corrected generalized empirical likelihood is proposed for statistical inference in single-index models with longitudinal data. The maximum empirical likelihood estimator of the unknown index parameter and a bias-corrected generalized empirical log-likelihood ratio statistic are obtained. It is proved that the maximum empirical likelihood estimator is asymptotically normal and that the proposed statistic is asymptotically chi-squared distributed under certain conditions; hence they can be applied to construct confidence regions for the index parameter and to perform related hypothesis tests.

  10. Empirical likelihood estimation of discretely sampled processes of OU type

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This paper presents an empirical likelihood estimation procedure for parameters of the discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal. Moreover, this estimator is shown to be asymptotically efficient under some mild conditions. When the background driving Lévy process is of type A or B, we show that the intensity parameter can be exactly recovered, and we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of the proposed estimators.

  11. Empirical Likelihood Ratio Confidence Interval for Positively Associated Series

    Institute of Scientific and Technical Information of China (English)

    Jun-jian Zhang

    2007-01-01

    Empirical likelihood is discussed by using the blockwise technique for strongly stationary, positively associated random variables. Our results show that the statistic is asymptotically chi-squared distributed and that the corresponding confidence interval can be constructed.

  12. Generalized Empirical Likelihood Inference for Partially Nonlinear Models with Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    Xiao Yanting; Sun Xiaoqing; Sun Jin

    2016-01-01

    In this paper, we study the construction of confidence regions for the unknown parameter in partially nonlinear models with longitudinal data. Using the empirical likelihood method, a generalized empirical log-likelihood ratio for the parameter in the nonlinear function is proposed and shown to be asymptotically chi-squared distributed. At the same time, the maximum empirical likelihood estimator of the parameter in the nonlinear function is obtained and its asymptotic normality is proved.

  13. Empirical Likelihood and Diagnostics for Generalized Nonlinear Regression with Missing Data

    Institute of Scientific and Technical Information of China (English)

    Niu Xiangyu; Feng Yu

    2016-01-01

    This paper studies statistical diagnostics for generalized nonlinear regression models with missing data. When the responses are missing at random, the empirical likelihood method is first used to estimate the parameters and to obtain asymptotic confidence intervals; random simulations show that the empirical likelihood intervals are superior to those obtained by conventional methods. Influence analysis of the model is then carried out, and diagnostic statistics such as the empirical likelihood distance, the empirical Cook distance, and standardized residuals are proposed. Finally, a real example verifies the effectiveness and feasibility of the statistical diagnostic methods.

  14. Empirical likelihood for balanced ranked-set sampled data

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Ranked-set sampling (RSS) often provides more efficient inference than simple random sampling (SRS). In this article, we propose a systematic nonparametric technique, RSS-EL, for hypothesis testing and interval estimation with balanced RSS data using empirical likelihood (EL). We detail the approach for interval estimation and hypothesis testing in one-sample and two-sample problems and general estimating equations. In all three cases, RSS is shown to provide more efficient inference than SRS of the same size. Moreover, the RSS-EL method does not require any easily violated assumptions needed by existing rank-based nonparametric methods for RSS data, such as perfect ranking, identical ranking schemes in two groups, and location shift between two population distributions. The merit of the RSS-EL method is also demonstrated through simulation studies.

  15. Empirical likelihood-based evaluations of Value at Risk models

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Value at Risk (VaR) is a basic and very useful tool in measuring market risks, and numerous VaR models have been proposed in the literature. It is therefore of great interest to evaluate the efficiency of these models and to select the most appropriate one. In this paper, we propose to use the empirical likelihood approach to evaluate these models. Simulation results and real-life examples show that the empirical likelihood method is more powerful and more robust than some of the asymptotic methods available in the literature.
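
    The summary does not spell out the evaluation criteria, but one simple EL-style backtest in this spirit, sketched below under stated assumptions, tests that the VaR violation indicators have mean alpha (reusing el_log_ratio from the first record's example):

```python
# Hedged sketch: under a correct level-alpha VaR model, the violation
# indicators I(loss_t > VaR_t) have mean alpha, which EL can test directly.
import numpy as np

def var_backtest_stat(losses, var_forecasts, alpha=0.05):
    hits = np.asarray(np.asarray(losses) > np.asarray(var_forecasts), dtype=float)
    return el_log_ratio(hits, mu=alpha)    # compare to chi-square(1)
```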

  16. A KULLBACK-LEIBLER EMPIRICAL LIKELIHOOD INFERENCE FOR CENSORED DATA

    Institute of Scientific and Technical Information of China (English)

    SHI Jian; Tai-Shing Lau

    2002-01-01

    In this paper, two kinds of Kullback-Leibler criteria with appropriate constraints are proposed to construct empirical likelihood confidence intervals for the mean of right censored data. It is shown that one of the criteria is equivalent to Adimari's (1997) procedure, and the other shares the same asymptotic behavior.

  17. A KULLBACK-LEIBLER EMPIRICAL LIKELIHOOD INFERENCE FOR CENSORED DATA

    Institute of Scientific and Technical Information of China (English)

    SHI Jian; Tai-Shing Lau

    2002-01-01

    In this paper, two kinds of Kullback-Leibler criteria with appropriate constraints are proposed to construct empirical likelihood confidence intervals for the mean of right censored data. It is shown that one of the criteria is equivalent to Adimari's (1997) procedure, and the other shares the same asymptotic behavior.

  18. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Empirical likelihood is a very popular method that has been widely used in fields such as artificial intelligence (AI) and data mining. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously when the model is defined by moment restrictions, some of which may be misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions, and at the same time its estimator is as efficient as the empirical likelihood estimator obtained using all correct moment conditions. Moreover, unlike GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. A real-life example is also presented to illustrate the new methodology.

  19. Empirical likelihood method for non-ignorable missing data problems.

    Science.gov (United States)

    Guan, Zhong; Qin, Jing

    2017-01-01

    The missing response problem is ubiquitous in survey sampling, medical, social science, and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, where the missingness of a response depends on its own value. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available beyond fully parametric model based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen (1988)'s empirical likelihood method we obtain the constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of a real AIDS trial dataset shows that the missingness of CD4 counts around two years is non-ignorable and that the sample mean based on observed data only is biased.

  20. A Non-standard Empirical Likelihood for Time Series

    DEFF Research Database (Denmark)

    Nordman, Daniel J.; Bunzel, Helle; Lahiri, Soumendra N.

    Standard blockwise empirical likelihood (BEL) for stationary, weakly dependent time series requires specifying a fixed block length as a tuning parameter for setting confidence regions. This aspect can be difficult and impacts coverage accuracy. As an alternative, this paper proposes a new version of BEL which involves non-standard asymptotics and requires a significantly different development compared to standard BEL. We establish the large-sample distribution of log-ratio statistics from the new BEL method for calibrating confidence regions for mean or smooth function parameters of time series. This limit law is not the usual chi-square...

  1. Estimation for Non-Gaussian Locally Stationary Processes with Empirical Likelihood Method

    Directory of Open Access Journals (Sweden)

    Hiroaki Ogata

    2012-01-01

    An application of the empirical likelihood method to non-Gaussian locally stationary processes is presented. Based on the central limit theorem for locally stationary processes, we give the asymptotic distributions of the maximum empirical likelihood estimator and the empirical likelihood ratio statistics, respectively. It is shown that the empirical likelihood method enables us to make inferences on various important indices in a time series analysis. Furthermore, we give a numerical study and investigate a finite sample property.

  2. Empirical likelihood ratio tests for multivariate regression models

    Institute of Scientific and Technical Information of China (English)

    WU Jianhong; ZHU Lixing

    2007-01-01

    This paper proposes some diagnostic tools for checking the adequacy of multivariate regression models, including classical regression and time series autoregression. In statistical inference, the empirical likelihood ratio method is well known to be a powerful tool for constructing tests and confidence regions. For model checking, however, the naive empirical likelihood (EL) based tests do not enjoy the Wilks phenomenon. Hence, we make use of bias correction to construct EL-based score tests and derive a nonparametric version of Wilks' theorem. Moreover, by combining the advantages of both the EL and the score test method, the EL-based score tests share many desirable features: they are self-scale invariant and can detect alternatives that converge to the null at rate $n^{-1/2}$, the possibly fastest rate for lack-of-fit testing; and they involve weight functions, which provide the flexibility to choose scores for improving power performance, especially under directional alternatives. Furthermore, when the alternatives are not directional, we construct asymptotically distribution-free maximin tests for a large class of possible alternatives. A simulation study is carried out and an application to a real dataset is analyzed.

  3. Empirical likelihood-based inference in a partially linear model for longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A partially linear model with longitudinal data is considered. Empirical likelihood inference for the regression coefficients and the baseline function is investigated; the empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. Using the empirical likelihood ratio functions, we also obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function and prove their asymptotic normality. Numerical results are presented to compare the performance of the empirical likelihood method and the normal approximation-based method, and a real example is analysed.

  4. Empirical likelihood-based inference in a partially linear model for longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A partially linear model with longitudinal data is considered. Empirical likelihood inference for the regression coefficients and the baseline function is investigated; the empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. Using the empirical likelihood ratio functions, we also obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function and prove their asymptotic normality. Numerical results are presented to compare the performance of the empirical likelihood method and the normal approximation-based method, and a real example is analysed.

  5. Using empirical likelihood to combine data: application to food risk assessment.

    Science.gov (United States)

    Crépet, Amélie; Harari-Kermadec, Hugo; Tressou, Jessica

    2009-03-01

    This article introduces an original methodology based on empirical likelihood, which aims at combining different food contamination and consumption surveys to provide risk managers with a risk measure, taking into account all the available information. This risk index is defined as the probability that exposure to a contaminant exceeds a safe dose. It is naturally expressed as a nonlinear functional of the different consumption and contamination distributions, more precisely as a generalized U-statistic. This nonlinearity and the huge size of the data sets make direct computation of the problem unfeasible. Using linearization techniques and incomplete versions of the U-statistic, a tractable "approximated" empirical likelihood program is solved yielding asymptotic confidence intervals for the risk index. An alternative "Euclidean likelihood program" is also considered, replacing the Kullback-Leibler distance involved in the empirical likelihood by the Euclidean distance. Both methodologies are tested on simulated data and applied to assess the risk due to the presence of methyl mercury in fish and other seafood.
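
    The Euclidean likelihood variant mentioned above is computationally attractive partly because, for a vector mean under i.i.d. sampling, it is known to reduce to a closed-form Hotelling-type quadratic form (a standard identity, stated here for orientation):

```latex
\ell_E(\mu) \;=\; n\,(\bar X - \mu)^\top \hat S_\mu^{-1} (\bar X - \mu),
\qquad
\hat S_\mu \;=\; \frac{1}{n}\sum_{i=1}^n (X_i - \mu)(X_i - \mu)^\top .
```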

  6. Empirical Likelihood-Based ANOVA for Trimmed Means

    Science.gov (United States)

    Velina, Mara; Valeinis, Janis; Greco, Luca; Luta, George

    2016-01-01

    In this paper, we introduce an alternative to Yuen’s test for the comparison of several population trimmed means. This nonparametric ANOVA type test is based on the empirical likelihood (EL) approach and extends the results for one population trimmed mean from Qin and Tsao (2002). The results of our simulation study indicate that for skewed distributions, with and without variance heterogeneity, Yuen’s test performs better than the new EL ANOVA test for trimmed means with respect to control over the probability of a type I error. This finding is in contrast with our simulation results for the comparison of means, where the EL ANOVA test for means performs better than Welch’s heteroscedastic F test. The analysis of a real data example illustrates the use of Yuen’s test and the new EL ANOVA test for trimmed means for different trimming levels. Based on the results of our study, we recommend the use of Yuen’s test for situations involving the comparison of population trimmed means between groups of interest. PMID:27690063

  7. Empirical Likelihood based Confidence Regions for first order parameters of a heavy tailed distribution

    CERN Document Server

    Worms, Julien

    2010-01-01

    Let $X_1, \ldots, X_n$ be some i.i.d. observations from a heavy tailed distribution $F$, i.e. such that the common distribution of the excesses over a high threshold $u_n$ can be approximated by a Generalized Pareto Distribution $G_{\gamma,\sigma_n}$ with $\gamma > 0$. This work is devoted to the problem of finding confidence regions for the couple $(\gamma,\sigma_n)$: combining the empirical likelihood methodology with estimation equations (close but not identical to the likelihood equations) introduced by J. Zhang (Australian and New Zealand J. Stat. 49(1), 2007), asymptotically valid confidence regions for $(\gamma,\sigma_n)$ are obtained and proved to perform better than Wald-type confidence regions (especially those derived from the asymptotic normality of the maximum likelihood estimators). By profiling out the scale parameter, confidence intervals for the tail index are also derived.

  8. Sieve likelihood ratio inference on general parameter space

    Institute of Scientific and Technical Information of China (English)

    SHEN Xiaotong; SHI Jian

    2005-01-01

    In this paper, a theory of sieve likelihood ratio inference on general parameter spaces (including infinite-dimensional ones) is studied. Under fairly general regularity conditions, the sieve log-likelihood ratio statistic is proved to be asymptotically $\chi^2$ distributed, which can be viewed as a generalization of the well-known Wilks' theorem. As an example, a semiparametric partial linear model is investigated.

  9. Empirical Likelihood Method for Quantiles with Response Data Missing at Random

    Institute of Scientific and Technical Information of China (English)

    Xia-yan LI; Jun-qing YUAN

    2012-01-01

    Empirical likelihood is a nonparametric method for constructing confidence intervals and tests, notable in that the shape of the confidence region is determined by the sample data. This paper presents a new version of the empirical likelihood method for quantiles under kernel regression imputation to accommodate missing response data. It eliminates the need to solve nonlinear equations, and it is easy to apply. We also consider exponential empirical likelihood as an alternative method. Numerical results are presented to compare our method with others.
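
    For orientation, in the complete-data case (no imputation) EL for the p-th quantile reduces to EL for a Bernoulli-type mean, since the quantile theta solves E[I(X <= theta) - p] = 0; a sketch reusing el_log_ratio from the first record's example:

```python
# Complete-data sketch only; the paper's kernel regression imputation step
# for missing responses is not reproduced here.
import numpy as np

def el_quantile_stat(x, theta, p=0.5):
    g = np.asarray(np.asarray(x) <= theta, dtype=float)
    return el_log_ratio(g, mu=p)           # compare to chi-square(1)
```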

  10. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation, and compare it with a set of alternative approaches for two datasets from the literature, the Rhizoctonia root rot and the Rongelap data. Advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
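
    The Laplace approximation underlying the algorithm replaces the q-dimensional random-effects integral in the marginal likelihood with a second-order expansion around the mode of the integrand:

```latex
\log \int_{\mathbb{R}^q} e^{h(b)}\,db
\;\approx\; h(\hat b) + \frac{q}{2}\log(2\pi)
- \frac{1}{2}\log\bigl|-h''(\hat b)\bigr|,
\qquad \hat b = \arg\max_b h(b).
```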

  11. Conditional likelihood inference in generalized linear mixed models.

    OpenAIRE

    Sartori, Nicola; Severini, T.A.

    2002-01-01

    Consider a generalized linear model with a canonical link function, containing both fixed and random effects. In this paper, we consider inference about the fixed effects based on a conditional likelihood function. It is shown that this conditional likelihood function is valid for any distribution of the random effects and, hence, the resulting inferences about the fixed effects are insensitive to misspecification of the random effects distribution. Inferences based on the conditional likelihood...

  12. Empirical Likelihood for Mixed-effects Error-in-variables Model

    Institute of Scientific and Technical Information of China (English)

    Qiu-hua Chen; Ping-shou Zhong; Heng-jian Cui

    2009-01-01

    This paper mainly introduces the method of empirical likelihood and its applications to two different models. We discuss empirical likelihood inference on the fixed-effect parameters in mixed-effects models with errors in variables. We first consider a linear mixed-effects model with measurement errors in both fixed and random effects, and construct empirical likelihood confidence regions for the fixed-effects parameters and the mean parameters of the random effects. The limiting distribution of the empirical log-likelihood ratio at the true parameter is $\chi^2_{p+q}$, where $p$ and $q$ are the dimensions of the fixed and random effects, respectively. We then discuss empirical likelihood inference in a semi-linear error-in-variables mixed-effects model. Under certain conditions, it is shown that the empirical log-likelihood ratio at the true parameter also converges to $\chi^2_{p+q}$. Simulations illustrate that the proposed confidence region has a coverage probability closer to the nominal level than the normal approximation based confidence region.

  13. Penalized maximum likelihood estimation for generalized linear point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard

    2010-01-01

    A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces, we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper by extensions to multivariate and additive model specifications. The methods are implemented in the R-package ppstat.

  14. Empirical Likelihood Approach for Treatment Effect in Pretest-Posttest Trial

    Institute of Scientific and Technical Information of China (English)

    Qixiang HE

    2012-01-01

    The empirical likelihood approach is applied to the pretest-posttest trial based on constraints that we construct to summarize all the given information. The author obtains a log-empirical likelihood ratio test statistic that has a standard chi-squared limiting distribution. Thus, in making inferences, there is no need to estimate the variance explicitly, and inferential procedures are easier to implement. Simulation results show that the approach of this paper is more efficient than ANCOVA II due to the sufficient and appropriate use of information.

  15. Penalized maximum likelihood estimation for generalized linear point processes

    OpenAIRE

    2010-01-01

    A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces we...

  16. Inverse Probability Weighted Generalised Empirical Likelihood Estimators : Firm Size and R&D Revisited

    NARCIS (Netherlands)

    Inkmann, J.

    2005-01-01

    The inverse probability weighted Generalised Empirical Likelihood (IPW-GEL) estimator is proposed for the estimation of the parameters of a vector of possibly non-linear unconditional moment functions in the presence of conditionally independent sample selection or attrition. The estimator is applied to the estimation of the firm size elasticity of product and process R&D expenditures using a panel of German manufacturing firms, which is affected by attrition and selection into R&D activities.

  17. Empirical Likelihood Confidence Intervals for the Differences of Quantiles with Missing Data

    Institute of Scientific and Technical Information of China (English)

    Yong-song Qin; Yong-jiang Qian

    2009-01-01

    Suppose that there are two nonparametric populations x and y with missing data on both of them. We are interested in constructing confidence intervals on the quantile differences of x and y. Random imputation is used. Empirical likelihood confidence intervals on the differences are constructed.

  18. Empirical Likelihood Based Variable Selection for Varying Coefficient Partially Linear Models with Censored Data

    Institute of Scientific and Technical Information of China (English)

    Peixin ZHAO

    2013-01-01

    In this paper, we consider variable selection for the parametric components of varying coefficient partially linear models with censored data. By constructing a penalized auxiliary vector ingeniously, we propose an empirical likelihood based variable selection procedure and show that it is consistent and satisfies the sparsity property. Simulation studies show that the proposed variable selection method is workable.

  19. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    Directory of Open Access Journals (Sweden)

    S. Sridevi

    2013-02-01

    Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and degrades the pixel intensities. In fetal ultrasound images, edges and local fine details are especially important for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease, so a robust despeckling filter must be designed to suppress speckle noise while preserving these features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical tools as tuning parameters and uses differently shaped quadrilateral kernels to estimate the noise-free pixel from its neighborhood. The performance of several filters, namely the Median, Kuwahara, Frost, Homogeneous mask, and Rayleigh maximum likelihood filters, is compared with the proposed filter in terms of PSNR and image profiles. The proposed filter surpasses the conventional filters.
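
    A minimal sketch of the core estimation step, assuming a plain square kernel (the paper's quadrilateral kernels and tuning parameters differ): for Rayleigh-distributed neighborhood samples, the maximum likelihood estimate of the scale is sqrt(mean(x^2)/2), used here as the restored pixel value.

```python
# Hedged sketch of a Rayleigh maximum-likelihood despeckling step with a
# square window; illustrates only the ML scale estimate, not the paper's
# quadrilateral kernels.
import numpy as np
from scipy.ndimage import uniform_filter

def rayleigh_ml_filter(img, size=5):
    mean_sq = uniform_filter(np.asarray(img, dtype=float) ** 2, size=size)
    return np.sqrt(mean_sq / 2.0)          # per-pixel ML estimate of the scale
```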

  20. Adaptive quasi-likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    CHEN Xia; CHEN Xiru

    2005-01-01

    This paper gives a thorough theoretical treatment of the adaptive quasi-likelihood estimate of the parameters in generalized linear models. The unknown covariance matrix of the response variable is estimated from the sample. It is shown that the adaptive estimator defined in this paper is asymptotically most efficient in the sense that it is asymptotically normal and the covariance matrix of its limit distribution coincides with that of the quasi-likelihood estimator for the case where the covariance matrix of the response variable is completely known.

  1. Sparse-posterior Gaussian Processes for general likelihoods

    CERN Document Server

    Qi, Yuan; Abdel-Gawad, Ahmed H; Minka, Thomas P

    2012-01-01

    Gaussian processes (GPs) provide a probabilistic nonparametric representation of functions in regression, classification, and other problems. Unfortunately, exact learning with GPs is intractable for large datasets. A variety of approximate GP methods have been proposed that essentially map the large dataset into a small set of basis points. Among them, two state-of-the-art methods are the sparse pseudo-input Gaussian process (SPGP) (Snelson and Ghahramani, 2006) and the variable-sigma GP (VSGP) (Walder et al., 2008), which generalizes SPGP and allows each basis point to have its own length scale. However, VSGP was only derived for regression. In this paper, we propose a new sparse GP framework that uses expectation propagation to directly approximate general GP likelihoods using a sparse and smooth basis. It includes both SPGP and VSGP for regression as special cases. Moreover, as an EP algorithm, it inherits the ability to process data online. As a particular choice of approximating family, we blur each basis point with a...

  2. Pseudo-empirical Likelihood-Based Method Using Calibration for Longitudinal Data with Drop-Out.

    Science.gov (United States)

    Chen, Baojiang; Zhou, Xiao-Hua; Chan, Kwun Chuen Gary

    2015-01-01

    In observational studies, interest mainly lies in estimation of the population-level relationship between the explanatory variables and dependent variables, and the estimation is often undertaken using a sample of longitudinal data. In some situations, the longitudinal data sample features biases and loss of estimation efficiency due to non-random drop-out. However, inclusion of population-level information can increase estimation efficiency. In this paper we propose an empirical likelihood-based method to incorporate population-level information in a longitudinal study with drop-out. The population-level information is incorporated via constraints on functions of the parameters, and non-random drop-out bias is corrected by using a weighted generalized estimating equations method. We provide a three-step estimation procedure that makes computation easier. Some commonly used methods are compared in simulation studies, which demonstrate that our proposed method can correct the non-random drop-out bias and increase the estimation efficiency, especially for small sample size or when the missing proportion is high. In some situations, the efficiency improvement is substantial. Finally, we apply this method to an Alzheimer's disease study.

  3. REDUNDANCY OF EMPIRICAL LIKELIHOOD

    Institute of Scientific and Technical Information of China (English)

    Zhu Liping

    2012-01-01

    The concepts of redundancy and partial redundancy of empirical likelihood are introduced, and the corresponding equivalent conditions for redundancy are obtained, extending the redundancy results for GMM to empirical likelihood estimation. Simulation results also confirm the influence of the redundancy and partial redundancy of empirical likelihood on the efficiency of the estimators.

  4. Empirical Likelihood Analysis of Longitudinal Data Involving Within-subject Correlation

    Institute of Scientific and Technical Information of China (English)

    Shuang HU; Lu LIN

    2012-01-01

    In this paper we use profile empirical likelihood to construct confidence regions for the regression coefficients in a partially linear model with longitudinal data. The main contribution is that the within-subject correlation is considered to improve estimation efficiency. We suppose a semi-parametric structure for the covariances of the observation errors in each subject and employ both the first order and the second order moment conditions of the observation errors to construct the estimating equations. Although there are nonparametric estimators, the empirical log-likelihood ratio statistic still converges in distribution to a standard $\chi^2_p$ variable after the nuisance parameters are profiled away. A data simulation is also conducted.

  5. The empirical likelihood goodness-of-fit test for regression model

    Institute of Scientific and Technical Information of China (English)

    Li-xing ZHU; Yong-song QIN; Wang-li XU

    2007-01-01

    Goodness-of-fit testing for regression models has received much attention in the literature. In this paper, empirical likelihood (EL) goodness-of-fit tests for regression models, including classical parametric and autoregressive (AR) time series models, are proposed. Unlike the existing locally smoothing and globally smoothing methodologies, the new method has the advantage that the tests are self-scale invariant and that the asymptotic null distribution is chi-squared. Simulations are carried out to illustrate the methodology.

  6. Inverse Probability Weighted Generalised Empirical Likelihood Estimators : Firm Size and R&D Revisited

    OpenAIRE

    2005-01-01

    The inverse probability weighted Generalised Empirical Likelihood (IPW-GEL) estimator is proposed for the estimation of the parameters of a vector of possibly non-linear unconditional moment functions in the presence of conditionally independent sample selection or attrition. The estimator is applied to the estimation of the firm size elasticity of product and process R&D expenditures using a panel of German manufacturing firms, which is affected by attrition and selection into R&D activities...

  7. Empirical likelihood confidence regions of the parameters in a partially linear single-index model

    Institute of Scientific and Technical Information of China (English)

    XUE Liugen; ZHU Lixing

    2005-01-01

    In this paper, a partially linear single-index model is investigated, and three empirical log-likelihood ratio statistics for the unknown parameters in the model are suggested. It is proved that the proposed statistics are asymptotically standard chi-square under some suitable conditions, and hence can be used to construct the confidence regions of the parameters. Our methods can also deal with the confidence region construction for the index in the pure single-index model. A simulation study indicates that, in terms of coverage probabilities and average areas of the confidence regions, the proposed methods perform better than the least-squares method.

  8. Semi-empirical Likelihood Confidence Intervals for the Differences of Quantiles with Missing Data

    Institute of Scientific and Technical Information of China (English)

    Yong Song QIN; Jun Chao ZHANG

    2009-01-01

    Detecting population (group) differences is useful in many applications, such as medical research. In this paper, we explore the probabilistic theory for identifying the quantile differences between two populations. Suppose that there are two populations x and y with missing data on both of them, where x is nonparametric and y is parametric. We are interested in constructing confidence intervals on the quantile differences of x and y. Random hot deck imputation is used to fill in the missing data. Semi-empirical likelihood confidence intervals on the differences are constructed.

  9. THE ASYMPTOTIC DISTRIBUTIONS OF EMPIRICAL LIKELIHOOD RATIO STATISTICS IN THE PRESENCE OF MEASUREMENT ERROR

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Suppose that several different imperfect instruments and one perfect instrument are independently used to measure some characteristics of a population. Thus, measurements of two or more sets of samples with varying accuracies are obtained. Statistical inference should be based on the pooled samples. In this article, the authors also assume that all the imperfect instruments are unbiased. They consider the problem of combining this information to make statistical tests for parameters more relevant. They define the empirical likelihood ratio functions and obtain their asymptotic distributions in the presence of measurement error.

  10. MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL

    Directory of Open Access Journals (Sweden)

    Vinod Kumar

    2010-01-01

    In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations, as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization reduces neither the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.

  11. Empirical likelihood based detection procedure for change point in mean residual life functions under random censorship.

    Science.gov (United States)

    Chen, Ying-Ju; Ning, Wei; Gupta, Arjun K

    2016-05-01

    The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis; it describes the expected remaining lifetime of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us to identify factors, such as age and gender, that may influence the remaining lifetimes of patients after receiving a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for changes in MRL functions with right censored data. Two real examples, the Veterans' Administration lung cancer study and the Stanford heart transplant data, are given to illustrate the detection procedure. Copyright © 2016 John Wiley & Sons, Ltd.

  12. An alternative empirical likelihood method in missing response problems and causal inference.

    Science.gov (United States)

    Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao

    2016-11-30

    Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete case data analysis may result in biases. A popular bias-correction method is the inverse probability weighting proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and the propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to that of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When the response depends on a covariate, it may also depend on a function of this covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, people often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study.

  14. Generalized Correlation Coefficient Based on Log Likelihood Ratio Test Statistic

    Directory of Open Access Journals (Sweden)

    Liu Hsiang-Chuan

    2016-01-01

    In this paper, I point out that both Joe's and Ding's strength statistics can only be used for testing pair-wise independence, and I propose a novel G-square based strength statistic, called Liu's generalized correlation coefficient, which can be used to detect and compare the strength of not only pair-wise independence but also mutual independence of any multivariate variables. Furthermore, I prove that only Liu's generalized correlation coefficient is strictly increasing in the number of variables, making it more sensitive and useful than Cramer's V coefficient. In other words, Liu's generalized correlation coefficient is not only a G-square based strength statistic, but also an improved statistic for detecting and comparing the strengths of different associations of any two or more sets of multivariate variables; moreover, this new strength statistic can also be tested by G².
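
    For concreteness, the G-square statistic on which these strength statistics build is the log likelihood ratio test statistic for a contingency table; a minimal sketch for the two-way case:

```python
# G-square = 2 * sum O * ln(O / E) over nonzero cells, with df = (r-1)(c-1)
# for a two-way table; a building block only, not Liu's full coefficient.
import numpy as np
from scipy.stats import chi2

def g_square(table):
    obs = np.asarray(table, dtype=float)
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()
    mask = obs > 0                          # convention: 0 * log(0) = 0
    g2 = 2.0 * np.sum(obs[mask] * np.log(obs[mask] / expected[mask]))
    dof = (obs.shape[0] - 1) * (obs.shape[1] - 1)
    return g2, chi2.sf(g2, dof)             # statistic and p-value
```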

  15. Empirical generalization assessment of neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1995-01-01

    This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest to use the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model among competing models. Since all models are trained on the same data, a key issue is to take this dependency into account. The optimal split of the data set of size N into a cross-validation set of size Nγ and a training set of size N(1-γ) is discussed. Asymptotically (large data sets), γopt→1...

  16. Fast inference in generalized linear models via expected log-likelihoods.

    Science.gov (United States)

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
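
    A sketch of the idea for the canonical Poisson case, under the illustrative assumption that the covariates are distributed as N(0, C), so the expectation has a closed form; only the nonlinear sum is replaced, while the linear data term is kept exactly.

```python
# Expected log-likelihood sketch for a Poisson GLM with exponential link:
# (1/n) sum_i exp(x_i' theta) is replaced by E[exp(x' theta)], which equals
# exp(theta' C theta / 2) when x ~ N(0, C). Illustration, not the paper's code.
import numpy as np

def exact_loglik(theta, X, y):
    eta = X @ theta
    return y @ eta - np.sum(np.exp(eta))   # up to a theta-free constant

def expected_loglik(theta, X, y, C):
    n = len(y)
    return y @ (X @ theta) - n * np.exp(theta @ C @ theta / 2.0)
```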

  17. Generalized likelihood uncertainty estimation (GLUE) using adaptive Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik

    2008-01-01

    ...estimate of the associated uncertainty. This uncertainty arises from incomplete process representation, uncertainty in initial conditions, input, output and parameter error. The generalized likelihood uncertainty estimation (GLUE) framework was one of the first attempts to represent prediction uncertainty...

  18. Diagnostic Measures for Nonlinear Regression Models Based on the Empirical Likelihood Method

    Institute of Scientific and Technical Information of China (English)

    Ding Xianwen; Xu Liang; Lin Jinguan

    2012-01-01

    The empirical likelihood method has been extensively applied to linear regression and generalized linear regression models. In this paper, diagnostic measures for nonlinear regression models are studied based on the empirical likelihood method. First, the maximum empirical likelihood estimates of the parameters are obtained. Then, three different measures of influence curvature are studied. Finally, a real data analysis is given to illustrate the validity of the statistical diagnostic measures.

  19. Empirical likelihood estimation for ARCH-M models

    Institute of Scientific and Technical Information of China (English)

    Sun Yan; Li Yuan

    2012-01-01

    This paper deals with the measurement of aggregate market risk aversion based on ARCH-M models. Test statistics are constructed by the empirical likelihood method. Under mild conditions, asymptotic $\chi^2$ distributions of the test statistics are obtained, based on which confidence regions for risk aversion are given. Simulations show that the empirical likelihood method behaves well.

  20. Generalizing Terwilliger's likelihood approach: a new score statistic to test for genetic association

    OpenAIRE

    Hsu Li; Helmer Quinta; de Visser Marieke CH; Uitte de Willige Shirley; el Galta Rachid; Houwing-Duistermaat Jeanine J

    2007-01-01

    Background: In this paper, we propose a one degree of freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful for the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. Results: By means of a simulation study...

  1. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  2. Asymptotic normality and strong consistency of maximum quasi-likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    YIN Changming; ZHAO Lincheng; WEI Chengdong

    2006-01-01

    In a generalized linear model with q × 1 responses, bounded and fixed (or adaptive) p × q regressors $Z_i$, and a general link function, under the most general assumption on the minimum eigenvalue of $\sum_{i=1}^n Z_iZ_i'$, a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates for the regression parameter vector are asymptotically normal and strongly consistent.

  3. An empirical likelihood ratio test robust to individual heterogeneity for differential expression analysis of RNA-seq.

    Science.gov (United States)

    Xu, Maoqi; Chen, Liang

    2016-10-21

    The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice the analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex disease.

  4. Asymptotic Properties of the Maximum Likelihood Estimate in Generalized Linear Models with Stochastic Regressors

    Institute of Scientific and Technical Information of China (English)

    Jie Li DING; Xi Ru CHEN

    2006-01-01

    For generalized linear models (GLM), in the case where the regressors are stochastic and have different distributions, the asymptotic properties of the maximum likelihood estimate (MLE) β̂_n of the parameters are studied. Under reasonable conditions, we prove the weak and strong consistency and the asymptotic normality of β̂_n.

  5. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    2004-01-01


  6. ASYMPTOTIC NORMALITY OF MAXIMUM QUASI-LIKELIHOOD ESTIMATORS IN GENERALIZED LINEAR MODELS WITH FIXED DESIGN

    Institute of Scientific and Technical Information of China (English)

    Qibing GAO; Yaohua WU; Chunhua ZHU; Zhanfeng WANG

    2008-01-01

    In generalized linear models with fixed design, under the assumption λ_n → ∞ and other regularity conditions, the asymptotic normality of the maximum quasi-likelihood estimator β̂_n, which is the root of the quasi-likelihood equation with natural link function ∑_{i=1}^n X_i(y_i − μ(X_i'β)) = 0, is obtained, where λ_n denotes the minimum eigenvalue of ∑_{i=1}^n X_i X_i', the X_i are bounded p × q regressors, and the y_i are q × 1 responses.
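
    To show what solving a quasi-likelihood equation of this form involves in the scalar-response case (q = 1), the sketch below applies Fisher scoring to a Poisson/log-link instance; the Poisson design and the simulated data are assumptions made purely for illustration.

      import numpy as np

      def qmle(X, y, iters=50, tol=1e-10):
          # Solve sum_i X_i (y_i - mu(X_i' beta)) = 0 for the natural (log) link
          # mu(t) = exp(t), by Fisher scoring.
          beta = np.zeros(X.shape[1])
          for _ in range(iters):
              mu = np.exp(X @ beta)
              score = X.T @ (y - mu)              # the quasi-likelihood equation
              info = X.T @ (mu[:, None] * X)      # expected information
              step = np.linalg.solve(info, score)
              beta += step
              if np.max(np.abs(step)) < tol:
                  break
          return beta

      rng = np.random.default_rng(1)
      X = np.column_stack([np.ones(500), rng.uniform(-1, 1, (500, 2))])  # bounded regressors
      beta_true = np.array([0.5, -0.8, 0.3])
      y = rng.poisson(np.exp(X @ beta_true))
      print(qmle(X, y))   # approaches beta_true as n grows, per the record's asymptotics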

  7. A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution

    Directory of Open Access Journals (Sweden)

    Huanxin Zou

    2016-07-01

    Full Text Available The simple linear iterative clustering (SLIC) method is a recently proposed popular superpixel algorithm. However, this method may generate bad superpixels for synthetic aperture radar (SAR) images due to the effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected components algorithm. To estimate the likelihood information of SAR image clusters, we incorporate a generalized gamma distribution (GΓD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images.
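
    The likelihood ingredient of such an algorithm can be sketched in isolation: fit a generalized gamma model to the intensities of each pixel cluster and score new pixels by log-likelihood. The snippet below uses scipy's gengamma parameterization (which may differ from the paper's GΓD parameterization) and simulated amplitudes; it covers only the scoring step, not the full SLIC pipeline.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      # Simulated amplitude samples standing in for two SAR pixel clusters.
      cluster_a = stats.gengamma.rvs(a=3.0, c=1.5, scale=0.8, size=2000, random_state=rng)
      cluster_b = stats.gengamma.rvs(a=1.2, c=2.5, scale=1.5, size=2000, random_state=rng)

      # Fit a generalized gamma distribution per cluster (location fixed at zero).
      params_a = stats.gengamma.fit(cluster_a, floc=0)
      params_b = stats.gengamma.fit(cluster_b, floc=0)

      # Assign a pixel to the cluster under which it is most likely.
      pixel = 1.1
      ll_a = stats.gengamma.logpdf(pixel, *params_a)
      ll_b = stats.gengamma.logpdf(pixel, *params_b)
      print("cluster A" if ll_a > ll_b else "cluster B")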

  8. Empirical likelihood-based dimension reduction inference for linear error-in-responses models with validation study

    Institute of Scientific and Technical Information of China (English)

    2004-01-01


  9. Two-sample density-based empirical likelihood tests for incomplete data in application to a pneumonia study.

    Science.gov (United States)

    Vexler, Albert; Yu, Jihnhee

    2011-07-01

    In clinical trials examining the incidence of pneumonia, it is common practice to measure infection via both invasive and non-invasive procedures. In a recently completed randomized trial comparing two treatments, the invasive procedure was utilized only in certain scenarios due to the added risk involved, and only when the level of the non-invasive procedure surpassed a given threshold. Hence, what was observed was bivariate data with a pattern of missingness in the invasive variable dependent upon the value of the observed non-invasive observation within a given pair. In order to compare two treatments with bivariate observed data exhibiting this pattern of missingness, we developed a semi-parametric methodology utilizing the density-based empirical likelihood approach to provide a non-parametric approximation to Neyman-Pearson-type test statistics. This novel empirical likelihood approach has both parametric and non-parametric components. The non-parametric component utilizes the observations for the non-missing cases, while the parametric component is utilized to tackle the case where observations are missing with respect to the invasive variable. The method is illustrated through its application to the actual data obtained in the pneumonia study and is shown to be an efficient and practical method. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Learning to Read Empirical Articles in General Psychology

    Science.gov (United States)

    Sego, Sandra A.; Stuart, Anne E.

    2016-01-01

    Many students, particularly underprepared students, struggle to identify the essential information in empirical articles. We describe a set of assignments for instructing general psychology students to dissect the structure of such articles. Students in General Psychology I read empirical articles and answered a set of general, factual questions…

  11. Random Parameter Markov Population Process Models and Their Likelihood, Bayes and Empirical Bayes Analysis.

    Science.gov (United States)

    1985-09-01


  12. ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS

    Institute of Scientific and Technical Information of China (English)

    YUE LI; CHEN XIRU

    2005-01-01

    For the Generalized Linear Model (GLM), under some conditions, including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) in the case that the specification of the covariance matrix is correct.

  13. Generalized Likelihood Ratio Statistics and Uncertainty Adjustments in Efficient Adaptive Design of Clinical Trials

    CERN Document Server

    Bartroff, Jay

    2011-01-01

    A new approach to adaptive design of clinical trials is proposed in a general multiparameter exponential family setting, based on generalized likelihood ratio statistics and optimal sequential testing theory. These designs are easy to implement, maintain the prescribed Type I error probability, and are asymptotically efficient. Practical issues involved in clinical trials allowing mid-course adaptation and the large literature on this subject are discussed, and comparisons between the proposed and existing designs are presented in extensive simulation studies of their finite-sample performance, measured in terms of the expected sample size and power functions.
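
    The core stopping rule behind such designs can be illustrated in the simplest one-parameter exponential family setting: observe N(μ, 1) data sequentially and stop once the generalized likelihood ratio for H0: μ = 0 crosses a boundary. The threshold and maximum sample size below are arbitrary assumptions for the sketch; in practice they would be calibrated to the prescribed Type I error probability.

      import numpy as np

      def sequential_glr(xs, b=4.5, n_max=500):
          # sup_mu log LR for H0: mu = 0 with N(mu, 1) data is n * xbar^2 / 2.
          s = 0.0
          for n, x in enumerate(xs, start=1):
              s += x
              if s * s / (2.0 * n) >= b:
                  return "reject H0", n          # early stopping
              if n >= n_max:
                  break
          return "accept H0", n

      rng = np.random.default_rng(3)
      print(sequential_glr(rng.normal(0.3, 1.0, 500)))   # true effect: tends to stop early
      print(sequential_glr(rng.normal(0.0, 1.0, 500)))   # no effect: usually runs to n_max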

  14. Empirical likelihood estimator for mixing dependent samples

    Institute of Scientific and Technical Information of China (English)

    施明华; 赵建中; 周本达

    2012-01-01

    In this paper, the empirical likelihood method is studied under the condition of φ-mixing random variables. Using the Lagrange multiplier and some important probability inequalities, we establish statistical inference and obtain confidence intervals for the population mean and the M-functional of a strongly stationary φ-mixing random variable sequence {X_n} with finite constant mean and nonzero variance.

  15. Empirical likelihood based inference for second-order diffusion models

    Institute of Scientific and Technical Information of China (English)

    王允艳; 张立新; 王汉超

    2012-01-01

    In this paper, we develop an empirical likelihood method to construct empirical likelihood estimators for the nonparametric drift and diffusion functions in the second-order diffusion model, and the consistency and asymptotic normality of the empirical likelihood estimators are obtained. Moreover, nonsymmetric confidence intervals for the drift and diffusion functions based on the empirical likelihood method are obtained, and the adjusted empirical log-likelihood ratio is proved to be asymptotically standard chi-square under mild conditions.

  16. Empirical likelihood for balanced ranked-set sampled data (dedicated to Professor Zhidong Bai on the occasion of his 65th birthday)

    Institute of Scientific and Technical Information of China (English)

    LIU TianQing; LIN Nan; ZHANG BaoXue

    2009-01-01

    Ranked-set sampling (RSS) often provides more efficient inference than simple random sampling (SRS). In this article, we propose a systematic nonparametric technique, RSS-EL, for hypothesis testing and interval estimation with balanced RSS data using empirical likelihood (EL). We detail the approach for interval estimation and hypothesis testing in one-sample and two-sample problems and general estimating equations. In all three cases, RSS is shown to provide more efficient inference than SRS of the same size. Moreover, the RSS-EL method does not require any easily violated assumptions needed by existing rank-based nonparametric methods for RSS data, such as perfect ranking, identical ranking scheme in two groups, and location shift between two population distributions. The merit of the RSS-EL method is also demonstrated through simulation studies.

  17. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    YUE Li; CHEN Xiru

    2004-01-01

    Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, together with some other smooth conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as that specified in the law of the iterated logarithm (LIL) for i.i.d. partial sums and thus cannot be improved.

  18. Correlation structure and variable selection in generalized estimating equations via composite likelihood information criteria.

    Science.gov (United States)

    Nikoloulopoulos, Aristidis K

    2016-06-30

    The method of generalized estimating equations (GEE) is popular in the biostatistics literature for analyzing longitudinal binary and count data. It assumes a generalized linear model for the outcome variable and a working correlation among repeated measurements. In this paper, we introduce a viable competitor: the weighted scores method for generalized linear model margins. We weight the univariate score equations using a working discretized multivariate normal model that is a proper multivariate model. Because the weighted scores method is a parametric method based on likelihood, we propose composite likelihood information criteria as an intermediate step for model selection. The same criteria can be used for both correlation structure and variable selection. Simulation studies and the application example show that our method outperforms other existing model selection methods in GEE. From the example, it can be seen that our methods not only improve on GEE in terms of interpretability and efficiency but also can change the inferential conclusions with respect to GEE. Copyright © 2016 John Wiley & Sons, Ltd.

  19. New empirical generalizations on the determinants of price elasticity

    NARCIS (Netherlands)

    Bijmolt, THA; Van Heerde, HJ; Pieters, RGM

    The importance of pricing decisions for firms has fueled an extensive stream of research on price elasticities. In an influential meta-analytical study, Tellis (1988) summarized price elasticity research findings until 1986. However, empirical generalizations on price elasticity require modification...

  1. Further Evaluation of Covariate Analysis using Empirical Bayes Estimates in Population Pharmacokinetics: the Perception of Shrinkage and Likelihood Ratio Test.

    Science.gov (United States)

    Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose

    2017-01-01

    Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayes estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for the LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches reported previously, and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate impacting clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause of the decrease in power or the inflated false positive rate, although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis than the LRT. We propose a three-step covariate modeling approach for population PK analysis that utilizes the advantages of EBEs while overcoming their shortcomings, which not only markedly reduces the run time for population PK analysis but also provides more accurate covariate tests.
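
    The LRT step being compared can be reduced to its mechanics: fit nested models with and without the covariate and refer minus twice the log-likelihood difference to a chi-square with one degree of freedom. The sketch below does this for an ordinary regression of EBE-like log-clearance values on a hypothetical weight covariate; it is a schematic of the testing logic only, not a nonlinear mixed-effects PPK fit.

      import numpy as np
      import statsmodels.api as sm
      from scipy import stats

      rng = np.random.default_rng(4)
      n = 200
      weight = rng.normal(70, 12, n)                               # hypothetical covariate
      log_cl = 1.0 + 0.01 * (weight - 70) + rng.normal(0, 0.2, n)  # EBE-like values

      m0 = sm.OLS(log_cl, np.ones((n, 1))).fit()           # base model (intercept only)
      m1 = sm.OLS(log_cl, sm.add_constant(weight)).fit()   # covariate model
      lrt = 2 * (m1.llf - m0.llf)
      print(lrt, stats.chi2.sf(lrt, df=1))                 # one extra parameter => df = 1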

  2. On some problems of weak consistency of quasi-maximum likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    ZHANG SanGuo; LIAO Yuan

    2008-01-01

    In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation ∑_{i=1}^n X_i(y_i − μ(X_i'β)) = 0 for the univariate generalized linear model E(y|X) = μ(X'β). Given uncorrelated residuals {e_i = y_i − μ(X_i'β_0), 1 ≤ i ≤ n} and other conditions, we prove that β̂_n − β_0 = O_p(λ_n^{−1/2}) holds, where β̂_n is a root of the above equation, β_0 is the true value of the parameter β, and λ_n denotes the smallest eigenvalue of the matrix S_n = ∑_{i=1}^n X_i X_i'. We also show that the convergence rate above is sharp, provided an independent, non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is S_n^{−1} → 0 as the sample size n → ∞.

  3. Incorporating Astrophysical Systematics into a Generalized Likelihood for Cosmology with Type Ia Supernovae

    Science.gov (United States)

    Ponder, Kara A.; Wood-Vasey, W. Michael; Zentner, Andrew R.

    2016-07-01

    Traditional cosmological inference using Type Ia supernovae (SNe Ia) has used stretch- and color-corrected fits of SN Ia light curves and assumed a resulting fiducial mean and symmetric intrinsic dispersion for the resulting relative luminosity. As systematics become the main contributors to the error budget, it has become imperative to expand supernova cosmology analyses to include a more general likelihood that models systematics, removing biases at some loss in precision. To illustrate an example likelihood analysis, we use a simple model of two populations with a relative luminosity shift, independent intrinsic dispersions, and linear redshift evolution of the relative fraction of each population. Treating observationally viable two-population mock data using a one-population model results in an inferred dark energy equation of state parameter w that is biased by roughly 2 times its statistical error for a sample of N ≳ 2500 SNe Ia. Modeling the two-population data with a two-population model removes this bias at a cost of an approximately 20% increase in the statistical constraint on w. These significant biases can be realized even if the support for two underlying SNe Ia populations, in the form of model selection criteria, is inconclusive. With the current observationally estimated difference between the two proposed populations, a sample of N ≳ 10,000 SNe Ia is necessary to yield conclusive evidence of two populations.

  4. Empirical Likelihood-Based Inference with Missing and Censored Data

    Institute of Scientific and Technical Information of China (English)

    郑明; 杜玮

    2008-01-01

    In this paper, we investigate how to apply the empirical likelihood method to the mean in the presence of censoring and missing data. We show that an adjusted empirical likelihood statistic follows a chi-square distribution. Simulation studies are presented to compare the empirical likelihood method with the normal approximation method. The results indicate that the empirical likelihood method works better than, or as well as, the normal method in many situations.

  5. Improved anomaly detection using multi-scale PLS and generalized likelihood ratio test

    KAUST Repository

    Madakyaru, Muddu

    2017-02-16

    Process monitoring has a central role in the process industry to enhance productivity, efficiency, and safety, and to avoid expensive maintenance. In this paper, a statistical approach that exploits the advantages of multiscale PLS (MSPLS) models and those of a generalized likelihood ratio (GLR) test to better detect anomalies is proposed. Specifically, to account for the multivariate and multi-scale nature of process dynamics, an MSPLS algorithm combining PLS and wavelet analysis is used as the modeling framework. Then, GLR hypothesis testing is applied to the uncorrelated residuals obtained from the MSPLS model to further improve the anomaly detection abilities of these latent-variable-based fault detection methods. Application to simulated distillation column data is used to evaluate the proposed MSPLS-GLR algorithm.
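
    The GLR stage can be sketched on its own: given residuals from a reference model, test for an additive mean shift. Under an illustrative i.i.d. Gaussian assumption with known variance, minus twice the log GLR reduces to n times the squared residual mean over the variance.

      import numpy as np
      from scipy import stats

      def glr_mean_shift(residuals, sigma0=1.0):
          # -2 log GLR for H0: mean 0 vs H1: unknown mean theta, N(theta, sigma0^2).
          n = residuals.size
          g = n * residuals.mean() ** 2 / sigma0**2
          return g, stats.chi2.sf(g, df=1)

      rng = np.random.default_rng(5)
      print(glr_mean_shift(rng.normal(0.0, 1.0, 100)))   # in-control residuals
      print(glr_mean_shift(rng.normal(0.6, 1.0, 100)))   # residuals after an anomaly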

  6. Quasi-Maximum Likelihood Estimators in Generalized Linear Models with Autoregressive Processes

    Institute of Scientific and Technical Information of China (English)

    Hong Chang HU; Lei SONG

    2014-01-01

    The paper studies a generalized linear model (GLM) y_t = h(x_t'β) + ε_t, t = 1, 2, ..., n, where ε_1 = η_1, ε_t = ρε_{t−1} + η_t, t = 2, 3, ..., n, h is a continuously differentiable function, and the η_t are independent and identically distributed random errors with zero mean and finite variance σ². First, the quasi-maximum likelihood (QML) estimators of β, ρ and σ² are given. Second, under mild conditions, the asymptotic properties (including the existence, weak consistency and asymptotic distribution) of the QML estimators are investigated. Finally, the validity of the method is illustrated by a simulation example.

  7. Incorporating Astrophysical Systematics into a Generalized Likelihood for Cosmology with Type Ia Supernovae

    CERN Document Server

    Ponder, Kara A; Zentner, Andrew R

    2015-01-01

    Traditional cosmological inference using Type Ia supernovae (SNe Ia) has used stretch- and color-corrected fits of SN Ia light curves and assumed a resulting fiducial mean and symmetric intrinsic dispersion for the resulting relative luminosity. However, the recent literature has presented mounting evidence that SNe Ia have different width-color-corrected luminosities depending on the environment in which they are found. Such correlations suggest the existence of multiple populations of SNe Ia and a non-Gaussian distribution of relative luminosity. We introduce a framework that provides a generalized full-likelihood approach to accommodate multiple populations with unknown population parameters. To illustrate this framework we use a simple model of two populations with a relative shift, independent intrinsic dispersions, and linear redshift evolution of the relative fraction of each population. We generate mock SN Ia data sets from an underlying two-population model and use a Markov Chain Monte Carlo algorithm ...

  8. EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS

    Institute of Scientific and Technical Information of China (English)

    秦永松; 姜波; 黎玉芳

    2005-01-01

    In this paper, the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.
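
    The blockwise device is easy to demonstrate: replace the dependent observations by means of blocks longer than the dependence range, then run ordinary empirical likelihood on the (approximately independent) block means. The 2-dependent moving-average errors and the block length below are assumptions chosen for illustration.

      import numpy as np
      from scipy import optimize, stats

      def neg2_log_el(z):
          # Dual form of -2 log EL ratio for H0: E[Z] = 0.
          if z.min() >= 0 or z.max() <= 0:
              return np.inf
          g = lambda lam: np.sum(z / (1.0 + lam * z))
          lam = optimize.brentq(g, (-1 + 1e-10) / z.max(), (-1 + 1e-10) / z.min())
          return 2.0 * np.sum(np.log1p(lam * z))

      rng = np.random.default_rng(6)
      e = rng.normal(size=603)
      x = 1.0 + e[2:] + 0.5 * e[1:-1] + 0.25 * e[:-2]   # mean 1, 2-dependent errors

      L = 10                                            # block length > dependence range
      blocks = x[: (x.size // L) * L].reshape(-1, L).mean(axis=1)
      stat = neg2_log_el(blocks - 1.0)                  # blockwise EL test of mean = 1
      print(stat, stats.chi2.sf(stat, df=1))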

  9. New empirical generalizations on the determinants of price elasticity

    NARCIS (Netherlands)

    Bijmolt, THA; Van Heerde, HJ; Pieters, RGM

    2005-01-01

    The importance of pricing decisions for firms has fueled an extensive stream of research on price elasticities. In an influential meta-analytical study, Tellis (1988) summarized price elasticity research findings until 1986. However, empirical generalizations on price elasticity require modification

  10. Aircraft control surface failure detection and isolation using the OSGLR test [orthogonal series generalized likelihood ratio]

    Science.gov (United States)

    Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.

    1986-01-01

    The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.

  11. THE GENERALIZED MAXIMUM LIKELIHOOD METHOD APPLIED TO HIGH PRESSURE PHASE EQUILIBRIUM

    Directory of Open Access Journals (Sweden)

    Lúcio CARDOZO-FILHO

    1997-12-01

    Full Text Available The generalized maximum likelihood method was used to determine binary interaction parameters between carbon dioxide and components of orange essential oil. Vapor-liquid equilibrium was modeled with the Peng-Robinson and Soave-Redlich-Kwong equations, using a methodology proposed in 1979 by Asselineau, Bogdanic and Vidal. Experimental vapor-liquid equilibrium data on binary mixtures formed with carbon dioxide and compounds usually found in orange essential oil were used to test the model. These systems were chosen to demonstrate that the maximum likelihood method produces binary interaction parameters for cubic equations of state capable of satisfactorily describing phase equilibrium, even for a binary such as ethanol/CO2. The results corroborate that the Peng-Robinson, as well as the Soave-Redlich-Kwong, equation can be used to describe phase equilibrium for the following systems: components of the essential oil of orange/CO2.

  12. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Indian Academy of Sciences (India)

    Diego Rivera; Yessica Rivas; Alex Godoy

    2015-02-01

    Hydrological models are simplified representations of natural processes and subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model-architecture uncertainty on model outputs. Different sets of parameters can have equally robust goodness-of-fit indicators, which is known as equifinality. We assessed the outputs of a lumped conceptual hydrological model for an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. We then analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range of the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³/s after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite criticisms of the GLUE methodology, such as its lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
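
    The GLUE recipe itself is compact: sample parameter sets, score each with an informal likelihood measure, declare those above a threshold behavioural, and form likelihood-weighted prediction bounds. The one-parameter runoff model, the Nash-Sutcliffe measure and the 0.6 threshold below are illustrative assumptions, not the model or settings of this record.

      import numpy as np

      rng = np.random.default_rng(7)
      forcing = rng.gamma(2.0, 30.0, 96)               # 8 years of monthly rainfall
      q_obs = 0.35 * forcing + rng.normal(0, 8, 96)    # synthetic observed runoff

      samples = rng.uniform(0.0, 1.0, 5000)            # candidate runoff coefficients
      sims = samples[:, None] * forcing[None, :]       # toy model: runoff = k * rainfall
      nse = 1 - ((sims - q_obs) ** 2).sum(axis=1) / ((q_obs - q_obs.mean()) ** 2).sum()
      keep = nse > 0.6                                 # behavioural threshold
      w = nse[keep] / nse[keep].sum()                  # likelihood weights

      def wquantile(v, w, q):
          # Weighted quantile used for the GLUE uncertainty bounds.
          i = np.argsort(v)
          return np.interp(q, np.cumsum(w[i]) / w.sum(), v[i])

      bounds = np.array([(wquantile(sims[keep][:, t], w, 0.05),
                          wquantile(sims[keep][:, t], w, 0.95)) for t in range(96)])
      print("mean 5-95% bound width:", (bounds[:, 1] - bounds[:, 0]).mean())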

  13. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    Science.gov (United States)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with an aim to improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII-based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII-based sampling approach in comparison to LHS: (1) it performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioural parameter sets is roughly nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII-based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII-based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), the average relative band-width (RB) and the average deviation amplitude (D). The flood forecasting uncertainty is also reduced substantially with ɛ-NSGAII-based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.

  14. Selection properties of Type II maximum likelihood (empirical Bayes) linear models with individual variance components for predictors

    NARCIS (Netherlands)

    Jamil, T.; Braak, ter C.J.F.

    2012-01-01

    Maximum likelihood (ML) in the linear model overfits when the number of predictors (M) exceeds the number of objects (N). One possible solution is the relevance vector machine (RVM), which is a form of automatic relevance detection and has gained popularity in the pattern recognition machine l...

  15. A family-based likelihood ratio test for general pedigree structures that allows for genotyping error and missing data.

    Science.gov (United States)

    Yang, Yang; Wise, Carol A; Gordon, Derek; Finch, Stephen J

    2008-01-01

    The purpose of this work is the development of a family-based association test that allows for random genotyping errors and missing data and makes use of information on affected and unaffected pedigree members. We derive the conditional likelihood functions of the general nuclear family for the following scenarios: complete parental genotype data and no genotyping errors; only one genotyped parent and no genotyping errors; no parental genotype data and no genotyping errors; and no parental genotype data with genotyping errors. We find maximum likelihood estimates of the marker locus parameters, including the penetrances and population genotype frequencies under the null hypothesis that all penetrance values are equal and under the alternative hypothesis. We then compute the likelihood ratio test. We perform simulations to assess the adequacy of the central chi-square distribution approximation when the null hypothesis is true. We also perform simulations to compare the power of the TDT and this likelihood-based method. Finally, we apply our method to 23 SNPs genotyped in nuclear families from a recently published study of idiopathic scoliosis (IS). Our simulations suggest that this likelihood ratio test statistic follows a central chi-square distribution with 1 degree of freedom under the null hypothesis, even in the presence of missing data and genotyping errors. The power comparison shows that this likelihood ratio test is more powerful than the original TDT for the simulations considered. For the IS data, the marker rs7843033 shows the most significant evidence for our method (p = 0.0003), which is consistent with a previous report, which found rs7843033 to be the 2nd most significant TDTae p value among a set of 23 SNPs.

  16. Estimating parameters of generalized integrate-and-fire neurons from the maximum likelihood of spike trains.

    Science.gov (United States)

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2011-11-01

    When a neuronal spike train is observed, what can we deduce from it about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that, at least in principle, its unique global minimum can thus be found by gradient descent techniques. Many biological neurons are, however, known to generate a richer repertoire of spiking behaviors than can be explained in a simple integrate-and-fire model. For instance, such a model retains only an implicit (through spike-induced currents), not an explicit, memory of its input; an example of a physiological situation that cannot be explained is the absence of firing if the input current is increased very slowly. Therefore, we use an expanded model (Mihalas & Niebur, 2009 ), which is capable of generating a large number of complex firing patterns while still being linear. Linearity is important because it maintains the distribution of the random variables and still allows maximum likelihood methods to be used. In this study, we show that although convexity of the negative log-likelihood function is not guaranteed for this model, the minimum of this function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) usually reaches the global minimum.
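
    The likelihood construction is easiest to see for an even simpler spiking model: a discrete-time Poisson neuron whose log-rate is linear in the stimulus, for which the negative log-likelihood is convex, mirroring the Paninski, Pillow, and Simoncelli setting the record cites. The model and data below are assumptions for illustration; this is not the Mihalas-Niebur model used in the paper.

      import numpy as np
      from scipy import optimize

      rng = np.random.default_rng(8)
      T, d = 5000, 5
      S = rng.normal(size=(T, d))                 # stimulus features per time bin
      k_true = np.array([0.8, -0.5, 0.3, 0.0, 0.6])
      spikes = rng.poisson(np.exp(S @ k_true - 1.0))

      def nll(theta):
          # Negative Poisson log-likelihood (up to a data-only constant).
          eta = S @ theta[:-1] + theta[-1]
          return np.sum(np.exp(eta)) - np.sum(spikes * eta)

      res = optimize.minimize(nll, np.zeros(d + 1), method="BFGS")
      print(res.x)   # filter close to k_true, offset close to -1.0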

  17. Maximum Empirical Likelihood Estimators in Nonlinear Semiparametric EV Regression Models

    Institute of Scientific and Technical Information of China (English)

    冯三营; 薛留根

    2012-01-01

    In this paper, we consider nonlinear semiparametric models with measurement error (EV) in the nonparametric part. When the error is ordinarily smooth, we obtain the maximum empirical likelihood estimators of the regression coefficient, the smooth function and the error variance by using the empirical likelihood method. The asymptotic normality and consistency of the proposed estimators are proved under appropriate conditions. The finite-sample performance of the proposed method is illustrated in a simulation study.

  18. Topologies of the conditional ancestral trees and full-likelihood-based inference in the general coalescent tree framework.

    Science.gov (United States)

    Sargsyan, Ori

    2010-08-01

    The general coalescent tree framework is a family of models for determining ancestries among random samples of DNA sequences at a nonrecombining locus. The ancestral models included in this framework can be derived under various evolutionary scenarios. Here, a computationally tractable full-likelihood-based inference method for neutral polymorphisms is presented, using the general coalescent tree framework and the infinite-sites model for mutations in DNA sequences. First, an exact sampling scheme is developed to determine the topologies of conditional ancestral trees. However, this scheme has some computational limitations and to overcome these limitations a second scheme based on importance sampling is provided. Next, these schemes are combined with Monte Carlo integrations to estimate the likelihood of full polymorphism data, the ages of mutations in the sample, and the time of the most recent common ancestor. In addition, this article shows how to apply this method for estimating the likelihood of neutral polymorphism data in a sample of DNA sequences completely linked to a mutant allele of interest. This method is illustrated using the data in a sample of DNA sequences at the APOE gene locus.

  19. Statistical Inference for Autoregressive Conditional Duration Models Based on Empirical Likelihood

    Institute of Scientific and Technical Information of China (English)

    韩玉; 金应华; 吴武清

    2013-01-01

    This paper addresses the statistical testing problem for autoregressive conditional duration (ACD) models based on the empirical likelihood method. We construct the log empirical likelihood ratio statistic for the parameters of the ACD model and show that the proposed statistic asymptotically follows a χ²-distribution. A numerical simulation demonstrates that the empirical likelihood method performs better than the quasi-likelihood method.

  1. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical

  2. Stochastic capture zone analysis of an arsenic-contaminated well using the generalized likelihood uncertainty estimator (GLUE) methodology

    Science.gov (United States)

    Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro

    2003-06-01

    In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^−6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.

  3. Strong consistency of maximum quasi-likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    YIN, Changming; ZHAO, Lincheng

    2005-01-01

    In a generalized linear model with q × 1 responses, bounded and fixed p × q regressors Z_i and a general link function, under the most general assumption on the minimum eigenvalue of ∑_{i=1}^n Z_i Z_i', a moment condition on the responses as weak as possible and other mild regularity conditions, we prove that with probability one the quasi-likelihood equation has a solution β̂_n for all large sample sizes n, which converges to the true regression parameter β_0. This result is an essential improvement over the relevant results in the literature.

  4. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    Science.gov (United States)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. At smaller area levels the sample size is insufficient, so direct estimation of poverty indicators produces large standard errors, and analysis based on such estimates is unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this purpose is small area estimation (SAE). Among the many SAE methods, one is the empirical best linear unbiased prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average of household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduces the MSE in small area estimation.
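
    A reduced version of the modeling step can be run with off-the-shelf tools: fit the random-intercept (nested-error) model by REML and combine the fixed-effect fit with the predicted area effects, which gives the EBLUP. The synthetic areas and covariate below are assumptions standing in for the survey data, and the paper's bootstrap MSE step is omitted.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(9)
      area = np.repeat(np.arange(30), 12)                      # 30 small areas
      u = rng.normal(0, 0.3, 30)[area]                         # area random effects
      x = rng.normal(0, 1, area.size)                          # auxiliary variable
      y = 2.0 + 0.5 * x + u + rng.normal(0, 0.5, area.size)    # log expenditure per capita
      df = pd.DataFrame({"y": y, "x": x, "area": area})

      fit = sm.MixedLM.from_formula("y ~ x", groups="area", data=df).fit(reml=True)
      re = fit.random_effects                                  # predicted area effects
      # EBLUP of one area mean: fixed part at the area's mean covariate + predicted effect.
      a = 0
      xbar = df.loc[df.area == a, "x"].mean()
      eblup = fit.params["Intercept"] + fit.params["x"] * xbar + float(re[a].iloc[0])
      print(eblup)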

  5. Likelihood analysis of earthquake focal mechanism distributions

    CERN Document Server

    Kagan, Y Y

    2014-01-01

    In our paper published earlier we discussed forecasts of earthquake focal mechanisms and ways to test forecast efficiency. Several verification methods were proposed, but they were based on ad hoc, empirical assumptions, so their performance is questionable. In this work we apply a conventional likelihood method to measure the skill of a forecast. The advantage of such an approach is that earthquake rate prediction can in principle be adequately combined with focal mechanism forecasts, if both are based on likelihood scores, resulting in a general forecast optimization. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random. For double-couple source orientation the random probability distribution function is not uniform, which complicates the calculation of the likelihood value. To better understand the resulting complexities we calculate the information (likelihood) score for two rota...

  6. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    Science.gov (United States)

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  7. Likelihood Inference under Generalized Hybrid Censoring Scheme with Competing Risks

    Institute of Scientific and Technical Information of China (English)

    MAO Song; SHI Yi-min

    2016-01-01

    Statistical inference is developed for the analysis of generalized type-II hybrid censoring data under the exponential competing risks model. To address the problem that approximate methods perform unsatisfactorily with small sample sizes, we establish the exact conditional distributions of the estimators of the parameters via the conditional moment generating function (CMGF). Furthermore, confidence intervals (CIs) are constructed from the exact distributions, from approximate distributions, and by the bootstrap method, respectively, and their performances are evaluated by Monte Carlo simulations. Finally, a real data set is analyzed to illustrate all the methods developed here.

  8. An asymptotic approximation of the marginal likelihood for general Markov models

    CERN Document Server

    Zwiernik, Piotr

    2010-01-01

    The standard Bayesian Information Criterion (BIC) is derived under regularity conditions which are not always satisfied by the graphical models with hidden variables. In this paper we derive the BIC score for Bayesian networks in the case of binary data and when the underlying graph is a rooted tree and all the inner nodes represent hidden variables. This provides a direct generalization of a similar formula given by Rusakov and Geiger for naive Bayes models. The main tool used in this paper is a connection between asymptotic approximation of Laplace integrals and the real log-canonical threshold.
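
    For reference, the regular-case score that such work generalizes is simply BIC = −2 log L̂ + k log n. The toy comparison below applies it to two nested Gaussian models; the data are assumed for illustration, and the record's point is precisely that this formula needs correction for tree models with hidden variables.

      import numpy as np
      from scipy import stats

      def bic(loglik, k, n):
          # Standard BIC; valid under the regularity conditions noted in the record.
          return -2.0 * loglik + k * np.log(n)

      rng = np.random.default_rng(10)
      x = rng.normal(0.1, 1.0, 400)
      ll0 = stats.norm.logpdf(x, 0.0, np.sqrt((x**2).mean())).sum()  # mean fixed at 0
      ll1 = stats.norm.logpdf(x, x.mean(), x.std()).sum()            # mean estimated
      print(bic(ll0, 1, x.size), bic(ll1, 2, x.size))                # smaller is preferred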

  9. Application of a generalized likelihood ratio test statistic to MAGIC data

    CERN Document Server

    Klepser, S; 10.1063/1.4772359

    2012-01-01

    The commonly used detection test statistic for Cherenkov telescope data is Li & Ma (1983), Eq. 17. It evaluates the compatibility of event counts in an on-source region with those in a representative off-region. It does not exploit the typically known gamma-ray point spread function (PSF) of a system, and in practice its application requires either assumptions on the symmetry of the acceptance across the field of view, or Monte Carlo simulations. MAGIC has an azimuth-dependent, asymmetric acceptance which required a careful review of detection statistics. Besides an adapted Li & Ma based technique, the recently presented generalized LRT statistic of [1] is now in use. It is more flexible, more sensitive and less affected by systematics, because it is highly customized for multi-pointing Cherenkov telescope data with a known PSF. We present the application of this new method to archival MAGIC data and compare it to the other, Li & Ma based method.

  10. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    Science.gov (United States)

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
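
    The mechanics being discussed can be simulated in the textbook U(0, θ) setting: condition a crude unbiased estimator on the sufficient statistic (the sample maximum) and watch the mean squared error collapse. This standard example is an assumption chosen for illustration; it is not the authors' counterexample, which turns on a minimal sufficient statistic that is not complete.

      import numpy as np

      rng = np.random.default_rng(11)
      theta, n, reps = 3.0, 20, 20000
      x = rng.uniform(0, theta, (reps, n))

      crude = 2 * x[:, 0]               # unbiased but crude: E[2 X_1] = theta
      mle = x.max(axis=1)               # MLE: the sample maximum (biased low)
      rb = (n + 1) / n * mle            # Rao-Blackwell: E[2 X_1 | max] = (n+1)/n * max

      for name, est in [("crude", crude), ("MLE", mle), ("Rao-Blackwell", rb)]:
          print(name, est.mean() - theta, ((est - theta) ** 2).mean())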

  11. EMPIRICAL LIKELIHOOD DIMENSION REDUCTION INFERENCE IN NONLINEAR EV MODELS WITH VALIDATION DATA

    Institute of Scientific and Technical Information of China (English)

    方连娣; 胡凤霞

    2012-01-01

    In this article, we consider nonlinear error-in-response models with the help of validation data. Using semiparametric dimension reduction, we construct an estimated empirical likelihood and an adjusted empirical likelihood for the unknown parameter. It is shown that the estimated empirical log-likelihood is asymptotically distributed as a weighted sum of chi-square variables and that the adjusted empirical log-likelihood has an asymptotic standard chi-square distribution. The results can be used to construct confidence regions for the unknown parameter.

  12. Generalized domains for empirical evaluations in reinforcement learning

    NARCIS (Netherlands)

    Whiteson, S.; Tanner, B.; Taylor, M.E.; Stone, P.

    2009-01-01

    Many empirical results in reinforcement learning are based on a very small set of environments. These results often represent the best algorithm parameters that were found after an ad-hoc tuning or fitting process. We argue that presenting tuned scores from a small set of environments leads to metho

  13. Approximate Likelihood

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of machine learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
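
    The classifier-to-likelihood-ratio step alluded to here is short to sketch: train a probabilistic classifier between samples simulated under two hypotheses and convert its output s(x) into the ratio s/(1 − s). The one-dimensional Gaussian toy below is an assumption so the estimate can be checked against the exact ratio.

      import numpy as np
      from scipy.stats import norm
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(12)
      x0 = rng.normal(0.0, 1.0, 20000)          # simulated background-like events
      x1 = rng.normal(0.7, 1.0, 20000)          # simulated signal-like events
      X = np.concatenate([x0, x1])[:, None]
      y = np.concatenate([np.zeros(x0.size), np.ones(x1.size)])

      clf = LogisticRegression().fit(X, y)
      s = clf.predict_proba([[1.0]])[0, 1]
      print(s / (1 - s))                                         # estimated ratio at x = 1
      print(norm.pdf(1.0, 0.7, 1.0) / norm.pdf(1.0, 0.0, 1.0))   # exact ratio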

  15. Empirical Equivalence, Artificial Gauge Freedom and a Generalized Kretschmann Objection

    CERN Document Server

    Pitts, J Brian

    2009-01-01

    Einstein considered general covariance to characterize the novelty of his General Theory of Relativity (GTR), but Kretschmann thought it merely a formal feature that any theory could have. The claim that GTR is "already parametrized" suggests analyzing substantive general covariance as formal general covariance achieved without hiding preferred coordinates as scalar "clock fields," much as Einstein construed general covariance as the lack of preferred coordinates. Physicists often install gauge symmetries artificially with additional fields, as in the transition from Proca's to Stueckelberg's electromagnetism. Some post-positivist philosophers, due to realist sympathies, are committed to judging Stueckelberg's electromagnetism distinct from and inferior to Proca's. By contrast, physicists identify them, the differences being gauge-dependent and hence unreal. It is often useful to install gauge freedom in theories with broken gauge symmetries (second-class constraints) using a modified Batalin-Fradkin-Tyutin (...

  16. Application of a generalized likelihood function for parameter inference of a carbon balance model using multiple, joint constraints

    Science.gov (United States)

    Hammerle, Albin; Wohlfahrt, Georg; Schoups, Gerrit

    2014-05-01

    Advances in automated data collection systems have enabled ecologists to collect enormous amounts of varied data. Data assimilation (or data-model synthesis) is one way to make sense of this mass of data. Given a process model designed to learn about ecological processes, these data can be integrated within a statistical framework for data interpretation and extrapolation. Results of such a data assimilation framework clearly depend on the information content of the observed data, on the associated uncertainties (data uncertainties, model structural uncertainties and parameter uncertainties) and on the underlying assumptions. Parameter estimation is usually done by minimizing a simple least squares objective function with respect to the model parameters, presuming Gaussian, independent and homoscedastic errors (the formal approach). Recent contributions to the (ecological) literature, however, have questioned the validity of this approach when confronted with significant errors and uncertainty in the model forcing (inputs) and model structure. Very often residual errors are non-Gaussian, correlated and heteroscedastic. These error sources therefore have to be considered, and residual errors have to be described in a statistically correct fashion, in order to draw statistically sound conclusions about parameter and model predictive uncertainties. We examined the effects of a generalized likelihood (GL) function on the parameter estimation of a carbon balance model. Compared with the formal approach, the GL function allows for correlation, non-stationarity and non-normality of model residuals. Carbon model parameters were constrained using three different datasets, each of them modelled by its own GL function. As shown in the literature, the use of different datasets for parameter estimation reduces the uncertainty in model parameters and model predictions and allows for better quantification of, and more insight into, model processes.
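    As a sketch of what a generalized likelihood of the kind described above can look like, the snippet below scores model residuals that are heteroscedastic and AR(1)-correlated; it keeps the innovations Gaussian for brevity, whereas full GL formulations also relax normality. Function and parameter names (gl_loglik, sigma0, sigma1, phi) are illustrative assumptions, not the paper's notation.

```python
# A hedged sketch of a generalized likelihood: residual sd grows with the
# simulated value (heteroscedasticity) and residuals are AR(1)-correlated.
# Innovations are kept Gaussian for brevity; names are illustrative.
import numpy as np
from scipy.stats import norm

def gl_loglik(obs, sim, sigma0, sigma1, phi):
    e = obs - sim                          # raw residuals
    sigma = sigma0 + sigma1 * np.abs(sim)  # heteroscedastic residual sd
    a = e / sigma                          # standardized residuals
    u = a[1:] - phi * a[:-1]               # AR(1) innovations
    # Gaussian innovation term plus the change-of-variables term for e -> a
    return norm.logpdf(u).sum() - np.log(sigma[1:]).sum()

# toy check: the GL prefers the generating AR coefficient over independence
rng = np.random.default_rng(8)
sim = 5.0 + 2.0 * np.sin(np.linspace(0.0, 12.0, 300))   # stand-in model output
a = np.zeros(300)
for t in range(1, 300):
    a[t] = 0.6 * a[t - 1] + rng.normal()
obs = sim + (0.2 + 0.1 * np.abs(sim)) * a
print(gl_loglik(obs, sim, 0.2, 0.1, 0.6) > gl_loglik(obs, sim, 0.2, 0.1, 0.0))
```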

  17. A Comparison Between the Empirical Logistic Regression Method and the Maximum Likelihood Estimation Method

    Institute of Scientific and Technical Information of China (English)

    张婷婷; 高金玲

    2014-01-01

    Because the iterative algorithm for maximum likelihood estimation in logistic regression can be difficult to solve, a simpler estimation method, empirical logistic regression, is examined both theoretically and through worked examples, and the two methods are compared in terms of scientific validity and practicality. The analysis shows that when the sample size is very large, the empirical logistic regression method is sound and practical, and the two methods give consistent results on the same data, while empirical logistic regression is simpler to apply; this is of real importance to practitioners.
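    A minimal sketch of the comparison discussed above, assuming grouped binomial data: the empirical-logit transform fitted by one weighted least-squares solve versus the iterative Newton-Raphson maximum likelihood fit. All variable names and the simulated design are illustrative.

```python
# Empirical logistic regression (one WLS solve) vs. logistic MLE (Newton).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 12)
n = np.full_like(x, 200)                     # large groups favour the empirical method
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
y = rng.binomial(n.astype(int), p)           # successes per group

X = np.column_stack([np.ones_like(x), x])

# Empirical logistic regression: transform, then weighted least squares.
z = np.log((y + 0.5) / (n - y + 0.5))        # empirical logit
w = 1 / (1 / (y + 0.5) + 1 / (n - y + 0.5))  # approximate inverse variances
W = np.diag(w)
beta_emp = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)

# Maximum likelihood via Newton-Raphson (Fisher scoring) iterations.
beta = np.zeros(2)
for _ in range(25):
    mu = n / (1 + np.exp(-X @ beta))         # expected successes n*p
    Wn = np.diag(mu * (1 - mu / n))          # binomial variance weights n*p*(1-p)
    beta += np.linalg.solve(X.T @ Wn @ X, X.T @ (y - mu))

print("empirical logit:", beta_emp, " MLE:", beta)
```

    With large group sizes the two coefficient vectors agree closely, which is the large-sample consistency between the methods that the record reports.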

  18. Empirical Likelihood for Linear Models with Covariate Data Missing at Random

    Institute of Scientific and Technical Information of China (English)

    杨宜平

    2011-01-01

    Linear models with covariate data missing at random are considered, and a weighted empirical likelihood and an imputed empirical likelihood are proposed. The proposed empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the regression coefficients are then constructed. The finite-sample behavior of the proposed methods is evaluated with a simulation study, and a real example is analyzed.

  1. General Election 2004: Empirical Validation of Voting Pattern in Malaysia

    Directory of Open Access Journals (Sweden)

    Syed Arabi Idid

    2007-06-01

    The purpose of this study is to test the effects of politically related socio-economic issues, the personality of the new Prime Minister, and the perceived strength of the ruling party, Barisan Nasional (BN), in influencing the outcomes of elections. It uses data from the Star-IIUM Survey 2004 and the official results of the 2004 general election for the three northern states of Malaysia, and applies Structural Equation Modeling (SEM). The study found that the personal attributes of the Prime Minister, the strength of the ruling party and the campaign issues positively influenced the popular votes secured by the BN candidates.

  2. Generalized multiplicative error models: Asymptotic inference and empirical analysis

    Science.gov (United States)

    Li, Qian

    This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes in a series and combines a so-called Zero-Augmented general F distribution with a linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish consistency and asymptotic normality of the semiparametric estimators for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed in the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, interaction between trading variables, and the time needed for price equilibrium after a perturbation for each market. The clustering effect is studied through the use of univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the Impulse Response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the difference between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
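    A hedged sketch of the linear MEM(1,1) building block this framework extends, with unit-mean Gamma errors and estimation by an exponential QMLE of the kind a standard QMLE procedure uses; parameter names and starting values are illustrative assumptions, not the author's settings.

```python
# A hedged sketch of the linear MEM(1,1): x_t = mu_t * eps_t with
# mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1} and unit-mean Gamma
# errors, estimated by an exponential quasi-maximum-likelihood objective.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T, (omega, alpha, beta) = 3000, (0.1, 0.2, 0.7)
eps = rng.gamma(shape=4.0, scale=0.25, size=T)   # positive errors, E[eps] = 1

x = np.empty(T)
mu = omega / (1.0 - alpha - beta)                # start at the unconditional mean
for t in range(T):
    x[t] = mu * eps[t]
    mu = omega + alpha * x[t] + beta * mu

def neg_qll(theta):
    o, a, b = theta
    if o <= 0.0 or a < 0.0 or b < 0.0 or a + b >= 1.0:
        return np.inf                            # stationarity / positivity box
    m, nll = o / (1.0 - a - b), 0.0
    for t in range(T):
        nll += np.log(m) + x[t] / m              # exponential quasi-likelihood
        m = o + a * x[t] + b * m
    return nll

fit = minimize(neg_qll, x0=[0.05, 0.1, 0.8], method="Nelder-Mead")
print("QMLE (omega, alpha, beta):", np.round(fit.x, 3))
```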

  3. Empirical Likelihood Inference of Parameters in a Censored Nonlinear Semiparametric Regression Model

    Institute of Scientific and Technical Information of China (English)

    侯文; 宋立新; 黄玉洁

    2012-01-01

    In this paper, a nonlinear semiparametric regression model with randomly censored responses is investigated. Empirical log-likelihood ratio statistics and adjusted empirical log-likelihood ratio statistics for the unknown parameters in the model are suggested. It is shown that the proposed statistics are asymptotically chi-squared under some mild conditions, and hence can be used to construct confidence regions for the unknown parameters. In addition, the least-squares estimator of the unknown parameter is constructed and its asymptotic behavior is proved. A simulation study shows that the empirical likelihood method outperforms the least-squares method in terms of the size and coverage probability of the confidence regions.

  4. Empirical Likelihood Confidence Regions for Density Functions at a Finite Number of Different Points under Associated Samples

    Institute of Scientific and Technical Information of China (English)

    朱海江

    2015-01-01

    In this paper, we apply the empirical likelihood method to a probability density function at a finite number of distinct points under an associated sample. It is shown that the joint empirical likelihood ratio statistic for the density at r distinct points is asymptotically chi-squared with r degrees of freedom under some mild conditions. This result is used to construct joint empirical likelihood confidence regions for the density function at those points.

  5. Multiple Linear Regressions by Maximizing the Likelihood under Assumption of Generalized Gauss-Laplace Distribution of the Error.

    Science.gov (United States)

    Jäntschi, Lorentz; Bálint, Donatella; Bolboacă, Sorana D

    2016-01-01

    Multiple linear regression analysis is widely used to link an outcome with predictors for better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients of linear models with two predictors without any restrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as proof-of-concept using fourteen sets of compounds, by investigating the link between activity/property (as outcome) and structural feature information incorporated by molecular descriptors (as predictors). The results on real data demonstrated that in all investigated cases the power of the error is significantly different from the conventional value of two when the Gauss-Laplace distribution is used to relax the restrictive assumption of normally distributed errors. Therefore, the Gauss-Laplace distribution of the errors could not be rejected, while the hypothesis that the estimated error powers from the Gauss-Laplace distribution are themselves normally distributed also failed to be rejected.
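    The following sketch illustrates the kind of estimation described above: a two-predictor linear model fitted by maximizing a generalized Gauss-Laplace (generalized normal) likelihood in which the error power p is itself estimated rather than fixed at two. The data generation, optimizer choice and parameterization are assumptions for illustration, not the authors' algorithm.

```python
# Fit y = b0 + b1*x1 + b2*x2 + e by maximizing a generalized normal
# (generalized Gauss-Laplace) likelihood with free error power p.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(3)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
e = rng.laplace(scale=0.5, size=n)                 # true error power is 1, not 2
y = 1.0 + 0.8 * x1 - 0.5 * x2 + e
X = np.column_stack([np.ones(n), x1, x2])

def negll(theta):
    *beta, log_s, log_p = theta
    s, p = np.exp(log_s), np.exp(log_p)            # scale and power, kept positive
    r = y - X @ np.asarray(beta)
    # generalized normal log-density: log p - log(2s) - lgamma(1/p) - |r/s|^p
    return -np.sum(np.log(p) - np.log(2 * s) - gammaln(1 / p)
                   - np.abs(r / s) ** p)

fit = minimize(negll, x0=np.r_[np.zeros(3), 0.0, np.log(2.0)],
               method="Nelder-Mead", options={"maxiter": 20000})
print("coefficients:", np.round(fit.x[:3], 3),
      " estimated error power:", round(float(np.exp(fit.x[4])), 2))
```

    On Laplace-distributed errors the estimated power lands near 1 rather than 2, mirroring the paper's finding that the conventional normal assumption can be rejected while the Gauss-Laplace family cannot.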

  6. Empirical Likelihood Confidence Intervals of Conditional Quantile Under Missing Response Variable

    Institute of Scientific and Technical Information of China (English)

    曹添建; 凌能祥

    2012-01-01

    In this paper, confidence intervals for the conditional quantile, both with and without auxiliary information, are constructed on the basis of empirical likelihood when the response variable is missing at random (MAR). It is also shown that the asymptotic power of the test does not decrease as auxiliary information is added. This extends results in the related literature.

  7. An Approach Using a 1D Hydraulic Model, Landsat Imaging and Generalized Likelihood Uncertainty Estimation for an Approximation of Flood Discharge

    Directory of Open Access Journals (Sweden)

    Seung Oh Lee

    2013-10-01

    Collection and investigation of flood information are essential to understand the nature of floods, but this has proved difficult in data-poor environments, or in developing or under-developed countries, due to economic and technological limitations. The development of remote sensing, GIS, and modeling techniques has therefore provided useful tools for the analysis of the nature of floods. Accordingly, this study attempts to estimate flood discharge using the generalized likelihood uncertainty estimation (GLUE) methodology and a 1D hydraulic model, with remote sensing data and topographic data, under the assumed condition that there is no gauge station on the Missouri River, Nebraska, or the Wabash River, Indiana, in the United States. The results show that the use of Landsat leads to a better discharge approximation on a large-scale reach than on a small-scale one. Discharge approximation using GLUE depended on the selection of likelihood measures. Consideration of physical conditions in study reaches could, therefore, contribute to an appropriate selection of informal likelihood measures. The river discharge assessed by using Landsat imagery and the GLUE methodology could be useful in supplementing flood information for flood risk management at the planning level in ungauged basins. However, it should be noted that applying this approach in real time might be difficult, owing to the nature of the GLUE procedure.
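    A toy illustration, under stated assumptions, of the GLUE procedure used in the study: Monte-Carlo sampling of candidate discharges, an informal likelihood measure, a subjective "behavioral" cutoff, and likelihood-weighted uncertainty bounds. The power-law width model is a stand-in for the 1D hydraulic model, and the Gaussian-type measure is just one of the likelihood choices the abstract notes the results depend on.

```python
# A toy GLUE workflow: sample from a flat prior, score each candidate with
# an informal likelihood, keep behavioral sets, report weighted bounds.
import numpy as np

rng = np.random.default_rng(4)

def modeled_width(q, c=12.0):
    """Toy stand-in for the hydraulic model: flood width vs. discharge."""
    return c * q ** 0.6

q_true = 850.0
obs = modeled_width(q_true) + rng.normal(0.0, 25.0, size=8)    # "Landsat" widths

q_cand = rng.uniform(100.0, 2000.0, size=20_000)               # flat prior samples
sse = ((obs[None, :] - modeled_width(q_cand)[:, None]) ** 2).sum(axis=1)
lik = np.exp(-0.5 * sse / 25.0 ** 2)        # one informal likelihood choice
keep = lik > 0.01 * lik.max()               # subjective behavioral cutoff

w = lik[keep] / lik[keep].sum()             # rescaled likelihood weights
order = np.argsort(q_cand[keep])
cdf = np.cumsum(w[order])
lo, hi = np.interp([0.05, 0.95], cdf, q_cand[keep][order])
print(f"5-95% GLUE bounds on discharge: [{lo:.0f}, {hi:.0f}] (true {q_true})")
```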

  8. Context, Experience, Expectation, and Action—Towards an Empirically Grounded, General Model for Analyzing Biographical Uncertainty

    Directory of Open Access Journals (Sweden)

    Herwig Reiter

    2010-01-01

    The article proposes a general, empirically grounded model for analyzing biographical uncertainty. The model is based on findings from a qualitative-explorative study of transforming meanings of unemployment among young people in post-Soviet Lithuania. In a first step, the particular features of the uncertainty puzzle in post-communist youth transitions are briefly discussed. A historical event like the collapse of state socialism in Europe, similar to the recent financial and economic crisis, is a generator of uncertainty par excellence: it undermines the foundations of societies and the taken-for-grantedness of related expectations. Against this background, the case of a young woman and how she responds to the novel threat of unemployment in the transition to the world of work is introduced. Her uncertainty management in the specific time perspective of certainty production is then conceptually rephrased by distinguishing three types or levels of biographical uncertainty: knowledge, outcome, and recognition uncertainty. Biographical uncertainty, it is argued, is empirically observable through the analysis of acting and projecting at the biographical level. The final part synthesizes the empirical findings and the conceptual discussion into a stratification model of biographical uncertainty as a general tool for the biographical analysis of uncertainty phenomena. URN: urn:nbn:de:0114-fqs100120

  9. Extended Network Generalized Entanglement Theory: therapeutic mechanisms, empirical predictions, and investigations.

    Science.gov (United States)

    Hyland, Michael E

    2003-12-01

    Extended Network Generalized Entanglement Theory (Entanglement Theory for short) combines two earlier theories based on complexity theory and quantum mechanics. The theory's assumptions are: the body is a complex, self-organizing system (the extended network) that self-organizes so as to achieve genetically defined patterns (where patterns include morphologic as well as lifestyle patterns). These pattern-specifying genes require feedback that is provided by generalized quantum entanglement. Additionally, generalized entanglement has evolved as a form of communication between people (and animals) and can be used in healing. Entanglement Theory suggests that several processes are involved in complementary and alternative medicine (CAM). Direct subtle therapy creates network change either through lifestyle management, some manual therapies, and psychologically mediated effects of therapy. Indirect subtle therapy is a process of entanglement with other people or physical entities (e.g., remedies, healing sites). Both types of subtle therapy create two kinds of information within the network--either that the network is more dysregulated than it is, so that the network then compensates for this error, or a guide for network change leading to healing. Most CAM therapies involve a combination of indirect and direct therapies, making empirical evaluation complex. Empirical predictions from this theory are contrasted with those from two other possible mechanisms of healing: (1) psychologic processes and (2) mechanisms involving electromagnetic influence between people (biofield/energy medicine). Topics for empirical study include a hyperfast communication system, the phenomenology of entanglement, predictors of outcome in naturally occurring clinical settings, and the importance of therapist and patient characteristics to outcome.

  10. A General Law of Moment Convergence Rates for Uniform Empirical Process

    Institute of Scientific and Technical Information of China (English)

    Qing Pei ZANG

    2011-01-01

    Let $\{X_n;\ n \ge 1\}$ be a sequence of independent and identically distributed $U[0,1]$-distributed random variables. Define the uniform empirical process $F_n(t) = n^{-1/2}\sum_{i=1}^{n}\bigl(I\{X_i \le t\} - t\bigr)$, $0 \le t \le 1$, and $\|F_n\| = \sup_{0 \le t \le 1}|F_n(t)|$. In this paper, the exact convergence rates of a general law of weighted infinite series of $E\{\|F_n\| - \varepsilon g_s(n)\}_{+}$ are obtained.
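    A quick simulation of the quantity defined above: the supremum of $|F_n(t)|$ is attained at the order statistics, so $\|F_n\|$ equals $\sqrt{n}$ times the classical Kolmogorov-Smirnov statistic, whose limiting tail gives a sanity check. Sample sizes below are arbitrary.

```python
# Simulating ||F_n||: the sup of |F_n(t)| is attained at the order
# statistics, so ||F_n|| = sqrt(n) * D_n (the Kolmogorov-Smirnov statistic).
import numpy as np

rng = np.random.default_rng(5)
n, reps = 200, 20_000

x = np.sort(rng.uniform(size=(reps, n)), axis=1)
i = np.arange(1, n + 1)
d_plus = (i / n - x).max(axis=1)
d_minus = (x - (i - 1) / n).max(axis=1)
norms = np.sqrt(n) * np.maximum(d_plus, d_minus)   # ||F_n|| for each sample

# Kolmogorov limit law: P(sup_t |B(t)| > 1.358) is approximately 0.05
print("P(||F_n|| > 1.358) ~", (norms > 1.358).mean())
```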

  11. Aggregation, Validation, and Generalization of Qualitative Data - Methodological and Practical Research Strategies Illustrated by the Research Process of an Empirically Based Typology.

    Science.gov (United States)

    Weis, Daniel; Willems, Helmut

    2017-06-01

    The article deals with the question of how aggregated data which allow for generalizable insights can be generated from single-case based qualitative investigations. Thereby, two central challenges of qualitative social research are outlined: First, researchers must ensure that the single-case data can be aggregated and condensed so that new collective structures can be detected. Second, they must apply methods and practices to allow for the generalization of the results beyond the specific study. In the following, we demonstrate how and under what conditions these challenges can be addressed in research practice. To this end, the research process of the construction of an empirically based typology is described. A qualitative study, conducted within the framework of the Luxembourg Youth Report, is used to illustrate this process. Specifically, strategies are presented which increase the likelihood of generalizability or transferability of the results, while also highlighting their limitations.

  12. Analytic Methods for Cosmological Likelihoods

    OpenAIRE

    Taylor, A. N.; Kitching, T. D.

    2010-01-01

    We present general, analytic methods for cosmological likelihood analysis and solve the "many-parameters" problem in cosmology. Maxima are found by Newton's method, while marginalization over nuisance parameters, and parameter errors and covariances, are estimated by analytic marginalization of an arbitrary likelihood function with flat or Gaussian priors. We show that information about remaining parameters is preserved by marginalization. Marginalizing over all parameters, we find an analytic...

  13. Empirical Likelihood Inference for the Mean of a Linear EV Model with Missing Data

    Institute of Scientific and Technical Information of China (English)

    魏成花; 胡锡健

    2014-01-01

    The missing response problem in linear EV (errors-in-variables) models is investigated. Using the inverse probability weighting method, an empirical log-likelihood ratio statistic for the mean in the model is constructed. It is proved that the proposed statistic is asymptotically chi-squared under some suitable conditions; with this result, confidence regions for the mean can be constructed.

  14. Empirical Likelihood Confidence Regions of Parameters in Nonlinear EV Models under Missing Data

    Institute of Scientific and Technical Information of China (English)

    刘强; 薛留根

    2012-01-01

    The missing response problem in nonlinear EV (errors-in-variables) models is considered, where the explanatory variable X is erroneously measured. With the help of validation data, two empirical log-likelihood ratio statistics for the unknown parameters in the model are proposed. It is proved that the proposed statistics are asymptotically chi-squared under some mild conditions, and hence can be used to construct confidence regions for the parameters.

  15. Phylogenetic estimation with partial likelihood tensors

    CERN Document Server

    Sumner, J G

    2008-01-01

    We present an alternative method for calculating likelihoods in molecular phylogenetics. Our method is based on partial likelihood tensors, which are generalizations of partial likelihood vectors, as used in Felsenstein's approach. Exploiting a lexicographic sorting and partial likelihood tensors, it is possible to obtain significant computational savings. We show this on a range of simulated data by enumerating all numerical calculations that are required by our method and the standard approach.

  16. Determinants of general dentists' decisions to accept capitation payment: a conceptual model and empirical estimates.

    Science.gov (United States)

    Conrad, Douglas; Lee, Rosanna; Milgrom, Peter; Huebner, Colleen

    2009-06-01

    Shifts in payment options for dental care over several decades have resulted in more dental expenditures being paid through health maintenance organizations (HMOs), preferred provider organizations (PPOs), and capitation arrangements. Patients' and employers' choices to participate in these arrangements are determined in part by dentists' willingness to participate in plans, and plan choices may be influenced by patient satisfaction, self-reported oral health, and/or quality or cost of care. This study examined determinants of dentists' decisions to accept capitation payment for services, using a cross-sectional mail survey of 1605 general dentists in Oregon conducted in December 2006. Questions addressed dentists' perceptions of the importance of control over various practice parameters, willingness to accept capitation payment, employment or ownership status within the practice, and practice characteristics. Capitation was accepted by 22.6% of the respondent dentists (n = 729). Reported average fees (2007 dollars) ranged from $60 (initial oral examination) to approximately $800 (porcelain crowns). The likelihood of accepting capitation payment was related to the number of dentists in the practice, but, surprisingly, owner-dentists were no less likely than employee-dentists (associates) to accept capitation. As expected, dentists' usual and customary fees were negatively associated with accepting capitation. In contrast, measures of the importance dentists placed on control were not related to decisions about capitation. Longer average appointment delays were related to acceptance of capitation, but the effects were small. Dentists' behavior regarding payment acceptance is generally consistent with the microeconomic theory of provider behavior. Study findings should inform practitioners, plan managers, and researchers in examining dentist payment decisions.

  17. Rising Above Chaotic Likelihoods

    CERN Document Server

    Du, Hailiang

    2014-01-01

    Berliner (Likelihood and Bayesian prediction for chaotic systems, J. Am. Stat. Assoc. 1991) identified a number of difficulties in using the likelihood function within the Bayesian paradigm for state estimation and parameter estimation of chaotic systems. Even when the equations of the system are given, he demonstrated "chaotic likelihood functions" of initial conditions and parameter values in the 1-D Logistic Map. Chaotic likelihood functions, while ultimately smooth, have such complicated small scale structure as to cast doubt on the possibility of identifying high likelihood estimates in practice. In this paper, the challenge of chaotic likelihoods is overcome by embedding the observations in a higher dimensional sequence-space, which is shown to allow good state estimation with finite computational power. An Importance Sampling approach is introduced, where Pseudo-orbit Data Assimilation is employed in the sequence-space in order first to identify relevant pseudo-orbits and then relevant trajectories. Es...

  18. Empirical likelihood inferences for longitudinal varying coefficient partially linear error-in-variables models with missing responses

    Institute of Scientific and Technical Information of China (English)

    张明峰; 柳泽慧; 周小双

    2015-01-01

    Empirical likelihood inference for the parametric component in varying coefficient partially linear errors-in-variables models with longitudinal data and missing responses is investigated. A corrected-attenuation block empirical likelihood procedure is used to estimate the unknown parameter vector, a corrected-attenuation block empirical log-likelihood ratio statistic is suggested, and its asymptotic distribution is obtained. Simulation results indicate that the proposed method performs better than the method based on normal approximation, with relatively higher coverage probabilities and smaller confidence regions.

  19. Induction as an Empirical Problem: How Students Generalize during Practical Work.

    Science.gov (United States)

    Wickman, Per-Olof; Ostman, Leif

    2002-01-01

    Examines how university students made generalizations when making morphological insect observations. Results showed students rarely made generalizations in terms of universal statements: they did not use induction or produce hypotheses for testing in an analytic philosophical sense but used induction in more familiar contexts. Discusses the…

  20. Induction as an empirical problem: how students generalize during practical work

    Science.gov (United States)

    Wickman, Per-Olof

    2002-05-01

    We examined how university students made generalizations when making morphological observations of insects. Five groups of two or three students working together were audio-recorded. The results were analysed using an approach based on the work of Wittgenstein and on a pragmatic and sociocultural perspective. Results showed that students rarely made generalizations in terms of universal statements, and they did not use induction or produce hypotheses for testing in an analytic philosophical sense. The few generalizations they made of this kind were taken from zoological authorities like textbooks or lectures. However, students used induction when in more familiar contexts. Moreover, when generalizations were analysed in the sense of Dewey, it became evident that students are fully capable of making generalizations by transferring meaning from one experience to another. The implications of these results for using induction and hypothesis testing in instruction are discussed.

  1. The Change Point Identification of Poisson Processes Based on the Generalized Likelihood Ratio

    Institute of Scientific and Technical Information of China (English)

    赵俊

    2012-01-01

    Based on the generalized likelihood ratio (GLR) method, a GLR-based model for identifying the change point in Poisson processes is proposed for the case of unknown parameters. Simulation experiments assess the reliability and performance of the model for change-point identification. Under the assumption that there is only one change point in the process, an in-control dataset can be obtained at the same time and used for parameter estimation.
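    A minimal sketch of a GLR scan for a single change point in Poisson counts, with all rates unknown and profiled out by their segment means; the simulated rates and change location are illustrative assumptions, not the paper's settings.

```python
# GLR scan for one change point in Poisson counts with unknown rates.
import numpy as np

rng = np.random.default_rng(6)
x = np.r_[rng.poisson(4.0, 120), rng.poisson(6.5, 80)]   # true change at t = 120
n = len(x)

def seg_loglik(s):
    """Profile Poisson log-likelihood of a segment (constant terms dropped)."""
    lam = s.mean()
    return s.sum() * np.log(lam) - len(s) * lam if lam > 0 else 0.0

base = seg_loglik(x)                                     # single-rate fit
glr = np.array([seg_loglik(x[:t]) + seg_loglik(x[t:]) - base
                for t in range(1, n)])                   # two-rate fits
print("estimated change point:", 1 + glr.argmax(),
      " max 2*logGLR:", round(2 * glr.max(), 2))
```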

  2. Maximum likelihood estimates of pairwise rearrangement distances.

    Science.gov (United States)

    Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R

    2017-06-21

    Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While generally a frame of reference is locked and all computation is made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered.

  3. Moral hazard and supplier-induced demand: empirical evidence in general practice.

    NARCIS (Netherlands)

    Dijk, C.E. van; Berg, B. van den; Verheij, R.A.; Spreeuwenberg, P.; Groenewegen, P.P.; Bakker, D.H. de

    2013-01-01

    Changes in cost sharing and remuneration system in the Netherlands in 2006 led to clear changes in financial incentives faced by both consumers and general practitioners (GPs). For privately insured consumers, cost sharing was abolished, whereas those socially insured never faced cost sharing. The se...

  4. The Gender-Linked Language Effect: An Empirical Test of a General Process Model

    Science.gov (United States)

    Mulac, Anthony; Giles, Howard; Bradac, James J.; Palomares, Nicholas A.

    2013-01-01

    The gender-linked language effect (GLLE) is a phenomenon in which transcripts of female communicators are rated higher on Socio-Intellectual Status and Aesthetic Quality and male communicators are rated higher on Dynamism. This study proposed and tested a new general process model explanation for the GLLE, a central mediating element of which…

  5. Equalized near maximum likelihood detector

    OpenAIRE

    2012-01-01

    This paper presents a new detector for mitigating the intersymbol interference introduced by bandlimited channels. The detector, named the equalized near-maximum-likelihood detector, combines a nonlinear equalizer with a near-maximum-likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer alone but worse than that of the near-maximum-likelihood detector.

  6. Maximum empirical likelihood estimation in nonlinear semiparametric regression models with missing data

    Institute of Scientific and Technical Information of China (English)

    武大勇; 李锋

    2015-01-01

    The estimation of nonlinear semiparametric regression models with data missing at random is considered. The maximum empirical likelihood estimators of the regression coefficients and the smoothing function are obtained by the maximum empirical likelihood method, and their asymptotic normality and strong consistency are proved under appropriate conditions.

  7. Generalized Likelihood Ratio Based SAR Image Speckle Suppression Algorithm in Wavelet Domain

    Institute of Scientific and Technical Information of China (English)

    侯建华; 刘欣达; 陈稳; 陈少波

    2015-01-01

    A Bayes shrinkage formula is derived under the framework of joint detection and estimation theory, and a wavelet-domain SAR image despeckling algorithm based on the generalized likelihood ratio is realized. Firstly, a redundant wavelet transform is applied directly to the original speckled SAR image, and a binary mask is obtained for each wavelet coefficient. We use a scaled exponential distribution and a Gamma distribution, respectively, to model the likelihood conditional probability of speckle noise and useful signal. According to the mask, the parameters of the two models are estimated by the maximum likelihood method, and the likelihood conditional probability ratio is then calculated. Experimental results show that the proposed method can effectively filter speckle noise while preserving image details as far as possible. Satisfactory results are achieved on both synthetically speckled images and real SAR images.

  8. Weak Consistency and Convergence Rate of Quasi-Maximum Likelihood Estimates in Generalized Linear Models

    Institute of Scientific and Technical Information of China (English)

    邓春亮; 胡南辉

    2012-01-01

    In this paper, we study the solution $\hat{\beta}_n$ of the quasi-maximum likelihood equation for generalized linear models (GLMs). Under the assumption of a non-natural link function, with $\lambda_n \to \infty$ and some other mild regularity conditions, we prove the weak consistency of the solution and show that its rate of convergence to the true value $\beta_0$ is $O_p(\lambda_n^{-1/2})$, where $\lambda_n$ ($\bar{\lambda}_n$) denotes the smallest (largest) eigenvalue of the matrix $S_n = \sum_{i=1}^{n} X_i X_i'$.

  9. E-Commerce Adoption at Customer Level in Jordan: an Empirical Study of Philadelphia General Supplies

    Directory of Open Access Journals (Sweden)

    Mohammed Al Masarweh

    2016-11-01

    E-commerce in developing countries has been studied by numerous researchers during the last decade, and a number of common and culturally specific challenges have been identified. This study considers Jordan as a case study of a developing country where E-commerce is still in its infancy. Therefore, this research work comes as a complement to previous research and an opportunity to refine E-commerce adoption research. The research was conducted by a survey distributed randomly across branches of Philadelphia General Supplies (PGS), a small and medium enterprise (SME). The key findings of this research indicate that Jordanian society is moving towards online shopping at very low rates of adoption, due to barriers including weak infrastructure throughout the country except in the capital, societal trends and culture, and educational and computer literacy. This means that E-commerce in Jordan still remains an under-developed industry.

  10. Gender, general theory of crime and computer crime: an empirical test.

    Science.gov (United States)

    Moon, Byongook; McCluskey, John D; McCluskey, Cynthia P; Lee, Sangwon

    2013-04-01

    Regarding the gender gap in computer crime, studies consistently indicate that boys are more likely than girls to engage in various types of computer crime; however, few studies have examined the extent to which traditional criminology theories account for gender differences in computer crime and the applicability of these theories in explaining computer crime across gender. Using a panel of 2,751 Korean youths, the current study tests the applicability of the general theory of crime in explaining the gender gap in computer crime and assesses the theory's utility in explaining computer crime across gender. Analyses show that self-control theory performs well in predicting illegal use of others' resident registration number (RRN) online for both boys and girls, as predicted by the theory. However, low self-control, a dominant criminogenic factor in the theory, fails to mediate the relationship between gender and computer crime and is inadequate in explaining illegal downloading of software in both the boy and girl models. Theoretical implications of the findings and directions for future research are discussed.

  11. Moral hazard and supplier-induced demand: empirical evidence in general practice.

    Science.gov (United States)

    van Dijk, Christel E; van den Berg, Bernard; Verheij, Robert A; Spreeuwenberg, Peter; Groenewegen, Peter P; de Bakker, Dinny H

    2013-03-01

    Changes in cost sharing and remuneration system in the Netherlands in 2006 led to clear changes in financial incentives faced by both consumers and general practitioner (GPs). For privately insured consumers, cost sharing was abolished, whereas those socially insured never faced cost sharing. The separate remuneration systems for socially insured consumers (capitation) and privately insured consumers (fee-for-service) changed to a combined system of capitation and fee-for-service for both groups. Our first hypothesis was that privately insured consumers had a higher increase in patient-initiated GP contact rates compared with socially insured consumers. Our second hypothesis was that socially insured consumers had a higher increase in physician-initiated contact rates. Data were used from electronic medical records from 32 GP-practices and 35336 consumers in 2005-2007. A difference-in-differences approach was applied to study the effect of changes in cost sharing and remuneration system on contact rates. Abolition of cost sharing led to a higher increase in patient-initiated utilisation for privately insured consumers in persons aged 65 and older. Introduction of fee-for-service for socially insured consumers led to a higher increase in physician-initiated utilisation. This was most apparent in persons aged 25 to 54. Differences in the trend in physician-initiated utilisation point to an effect of supplier-induced demand. Differences in patient-initiated utilisation indicate limited evidence for moral hazard.

  12. Maximum Likelihood Associative Memories

    OpenAIRE

    Gripon, Vincent; Rabbat, Michael

    2013-01-01

    Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...

  13. The Sherpa Maximum Likelihood Estimator

    Science.gov (United States)

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.

    2011-07-01

    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
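    A toy, scalar version of the two-hypothesis comparison the record describes: counts in a candidate source region are evaluated under background-only and background-plus-source Poisson models, and twice the log likelihood ratio serves as the detection statistic. The real tool fits spatial models with per-observation PSFs via Sherpa; the numbers below are invented for illustration.

```python
# Background-only vs. background-plus-source Poisson likelihood ratio.
import numpy as np
from scipy.stats import poisson, chi2

n_obs = 34              # counts in the candidate source region (invented)
b_expected = 21.7       # background scaled from surrounding regions (invented)

ll0 = poisson.logpmf(n_obs, b_expected)            # background-only hypothesis
s_hat = max(n_obs - b_expected, 0.0)               # MLE of added source counts
ll1 = poisson.logpmf(n_obs, b_expected + s_hat)    # background-plus-source

ts = 2 * (ll1 - ll0)                               # detection test statistic
# one-sided chi2_1 reference; the boundary at s = 0 halves the p-value
print(f"TS = {ts:.2f}, approx p = {0.5 * chi2.sf(ts, 1):.4f}")
```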

  14. An Empirical Analysis of Worker Families' Livelihood Capital in the State-owned Forest Region of Heilongjiang Province

    Institute of Scientific and Technical Information of China (English)

    韩竺君; 冯孟诗; 和千琪

    2015-01-01

    Based on an analysis of 241 samples from the 2013 livelihood survey database for key state-owned forest areas, this paper identifies the characteristics of worker families' livelihood capital in the state-owned forest region of Heilongjiang province. The study shows that adult workers in these families bear a heavy burden in supporting elderly kin; most workers are relatively well educated; household income levels are not high, with income coming mainly from wages and few other sources; and worker families' social capital is weak.

  15. Discussion on climate oscillations: CMIP5 general circulation models versus a semi empirical harmonic model based on astronomical cycles

    CERN Document Server

    Scafetta, Nicola

    2013-01-01

    Power spectra of global surface temperature (GST) records reveal major periodicities at about 9.1, 10-11, 19-22 and 59-62 years. The Coupled Model Intercomparison Project 5 (CMIP5) general circulation models (GCMs), to be used in the IPCC (2013), are analyzed and found unable to reconstruct this variability. From 2000 to 2013.5 a GST plateau is observed while the GCMs predicted a warming rate of about 2 K/century. In contrast, the hypothesis that the climate is regulated by specific natural oscillations more accurately fits the GST records at multiple time scales. The climate sensitivity to CO2 doubling should be reduced by half, e.g. from the IPCC-2007 2.0-4.5 K range to 1.0-2.3 K with 1.5 C median. Also modern paleoclimatic temperature reconstructions yield the same conclusion. The observed natural oscillations could be driven by astronomical forcings. Herein I propose a semi-empirical climate model made of six specific astronomical oscillations as constructors of the natural climate variability spanning ...

  16. Testing an astronomically-based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models

    CERN Document Server

    Scafetta, Nicola

    2012-01-01

    We compare the performance of a recently proposed empirical climate model based on astronomical harmonics against all available general circulation climate models (GCMs) used by the IPCC (2007) to interpret the 20th century global surface temperature. The proposed model assumes that the climate is resonating with, or synchronized to, a set of natural harmonics that have been associated with the solar system planetary motion, mostly determined by Jupiter and Saturn. We show that the GCMs fail to reproduce the major decadal and multidecadal oscillations found in the global surface temperature record from 1850 to 2011. On the contrary, the proposed harmonic model is found to well reconstruct the observed climate oscillations from 1850 to 2011, and it is able to forecast the climate oscillations from 1950 to 2011 using the data covering the period 1850-1950, and vice versa. The 9.1-year cycle is shown to be likely related to a decadal Soli/Lunar tidal oscillation, while the 10-10.5, 20-21 and 60-62 year cycles are sy...

  17. Augmented Likelihood Image Reconstruction.

    Science.gov (United States)

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporally appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.

  18. Likelihood analysis of the I(2) model

    DEFF Research Database (Denmark)

    Johansen, Søren

    1997-01-01

    The I(2) model is defined as a submodel of the general vector autoregressive model, by two reduced rank conditions. The model describes stochastic processes with stationary second difference. A parametrization is suggested which makes likelihood inference feasible. Consistency of the maximum like...

  19. Likelihood based testing for no fractional cointegration

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    We consider two likelihood ratio tests, so-called maximum eigenvalue and trace tests, for the null of no cointegration when fractional cointegration is allowed under the alternative, which is a first step to generalize the so-called Johansen procedure to the fractional cointegration case. The s...

  20. Weak Consistency of Quasi-Maximum Likelihood Estimates in Generalized Linear Models

    Institute of Scientific and Technical Information of China (English)

    张戈; 吴黎军

    2013-01-01

    We study the weak consistency of the solution $\hat{\beta}_n$ of the quasi-likelihood equation $L_n(\beta) = \sum_{i=1}^{n} X_i H(X_i'\beta) \Lambda^{-1}(X_i'\beta)\,(y_i - h(X_i'\beta)) = 0$ for generalized linear models under a non-canonical link function. We prove that the convergence rate satisfies $\hat{\beta}_n - \beta_0 \neq O_p(\lambda_n^{-1/2})$, and that a necessary condition for the weak consistency of the quasi-likelihood estimate is that $S_n^{-1} \to 0$ as $n \to \infty$.

  1. The effects of ionic strength and organic matter on virus inactivation at low temperatures: general likelihood uncertainty estimation (GLUE) as an alternative to least-squares parameter optimization for the fitting of virus inactivation models

    Science.gov (United States)

    Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin

    2017-06-01

    This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.

  2. A hybrid model for PM₂.₅ forecasting based on ensemble empirical mode decomposition and a general regression neural network.

    Science.gov (United States)

    Zhou, Qingping; Jiang, Haiyan; Wang, Jianzhou; Zhou, Jianling

    2014-10-15

    Exposure to high concentrations of fine particulate matter (PM₂.₅) can cause serious health problems because PM₂.₅ contains microscopic solid or liquid droplets that are sufficiently small to be ingested deep into human lungs. Thus, daily prediction of PM₂.₅ levels is notably important for regulatory plans that inform the public and restrict social activities in advance when harmful episodes are foreseen. A hybrid EEMD-GRNN (ensemble empirical mode decomposition-general regression neural network) model based on data preprocessing and analysis is first proposed in this paper for one-day-ahead prediction of PM₂.₅ concentrations. The EEMD part is utilized to decompose original PM₂.₅ data into several intrinsic mode functions (IMFs), while the GRNN part is used for the prediction of each IMF. The hybrid EEMD-GRNN model is trained using input variables obtained from a principal component regression (PCR) model to remove redundancy. These input variables accurately and succinctly reflect the relationships between PM₂.₅ and both air quality and meteorological data. The model is trained with data from January 1 to November 1, 2013 and is validated with data from November 2 to November 21, 2013 in Xi'an Province, China. The experimental results show that the developed hybrid EEMD-GRNN model outperforms a single GRNN model without EEMD, a multiple linear regression (MLR) model, a PCR model, and a traditional autoregressive integrated moving average (ARIMA) model. The hybrid model with fast and accurate results can be used to develop rapid air quality warning systems.
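    A minimal sketch of the GRNN component of the hybrid model above: a GRNN is essentially Nadaraya-Watson kernel regression, predicting a Gaussian-kernel-weighted average of the training targets. The EEMD stage (not shown) would supply one such regression per intrinsic mode function; the function name, smoothing parameter, and toy data are assumptions for illustration.

```python
# A general regression neural network (GRNN) as kernel regression.
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=0.5):
    """Predict with a GRNN: Gaussian-weighted average of training targets."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))        # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)      # summation / division layers

# toy usage: recover a noisy sine from scattered samples
rng = np.random.default_rng(7)
X = rng.uniform(0, 6, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)
Xq = np.linspace(0, 6, 5)[:, None]
print(np.round(grnn_predict(X, y, Xq), 2), np.round(np.sin(Xq[:, 0]), 2))
```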

  3. Likelihood approaches for proportional likelihood ratio model with right-censored data.

    Science.gov (United States)

    Zhu, Hong

    2014-06-30

    Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumptions or lack of ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models, such as the generalized linear model and the density ratio model, and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, in which the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients with acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks.

  4. In all likelihood statistical modelling and inference using likelihood

    CERN Document Server

    Pawitan, Yudi

    2001-01-01

    Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates, to complex studies that require generalised linear or semiparametric mode...

  5. Adaptive Beam Space Transformation Generalized Likelihood Ratio Test Algorithm Using Acoustic Vector Sensor Array

    Institute of Scientific and Technical Information of China (English)

    梁国龙; 陶凯; 范展

    2015-01-01

    In order to resolve the detection problem of remote weak passive targets under a strong-interference background, a detection algorithm based on adaptive beamspace transformation using an acoustic vector sensor array is proposed. Firstly, by designing a beamspace matrix which covers the observed sector and rejects interference signals outside the sector, the array output data are transformed to beamspace. Then, the generalized likelihood ratio test is derived in beamspace. Simulation results show that the method can efficiently detect weak passive targets under a strong-interference background and provides constant false alarm rate (CFAR) detection.

  6. Gross Error Detection for Nonlinear Dynamic Chemical Processes Based on Generalized Likelihood Ratios

    Institute of Scientific and Technical Information of China (English)

    王莉; 金思毅; 黄兆杰

    2013-01-01

    The generalized likelihood ratio (GLR) method is an effective gross error detection method for linear steady-state data reconciliation. In this paper, the differential and algebraic constraints of the dynamic data reconciliation model are transformed into matrix form, and the nonlinear constraints are linearized. Based on these two steps, GLR is successfully applied to a continuous stirred tank reactor (CSTR) nonlinear dynamic system, and its gross error detection performance in this system is calculated. Statistical results show that the detection rate depends on the size of the gross error and on the length of the moving window: it improves as the gross error grows and as the window lengthens.

  7. Inference in HIV dynamics models via hierarchical likelihood

    CERN Document Server

    Commenges, D; Putter, H; Thiebaut, R

    2010-01-01

    HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have an analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of the multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelihood estimators (MHLE) for fixed effects, a result that may be relevant in a more general setting. The MHLE are slightly biased, but the bias can be made negligible by using a parametric bootstrap procedure. We propose an efficient algorithm for maximizing the h-likelihood. A simulation study, based on a classical HIV dynamical model, confirms the good properties of the MHLE. We apply it to the analysis of a clinical trial.
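
    For readers unfamiliar with the h-likelihood, its generic form (the standard definition, not the paper's ODE-specific construction) augments the data log-likelihood with the random-effects density and is maximized jointly in the fixed effects \theta and random effects b:

      h(\theta, b) \;=\; \log f(y \mid b; \theta) + \log f(b; \theta),

    so that no integration over b is required, in contrast to the marginal likelihood \int f(y \mid b; \theta)\, f(b; \theta)\, db that motivates the approach.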

  8. Likelihood Analysis of Seasonal Cointegration

    DEFF Research Database (Denmark)

    Johansen, Søren; Schaumburg, Ernst

    1999-01-01

    The error correction model for seasonal cointegration is analyzed. Conditions are found under which the process is integrated of order 1 and cointegrated at seasonal frequency, and a representation theorem is given. The likelihood function is analyzed and the numerical calculation of the maximum likelihood estimators is discussed. The asymptotic distribution of the likelihood ratio test for cointegrating rank is given. It is shown that the estimated cointegrating vectors are asymptotically mixed Gaussian. The results resemble the results for cointegration at zero frequency when expressed in terms…

  9. The Generalized Empirical Interpolation Method: Stability Theory on Hilbert Spaces with an Application to the Stokes Equation

    Science.gov (United States)

    2014-11-19

    [Only reference fragments survive in this record:] "…treatment of nonaffine and nonlinear partial differential equations", ESAIM Math. Model. Numer. Anal. 41(3) (2007) 575–605; [8] Y. Maday, N. Nguyen, A. …; Analysis and Numerics of Partial Differential Equations, Vol. 4 of Springer INdAM Series, Springer Milan, 2013, pp. 221–235; [2] M. Barrault, Y. Maday, N. Nguyen, A. Patera, "An empirical interpolation method: Application to efficient reduced-basis discretization of partial differential equations", C. R. …

  10. General Khalid Bin Waleed: Understanding the 7th Century Campaign against Sassanid Persian Empire from the Perspective of Operational Art

    Science.gov (United States)

    2012-12-06

    [Only fragments survive in this record:] "…practice of planning and conducting campaigns and major operations aimed at accomplishing strategic and operational objectives in a given theatre of…"; "…English renaissances, or rebirths of classical learning, in the 15th and 16th centuries" (Britannica Encyclopedia: http://www.britannica.com/EBchecked…); "…While many thousands of books have been written since the Renaissance on the subject of the Roman Empire, yet the number of standard works in…"

  11. Regions of constrained maximum likelihood parameter identifiability

    Science.gov (United States)

    Lee, C.-H.; Herget, C. J.

    1975-01-01

    This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.

  12. On the likelihood of forests

    Science.gov (United States)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  13. Obtaining reliable Likelihood Ratio tests from simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    It is standard practice among researchers, and the default option in many statistical programs, to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed…

  14. Semi-empirical Likelihood Confidence Intervals for the Differences of Two Populations Based on Fractional Imputation

    Institute of Scientific and Technical Information of China (English)

    BAI YUN-XIA; QIN YONG-SONG; WANG LI-RONG; LI LING

    2009-01-01

    Suppose that there are two populations x and y with missing data on both of them, where x has an unknown distribution function F(·) and y has a distribution whose form depends on an unknown parameter θ. Fractional imputation is used to fill in the missing data. The asymptotic distributions of the semi-empirical likelihood ratio statistic are obtained under some mild conditions. Then, empirical likelihood confidence intervals for the difference between x and y are constructed.

  15. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  16. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
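
    Entries 15 and 16 describe the same deflected-gradient generalization of the familiar EM fixed-point iteration for normal mixtures. Below is a minimal numpy sketch for a univariate mixture with the over-relaxation step size omega exposed (names and numerical safeguards are illustrative, not the authors' code); omega = 1 recovers ordinary EM, and the papers' local convergence result covers 0 < omega < 2:

      import numpy as np
      from scipy.stats import norm

      def em_step(x, w, mu, sigma):
          # E-step: responsibilities r[k, i] = P(component k | x_i)
          dens = np.stack([wk * norm.pdf(x, m, s)
                           for wk, m, s in zip(w, mu, sigma)])
          r = dens / dens.sum(axis=0)
          # M-step: the classical successive-approximations update
          nk = r.sum(axis=1)
          mu_new = (r @ x) / nk
          var_new = (r @ x**2) / nk - mu_new**2
          return nk / x.size, mu_new, np.sqrt(np.maximum(var_new, 1e-12))

      def em_overrelaxed(x, w, mu, sigma, omega=1.5, iters=500):
          for _ in range(iters):
              w1, mu1, s1 = em_step(x, w, mu, sigma)
              # Deflected-gradient step: move omega of the way to the EM update
              w = np.clip(w + omega * (w1 - w), 1e-9, None)
              w = w / w.sum()                      # keep weights a probability vector
              mu = mu + omega * (mu1 - mu)
              sigma = np.maximum(sigma + omega * (s1 - sigma), 1e-6)
          return w, mu, sigma

      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 2, 200)])
      w, mu, sigma = em_overrelaxed(x, np.array([0.5, 0.5]),
                                    np.array([-1.0, 1.0]), np.array([1.0, 1.0]))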

  17. An Algorithm for Detecting the Onset of Muscle Contraction Based on Generalized Likelihood Ratio Test%采用广义似然比检测的肌肉收缩起始时刻判断算法

    Institute of Scientific and Technical Information of China (English)

    徐琦; 程俊银; 周慧; 杨磊

    2012-01-01

    The surface electromyography (sEMG) of the stump in an amputee is often used to control the action of a myoelectric prosthesis. For the low signal-to-noise ratio (SNR) sEMG signals recorded from stump muscle, a generalized likelihood ratio (GLR) method is proposed to detect the onset of muscle contraction, with a decision threshold that depends on the SNR of the sEMG signals; an off-line simulation method is used to determine the relationship between them. For simulated sEMG signals with a given SNR, different thresholds were tested, and the optimal threshold was taken to be the one maximizing detection accuracy. A fitted curve was then obtained describing the relationship between the SNR and the decision threshold. The sEMG signals are subsequently analyzed on-line by the GLR test for onset detection of muscle contractions, with the decision threshold corresponding to the SNR chosen from the fitted curve. Compared with classical algorithms on simulated sEMG traces, the error mean and standard deviation for estimating the muscle contraction onset were reduced by at least 35% and 43%, respectively; on real EMG signals, the corresponding reductions were not less than 29% and 23%. The proposed GLR-based onset detection algorithm is therefore more accurate than other methods when the SNR of the sEMG signals is low.
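
    A sliding-window sketch of the idea, using a generic variance-change GLR for zero-mean Gaussian noise (the window length, the exact form of the statistic in the paper, and the SNR-to-threshold mapping are assumptions for illustration):

      import numpy as np

      def glr_onset(emg, noise_var, win=64, threshold=10.0):
          """Return the first sample index whose windowed GLR statistic
          exceeds the decision threshold, or -1 if none does. In practice
          the threshold would be read off an SNR-threshold curve fitted
          offline, as the record describes."""
          x2 = np.asarray(emg, dtype=float) ** 2
          # Mean power in each sliding window (moving average of x^2)
          v = np.convolve(x2, np.ones(win) / win, mode="valid")
          ratio = np.maximum(v / noise_var, 1e-12)
          # Log GLR for a variance change in zero-mean Gaussian noise:
          # the in-window MLE replaces the unknown variance under H1
          glr = 0.5 * win * (ratio - np.log(ratio) - 1.0)
          hits = np.flatnonzero(glr > threshold)
          return int(hits[0]) if hits.size else -1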

  18. Groups, information theory, and Einstein's likelihood principle

    Science.gov (United States)

    Sicuro, Gabriele; Tempesta, Piergiulio

    2016-04-01

    We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.

  19. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.

  20. Maximum likelihood estimation for semiparametric density ratio model.

    Science.gov (United States)

    Diao, Guoqing; Ning, Jing; Qin, Jing

    2012-06-27

    In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.

  1. Introduction to general and generalized linear models

    CERN Document Server

    Madsen, Henrik

    2010-01-01

    Introduction: Examples of types of data; Motivating examples; A first view on the models. The Likelihood Principle: Introduction; Point estimation theory; The likelihood function; The score function; The information matrix; Alternative parameterizations of the likelihood; The maximum likelihood estimate (MLE); Distribution of the ML estimator; Generalized loss-function and deviance; Quadratic approximation of the log-likelihood; Likelihood ratio tests; Successive testing in hypothesis chains; Dealing with nuisance parameters. General Linear Models: Introduction; The multivariate normal distribution; General linear mod…

  2. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence…

  3. Forecasting New Product Sales from Likelihood of Purchase Ratings

    OpenAIRE

    William J. Infosino

    1986-01-01

    This paper compares consumer likelihood of purchase ratings for a proposed new product to their actual purchase behavior after the product was introduced. The ratings were obtained from a mail survey a few weeks before the product was introduced. The analysis leads to a model for forecasting new product sales. The model is supported by both empirical evidence and a reasonable theoretical foundation. In addition to calibrating the relationship between questionnaire ratings and actual purchases...

  4. A cross-national investigation into the marketing department's influence within the firm : Towards initial empirical generalizations

    NARCIS (Netherlands)

    Verhoef, P.C.; Leeflang, P.S.H.; Reiner, J.; Natter, M.; Grinstein, A.; Baker, B.; Gustafson, A.; Saunders, J.

    2011-01-01

    This study of the influence of the marketing department (MD), as well as its relationship with firm performance, includes seven industrialized countries and aims to generalize the conceptual model presented by Verhoef and Leeflang (2009). This investigation considers the antecedents of perceived MD…

  6. Rate of strong consistency of the maximum quasi-likelihood estimator in quasi-likelihood nonlinear models

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimator (MQLE) in QLNM is obtained. In an important case, this rate is $O(n^{-1/2}(\log\log n)^{1/2})$, which is exactly the rate given by the law of the iterated logarithm (LIL) for partial sums of i.i.d. variables, and thus cannot be improved.

  7. THE IMPLICATIONS OF GROSS FIXED CAPITAL AND UNEMPLOYMENT RATE ON GENERAL GOVERNMENT DEFICIT. EMPIRICAL STUDY AT THE EUROPEAN LEVEL

    Directory of Open Access Journals (Sweden)

    Mihai Carp

    2010-12-01

    In this paper we evaluate the influence of changes in the level of public investment and in the unemployment rate on the general government deficit at the European Union level. We build a regression model which shows that a sustained and increased investment policy and a reduction of the unemployment rate have a favorable effect on the objective of minimizing the budget deficit. In recent years, European Union countries have faced a difficult fiscal policy problem: they have had to make public investments to stimulate economic growth while, at the same time, meeting the convergence criteria for the public deficit. On the other hand, the EU has had to deal with a higher rate of unemployment. Through our model we examine how European Union countries should implement their policy strategies on unemployment and investment with the main objective of reducing the general government deficit.

  9. THE IMPLICATIONS OF GROSS FIXED CAPITAL AND UNEMPLOYMENT RATE ON GENERAL GOVERNMENT DEFICIT. EMPIRICAL STUDY AT THE EUROPEAN LEVEL

    OpenAIRE

    2010-01-01

    In this paper we evaluate the influence of the modification of public investment level and unemployment rate on the general government deficit at the European Union level. We create a regression model that shows that a sustained and increased investment policy and the reduction of unemployment rate have a favorable effect on the objective of minimizing the budget deficit. In the last years European Union’s countries had to face a difficult problem concerning fiscal policy. They have to make p...

  10. Attitude toward Advertising in General and Attitude toward a Specific Type of Advertising – A First Empirical Approach

    Directory of Open Access Journals (Sweden)

    Dianoux Christian

    2014-03-01

    Based on international research, the paper examines differences between the results of studies of consumers' attitudes toward advertising. The aim of the paper is to show that there are situations where the influence of attitudes toward specific ads in general (ASG) on attitudes toward advertising (Aad) can be observed, and also situations with no influence of attitudes toward ads in general (AG) on Aad. The paper shows that the problem stems from the definition of AG. The experiments described in this paper detect attitudinal differences toward advertising in general among the studied nations depending on the type of advertising. The research encompasses respondents from three countries with different economic and cultural backgrounds (Germany, Ukraine and the USA). The data were collected through a quantitative survey and an experiment among university students. The results show that the concept of AG is in some cases too broad. Differences in AG were confirmed between Ukraine and the other countries. According to AG, the respondents from Germany are more pessimistic and the respondents from the USA more optimistic. This disparity is explained by a significantly different share of Orthodox and atheist respondents compared to the other religions.

  11. Sci—Thur AM: YIS - 09: Validation of a General Empirically-Based Beam Model for kV X-ray Sources

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Y. [CancerCare Manitoba (Canada); University of Calgary (Canada); Sommerville, M.; Johnstone, C.D. [San Diego State University (United States); Gräfe, J.; Nygren, I.; Jacso, F. [Tom Baker Cancer Centre (Canada); Khan, R.; Villareal-Barajas, J.E. [University of Calgary (Canada); Tom Baker Cancer Centre (Canada); Tambasco, M. [University of Calgary (Canada); San Diego State University (United States)

    2014-08-15

    Purpose: To present an empirically-based beam model for computing dose deposited by kilovoltage (kV) x-rays and validate it for radiographic, CT, CBCT, superficial, and orthovoltage kV sources. Method and Materials: We modeled a wide variety of imaging (radiographic, CT, CBCT) and therapeutic (superficial, orthovoltage) kV x-ray sources. The model characterizes spatial variations of the fluence and spectrum independently. The spectrum is derived by matching measured values of the half value layer (HVL) and nominal peak potential (kVp) to computationally-derived spectra while the fluence is derived from in-air relative dose measurements. This model relies only on empirical values and requires no knowledge of proprietary source specifications or other theoretical aspects of the kV x-ray source. To validate the model, we compared measured doses to values computed using our previously validated in-house kV dose computation software, kVDoseCalc. The dose was measured in homogeneous and anthropomorphic phantoms using ionization chambers and LiF thermoluminescent detectors (TLDs), respectively. Results: The maximum difference between measured and computed doses was within 2.6%, 3.6%, 2.0%, 4.8%, and 4.0% for the modeled radiographic, CT, CBCT, superficial, and orthovoltage sources, respectively. In the anthropomorphic phantom, the computed CBCT dose generally agreed with TLD measurements, with an average difference and standard deviation ranging from 2.4 ± 6.0% to 5.7 ± 10.3% depending on the imaging technique. Most (42/62) measured TLD doses were within 10% of computed values. Conclusions: The proposed model can be used to accurately characterize a wide variety of kV x-ray sources using only empirical values.

  12. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
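
    For reference, the logistic model used in the paper's comparison has, in its bivariate max-stable form with unit Fréchet margins, the joint distribution (standard textbook form; the paper's multivariate setup generalizes it)

      P(X \le x,\, Y \le y) \;=\; \exp\!\Big\{-\big(x^{-1/\alpha} + y^{-1/\alpha}\big)^{\alpha}\Big\}, \qquad \alpha \in (0, 1],

    where \alpha = 1 gives independence and \alpha \to 0 complete dependence; the competing likelihood estimators differ in whether this limit law is fitted to componentwise maxima, threshold exceedances, or a point process of extremes.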

  13. On approximate equivalence of the generalized least squares estimate and the maximum likelihood estimate in a growth curve model

    Institute of Scientific and Technical Information of China (English)

    王理同

    2012-01-01

    In a growth curve model, the generalized least squares estimator of the parameter matrix is a linear function of the response variables, while the maximum likelihood estimator is nonlinear, so statistical inference based on the maximum likelihood estimate can be considerably more complicated. In order to make its statistical inference more easily analytical and tractable, some authors have studied conditions under which the maximum likelihood estimator is completely equivalent to the generalized least squares estimator. Unfortunately, such conditions are very parsimonious. Therefore, an asymptotic equivalence between them is suggested: consider the ratio of the norms (in the Euclidean sense) of the two estimators concerned. The maximum likelihood estimator is regarded as approximately equivalent to the generalized least squares estimator if the ratio lies within any given permitted error, and the statistical inference of the maximum likelihood estimator is thereby simplified.

  14. Trends in antimicrobial resistance and empiric antibiotic therapy of bloodstream infections at a general hospital in Mid-Norway: a prospective observational study.

    Science.gov (United States)

    Mehl, Arne; Åsvold, Bjørn Olav; Kümmel, Angela; Lydersen, Stian; Paulsen, Julie; Haugan, Ingvild; Solligård, Erik; Damås, Jan Kristian; Harthug, Stig; Edna, Tom-Harald

    2017-02-02

    The occurrence of bloodstream infection (BSI) and antimicrobial resistance have been increasing in many countries. We studied trends in antimicrobial resistance and empiric antibiotic therapy at a medium-sized general hospital in Mid-Norway. Between 2002 and 2013, 1995 prospectively recorded episodes of BSI in 1719 patients aged 16-99 years were included. We analyzed the antimicrobial non-susceptibility according to place of acquisition, site of infection, microbe group, and time period. There were 934 community-acquired (CA), 787 health care-associated (HCA) and 274 hospital-acquired (HA) BSIs. The urinary tract was the most common site of infection. Escherichia coli was the most frequently isolated infective agent in all three places of acquisition. Second in frequency was Streptococcus pneumoniae in CA and Staphylococcus aureus in both HCA and HA. Of the BSI microbes, 3.5% were non-susceptible to the antimicrobial regimen recommended by the National Professional Guidelines for Use of Antibiotics in Hospitals, consisting of penicillin, gentamicin, and metronidazole (PGM). In contrast, 17.8% of the BSI microbes were non-susceptible to cefotaxime and 27.8% were non-susceptible to ceftazidime. Antimicrobial non-susceptibility differed by place of acquisition. For the PGM regimen, the proportions of non-susceptibility were 1.4% in CA, 4.8% in HCA, and 6.9% in HA-BSI (p < …) […] resistant microbes. We also observed increasing numbers of bacteria with acquired resistance, particularly E. coli producing ESBL or possessing gentamicin resistance, and these occurred predominantly in CA- and HCA-BSI. Generally, antimicrobial resistance was a far smaller problem in our BSI cohort than is reported from countries outside Scandinavia. In our cohort, appropriate empiric antibiotic therapy could be achieved to a larger extent by replacing second- and third-generation cephalosporins with penicillin-gentamicin or piperacillin-tazobactam.

  15. Using generalized additive modeling to empirically identify thresholds within the ITERS in relation to toddlers' cognitive development.

    Science.gov (United States)

    Setodji, Claude Messan; Le, Vi-Nhuan; Schaack, Diana

    2013-04-01

    Research linking high-quality child care programs and children's cognitive development has contributed to the growing popularity of child care quality benchmarking efforts such as quality rating and improvement systems (QRIS). Consequently, there has been an increased interest in and a need for approaches to identifying thresholds, or cutpoints, in the child care quality measures used in these benchmarking efforts that differentiate between different levels of children's cognitive functioning. To date, research has provided little guidance to policymakers as to where these thresholds should be set. Using the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B) data set, this study explores the use of generalized additive modeling (GAM) as a method of identifying thresholds on the Infant/Toddler Environment Rating Scale (ITERS) in relation to toddlers' performance on the Mental Development subscale of the Bayley Scales of Infant Development (the Bayley Mental Development Scale Short Form-Research Edition, or BMDSF-R). The present findings suggest that simple linear models do not always correctly depict the relationships between ITERS scores and BMDSF-R scores and that GAM-derived thresholds were more effective at differentiating among children's performance levels on the BMDSF-R. Additionally, the present findings suggest that there is a minimum threshold on the ITERS that must be exceeded before significant improvements in children's cognitive development can be expected. There may also be a ceiling threshold on the ITERS, such that beyond a certain level, only marginal increases in children's BMDSF-R scores are observed.

  16. Declarativity and efficiency in providing services of general economic interest. Empirical study regarding the relation between heating costs and budget constraints

    Directory of Open Access Journals (Sweden)

    Dumitru Miron

    2013-06-01

    Defined by each country separately, according to its real options, circumstances and traditions, the services of general economic interest serve an objective purpose in ensuring protection and security for the population. These services involve both public and economic services and show characteristics of both fields, reflecting the capabilities of communities to organize, regulate and provide them. Considering accessibility to the essential service of general economic interest of providing household heating as an undeniable condition of consumer protection, an analysis has been made in this field, with reference to the concrete manner in which these services are provided. The goal of this endeavor was to highlight the constraints that household budgets place on universal access to the essential heating service. The empirical study is based on a survey of 55 households in sector 2 of Bucharest that have access to gas heating systems but differ in income and equipment. Processing of the gathered data yielded indicators that explain how household income determines access to heating services and how deficiencies in the provision of these services deepen social polarization and increase the share of those living at subsistence level.

  17. Section 9: Ground Water - Likelihood of Release

    Science.gov (United States)

    HRS training. The ground water pathway likelihood of release factor category reflects the likelihood that there has been, or will be, a release of hazardous substances in any of the aquifers underlying the site.

  18. Maximum likelihood estimation for life distributions with competing failure modes

    Science.gov (United States)

    Sidik, S. M.

    1979-01-01

    A general model for competing failure modes is presented, assuming that the location parameters for each mode are expressible as linear functions of the stress variables and that the failure modes act independently. The general form of the likelihood function and the likelihood equations are derived for the extreme value distributions, and solving these equations using nonlinear least squares techniques provides an estimate of the asymptotic covariance matrix of the estimators. Monte Carlo results indicate that, under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
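
    Under the stated independence of failure modes, the likelihood factorizes in the standard competing-risks way (a generic sketch; the paper specializes the densities to extreme value distributions with location parameters linear in the stresses). A unit failing at t_i from mode j_i contributes that mode's density times the other modes' survivor functions, while a censored unit contributes all survivor functions:

      L \;=\; \prod_{i=1}^{n} \Big[ f_{j_i}(t_i) \prod_{k \ne j_i} S_k(t_i) \Big]^{\delta_i} \Big[ \prod_{k} S_k(t_i) \Big]^{1-\delta_i}.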

  19. Workshop on Likelihoods for the LHC Searches

    CERN Document Server

    2013-01-01

    The primary goal of this 3‐day workshop is to educate the LHC community about the scientific utility of likelihoods. We shall do so by describing and discussing several real‐world examples of the use of likelihoods, including a one‐day in‐depth examination of likelihoods in the Higgs boson studies by ATLAS and CMS.

  20. Parametric likelihood inference for interval censored competing risks data.

    Science.gov (United States)

    Hudgens, Michael G; Li, Chenxi; Fine, Jason P

    2014-03-01

    Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV.

  1. Association of industry sponsorship and positive outcome in randomised controlled trials in general and abdominal surgery: protocol for a systematic review and empirical study.

    Science.gov (United States)

    Probst, Pascal; Grummich, Kathrin; Ulrich, Alexis; Büchler, Markus W; Knebel, Phillip; Diener, Markus K

    2014-11-27

    Industry sponsorship has been identified as a factor correlating with positive research findings in several fields of medical science. To date, the influence of industry sponsorship in general and abdominal surgery has not been fully studied. This protocol describes the rationale and planned conduct of a systematic review to determine the association between industry sponsorship and positive outcome in randomised controlled trials in general and abdominal surgery. A literature search in the Cochrane Library, MEDLINE and EMBASE and additional hand searches in relevant citations will be conducted. In order to cover all relevant areas of general and abdominal surgery, a new literature search strategy called multi-PICO search strategy (MPSS) has been developed. No language restriction will be applied. The search will be limited to publications between January 1985 and July 2014. Information on funding source, outcome, study characteristics and methodological quality will be extracted.The association between industry sponsorship and positive outcome will be tested by a chi-squared test. A multivariate logistic regression analysis will be performed to control for possible confounders, such as number of study centres, multinational trials, methodological quality, journal impact factor and sample size. This study was designed to clarify whether industry-sponsored trials report more positive outcomes than non-industry trials. It will be the first study to evaluate this topic in general and abdominal surgery. The findings of this study will enable surgical societies, in particular, to give advice about cooperation with the industry and disclosure of funding source based on empirical evidence. PROSPERO CRD42014010802.
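
    The protocol's primary analysis, a chi-squared test of funding source against trial outcome, amounts to a 2×2 contingency-table test; a minimal sketch with entirely hypothetical counts (the planned logistic regression adjusting for confounders would follow the same pattern with a modelling library):

      import numpy as np
      from scipy.stats import chi2_contingency

      # Hypothetical 2x2 table: rows = funding source, columns = outcome
      #                  positive  negative
      table = np.array([[40,       10],      # industry-sponsored
                        [30,       25]])     # non-industry
      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.2f}, p = {p:.4f}")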

  2. An Empirical Study of Generalized Service Product Design and Prosumption

    Institute of Scientific and Technical Information of China (English)

    吕贵兴; 李光芹

    2012-01-01

    Designing service products is the foundation for improving service quality. Through an analysis of the generalized service product model, the paper uncovers the dynamic process of service production underlying it, then analyzes and constructs a service production and marketing model based on the generalized service product model from the perspective of interface management, and empirically interprets this model using survey data on consumer cognition and consumer behavior.

  3. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

    Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is assessed on historical data covering the domestic economy and the foreign economy, the latter represented by the countries of the Eurozone. Because the forecast accuracy of the models differs, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models; the equal-weight scheme serves as a simple benchmark combination. The results show that optimally combined densities are comparable to the best individual models.
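
    One way to operationalize the predictive-likelihood weighting is as a softmax over each model's accumulated log predictive likelihood; a numpy sketch (the function name and the illustrative numbers are assumptions, not the paper's code):

      import numpy as np

      def predictive_likelihood_weights(log_pred_liks):
          """Combination weights proportional to exp(log predictive
          likelihood), computed via a numerically stable softmax."""
          a = np.asarray(log_pred_liks, dtype=float)
          a -= a.max()                 # stabilize before exponentiating
          w = np.exp(a)
          return w / w.sum()

      # Hypothetical values for BVAR-1, BVAR-2, DSGE and DSGE-VAR
      weights = predictive_likelihood_weights([-410.2, -408.7, -413.5, -409.1])
      # The combined density forecast is then the weighted mixture of the
      # four models' predictive densities.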

  4. Exclusion probabilities and likelihood ratios with applications to kinship problems.

    Science.gov (United States)

    Slooten, Klaas-Jan; Egeland, Thore

    2014-05-01

    In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
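
    The central relationship claimed in this abstract can be compressed into one line (sketched from the abstract's own statement; the weighting used in the paper is more general):

      \mathrm{RMNE} \;=\; E\big[\mathrm{LR}^{-1} \mid H_1\big] \quad\Longrightarrow\quad E\big[\mathrm{LR} \mid H_1\big] \;\ge\; \frac{1}{\mathrm{RMNE}},

    where H_1 is the hypothesis that the tested person truly is the father; the implication follows from Jensen's inequality applied to the convex map u -> 1/u.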

  5. Corporate governance effect on financial distress likelihood: Evidence from Spain

    Directory of Open Access Journals (Sweden)

    Montserrat Manzaneque

    2016-01-01

    The paper explores some mechanisms of corporate governance (ownership and board characteristics) in Spanish listed companies and their impact on the likelihood of financial distress. An empirical study was conducted between 2007 and 2012 using a matched-pairs research design with 308 observations, half classified as distressed and half as non-distressed. Following the earlier study by Pindado, Rodrigues, and De la Torre (2008), a broader concept of bankruptcy is used to define business failure. Employing several conditional logistic models, and consistent with previous studies on bankruptcy, the results confirm that in difficult situations prior to bankruptcy, the impact of board ownership and of the proportion of independent directors on the likelihood of business failure is similar to that exerted in more extreme situations. The results go one step further in showing a negative relationship between board size and the likelihood of financial distress. This is interpreted as a way of creating diversity and improving access to information and resources, especially in contexts where ownership is highly concentrated and large shareholders have great power to influence the board structure. However, the results confirm that ownership concentration does not have a significant impact on the likelihood of financial distress in the Spanish context. It is argued that large shareholders are passive with regard to enhanced monitoring of management and, moreover, do not have enough incentive to hold back financial distress. These findings have important implications in the Spanish context, where several changes in regulatory listing requirements regarding corporate governance have been carried out, and where empirical evidence on this question has been lacking.

  6. Heteroscedastic one-factor models and marginal maximum likelihood estimation

    NARCIS (Netherlands)

    Hessen, D.J.; Dolan, C.V.

    2009-01-01

    In the present paper, a general class of heteroscedastic one-factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one-dimensional factor score. A marginal maximum likelihood procedure for parameter estimation…

  7. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    Science.gov (United States)

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  8. Comparisons of likelihood and machine learning methods of individual classification

    Science.gov (United States)

    Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.

    2002-01-01

    Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin (“assignment tests”). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative to likelihood estimators for empirical data sets, suggesting an ability to “learn” and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin due to the intricacies in development and evaluation of artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of…
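
    The likelihood-based "assignment test" that the machine learning methods are compared against is simple enough to state in a few lines: score each candidate population by the Hardy-Weinberg log-likelihood of the individual's multilocus genotype and assign to the argmax. A sketch with made-up allele frequencies (the data layout and the unseen-allele floor are illustrative assumptions):

      import numpy as np

      def assign(individual, pop_allele_freqs):
          """individual: list of (allele1, allele2) tuples, one per locus.
          pop_allele_freqs: {pop: [allele->frequency dict, one per locus]}."""
          scores = {}
          for pop, loci in pop_allele_freqs.items():
              ll = 0.0
              for (a1, a2), freqs in zip(individual, loci):
                  p = freqs.get(a1, 1e-6)          # floor for unseen alleles
                  q = freqs.get(a2, 1e-6)
                  # Hardy-Weinberg genotype probability: p^2 or 2pq
                  ll += np.log(p * q * (1.0 if a1 == a2 else 2.0))
              scores[pop] = ll
          return max(scores, key=scores.get), scores

      pop_freqs = {
          "lake_A": [{"x": 0.8, "y": 0.2}, {"u": 0.5, "v": 0.5}],
          "lake_B": [{"x": 0.3, "y": 0.7}, {"u": 0.9, "v": 0.1}],
      }
      best, loglik = assign([("x", "x"), ("u", "v")], pop_freqs)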

  9. Recent developments in maximum likelihood estimation of MTMM models for categorical data

    Directory of Open Access Journals (Sweden)

    Minjeong eJeon

    2014-04-01

    Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: variational maximization-maximization, alternating imputation posterior, and Monte Carlo local likelihood. Each method is briefly described, and its applicability to MTMM models with categorical data is discussed. An illustration is provided using an empirical example.

  10. Approximated maximum likelihood estimation in multifractal random walks

    CERN Document Server

    Løvsletten, Ola

    2011-01-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.

  11. Vestige: Maximum likelihood phylogenetic footprinting

    Directory of Open Access Journals (Sweden)

    Maxwell Peter

    2005-05-01

    Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational…

  12. The Laplace Likelihood Ratio Test for Heteroscedasticity

    Directory of Open Access Journals (Sweden)

    J. Martin van Zyl

    2011-01-01

    It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test, assuming normality, is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data is partitioned into groups.
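
    A minimal sketch of such a test for two groups of residuals, reduced to a one-degree-of-freedom comparison of Laplace scale parameters (a simplification not taken from the article, whose exact formulation may differ):

      import numpy as np
      from scipy.stats import chi2

      def laplace_lr_test(g1, g2):
          """LR test of equal Laplace scale across two groups, each centered
          at its own median so that only the scale is tested (df = 1)."""
          d1 = np.abs(g1 - np.median(g1))
          d2 = np.abs(g2 - np.median(g2))
          def ll(dev, b):
              # Laplace log-likelihood given absolute deviations and scale b
              return -dev.size * np.log(2.0 * b) - dev.sum() / b
          b1, b2 = d1.mean(), d2.mean()            # per-group scale MLEs
          b0 = np.concatenate([d1, d2]).mean()     # pooled scale under H0
          lr = 2.0 * (ll(d1, b1) + ll(d2, b2) - ll(d1, b0) - ll(d2, b0))
          return lr, chi2.sf(lr, df=1)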

  13. Penalized maximum likelihood estimation and variable selection in geostatistics

    CERN Document Server

    Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919

    2012-01-01

    We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normality…

  14. Influence functions of trimmed likelihood estimators for lifetime experiments

    OpenAIRE

    2015-01-01

    We provide a general approach for deriving the influence function for trimmed likelihood estimators using the implicit function theorem. The approach is applied to lifetime models with exponential or lognormal distributions possessing a linear or nonlinear link function. A side result is that the functional form of the trimmed estimator for location and linear regression used by Bednarski and Clarke (1993, 2002) and Bednarski et al. (2010) is not in general the correct functional…

  15. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  16. Conditional Likelihood Estimators for Hidden Markov Models and Stochastic Volatility Models

    OpenAIRE

    Genon-Catalot, Valentine; Jeantheau, Thierry; Laredo, Catherine

    2003-01-01

    This paper develops a new contrast process for parametric inference of general hidden Markov models, when the hidden chain has a non-compact state space. This contrast is based on the conditional likelihood approach, often used for ARCH-type models. We prove the strong consistency of the conditional likelihood estimators under appropriate conditions. The method is applied to the Kalman filter (for which this contrast and the exact likelihood lead to asymptotically equivalent estimators…

  17. Asymptotic behavior of the likelihood function of covariance matrices of spatial Gaussian processes

    DEFF Research Database (Denmark)

    Zimmermann, Ralf

    2010-01-01

    The covariance structure of spatial Gaussian predictors (aka Kriging predictors) is generally modeled by parameterized covariance functions; the associated hyperparameters in turn are estimated via the method of maximum likelihood. In this work, the asymptotic behavior of the maximum likelihood … is analyzed, with the following consequence: optimally trained nondegenerate spatial Gaussian processes cannot feature arbitrarily ill-conditioned correlation matrices. The implication of this theorem on Kriging hyperparameter optimization is exposed. A nonartificial example is presented, where maximum likelihood-based Kriging model training…
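
    To make the object of study concrete, here is a numpy sketch of the negative log marginal likelihood of a zero-mean Gaussian process under a squared-exponential correlation model (kernel choice, jitter and parameterization are illustrative assumptions, not the paper's setup):

      import numpy as np

      def kriging_neg_log_likelihood(length_scale, X, y):
          # Squared-exponential correlation matrix R
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          R = np.exp(-0.5 * d2 / length_scale**2)
          R[np.diag_indices_from(R)] += 1e-10    # tiny jitter for stability
          # Large length scales drive R toward singularity; the record's
          # theorem says optimally trained processes avoid that regime.
          sign, logdet = np.linalg.slogdet(R)
          alpha = np.linalg.solve(R, y)
          return 0.5 * (y @ alpha + logdet + y.size * np.log(2 * np.pi))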

  18. The null distribution of likelihood-ratio statistics in the conditional-logistic linkage model.

    Science.gov (United States)

    Song, Yeunjoo E; Elston, Robert C

    2013-01-01

    Olson's conditional-logistic model retains the nice property of the LOD score formulation and has advantages over other methods that make it an appropriate choice for complex trait linkage mapping. However, the asymptotic distribution of the conditional-logistic likelihood-ratio (CL-LR) statistic with genetic constraints on the model parameters is unknown for some analysis models, even in the case of samples comprising only independent sib pairs. We derive approximations to the asymptotic null distributions of the CL-LR statistics and compare them with the empirical null distributions by simulation using independent affected sib pairs. Generally, the empirical null distributions of the CL-LR statistics match well the known or approximated asymptotic distributions for all analysis models considered except for the covariate model with a minimum-adjusted binary covariate. This work will provide useful guidelines for linkage analysis of real data sets for the genetic analysis of complex traits, thereby contributing to the identification of genes for disease traits.

  19. Exclusion probabilities and likelihood ratios with applications to mixtures.

    Science.gov (United States)

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
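
    The donor/non-donor duality invoked above can be checked numerically: for any threshold t, P(LR >= t | non-donor) equals the donor-side average of 1/LR over the event {LR >= t}. A toy Monte Carlo sketch, with an invented Gaussian score model standing in for the DNA evidence:

    ```python
    # Toy check of P(LR >= t | H2) = E[ (1/LR) * 1{LR >= t} | H1 ],
    # with H1 (donor): scores ~ N(1,1) and H2 (non-donor): scores ~ N(0,1).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    t = 10.0
    LR = lambda x: norm.pdf(x, loc=1) / norm.pdf(x, loc=0)

    x1 = rng.normal(1, 1, 10**6)             # draws under H1 (donor)
    x2 = rng.normal(0, 1, 10**6)             # draws under H2 (non-donor)

    lhs = np.mean(LR(x2) >= t)               # direct exceedance probability
    rhs = np.mean((LR(x1) >= t) / LR(x1))    # donor-side average of 1/LR
    print(lhs, rhs)                          # agree up to Monte Carlo error
    ```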

  20. Improved Likelihood Ratio Tests for Cointegration Rank in the VAR Model

    DEFF Research Database (Denmark)

    Boswijk, H. Peter; Jansson, Michael; Nielsen, Morten Ørregaard

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally. The power gains relative to existing tests are due to two factors. First, instead of basing our tests on the conditional (with respect to the initial observations) likelihood, we follow the recent unit root literature and base our tests on the full likelihood as in, e.g., Elliott, Rothenberg, and Stock...

  1. Precise Estimation of Cosmological Parameters Using a More Accurate Likelihood Function

    Science.gov (United States)

    Sato, Masanori; Ichiki, Kiyotomo; Takeuchi, Tsutomu T.

    2010-12-01

    The estimation of cosmological parameters from a given data set requires the construction of a likelihood function which, in general, has a complicated functional form. We adopt a Gaussian copula and construct a copula likelihood function for the convergence power spectrum from a weak lensing survey. We show that parameter estimation based on the Gaussian likelihood erroneously introduces a systematic shift in the confidence region, in particular for the dark energy equation-of-state parameter w. Thus, the copula likelihood should be used in future cosmological observations.
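
    For concreteness, a sketch of a Gaussian-copula log-likelihood for a data vector whose marginal CDFs, marginal densities, and correlation matrix are assumed given; the function name and interface are ours, not the authors':

    ```python
    # Gaussian-copula log-likelihood: Gaussianize each coordinate through its
    # marginal CDF, then add the copula correction to the marginal log-densities.
    import numpy as np
    from scipy.stats import norm

    def copula_loglike(d, F, f, R):
        """d: data vector; F[i], f[i]: marginal CDF/density; R: correlation."""
        u = np.array([Fi(di) for Fi, di in zip(F, d)])   # probability transforms
        z = norm.ppf(u)                                   # Gaussian scores
        Rinv = np.linalg.inv(R)
        _, logdet = np.linalg.slogdet(R)
        log_copula = -0.5 * logdet - 0.5 * z @ (Rinv - np.eye(len(d))) @ z
        log_marginals = sum(np.log(fi(di)) for fi, di in zip(f, d))
        return log_copula + log_marginals
    ```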

  2. Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB

    CERN Document Server

    Millar, Russell B

    2011-01-01

    This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis

  3. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
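
    The following is only a low-dimensional caricature of the likelihood-informed idea: a Hessian-preconditioned Langevin proposal in which local curvature rescales both drift and noise. The actual DILI construction operates on function space and includes a Metropolis accept/reject step, both omitted here:

    ```python
    # Generic preconditioned Langevin (MALA-type) proposal, assuming H is the
    # symmetric positive definite local Hessian of the negative log posterior.
    import numpy as np

    def preconditioned_langevin_step(x, grad_logpost, H, dt, rng):
        """Propose x' = x + (dt/2) H^{-1} grad log pi(x) + sqrt(dt) H^{-1/2} xi."""
        L = np.linalg.cholesky(np.linalg.inv(H))   # H^{-1} = L @ L.T
        drift = 0.5 * dt * (L @ L.T) @ grad_logpost(x)
        noise = np.sqrt(dt) * L @ rng.normal(size=x.shape)
        return x + drift + noise
    ```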

  4. Improved likelihood ratio tests for cointegration rank in the VAR model

    NARCIS (Netherlands)

    Boswijk, H.P.; Jansson, M.; Nielsen, M.Ø.

    2012-01-01

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally. The power gains relative to existing tests are due to two factors...

  5. Sampling variability in forensic likelihood-ratio computation: A simulation study

    NARCIS (Netherlands)

    Ali, Tauseef; Spreeuwers, Luuk; Veldhuis, Raymond; Meuwly, Didier

    2015-01-01

    Recently, in the forensic biometric community, there has been growing interest in computing a metric called “likelihood-ratio” when a pair of biometric specimens is compared using a biometric recognition system. Generally, a biometric recognition system outputs a score and therefore a likelihood-ratio...

  6. Likelihood-Based Cointegration Analysis in Panels of Vector Error Correction Models

    NARCIS (Netherlands)

    J.J.J. Groen (Jan); F.R. Kleibergen (Frank)

    1999-01-01

    We propose in this paper a likelihood-based framework for cointegration analysis in panels of a fixed number of vector error correction models. Maximum likelihood estimators of the cointegrating vectors are constructed using iterated Generalized Method of Moments estimators. Using these...

  7. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, K.

    2010-01-01

    This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, where the likelihood is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analysing Peter Diggle's heather data set, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  8. Maximum likelihood molecular clock comb: analytic solutions.

    Science.gov (United States)

    Chor, Benny; Khetan, Amit; Snir, Sagi

    2006-04-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)

  9. On an Extended Quasi-likelihood Estimation and a Diagnostic Test for Heteroscedasticity in the Generalized Linear Models

    Institute of Scientific and Technical Information of China (English)

    田茂再; 吴喜之

    2002-01-01

    A general heteroscedastic regression model is considered in a random design setting. In this model, the relationship between the regression function and the variance function is assumed to follow an extended generalized nonlinear model, which is common in practice and includes the generalized linear models as special cases. A locally weighted quasi-likelihood estimate of the mean function is derived and then applied to obtain an estimate of the variance function; it is also demonstrated that these estimators have good properties. A test for heteroscedasticity is established.
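
    A rough numerical sketch of the two-stage idea (a locally weighted estimate of the mean function, then a variance-function estimate built from squared residuals); plain Gaussian kernel weights stand in for the local quasi-likelihood weighting, and the bandwidth and data are illustrative:

    ```python
    # Stage 1: kernel-weighted local mean. Stage 2: smooth the squared
    # residuals to estimate the variance function of heteroscedastic noise.
    import numpy as np

    def local_mean(x0, x, y, h):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)       # Gaussian kernel weights
        return np.sum(w * y) / np.sum(w)             # locally weighted average

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 1, 400)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2 + 0.5 * x)  # variance grows in x

    h = 0.05
    mhat = np.array([local_mean(xi, x, y, h) for xi in x])       # mean estimate
    r2 = (y - mhat) ** 2                                          # squared residuals
    vhat = np.array([local_mean(xi, x, r2, h) for xi in x])      # variance estimate
    ```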

  10. Generalized Additive Models, Cubic Splines and Penalized Likelihood.

    Science.gov (United States)

    1987-05-22

    (in case-control studies). All models in the table include a dummy variable to account for the matching. The first 3 lines of the table indicate that... Breslow, N. and Day, N. (1980). Statistical methods in cancer research, volume 1: the analysis of case-control studies. International Agency...

  11. Introductory statistical inference with the likelihood function

    CERN Document Server

    Rohde, Charles A

    2014-01-01

    This textbook covers the fundamentals of statistical inference and statistical theory, including Bayesian and frequentist approaches, without excessive emphasis on the underlying mathematics. This book is about some of the basic principles of statistics that are necessary to understand and evaluate methods for analyzing complex data sets. The likelihood function is used for pure likelihood inference throughout the book. There is also coverage of severity and finite population sampling. The material was developed from an introductory statistical theory course taught by the author at the Johns Hopkins University's Department of Biostatistics. Students and instructors in public health programs will benefit from the likelihood modeling approach that is used throughout the text. This will also appeal to epidemiologists and psychometricians. After a brief introduction, there are chapters on estimation, hypothesis testing, and maximum likelihood modeling. The book concludes with secti...

  12. Maximum-likelihood method in quantum estimation

    CERN Document Server

    Paris, M G A; Sacchi, M F

    2001-01-01

    The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.

  13. Maximum likelihood Jukes-Cantor triplets: analytic solutions.

    Science.gov (United States)

    Chor, Benny; Hendy, Michael D; Snir, Sagi

    2006-03-01

    Maximum likelihood (ML) is a popular method for inferring a phylogenetic tree of the evolutionary relationship of a set of taxa, from observed homologous aligned genetic sequences of the taxa. Generally, the computation of the ML tree is based on numerical methods, which, in a few cases, are known to converge to a local maximum on a tree, which is suboptimal. The extent of this problem is unknown; one approach is to attempt to derive algebraic equations for the likelihood equation and find the maximum points analytically. This approach has so far only been successful in the very simplest cases of three or four taxa under the Neyman model of evolution of two-state characters. In this paper we extend this approach, for the first time, to four-state characters, the Jukes-Cantor model under a molecular clock, on a tree T on three taxa, a rooted triple. We employ spectral methods (Hadamard conjugation) to express the likelihood function parameterized by the path-length spectrum. Taking partial derivatives, we derive a set of polynomial equations whose simultaneous solution contains all critical points of the likelihood function. Using tools of algebraic geometry (the resultant of two polynomials) in a computer algebra package (Maple), we are able to find all turning points analytically. We then employ this method on real sequence data and obtain realistic results on the primate-rodent divergence time.
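
    The analytic workflow described here (differentiate the likelihood, then solve the resulting polynomial system symbolically) can be illustrated on a toy one-parameter model; this is far simpler than the Jukes-Cantor triplet system in the paper, but the steps are the same:

    ```python
    # Toy illustration: write the log-likelihood, take derivatives, and solve
    # the resulting polynomial equation symbolically for the ML point.
    import sympy as sp

    p = sp.symbols("p", positive=True)
    n_same, n_diff = 7, 3                       # illustrative character counts
    logL = n_same * sp.log(1 - p) + n_diff * sp.log(p)
    critical = sp.solve(sp.diff(logL, p), p)    # polynomial equation in p
    print(critical)                             # [3/10], the closed-form ML point
    ```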

  14. Maximum Likelihood Under Response Biased Sampling

    OpenAIRE

    Chambers, Raymond; Dorfman, Alan; Wang, Suojin

    2003-01-01

    Informative sampling occurs when the probability of inclusion in the sample depends on the value of the survey response variable. Response or size biased sampling is a particular case of informative sampling where the inclusion probability is proportional to the value of this variable. In this paper we describe a general model for response biased sampling, which we call array sampling, and develop maximum likelihood and estimating equation theory appropriate to this situation. The ...

  15. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors are proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, whose structures depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.

  16. Empirical evidence of a comprehensive model of school effectiveness : A multilevel study in mathematics in the 1st year of junior general education in the Netherlands

    NARCIS (Netherlands)

    de Jong, Robert; Westerhof, KJ; Kruiter, JH

    2004-01-01

    In the field of school effectiveness and school improvement, scholars as well as practitioners often complain about the absence of theory to guide their work. To fill this gap, Creemers (1994) developed a comprehensive model of educational effectiveness. In order to gain empirical evidence, we tested...

  18. Goldstein-Kac telegraph processes with random speeds: Path probabilities, likelihoods, and reported Lévy flights

    Science.gov (United States)

    Sim, Aaron; Liepe, Juliane; Stumpf, Michael P. H.

    2015-04-01

    The Goldstein-Kac telegraph process describes the one-dimensional motion of particles with constant speed undergoing random changes in direction. Despite its resemblance to numerous real-world phenomena, the singular nature of the resultant spatial distribution of each particle precludes the possibility of any a posteriori empirical validation of this random-walk model from data. Here we show that by simply allowing for random speeds, the ballistic terms are regularized and that the diffusion component can be well-approximated via the unscented transform. The result is a computationally efficient yet robust evaluation of the full particle path probabilities and, hence, the parameter likelihoods of this generalized telegraph process. We demonstrate how a population diffusing under such a model can lead to non-Gaussian asymptotic spatial distributions, thereby mimicking the behavior of an ensemble of Lévy walkers.
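
    A minimal scalar version of the unscented transform the authors use to approximate the diffusion component; the test function and the tuning constant kappa are illustrative:

    ```python
    # Propagate a Gaussian through a nonlinearity with sigma points and recover
    # approximate mean and variance of the transformed variable.
    import numpy as np

    def unscented_transform(mu, var, f, kappa=2.0):
        """Approximate mean/variance of f(X) for X ~ N(mu, var), scalar case."""
        s = np.sqrt((1 + kappa) * var)
        sigma_pts = np.array([mu, mu + s, mu - s])
        w = np.array([kappa / (1 + kappa), 0.5 / (1 + kappa), 0.5 / (1 + kappa)])
        y = f(sigma_pts)
        mean = np.sum(w * y)
        variance = np.sum(w * (y - mean) ** 2)
        return mean, variance

    print(unscented_transform(0.0, 1.0, np.exp))  # compare with lognormal moments
    ```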

  19. Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.

    Science.gov (United States)

    1986-05-01

    The consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers, for example, Wald (1949) and Wolfowitz (1953, 1965)...

  20. Statistical analysis of the Lognormal-Pareto distribution using Probability Weighted Moments and Maximum Likelihood

    OpenAIRE

    Marco Bee

    2012-01-01

    This paper deals with the estimation of the lognormal-Pareto and the lognormal-Generalized Pareto mixture distributions. The log-likelihood function is discontinuous, so that Maximum Likelihood Estimation is not asymptotically optimal. For this reason, we develop an alternative method based on Probability Weighted Moments. We show that the standard version of the method can be applied to the first distribution, but not to the latter. Thus, in the lognormal-Generalized Pareto case, we work ou...

  1. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  2. Maximum-likelihood fits to histograms for improved parameter estimation

    CERN Document Server

    Fowler, Joseph W

    2013-01-01

    Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
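
    A sketch of the recommended remedy: maximize the Poisson likelihood of the binned counts (equivalently, minimize the Cash statistic) instead of a chi^2 variant. The Gaussian peak model and parameter values are invented, and a general-purpose optimizer stands in for the modified Levenberg-Marquardt step described in the paper:

    ```python
    # Poisson maximum-likelihood fit to a histogram, avoiding the low-count
    # bias of chi^2 fits.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    rng = np.random.default_rng(3)
    edges = np.linspace(-5, 5, 51)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]

    def model(theta):                       # expected counts per bin
        amp, mu, sigma = theta
        return amp * width * np.exp(-0.5 * ((centers - mu) / sigma) ** 2)

    counts = rng.poisson(model([1000.0, 0.3, 1.2]))

    def nll(theta):                         # Poisson negative log-likelihood
        lam = model(theta)
        return np.sum(lam - counts * np.log(lam) + gammaln(counts + 1))

    fit = minimize(nll, x0=[800.0, 0.0, 1.0], method="Nelder-Mead")
    print(fit.x)                            # (amplitude, mean, width) estimates
    ```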

  3. Gaussian maximum likelihood and contextual classification algorithms for multicrop classification

    Science.gov (United States)

    Di Zenzo, Silvano; Bernstein, Ralph; Kolsky, Harwood G.; Degloria, Stephen D.

    1987-01-01

    The paper reviews some of the ways in which context has been handled in the remote-sensing literature, and additional possibilities are introduced. The problem of computing exhaustive and normalized class-membership probabilities from the likelihoods provided by the Gaussian maximum likelihood classifier (to be used as initial probability estimates to start relaxation) is discussed. An efficient implementation of probabilistic relaxation is proposed, suiting the needs of actual remote-sensing applications. A modified fuzzy-relaxation algorithm using generalized operations between fuzzy sets is presented. Combined use of the two relaxation algorithms is proposed to exploit context in multispectral classification of remotely sensed data. Results on both one artificially created image and one MSS data set are reported.
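
    The exhaustive, normalized class-membership probabilities mentioned above can be computed from Gaussian maximum-likelihood discriminants as follows; the two-class, two-band setup is invented for illustration:

    ```python
    # Posterior class probabilities from Gaussian class-conditional densities,
    # computed in log space for numerical stability.
    import numpy as np
    from scipy.stats import multivariate_normal

    def class_posteriors(x, means, covs, priors):
        """Normalized P(class k | x) from Gaussian ML discriminants."""
        logp = np.array([
            np.log(pk) + multivariate_normal.logpdf(x, mean=mk, cov=Sk)
            for pk, mk, Sk in zip(priors, means, covs)
        ])
        logp -= logp.max()                 # stabilize before exponentiation
        p = np.exp(logp)
        return p / p.sum()

    means = [np.array([0.2, 0.4]), np.array([0.6, 0.5])]   # two spectral classes
    covs = [0.01 * np.eye(2), 0.02 * np.eye(2)]
    print(class_posteriors(np.array([0.45, 0.45]), means, covs, [0.5, 0.5]))
    ```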

  4. Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood

    Directory of Open Access Journals (Sweden)

    Olli Saarela

    2012-01-01

    Full Text Available Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.

  5. A model independent safeguard for unbinned Profile Likelihood

    CERN Document Server

    Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro; Budnik, Ranny

    2016-01-01

    We present a general method to include residual un-modeled background shape uncertainties in profile likelihood based statistical tests for high energy physics and astroparticle physics counting experiments. This approach provides a simple and natural protection against undercoverage, thus lowering the chances of a false discovery or of an over constrained confidence interval, and allows a natural transition to unbinned space. Unbinned likelihood enhances the sensitivity and allows optimal usage of information for the data and the models. We show that the asymptotic behavior of the test statistic can be regained in cases where the model fails to describe the true background behavior, and present 1D and 2D case studies for model-driven and data-driven background models. The resulting penalty on sensitivities follows the actual discrepancy between the data and the models, and is asymptotically reduced to zero with increasing knowledge.

  6. Trends in the Publication of Empirical Economics

    OpenAIRE

    David Figlio

    1994-01-01

    This paper documents recent trends in the publication of empirical articles in general-interest economics journals. Three measures of journal quality are estimated. The author finds substantial differences in publication rates of empirical articles among top-tier and second-tier journals, and shows that the empirical percentages among general-interest journals have been converging of late. He offers potential explanations for the negative relationship between measured journal quality and empir...

  7. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisová, Katarina

    To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, where the likelihood is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  8. Likelihood alarm displays. [for human operator

    Science.gov (United States)

    Sorkin, Robert D.; Kantowitz, Barry H.; Kantowitz, Susan C.

    1988-01-01

    In a likelihood alarm display (LAD) information about event likelihood is computed by an automated monitoring system and encoded into an alerting signal for the human operator. Operator performance within a dual-task paradigm was evaluated with two LADs: a color-coded visual alarm and a linguistically coded synthetic speech alarm. The operator's primary task was one of tracking; the secondary task was to monitor a four-element numerical display and determine whether the data arose from a 'signal' or 'no-signal' condition. A simulated 'intelligent' monitoring system alerted the operator to the likelihood of a signal. The results indicated that (1) automated monitoring systems can improve performance on primary and secondary tasks; (2) LADs can improve the allocation of attention among tasks and provide information integrated into operator decisions; and (3) LADs do not necessarily add to the operator's attentional load.

  9. A quantum framework for likelihood ratios

    CERN Document Server

    Bond, Rachael L; Ormerod, Thomas C

    2015-01-01

    The ability to calculate precise likelihood ratios is fundamental to many STEM areas, such as decision-making theory, biomedical science, and engineering. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes' theorem either defaults to the marginal probability driven "naive Bayes' classifier", or requires the use of compensatory expectation-maximization techniques. Equally, the use of alternative statistical approaches, such as multivariate logistic regression, may be confounded by other axiomatic conditions, e.g., low levels of co-linearity. This article takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement. In doing so, it is argued that this quantum approach demonstrates: that the likelihood ratio is a real quality of statistical systems; that the naive Bayes' classifier is a spec...

  10. CORA: Emission Line Fitting with Maximum Likelihood

    Science.gov (United States)

    Ness, Jan-Uwe; Wichmann, Rainer

    2011-12-01

    CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.

  11. Asymptotic properties of maximum likelihood estimators in models with multiple change points

    CERN Document Server

    He, Heping; 10.3150/09-BEJ232

    2011-01-01

    Models with multiple change points are used in many fields; however, the theoretical properties of maximum likelihood estimators of such models have received relatively little attention. The goal of this paper is to establish the asymptotic properties of maximum likelihood estimators of the parameters of a multiple change-point model for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters that are common to all segments. Consistency of the maximum likelihood estimators of the change points is established and the rate of convergence is determined; the asymptotic distribution of the maximum likelihood estimators of the parameters of the within-segment distributions is also derived. Since the approach used in single change-point models is not easily extended to multiple change-point models, these results require the introduction of those tools for analyzing the likelihood function in a multiple change-point model.
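
    In the simplest instance of this setting, a single change in the mean of Gaussian data, the maximum likelihood estimator of the change point reduces to a profile-likelihood scan over candidate split points; a minimal sketch with illustrative data:

    ```python
    # Profile-likelihood scan for one change point in a Gaussian model with
    # segment-specific mean and variance (both profiled out).
    import numpy as np

    rng = np.random.default_rng(4)
    y = np.concatenate([rng.normal(0, 1, 120), rng.normal(1.5, 1, 80)])
    n = len(y)

    def seg_loglik(seg):        # Gaussian log-likelihood, nuisance parameters profiled
        return -0.5 * len(seg) * np.log(np.mean((seg - seg.mean()) ** 2))

    tau_hat = max(range(10, n - 10),            # keep segments away from the edges
                  key=lambda t: seg_loglik(y[:t]) + seg_loglik(y[t:]))
    print(tau_hat)                              # ML estimate of the change point
    ```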

  12. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable...
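
    The composite-likelihood idea in miniature: treat genomic regions as if independent and take the product of their marginal likelihoods, here for a single shared frequency parameter. The binomial model and counts are invented for illustration, not the paper's divergence model:

    ```python
    # Maximize a composite log-likelihood: the sum of per-region marginal
    # log-likelihoods, ignoring any dependence between regions.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(7)
    n_regions, n_chrom = 200, 40
    derived = rng.binomial(n_chrom, 0.3, size=n_regions)   # derived-allele counts

    def neg_composite_loglik(p):
        # regions treated as independent even if they are not truly so
        return -np.sum(derived * np.log(p) + (n_chrom - derived) * np.log(1 - p))

    p_hat = minimize_scalar(neg_composite_loglik, bounds=(1e-6, 1 - 1e-6),
                            method="bounded").x
    print(p_hat)                                           # composite ML estimate
    ```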

  13. Maintaining symmetry of simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby...
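
    A small demonstration of the symmetry claim: pairing each draw z with -z makes a simulated probability exactly invariant to the sign of the standard deviation of the mixed parameter. The probit-style toy model is ours, not the paper's:

    ```python
    # Antithetic draws (z, -z) in a simulated probability: the estimate is
    # exactly symmetric in the sign of sigma, by construction.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    z = rng.normal(size=500)

    def simulated_prob(mu, sigma, z):
        zz = np.concatenate([z, -z])               # antithetic pairs
        return np.mean(norm.cdf(mu + sigma * zz))  # simulated choice probability

    print(simulated_prob(0.5, 0.8, z), simulated_prob(0.5, -0.8, z))  # identical
    ```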

  14. Synthesizing Regression Results: A Factored Likelihood Method

    Science.gov (United States)

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  15. Maximum Likelihood Estimation of Search Costs

    NARCIS (Netherlands)

    J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)

    2006-01-01

    textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p

  17. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...

  18. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    An EM-algorithm is used to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process, the proposed method works...

  19. The Generalized Secant Hyperbolic Distribution and a Method for Estimating its Parameters: Modified Maximum Likelihood

    Directory of Open Access Journals (Sweden)

    Luis Alejandro Másmela Caita

    2013-11-01

    Full Text Available Various generalized distributions have been developed in the statistical literature, among them the generalized secant hyperbolic (SHG) distribution. This paper presents an alternative method for estimating the population parameters of the SHG, called modified maximum likelihood (MVM), assuming some alternative expressions that differ from Vaughan's 2002 work and based on the same data set as the original source. The transformed MVM method is implemented computationally, and it yields good approximations to the location and scale parameter values presented by Vaughan in his article. The aim is to provide practitioners with a different estimation methodology.

  20. The emotion of compassion and the likelihood of its expression in nursing practice.

    Science.gov (United States)

    Newham, Roger Alan

    2016-12-16

    Philosophical and empirical work on the nature of the emotions is extensive, and there are many theories of emotions. However, all agree that emotions are not knee-jerk reactions to stimuli and are open to rational assessment or warrant. This paper's focus is on the conditions for compassion as an emotion and the likelihood that they can be met in nursing practice. It thus attempts to keep, as far as possible, compassion as an emotion separate from both moral norms and professional norms, because the empirical or causal conditions that can make experiencing and acting out of compassion difficult seem especially relevant in nursing practice. I consider how theories of emotion in general, and of compassion in particular, are somewhat contested, though all recent accounts agree that emotions are not totally immune to reason. Then, using accounts of the constitutive conditions of the emotion of compassion, I show how these conditions are often quite fragile or unstable in practice, particularly in much nursing practice. In addition, some of the conditions for compassion are shown to be problematic for nursing practice. It is difficult to keep ideas of compassion separate from morality, and this connection is noticeable in the claims made of compassion for nursing; I therefore briefly highlight one such connection, namely the need for normative theory to give an account of the value that emotions such as compassion presume, and the point that compassionate motivation is separate from moral motivation and may conflict with it. The fragility or instability of the emotion of compassion in practice has implications for both what can be expected and what should be expected of compassion, at least if what is wanted is a realist rather than an idealist account of "should."

  1. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    Science.gov (United States)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  2. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

    2005-01-01

    The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

  3. Empire Redux

    DEFF Research Database (Denmark)

    Mercau, Ezequiel

    The Falklands War was shrouded in symbolism and permeated with imperial rhetoric, imagery and memories, bringing to the fore divergent conceptions of Britishness, kinship and belonging. The current dispute, moreover, is frequently framed in terms of national identity, and accusations of imperialism and neo-colonialism persist. Thus in dealing with the conflict, historians and commentators alike have often made references to imperial legacies, yet this has rarely been afforded much careful analysis. Views on this matter continue to be entrenched, either presenting the war as a throwback to nineteenth... from the legacies of empire. Taking decolonization as a starting point, this thesis demonstrates how the idea of a ‘British world’ gained a new lease of life vis-à-vis the Falklands, as the Islanders adopted the rhetorical mantle of ‘abandoned Britons’. Yet this new momentum was partial and fractured...

  4. Empire Redux

    DEFF Research Database (Denmark)

    Mercau, Ezequiel

    The Falklands War was shrouded in symbolism and permeated with imperial rhetoric, imagery and memories, bringing to the fore divergent conceptions of Britishness, kinship and belonging. The current dispute, moreover, is frequently framed in terms of national identity, and accusations of imperialism... from the legacies of empire. Taking decolonization as a starting point, this thesis demonstrates how the idea of a ‘British world’ gained a new lease of life vis-à-vis the Falklands, as the Islanders adopted the rhetorical mantle of ‘abandoned Britons’. Yet this new momentum was partial and fractured, as evinced by the developments triggered by the Argentine invasion in 1982. Despite the apparent firmness of the British government’s commitment to the Islands, cracks and fissures over the meaning of Britishness were simultaneously magnified. The perceived temporal dislocation of defending loyal Britons...

  5. Model Selection Through Sparse Maximum Likelihood Estimation

    CERN Document Server

    Banerjee, Onureena; D'Aspremont, Alexandre

    2007-01-01

    We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
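
    The l1-penalized Gaussian maximum likelihood problem formulated above is now available in standard libraries; a sketch using scikit-learn's graphical lasso (a different solver than the two algorithms proposed in the paper, but the same estimator), with an invented sparse ground truth:

    ```python
    # Sparse inverse-covariance estimation: maximum likelihood with an
    # l1 penalty on the precision matrix recovers the zero pattern.
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(6)
    prec = np.array([[2.0, 0.6, 0.0, 0.0],       # banded (sparse) precision
                     [0.6, 2.0, 0.6, 0.0],
                     [0.0, 0.6, 2.0, 0.6],
                     [0.0, 0.0, 0.6, 2.0]])
    X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(prec), size=2000)

    model = GraphicalLasso(alpha=0.05).fit(X)
    print(np.round(model.precision_, 2))         # zeros recovered off the band
    ```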

  6. Composite likelihood method for inferring local pedigrees

    Science.gov (United States)

    Nielsen, Rasmus

    2017-01-01

    Pedigrees contain information about the genealogical relationships among individuals and are of fundamental importance in many areas of genetic studies. However, pedigrees are often unknown and must be inferred from genetic data. Despite the importance of pedigree inference, existing methods are limited to inferring only close relationships or analyzing a small number of individuals or loci. We present a simulated annealing method for estimating pedigrees in large samples of otherwise seemingly unrelated individuals using genome-wide SNP data. The method supports complex pedigree structures such as polygamous families, multi-generational families, and pedigrees in which many of the member individuals are missing. Computational speed is greatly enhanced by the use of a composite likelihood function which approximates the full likelihood. We validate our method on simulated data and show that it can infer distant relatives more accurately than existing methods. Furthermore, we illustrate the utility of the method on a sample of Greenlandic Inuit. PMID:28827797

  7. Factors Associated with Young Adults’ Pregnancy Likelihood

    Science.gov (United States)

    Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan

    2014-01-01

    OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849

  8. Lessons about likelihood functions from nuclear physics

    CERN Document Server

    Hanson, Kenneth M

    2007-01-01

    Least-squares data analysis is based on the assumption that the normal (Gaussian) distribution appropriately characterizes the likelihood, that is, the conditional probability of each measurement d, given a measured quantity y, p(d | y). On the other hand, there is ample evidence in nuclear physics of significant disagreements among measurements, which are inconsistent with the normal distribution, given their stated uncertainties. In this study the histories of 99 measurements of the lifetimes of five elementary particles are examined to determine what can be inferred about the distribution of their values relative to their stated uncertainties. Taken as a whole, the variations in the data are somewhat larger than their quoted uncertainties would indicate. These data strongly support using a Student t distribution for the likelihood function instead of a normal. The most probable value for the order of the t distribution is 2.6 +/- 0.9. It is shown that analyses based on long-tailed t-distribution likelihood...
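
    A sketch of the practical upshot: fit a location parameter under a Student t likelihood with the quoted order nu = 2.6 rather than a normal likelihood. The measurements and stated uncertainty are invented; the point is only that the t fit resists the discrepant value:

    ```python
    # Location fit under a heavy-tailed Student t likelihood; the sample mean
    # (the normal-likelihood estimate) is dragged up by the outlier.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import t as student_t

    d = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 14.0])   # one outlying measurement
    sigma = 0.2                                         # common stated uncertainty

    def t_nll(mu, nu=2.6):
        return -np.sum(student_t.logpdf((d - mu) / sigma, df=nu))

    mu_t = minimize_scalar(t_nll, bounds=(8, 15), method="bounded").x
    print(mu_t, d.mean())   # t-based estimate stays near 10; the mean does not
    ```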

  9. Maximum likelihood continuity mapping for fraud detection

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1997-05-01

    The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.

  10. Likelihood methods and classical burster repetition

    CERN Document Server

    Graziani, C; Graziani, Carlo; Lamb, Donald Q

    1995-01-01

    We develop a likelihood methodology which can be used to search for evidence of burst repetition in the BATSE catalog, and to study the properties of the repetition signal. We use a simplified model of burst repetition in which a number N_r of sources which repeat a fixed number of times N_rep are superposed upon a number N_nr of non-repeating sources. The instrument exposure is explicitly taken into account. By computing the likelihood for the data, we construct a probability distribution in parameter space that may be used to infer the probability that a repetition signal is present, and to estimate the values of the repetition parameters. The likelihood function contains contributions from all the bursts, irrespective of the size of their positional errors; the more uncertain a burst's position is, the less constraining is its contribution. Thus this approach makes maximal use of the data, and avoids the ambiguities of sample selection associated with data cuts on error circle size. We...

  11. Database likelihood ratios and familial DNA searching

    CERN Document Server

    Slooten, Klaas

    2012-01-01

    Familial Searching is the process of searching in a DNA database for relatives of a given individual. It is well known that in order to evaluate the genetic evidence in favour of a certain given form of relatedness between two individuals, one needs to calculate the appropriate likelihood ratio, which is in this context called a Kinship Index. Suppose that the database contains, for a given type of relative, at most one related individual. Given prior probabilities of being the relative for all persons in the database, we derive the likelihood ratio for each database member in favour of being that relative. This likelihood ratio takes all the Kinship Indices between target and members of the database into account. We also compute the corresponding posterior probabilities. We then discuss two ways of selecting a subset from the database that contains the relative with a known probability, or at least a useful lower bound thereof. We discuss the relation between these approaches and illustrate them with Familia...
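
    A minimal sketch of the posterior computation described here, assuming at most one true relative in the database and a likelihood ratio of 1 for the "relative absent" alternative; the priors and kinship indices are invented:

    ```python
    # Posterior probabilities over database members from kinship indices (LRs)
    # and prior probabilities, with leftover mass on 'relative not in database'.
    import numpy as np

    def relative_posteriors(lr, prior):
        """lr[i]: LR for member i being the relative; prior[i]: its prior."""
        lr = np.asarray(lr, dtype=float)
        prior = np.asarray(prior, dtype=float)
        p_out = 1.0 - prior.sum()               # prior that the relative is absent
        post = prior * lr
        return post / (post.sum() + p_out)      # LR = 1 for the 'absent' case

    print(relative_posteriors([120.0, 2.0, 0.1], [0.001, 0.001, 0.001]))
    ```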

  12. Realism And Empirical Evidence

    CERN Document Server

    Schmelzer, I

    1997-01-01

    We define realism using a slightly modified version of the EPR criterion of reality. This version is strong enough to show that relativity is incomplete. We show that this definition of realism is nonetheless compatible with the general principles of causality and canonical quantum theory as well as with experimental evidence in the (special and general) relativistic domain. We show that the realistic theories we present here, compared with the standard relativistic theories, have higher empirical content in the strong sense defined by Popper's methodology.

  13. β-empirical Bayes inference and model diagnosis of microarray data

    Directory of Open Access Journals (Sweden)

    Hossain Mollah Mohammad

    2012-06-01

    Full Text Available Abstract Background Microarray data enables the high-throughput survey of mRNA expression profiles at the genomic level; however, the data presents a challenging statistical problem because of the large number of transcripts with small sample sizes that are obtained. To reduce the dimensionality, various Bayesian or empirical Bayes hierarchical models have been developed. However, because of the complexity of the microarray data, no model can explain the data fully. It is generally difficult to scrutinize the irregular patterns of expression that are not expected by the usual statistical gene by gene models. Results As an extension of empirical Bayes (EB) procedures, we have developed the β-empirical Bayes (β-EB) approach based on a β-likelihood measure which can be regarded as an ’evidence-based’ weighted (quasi-)likelihood inference. The weight of a transcript t is described as a power function of its likelihood, f_β(y_t|θ). Genes with low likelihoods have unexpected expression patterns and low weights. By assigning low weights to outliers, the inference becomes robust. The value of β, which controls the balance between robustness and efficiency, is selected by maximizing the predictive β0-likelihood by cross-validation. The proposed β-EB approach identified six significant (p < 10^-5) contaminated transcripts as differentially expressed (DE) in normal/tumor tissues from the head and neck of cancer patients. These six genes were all confirmed to be related to cancer; they were not identified as DE genes by the classical EB approach. When applied to the eQTL analysis of Arabidopsis thaliana, the proposed β-EB approach identified some potential master regulators that were missed by the EB approach. Conclusions The simulation data and real gene expression data showed that the proposed β-EB method was robust against outliers. The distribution of the weights was used to scrutinize the irregular patterns of expression and diagnose the model...

  14. Strong Consistency of Maximum Quasi-Likelihood Estimator in Quasi-Likelihood Nonlinear Models

    Institute of Scientific and Technical Information of China (English)

    夏天; 孔繁超

    2008-01-01

    This paper proposes some regularity conditions. On the basis of the proposed regularity conditions, we show the strong consistency of maximum quasi-likelihood estimation (MQLE) in quasi-likelihood nonlinear models (QLNM). Our results may be regarded as a further generalization of the relevant results in Ref. [4].

  15. A Walk on the Wild Side: The Impact of Music on Risk-Taking Likelihood

    Directory of Open Access Journals (Sweden)

    Rickard Enström

    2017-05-01

    Full Text Available From a marketing perspective, there has been substantial interest in the role of risk perception in consumer behavior. Specific ‘problem music’ like rap and heavy metal has long been associated with delinquent behavior, including violence, drug use, and promiscuous sex. Although individuals’ risk preferences have been investigated across a range of decision-making situations, there has been little empirical work demonstrating the direct role music may have on the likelihood of engaging in risky activities. In the exploratory study reported here, we assessed the impact of listening to different styles of music while assessing risk-taking likelihood through a psychometric scale. Risk-taking likelihood was measured across ethical, financial, health and safety, recreational and social domains. Through the means of a canonical correlation analysis, the multivariate relationship between different music styles and individual risk-taking likelihood across the different domains is discussed. Our results indicate that listening to different types of music does influence risk-taking likelihood, though not in areas of health and safety.

  16. Likelihood of tree topologies with fossils and diversification rate estimation.

    Science.gov (United States)

    Didier, Gilles; Fau, Marine; Laurin, Michel

    2017-04-18

    Since the diversification process cannot be directly observed at the human scale, it has to be studied from the information available, namely the extant taxa and the fossil record. In this sense, phylogenetic trees including both extant taxa and fossils are the most complete representations of the diversification process that one can get. Such phylogenetic trees can be reconstructed from molecular and morphological data, to some extent. Among the temporal information of such phylogenetic trees, fossil ages are by far the most precisely known (divergence times are inferences calibrated mostly with fossils). We propose here a method to compute the likelihood of a phylogenetic tree with fossils in which the only considered time information is the fossil ages, and apply it to the estimation of the diversification rates from such data. Since it is required in our computation, we provide a method for determining the probability of a tree topology under the standard diversification model. Testing our approach on simulated data shows that the maximum likelihood rate estimates from the phylogenetic tree topology and the fossil dates are almost as accurate as those obtained by taking into account all the data, including the divergence times. Moreover, they are substantially more accurate than the estimates obtained only from the exact divergence times (without taking into account the fossil record). We also provide an empirical example composed of 50 Permo-carboniferous eupelycosaur (early synapsid) taxa ranging in age from about 315 Ma (Late Carboniferous) to 270 Ma (shortly after the end of the Early Permian). Our analyses suggest a speciation (cladogenesis, or birth) rate of about 0.1 per lineage and per My, a marginally lower extinction rate, and a considerable hidden paleobiodiversity of early synapsids. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email...

  17. Molecular clock fork phylogenies: closed form analytic maximum likelihood solutions.

    Science.gov (United States)

    Chor, Benny; Snir, Sagi

    2004-12-01

Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Quoting Ziheng Yang, who initiated the analytic approach, "this seems to be the simplest case, but has many of the conceptual and statistical complexities involved in phylogenetic estimation." In this work, we give general analytic solutions for a family of trees with four taxa, two-state characters, under a molecular clock. The change from three to four taxa incurs a major increase in the complexity of the underlying algebraic system, and requires novel techniques and approaches. We start by presenting the general maximum likelihood problem on phylogenetic trees as a constrained optimization problem, and the resulting system of polynomial equations. In full generality, it is infeasible to solve this system, therefore specialized tools for the molecular clock case are developed. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). We combine the ultrametric properties of molecular clock fork trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations for the fork. We finally employ symbolic algebra software to obtain closed-form analytic solutions (expressed parametrically in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that each fork topology has a unique (local and global) ML point.

  18. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.

  19. Transfer Entropy as a Log-Likelihood Ratio

    Science.gov (United States)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
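
    A quick way to see the finite-chain version of this equivalence is numerically: with plug-in (maximum likelihood) transition estimates, the log-likelihood ratio between the full model p(x'|x, y) and the reduced model p(x'|x) equals n times the estimated transfer entropy. A self-contained sketch in Python, with an assumed coupling and all settings illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Coupled binary pair: Y is i.i.d. fair; X copies the previous Y with prob 0.7.
y = rng.integers(0, 2, n)
coin = rng.random(n)
noise = rng.integers(0, 2, n)
x = np.where(coin < 0.7, np.roll(y, 1), noise)
x[0] = noise[0]

# Transition counts for the full model p(x'|x,y) and the reduced model p(x'|x).
full = np.zeros((2, 2, 2))                     # indexed by [x_t, y_t, x_{t+1}]
np.add.at(full, (x[:-1], y[:-1], x[1:]), 1)
red = full.sum(axis=1)                         # indexed by [x_t, x_{t+1}]

def max_loglik(counts):
    # Maximized log-likelihood of a conditional multinomial model.
    p = counts / np.maximum(counts.sum(axis=-1, keepdims=True), 1)
    return np.sum(counts * np.log(np.where(counts > 0, p, 1.0)))

llr = 2.0 * (max_loglik(full) - max_loglik(red))

# Plug-in transfer entropy TE(Y -> X) from the same empirical distributions.
joint = full / full.sum()
p_xy = full / np.maximum(full.sum(axis=2, keepdims=True), 1)   # p(x'|x,y)
p_x = red / np.maximum(red.sum(axis=1, keepdims=True), 1)      # p(x'|x)
ratio = (p_xy + 1e-300) / (p_x[:, None, :] + 1e-300)
te = np.sum(np.where(joint > 0, joint * np.log(ratio), 0.0))

print(f"LLR = {llr:.2f}   2*n*TE = {2 * (n - 1) * te:.2f}")    # identical values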

  20. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

Full Text Available Item response theory (IRT models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
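
    To make the estimation method concrete, here is a rough sketch of marginal maximum likelihood for the simpler Rasch model (not the paper's generalized partial credit model), written in Python rather than R so it is self-contained. The latent ability is integrated out with Gauss-Hermite quadrature; all names and settings are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, J = 2000, 5                                   # persons, items
true_b = np.linspace(-1.5, 1.5, J)               # item difficulties
theta = rng.normal(size=(N, 1))                  # latent abilities ~ N(0, 1)
Y = (rng.random((N, J)) < 1 / (1 + np.exp(-(theta - true_b)))).astype(float)

# Gauss-Hermite nodes/weights, rescaled for expectations under N(0, 1).
nodes, weights = np.polynomial.hermite.hermgauss(41)
nodes = nodes * np.sqrt(2.0)
weights = weights / np.sqrt(np.pi)

def neg_marginal_loglik(b):
    # P(correct | theta_k) for every quadrature node k: shape (K, J).
    p = 1 / (1 + np.exp(-(nodes[:, None] - b[None, :])))
    # Log-likelihood of each response pattern at each node: shape (N, K).
    log_pat = Y @ np.log(p).T + (1 - Y) @ np.log(1 - p).T
    marg = np.exp(log_pat) @ weights             # integrate theta out
    return -np.sum(np.log(marg))

fit = minimize(neg_marginal_loglik, x0=np.zeros(J), method="BFGS")
print("estimated difficulties:", np.round(fit.x, 2))
print("true difficulties:     ", true_b)
```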

  1. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties of the process in terms of stochastic and deterministic trends as well as stationary components. In particular, the behaviour of the cointegrating relations is described in terms of geometric ergodicity. Despite the fact that no deterministic terms are included, the process will in general have both stochastic trends and a linear trend. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  2. Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander

    2017-09-03

    We use available measurements to estimate the unknown parameters (variance, smoothness parameter, and covariance length) of a covariance function by maximizing the joint Gaussian log-likelihood function. To overcome cubic complexity in the linear algebra, we approximate the discretized covariance function in the hierarchical (H-) matrix format. The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations. The H-matrix technique allows us to work with general covariance matrices in an efficient way, since H-matrices can approximate inhomogeneous covariance functions, with a fairly general mesh that is not necessarily axes-parallel, and neither the covariance matrix itself nor its inverse have to be sparse. We demonstrate our method with Monte Carlo simulations and an application to soil moisture data. The C, C++ codes and data are freely available.
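
    The quantity being maximized is the usual Gaussian log-likelihood. A dense-matrix reference sketch (whose Cholesky step is exactly what the H-matrix approximation replaces to escape the cubic cost) might look as follows, with an exponential covariance standing in for the Matérn family and all parameter values illustrative:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)
X = rng.random((400, 2))                   # n = 400 locations in the unit square
d = cdist(X, X)                            # pairwise distances

def simulate(var, length):
    C = var * np.exp(-d / length)
    return np.linalg.cholesky(C + 1e-10 * np.eye(len(X))) @ rng.normal(size=len(X))

z = simulate(var=2.0, length=0.3)          # synthetic observations

def neg_loglik(log_params):
    var, length = np.exp(log_params)       # optimize on the log scale
    C = var * np.exp(-d / length) + 1e-10 * np.eye(len(X))
    L, low = cho_factor(C, lower=True)     # O(n^3): the step H-matrices replace
    quad = z @ cho_solve((L, low), z)
    logdet = 2 * np.sum(np.log(np.diag(L)))
    return 0.5 * (logdet + quad + len(z) * np.log(2 * np.pi))

fit = minimize(neg_loglik, x0=np.log([1.0, 0.1]), method="Nelder-Mead")
print("MLE (variance, length):", np.exp(fit.x))
```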

  3. On divergences tests for composite hypotheses under composite likelihood

    OpenAIRE

    Martin, Nirian; Pardo, Leandro; Zografos, Konstantinos

    2016-01-01

It is well known that in some situations it is not easy to compute the likelihood function, as the datasets might be large or the model too complex. In such contexts the composite likelihood, derived by multiplying the likelihoods of subsets of the variables, may be useful. The extension of the classical likelihood ratio test statistic to the framework of composite likelihoods provides a procedure for testing in this context. In this paper we intro...

  4. Exact likelihood-free Markov chain Monte Carlo for elliptically contoured distributions.

    Science.gov (United States)

    Muchmore, Patrick; Marjoram, Paul

    2015-08-01

    Recent results in Markov chain Monte Carlo (MCMC) show that a chain based on an unbiased estimator of the likelihood can have a stationary distribution identical to that of a chain based on exact likelihood calculations. In this paper we develop such an estimator for elliptically contoured distributions, a large family of distributions that includes and generalizes the multivariate normal. We then show how this estimator, combined with pseudorandom realizations of an elliptically contoured distribution, can be used to run MCMC in a way that replicates the stationary distribution of a likelihood based chain, but does not require explicit likelihood calculations. Because many elliptically contoured distributions do not have closed form densities, our simulation based approach enables exact MCMC based inference in a range of cases where previously it was impossible.
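
    The pseudo-marginal principle behind such chains is compact enough to sketch: replacing the likelihood in the Metropolis-Hastings ratio with any nonnegative unbiased estimate leaves the stationary distribution unchanged, provided the estimate is carried along with the state. The toy latent-variable model below is illustrative and is not the authors' elliptically contoured construction:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: z ~ N(mu, 1), y | z ~ N(z, 1); the exact likelihood is N(mu, 2).
y = rng.normal(1.0, np.sqrt(2.0), size=50)

def likelihood_estimate(mu, m=64):
    """Unbiased Monte Carlo estimate of prod_i p(y_i | mu), marginalizing z."""
    z = rng.normal(mu, 1.0, size=(m, 1))                     # m latent draws
    dens = np.exp(-0.5 * (y - z) ** 2) / np.sqrt(2 * np.pi)  # p(y_i | z_k)
    return np.prod(dens.mean(axis=0))                        # avg over z, prod over i

# Pseudo-marginal Metropolis-Hastings with a flat prior on mu.
mu, L_hat = 0.0, likelihood_estimate(0.0)
chain = []
for _ in range(20_000):
    mu_prop = mu + 0.3 * rng.normal()
    L_prop = likelihood_estimate(mu_prop)
    if rng.random() < L_prop / L_hat:      # estimated likelihoods in the MH ratio
        mu, L_hat = mu_prop, L_prop        # keep the estimate with the state!
    chain.append(mu)

print("posterior mean of mu:", np.mean(chain[5000:]))
```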

  5. Small-sample likelihood inference in extreme-value regression models

    CERN Document Server

    Ferrari, Silvia L P

    2012-01-01

We deal with a general class of extreme-value regression models introduced by Barreto-Souza and Vasconcellos (2011). Our goal is to derive an adjusted likelihood ratio statistic that is approximately distributed as χ² with a high degree of accuracy. Although the adjusted statistic requires more computational effort than its unadjusted counterpart, it is shown that the adjustment term has a simple compact form that can be easily implemented in standard statistical software. Further, we compare the finite sample performance of the three classical tests (likelihood ratio, Wald, and score), the gradient test that has been recently proposed by Terrell (2002), and the adjusted likelihood ratio test obtained in this paper. Our simulations favor the latter. Applications of our results are presented. Key words: Extreme-value regression; Gradient test; Gumbel distribution; Likelihood ratio test; Nonlinear models; Score test; Small-sample adjustments; Wald test.

  6. Dimension-Independent Likelihood-Informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-01-07

Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior [distributions], we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation are used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.

  7. CMB Power Spectrum Likelihood with ILC

    CERN Document Server

    Dick, Jason; Delabrouille, Jacques

    2012-01-01

    We extend the ILC method in harmonic space to include the error in its CMB estimate. This allows parameter estimation routines to take into account the effect of the foregrounds as well as the errors in their subtraction in conjunction with the ILC method. Our method requires the use of a model of the foregrounds which we do not develop here. The reduction of the foreground level makes this method less sensitive to unaccounted for errors in the foreground model. Simulations are used to validate the calculations and approximations used in generating this likelihood function.

  8. An improved likelihood model for eye tracking

    DEFF Research Database (Denmark)

    Hammoud, Riad I.; Hansen, Dan Witzner

    2007-01-01

approach in such cases is to abandon the tracking routine and re-initialize eye detection. Of course this may be a difficult process due to the missed-data problem. Accordingly, what is needed is an efficient method of reliably tracking a person's eyes between successively produced video image frames, even ... are challenging. The paper proposes a log likelihood-ratio function of foreground and background models in a particle filter-based eye tracking framework. It fuses key information from the even and odd infrared fields (dark and bright pupil) and their corresponding subtractive image into one single observation model...

  9. CUSUM control charts based on likelihood ratio for preliminary analysis

    Institute of Scientific and Technical Information of China (English)

    Yi DAI; Zhao-jun WANG; Chang-liang ZOU

    2007-01-01

To detect and estimate a shift in the mean, the deviation, or both for preliminary analysis, the control chart based on the likelihood ratio test (LRT) is the most popular statistical process control (SPC) tool. Sullivan and Woodall pointed out that the test statistic lrt(n1, n2) is approximately distributed as χ2(2) when the sample sizes n, n1 and n2 are very large, where n1 = 2, 3, ..., n − 2 and n2 = n − n1. So it is inevitable that n1 or n2 is not large. In this paper the limit distribution of lrt(n1, n2) for fixed n1 or n2 is derived, and exact analytic formulae for evaluating the expectation and the variance of the limit distribution are obtained. In addition, the properties of the standardized likelihood ratio statistic slr(n1, n) are discussed. Although slr(n1, n) contains the most important information, slr(i, n) (i ≠ n1) also contains a lot of information. The cumulative sum (CUSUM) control chart can exploit this additional information, so we propose two CUSUM control charts based on the likelihood ratio statistics for the preliminary analysis of individual observations. One focuses on detecting shifts in location in the historical data, and the other is more general, detecting a shift in location, scale, or both. Moreover, simulation results show that the two proposed control charts are superior to their competitors not only in detecting sustained shifts but also in detecting some other out-of-control situations considered in this paper.
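
    The building block is the two-sample Gaussian likelihood ratio for a split of the n observations at n1: with means and variances replaced by their maximum likelihood estimates, the statistic reduces to a difference of log-variances. A minimal sketch of the split-scan used in preliminary analysis (the CUSUM charts of the paper accumulate related statistics; everything here is illustrative):

```python
import numpy as np

def lrt(x, n1):
    """-2 log LR for one change at n1: different mean and/or variance afterwards."""
    n = len(x)
    a, b = x[:n1], x[n1:]
    s0 = np.mean((x - x.mean()) ** 2)     # MLE variance under "no change"
    s1 = np.mean((a - a.mean()) ** 2)     # MLE variances under "change at n1"
    s2 = np.mean((b - b.mean()) ** 2)
    return n * np.log(s0) - n1 * np.log(s1) - (n - n1) * np.log(s2)

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.2, 1, 40)])  # shift at 60

stats = [lrt(x, n1) for n1 in range(2, len(x) - 1)]   # n1 = 2, ..., n-2
print("max statistic at n1 =", 2 + int(np.argmax(stats)))
```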

  11. LIKEDM: Likelihood calculator of dark matter detection

    Science.gov (United States)

    Huang, Xiaoyuan; Tsai, Yue-Lin Sming; Yuan, Qiang

    2017-04-01

    With the large progress in searches for dark matter (DM) particles with indirect and direct methods, we develop a numerical tool that enables fast calculations of the likelihoods of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool - LIKEDM, likelihood calculator for dark matter detection - is to bridge the gap between a particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0) focusing on the constraints from indirect detection of DM with charged cosmic and gamma rays. Direct detection will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.

  12. Multiplicative earthquake likelihood models incorporating strain rates

    Science.gov (United States)

    Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.

    2017-01-01

We examine the potential for strain-rate variables to improve long-term earthquake likelihood models. We derive a set of multiplicative hybrid earthquake likelihood models in which cell rates in a spatially uniform baseline model are scaled using combinations of covariates derived from earthquake catalogue data, fault data, and strain rates for the New Zealand region. Three components of the strain rate estimated from GPS data over the period 1991-2011 are considered: the shear, rotational and dilatational strain rates. The hybrid model parameters are optimised for earthquakes of M 5 and greater over the period 1987-2006 and tested on earthquakes from the period 2012-2015, which is independent of the strain rate estimates. The shear strain rate is overall the most informative individual covariate, as indicated by Molchan error diagrams as well as multiplicative modelling. Most models including strain rates are significantly more informative than the best models excluding strain rates in both the fitting and testing period. A hybrid that combines the shear and dilatational strain rates with a smoothed seismicity covariate is the most informative model in the fitting period, and a simpler model without the dilatational strain rate is the most informative in the testing period. These results have implications for probabilistic seismic hazard analysis and can be used to improve the background model component of medium-term and short-term earthquake forecasting models.
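
    The multiplicative structure itself is simple to illustrate: cell rates are a spatially uniform baseline scaled by powers of the covariates, with the exponents fit by a Poisson likelihood over cells. The sketch below uses simulated covariates and is only a schematic of this model class, not the paper's New Zealand implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
cells = 5000
baseline = np.full(cells, 0.02)                 # uniform baseline rate per cell
shear = np.exp(rng.normal(size=cells))          # stand-in: shear strain rate
smooth = np.exp(rng.normal(size=cells))         # stand-in: smoothed seismicity

true_rate = baseline * shear ** 0.8 * smooth ** 0.4
counts = rng.poisson(true_rate)                 # observed earthquakes per cell

def neg_loglik(theta):
    rate = baseline * shear ** theta[0] * smooth ** theta[1]
    return np.sum(rate - counts * np.log(rate))  # Poisson likelihood, up to const

fit = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
print("fitted exponents (shear, smoothed seismicity):", np.round(fit.x, 2))
```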

  13. CORA - emission line fitting with Maximum Likelihood

    Science.gov (United States)

    Ness, J.-U.; Wichmann, R.

    2002-07-01

The advent of pipeline-processed data both from space- and ground-based observatories often removes the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
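
    The heart of such a tool is the Poisson log-likelihood of a line-plus-background model evaluated per spectral bin. The sketch below simply optimizes that likelihood numerically rather than using CORA's fixed-point update, and every value in it is illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(5)
wav = np.linspace(13.0, 14.0, 200)              # wavelength grid (Angstrom)

def model(params, wav):
    flux, center, sigma, bkg = params
    line = flux * np.exp(-0.5 * ((wav - center) / sigma) ** 2)
    return line + bkg                           # expected counts per bin

truth = (8.0, 13.45, 0.03, 0.5)
counts = rng.poisson(model(truth, wav))         # low-count Poisson spectrum

def neg_loglik(params):
    mu = model(params, wav)
    if np.any(mu <= 0):
        return np.inf                           # keep the Poisson rate positive
    return -np.sum(poisson.logpmf(counts, mu))

fit = minimize(neg_loglik, x0=(5.0, 13.5, 0.05, 1.0), method="Nelder-Mead")
print("ML estimates (flux, center, sigma, background):", np.round(fit.x, 3))
```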

  14. Maximum Likelihood Analysis in the PEN Experiment

    Science.gov (United States)

    Lehman, Martin

    2013-10-01

The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10⁻³ to 5 × 10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.

  15. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polyno

  16. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2002-01-01

Composite likelihood; Two-stage estimation; Family studies; Copula; Optimal weights; All possible pairs

  17. Uncovering Voter Preference Structures Using a Best-Worst Scaling Procedure: Method and Empirical Example in the British General Election of 2010

    DEFF Research Database (Denmark)

    Ormrod, Robert P.; Savigny, Heather

Best-Worst scaling (BWS) is a method that can provide insights into the preference structures of voters. By asking voters to select the ‘best’ and ‘worst’ option (‘most important’ and ‘least important’ media in our investigation) from a short list of alternatives it is possible to uncover the relative importance of each of the options. Using a Balanced Incomplete Block research design we reduce the number of comparisons to the number of media, thus reducing fatigue and complexity problems associated with the standard paired-preference scale from which BWS is developed. Scale variables can be calculated to conduct statistical procedures such as multiple regression and MANOVA. We demonstrate the utility of the method for analysing events in the political sphere using data collected from 282 voters immediately after the British General Election of 2010 on voter preferences regarding the relative...

  18. Inference in HIV dynamics models via hierarchical likelihood

    OpenAIRE

    2010-01-01

HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have an analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelih...

  19. tmle : An R Package for Targeted Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Susan Gruber

    2012-11-01

Full Text Available Targeted maximum likelihood estimation (TMLE is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user supplied covariates, including an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.
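
    For readers who want the mechanics rather than the package, the targeting step for the additive effect of a binary treatment can be sketched in a few lines: fluctuate the initial outcome regression along a "clever covariate" using a logistic regression with an offset. This is a bare-bones illustration under assumed parametric working models, written in Python, not the tmle package's implementation:

```python
import numpy as np
import statsmodels.api as sm

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    return np.log(p / (1.0 - p))

rng = np.random.default_rng(6)
n = 5000
W = rng.normal(size=n)                                   # a single confounder
A = rng.binomial(1, expit(W))                            # treatment depends on W
Y = rng.binomial(1, expit(0.5 * A + W))                  # binary outcome

# Initial estimates: outcome regression Q(A, W) and propensity score g(W).
XQ = np.column_stack([np.ones(n), A, W])
X1 = np.column_stack([np.ones(n), np.ones(n), W])        # everyone treated
X0 = np.column_stack([np.ones(n), np.zeros(n), W])       # no one treated
Q_fit = sm.GLM(Y, XQ, family=sm.families.Binomial()).fit()
Q_A, Q_1, Q_0 = Q_fit.predict(XQ), Q_fit.predict(X1), Q_fit.predict(X0)

Xg = np.column_stack([np.ones(n), W])
g = sm.GLM(A, Xg, family=sm.families.Binomial()).fit().predict(Xg)

# Targeting step: fluctuate Q along the clever covariate H via an offset GLM.
H = A / g - (1 - A) / (1 - g)
eps = sm.GLM(Y, H[:, None], family=sm.families.Binomial(),
             offset=logit(Q_A)).fit().params[0]

Q_1_star = expit(logit(Q_1) + eps / g)                   # H evaluated at A = 1
Q_0_star = expit(logit(Q_0) - eps / (1 - g))             # H evaluated at A = 0
print("TMLE estimate of the additive effect:", np.mean(Q_1_star - Q_0_star))
```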

  20. Nonparametric likelihood based estimation of linear filters for point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard

    2015-01-01

... result is a representation of the gradient of the log-likelihood, which we use to derive computable approximations of the log-likelihood and the gradient by time discretization. These approximations are then used to minimize the approximate penalized log-likelihood. For time and memory efficiency...

  1. Empirical Bayes Estimates of Domain Scores under Binomial and Hypergeometric Distributions for Test Scores.

    Science.gov (United States)

    Lin, Miao-Hsiang; Hsiung, Chao A.

    1994-01-01

    Two simple empirical approximate Bayes estimators are introduced for estimating domain scores under binomial and hypergeometric distributions respectively. Criteria are established regarding use of these functions over maximum likelihood estimation counterparts. (SLD)

  2. Hierarchical Linear Modeling with Maximum Likelihood, Restricted Maximum Likelihood, and Fully Bayesian Estimation

    Science.gov (United States)

    Boedeker, Peter

    2017-01-01

    Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…

  3. MLDS: Maximum Likelihood Difference Scaling in R

    Directory of Open Access Journals (Sweden)

    Kenneth Knoblauch

    2008-01-01

Full Text Available The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval, (a,b) or (c,d)) is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
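
    The stochastic model is easy to state in code: the response depends on the signed difference of the two perceptual intervals plus Gaussian noise, and the scale values maximize the resulting Bernoulli likelihood. A Python sketch (the R package fits this via a GLM; the anchoring convention and all settings below are assumptions):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
K = 8                                        # stimulus levels
true_scale = np.linspace(0, 1, K) ** 2       # hidden perceptual scale

# Random difference-scaling trials a < b <= c < d; response: "(c,d) is larger".
quads = np.sort(rng.choice(K, size=(600, 4), replace=True), axis=1)
keep = (quads[:, 0] < quads[:, 1]) & (quads[:, 1] <= quads[:, 2]) & (quads[:, 2] < quads[:, 3])
a, b, c, d = quads[keep].T
delta = (true_scale[d] - true_scale[c]) - (true_scale[b] - true_scale[a])
resp = (delta + 0.1 * rng.normal(size=len(delta)) > 0).astype(int)

def neg_loglik(params):
    # Anchor psi_0 = 0 and psi_{K-1} = 1; free interior values plus noise sd.
    psi = np.concatenate([[0.0], params[:-1], [1.0]])
    sigma = np.exp(params[-1])
    dec = (psi[d] - psi[c]) - (psi[b] - psi[a])
    p = np.clip(norm.cdf(dec / sigma), 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

x0 = np.concatenate([np.linspace(0, 1, K)[1:-1], [np.log(0.2)]])
fit = minimize(neg_loglik, x0, method="BFGS")
print("estimated scale:", np.round(np.concatenate([[0], fit.x[:-1], [1]]), 3))
```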

  4. Parameter likelihood of intrinsic ellipticity correlations

    CERN Document Server

    Capranico, Federica; Schaefer, Bjoern Malte

    2012-01-01

    Subject of this paper are the statistical properties of ellipticity alignments between galaxies evoked by their coupled angular momenta. Starting from physical angular momentum models, we bridge the gap towards ellipticity correlations, ellipticity spectra and derived quantities such as aperture moments, comparing the intrinsic signals with those generated by gravitational lensing, with the projected galaxy sample of EUCLID in mind. We investigate the dependence of intrinsic ellipticity correlations on cosmological parameters and show that intrinsic ellipticity correlations give rise to non-Gaussian likelihoods as a result of nonlinear functional dependencies. Comparing intrinsic ellipticity spectra to weak lensing spectra we quantify the magnitude of their contaminating effect on the estimation of cosmological parameters and find that biases on dark energy parameters are very small in an angular-momentum based model in contrast to the linear alignment model commonly used. Finally, we quantify whether intrins...

  5. Dishonestly increasing the likelihood of winning

    Directory of Open Access Journals (Sweden)

    Shaul Shalvi

    2012-05-01

Full Text Available People not only seek to avoid losses or secure gains; they also attempt to create opportunities for obtaining positive outcomes. When distributing money between gambles with equal probabilities, people often invest in turning negative gambles into positive ones, even at a cost of reduced expected value. Results of an experiment revealed that (1) the preference to turn a negative outcome into a positive outcome exists when people's ability to do so depends on their performance levels (rather than merely on their choice), (2) this preference is amplified when the likelihood to turn negative into positive is high rather than low, and (3) this preference is attenuated when people can lie about their performance levels, allowing them to turn negative into positive not by performing better but rather by lying about how well they performed.

  6. Transfer Entropy as a Log-likelihood Ratio

    CERN Document Server

    Barnett, Lionel

    2012-01-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the neurosciences, econometrics and the analysis of complex system dynamics in diverse fields. We show that for a class of parametrised partial Markov models for jointly stochastic processes in discrete time, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. The result generalises the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression. In the general case, an asymptotic $\\chi^2$ distribution for the model transfer entropy estimator is established.

  7. On the Performance of Maximum Likelihood Inverse Reinforcement Learning

    CERN Document Server

    Ratia, Héctor; Martinez-Cantin, Ruben

    2012-01-01

Inverse reinforcement learning (IRL) addresses the problem of recovering a task description given a demonstration of the optimal policy used to solve such a task. The optimal policy is usually provided by an expert or teacher, making IRL especially suitable for the problem of apprenticeship learning. The task description is encoded in the form of a reward function of a Markov decision process (MDP). Several algorithms have been proposed to find the reward function corresponding to a set of demonstrations. One of the algorithms that has provided the best results in different applications is a gradient method to optimize a policy squared error criterion. On a parallel line of research, other authors have recently presented a gradient approximation of the maximum likelihood estimate of the reward signal. In general, both approaches approximate the gradient estimate and the criteria at different stages to make the algorithm tractable and efficient. In this work, we provide a detailed description of the different metho...

  8. Maximum likelihood identification of aircraft stability and control derivatives

    Science.gov (United States)

    Mehra, R. K.; Stepner, D. E.; Tyler, J. S.

    1974-01-01

    Application of a generalized identification method to flight test data analysis. The method is based on the maximum likelihood (ML) criterion and includes output error and equation error methods as special cases. Both the linear and nonlinear models with and without process noise are considered. The flight test data from lateral maneuvers of HL-10 and M2/F3 lifting bodies are processed to determine the lateral stability and control derivatives, instrumentation accuracies, and biases. A comparison is made between the results of the output error method and the ML method for M2/F3 data containing gusts. It is shown that better fits to time histories are obtained by using the ML method. The nonlinear model considered corresponds to the longitudinal equations of the X-22 VTOL aircraft. The data are obtained from a computer simulation and contain both process and measurement noise. The applicability of the ML method to nonlinear models with both process and measurement noise is demonstrated.

  9. Housing arrangement and location determine the likelihood of housing loss due to wildfire

    Science.gov (United States)

    Syphard, Alexandra D.; Keeley, Jon E.; Massada, Avi Bar; Brennan, Teresa J.; Radeloff, Volker C.

    2012-01-01

    Surging wildfires across the globe are contributing to escalating residential losses and have major social, economic, and ecological consequences. The highest losses in the U.S. occur in southern California, where nearly 1000 homes per year have been destroyed by wildfires since 2000. Wildfire risk reduction efforts focus primarily on fuel reduction and, to a lesser degree, on house characteristics and homeowner responsibility. However, the extent to which land use planning could alleviate wildfire risk has been largely missing from the debate despite large numbers of homes being placed in the most hazardous parts of the landscape. Our goal was to examine how housing location and arrangement affects the likelihood that a home will be lost when a wildfire occurs. We developed an extensive geographic dataset of structure locations, including more than 5500 structures that were destroyed or damaged by wildfire since 2001, and identified the main contributors to property loss in two extensive, fire-prone regions in southern California. The arrangement and location of structures strongly affected their susceptibility to wildfire, with property loss most likely at low to intermediate structure densities and in areas with a history of frequent fire. Rates of structure loss were higher when structures were surrounded by wildland vegetation, but were generally higher in herbaceous fuel types than in higher fuel-volume woody types. Empirically based maps developed using housing pattern and location performed better in distinguishing hazardous from non-hazardous areas than maps based on fuel distribution. The strong importance of housing arrangement and location indicate that land use planning may be a critical tool for reducing fire risk, but it will require reliable delineations of the most hazardous locations.

  10. Housing arrangement and location determine the likelihood of housing loss due to wildfire.

    Directory of Open Access Journals (Sweden)

    Alexandra D Syphard

    Full Text Available Surging wildfires across the globe are contributing to escalating residential losses and have major social, economic, and ecological consequences. The highest losses in the U.S. occur in southern California, where nearly 1000 homes per year have been destroyed by wildfires since 2000. Wildfire risk reduction efforts focus primarily on fuel reduction and, to a lesser degree, on house characteristics and homeowner responsibility. However, the extent to which land use planning could alleviate wildfire risk has been largely missing from the debate despite large numbers of homes being placed in the most hazardous parts of the landscape. Our goal was to examine how housing location and arrangement affects the likelihood that a home will be lost when a wildfire occurs. We developed an extensive geographic dataset of structure locations, including more than 5500 structures that were destroyed or damaged by wildfire since 2001, and identified the main contributors to property loss in two extensive, fire-prone regions in southern California. The arrangement and location of structures strongly affected their susceptibility to wildfire, with property loss most likely at low to intermediate structure densities and in areas with a history of frequent fire. Rates of structure loss were higher when structures were surrounded by wildland vegetation, but were generally higher in herbaceous fuel types than in higher fuel-volume woody types. Empirically based maps developed using housing pattern and location performed better in distinguishing hazardous from non-hazardous areas than maps based on fuel distribution. The strong importance of housing arrangement and location indicate that land use planning may be a critical tool for reducing fire risk, but it will require reliable delineations of the most hazardous locations.

  11. Empirical Information Metrics for Prediction Power and Experiment Planning

    Directory of Open Access Journals (Sweden)

    Christopher Lee

    2011-01-01

Full Text Available In principle, information theory could provide useful metrics for statistical inference. In practice this is impeded by divergent assumptions: Information theory assumes the joint distribution of variables of interest is known, whereas in statistical inference it is hidden and is the goal of inference. To integrate these approaches we note a common theme they share, namely the measurement of prediction power. We generalize this concept as an information metric, subject to several requirements: Calculation of the metric must be objective or model-free; unbiased; convergent; probabilistically bounded; and low in computational complexity. Unfortunately, widely used model selection metrics such as Maximum Likelihood, the Akaike Information Criterion and Bayesian Information Criterion do not necessarily meet all these requirements. We define four distinct empirical information metrics measured via sampling, with explicit Law of Large Numbers convergence guarantees, which meet these requirements: Ie, the empirical information, a measure of average prediction power; Ib, the overfitting bias information, which measures selection bias in the modeling procedure; Ip, the potential information, which measures the total remaining information in the observations not yet discovered by the model; and Im, the model information, which measures the model’s extrapolation prediction power. Finally, we show that Ip + Ie, Ip + Im, and Ie − Im are fixed constants for a given observed dataset (i.e., prediction target), independent of the model, and thus represent a fundamental subdivision of the total information contained in the observations. We discuss the application of these metrics to modeling and experiment planning.

  12. Wine authenticity verification as a forensic problem: an application of likelihood ratio test to label verification.

    Science.gov (United States)

    Martyna, Agnieszka; Zadora, Grzegorz; Stanimirova, Ivana; Ramos, Daniel

    2014-05-01

The aim of the study was to investigate the applicability of the likelihood ratio (LR) approach for verifying the authenticity of 178 samples of 3 Italian wine brands (Barolo, Barbera, and Grignolino), each described by 27 parameters characterizing their chemical composition. Since the problem of product authenticity may be of forensic interest, the likelihood ratio approach, expressing the role of the forensic expert, was proposed for determining the true origin of the wines. It allows one to analyse the evidence in the context of two hypotheses: that the object belongs to one or to the other wine brand. Various LR models were the subject of the research, and their accuracy was evaluated by the Empirical cross entropy (ECE) approach. The rates of correct classifications for the proposed models were higher than 90% and their performance evaluated by ECE was satisfactory.
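
    In its simplest form the LR for label verification is a ratio of two fitted densities evaluated at the questioned sample. A schematic two-hypothesis version with multivariate normal models and simulated stand-in data (the paper's LR models are more elaborate and account for within- and between-source variation):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(8)

# Stand-in training data: two "brands" with different chemical profiles.
barolo = rng.multivariate_normal([13.5, 2.0, 1050], np.diag([0.2, 0.1, 900]), size=60)
barbera = rng.multivariate_normal([12.8, 2.4, 700], np.diag([0.3, 0.2, 800]), size=60)

def fit_gaussian(X):
    return multivariate_normal(mean=X.mean(axis=0), cov=np.cov(X, rowvar=False))

f_h1, f_h2 = fit_gaussian(barolo), fit_gaussian(barbera)

questioned = np.array([13.4, 2.1, 1020])     # sample whose label says "Barolo"
lr = f_h1.pdf(questioned) / f_h2.pdf(questioned)
print(f"LR = {lr:.2f}")                      # LR > 1 supports H1 (declared label)
```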

  13. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, $\chi^0_1$, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces ... but the scalar mass $m_0$ is poorly constrained. In the wino-LSP case, $m_{3/2}$ is constrained to about 900 TeV and $m_{\chi^0_1}$ to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case $m_{3/2}$ has just a lower limit ≳ 650 TeV (≳ 480 TeV) and $m_{\chi^0_1}$ is constrained to 1.12 (1.13) ± 0.02 TeV in the μ > 0 (μ < 0) scenario. In neither case can the anomalous magnetic moment of the muon, $(g-2)_\mu$, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the $\chi^0_1$ contributes only a fraction of the cold DM density, future LHC $E_T$-based searches for gluinos, squarks and heavier chargino and neutralino states as well as disappearing track searches in the wino-like LSP region will be relevant, and interference effects enable BR($B_{s,d} \to \mu^+\mu^-$) to agree with the data better than in the SM in the case of wino-like DM with μ > 0. (orig.)

  15. Likelihood Analysis of Supersymmetric SU(5) GUTs

    CERN Document Server

    Bagnaschi, E.; Sakurai, K.; Borsato, M.; Buchmueller, O.; Cavanaugh, R.; Chobanova, V.; Citron, M.; De Roeck, A.; Dolan, M.J.; Ellis, J.R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Lucio, M.; Martínez Santos, D.; Olive, K.A.; Richards, A.; de Vries, K.J.; Weiglein, G.

    2016-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...

  16. REDUCING THE LIKELIHOOD OF LONG TENNIS MATCHES

    Directory of Open Access Journals (Sweden)

    Tristan Barnett

    2006-12-01

    Full Text Available Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match
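
    The effect of the scoring rule can be reproduced with an elementary recursion over score states, a finite analogue of the generating-function calculation. In the sketch below the 50-40 game is modeled, as an assumption about its rules, as server-to-4-points against receiver-to-3-points with no deuce; p is the probability the server wins a point:

```python
from functools import lru_cache

def game_stats(p, server_target, receiver_target, deuce=False):
    """Return (P(server wins game), expected number of points in the game)."""
    @lru_cache(maxsize=None)
    def rec(s, r):
        if deuce and s >= 3 and r >= 3:
            # Classical advantage game from deuce: win by two points.
            q = 1 - p
            return p * p / (p * p + q * q), 2 / (p * p + q * q)
        if s == server_target:
            return 1.0, 0.0
        if r == receiver_target:
            return 0.0, 0.0
        w1, e1 = rec(s + 1, r)
        w2, e2 = rec(s, r + 1)
        return p * w1 + (1 - p) * w2, 1 + p * e1 + (1 - p) * e2
    return rec(0, 0)

p = 0.65
print("classical game (P(win), E[points]):", game_stats(p, 4, 4, deuce=True))
print("50-40 game     (P(win), E[points]):", game_stats(p, 4, 3))
```

    The capped game bounds the number of points at six, which is what shortens the tail of the match-length distribution.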

  17. Reducing the likelihood of long tennis matches.

    Science.gov (United States)

    Barnett, Tristan; Alan, Brown; Pollard, Graham

    2006-01-01

Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key points: the cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match; a final tiebreaker set, as currently used in the US Open, reduces the length of matches; a new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.

  18. Likelihood Analysis of Supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY; Costa, J. C. [Imperial Coll., London; Sakurai, K. [Warsaw U.; Borsato, M. [Santiago de Compostela U.; Buchmueller, O. [Imperial Coll., London; Cavanaugh, R. [Illinois U., Chicago; Chobanova, V. [Santiago de Compostela U.; Citron, M. [Imperial Coll., London; De Roeck, A. [Antwerp U.; Dolan, M. J. [Melbourne U.; Ellis, J. R. [King' s Coll. London; Flächer, H. [Bristol U.; Heinemeyer, S. [Madrid, IFT; Isidori, G. [Zurich U.; Lucio, M. [Santiago de Compostela U.; Martínez Santos, D. [Santiago de Compostela U.; Olive, K. A. [Minnesota U., Theor. Phys. Inst.; Richards, A. [Imperial Coll., London; de Vries, K. J. [Imperial Coll., London; Weiglein, G. [DESY

    2016-10-31

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel ${\\tilde u_R}/{\\tilde c_R} - \\tilde{\\chi}^0_1$ coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ${\\tilde \

  20. On the Loss of Information in Conditional Maximum Likelihood Estimation of Item Parameters.

    Science.gov (United States)

    Eggen, Theo J. H. M.

    2000-01-01

Shows that the concept of F-information, a generalization of Fisher information, is a useful tool for evaluating the loss of information in conditional maximum likelihood (CML) estimation. With the F-information concept it is possible to investigate the conditions under which there is no loss of information in CML estimation and to quantify a loss…

  1. Automatic Optimism: The Affective Basis of Judgments about the Likelihood of Future Events

    Science.gov (United States)

    Lench, Heather C.

    2009-01-01

    People generally judge that the future will be consistent with their desires, but the reason for this desirability bias is unclear. This investigation examined whether affective reactions associated with future events are the mechanism through which desires influence likelihood judgments. In 4 studies, affective reactions were manipulated for…

  2. Relations between the likelihood ratios for 2D continuous and discrete time stochastic processes

    NARCIS (Netherlands)

    Luesink, Rob

    1991-01-01

    The author considers the likelihood ratio for 2D processes. In order to detect this ratio, it is necessary to compute the determinant of the covariance operator of the signal-plus-noise observation process. In the continuous case, this is in general a difficult problem. For cyclic processes, using F

  3. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  4. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are upd

  5. On penalized likelihood estimation for a non-proportional hazards regression model.

    Science.gov (United States)

    Devarajan, Karthik; Ebrahimi, Nader

    2013-07-01

    In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solution to the baseline hazard, baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times.

  6. A maximum likelihood model for fitting power functions with data uncertainty: A case study on the relationship between body lengths and masses for Sciuridae species worldwide

    Directory of Open Access Journals (Sweden)

    Youhua Chen

    2016-09-01

Full Text Available In this report, a maximum likelihood model is developed to incorporate data uncertainty in response and explanatory variables when fitting power-law bivariate relationships in ecology and evolution. This simple likelihood model is applied to an empirical data set related to the allometric relationship between body mass and length of Sciuridae species worldwide. The results show that the values of parameters estimated by the proposed likelihood model are substantially different from those fitted by the nonlinear least-squares (NLOS) method. Accordingly, the power-law models fitted by both methods have different curvilinear shapes. These discrepancies arise because the proposed likelihood model integrates the measurement errors, which the NLOS method fails to do. Because the current likelihood model and the NLOS method can give different results, the inclusion of measurement errors may offer new insights into the interpretation of scaling or power laws in ecology and evolution.
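
    One simple way to incorporate known measurement error is to write the likelihood on the log scale with the measurement variance added to the model variance, so each observation is weighted by its own uncertainty. The sketch below makes that assumption (error on the response only, for brevity) and is not necessarily the paper's exact likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n = 80
logx = rng.uniform(3.0, 6.0, n)              # log body length
s = rng.uniform(0.05, 0.4, n)                # known sd of each log-mass measurement
logy = 0.7 + 2.8 * logx + rng.normal(0.0, 0.1, n) + s * rng.normal(size=n)

def neg_loglik(params):
    loga, b, log_sigma = params
    var = np.exp(2 * log_sigma) + s ** 2     # intrinsic scatter + measurement error
    resid = logy - (loga + b * logx)
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

ml = minimize(neg_loglik, x0=(0.0, 2.0, np.log(0.2)), method="Nelder-Mead")
ols = np.polyfit(logx, logy, 1)              # ignores per-point uncertainties

print("ML  (b, log a):", np.round((ml.x[1], ml.x[0]), 3))
print("OLS (b, log a):", np.round(ols, 3))
```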

  7. Empirical Bayes Estimation in the Rasch Model: A Simulation.

    Science.gov (United States)

    de Gruijter, Dato N. M.

    In a situation where the population distribution of latent trait scores can be estimated, the ordinary maximum likelihood estimator of latent trait scores may be improved upon by taking the estimated population distribution into account. In this paper empirical Bayes estimators are compared with the likelihood estimator for three samples of 300…

  8. Survey data on entrepreneurs' subjective plan and perceptions of the likelihood of success.

    Science.gov (United States)

    Vuong, Quan Hoang

    2016-03-01

    Entrepreneurship is an important economic process in both the developed and developing worlds. Nonetheless, many of its concepts appear to be difficult to 'operationalize' due to a lack of empirical data, and this is particularly true in emerging economies. The data set described in this paper is available in Mendeley Data as "Vietnamese entrepreneurs' decisiveness and perceptions of the likelihood of success/continuity, Vuong (2015) [1]" (http://dx.doi.org/10.17632/kbrtrf6hh4.2) and can enable modeling with useful discrete data models such as BCL.

  9. Survey data on entrepreneurs' subjective plan and perceptions of the likelihood of success

    Directory of Open Access Journals (Sweden)

    Quan Hoang Vuong

    2016-03-01

    Full Text Available Entrepreneurship is an important economic process in both the developed and developing worlds. Nonetheless, many of its concepts appear to be difficult to 'operationalize' due to a lack of empirical data, and this is particularly true in emerging economies. The data set described in this paper is available in Mendeley Data as "Vietnamese entrepreneurs' decisiveness and perceptions of the likelihood of success/continuity, Vuong (2015) [1]" (http://dx.doi.org/10.17632/kbrtrf6hh4.2) and can enable modeling with useful discrete data models such as BCL.

  10. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    Full Text Available This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can be assumed neither Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the 𝒢⁰ law. This paper deals with amplitude data, so the 𝒢A⁰ distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments, and on order statistics) of the parameters of the 𝒢A⁰ distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternating optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternating procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.

  11. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY, Hamburg (Germany); Costa, J.C. [Imperial College, London (United Kingdom). Blackett Lab.; Sakurai, K. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Warsaw Univ. (Poland). Inst. of Theoretical Physics; Collaboration: MasterCode Collaboration; and others

    2016-10-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets+E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R}-χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.

  12. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A.; De Vries, K.J. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Sakurai, K. [University of Durham, Science Laboratories, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Chobanova, V.; Lucio, M.; Martinez Santos, D. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Parkville (Australia); Ellis, J.R. [King's College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Theoretical Physics Department, CERN, Geneva 23 (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Cantoblanco, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Isidori, G. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Olive, K.A. [University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)

    2017-02-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R} - χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC. (orig.)

  13. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2010-01-01

    …the conditional Gaussian likelihood, and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including d and b, and prove that they converge in distribution. We use the results to prove consistency of the maximum likelihood estimator for d, b in a large compact subset of {1/2…

  14. Estimating nonlinear dynamic equilibrium economies: a likelihood approach

    OpenAIRE

    2004-01-01

    This paper presents a framework to undertake likelihood-based inference in nonlinear dynamic equilibrium economies. The authors develop a sequential Monte Carlo algorithm that delivers an estimate of the likelihood function of the model using simulation methods. This likelihood can be used for parameter estimation and for model comparison. The algorithm can deal both with nonlinearities of the economy and with the presence of non-normal shocks. The authors show consistency of the estimate and...

  15. Likelihood ratios: Clinical application in day-to-day practice

    Directory of Open Access Journals (Sweden)

    Parikh Rajul

    2009-01-01

    Full Text Available In this article we provide an introduction to the use of likelihood ratios in clinical ophthalmology. Likelihood ratios permit the best use of clinical test results to establish diagnoses for the individual patient. Examples and step-by-step calculations demonstrate the estimation of pretest probability, pretest odds, and calculation of posttest odds and posttest probability using likelihood ratios. The benefits and limitations of this approach are discussed.
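
    The calculation this article walks through can be sketched in a few lines. The pretest probability and likelihood ratio below are invented for illustration, not taken from the article:

        def posttest_probability(pretest_prob, likelihood_ratio):
            """Convert a pretest probability into a posttest probability via odds."""
            pretest_odds = pretest_prob / (1 - pretest_prob)
            posttest_odds = pretest_odds * likelihood_ratio  # Bayes' rule in odds form
            return posttest_odds / (1 + posttest_odds)

        # Example: 30% pretest probability and a positive test with LR+ = 7.5
        print(posttest_probability(0.30, 7.5))  # -> about 0.76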

  16. Empirical microeconomics action functionals

    Science.gov (United States)

    Baaquie, Belal E.; Du, Xin; Tanputraman, Winson

    2015-06-01

    A statistical generalization of microeconomics has been made in Baaquie (2013), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is modeled by an action functional-and the focus of this paper is to empirically determine the action functionals for different commodities. The correlation functions of the model are defined using a Feynman path integral. The model is calibrated using the unequal time correlation of the market commodity prices as well as their cubic and quartic moments using a perturbation expansion. The consistency of the perturbation expansion is verified by a numerical evaluation of the path integral. Nine commodities drawn from the energy, metal and grain sectors are studied and their market behavior is described by the model to an accuracy of over 90% using only six parameters. The paper empirically establishes the existence of the action functional for commodity prices that was postulated to exist in Baaquie (2013).

  17. Event-related fMRI studies of false memory: An Activation Likelihood Estimation meta-analysis.

    Science.gov (United States)

    Kurkela, Kyle A; Dennis, Nancy A

    2016-01-29

    Over the last two decades, a wealth of research in the domain of episodic memory has focused on understanding the neural correlates mediating false memories, or memories for events that never happened. While several recent qualitative reviews have attempted to synthesize this literature, methodological differences amongst the empirical studies and a focus on only a sub-set of the findings has limited broader conclusions regarding the neural mechanisms underlying false memories. The current study performed a voxel-wise quantitative meta-analysis using activation likelihood estimation to investigate commonalities within the functional magnetic resonance imaging (fMRI) literature studying false memory. The results were broken down by memory phase (encoding, retrieval), as well as sub-analyses looking at differences in baseline (hit, correct rejection), memoranda (verbal, semantic), and experimental paradigm (e.g., semantic relatedness and perceptual relatedness) within retrieval. Concordance maps identified significant overlap across studies for each analysis. Several regions were identified in the general false retrieval analysis as well as multiple sub-analyses, indicating their ubiquitous, yet critical role in false retrieval (medial superior frontal gyrus, left precentral gyrus, left inferior parietal cortex). Additionally, several regions showed baseline- and paradigm-specific effects (hit/perceptual relatedness: inferior and middle occipital gyrus; CRs: bilateral inferior parietal cortex, precuneus, left caudate). With respect to encoding, analyses showed common activity in the left middle temporal gyrus and anterior cingulate cortex. No analysis identified a common cluster of activation in the medial temporal lobe.

  18. Empirical Bayesian significance measure of neuronal spike response.

    Science.gov (United States)

    Oba, Shigeyuki; Nakae, Ken; Ikegaya, Yuji; Aki, Shunsuke; Yoshimoto, Junichiro; Ishii, Shin

    2016-05-21

    Functional connectivity analyses of multiple neurons provide a powerful bottom-up approach to reveal functions of local neuronal circuits by using simultaneous recording of neuronal activity. A statistical methodology, generalized linear modeling (GLM) of the spike response function, is one of the most promising methodologies to reduce false link discoveries arising from pseudo-correlation based on common inputs. Although recent advancement of fluorescent imaging techniques has increased the number of simultaneously recorded neurons up to the hundreds or thousands, the amount of information per pair of neurons has not correspondingly increased, partly because of the instruments' limitations, and partly because the number of neuron pairs increases in a quadratic manner. Consequently, the estimation of GLM suffers from large statistical uncertainty caused by the shortage in effective information. In this study, we propose a new combination of GLM and empirical Bayesian testing for the estimation of spike response functions that enables both conservative false discovery control and powerful functional connectivity detection. We compared our proposed method's performance with those of sparse estimation of GLM and classical Granger causality testing. Our method achieved high detection performance of functional connectivity with conservative estimation of false discovery rate and q values in cases of information shortage due to short observation time. We also showed that empirical Bayesian testing on arbitrary statistics in place of likelihood-ratio statistics reduces the computational cost without decreasing the detection performance. When our proposed method was applied to a functional multi-neuron calcium imaging dataset from the rat hippocampal region, we found significant functional connections that are possibly mediated by AMPA and NMDA receptors. The proposed empirical Bayesian testing framework with GLM is promising especially when the amount of information per a

  19. Updated logistic regression equations for the calculation of post-fire debris-flow likelihood in the western United States

    Science.gov (United States)

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2016-06-30

    Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
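
    A minimal sketch of the logistic form such a model takes. The coefficients and predictor values below are hypothetical, not the published equations (which combine rainfall intensity with burn severity, soil, and morphology terms):

        import numpy as np

        def debris_flow_likelihood(x, beta):
            # P(debris flow) under a logistic regression with intercept beta[0]
            return 1.0 / (1.0 + np.exp(-(beta[0] + np.dot(beta[1:], x))))

        # Hypothetical standardized predictors and fitted coefficients
        x = np.array([0.8, 1.2, 0.5])
        beta = np.array([-3.6, 0.4, 0.7, 0.2])
        print(debris_flow_likelihood(x, beta))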

  20. Maximum likelihood estimation for cytogenetic dose-response curves

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L.; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
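
    The estimation step can be illustrated with a simplified Poisson likelihood. The quadratic yield model and the data below are stand-ins, assuming a yield of the form c + αd + βd² rather than the full Kellerer-Rossi formulation:

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(theta, dose, counts, cells):
            # Poisson negative log-likelihood; expected count = cells * yield(dose)
            c, alpha, beta = theta
            mu = cells * (c + alpha * dose + beta * dose ** 2)
            if np.any(mu <= 0):
                return np.inf  # keep the optimizer in the valid region
            return np.sum(mu - counts * np.log(mu))

        # Illustrative data: dose (Gy), dicentric counts, cells scored per point
        dose = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
        counts = np.array([5, 14, 48, 160, 580])
        cells = np.array([2000, 2000, 2000, 2000, 2000])

        fit = minimize(neg_log_lik, x0=[1e-3, 1e-2, 1e-2],
                       args=(dose, counts, cells), method="Nelder-Mead")
        print(fit.x)  # maximum likelihood estimates of (c, alpha, beta)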

  1. Risk Factors of Losses of Commercial Medical Insurance: An Empirical Analysis Using Generalized Linear Models

    Institute of Scientific and Technical Information of China (English)

    仇春涓; 陈滔

    2012-01-01

    The risk factors of commercial medical insurance losses are investigated in this paper. We conduct an empirical analysis by fitting a Gamma generalized linear model to a commercial medical insurer's claims data. The results indicate that among the many candidate risk factors for medical insurance losses, length of hospital stay, hospital level, region, and coverage level are significant, whereas gender and age (within the under-60 age range) are not. Finally, some suggestions are presented, which we believe to be helpful for future medical insurance operations and risk control.
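
    A minimal sketch of how such a Gamma GLM can be fitted. The data frame, column names, and simulated values are hypothetical, not the insurer's data from the study:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Simulate a hypothetical claims table with the significant factors
        rng = np.random.default_rng(0)
        n = 500
        claims = pd.DataFrame({
            "hospital_days": rng.integers(1, 30, n),
            "hospital_level": rng.choice(["primary", "secondary", "tertiary"], n),
            "region": rng.choice(["east", "west"], n),
            "plan_level": rng.choice(["basic", "premium"], n),
        })
        mu = np.exp(5 + 0.05 * claims["hospital_days"])
        claims["claim_amount"] = rng.gamma(shape=2.0, scale=mu / 2.0)

        # Gamma GLM with log link for claim severity
        model = smf.glm(
            "claim_amount ~ hospital_days + C(hospital_level)"
            " + C(region) + C(plan_level)",
            data=claims,
            family=sm.families.Gamma(link=sm.families.links.Log()),
        ).fit()
        print(model.summary())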

  2. Empirical and Computational Tsunami Probability

    Science.gov (United States)

    Geist, E. L.; Parsons, T.; ten Brink, U. S.; Lee, H. J.

    2008-12-01

    A key component in assessing the hazard posed by tsunamis is quantification of tsunami likelihood or probability. To determine tsunami probability, one needs to know the distribution of tsunami sizes and the distribution of inter-event times. Both empirical and computational methods can be used to determine these distributions. Empirical methods rely on an extensive tsunami catalog and hence, the historical data must be carefully analyzed to determine whether the catalog is complete for a given runup or wave height range. Where site-specific historical records are sparse, spatial binning techniques can be used to perform a regional, empirical analysis. Global and site-specific tsunami catalogs suggest that tsunami sizes are distributed according to a truncated or tapered power law and inter-event times are distributed according to an exponential distribution modified to account for clustering of events in time. Computational methods closely follow Probabilistic Seismic Hazard Analysis (PSHA), where size and inter-event distributions are determined for tsunami sources, rather than tsunamis themselves as with empirical analysis. In comparison to PSHA, a critical difference in the computational approach to tsunami probabilities is the need to account for far-field sources. The three basic steps in computational analysis are (1) determination of parameter space for all potential sources (earthquakes, landslides, etc.), including size and inter-event distributions; (2) calculation of wave heights or runup at coastal locations, typically performed using numerical propagation models; and (3) aggregation of probabilities from all sources and incorporation of uncertainty. It is convenient to classify two different types of uncertainty: epistemic (or knowledge-based) and aleatory (or natural variability). Correspondingly, different methods have been traditionally used to incorporate uncertainty during aggregation, including logic trees and direct integration. Critical
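
    As a rough illustration of combining the two distributions the abstract describes, assuming a tapered (Kagan-style) power law for sizes and a Poisson process for occurrence; all parameter values are invented:

        import numpy as np

        def annual_rate_exceeding(r, rate0, alpha, r0, r_corner):
            # Tapered power-law rate of tsunamis with runup exceeding r
            return rate0 * (r0 / r) ** alpha * np.exp((r0 - r) / r_corner)

        def prob_at_least_one(rate, years):
            # Poisson (exponential inter-event time) probability of >= 1 event
            return 1.0 - np.exp(-rate * years)

        # Example: P(runup > 5 m within 50 years) under assumed parameters
        rate = annual_rate_exceeding(5.0, rate0=0.1, alpha=1.0, r0=1.0, r_corner=10.0)
        print(prob_at_least_one(rate, 50.0))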

  3. Planck 2013 results. XV. CMB power spectra and likelihood

    DEFF Research Database (Denmark)

    Tauber, Jan; Bartlett, J.G.; Bucher, M.;

    2014-01-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best...

  4. A likelihood method to cross-calibrate air-shower detectors

    CERN Document Server

    Dembinski, H P; Mariş, I C; Roth, M; Veberič, D

    2016-01-01

    We present a detailed statistical treatment of the energy calibration of hybrid air-shower detectors, which combine a surface detector array and a fluorescence detector, to obtain an unbiased estimate of the calibration curve. The special features of calibration data from air showers prevent unbiased results if a standard least-squares fit is applied to the problem. We develop a general maximum-likelihood approach, based on the detailed statistical model, to solve the problem. Our approach was developed for the Pierre Auger Observatory, but the applied principles are general and can be transferred to other air-shower experiments, even to the cross-calibration of other observables. Since our general likelihood function is expensive to compute, we derive two approximations with significantly smaller computational cost. In recent years both have been used to calibrate data of the Pierre Auger Observatory. We demonstrate that these approximations introduce negligible bias when they are applied to simulated t...

  5. INTERACTING MULTIPLE MODEL ALGORITHM BASED ON JOINT LIKELIHOOD ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Sun Jie; Jiang Chaoshu; Chen Zhuming; Zhang Wei

    2011-01-01

    A novel approach is proposed for estimating the likelihood in the Interacting Multiple-Model (IMM) filter. In this approach, the actual innovation, based on a mismatched model, can be formulated as the sum of the theoretical innovation based on a matched model and the distance between the matched and mismatched models, whose probability distributions are known. The joint likelihood of an innovation sequence can be estimated by convolution of the two known probability density functions. The likelihood of the tracking models can then be calculated by the conditional probability formula. Compared with the conventional likelihood estimation method, the proposed method improves the estimation accuracy of the likelihood and the robustness of IMM, especially when maneuvers occur.

  6. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    Directory of Open Access Journals (Sweden)

    Ross S Williamson

    2015-04-01

    Full Text Available Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID, uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.

  7. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    Science.gov (United States)

    Williamson, Ross S; Sahani, Maneesh; Pillow, Jonathan W

    2015-04-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
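
    The Poisson correspondence is easy to write down. A minimal sketch of the binned LNP log-likelihood (up to a constant in the counts), with an assumed exponential nonlinearity and simulated data:

        import numpy as np

        def lnp_log_likelihood(w, X, spike_counts, dt):
            # Poisson log-likelihood of binned counts under a linear filter w
            rate = np.exp(X @ w)  # conditional intensity per time bin
            return np.sum(spike_counts * np.log(rate * dt) - rate * dt)

        # Illustrative use with random stimuli and a known filter
        rng = np.random.default_rng(1)
        X = rng.normal(size=(1000, 5))
        w_true = np.array([1.0, -0.5, 0.0, 0.3, 0.2])
        dt = 0.01
        y = rng.poisson(np.exp(X @ w_true) * dt)
        print(lnp_log_likelihood(w_true, X, y, dt))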

  8. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
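
    The model-averaging step can be sketched with Akaike weights; the AIC values below are made up for illustration:

        import numpy as np

        def akaike_weights(aic):
            # Relative support for each candidate model from its AIC score
            delta = np.asarray(aic) - np.min(aic)
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        # Example: three candidate clustering models
        print(akaike_weights([102.3, 100.1, 105.8]))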

  10. A conditional likelihood approach for regression analysis using biomarkers measured with batch-specific error.

    Science.gov (United States)

    Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi

    2012-12-20

    Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when batch effect is additive and the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariate yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we also examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study.

  11. ON THE LIKELIHOOD OF PLANET FORMATION IN CLOSE BINARIES

    Energy Technology Data Exchange (ETDEWEB)

    Jang-Condell, Hannah, E-mail: hjangcon@uwyo.edu [Department of Physics and Astronomy, University of Wyoming, 1000 East University, Department 3905, Laramie, WY 82071 (United States)

    2015-02-01

    To date, several exoplanets have been discovered orbiting stars with close binary companions (a ≲ 30 AU). The fact that planets can form in these dynamically challenging environments implies that planet formation must be a robust process. The initial protoplanetary disks in these systems from which planets must form should be tidally truncated to radii of a few AU, which indicates that the efficiency of planet formation must be high. Here, we examine the truncation of circumstellar protoplanetary disks in close binary systems, studying how the likelihood of planet formation is affected over a range of disk parameters. If the semimajor axis of the binary is too small or its eccentricity is too high, the disk will have too little mass for planet formation to occur. However, we find that the stars in the binary systems known to have planets should have once hosted circumstellar disks that were capable of supporting planet formation despite their truncation. We present a way to characterize the feasibility of planet formation based on binary orbital parameters such as stellar mass, companion mass, eccentricity, and semimajor axis. Using this measure, we can quantify the robustness of planet formation in close binaries and better understand the overall efficiency of planet formation in general.

  12. Covariance of maximum likelihood evolutionary distances between sequences aligned pairwise.

    Science.gov (United States)

    Dessimoz, Christophe; Gil, Manuel

    2008-06-23

    The estimation of a distance between two biological sequences is a fundamental process in molecular evolution. It is usually performed by maximum likelihood (ML) on characters aligned either pairwise or jointly in a multiple sequence alignment (MSA). Estimators for the covariance of pairs from an MSA are known, but we are not aware of any solution for cases of pairs aligned independently. In large-scale analyses, it may be too costly to compute MSAs every time distances must be compared, and therefore a covariance estimator for distances estimated from pairs aligned independently is desirable. Knowledge of covariances improves any process that compares or combines distances, such as in generalized least-squares phylogenetic tree building, orthology inference, or lateral gene transfer detection. In this paper, we introduce an estimator for the covariance of distances from sequences aligned pairwise. Its performance is analyzed through extensive Monte Carlo simulations, and compared to the well-known variance estimator of ML distances. Our covariance estimator can be used together with the ML variance estimator to form covariance matrices. The estimator performs similarly to the ML variance estimator. In particular, it shows no sign of bias when sequence divergence is below 150 PAM units (i.e. above ~29% expected sequence identity). Above that distance, the covariances tend to be underestimated, but then ML variances are also underestimated.

  13. Maximum Likelihood Estimation of the Identification Parameters and Its Correction

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    By taking the subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation is of higher approximating precision to the true parameters than the least square methods.

  14. Rayleigh-maximum-likelihood bilateral filter for ultrasound image enhancement.

    Science.gov (United States)

    Li, Haiyan; Wu, Jun; Miao, Aimin; Yu, Pengfei; Chen, Jianhua; Zhang, Yufeng

    2017-04-17

    …added with Gaussian distributed noise. Meanwhile, clinical breast ultrasound images are used to visually evaluate the effectiveness of the method. To examine the performance, comparison tests between the proposed RSBF and six state-of-the-art methods for ultrasound speckle removal are performed on simulated ultrasound images with various noise and speckle levels. The results of the proposed RSBF are satisfying, since the Gaussian noise and the Rayleigh speckle are greatly suppressed. The proposed method improves the SNRs of the enhanced images to nearly 15 and 13 dB, respectively, for images corrupted by speckle alone and for images contaminated by both speckle and noise under various SNR levels. The RSBF is effective in enhancing edges while smoothing speckle and noise in clinical ultrasound images. In the comparison experiments, the proposed method demonstrates its superiority in accuracy and robustness for denoising and edge preservation under various levels of noise and speckle, in terms of visual quality as well as numeric metrics such as peak signal-to-noise ratio, SNR and root mean squared error. The experimental results show that the proposed method is effective for removing speckle and background noise in ultrasound images. The main reason is that it performs a "detect and replace" two-step mechanism. The advantages of the proposed RSBF lie in two aspects. First, each central pixel is classified as noise, speckle or noise-free texture according to the absolute difference between the target pixel and the reference median. Subsequently, the Rayleigh-maximum-likelihood filter and the bilateral filter are switched in to eliminate speckle and noise, respectively, while noise-free pixels are left unaltered. Therefore, it achieves better accuracy and robustness than the traditional methods. Overall, these traits indicate that the proposed RSBF would have significant clinical applicability.

  15. Maximum-likelihood estimation of haplotype frequencies in nuclear families.

    Science.gov (United States)

    Becker, Tim; Knapp, Michael

    2004-07-01

    The importance of haplotype analysis in the context of association fine mapping of disease genes has grown steadily over the last years. Since experimental methods to determine haplotypes on a large scale are not available, phase has to be inferred statistically. For individual genotype data, several reconstruction techniques and many implementations of the expectation-maximization (EM) algorithm for haplotype frequency estimation exist. Recent research work has shown that incorporating available genotype information of related individuals largely increases the precision of haplotype frequency estimates. We, therefore, implemented a highly flexible program written in C, called FAMHAP, which calculates maximum likelihood estimates (MLEs) of haplotype frequencies from general nuclear families with an arbitrary number of children via the EM-algorithm for up to 20 SNPs. For more loci, we have implemented a locus-iterative mode of the EM-algorithm, which gives reliable approximations of the MLEs for up to 63 SNP loci, or less when multi-allelic markers are incorporated into the analysis. Missing genotypes can be handled as well. The program is able to distinguish cases (haplotypes transmitted to the first affected child of a family) from pseudo-controls (non-transmitted haplotypes with respect to the child). We tested the performance of FAMHAP and the accuracy of the obtained haplotype frequencies on a variety of simulated data sets. The implementation proved to work well when many markers were considered and no significant differences between the estimates obtained with the usual EM-algorithm and those obtained in its locus-iterative mode were observed. We conclude from the simulations that the accuracy of haplotype frequency estimation and reconstruction in nuclear families is very reliable in general and robust against missing genotypes.

  16. Empirical Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes the empirical specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. There are two approaches involved in this procedure: the comparative approach and the empirical approach. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases.

  17. Maximum Likelihood Factor Structure of the Family Environment Scale.

    Science.gov (United States)

    Fowler, Patrick C.

    1981-01-01

    Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)

  18. Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    2012-01-01

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters … likelihood estimators. To this end we prove weak convergence of the conditional likelihood as a continuous stochastic process in the parameters when errors are i.i.d. with suitable moment conditions and initial values are bounded. When the limit is deterministic this implies uniform convergence in probability of the conditional likelihood function. If the true value b0 > 1/2, we prove that the limit distribution of (β…

  20. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d - b, where d ≥ b > 1/2 are parameters to be estimated. We model the data X_1, ..., X_T given the initial values X_{-n}, n = 0, 1, ..., under the assumption that the errors are i.i.d. Gaussian. We consider the likelihood and its derivatives as stochastic processes in the parameters, and prove that they converge in distribution when the errors are i.i.d. with suitable moment conditions and the initial values are bounded. We use this to prove existence and consistency of the local likelihood estimator, and to find the asymptotic distribution of the estimators and the likelihood ratio test of the associated fractional unit root hypothesis, which contains the fractional Brownian motion of type II.

  1. Young adult consumers' media usage and online purchase likelihood

    African Journals Online (AJOL)

    Young adult consumers' media usage and online purchase likelihood. ... in new media applications such as the internet, email, blogging, twitter and social networks. ... Convenience sampling resulted in 1 298 completed questionnaires.

  2. Posterior distributions for likelihood ratios in forensic science.

    Science.gov (United States)

    van den Hout, Ardo; Alberink, Ivo

    2016-09-01

    Evaluation of evidence in forensic science is discussed using posterior distributions for likelihood ratios. Instead of eliminating the uncertainty by integrating (Bayes factor) or by conditioning on parameter values, uncertainty in the likelihood ratio is retained by parameter uncertainty derived from posterior distributions. A posterior distribution for a likelihood ratio can be summarised by the median and credible intervals. Using the posterior mean of the distribution is not recommended. An analysis of forensic data for body height estimation is undertaken. The posterior likelihood approach has been criticised both theoretically and with respect to applicability. This paper addresses the latter and illustrates an interesting application area. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
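
    The recommended summaries (median and credible interval, rather than the posterior mean) are straightforward to compute from posterior draws. The log-normal samples below are purely illustrative:

        import numpy as np

        def lr_posterior_summary(lr_samples, cred=0.95):
            # Median and central credible interval of a posterior LR sample
            lo, hi = np.quantile(lr_samples, [(1 - cred) / 2, 1 - (1 - cred) / 2])
            return np.median(lr_samples), (lo, hi)

        # Example: LR samples induced by posterior draws of model parameters
        rng = np.random.default_rng(2)
        lr_samples = np.exp(rng.normal(np.log(50.0), 0.4, size=10_000))
        print(lr_posterior_summary(lr_samples))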

  3. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. It provides a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
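
    Maximum likelihood fitting of a two-component normal mixture is typically carried out with the EM algorithm. A minimal sketch with simple starting values and a fixed iteration count; the data are simulated, not the stock/rubber price series from the paper:

        import numpy as np
        from scipy.stats import norm

        def em_two_normal(x, iters=200):
            # Crude initialization from the data range
            pi, mu1, mu2 = 0.5, x.min(), x.max()
            s1 = s2 = x.std()
            for _ in range(iters):
                # E-step: responsibilities of component 1
                p1 = pi * norm.pdf(x, mu1, s1)
                p2 = (1 - pi) * norm.pdf(x, mu2, s2)
                r = p1 / (p1 + p2)
                # M-step: update weight, means, standard deviations
                pi = r.mean()
                mu1 = np.sum(r * x) / r.sum()
                mu2 = np.sum((1 - r) * x) / (1 - r).sum()
                s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / r.sum())
                s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / (1 - r).sum())
            return pi, mu1, mu2, s1, s2

        rng = np.random.default_rng(3)
        x = np.concatenate([rng.normal(-1, 0.5, 400), rng.normal(2, 1.0, 600)])
        print(em_two_normal(x))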

  4. Conditional likelihood methods for haplotype-based association analysis using matched case-control data.

    Science.gov (United States)

    Chen, Jinbo; Rodriguez, Carmen

    2007-12-01

    Genetic epidemiologists routinely assess disease susceptibility in relation to haplotypes, that is, combinations of alleles on a single chromosome. We study statistical methods for inferring haplotype-related disease risk using single nucleotide polymorphism (SNP) genotype data from matched case-control studies, where controls are individually matched to cases on some selected factors. Assuming a logistic regression model for haplotype-disease association, we propose two conditional likelihood approaches that address the issue that haplotypes cannot be inferred with certainty from SNP genotype data (phase ambiguity). One approach is based on the likelihood of disease status conditioned on the total number of cases, genotypes, and other covariates within each matching stratum, and the other is based on the joint likelihood of disease status and genotypes conditioned only on the total number of cases and other covariates. The joint-likelihood approach is generally more efficient, particularly for assessing haplotype-environment interactions. Simulation studies demonstrated that the first approach was more robust to model assumptions on the diplotype distribution conditioned on environmental risk variables and matching factors in the control population. We applied the two methods to analyze a matched case-control study of prostate cancer.

  5. Validation of software for calculating the likelihood ratio for parentage and kinship.

    Science.gov (United States)

    Drábek, J

    2009-03-01

    Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to suit general requirements for the competence of testing and calibration laboratories (EN/ISO/IEC 17025:2005 norm) per se. The software in question can be considered critical, as it directly weighs the forensic evidence allowing judges to decide on guilt or innocence or to identify persons or kin (e.g., in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios I assessed available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from elaboration of the available guidelines for the fields of forensics, biomedicine, and software engineering. MS Excel calculations using known likelihood ratio formulas, or peer-reviewed results of difficult paternity cases, were used as a reference. Using seven testing cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of the two programs fulfills the criteria needed for our purpose across the whole spectrum of functions under validation, with the exception of providing algebraic formulas in cases of mutation and/or silent alleles.

  6. Supervisor Autonomy and Considerate Leadership Style are Associated with Supervisors’ Likelihood to Accommodate Back Injured Workers

    Science.gov (United States)

    McGuire, Connor; Kristman, Vicki L; Williams-Whitt, Kelly; Reguly, Paula; Shaw, William; Soklaridis, Sophie

    2015-01-01

    PURPOSE To determine the association between supervisors' leadership style and autonomy and supervisors' likelihood of supporting job accommodations for back-injured workers. METHODS A cross-sectional study of supervisors from Canadian and US employers was conducted using a web-based, self-report questionnaire that included a case vignette of a back-injured worker. Autonomy and two dimensions of leadership style (considerate and initiating structure) were included as exposures. The outcome, supervisors' likelihood to support job accommodation, was measured with the Job Accommodation Scale (JAS). We conducted univariate analyses of all variables and bivariate analyses of the JAS score with each exposure and potential confounding factor. We used multivariable generalized linear models to control for confounding factors. RESULTS A total of 796 supervisors participated. Considerate leadership style (β = .012; 95% CI: .009–.016) and autonomy (β = .066; 95% CI: .025–.11) were positively associated with supervisors' likelihood to accommodate after adjusting for appropriate confounding factors. An initiating structure leadership style was not significantly associated with supervisors' likelihood to accommodate (β = .0018; 95% CI: −.0026–.0061) after adjusting for appropriate confounders. CONCLUSIONS Autonomy and a considerate leadership style were positively associated with supervisors' likelihood to accommodate a back-injured worker. Providing supervisors with more autonomy over decisions of accommodation and developing their considerate leadership style may aid in increasing work accommodation for back-injured workers and preventing prolonged work disability. PMID:25595332

  7. Second order pseudo-maximum likelihood estimation and conditional variance misspecification

    OpenAIRE

    Lejeune, Bernard

    1997-01-01

    In this paper, we study the behavior of second order pseudo-maximum likelihood estimators under conditional variance misspecification. We determine sufficient and essentially necessary conditions for such an estimator to be, regardless of the conditional variance (mis)specification, consistent for the mean parameters when the conditional mean is correctly specified. These conditions imply that, even if mean and variance parameters vary independently, standard PML2 estimators are generally not...

  8. POET: Parameterized Optimization for Empirical Tuning

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Q; Seymour, K; You, H; Vuduc, R; Quinlan, D

    2007-01-29

    The excessive complexity of both machine architectures and applications has made it difficult for compilers to statically model and predict application behavior. This observation motivates the recent interest in performance tuning using empirical techniques. We present a new embedded scripting language, POET (Parameterized Optimization for Empirical Tuning), for parameterizing complex code transformations so that they can be empirically tuned. The POET language aims to significantly improve the generality, flexibility, and efficiency of existing empirical tuning systems. We have used the language to parameterize and to empirically tune three loop optimizations (interchange, blocking, and unrolling) for two linear algebra kernels. We show experimentally that the time required to tune these optimizations using POET, which does not require any program analysis, is significantly shorter than when using a full compiler-based source-code optimizer that performs sophisticated program analysis and optimizations.

  9. A notion of graph likelihood and an infinite monkey theorem

    CERN Document Server

    Banerji, Christopher R S; Severini, Simone

    2013-01-01

    We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present an algorithm to compute this graph invariant and closed formulas for some infinite classes. We have to leave the computational complexity of the likelihood as an open problem.

  10. A notion of graph likelihood and an infinite monkey theorem

    Science.gov (United States)

    Banerji, Christopher R. S.; Mansour, Toufik; Severini, Simone

    2014-01-01

    We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present an algorithm to compute this graph invariant and closed formulas for some infinite classes. We have to leave the computational complexity of the likelihood as an open problem.

  11. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.

    2011-05-24

    We derive a closed-form expression for the likelihood function of a Gaussian max-stable process indexed by ℝ^d at p ≤ d+1 sites, d ≥ 1. We demonstrate the gain in efficiency of the maximum composite likelihood estimators of the covariance matrix from p = 2 to p = 3 sites in ℝ^2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  12. Estimating dynamic equilibrium economies: linear versus nonlinear likelihood

    OpenAIRE

    2004-01-01

    This paper compares two methods for undertaking likelihood-based inference in dynamic equilibrium economies: a sequential Monte Carlo filter proposed by Fernández-Villaverde and Rubio-Ramírez (2004) and the Kalman filter. The sequential Monte Carlo filter exploits the nonlinear structure of the economy and evaluates the likelihood function of the model by simulation methods. The Kalman filter estimates a linearization of the economy around the steady state. The authors report two main results...

  13. Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization

    OpenAIRE

    Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue

    2010-01-01

    This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...

  14. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
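
    To make the weighting idea concrete, here is a minimal Python sketch of a tapered pairwise composite likelihood. A bivariate Gaussian working model stands in for the paper's max-stable bivariate densities (which are considerably more involved), the taper is a hard cutoff at a fixed range, and the sites, data, and parameter names are all invented for illustration:

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.spatial.distance import cdist
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(1)
      sites = rng.uniform(0, 10, size=(40, 2))          # invented station locations
      D = cdist(sites, sites)                           # pairwise distances
      data = rng.multivariate_normal(np.zeros(40), np.exp(-D / 2.0), size=50)

      def tapered_pairwise_nll(rho, taper_range=3.0):
          # Negative pairwise composite log-likelihood; pairs beyond the taper
          # range get weight 0, pairs inside get weight 1.
          nll = 0.0
          for i in range(len(sites)):
              for j in range(i + 1, len(sites)):
                  if D[i, j] <= taper_range:
                      r = np.exp(-D[i, j] / rho)
                      cov = np.array([[1.0, r], [r, 1.0]])
                      nll -= multivariate_normal(mean=np.zeros(2),
                                                 cov=cov).logpdf(data[:, [i, j]]).sum()
          return nll

      fit = minimize_scalar(tapered_pairwise_nll, bounds=(0.1, 10.0), method="bounded")
      print("estimated range parameter:", fit.x)        # data were simulated with 2.0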

  15. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    Science.gov (United States)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the water cycle, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model carries a weight determined by its prior weight and marginal likelihood. The estimation of a model's marginal likelihood is therefore crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution proceeds iteratively via a local sampling procedure; the efficiency of NSE is therefore dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, the robust and efficient sampling algorithm DREAMzs can be incorporated into the local sampling step of NSE. The comparison results demonstrate that the improved NSE speeds up marginal likelihood estimation significantly. However, both the improved and original NSEs suffer from heavy instability. In addition, the heavy computational cost of the large number of model executions is overcome by using adaptive sparse grid surrogates.
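
    The following Python sketch illustrates the basic nested sampling recursion for a marginal likelihood on a toy one-dimensional problem. The likelihood and prior are invented stand-ins, and the DREAMzs-based local sampler of the abstract is replaced by exactly the kind of naive constrained Metropolis walk it criticizes:

      import numpy as np

      rng = np.random.default_rng(0)

      def loglike(theta):
          # Toy unimodal log-likelihood standing in for a hydrological model's.
          return -0.5 * ((theta - 0.3) / 0.05) ** 2 - 0.5 * np.log(2 * np.pi * 0.05**2)

      def constrained_mh(theta0, logl_min, step, n_steps=30):
          # Naive local Metropolis walk restricted to {theta : loglike > logl_min},
          # targeting the uniform prior on [0, 1].
          theta = theta0
          for _ in range(n_steps):
              prop = theta + step * rng.standard_normal()
              if 0.0 <= prop <= 1.0 and loglike(prop) > logl_min:
                  theta = prop
          return theta

      n_live, n_iter = 100, 1000
      live = rng.uniform(0, 1, n_live)              # live points drawn from the prior
      logl = np.array([loglike(t) for t in live])
      log_z, log_x_prev = -np.inf, 0.0              # evidence accumulator; X_0 = 1
      for i in range(1, n_iter + 1):
          worst = np.argmin(logl)                   # lowest-likelihood live point
          log_x = -i / n_live                       # E[log prior volume] after i steps
          log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
          log_z = np.logaddexp(log_z, logl[worst] + log_w)
          seed = live[rng.integers(n_live)]         # restart from a random survivor
          live[worst] = constrained_mh(seed, logl[worst], step=live.std() + 1e-12)
          logl[worst] = loglike(live[worst])
          log_x_prev = log_x

      print("log marginal likelihood:", log_z)      # analytic value is ~0 here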

  16. Employee Likelihood of Purchasing Health Insurance using Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Lazim Abdullah

    2012-01-01

    Many believe that employees' health and economic factors play an important role in their likelihood of purchasing health insurance. However, the decision to purchase health insurance is not a trivial matter, as many risk factors influence it. This paper presents a decision model using a fuzzy inference system to identify the likelihood of purchasing health insurance based on selected risk factors. To build the likelihoods, data from one hundred and twenty-eight employees at five organizations under the purview of Kota Star Municipality, Malaysia, were collected as input data. Three risk factors were considered as inputs to the system: age, salary, and risk of illness. The likelihood of purchasing health insurance was the output of the system, defined in three linguistic terms: Low, Medium, and High. Input and output data were governed by the Mamdani inference rules of the system to decide the best linguistic term. The linguistic terms describing the likelihood of purchasing health insurance were identified by the system based on the three risk factors. Twenty-seven employees were found likely to purchase health insurance at the Low level, and fifty-six employees showed likelihoods at the High level. The use of a fuzzy inference system offers a possible new approach to identifying prospective health insurance purchasers.
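
    As an illustration of the mechanics, the following is a hand-rolled Mamdani-style fuzzy inference sketch in Python with two inputs and a centroid defuzzifier. The membership functions and rule base are hypothetical, since the record does not reproduce the paper's actual definitions:

      import numpy as np

      def tri(x, a, b, c):
          # Triangular membership with support [a, c] and peak at b (a < b < c).
          return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      # Hypothetical membership definitions; the paper's ranges are not given.
      age_young = lambda x: tri(x, 17, 25, 40)
      age_old   = lambda x: tri(x, 35, 60, 80)
      risk_low  = lambda x: tri(x, -0.5, 0.0, 0.5)
      risk_high = lambda x: tri(x, 0.5, 1.0, 1.5)

      y = np.linspace(0, 1, 201)  # output universe: likelihood of purchasing
      out_low, out_med, out_high = (tri(y, -0.5, 0.0, 0.5),
                                    tri(y, 0.25, 0.5, 0.75),
                                    tri(y, 0.5, 1.0, 1.5))

      def infer(age, risk):
          # Two illustrative Mamdani rules plus a fallback (not the paper's rules):
          r_high = max(age_old(age), risk_high(risk))   # old OR high risk -> High
          r_low  = min(age_young(age), risk_low(risk))  # young AND low risk -> Low
          r_med  = 1.0 - max(r_low, r_high)             # otherwise -> Medium
          agg = np.maximum.reduce([np.minimum(r_low, out_low),
                                   np.minimum(r_med, out_med),
                                   np.minimum(r_high, out_high)])
          return float((y * agg).sum() / agg.sum())     # centroid defuzzification

      print(infer(age=28, risk=0.2))  # nearer 0: "Low" likelihood of purchasing
      print(infer(age=62, risk=0.8))  # nearer 1: "High" likelihood of purchasing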

  17. Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood Power Spectrum Estimation

    CERN Document Server

    Gjerløw, E; Eriksen, H K; Górski, K M; Gruppuso, A; Jewell, J B; Plaszczynski, S; Wehus, I K

    2015-01-01

    We revisit the problem of exact CMB likelihood and power spectrum estimation with the goal of minimizing computational cost through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al. (1997), and here we develop it into a fully working computational framework for large-scale polarization analysis, adopting WMAP as a worked example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32, and a...
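
    The signal-to-noise eigenvector basis mentioned above amounts to solving a generalized symmetric eigenproblem between the signal and noise covariances. A minimal sketch, with random matrices standing in for the real pixel-pixel covariances:

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(7)
      npix = 200                                  # stand-in for the 6836 WMAP pixels
      A = rng.standard_normal((npix, npix))
      S = A @ A.T / npix                          # random SPD "signal" covariance
      N = np.diag(rng.uniform(0.5, 2.0, npix))    # diagonal "noise" covariance

      # Signal-to-noise (Karhunen-Loeve) modes: solve S v = lambda N v.
      eigvals, eigvecs = eigh(S, N)
      keep = np.argsort(eigvals)[::-1][:80]       # retain the highest-S/N modes
      modes = eigvecs[:, keep]

      d = rng.multivariate_normal(np.zeros(npix), S + N)   # one simulated map
      compressed = modes.T @ d                    # 200 pixels -> 80 modes
      print(compressed.shape)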

  18. Empirical techniques in finance

    CERN Document Server

    Bhar, Ramaprasad

    2005-01-01

    This book offers the opportunity to study and experience advanced empirical techniques in finance and in general financial economics. It is not only suitable for students with an interest in the field, it is also highly recommended for academic researchers as well as researchers in industry. The book focuses on the contemporary empirical techniques used in the analysis of financial markets and how these are implemented using actual market data. With an emphasis on implementation, this book helps focus on strategies for rigorously combining finance theory and modeling technology to extend extant considerations in the literature. The main aim of this book is to equip the readers with an array of tools and techniques that will allow them to explore financial market problems with a fresh perspective. In this sense it is not another volume in econometrics. Of course, the traditional econometric methods are still valid and important; the contents of this book will bring in other related modeling topics tha...

  19. Unsupervised Learning and Generalization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Larsen, Jan

    1996-01-01

    The concept of generalization is defined for a general class of unsupervised learning machines. The generalization error is a straightforward extension of the corresponding concept for supervised learning, and may be estimated empirically using a test set or by statistical means, in close analogy with supervised learning. The empirical and analytical estimates are compared for principal component analysis and for K-means clustering based density estimation...

  20. Autobiography After Empire

    DEFF Research Database (Denmark)

    Rasch, Astrid

    ... metropole, political legitimacy at the end of empire and settler family innocence, I argue that the writers engage with collective memories about empire in their personal recollections. Such collective narratives pattern autobiographies so that the same concerns and rhetoric recur in widely different...

  1. Verifying likelihoods for low template DNA profiles using multiple replicates

    Science.gov (United States)

    Steele, Christopher D.; Greenhalgh, Matthew; Balding, David J.

    2014-01-01

    To date there is no generally accepted method to test the validity of algorithms used to compute likelihood ratios (LR) evaluating forensic DNA profiles from low-template and/or degraded samples. An upper bound on the LR is provided by the inverse of the match probability, which is the usual measure of weight of evidence for standard DNA profiles not subject to the stochastic effects that are the hallmark of low-template profiles. However, even for low-template profiles the LR in favour of a true prosecution hypothesis should approach this bound as the number of profiling replicates increases, provided that the queried contributor is the major contributor. Moreover, for sufficiently many replicates the standard LR for mixtures is often surpassed by the low-template LR. It follows that multiple LTDNA replicates can provide stronger evidence for a contributor to a mixture than a standard analysis of a good-quality profile. Here, we examine the performance of the likeLTD software for up to eight replicate profiling runs. We consider simulated and laboratory-generated replicates as well as resampling replicates from a real crime case. We show that LRs generated by likeLTD usually do exceed the mixture LR given sufficient replicates, are bounded above by the inverse match probability and do approach this bound closely when this is expected. We also show good performance of likeLTD even when a large majority of alleles are designated as uncertain, and suggest that there can be advantages to using different profiling sensitivities for different replicates. Overall, our results support both the validity of the underlying mathematical model and its correct implementation in the likeLTD software. PMID:25082140
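
    The upper bound invoked above is elementary to compute once a match probability is available. A toy single-contributor illustration in Python (the allele frequencies are made up, not case data):

      # Hardy-Weinberg match probability at three independent heterozygous loci,
      # and the implied ceiling on any likelihood ratio.
      freqs = [(0.11, 0.07), (0.21, 0.05), (0.09, 0.13)]   # invented (p, q) pairs

      match_prob = 1.0
      for p, q in freqs:
          match_prob *= 2 * p * q

      print("match probability:", match_prob)
      print("upper bound on the LR:", 1 / match_prob)
      # A low-template LR computed from replicates should approach, but never
      # exceed, this bound as the number of profiling runs grows.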

  2. Empirical Philosophy of Science

    DEFF Research Database (Denmark)

    Mansnerus, Erika; Wagenknecht, Susann

    2015-01-01

    Empirical insights are proven fruitful for the advancement of Philosophy of Science, but the integration of philosophical concepts and empirical data poses considerable methodological challenges. Debates in Integrated History and Philosophy of Science suggest that the advancement of philosophical knowledge takes place through the integration of the empirical or historical research into the philosophical studies, as Chang, Nersessian, Thagard and Schickore argue in their work. Building upon their contributions we will develop a blueprint for an Empirical Philosophy of Science that draws upon qualitative methods from the social sciences in order to advance our philosophical understanding of science in practice. We will regard the relationship between philosophical conceptualization and empirical data as an iterative dialogue between theory and data, which is guided by a particular ‘feeling with...

  3. Predicting the likelihood of altered streamflows at ungauged rivers across the conterminous United States

    Science.gov (United States)

    Eng, Kenny; Carlisle, Daren M.; Wolock, David M.; Falcone, James A.

    2013-01-01

    An approach is presented in this study to aid water-resource managers in characterizing streamflow alteration at ungauged rivers. Such approaches can be used to take advantage of the substantial amounts of biological data collected at ungauged rivers to evaluate the potential ecological consequences of altered streamflows. National-scale random forest statistical models are developed to predict the likelihood that ungauged rivers have altered streamflows (relative to expected natural condition) for five hydrologic metrics (HMs) representing different aspects of the streamflow regime. The models use human disturbance variables, such as number of dams and road density, to predict the likelihood of streamflow alteration. For each HM, separate models are derived to predict the likelihood that the observed metric is greater than (‘inflated’) or less than (‘diminished’) natural conditions. The utility of these models is demonstrated by applying them to all river segments in the South Platte River in Colorado, USA, and for all 10-digit hydrologic units in the conterminous United States. In general, the models successfully predicted the likelihood of alteration to the five HMs at the national scale as well as in the South Platte River basin. However, the models predicting the likelihood of diminished HMs consistently outperformed models predicting inflated HMs, possibly because of fewer sites across the conterminous United States where HMs are inflated. The results of these analyses suggest that the primary predictors of altered streamflow regimes across the Nation are (i) the residence time of annual runoff held in storage in reservoirs, (ii) the degree of urbanization measured by road density and (iii) the extent of agricultural land cover in the river basin.
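
    A minimal sketch of the modeling setup, using scikit-learn's random forest on synthetic disturbance covariates; the variable definitions and data below are invented stand-ins, not the study's:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(2)
      n = 500
      X = np.column_stack([
          rng.exponential(1.0, n),   # storage residence time of annual runoff
          rng.uniform(0.0, 5.0, n),  # road density
          rng.uniform(0.0, 1.0, n),  # fraction of agricultural land cover
      ])
      # Synthetic labels: alteration made likelier by greater disturbance.
      p = 1 / (1 + np.exp(-(1.5 * X[:, 0] + 0.4 * X[:, 1] + 2.0 * X[:, 2] - 3.0)))
      y = rng.random(n) < p          # 1 = hydrologic metric diminished vs. natural

      clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
      segment = [[0.8, 2.5, 0.6]]    # a hypothetical ungauged river segment
      print("likelihood of alteration:", clf.predict_proba(segment)[0, 1])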

  4. Estimation and model selection of semiparametric multivariate survival functions under general censorship.

    Science.gov (United States)

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2010-07-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.

  5. Regularization Parameter Selections via Generalized Information Criterion.

    Science.gov (United States)

    Zhang, Yiyun; Li, Runze; Tsai, Chih-Ling

    2010-03-01

    We apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrinkage estimators. This approach relies heavily on the choice of regularization parameter, which controls the model complexity. In this paper, we propose employing the generalized information criterion (GIC), encompassing the commonly used Akaike information criterion (AIC) and Bayesian information criterion (BIC), for selecting the regularization parameter. Our proposal makes a connection between the classical variable selection criteria and the regularization parameter selections for the nonconcave penalized likelihood approaches. We show that the BIC-type selector enables identification of the true model consistently, and the resulting estimator possesses the oracle property in the terminology of Fan and Li (2001). In contrast, however, the AIC-type selector tends to overfit with positive probability. We further show that the AIC-type selector is asymptotically loss efficient, while the BIC-type selector is not. Our simulation results confirm these theoretical findings, and an empirical example is presented. Some technical proofs are given in the online supplementary material.
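
    A small sketch of GIC-based tuning over a penalization path. An L1 (lasso) penalty stands in for the nonconcave penalties of the paper, and the degrees of freedom are taken as the number of nonzero estimated coefficients:

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(3)
      n, p = 200, 10
      X = rng.standard_normal((n, p))
      beta = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0, 0, 0])
      y = X @ beta + rng.standard_normal(n)

      def gic(lam, kappa):
          # Gaussian working model: GIC = n*log(RSS/n) + kappa * df, with df
          # taken as the number of nonzero estimated coefficients.
          fit = Lasso(alpha=lam, max_iter=50000).fit(X, y)
          rss = np.sum((y - fit.predict(X)) ** 2)
          return n * np.log(rss / n) + kappa * np.count_nonzero(fit.coef_)

      lams = np.logspace(-3, 0, 30)
      aic_lam = min(lams, key=lambda l: gic(l, kappa=2.0))         # AIC-type
      bic_lam = min(lams, key=lambda l: gic(l, kappa=np.log(n)))   # BIC-type
      print("AIC-type lambda:", aic_lam, "BIC-type lambda:", bic_lam)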

  6. Likelihood-based inference for cointegration with nonlinear error-correction

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders Christian

    2010-01-01

    We consider a class of nonlinear vector error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties ... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  7. The VGAM Package for Capture-Recapture Data Using the Conditional Likelihood

    Directory of Open Access Journals (Sweden)

    Thomas W. Yee

    2015-06-01

    It is well known that using individual covariate information (such as body weight or gender) to model heterogeneity in capture-recapture (CR) experiments can greatly enhance inferences on the size of a closed population. Since individual covariates are only observable for captured individuals, complex conditional likelihood methods are usually required, and these do not constitute a standard generalized linear model (GLM) family. Modern statistical techniques such as generalized additive models (GAMs), which allow a relaxing of the linearity assumptions on the covariates, are readily available for many standard GLM families. Fortunately, a natural statistical framework for maximizing conditional likelihoods is available in the Vector GLM and Vector GAM classes of models. We present several new R functions (implemented within the VGAM package) specifically developed to allow the incorporation of individual covariates in the analysis of closed population CR data using a GLM/GAM-like approach and the conditional likelihood. As a result, a wide variety of practical tools are now readily available in the VGAM object oriented framework. We discuss and demonstrate their advantages, features and flexibility using the new VGAM CR functions on several examples.

  8. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    Science.gov (United States)

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
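
    The Newton-Raphson scheme described above is standard. The sketch below applies it to a logistic-regression likelihood as a stand-in, since the NDMMF's own score equations are given in the report itself:

      import numpy as np

      rng = np.random.default_rng(4)
      n = 1000
      X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
      beta_true = np.array([-0.5, 1.0, 2.0])
      y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

      beta = np.zeros(3)
      for _ in range(25):
          mu = 1 / (1 + np.exp(-X @ beta))
          score = X.T @ (y - mu)                         # log-likelihood gradient
          info = X.T @ (X * (mu * (1 - mu))[:, None])    # Fisher information
          step = np.linalg.solve(info, score)            # Newton-Raphson update
          beta += step
          if np.max(np.abs(step)) < 1e-10:               # convergence test
              break

      print("maximum-likelihood estimate:", beta)        # near (-0.5, 1.0, 2.0)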

  9. Estimation and Model Selection for Model-Based Clustering with the Conditional Classification Likelihood

    CERN Document Server

    Baudry, Jean-Patrick

    2012-01-01

    The Integrated Completed Likelihood (ICL) criterion was proposed by Biernacki et al. (2000) in the model-based clustering framework to select a relevant number of classes, and has been used by statisticians in various application areas. A theoretical study of this criterion is proposed. A contrast related to the clustering objective is introduced: the conditional classification likelihood. This yields an estimator and a class of model selection criteria. The properties of these new procedures are studied, and ICL is proved to be an approximation of one of these criteria. We contrast these results with the current leading point of view about ICL, namely that it would not be consistent. Moreover, these results give insights into the class notion underlying ICL and feed a reflection on the class notion in clustering. General results on penalized minimum contrast criteria and on mixture models are derived, which are interesting in their own right.

  10. Theological reflections on empire

    Directory of Open Access Journals (Sweden)

    Allan A. Boesak

    2009-11-01

    Since the meeting of the World Alliance of Reformed Churches in Accra, Ghana (2004), and the adoption of the Accra Declaration, a debate has been raging in the churches about globalisation, socio-economic justice, ecological responsibility, political and cultural domination and globalised war. Central to this debate is the concept of empire and the way the United States is increasingly becoming its embodiment. Is the United States a global empire? This article argues that the United States has indeed become the expression of a modern empire and that this reality has considerable consequences, not just for global economics and politics but for theological reflection as well.

  11. Maximum likelihood for genome phylogeny on gene content.

    Science.gov (United States)

    Zhang, Hongmei; Gu, Xun

    2004-01-01

    With the rapid growth of whole-genome data, reconstructing the phylogenetic relationships among different genomes has become a hot topic in comparative genomics. The maximum likelihood approach is one of the various approaches and has been very successful. However, no application to genome tree-making has been reported, mainly owing to the lack of an analytical form of the probability model and/or the complicated calculational burden. In this paper we study the mathematical structure of the stochastic model of genome evolution, and then develop a simplified likelihood function for observing a specific phylogenetic pattern in the four-genome case using gene content information. We use the maximum likelihood approach to identify phylogenetic trees. Simulation results indicate that the proposed method works well and can identify trees with a high correct rate. A real data application provides satisfactory results. The approach developed in this paper can serve as the basis for reconstructing phylogenies of more than four genomes.

  12. Factors Influencing the Intended Likelihood of Exposing Sexual Infidelity.

    Science.gov (United States)

    Kruger, Daniel J; Fisher, Maryanne L; Fitzgerald, Carey J

    2015-08-01

    There is a considerable body of literature on infidelity within romantic relationships. However, there is a gap in the scientific literature on factors influencing the likelihood of uninvolved individuals exposing sexual infidelity. Therefore, we devised an exploratory study examining a wide range of potentially relevant factors. Based in part on evolutionary theory, we anticipated nine potential domains or types of influences on the likelihoods of exposing or protecting cheaters, including kinship, strong social alliances, financial support, previous relationship behaviors (including infidelity and abuse), potential relationship transitions, stronger sexual and emotional aspects of the extra-pair relationship, and disease risk. The pattern of results supported these predictions (N = 159 men, 328 women). In addition, there appeared to be a small positive bias for participants to report infidelity when provided with any additional information about the situation. Overall, this study contributes a broad initial description of factors influencing the predicted likelihood of exposing sexual infidelity and encourages further studies in this area.

  13. Joint analysis of prevalence and incidence data using conditional likelihood.

    Science.gov (United States)

    Saarela, Olli; Kulathinal, Sangita; Karvanen, Juha

    2009-07-01

    Disease prevalence is the combined result of duration, disease incidence, case fatality, and other mortality. If information is available on all these factors, and on fixed covariates such as genotypes, prevalence information can be utilized in the estimation of the effects of the covariates on disease incidence. Study cohorts that are recruited as cross-sectional samples and subsequently followed up for disease events of interest produce both prevalence and incidence information. In this paper, we make use of both types of information using a likelihood, which is conditioned on survival until the cross section. In a simulation study making use of real cohort data, we compare the proposed conditional likelihood method to a standard analysis where prevalent cases are omitted and the likelihood expression is conditioned on healthy status at the cross section.

  14. Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning of RBMs

    CERN Document Server

    Desjardins, Guillaume; Bengio, Yoshua

    2010-01-01

    Restricted Boltzmann Machines (RBMs) have attracted a lot of attention of late, as one of the principal building blocks of deep networks. Training RBMs remains problematic, however, because of the intractability of their partition function. The maximum likelihood gradient requires a very robust sampler which can accurately sample from the model despite the loss of ergodicity often incurred during learning. While using Parallel Tempering in the negative phase of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a trade-off between computational complexity and high ergodicity, and requires careful hand-tuning of the temperatures. In this paper, we show that this trade-off is unnecessary. The choice of optimal temperatures can be automated by minimizing average return time (a concept first proposed by [Katzgraber et al., 2006]) while chains can be spawned dynamically, as needed, thus minimizing the computational overhead. We show on a synthetic dataset that this results in better likelihood ...

  15. IMPROVING VOICE ACTIVITY DETECTION VIA WEIGHTING LIKELIHOOD AND DIMENSION REDUCTION

    Institute of Scientific and Technical Information of China (English)

    Wang Huanliang; Han Jiqing; Li Haifeng; Zheng Tieran

    2008-01-01

    The performance of traditional Voice Activity Detection (VAD) algorithms declines sharply in lower Signal-to-Noise Ratio (SNR) environments. In this paper, a feature weighting likelihood method is proposed for noise-robust VAD. With this method, the contribution of dynamic features to the likelihood score can be increased, which consequently improves the noise robustness of VAD. A divergence based dimension reduction method is proposed to save computation; it removes the feature dimensions with smaller divergence values at the cost of slightly degraded performance. Experimental results on the Aurora II database show that the detection performance in noisy environments can be remarkably improved by the proposed method when a model trained on clean data is used to detect speech endpoints. Using the weighting likelihood on the dimension-reduced features obtains comparable, even better, performance compared to the original full-dimensional features.

  16. How to Maximize the Likelihood Function for a DSGE Model

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller

    This paper extends two optimization routines to deal with objective functions for DSGE models. The optimization routines are i) a version of Simulated Annealing developed by Corana, Marchesi & Ridella (1987), and ii) the evolutionary algorithm CMA-ES developed by Hansen, Müller & Koumoutsakos (2003). Following these extensions, we examine the ability of the two routines to maximize the likelihood function for a sequence of test economies. Our results show that the CMA-ES routine clearly outperforms Simulated Annealing in its ability to find the global optimum and in efficiency. With 10 unknown structural parameters in the likelihood function, the CMA-ES routine finds the global optimum in 95% of our test economies compared to 89% for Simulated Annealing. When the number of unknown structural parameters in the likelihood function increases to 20 and 35, the CMA-ES routine still finds the global...
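
    Neither routine's extended implementation is reproduced in the record, but the experiment is easy to mimic: the sketch below runs SciPy's dual_annealing (a relative of the Simulated Annealing variant used in the paper) on a 10-parameter multimodal surface standing in for a DSGE likelihood; a CMA-ES run would instead go through, e.g., the cma package:

      import numpy as np
      from scipy.optimize import dual_annealing

      def neg_loglike(theta):
          # Rastrigin surface: many local optima, global optimum at the origin.
          return np.sum(theta**2 - 10 * np.cos(2 * np.pi * theta)) + 10 * len(theta)

      bounds = [(-5.12, 5.12)] * 10        # "10 unknown structural parameters"
      result = dual_annealing(neg_loglike, bounds, seed=0)
      print("optimum found:", result.fun, "at", np.round(result.x, 3))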

  17. LIKELIHOOD ESTIMATION OF PARAMETERS USING SIMULTANEOUSLY MONITORED PROCESSES

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2004-01-01

    The topic is maximum likelihood inference from several simultaneously monitored response processes of a structure to obtain knowledge about the parameters of other not monitored but important response processes when the structure is subject to some Gaussian load field in space and time. The considered example is a ship sailing with a given speed through a Gaussian wave field.

  18. Unbinned likelihood maximisation framework for neutrino clustering in Python

    Energy Technology Data Exchange (ETDEWEB)

    Coenders, Stefan [Technische Universitaet Muenchen, Boltzmannstr. 2, 85748 Garching (Germany)

    2016-07-01

    Although IceCube has detected an astrophysical neutrino flux, the sources of astrophysical neutrinos remain hidden up to now. The detection of a neutrino point source would be a smoking gun for hadronic processes and the acceleration of cosmic rays. The search for neutrino sources has many degrees of freedom, for example steady versus transient or point-like versus extended sources. Here, we introduce a Python framework designed for unbinned likelihood maximisations as used in searches for neutrino point sources by IceCube. By implementing source scenarios in a modular way, likelihood searches of various kinds can be implemented in a user-friendly way, without sacrificing speed and memory management.
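
    The likelihood maximized in such searches is typically the standard signal-plus-background mixture over individual events. A self-contained toy version in Python, with a one-dimensional "sky" and invented PDFs (this is not the framework's actual API):

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.stats import norm

      rng = np.random.default_rng(5)
      # Toy one-dimensional "sky": uniform background, Gaussian signal at 0.
      x = np.concatenate([rng.uniform(-5, 5, 980), rng.normal(0, 0.3, 20)])
      N = len(x)
      S = norm(0, 0.3).pdf(x)              # per-event signal PDF at the tested source
      B = np.full(N, 1 / 10)               # per-event background PDF

      def neg_logl(ns):
          # Mixture likelihood with ns signal events among N total.
          return -np.sum(np.log(ns / N * S + (1 - ns / N) * B))

      fit = minimize_scalar(neg_logl, bounds=(0.0, 100.0), method="bounded")
      ts = 2 * (neg_logl(0.0) - neg_logl(fit.x))   # likelihood-ratio test statistic
      print("fitted n_s:", fit.x, "TS:", ts)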

  19. Semiparametric maximum likelihood for nonlinear regression with measurement errors.

    Science.gov (United States)

    Suh, Eun-Young; Schafer, Daniel W

    2002-06-01

    This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.

  20. Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis

    DEFF Research Database (Denmark)

    Jansson, Michael; Nielsen, Morten Ørregaard

    Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations.

  1. Modified maximum likelihood registration based on information fusion

    Institute of Scientific and Technical Information of China (English)

    Yongqing Qi; Zhongliang Jing; Shiqiang Hu

    2007-01-01

    The bias estimation of passive sensors is considered, based on information fusion in a multi-platform multisensor tracking system. The unobservability problem of bearing-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservability problem in the blind spot and can estimate the biases more rapidly and accurately than the maximum likelihood method. It is statistically efficient since the standard deviation of the bias estimation errors meets the theoretical lower bounds.

  2. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Schweder, Tore

    2006-01-01

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference is implemented using Markov chain Monte Carlo (MCMC) methods to obtain efficient estimates of spatial clustering parameters. Uncertainty is addressed using parametric bootstrap or by consideration of posterior distributions in a Bayesian setting. Maximum likelihood estimation and Bayesian inference are compared...

  3. Parameter estimation in X-ray astronomy using maximum likelihood

    Science.gov (United States)

    Wachter, K.; Leach, R.; Kellogg, E.

    1979-01-01

    Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
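
    A compact illustration of the Poisson-likelihood fitting the abstract advocates, using the familiar Cash-style statistic for a power-law spectrum on simulated counts (the spectral model and data here are invented):

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(6)
      E = np.linspace(1.0, 10.0, 50)                 # energy grid, illustrative

      def model(params):
          norm_, gamma = params                      # power-law spectrum
          return norm_ * E ** (-gamma)

      counts = rng.poisson(model([100.0, 1.7]))      # simulated Poisson counts

      def cash(params):
          # Cash-style statistic: -2 log Poisson likelihood up to a data-only term.
          m = model(params)
          if np.any(m <= 0):
              return np.inf
          return 2.0 * np.sum(m - counts * np.log(m))

      fit = minimize(cash, x0=[50.0, 1.0], method="Nelder-Mead")
      print("normalization and photon index:", fit.x)   # true values: 100, 1.7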

  4. Empirical entropy in context

    CERN Document Server

    Gagie, Travis

    2007-01-01

    We trace the history of empirical entropy, touching briefly on its relation to Markov processes, normal numbers, Shannon entropy, the Chomsky hierarchy, Kolmogorov complexity, Ziv-Lempel compression, de Bruijn sequences and stochastic complexity.

  5. Likelihood-Based Hypothesis Tests for Brain Activation Detection From MRI Data Disturbed by Colored Noise: A Simulation Study

    NARCIS (Netherlands)

    Den Dekker, A.J.; Poot, D.H.J.; Bos, R.; Sijbers, J.

    2009-01-01

    Functional magnetic resonance imaging (fMRI) data that are corrupted by temporally colored noise are generally preprocessed (i.e., prewhitened or precolored) prior to functional activation detection. In this paper, we propose likelihood-based hypothesis tests that account for colored noise directly

  6. A Unified Maximum Likelihood Approach to Document Retrieval.

    Science.gov (United States)

    Bodoff, David; Enache, Daniel; Kambil, Ajit; Simon, Gary; Yukhimets, Alex

    2001-01-01

    Addresses the query- versus document-oriented dichotomy in information retrieval. Introduces a maximum likelihood approach to utilizing feedback data that can be used to construct a concrete object function that estimates both document and query parameters in accordance with all available feedback data. (AEF)

  7. Profile likelihood maps of a 15-dimensional MSSM

    NARCIS (Netherlands)

    Strege, C.; Bertone, G.; Besjes, G.J.; Caron, S.; Ruiz de Austri, R.; Strubig, A.; Trotta, R.

    2014-01-01

    We present statistically convergent profile likelihood maps obtained via global fits of a phenomenological Minimal Supersymmetric Standard Model with 15 free parameters (the MSSM-15), based on over 250M points. We derive constraints on the model parameters from direct detection limits on dark matter

  8. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    1994-01-01

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the est

  9. GPU Accelerated Likelihoods for Stereo-Based Articulated Tracking

    DEFF Research Database (Denmark)

    Friborg, Rune Møllegaard; Hauberg, Søren; Erleben, Kenny

    For many years articulated tracking has been an active research topic in the computer vision community. While working solutions have been suggested, computational time is still problematic. We present a GPU implementation of a ray-casting based likelihood model that is orders of magnitude faster...

  10. GPU accelerated likelihoods for stereo-based articulated tracking

    DEFF Research Database (Denmark)

    Friborg, Rune Møllegaard; Hauberg, Søren; Erleben, Kenny

    2010-01-01

    For many years articulated tracking has been an active research topic in the computer vision community. While working solutions have been suggested, computational time is still problematic. We present a GPU implementation of a ray-casting based likelihood model that is orders of magnitude faster...

  11. Community Level Disadvantage and the Likelihood of First Ischemic Stroke

    Directory of Open Access Journals (Sweden)

    Bernadette Boden-Albala

    2012-01-01

    Background and Purpose. Residing in "disadvantaged" communities may increase morbidity and mortality independent of individual social resources and biological factors. This study evaluates the impact of population-level disadvantage on the likelihood of incident ischemic stroke in a multiethnic urban population. Methods. A population based case-control study was conducted in an ethnically diverse community of New York. First ischemic stroke cases and community controls were enrolled and a stroke risk assessment performed. Data regarding population level economic indicators for each census tract were assembled using geocoding. Census variables were also grouped together to define a broader measure of collective disadvantage. We evaluated the likelihood of stroke for population-level variables, controlling for individual social factors (education, social isolation, and insurance) and vascular risk factors. Results. We age-, sex-, and race-ethnicity-matched 687 incident ischemic stroke cases to 1153 community controls. The mean age was 69 years: 60% women; 22% white, 28% black, and 50% Hispanic. After adjustment, the index of community level disadvantage (OR 2.0, 95% CI 1.7–2.1) was associated with increased stroke likelihood overall and among all three race-ethnic groups. Conclusion. Social inequalities measured by census tract data, including indices of community disadvantage, confer a significant likelihood of ischemic stroke independent of conventional risk factors.

  12. Bias Correction for Alternating Iterative Maximum Likelihood Estimators

    Institute of Scientific and Technical Information of China (English)

    Gang YU; Wei GAO; Ningzhong SHI

    2013-01-01

    In this paper, we give a definition of the alternating iterative maximum likelihood estimator (AIMLE), which is a biased estimator. Furthermore, we adjust the AIMLE to obtain asymptotically unbiased and consistent estimators by using a bootstrap iterative bias correction method as in Kuk (1995). Two examples and simulation results are reported to illustrate the performance of the bias correction for the AIMLE.
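
    One round of the bootstrap bias-correction idea can be sketched in a few lines of Python; the variance MLE of a normal sample serves as a deliberately biased stand-in for the AIMLE:

      import numpy as np

      rng = np.random.default_rng(8)
      x = rng.normal(0.0, 2.0, size=30)

      theta_hat = x.var()                  # variance MLE; biased by -sigma^2/n

      # One bootstrap round: estimate the bias and subtract it.
      boot = np.array([rng.choice(x, size=x.size, replace=True).var()
                       for _ in range(5000)])
      bias = boot.mean() - theta_hat
      theta_corrected = theta_hat - bias   # equivalently 2*theta_hat - boot.mean()

      print("MLE:", theta_hat, "bias-corrected:", theta_corrected, "(truth: 4)")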

  13. A Monte Carlo Evaluation of Maximum Likelihood Multidimensional Scaling Methods

    NARCIS (Netherlands)

    Bijmolt, T.H.A.; Wedel, M.

    1996-01-01

    We compare three alternative Maximum Likelihood Multidimensional Scaling methods for pairwise dissimilarity ratings, namely MULTISCALE, MAXSCAL, and PROSCAL, in a Monte Carlo study. The three MLMDS methods recover the true configurations very well. The recovery of the true dimensionality depends on the

  14. Maximum likelihood estimation of phase-type distributions

    DEFF Research Database (Denmark)

    Esparza, Luz Judith R

    This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions ...

  15. Likelihood Inference for a Nonstationary Fractional Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d - b, where d ≥ b > 1/2 are parameters to be estimated. We model the data X_1, ..., X_T given the initial

  16. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  17. Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X(t) to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β...

  18. Trimmed Likelihood-based Estimation in Binary Regression Models

    NARCIS (Netherlands)

    Cizek, P.

    2005-01-01

    The binary-choice regression models such as probit and logit are typically estimated by the maximum likelihood method. To improve its robustness, various M-estimation based procedures were proposed, which however require bias corrections to achieve consistency and their resistance to outliers is rela

  19. Planck 2013 results. XV. CMB power spectra and likelihood

    CERN Document Server

    Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Hurier, G.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Le Jeune, M.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P.B.; Lindholm, V.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Orieux, F.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; 
Wehus, I.K.; White, M.; White, S.D.M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-01-01

    We present the Planck likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations. We use this likelihood to derive the Planck CMB power spectrum over three decades in multipole ℓ, covering 2 ≤ ℓ ≤ 2500. For ℓ ≥ 50, we employ a correlated Gaussian likelihood approximation based on angular cross-spectra derived from the 100, 143 and 217 GHz channels. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on cosmological parameters. We find good internal agreement among the high-ℓ cross-spectra with residuals of a few μK^2 at ℓ ≤ 1000. We compare our results with foreground-cleaned CMB maps, and with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. The best-fit LCDM cosmology is in excellent agreement with preliminary Planck polarisation spectra. The standard LCDM cosmology is well constrained b...

  20. Reconceptualizing Social Influence in Counseling: The Elaboration Likelihood Model.

    Science.gov (United States)

    McNeill, Brian W.; Stoltenberg, Cal D.

    1989-01-01

    Presents Elaboration Likelihood Model (ELM) of persuasion (a reconceptualization of the social influence process) as alternative model of attitude change. Contends ELM unifies conflicting social psychology results and can potentially account for inconsistent research findings in counseling psychology. Provides guidelines on integrating…

  1. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  2. Application of maximum likelihood to direct methods: the probability density function of the triple-phase sums. XI.

    Science.gov (United States)

    Rius, Jordi

    2006-09-01

    The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].

  3. Likelihood Inference of Nonlinear Models Based on a Class of Flexible Skewed Distributions

    Directory of Open Access Journals (Sweden)

    Xuedong Chen

    2014-01-01

    This paper deals with the issue of likelihood inference for nonlinear models with a flexible skew-t-normal (FSTN) distribution, which is proposed within a general framework of flexible skew-symmetric (FSS) distributions by combining with the skew-t-normal (STN) distribution. In comparison with common skewed distributions such as the skew normal (SN) and skew-t (ST), as well as scale mixtures of skew normal (SMSN), the FSTN distribution can accommodate more flexibility and robustness in the presence of skewed, heavy-tailed, and especially multimodal outcomes. However, for this distribution, the usual approach of maximum likelihood estimation based on the EM algorithm becomes unavailable, and an alternative is to return to the original Newton-Raphson type method. In order to improve the estimation, as well as the procedures for confidence estimation and hypothesis testing for the parameters of interest, a modified Newton-Raphson iterative algorithm is presented in this paper, based on the profile likelihood for nonlinear regression models with the FSTN distribution; the confidence interval and hypothesis test are then also developed. Furthermore, a real example and simulations are conducted to demonstrate the usefulness and superiority of our approach.

  4. Computation of the Likelihood in Biallelic Diffusion Models Using Orthogonal Polynomials

    Directory of Open Access Journals (Sweden)

    Claus Vogl

    2014-11-01

    In population genetics, parameters describing forces such as mutation, migration and drift are generally inferred from molecular data. Lately, approximate methods based on simulations and summary statistics have been widely applied for such inference, even though these methods waste information. In contrast, probabilistic methods of inference can be shown to be optimal, if their assumptions are met. In genomic regions where recombination rates are high relative to mutation rates, polymorphic nucleotide sites can be assumed to evolve independently from each other. The distribution of allele frequencies at a large number of such sites has been called the “allele-frequency spectrum” or “site-frequency spectrum” (SFS). Conditional on the allelic proportions, the likelihoods of such data can be modeled as binomial. A simple model representing the evolution of allelic proportions is the biallelic mutation-drift or mutation-directional selection-drift diffusion model. With series of orthogonal polynomials, specifically Jacobi and Gegenbauer polynomials, or the related spheroidal wave functions, the diffusion equations can be solved efficiently. In the neutral case, the product of the binomial likelihoods with the sum of such polynomials leads to finite series of polynomials, i.e., relatively simple equations, from which the exact likelihoods can be calculated. In this article, the use of orthogonal polynomials for inferring population genetic parameters is investigated.

  5. A Fast Method For Bounding The CMB Power Spectrum Likelihood Function

    CERN Document Server

    Borrill, J

    1998-01-01

    As the Cosmic Microwave Background (CMB) radiation is observed to higher and higher angular resolution the size of the resulting datasets becomes a serious constraint on their analysis. In particular current algorithms to determine the location of, and curvature at, the peak of the power spectrum likelihood function from a general $N_{p}$-pixel CMB sky map scale as $O(N_{p}^{3})$. Moreover the current best algorithm --- the quadratic estimator --- is a Newton-Raphson iterative scheme and so requires a `sufficiently good' starting point to guarantee convergence to the true maximum. Here we present an algorithm to calculate bounds on the likelihood function at any point in parameter space using Gaussian quadrature and show that, judiciously applied, it scales as only $O(N_{p}^{7/3})$. Although it provides no direct curvature information we show how this approach is well-suited both to estimating cosmological parameters directly and to providing a coarse map of the power spectrum likelihood function from which t...

  6. Identification of Sparse Neural Functional Connectivity using Penalized Likelihood Estimation and Basis Functions

    Science.gov (United States)

    Song, Dong; Wang, Haonan; Tu, Catherine Y.; Marmarelis, Vasilis Z.; Hampson, Robert E.; Deadwyler, Sam A.; Berger, Theodore W.

    2013-01-01

    One key problem in computational neuroscience and neural engineering is the identification and modeling of functional connectivity in the brain using spike train data. To reduce model complexity, alleviate overfitting, and thus facilitate model interpretation, sparse representation and estimation of functional connectivity is needed. Sparsities include global sparsity, which captures the sparse connectivities between neurons, and local sparsity, which reflects the active temporal ranges of the input-output dynamical interactions. In this paper, we formulate a generalized functional additive model (GFAM) and develop the associated penalized likelihood estimation methods for such a modeling problem. A GFAM consists of a set of basis functions convolving the input signals, and a link function generating the firing probability of the output neuron from the summation of the convolutions weighted by the sought model coefficients. Model sparsities are achieved by using various penalized likelihood estimations and basis functions. Specifically, we introduce two variations of the GFAM using a global basis (e.g., Laguerre basis) and group LASSO estimation, and a local basis (e.g., B-spline basis) and group bridge estimation, respectively. We further develop an optimization method based on quadratic approximation of the likelihood function for the estimation of these models. Simulation and experimental results show that both group-LASSO-Laguerre and group-bridge-B-spline can capture faithfully the global sparsities, while the latter can replicate accurately and simultaneously both global and local sparsities. The sparse models outperform the full models estimated with the standard maximum likelihood method in out-of-sample predictions. PMID:23674048

  7. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    Science.gov (United States)

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted critical value and additional constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and relative merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one.
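
    The type-I error preservation of the weighted method is straightforward to check by simulation: with weights fixed in the protocol, w1*Z1 + w2*Z2 remains standard normal under the null however the stage-two sample size is chosen from interim data. A hedged sketch follows; the "promising zone" rule, sample sizes, and replication count are arbitrary choices for illustration.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    alpha = 0.025
    z_crit = norm.ppf(1 - alpha)
    n1, reps = 50, 50_000
    w1 = w2 = np.sqrt(0.5)                     # weights fixed in the protocol

    rej_weighted = rej_naive = 0
    for _ in range(reps):
        x1 = rng.normal(0.0, 1.0, n1)          # stage 1, H0 true (mean 0)
        z1 = np.sqrt(n1) * x1.mean()
        n2 = 200 if 0.5 < z1 < 2.0 else 50     # data-driven re-sizing rule
        x2 = rng.normal(0.0, 1.0, n2)
        z2 = np.sqrt(n2) * x2.mean()
        rej_weighted += (w1 * z1 + w2 * z2) > z_crit   # weights ignore actual n2
        pooled = np.concatenate([x1, x2])
        rej_naive += np.sqrt(len(pooled)) * pooled.mean() > z_crit

    print("weighted test size:", rej_weighted / reps)  # close to 0.025
    print("naive pooled size :", rej_naive / reps)     # typically inflated
    ```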

  8. Philosophy and phylogenetic inference: a comparison of likelihood and parsimony methods in the context of Karl Popper's writings on corroboration.

    Science.gov (United States)

    de Queiroz, K; Poe, S

    2001-06-01

    Advocates of cladistic parsimony methods have invoked the philosophy of Karl Popper in an attempt to argue for the superiority of those methods over phylogenetic methods based on Ronald Fisher's statistical principle of likelihood. We argue that the concept of likelihood in general, and its application to problems of phylogenetic inference in particular, are highly compatible with Popper's philosophy. Examination of Popper's writings reveals that his concept of corroboration is, in fact, based on likelihood. Moreover, because probabilistic assumptions are necessary for calculating the probabilities that define Popper's corroboration, likelihood methods of phylogenetic inference--with their explicit probabilistic basis--are easily reconciled with his concept. In contrast, cladistic parsimony methods, at least as described by certain advocates of those methods, are less easily reconciled with Popper's concept of corroboration. If those methods are interpreted as lacking probabilistic assumptions, then they are incompatible with corroboration. Conversely, if parsimony methods are to be considered compatible with corroboration, then they must be interpreted as carrying implicit probabilistic assumptions. Thus, the non-probabilistic interpretation of cladistic parsimony favored by some advocates of those methods is contradicted by an attempt by the same authors to justify parsimony methods in terms of Popper's concept of corroboration. In addition to being compatible with Popperian corroboration, the likelihood approach to phylogenetic inference permits researchers to test the assumptions of their analytical methods (models) in a way that is consistent with Popper's ideas about the provisional nature of background knowledge.

  9. Life Writing After Empire

    DEFF Research Database (Denmark)

    A watershed moment of the twentieth century, the end of empire saw upheavals to global power structures and national identities. However, decolonisation profoundly affected individual subjectivities too. Life Writing After Empire examines how people around the globe have made sense of the post-imperial condition through the practice of life writing in its multifarious expressions, from auto/biography through travel writing to oral history and photography. Through interdisciplinary approaches that draw on literature and history alike, the contributors explore how we might approach these genres differently...... in order to understand how individual life writing reflects broader societal changes. From far-flung corners of the former British Empire, people have turned to life writing to manage painful or nostalgic memories, as well as to think about the past and future of the nation anew through the personal...

  10. Empirical philosophy of science

    DEFF Research Database (Denmark)

    2015-01-01

    A growing number of philosophers of science make use of qualitative empirical data, a development that may reconfigure the relations between philosophy and sociology of science and that is reminiscent of efforts to integrate history and philosophy of science. Therefore, the first part...... of this introduction to the volume Empirical Philosophy of Science outlines the history of relations between philosophy and sociology of science on the one hand, and philosophy and history of science on the other. The second part of this introduction offers an overview of the papers in the volume, each of which...... is giving its own answer to questions such as: Why does the use of qualitative empirical methods benefit philosophical accounts of science? And how should these methods be used by the philosopher?...

  11. A comparison of likelihood ratio tests and Rao's score test for three separable covariance matrix structures.

    Science.gov (United States)

    Filipiak, Katarzyna; Klein, Daniel; Roy, Anuradha

    2017-01-01

    The problem of testing the separability of a covariance matrix against an unstructured variance-covariance matrix is studied in the context of multivariate repeated measures data using Rao's score test (RST). The RST statistic is developed with the first component of the separable structure as a first-order autoregressive (AR(1)) correlation matrix or an unstructured (UN) covariance matrix under the assumption of multivariate normality. It is shown that the distribution of the RST statistic under the null hypothesis of any separability does not depend on the true values of the mean or the unstructured components of the separable structure. A significant advantage of the RST is that it can be performed for small samples, even smaller than the dimension of the data, where the likelihood ratio test (LRT) cannot be used, and it outperforms the standard LRT in a number of contexts. Monte Carlo simulations are then used to study the comparative behavior of the null distribution of the RST statistic, as well as that of the LRT statistic, in terms of sample size considerations, and for the estimation of the empirical percentiles. Our findings are compared with existing results where the first component of the separable structure is a compound symmetry (CS) correlation matrix. It is also shown by simulations that the empirical null distribution of the RST statistic converges faster than the empirical null distribution of the LRT statistic to the limiting χ² distribution. The tests are implemented on a real dataset from medical studies.

  12. Empirical correction of a toy climate model

    CERN Document Server

    Allgaier, Nicholas A; Danforth, Christopher M

    2011-01-01

    Improving the accuracy of forecast models for physical systems such as the atmosphere is a crucial ongoing effort. Errors in state estimation for these often highly nonlinear systems have been the primary focus of recent research, but as that error has been successfully diminished, the role of model error in forecast uncertainty has duly increased. The present study is an investigation of a particular empirical correction procedure that is of special interest because it considers the model a "black box", and therefore can be applied widely with little modification. The procedure involves the comparison of short model forecasts with a reference "truth" system during a training period in order to calculate systematic (1) state-independent model bias and (2) state-dependent error patterns. An estimate of the likelihood of the latter error component is computed from the current state at every timestep of model integration. The effectiveness of this technique is explored in two experiments: (1) a perfect model scen...
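
    A minimal sketch of component (1), the state-independent bias correction, using a one-dimensional logistic map as a hypothetical stand-in for both the "truth" system and the biased model; the paper's state-dependent component (2) is not sketched here.

    ```python
    import numpy as np

    def truth_step(x):
        return 3.9 * x * (1.0 - x)            # logistic map as reference "truth"

    def model_step(x):
        return 3.9 * x * (1.0 - x) + 0.02     # same map plus a systematic bias

    # Training period: compare one-step model forecasts against the truth and
    # estimate the state-independent bias as the mean forecast error.
    x, errors = 0.2, []
    for _ in range(5000):
        x_next = truth_step(x)
        errors.append(model_step(x) - x_next)
        x = x_next
    bias = np.mean(errors)                    # exactly 0.02 in this noiseless toy

    def corrected_step(x):
        """Corrected model: subtract the learned bias at every timestep."""
        return model_step(x) - bias

    print("estimated bias:", bias)
    ```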

  13. Community detection in networks: Modularity optimization and maximum likelihood are equivalent

    CERN Document Server

    Newman, M E J

    2016-01-01

    We demonstrate an exact equivalence between two widely used methods of community detection in networks, the method of modularity maximization in its generalized form which incorporates a resolution parameter controlling the size of the communities discovered, and the method of maximum likelihood applied to the special case of the stochastic block model known as the planted partition model, in which all communities in a network are assumed to have statistically similar properties. Among other things, this equivalence provides a mathematically principled derivation of the modularity function, clarifies the conditions and assumptions of its use, and gives an explicit formula for the optimal value of the resolution parameter.
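
    In the notation of the paper (A the adjacency matrix, k_i the degrees, m the number of edges, g_i the group assignments, and ω_in, ω_out the within- and between-community edge probabilities of the planted partition model), the generalized modularity and the reported optimal resolution parameter read:

    ```latex
    Q(\gamma) = \frac{1}{2m}\sum_{ij}\left[A_{ij} - \gamma\,\frac{k_i k_j}{2m}\right]\delta_{g_i g_j},
    \qquad
    \gamma = \frac{\omega_{\mathrm{in}} - \omega_{\mathrm{out}}}{\log \omega_{\mathrm{in}} - \log \omega_{\mathrm{out}}} .
    ```

    With γ set to this value, maximizing Q(γ) over partitions coincides with maximizing the profile likelihood of the planted partition model.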

  14. Genetic algorithm-based wide-band deterministic maximum likelihood direction finding algorithm

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Wide-band direction finding is one of the hot and difficult tasks in array signal processing. This paper generalizes the narrow-band deterministic maximum likelihood direction finding algorithm to the wide-band case and constructs a corresponding objective function, which is then optimized globally with a genetic algorithm. The direction of arrival is estimated without preprocessing of the array data, so the algorithm eliminates the effect of pre-estimation errors on the final estimate. The algorithm is applied to a uniform linear array, and extensive simulation results demonstrate its efficacy. The simulations also yield the relation between the estimation error and the parameters of the genetic algorithm.

  15. %lrasch_mml: A SAS Macro for Marginal Maximum Likelihood Estimation in Longitudinal Polytomous Rasch Models

    Directory of Open Access Journals (Sweden)

    Maja Olsbjerg

    2015-10-01

    Full Text Available Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when the focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.

  17. A likelihood method to cross-calibrate air-shower detectors

    Science.gov (United States)

    Dembinski, Hans Peter; Kégl, Balázs; Mariş, Ioana C.; Roth, Markus; Veberič, Darko

    2016-01-01

    We present a detailed statistical treatment of the energy calibration of hybrid air-shower detectors, which combine a surface detector array and a fluorescence detector, to obtain an unbiased estimate of the calibration curve. The special features of calibration data from air showers prevent unbiased results, if a standard least-squares fit is applied to the problem. We develop a general maximum-likelihood approach, based on the detailed statistical model, to solve the problem. Our approach was developed for the Pierre Auger Observatory, but the applied principles are general and can be transferred to other air-shower experiments, even to the cross-calibration of other observables. Since our general likelihood function is expensive to compute, we derive two approximations with significantly smaller computational cost. In recent years, both have been used to calibrate data of the Pierre Auger Observatory. We demonstrate that these approximations introduce negligible bias when they are applied to simulated toy experiments, which mimic realistic experimental conditions.

  18. Framing the empirical findings on firm growth

    NARCIS (Netherlands)

    Capasso, M.; Cefis, E.; Sapio, S.

    2010-01-01

    This paper proposes a general framework to account for the divergent results in the empirical literature on the relation between firm sizes and growth rates, and on many results on growth autocorrelation. In particular, we provide an explanation for why traces of the LPE (Law of Proportionate Effect) sometimes occur in condition

  19. Parallel Likelihood Function Evaluation on Heterogeneous Many-core Systems

    CERN Document Server

    Jarp, Sverre; Leduc, Julien; Nowak, Andrzej; Sneen Lindal, Yngve

    2011-01-01

    This paper describes a parallel implementation that allows the evaluations of the likelihood function for data analysis methods to run cooperatively on heterogeneous computational devices (i.e. CPU and GPU) belonging to a single computational node. The implementation is able to split and balance the workload needed for the evaluation of the likelihood function in corresponding sub-workloads to be executed in parallel on each computational device. The CPU parallelization is implemented using OpenMP, while the GPU implementation is based on OpenCL. Comparisons of the performance of these implementations for different configurations and different hardware systems are reported. Tests are based on a real data analysis carried out in the high energy physics community.

  20. Measures of family resemblance for binary traits: likelihood based inference.

    Science.gov (United States)

    Shoukri, Mohamed M; ElDali, Abdelmoneim; Donner, Allan

    2012-07-24

    Detection and estimation of measures of familial aggregation is considered the first step in establishing whether a certain disease has a genetic component. Such measures are usually estimated from observational studies on siblings, parent-offspring pairs, extended pedigrees or twins. When the trait of interest is quantitative (e.g., blood pressure, body mass index, blood glucose level, etc.), efficient likelihood estimation of such measures is feasible under the assumption of multivariate normality of the distributions of the traits. In this case the intra-class and inter-class correlations are used to assess the similarities among family members. When the trait is measured on the binary scale, we establish full likelihood inference on such measures among siblings, parents, and parent-offspring pairs. We illustrate the methodology on nuclear family data where the trait is the presence or absence of hypertension.

  1. Applications of the Likelihood Theory in Finance: Modelling and Pricing

    CERN Document Server

    Janssen, Arnold

    2012-01-01

    This paper discusses the connection between mathematical finance and statistical modelling, which turns out to be more than a formal mathematical correspondence. We aim to show how common results and notions in statistics, and their meaning, can be translated into the world of mathematical finance and vice versa. Many similarities can be expressed in terms of LeCam's theory of statistical experiments, which is the theory of the behaviour of likelihood processes. For positive prices, the arbitrage-free financial assets fit into filtered experiments. It is shown that they are given by filtered likelihood ratio processes. From the statistical point of view, martingale measures, completeness and pricing formulas are revisited. The pricing formulas for various options are connected with the power functions of tests. For instance, the Black-Scholes price of a European option has an interpretation as the Bayes risk of a Neyman-Pearson test. Under contiguity the convergence of financial experiments and option prices ...

  2. A composite likelihood approach for spatially correlated survival data.

    Science.gov (United States)

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.

  3. Smoothed log-concave maximum likelihood estimation with applications

    CERN Document Server

    Chen, Yining

    2011-01-01

    We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.

  4. $\\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs

    CERN Document Server

    van de Geer, Sara

    2012-01-01

    We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than sample size $n$ but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.

  5. A Weighted Likelihood Ratio of Two Related Negative Hypergeometric Distributions

    Institute of Scientific and Technical Information of China (English)

    Titi Obilade

    2004-01-01

    In this paper we consider some related negative hypergeometric distributions arising from the problem of sampling without replacement from an urn containing balls of different colours in different proportions, but stopping only after some specific number of balls of different colours has been obtained. With the aid of some simple recurrence relations and identities we obtain, in the case of two colours, the moments for the maximum negative hypergeometric distribution, the minimum negative hypergeometric distribution, the likelihood ratio negative hypergeometric distribution and consequently the likelihood proportional negative hypergeometric distribution. To the extent that the sampling scheme is applicable to modelling data, as illustrated with a biological example, and in fact to many situations of estimating Bernoulli parameters for binary traits within a finite population, these are important first-step results.

  6. Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Kenneth W. K. Lui

    2009-01-01

    Full Text Available We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.

  7. Bayesian experimental design for models with intractable likelihoods.

    Science.gov (United States)

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables.
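
    The ABC rejection step at the core of the utility computation is simple to sketch. Everything below (the toy simulator, summary statistic, tolerance, and prior) is a hypothetical stand-in, not the epidemic or macroparasite models of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate(beta, n=100):
        """Crude stochastic stand-in: each of n individuals is infected by
        the end of the outbreak with probability 1 - exp(-beta)."""
        return rng.binomial(1, 1.0 - np.exp(-beta), size=n)

    def summary(data):
        return data.mean()                    # final fraction infected

    observed = simulate(0.7)                  # pretend beta = 0.7 is unknown
    s_obs = summary(observed)

    # ABC rejection: draw from the prior, simulate, keep draws whose summary
    # lands within eps of the observed one; pre-computing the simulations once
    # (as in the paper) lets the same table serve many candidate designs.
    eps, draws = 0.02, 50_000
    betas = rng.uniform(0.0, 2.0, draws)      # prior on beta
    accepted = [b for b in betas
                if abs(summary(simulate(b)) - s_obs) < eps]
    print(len(accepted), np.mean(accepted), np.std(accepted))
    ```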

  10. Maximum likelihood method and Fisher's information in physics and econophysics

    CERN Document Server

    Syska, Jacek

    2012-01-01

    Three steps in the development of the maximum likelihood (ML) method are presented. First, the application of the ML method and the Fisher information notion in model selection analysis is described (Chapter 1). The fundamentals of differential geometry in the construction of the statistical space are introduced, illustrated by examples of estimation in exponential models. Second, the notions of relative entropy and information channel capacity are introduced (Chapter 2). The observed and expected structural information principle (IP) and the variational IP of the modified extremal physical information (EPI) method of Frieden and Soffer are presented and discussed (Chapter 3). The derivation of the structural IP, based on the analyticity of the logarithm of the likelihood function and on the metricity of the statistical space of the system, is given. Third, the use of the EPI method is developed (Chapters 4-5). The information channel capacity is used for the field theory models cl...

  11. Maximum-likelihood estimation prevents unphysical Mueller matrices

    CERN Document Server

    Aiello, A; Voigt, D; Woerdman, J P

    2005-01-01

    We show that the method of maximum-likelihood estimation, recently introduced in the context of quantum process tomography, can be applied to the determination of Mueller matrices characterizing the polarization properties of classical optical systems. Contrary to linear reconstruction algorithms, the proposed method yields physically acceptable Mueller matrices even in the presence of uncontrolled experimental errors. We illustrate the method on an unphysical measured Mueller matrix taken from the literature.

  12. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge; Schweder, Tore

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...... in an example concerning minke whales in the North Atlantic. Our modelling and computational approach is flexible but demanding in terms of computing time....

  13. Improved Likelihood Function in Particle-based IR Eye Tracking

    DEFF Research Database (Denmark)

    Satria, R.; Sorensen, J.; Hammoud, R.

    2005-01-01

    In this paper we propose a log likelihood-ratio function of foreground and background models used in a particle filter to track the eye region in dark-bright pupil image sequences. The model fuses information from both the dark and bright pupil images and their difference image. Our...... performance in challenging sequences with test subjects showing large head movements and under significant lighting conditions....

  14. Australian food life style segments and elaboration likelihood differences

    DEFF Research Database (Denmark)

    Brunsø, Karen; Reid, Mike

    As the global food marketing environment becomes more competitive, the international and comparative perspective of consumers' attitudes and behaviours becomes more important for both practitioners and academics. This research employs the Food-Related Life Style (FRL) instrument in Australia...... insights into cross-cultural similarities and differences, into elaboration likelihood differences among consumer segments, and show how the involvement construct may be used as basis for communication development....

  15. Maximizing Friend-Making Likelihood for Social Activity Organization

    Science.gov (United States)

    2015-05-22

    the interplay of the group size, the constraint on existing friendships, and the objective function on the likelihood of friend making. We prove that... social networks (OSNs), e.g., Facebook, Meetup, and Skout, more and more people initiate friend gatherings or group activities via these OSNs. For example, more than 16 million events are created on Facebook each month to organize various kinds of activities, and more than 500 thousand face

  16. Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation

    OpenAIRE

    2009-01-01

    We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...

  17. Maximum Likelihood Sequence Detection Receivers for Nonlinear Optical Channels

    OpenAIRE

    2015-01-01

    The space-time whitened matched filter (ST-WMF) maximum likelihood sequence detection (MLSD) architecture has been recently proposed (Maggio et al., 2014). Its objective is reducing implementation complexity in transmissions over nonlinear dispersive channels. The ST-WMF-MLSD receiver (i) drastically reduces the number of states of the Viterbi decoder (VD) and (ii) offers a smooth trade-off between performance and complexity. In this work the ST-WMF-MLSD receiver is investigated in detail. We...

  18. An Empirical Kaiser Criterion.

    Science.gov (United States)

    Braeken, Johan; van Assen, Marcel A L M

    2016-03-31

    In exploratory factor analysis (EFA), most popular methods for dimensionality assessment, such as the scree plot, the Kaiser criterion, or (the current gold standard) parallel analysis, are based on eigenvalues of the correlation matrix. To further the understanding and development of factor retention methods, results on population and sample eigenvalue distributions are introduced based on random matrix theory and Monte Carlo simulations. These results are used to develop a new factor retention method, the Empirical Kaiser Criterion. The performance of the Empirical Kaiser Criterion and parallel analysis is examined in typical research settings, with multiple scales that are desired to be relatively short, but still reliable. Theoretical and simulation results illustrate that the new Empirical Kaiser Criterion performs as well as parallel analysis in typical research settings with uncorrelated scales, but much better when scales are both correlated and short. We conclude that the Empirical Kaiser Criterion is a powerful and promising factor retention method, because it is based on distribution theory of eigenvalues, shows good performance, is easily visualized and computed, and is useful for power analysis and sample size planning for EFA.
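
    For readers unfamiliar with the Monte Carlo benchmark that the Empirical Kaiser Criterion is compared against, here is a minimal sketch of parallel analysis (not of the EKC itself); the two-scale simulation loosely mirrors the "correlated short scales" setting discussed above, with all settings chosen for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def parallel_analysis(X, n_sims=200, q=0.95):
        """Retain leading factors whose sample eigenvalues exceed the
        q-quantile of eigenvalues from random uncorrelated data of the
        same size."""
        n, p = X.shape
        obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
        sims = np.empty((n_sims, p))
        for s in range(n_sims):
            R = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
            sims[s] = np.linalg.eigvalsh(R)[::-1]
        ref = np.quantile(sims, q, axis=0)
        k = 0
        while k < p and obs[k] > ref[k]:      # sequential comparison
            k += 1
        return k

    # Two correlated 4-item scales (factor correlation ~0.8), n = 200.
    n, load = 200, 0.6
    mix = np.array([[1.0, 0.5], [0.5, 1.0]])
    f = rng.normal(size=(n, 2)) @ mix         # correlated factor scores
    X = np.hstack([load * f[:, [j]] + rng.normal(size=(n, 4)) for j in (0, 1)])
    print("factors retained:", parallel_analysis(X))
    ```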

  19. Empirical Music Aesthetics

    DEFF Research Database (Denmark)

    Grund, Cynthia M.

    musical performance and reception is inspired by traditional approaches within aesthetics, but it also challenges some of the presuppositions inherent in them. As an example of such work I present a research project in empirical music aesthetics begun last year and of which I am a team member....

  20. Trade and Empire

    DEFF Research Database (Denmark)

    Bang, Peter Fibiger

    2007-01-01

    This article seeks to establish a new set of organizing concepts for the analysis of the Roman imperial economy from Republic to late antiquity: tributary empire, portfolio capitalism and protection costs. Together these concepts explain economic developments in the Roman world better than the

  1. Empirical Discovery in Linguistics

    CERN Document Server

    Pericliev, V

    1995-01-01

    A discovery system for detecting correspondences in data is described, based on the familiar induction methods of J. S. Mill. Given a set of observations, the system induces the ``causally'' related facts in these observations. Its application to empirical linguistic discovery is described.

  2. Essays in empirical banking

    NARCIS (Netherlands)

    Bai, Y.

    2015-01-01

    This dissertation consists of three essays on empirical banking. They study how information and political activeness affect banks’ lending behavior, as well as the effect of lending relationships with banks on firms’ stock performance during an interbank liquidity crunch. The first essay looks at a t

  5. Empirical social choice

    DEFF Research Database (Denmark)

    Kurrild-Klitgaard, Peter

    2014-01-01

    applications. Special attention is given to three phenomena and their possible empirical manifestations: The instability of social choice in the form of (1) the possibility of majority cycles, (2) the non-robustness of social choices given alternative voting methods, and (3) the possibility of various forms...

  6. Auditory Imagery: Empirical Findings

    Science.gov (United States)

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  7. Fertilization response likelihood for the interpretation of leaf analyses

    Directory of Open Access Journals (Sweden)

    Celsemy Eleutério Maia

    2012-04-01

    Full Text Available Leaf analysis is the chemical evaluation of plant nutritional status, in which the nutrient concentrations found in the tissue reflect the nutritional status of the plants. A correct interpretation of the results of leaf analysis is therefore fundamental for an effective use of this tool. The purpose of this study was to propose the method of Fertilization Response Likelihood (FRL) for the interpretation of leaf analysis and to compare it with the Diagnosis and Recommendation Integrated System (DRIS). The database consisted of 157 analyses of the N, P, K, Ca, Mg, S, Cu, Fe, Mn, Zn, and B concentrations in coffee leaves, which were divided into two groups: low yield (< 30 bags ha-1) and high yield (> 30 bags ha-1). The DRIS indices were calculated using the method proposed by Jones (1981). The fertilization response likelihood was computed based on a normal approximation. It was found that the FRL allowed an evaluation of the nutritional status of coffee trees, coinciding with the DRIS-based diagnoses in 84.96 % of the crops.

  8. CMB likelihood approximation by a Gaussianized Blackwell-Rao estimator

    CERN Document Server

    Rudjord, Ø; Eriksen, H K; Huey, Greg; Górski, K M; Jewell, J B

    2008-01-01

    We introduce a new CMB temperature likelihood approximation called the Gaussianized Blackwell-Rao (GBR) estimator. This estimator is derived by transforming the observed marginal power spectrum distributions obtained by the CMB Gibbs sampler into standard univariate Gaussians, and then approximating their joint transformed distribution by a multivariate Gaussian. The method is exact for full-sky coverage and uniform noise, and an excellent approximation for sky cuts and scanning patterns relevant for modern satellite experiments such as WMAP and Planck. A single evaluation of this estimator between $\ell=2$ and $\ell=200$ takes ~0.2 CPU milliseconds, while for comparison, a single pixel-space likelihood evaluation between $\ell=2$ and $\ell=30$ for a map with ~2500 pixels requires ~20 seconds. We apply this tool to the 5-year WMAP temperature data, and re-estimate the angular temperature power spectrum, $C_\ell$, and likelihood, $L(C_\ell)$, for $\ell \le 200$, and derive new cosmological parameters for the standard six-parameter LambdaCDM mo...

  9. Likelihood inference for a fractionally cointegrated vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X_{t} to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β......′X_{t} is fractional of order d-b. The parameters d and b satisfy either d≥b≥1/2, d=b≥1/2, or d=d_{0}≥b≥1/2. Our main technical contribution is the proof of consistency of the maximum likelihood estimators on the set 1/2≤b≤d≤d_{1} for any d_{1}≥d_{0}. To this end, we consider the conditional likelihood as a stochastic...... process in the parameters, and prove that it converges in distribution when errors are i.i.d. with suitable moment conditions and initial values are bounded. We then prove that the estimator of β is asymptotically mixed Gaussian and estimators of the remaining parameters are asymptotically Gaussian. We...

  11. Local solutions of Maximum Likelihood Estimation in Quantum State Tomography

    CERN Document Server

    Gonçalves, Douglas S; Lavor, Carlile; Farías, Osvaldo Jiménez; Ribeiro, P H Souto

    2011-01-01

    Maximum likelihood estimation is one of the most used methods in quantum state tomography, where the aim is to find the best density matrix for the description of a physical system. Results of measurements on the system should match the expected values produced by the density matrix. In some cases, however, if the matrix is parameterized to ensure positivity and unit trace, the negative log-likelihood function may have several local minima. In several papers in the field, authors attribute a source of errors to the possibility that most of these local minima are not global, so that optimization methods can be trapped in the wrong minimum, leading to a wrong density matrix. Here we show that, for convex negative log-likelihood functions, all local minima are global. We also show that a practical source of errors is in fact the use of optimization methods that do not have the global convergence property or that present numerical instabilities. The clarification of this point has important repercussion on quantum informat...

  12. Accurate determination of phase arrival times using autoregressive likelihood estimation

    Directory of Open Access Journals (Sweden)

    G. Kvaerna

    1994-06-01

    Full Text Available We have investigated the potential for automatic use of an onset picker based on autoregressive likelihood estimation. Both a single-component and a three-component version of this method have been tested on data from events located in the Khibiny Massif of the Kola peninsula, recorded at the Apatity array, the Apatity three-component station and the ARCESS array. Using this method, we have been able to estimate onset times to an accuracy (standard deviation) of about 0.05 s for P phases and 0.15-0.20 s for S phases. These accuracies are as good as those of analyst picks, and are considerably better than the accuracies of the current onset procedure used for processing of regional array data at NORSAR. In another application, we have developed a generic procedure to re-estimate the onsets of all types of first-arriving P phases. By again applying the autoregressive likelihood technique, we have obtained automatic onset times of a quality such that 70% of the automatic picks are within 0.1 s of the best manual pick. For the onset time procedure currently used at NORSAR, the corresponding number is 28%. Clearly, automatic re-estimation of first-arriving P onsets using the autoregressive likelihood technique has the potential to significantly reduce the retiming efforts of the analyst.
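
    The segmentation idea behind such onset pickers can be illustrated with a simplified variance-based AIC picker, a close relative of the autoregressive likelihood approach (which additionally fits AR models to the two segments). The synthetic trace and all settings are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def aic_onset(x, margin=10):
        """Variance-based AIC picker: the onset is the split point that makes
        the two segments maximally dissimilar stationary processes."""
        n = len(x)
        ks = np.arange(margin, n - margin)
        aic = np.array([k * np.log(x[:k].var()) + (n - k) * np.log(x[k:].var())
                        for k in ks])
        return ks[np.argmin(aic)]

    # Synthetic trace: noise, then a higher-amplitude "P arrival" at sample 500.
    n, onset = 1000, 500
    x = rng.normal(0.0, 1.0, n)
    x[onset:] += rng.normal(0.0, 4.0, n - onset)
    print("picked onset:", aic_onset(x))      # should land near 500
    ```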

  13. Maximum likelihood tuning of a vehicle motion filter

    Science.gov (United States)

    Trankle, Thomas L.; Rabin, Uri H.

    1990-01-01

    This paper describes the use of maximum likelihood estimation of unknown parameters appearing in a nonlinear vehicle motion filter. The filter uses the kinematic equations of motion of a rigid body in motion over a spherical earth. The nine states of the filter represent vehicle velocity, attitude, and position. The inputs to the filter are three components of translational acceleration and three components of angular rate. Measurements used to update states include air data, altitude, position, and attitude. Expressions are derived for the elements of filter matrices needed to use air data in a body-fixed frame with filter states expressed in a geographic frame. An expression for the likelihood functions of the data is given, along with accurate approximations for the function's gradient and Hessian with respect to unknown parameters. These are used by a numerical quasi-Newton algorithm for maximizing the likelihood function of the data in order to estimate the unknown parameters. The parameter estimation algorithm is useful for processing data from aircraft flight tests or for tuning inertial navigation systems.

  14. Improving on the empirical covariance matrix using truncated PCA with white noise residuals

    CERN Document Server

    Jewson, S

    2005-01-01

    The empirical covariance matrix is not necessarily the best estimator for the population covariance matrix: we describe a simple method which gives better estimates in two examples. The method models the covariance matrix using truncated PCA with white noise residuals. Jack-knife cross-validation is used to find the truncation that maximises the out-of-sample likelihood score.
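
    A minimal sketch of the estimator and of a cross-validated choice of the truncation (leave-one-out Gaussian log score instead of the jack-knife described above; all names and the toy data are assumptions).

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def truncated_pca_cov(X, k):
        """Keep the leading k principal components of the empirical covariance
        and replace the discarded spectrum by its average, i.e. an isotropic
        white-noise residual."""
        S = np.cov(X, rowvar=False)
        vals, vecs = np.linalg.eigh(S)
        vals, vecs = vals[::-1], vecs[:, ::-1]       # descending order
        p = S.shape[0]
        noise = vals[k:].mean() if k < p else 0.0
        V = vecs[:, :k]
        return V @ np.diag(vals[:k]) @ V.T + noise * (np.eye(p) - V @ V.T)

    def loo_loglik(X, k):
        """Leave-one-out Gaussian log score, used here to pick the truncation."""
        total = 0.0
        for i in range(len(X)):
            Xtr = np.delete(X, i, axis=0)
            C = truncated_pca_cov(Xtr, k)
            d = X[i] - Xtr.mean(axis=0)
            _, logdet = np.linalg.slogdet(C)
            total += -0.5 * (logdet + d @ np.linalg.solve(C, d))
        return total

    # Toy data with two real factors in eight variables.
    F, W = rng.normal(size=(40, 2)), rng.normal(size=(2, 8))
    X = F @ W + 0.5 * rng.normal(size=(40, 8))
    best_k = max(range(8), key=lambda k: loo_loglik(X, k))
    print("chosen truncation:", best_k)        # should be near the true rank 2
    ```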

  15. A maximum likelihood method for studying gene-environment interactions under conditional independence of genotype and exposure.

    Science.gov (United States)

    Cheng, K F

    2006-09-30

    Given the biomedical interest in gene-environment interactions along with the difficulties inherent in gathering genetic data from controls, epidemiologists need methodologies that can increase the precision of estimating interactions while minimizing the genotyping of controls. To achieve this purpose, many epidemiologists suggested that one can use a case-only design. In this paper, we present a maximum likelihood method for making inference about gene-environment interactions using case-only data. The probability of disease development is described by a logistic risk model. Thus the interactions are model parameters measuring the departure of joint effects of exposure and genotype from multiplicative odds ratios. We extend the typical inference method derived under the assumption of independence between genotype and exposure to that under a more general assumption of conditional independence. Our maximum likelihood method can be applied to analyse both categorical and continuous environmental factors, and generalized to make inference about gene-gene-environment interactions. Moreover, the application of this method can be reduced to simply fitting a multinomial logistic model when we have case-only data. As a consequence, the maximum likelihood estimates of interactions and likelihood ratio tests for hypotheses concerning interactions can be easily computed. The methodology is illustrated through an example based on a study about the joint effects of XRCC1 polymorphisms and smoking on bladder cancer. We also give two simulation studies to show that the proposed method is reliable in finite sample situations.
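
    The reduction noted at the end of the abstract can be illustrated in the simplest binary-exposure, binary-genotype case: under gene-environment independence, regressing genotype on exposure among cases alone recovers the multiplicative interaction parameter. A hedged simulation sketch, with arbitrary model coefficients and sizes.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)

    # Population with independent genotype G and exposure E, and a logistic
    # disease model containing a multiplicative interaction (log-OR 0.7).
    n = 200_000
    G = rng.binomial(1, 0.3, n)
    E = rng.binomial(1, 0.4, n)
    logit = -4.0 + 0.3 * G + 0.4 * E + 0.7 * G * E
    D = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    # Case-only estimator: among cases, regress genotype on exposure. Under
    # gene-environment independence the exposure coefficient estimates the
    # interaction parameter -- the "fit a (multinomial) logistic model"
    # reduction noted in the abstract, in its simplest binary form.
    cases = D == 1
    exog = sm.add_constant(E[cases].astype(float))
    fit = sm.Logit(G[cases].astype(float), exog).fit(disp=0)
    print("estimated interaction log-OR:", fit.params[1])   # close to 0.7
    ```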

  16. Communicating likelihoods and probabilities in forecasts of volcanic eruptions

    Science.gov (United States)

    Doyle, Emma E. H.; McClure, John; Johnston, David M.; Paton, Douglas

    2014-02-01

    The issuing of forecasts and warnings of natural hazard events, such as volcanic eruptions, earthquake aftershock sequences and extreme weather often involves the use of probabilistic terms, particularly when communicated by scientific advisory groups to key decision-makers, who can differ greatly in relative expertise and function in the decision making process. Recipients may also differ in their perception of relative importance of political and economic influences on interpretation. Consequently, the interpretation of these probabilistic terms can vary greatly due to the framing of the statements, and whether verbal or numerical terms are used. We present a review from the psychology literature on how the framing of information influences communication of these probability terms. It is also unclear as to how people rate their perception of an event's likelihood throughout a time frame when a forecast time window is stated. Previous research has identified that, when presented with a 10-year time window forecast, participants viewed the likelihood of an event occurring ‘today’ as being less than that in year 10. Here we show that this skew in perception also occurs for short-term time windows (under one week) that are of most relevance for emergency warnings. In addition, unlike the long-time window statements, the use of the phrasing “within the next…” instead of “in the next…” does not mitigate this skew, nor do we observe significant differences between the perceived likelihoods of scientists and non-scientists. This finding suggests that effects occurring due to the shorter time window may be ‘masking’ any differences in perception due to wording or career background observed for long-time window forecasts. These results have implications for scientific advice, warning forecasts, emergency management decision-making, and public information as any skew in perceived event likelihood towards the end of a forecast time window may result in

  17. Principles of Empirically Supported Interventions Applied to Anger Management

    Science.gov (United States)

    Deffenbacher, Jerry L.; Oetting, Eugene R.; DiGiuseppe, Raymond A.

    2002-01-01

    This article applies the Principles of Empirically Supported Interventions (PESI) in counseling psychology to anger management with adults. The review suggests that there is empirical support for cognitive-behavioral interventions generally and for four specific interventions (relaxation, cognitive, behavioral skill enhancement, and combinations…

  18. empire as material setting and heuristic grid for new testament ...

    African Journals Online (AJOL)

    2010-06-17

    Using postcolonial analysis to account for the Roman Empire's pervasive presence in and influence ... The study of the Roman Empire, as far as its impact on .... discussing postcolonial and various types of resistance literature, given the ... force of generally well-trained and well-resourced legions, which.

  19. Goodness! The empirical turn in health care ethics

    NARCIS (Netherlands)

    Willems, D.; Pols, J.

    2010-01-01

    This paper is intended to encourage scholars to submit papers for a symposium and the next special issue of Medische Antropologie which will be on empirical studies of normative questions. We describe the ‘empirical turn’ in medical ethics. Medical ethics and bioethics in general have witnessed a mo

  20. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques. Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  1. Unconditional efficient one-sided confidence limits for the odds ratio based on conditional likelihood.

    Science.gov (United States)

    Lloyd, Chris J; Moldovan, Max V

    2007-12-10

    We compare various one-sided confidence limits for the odds ratio in a 2 x 2 table. The first group of limits relies on first-order asymptotic approximations and includes limits based on the (signed) likelihood ratio, score and Wald statistics. The second group of limits is based on the conditional tilted hypergeometric distribution, with and without mid-P correction. All these limits have poor unconditional coverage properties and so we apply the general transformation of Buehler (J. Am. Statist. Assoc. 1957; 52:482-493) to obtain limits which are unconditionally exact. The performance of these competing exact limits is assessed across a range of sample sizes and parameter values by looking at their mean size. The results indicate that Buehler limits generated from the conditional likelihood have the best performance, with a slight preference for the mid-P version. This confidence limit has not been proposed before and is recommended for general use, especially when the underlying probabilities are not extreme.

  2. Restricted maximum likelihood estimation of genetic principal components and smoothed covariance matrices

    Directory of Open Access Journals (Sweden)

    Kirkpatrick Mark

    2005-01-01

    Full Text Available Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given.

  3. Integrating biological knowledge into variable selection: an empirical Bayes approach with an application in cancer biology.

    Science.gov (United States)

    Hill, Steven M; Neve, Richard M; Bayani, Nora; Kuo, Wen-Lin; Ziyad, Safiyyah; Spellman, Paul T; Gray, Joe W; Mukherjee, Sach

    2012-05-11

    An important question in the analysis of biochemical data is that of identifying subsets of molecular variables that may jointly influence a biological response. Statistical variable selection methods have been widely used for this purpose. In many settings, it may be important to incorporate ancillary biological information concerning the variables of interest. Pathway and network maps are one example of a source of such information. However, although ancillary information is increasingly available, it is not always clear how it should be used nor how it should be weighted in relation to primary data. We put forward an approach in which biological knowledge is incorporated using informative prior distributions over variable subsets, with prior information selected and weighted in an automated, objective manner using an empirical Bayes formulation. We employ continuous, linear models with interaction terms and exploit biochemically-motivated sparsity constraints to permit exact inference. We show an example of priors for pathway- and network-based information and illustrate our proposed method on both synthetic response data and by an application to cancer drug response data. Comparisons are also made to alternative Bayesian and frequentist penalised-likelihood methods for incorporating network-based information. The empirical Bayes method proposed here can aid prior elicitation for Bayesian variable selection studies and help to guard against mis-specification of priors. Empirical Bayes, together with the proposed pathway-based priors, results in an approach with a competitive variable selection performance. In addition, the overall procedure is fast, deterministic, and has very few user-set parameters, yet is capable of capturing interplay between molecular players. The approach presented is general and readily applicable in any setting with multiple sources of biological prior knowledge.

  4. Relative efficiency appraisal algorithms using small-scale through empirically tailored of discrete choice modeling maximum likelihood estimator computing environment

    National Research Council Canada - National Science Library

    Hyuk-Jae Roh; Prasanta K. Sahu; Ata M. Khan; Satish Sharma

    2015-01-01

    ...., where the model estimation is usually carried out by using commercial software. Nonetheless, tailored computer codes offer modellers greater flexibility and control of unique modelling situation...

  5. Empirical codon substitution matrix

    Directory of Open Access Journals (Sweden)

    Gonnet Gaston H

    2005-06-01

    Full Text Available Abstract Background Codon substitution probabilities are used in many types of molecular evolution studies such as determining Ka/Ks ratios, creating ancestral DNA sequences or aligning coding DNA. Until the recent dramatic increase in genomic data enabled construction of empirical matrices, researchers relied on parameterized models of codon evolution. Here we present the first empirical codon substitution matrix entirely built from alignments of coding sequences from vertebrate DNA and thus provide an alternative to parameterized models of codon evolution. Results A set of 17,502 alignments of orthologous sequences from five vertebrate genomes yielded 8.3 million aligned codons from which the numbers of substitutions between codons were counted. From these data, both a probability matrix and a matrix of similarity scores were computed. They are 64 × 64 matrices describing the substitutions between all codons. Substitutions from sense codons to stop codons are not considered, resulting in block diagonal matrices consisting of 61 × 61 entries for the sense codons and 3 × 3 entries for the stop codons. Conclusion The amount of genomic data currently available allowed for the construction of an empirical codon substitution matrix. However, more sequence data is still needed to construct matrices from different subsets of DNA, specific to kingdoms, evolutionary distance or different amount of synonymous change. Codon mutation matrices have advantages for alignments up to medium evolutionary distances and for usages that require DNA such as ancestral reconstruction of DNA sequences and the calculation of Ka/Ks ratios.
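
    A minimal Python sketch of the counting step, assuming pairs of aligned codons have already been extracted from the alignments; the add-one smoothing and the unordered treatment of codon pairs are our simplifying assumptions, not details taken from the paper:

        import itertools
        import numpy as np

        BASES = "ACGT"
        STOP = {"TAA", "TAG", "TGA"}
        SENSE = [a + b + c for a, b, c in itertools.product(BASES, repeat=3)
                 if a + b + c not in STOP]
        IDX = {c: i for i, c in enumerate(SENSE)}      # 61 sense codons

        def substitution_matrices(aligned_codon_pairs):
            counts = np.ones((61, 61))                 # add-one smoothing
            for c1, c2 in aligned_codon_pairs:
                if c1 in IDX and c2 in IDX:            # skip stop codons
                    counts[IDX[c1], IDX[c2]] += 1
                    counts[IDX[c2], IDX[c1]] += 1      # treat pairs as unordered
            probs = counts / counts.sum(axis=1, keepdims=True)
            joint = counts / counts.sum()
            freqs = joint.sum(axis=1)
            scores = np.log2(joint / np.outer(freqs, freqs))
            return probs, scores                       # 61 x 61 matrices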

  6. Non-parametric inferences for the generalized Lorenz curve

    Institute of Scientific and Technical Information of China (English)

    杨宝莹; 秦更生; BELINGA-HILL Nelly E.

    2012-01-01

    In this paper, we discuss the empirical likelihood-based inferences for the generalized Lorenz (GL) curve. In the settings of simple random sampling, stratified random sampling and cluster random sampling, it is shown that the limiting distributions of the empirical likelihood ratio statistics for the GL ordinate are scaled χ² distributions with one degree of freedom. We also derive the limiting processes of the associated empirical likelihood-based GL processes. Various confidence intervals for the GL ordinate are proposed based on the bootstrap method and the newly developed empirical likelihood theory. Extensive simulation studies are conducted to compare the relative performances of various confidence intervals for GL ordinates in terms of coverage probability and average interval length. The finite sample performances of the empirical likelihood-based confidence bands are also illustrated in simulation studies. Finally, a real example is used to illustrate the application of the recommended intervals.
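
    As a hedged illustration of the bootstrap intervals mentioned above, under simple random sampling only: the sketch below takes the sample GL ordinate to be GL(p) = (1/n) times the sum of the smallest floor(np) order statistics and builds a percentile interval, which is simpler than the empirical-likelihood-calibrated intervals the paper recommends:

        import numpy as np

        def gl_ordinate(x, p):
            # sample GL(p): contribution of the smallest floor(n*p) values
            x = np.sort(x)
            k = int(np.floor(p * len(x)))
            return x[:k].sum() / len(x)

        def bootstrap_ci(x, p, level=0.95, B=2000,
                         rng=np.random.default_rng(0)):
            stats = np.array([gl_ordinate(rng.choice(x, len(x), replace=True), p)
                              for _ in range(B)])
            lo, hi = np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])
            return lo, hi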

  7. Early Course in Obstetrics Increases Likelihood of Practice Including Obstetrics.

    Science.gov (United States)

    Pearson, Jennifer; Westra, Ruth

    2016-10-01

    The Department of Family Medicine and Community Health Duluth has offered the Obstetrical Longitudinal Course (OBLC) as an elective for first-year medical students since 1999. The objective of the OBLC Impact Survey was to assess the effectiveness of the course over the past 15 years. A Qualtrics survey was emailed to participants enrolled in the course from 1999-2014. Data were compiled for the respondent group as a whole as well as for four cohorts based on current level of training/practice. Cross-tabulations with Fisher's exact test were applied, and odds ratios were calculated for factors affecting the likelihood of eventual practice including obstetrics. Participation in the OBLC was successful in increasing exposure, awareness, and comfort in caring for obstetrical patients, and in feeling more prepared for the OB-GYN Clerkship. A total of 50.5% of course participants felt the OBLC influenced their choice of specialty. Of participants who are currently physicians, 51% are practicing family medicine with obstetrics or OB-GYN. Of the cohort of family physicians, 65.2% made the decision whether to include obstetrics in practice during medical school. Odds ratios show the likelihood of practicing obstetrics is higher when participants have completed the OBLC and are also practicing in a rural community. Early exposure to obstetrics, as provided by the OBLC, appears to increase the likelihood of including obstetrics in practice, especially if eventual practice is in a rural community. This course may be a tool to help create a pipeline for future rural family physicians providing obstetrical care.
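
    For readers unfamiliar with the reported statistics, a tiny Python illustration, with entirely made-up counts (not the survey's data), of how an odds ratio and Fisher's exact test are obtained from a 2x2 cross-tabulation:

        from scipy.stats import fisher_exact

        # rows: completed OBLC or not; columns: practice includes OB or not
        table = [[33, 17],
                 [12, 38]]
        odds_ratio, p_value = fisher_exact(table)
        print(f"OR = {odds_ratio:.2f}, p = {p_value:.4g}")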

  8. Maximum likelihood q-estimator reveals nonextensivity regulated by extracellular potassium in the mammalian neuromuscular junction

    CERN Document Server

    da Silva, A J; Santos, D O C; Lima, R F

    2013-01-01

    Recently, we demonstrated the existence of nonextensivity in neuromuscular transmission [Phys. Rev. E 84, 041925 (2011)]. In the present letter, we propose a general criterion based on the q-calculus foundations and nonextensive statistics to estimate the values of both the scale factor and the q-index using the maximum likelihood q-estimation method (MLqE). We next applied our theoretical findings to electrophysiological recordings from the neuromuscular junction (NMJ), where spontaneous miniature end plate potentials (MEPP) were analyzed. These calculations were performed at both normal and high extracellular potassium concentration, [K+]o. This protocol was adopted to test the validity of the q-index under electrophysiological conditions closely resembling physiological stimuli. Surprisingly, the analysis showed a significant difference between the q-index in high and normal [K+]o, where the magnitude of nonextensivity was increased. Our letter provides a general way to obtain the best q-index from the q-Gaussian distrib...
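
    One convenient route to a maximum likelihood q-index, though not necessarily the authors' MLqE algorithm: for 1 < q < 3 a q-Gaussian is a rescaled Student-t with nu = (3 - q)/(q - 1) degrees of freedom, so q can be recovered from a numerical t fit (this sketch assumes the amplitudes have been centred):

        import numpy as np
        from scipy import optimize, stats

        def fit_q_gaussian(x):
            # fit Student-t (df, scale) by maximum likelihood, then map back
            def nll(params):
                log_nu, log_scale = params
                return -stats.t.logpdf(x, df=np.exp(log_nu), loc=0.0,
                                       scale=np.exp(log_scale)).sum()
            res = optimize.minimize(nll, x0=[np.log(5.0), np.log(np.std(x))],
                                    method="Nelder-Mead")
            nu = np.exp(res.x[0])
            return (nu + 3.0) / (nu + 1.0), np.exp(res.x[1])   # (q, scale)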

  9. Searching for degenerate Higgs bosons using a profile likelihood ratio method

    CERN Document Server

    Heikkilä, Jaana

    ATLAS and CMS collaborations at the Large Hadron Collider have observed a new resonance consistent with the standard model Higgs boson. However, it has been suggested that the observed signal could also be produced by multiple nearly mass-degenerate states that couple differently to the standard model particles. In this work, a method to discriminate between the hypothesis of a single Higgs boson and that of multiple mass-degenerate Higgs bosons was developed. Using the matrix of measured signal strengths in different production and decay modes, parametrizations for the two hypotheses were constructed as a general rank 1 matrix and the most general $5 \times 4$ matrix, respectively. The test statistic was defined as a ratio of profile likelihoods for the two hypotheses. The method was applied to the CMS measurements. The expected test statistic distribution was estimated twice by generating pseudo-experiments according to both the standard model hypothesis and the single Higgs boson hypothesis best fitting...
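
    A toy version of such a test, not the CMS analysis: if each entry of the measured signal-strength matrix carries a unit Gaussian uncertainty, the profiled -2 ln(lambda) comparing the rank-1 hypothesis against a free matrix reduces, by the Eckart-Young theorem, to the sum of squared trailing singular values, and its distribution can be estimated with pseudo-experiments:

        import numpy as np

        def q_rank1(mu_hat):
            # -2 ln(lambda) for H0: rank 1 vs H1: free, with unit errors
            s = np.linalg.svd(mu_hat, compute_uv=False)
            return float((s[1:] ** 2).sum())

        def pseudo_experiments(mu_hat, n=5000, rng=np.random.default_rng(1)):
            u, s, vt = np.linalg.svd(mu_hat)
            best_h0 = s[0] * np.outer(u[:, 0], vt[0])   # best-fit rank-1 matrix
            shape = mu_hat.shape
            return np.array([q_rank1(best_h0 + rng.standard_normal(shape))
                             for _ in range(n)])

        rng = np.random.default_rng(2)
        mu_hat = np.ones((5, 4)) + 0.1 * rng.standard_normal((5, 4))
        q_obs = q_rank1(mu_hat)
        print("p-value ~", (pseudo_experiments(mu_hat) >= q_obs).mean())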

  10. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    Science.gov (United States)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
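
    A compact illustration of one such successive-approximations scheme, written in the familiar EM form (corresponding to step size 1), for a normal mixture observed as one unidentified sample plus samples of known component membership; the code is our sketch, not the paper's procedure:

        import numpy as np
        from scipy.stats import norm

        def em_mixture(x_unlabeled, x_by_component, n_iter=200):
            # x_unlabeled: the N_0 sample of unknown origin;
            # x_by_component: samples of known origin (sizes N_1, ..., N_m)
            x0 = np.asarray(x_unlabeled, dtype=float)
            labeled = [np.asarray(xs, dtype=float) for xs in x_by_component]
            means = np.array([xs.mean() for xs in labeled])
            sds = np.array([xs.std() + 1e-6 for xs in labeled])
            n_lab = np.array([len(xs) for xs in labeled], dtype=float)
            weights = n_lab / n_lab.sum()
            for _ in range(n_iter):
                # E-step: responsibilities for the unidentified sample only
                dens = weights * norm.pdf(x0[:, None], means, sds)
                resp = dens / dens.sum(axis=1, keepdims=True)
                # M-step: identified observations enter with responsibility 1
                for j, xs in enumerate(labeled):
                    w = resp[:, j]
                    tot = w.sum() + len(xs)
                    means[j] = (w @ x0 + xs.sum()) / tot
                    sds[j] = np.sqrt((w @ (x0 - means[j]) ** 2
                                      + ((xs - means[j]) ** 2).sum()) / tot)
                weights = (resp.sum(axis=0) + n_lab) / (len(x0) + n_lab.sum())
            return weights, means, sds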

  11. Efficient maximum likelihood parameterization of continuous-time Markov processes

    CERN Document Server

    McGibbon, Robert T

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is drastically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations.
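
    A sketch of such an estimator under simplifying assumptions (a generic positive off-diagonal parameterisation, no detailed-balance constraint, observations reduced to transition counts C at a single lag tau); function names are hypothetical:

        import numpy as np
        from scipy.linalg import expm
        from scipy.optimize import minimize

        def fit_rate_matrix(C, tau):
            # C[i, j]: number of observed i -> j transitions at lag tau
            n = C.shape[0]
            off = ~np.eye(n, dtype=bool)

            def unpack(theta):
                K = np.zeros((n, n))
                K[off] = np.exp(theta)              # positive off-diagonal rates
                K[np.diag_indices(n)] = -K.sum(axis=1)
                return K

            def nll(theta):
                T = expm(tau * unpack(theta))       # transition probabilities
                return -(C * np.log(np.clip(T, 1e-300, None))).sum()

            res = minimize(nll, np.zeros(n * (n - 1)), method="L-BFGS-B")
            return unpack(res.x)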

  12. Bayesian and maximum likelihood estimation of genetic maps

    DEFF Research Database (Denmark)

    York, Thomas L.; Durrett, Richard T.; Tanksley, Steven;

    2005-01-01

    There has recently been increased interest in the use of Markov Chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods... that makes the Bayesian method applicable to large data sets. We present an extensive simulation study examining the statistical properties of the method and comparing it with the likelihood method implemented in Mapmaker. We show that the Maximum A Posteriori (MAP) estimator of the genetic distances...

  13. Maximum Likelihood Localization of Radiation Sources with unknown Source Intensity

    CERN Document Server

    Baidoo-Williams, Henry E

    2016-01-01

    In this paper, we consider a novel and robust maximum likelihood approach to localizing radiation sources with unknown statistics of the source signal strength. The result utilizes the smallest number of sensors required theoretically to localize the source. It is shown that, should the source lie in the open convex hull of the sensors, precisely $N+1$ sensors are required in $\mathbb{R}^N, ~N \in \{1,\cdots,3\}$. It is further shown that the region of interest, the open convex hull of the sensors, is entirely devoid of false stationary points. An augmented gradient ascent algorithm, with random projections applied should an estimate escape the convex hull, is presented.
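
    A toy sketch of the estimation problem, not the paper's algorithm: with Poisson counts following a 1/r^2 intensity law, the unknown source strength can be profiled out in closed form and the position maximised numerically, starting from the sensor centroid so the iterate begins inside the convex hull:

        import numpy as np
        from scipy.optimize import minimize

        def profile_nll(pos, sensors, counts):
            # intensity at sensor i: A / ||pos - s_i||^2, with A profiled out
            g = 1.0 / np.maximum(((sensors - pos) ** 2).sum(axis=1), 1e-9)
            A_hat = counts.sum() / g.sum()          # closed-form MLE of A
            lam = A_hat * g
            return -(counts * np.log(lam) - lam).sum()

        def locate(sensors, counts):
            x0 = sensors.mean(axis=0)               # start inside the hull
            res = minimize(profile_nll, x0, args=(sensors, counts),
                           method="Nelder-Mead")
            return res.x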

  14. Maximum Likelihood Joint Tracking and Association in Strong Clutter

    Directory of Open Access Journals (Sweden)

    Leonid I. Perlovsky

    2013-01-01

    Full Text Available We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed for the case of strong clutter in radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.

  15. Similar tests and the standardized log likelihood ratio statistic

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1986-01-01

    When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast to this there is a 'primitive' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that, when using standardized likelihood ratio statistics, the 'primitive' procedure is in fact an 'ideal' procedure to order O(n^{-3...

  16. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    Science.gov (United States)

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  17. AN EFFICIENT APPROXIMATE MAXIMUM LIKELIHOOD SIGNAL DETECTION FOR MIMO SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Cao Xuehong

    2007-01-01

    This paper proposes an efficient approximate Maximum Likelihood (ML) detection method for Multiple-Input Multiple-Output (MIMO) systems, which searches a local area instead of performing an exhaustive search, and selects valid search points in each transmit antenna signal constellation instead of the whole hyperplane. Both the selection and search complexity can be reduced significantly. The method trades off computational complexity against system performance by adjusting the neighborhood size used to select the valid search points. Simulation results show that the performance is comparable to that of ML detection while the complexity is only a small fraction of that of ML.
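
    The neighbourhood idea can be illustrated for QPSK as follows; this is our sketch, not the paper's exact algorithm: keep only the few constellation points closest to the zero-forcing estimate at each transmit antenna and search their Cartesian product rather than the full constellation:

        import itertools
        import numpy as np

        QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

        def local_ml_detect(y, H, n_neighbors=2):
            # zero-forcing estimate defines the local search area
            x_zf = np.linalg.lstsq(H, y, rcond=None)[0]
            cands = [QPSK[np.argsort(np.abs(QPSK - s))[:n_neighbors]]
                     for s in x_zf]
            best, best_metric = None, np.inf
            for x in itertools.product(*cands):     # reduced candidate set
                x = np.array(x)
                metric = np.linalg.norm(y - H @ x) ** 2
                if metric < best_metric:
                    best, best_metric = x, metric
            return best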

  18. Maximum likelihood characterization of rotationally symmetric distributions on the sphere

    OpenAIRE

    Duerinckx, Mitia; Ley, Christophe

    2012-01-01

    A classical characterization result, which can be traced back to Gauss, states that the maximum likelihood estimator (MLE) of the location parameter equals the sample mean for any possible univariate samples of any possible sizes n if and only if the samples are drawn from a Gaussian population. A similar result, in the two-dimensional case, is given in von Mises (1918) for the Fisher-von Mises-Langevin (FVML) distribution, the equivalent of the Gaussian law on the unit circle. Half a century...

  19. Maximum-likelihood analysis of the COBE angular correlation function

    Science.gov (United States)

    Seljak, Uros; Bertschinger, Edmund

    1993-01-01

    We have used maximum-likelihood estimation to determine the quadrupole amplitude Q(sub rms-PS) and the spectral index n of the density fluctuation power spectrum at recombination from the COBE DMR data. We find a strong correlation between the two parameters of the form Q(sub rms-PS) = (15.7 +/- 2.6) exp (0.46(1 - n)) microK for fixed n. Our result is slightly smaller, and has a smaller statistical uncertainty, than the 1992 estimate of Smoot et al.
