WorldWideScience

Sample records for likelihood based comparison

  1. Maximum Likelihood based comparison of the specific growth rates for P. aeruginosa and four mutator strains

    DEFF Research Database (Denmark)

    Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard

    2008-01-01

    The specific growth rate for P. aeruginosa and four mutator strains mutT, mutY, mutM and mutY–mutM is estimated by a suggested Maximum Likelihood, ML, method which takes the autocorrelation of the observation into account. For each bacteria strain, six wells of optical density, OD, measurements are used for parameter estimation. ... It is shown that the model that best describes data is a model taking into account the full covariance structure. An inference study is made in order to determine whether the growth rate of the five bacteria strains is the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded that the specific growth rate is the same for all bacteria strains. This study highlights the importance of carrying out an explorative examination of residuals in order to make a correct parametrization of a model including the covariance structure. The ML method is shown to be a strong tool as it enables ...

  2. Maximum Likelihood based comparison of the specific growth rates for P. aeruginosa and four mutator strains

    DEFF Research Database (Denmark)

    Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard

    2008-01-01

    ... For each bacteria strain, six wells of optical density, OD, measurements are used for parameter estimation. The data is log-transformed such that a linear model can be applied. The transformation changes the variance structure, and hence an OD-dependent variance is implemented in the model. The autocorrelation in the data is demonstrated, and a correlation model with an exponentially decaying function of the time between observations is suggested. A model with a full covariance structure containing OD-dependent variance and an autocorrelation structure is compared to a model with variance only and with no variance or correlation implemented. It is shown that the model that best describes data is a model taking into account the full covariance structure. An inference study is made in order to determine whether the growth rate of the five bacteria strains is the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded ...
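
    As an illustration of the kind of model comparison described in these two records (a sketch on simulated data, not the authors' code), the following Python snippet fits a linear growth model to log-OD-like data by maximum likelihood with an exponentially decaying error correlation, and applies a likelihood-ratio test against an independent-errors model. All numbers are hypothetical.

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 5.0, 30)                     # observation times (hypothetical)
        true_icpt, true_rate, sigma, rho = -2.0, 0.8, 0.05, 0.6
        corr = rho ** np.abs(np.subtract.outer(t, t))     # exponentially decaying correlation
        y = true_icpt + true_rate * t + rng.multivariate_normal(
            np.zeros(t.size), sigma**2 * corr)            # simulated log-OD measurements

        def negloglik(params):
            icpt, rate, log_sigma = params[:3]
            r = 1.0 / (1.0 + np.exp(-params[3])) if len(params) == 4 else 0.0
            cov = np.exp(2.0 * log_sigma) * r ** np.abs(np.subtract.outer(t, t))
            return -stats.multivariate_normal(icpt + rate * t, cov).logpdf(y)

        fit_full = optimize.minimize(negloglik, [0.0, 0.5, np.log(0.1), 0.0],
                                     method="Nelder-Mead")   # with autocorrelation
        fit_ind = optimize.minimize(negloglik, [0.0, 0.5, np.log(0.1)],
                                    method="Nelder-Mead")    # independent errors

        lr = 2.0 * (fit_ind.fun - fit_full.fun)           # one extra parameter in the full model
        print("estimated growth rate:", fit_full.x[1])
        print("LR statistic:", lr, "p =", stats.chi2.sf(lr, df=1))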

  3. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    Science.gov (United States)

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-02-09

    The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. The choice of estimator may affect the estimated Nakagami parameter and hence the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimations. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, first- and second-order approximations of MLE (MLE1 and MLE2, respectively), and Greenwood approximation (MLEgw) for comparisons. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimations with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization.
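
    As a rough illustration of the estimator families compared in this record (not the study's own code), the sketch below contrasts the moment-based Nakagami parameter with a generic numerical maximum-likelihood fit on synthetic envelope samples; the MLE1, MLE2 and Greenwood approximations themselves are not reproduced here.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        m_true, omega = 1.5, 2.0                          # hypothetical shape and power
        env = stats.nakagami.rvs(m_true, scale=np.sqrt(omega), size=2000,
                                 random_state=rng)        # synthetic envelope samples

        # Moment-based estimator: m = E[R^2]^2 / Var(R^2)
        p2 = np.mean(env**2)
        m_mbe = p2**2 / np.var(env**2)

        # Numerical maximum-likelihood fit (scipy's generic optimizer, loc fixed at 0)
        m_mle, _, scale_mle = stats.nakagami.fit(env, floc=0)

        print("moment-based m:", m_mbe)
        print("maximum-likelihood m:", m_mle, " omega:", scale_mle**2)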

  4. Evaluating score- and feature-based likelihood ratio models for multivariate continuous data: applied to forensic MDMA comparison

    NARCIS (Netherlands)

    A. Bolck; H. Ni; M. Lopatka

    2015-01-01

    Likelihood ratio (LR) models are moving into the forefront of forensic evidence evaluation as these methods are adopted by a diverse range of application areas in forensic science. We examine the fundamentally different results that can be achieved when feature- and score-based methodologies are employed ...

  5. Likelihood based testing for no fractional cointegration

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    We consider two likelihood ratio tests, so-called maximum eigenvalue and trace tests, for the null of no cointegration when fractional cointegration is allowed under the alternative, which is a first step to generalize the so-called Johansen's procedure to the fractional cointegration case. The s...

  6. Comparisons of likelihood and machine learning methods of individual classification

    Science.gov (United States)

    Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.

    2002-01-01

    Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin (“assignment tests”). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative to likelihood estimators for empirical data sets, suggesting an ability to “learn” and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin due to the intricacies in development and evaluation of artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of ...
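
    A minimal sketch of the two classifier families being compared, under simplifying assumptions that are not the study's (biallelic loci, Hardy-Weinberg proportions, two simulated populations): a likelihood-based assignment test versus a k-nearest-neighbour classifier on the same genotypes.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(2)
        n_loci, n_train, n_test = 10, 100, 50
        freqs = np.stack([rng.uniform(0.2, 0.8, n_loci),      # population 0 allele frequencies
                          rng.uniform(0.2, 0.8, n_loci)])     # population 1 allele frequencies

        def sample(pop, n):                                   # genotypes coded 0/1/2
            return rng.binomial(2, freqs[pop], size=(n, n_loci))

        X_train = np.vstack([sample(0, n_train), sample(1, n_train)])
        y_train = np.repeat([0, 1], n_train)
        X_test = np.vstack([sample(0, n_test), sample(1, n_test)])
        y_test = np.repeat([0, 1], n_test)

        # Assignment test: estimate allele frequencies from the training set, then
        # assign each individual to the population maximising its genotype
        # log-likelihood under Hardy-Weinberg proportions.
        p_hat = np.stack([X_train[y_train == k].mean(axis=0) / 2 for k in (0, 1)])

        def loglik(geno, p):
            p = np.clip(p, 1e-3, 1 - 1e-3)
            probs = np.stack([(1 - p)**2, 2 * p * (1 - p), p**2])   # P(g = 0, 1, 2) per locus
            return np.log(probs[geno, np.arange(geno.size)]).sum()

        assign = np.array([[loglik(g, p_hat[k]) for k in (0, 1)]
                           for g in X_test]).argmax(axis=1)
        knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

        print("likelihood assignment accuracy:", (assign == y_test).mean())
        print("k-nearest-neighbour accuracy:", knn.score(X_test, y_test))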

  7. Empirical Likelihood-Based ANOVA for Trimmed Means

    Science.gov (United States)

    Velina, Mara; Valeinis, Janis; Greco, Luca; Luta, George

    2016-01-01

    In this paper, we introduce an alternative to Yuen’s test for the comparison of several population trimmed means. This nonparametric ANOVA type test is based on the empirical likelihood (EL) approach and extends the results for one population trimmed mean from Qin and Tsao (2002). The results of our simulation study indicate that for skewed distributions, with and without variance heterogeneity, Yuen’s test performs better than the new EL ANOVA test for trimmed means with respect to control over the probability of a type I error. This finding is in contrast with our simulation results for the comparison of means, where the EL ANOVA test for means performs better than Welch’s heteroscedastic F test. The analysis of a real data example illustrates the use of Yuen’s test and the new EL ANOVA test for trimmed means for different trimming levels. Based on the results of our study, we recommend the use of Yuen’s test for situations involving the comparison of population trimmed means between groups of interest. PMID:27690063
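
    For reference, a sketch of Yuen's test in its two-sample form (the paper's setting is the several-sample ANOVA case), on synthetic skewed data; recent SciPy versions expose the same test through scipy.stats.ttest_ind with a trim argument.

        import numpy as np
        from scipy import stats

        def yuen(a, b, trim=0.2):
            def pieces(x):
                x = np.sort(np.asarray(x))
                n = x.size
                g = int(np.floor(trim * n))                 # observations trimmed per tail
                h = n - 2 * g                               # effective sample size
                tmean = x[g:n - g].mean()
                xw = np.clip(x, x[g], x[n - g - 1])         # winsorized sample
                d = (n - 1) * np.var(xw, ddof=1) / (h * (h - 1))
                return tmean, d, h
            t1, d1, h1 = pieces(a)
            t2, d2, h2 = pieces(b)
            t_stat = (t1 - t2) / np.sqrt(d1 + d2)
            df = (d1 + d2)**2 / (d1**2 / (h1 - 1) + d2**2 / (h2 - 1))
            return t_stat, 2 * stats.t.sf(abs(t_stat), df)

        rng = np.random.default_rng(3)
        a = rng.lognormal(0.0, 1.0, 60)                     # skewed samples (synthetic)
        b = rng.lognormal(0.3, 1.0, 60)
        print(yuen(a, b))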

  8. INTERACTING MULTIPLE MODEL ALGORITHM BASED ON JOINT LIKELIHOOD ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Sun Jie; Jiang Chaoshu; Chen Zhuming; Zhang Wei

    2011-01-01

    A novel approach is proposed for the estimation of likelihood on the Interacting Multiple-Model (IMM) filter. In this approach, the actual innovation, based on a mismatched model, can be formulated as the sum of the theoretical innovation based on a matched model and the distance between the matched and mismatched models, whose probability distributions are known. The joint likelihood of the innovation sequence can be estimated by convolution of the two known probability density functions. The likelihood of the tracking models can be calculated by the conditional probability formula. Compared with the conventional likelihood estimation method, the proposed method improves the estimation accuracy of the likelihood and the robustness of IMM, especially when a maneuver occurs.

  9. H.264 SVC Complexity Reduction Based on Likelihood Mode Decision

    Directory of Open Access Journals (Sweden)

    L. Balaji

    2015-01-01

    Full Text Available H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on various electronic devices such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, on which users demand different scalings of the same content. The scaling dimensions include resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well when compared to previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% detriment in PSNR and a 0.17% increment in bit rate compared with the full search method.

  10. Modified maximum likelihood registration based on information fusion

    Institute of Scientific and Technical Information of China (English)

    Yongqing Qi; Zhongliang Jing; Shiqiang Hu

    2007-01-01

    The bias estimation of passive sensors is considered based on information fusion in multi-platform multisensor tracking system. The unobservable problem of bearing-only tracking in blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of unobservable problem in the blind spot and can estimate the biases more rapidly and accurately than maximum likelihood method. It is statistically efficient since the standard deviation of bias estimation errors meets the theoretical lower bounds.

  11. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Schweder, Tore

    2006-01-01

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference is implemented using Markov chain Monte Carlo (MCMC) methods to obtain efficient estimates of spatial clustering parameters. Uncertainty is addressed using parametric bootstrap or by consideration of posterior distributions in a Bayesian setting. Maximum likelihood estimation and Bayesian inference are compared ...

  12. GPU Accelerated Likelihoods for Stereo-Based Articulated Tracking

    DEFF Research Database (Denmark)

    Friborg, Rune Møllegaard; Hauberg, Søren; Erleben, Kenny

    For many years articulated tracking has been an active research topic in the computer vision community. While working solutions have been suggested, computational time is still problematic. We present a GPU implementation of a ray-casting based likelihood model that is orders of magnitude faster...

  13. GPU accelerated likelihoods for stereo-based articulated tracking

    DEFF Research Database (Denmark)

    Friborg, Rune Møllegaard; Hauberg, Søren; Erleben, Kenny

    2010-01-01

    For many years articulated tracking has been an active research topic in the computer vision community. While working solutions have been suggested, computational time is still problematic. We present a GPU implementation of a ray-casting based likelihood model that is orders of magnitude faster...

  14. Trimmed Likelihood-based Estimation in Binary Regression Models

    NARCIS (Netherlands)

    Cizek, P.

    2005-01-01

    The binary-choice regression models such as probit and logit are typically estimated by the maximum likelihood method. To improve its robustness, various M-estimation based procedures were proposed, which however require bias corrections to achieve consistency and their resistance to outliers is relatively low ...

  15. Likelihood free inference for Markov processes: a comparison.

    Science.gov (United States)

    Owen, Jamie; Wilkinson, Darren J; Gillespie, Colin S

    2015-04-01

    Approaches to Bayesian inference for problems with intractable likelihoods have become increasingly important in recent years. Approximate Bayesian computation (ABC) and "likelihood free" Markov chain Monte Carlo techniques are popular methods for tackling inference in these scenarios but such techniques are computationally expensive. In this paper we compare the two approaches to inference, with a particular focus on parameter inference for stochastic kinetic models, widely used in systems biology. Discrete time transition kernels for models of this type are intractable for all but the most trivial systems yet forward simulation is usually straightforward. We discuss the relative merits and drawbacks of each approach whilst considering the computational cost implications and efficiency of these techniques. In order to explore the properties of each approach we examine a range of observation regimes using two example models. We use a Lotka-Volterra predator-prey model to explore the impact of full or partial species observations using various time course observations under the assumption of known and unknown measurement error. Further investigation into the impact of observation error is then made using a Schlögl system, a test case which exhibits bi-modal state stability in some regions of parameter space.
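
    A toy ABC rejection sketch showing the mechanics discussed above; it deliberately uses a trivial forward simulator rather than the paper's Lotka-Volterra or Schlögl systems, and all settings (prior, summaries, tolerance) are illustrative.

        import numpy as np

        rng = np.random.default_rng(4)

        def simulate(theta, n=50):
            # stand-in forward simulator: event counts from a simple stochastic process
            return rng.poisson(theta, size=n)

        obs = simulate(3.0)                                   # pretend these are the data
        summary = lambda x: np.array([x.mean(), x.var()])
        s_obs = summary(obs)

        accepted = []
        for _ in range(20000):
            theta = rng.uniform(0.0, 10.0)                    # draw from the prior
            s_sim = summary(simulate(theta))
            if np.linalg.norm(s_sim - s_obs) < 0.5:           # tolerance epsilon
                accepted.append(theta)

        accepted = np.array(accepted)
        print("approximate posterior mean:", accepted.mean(),
              "from", accepted.size, "accepted draws")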

  16. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge; Schweder, Tore

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference ... in an example concerning minke whales in the North Atlantic. Our modelling and computational approach is flexible but demanding in terms of computing time.

  17. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties ... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study ...

  18. CUSUM control charts based on likelihood ratio for preliminary analysis

    Institute of Scientific and Technical Information of China (English)

    Yi DAI; Zhao-jun WANG; Chang-liang ZOU

    2007-01-01

    To detect and estimate a shift in either the mean or the deviation, or both, in preliminary analysis, the control chart based on the likelihood ratio test (LRT) is the most popular statistical process control (SPC) tool. Sullivan and Woodall pointed out that the test statistic lrt(n1, n2) is approximately distributed as χ2(2) when the sample sizes n, n1 and n2 are very large, where n1 = 2, 3, ..., n - 2 and n2 = n - n1. So it is inevitable that n1 or n2 is not large. In this paper the limit distribution of lrt(n1, n2) for fixed n1 or n2 is derived, and exact analytic formulae for evaluating the expectation and the variance of the limit distribution are also obtained. In addition, the properties of the standardized likelihood ratio statistic slr(n1, n) are discussed. Although slr(n1, n) contains the most important information, slr(i, n) (i ≠ n1) also contains useful information. The cumulative sum (CUSUM) control chart can exploit this additional information. So we propose two CUSUM control charts based on the likelihood ratio statistics for the preliminary analysis of individual observations. One focuses on detecting shifts in location in the historical data, and the other is more general, detecting a shift in either the location or the scale, or both. Moreover, the simulation results show that the two proposed control charts are, respectively, superior to their competitors not only in the detection of sustained shifts but also in the detection of some other out-of-control situations considered in this paper.
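
    The building block of these charts can be written down directly: for a fixed split of n individual observations into segments of sizes n1 and n2, the likelihood-ratio statistic for a shift in mean and/or variance of a normal sample is approximately χ2(2). The sketch below (synthetic data, illustrative only) computes lrt(n1, n2) over all splits; the χ2(2) approximation applies to a single fixed split, and the maximum over n1 needs adjusted control limits, which is exactly the issue the article addresses.

        import numpy as np
        from scipy import stats

        def lrt(x, n1):
            x = np.asarray(x)
            n = x.size
            x1, x2 = x[:n1], x[n1:]
            v0, v1, v2 = np.var(x), np.var(x1), np.var(x2)    # MLE variances (ddof = 0)
            return n * np.log(v0) - n1 * np.log(v1) - (n - n1) * np.log(v2)

        rng = np.random.default_rng(5)
        x = np.concatenate([rng.normal(0.0, 1.0, 25),
                            rng.normal(1.0, 1.5, 25)])        # shift in both mean and scale
        values = [lrt(x, n1) for n1 in range(2, x.size - 1)]
        n1_hat = int(np.argmax(values)) + 2
        print("max lrt(n1, n2) =", max(values), "at n1 =", n1_hat,
              "fixed-split chi2(2) tail prob =", stats.chi2.sf(max(values), df=2))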

  19. CUSUM control charts based on likelihood ratio for preliminary analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To detect and estimate a shift in either the mean or the deviation, or both, in preliminary analysis, the control chart based on the likelihood ratio test (LRT) is the most popular statistical process control (SPC) tool. Sullivan and Woodall pointed out that the test statistic lrt(n1, n2) is approximately distributed as χ2(2) when the sample sizes n, n1 and n2 are very large, where n1 = 2, 3, ..., n - 2 and n2 = n - n1. So it is inevitable that n1 or n2 is not large. In this paper the limit distribution of lrt(n1, n2) for fixed n1 or n2 is derived, and exact analytic formulae for evaluating the expectation and the variance of the limit distribution are also obtained. In addition, the properties of the standardized likelihood ratio statistic slr(n1, n) are discussed. Although slr(n1, n) contains the most important information, slr(i, n) (i ≠ n1) also contains useful information. The cumulative sum (CUSUM) control chart can exploit this additional information. So we propose two CUSUM control charts based on the likelihood ratio statistics for the preliminary analysis of individual observations. One focuses on detecting shifts in location in the historical data, and the other is more general, detecting a shift in either the location or the scale, or both. Moreover, the simulation results show that the two proposed control charts are, respectively, superior to their competitors not only in the detection of sustained shifts but also in the detection of some other out-of-control situations considered in this paper.

  20. Nonparametric likelihood based estimation of linear filters for point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard

    2015-01-01

    ... result is a representation of the gradient of the log-likelihood, which we use to derive computable approximations of the log-likelihood and the gradient by time discretization. These approximations are then used to minimize the approximate penalized log-likelihood. For time and memory efficiency ...

  1. Likelihood-based CT reconstruction of objects containing known components

    Energy Technology Data Exchange (ETDEWEB)

    Stayman, J. Webster [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Biomedical Engineering; Otake, Yoshito; Uneri, Ali; Prince, Jerry L.; Siewerdsen, Jeffrey H.

    2011-07-01

    There are many situations in medical imaging where there are known components within the imaging volume. Such is the case in diagnostic X-ray CT imaging of patients with implants, in intraoperative CT imaging where there may be surgical tools in the field, or in situations where the patient support (table or frame) or other devices are outside the (truncated) reconstruction FOV. In such scenarios it is often of great interest to image the relation between the known component and the surrounding anatomy, or to provide high-quality images at the boundary of these objects, or simply to minimize artifacts arising from such components. We propose a framework for simultaneously estimating the position and orientation of a known component and the surrounding volume. Toward this end, we adopt a likelihood-based objective function with an image volume jointly parameterized by a known object, or objects, with unknown registration parameters and an unknown background attenuation volume. The objective is solved iteratively using an alternating minimization approach between the two parameter types. Because this model integrates a substantial amount of prior knowledge about the overall volume, we expect a number of advantages including the reduction of metal artifacts, potential for more sparse data acquisition (decreased time and dose), and/or improved image quality. We illustrate this approach using simulated spine CT data that contains pedicle screws placed in a vertebra, and demonstrate improved performance over traditional filtered-backprojection and penalized-likelihood reconstruction techniques. (orig.)

  2. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties of the process in terms of stochastic and deterministic trends as well as stationary components. In particular, the behaviour of the cointegrating relations is described in terms of geometric ergodicity. Despite the fact that no deterministic terms are included, the process will have both stochastic trends and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study ...

  3. Robust Likelihood-Based Survival Modeling with Microarray Data

    Directory of Open Access Journals (Sweden)

    HyungJun Cho

    2008-09-01

    Full Text Available Gene expression data can be associated with various clinical outcomes. In particular, these data can be of importance in discovering survival-associated genes for medical applications. As alternatives to traditional statistical methods, sophisticated methods and software programs have been developed to overcome the high-dimensional difficulty of microarray data. Nevertheless, new algorithms and software programs are needed to include practical functions such as the discovery of multiple sets of survival-associated genes and the incorporation of risk factors, and to be usable in the R environment, with which many statisticians are familiar. For survival modeling with microarray data, we have developed a software program (called rbsurv) which can be used conveniently and interactively in the R environment. This program selects survival-associated genes based on the partial likelihood of the Cox model and separates training and validation sets of samples for robustness. It can discover multiple sets of genes by iterative forward selection rather than one large set of genes. It can also allow adjustment for risk factors in microarray survival modeling. This software package, the rbsurv package, can be used to discover survival-associated genes with microarray data conveniently.

  4. Asperity-based earthquake likelihood models for Italy

    Directory of Open Access Journals (Sweden)

    Danijel Schorlemmer

    2010-11-01

    Full Text Available The Asperity Likelihood Model (ALM) hypothesizes that small-scale spatial variations in the b-value of the Gutenberg-Richter relationship have a central role in forecasting future seismicity. The physical basis of the ALM is the concept that the local b-value is inversely dependent on the applied shear stress. Thus low b-values (b < 0.7) characterize locked patches of faults, or asperities, from which future mainshocks are more likely to be generated, whereas high b-values (b > 1.1), which can be found, for example, in creeping sections of faults, suggest a lower probability of large events. To turn this hypothesis into a forecast model for Italy, we first determined the regional b-value (b = 0.93 ± 0.01) and compared it with the locally determined b-values at each node of the forecast grid, based on sampling radii ranging from 6 km to 20 km. We used the local b-values if their Akaike Information Criterion scores were lower than those of the regional b-values. We then explored two modifications to this model: in the ALM.IT, we declustered the input catalog for M ≥ 2 and smoothed the node-wise rates of the declustered catalog with a Gaussian filter. Completeness values for each node were determined using the probability-based magnitude of completeness method. In the second model, the hybrid ALM (HALM), as a «hybrid» between a grid-based and a zoning model, the Italian territory was divided into eight distinct regions that depended on the main tectonic regimes, and the local b-value variability was thus mapped using the regional b-values for each tectonic zone.

  5. A likelihood and resampling based approach to dichotomizing a continuous biomarker in medical research.

    Science.gov (United States)

    Su, Min; Fang, Liang; Su, Zheng

    2013-05-01

    Dichotomizing a continuous biomarker is a common practice in medical research. Various methods exist in the literature for dichotomizing continuous biomarkers. The most widely adopted minimum p-value approach uses a sequence of test statistics for all possible dichotomizations of a continuous biomarker, and it chooses the cutpoint that is associated with the maximum test statistic, or equivalently, the minimum p-value of the test. We herein propose a likelihood and resampling-based approach to dichotomizing a continuous biomarker. In this approach, the cutpoint is considered as an unknown variable in addition to the unknown outcome variables, and the likelihood function is maximized with respect to the cutpoint variable as well as the outcome variables to obtain the optimal cutpoint for the continuous biomarker. The significance level of the test for whether a cutpoint exists is assessed via a permutation test using the maximum likelihood values calculated based on the original as well as the permutated data sets. Numerical comparisons of the proposed approach and the minimum p-value approach showed that the proposed approach was not only more powerful in detecting the cutpoint but also provided markedly more accurate estimates of the cutpoint than the minimum p-value approach in all the simulation scenarios considered.
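
    For context, a sketch of the minimum p-value comparator described above (not the authors' likelihood and resampling method): scan candidate cutpoints, keep the smallest two-sample t-test p-value, and adjust it with a permutation test. The data, cutpoint grid and outcome model are all hypothetical.

        import numpy as np
        from scipy import stats

        def min_p(biomarker, outcome):
            cuts = np.quantile(biomarker, np.linspace(0.1, 0.9, 17))   # candidate cutpoints
            pvals = [stats.ttest_ind(outcome[biomarker <= c],
                                     outcome[biomarker > c]).pvalue for c in cuts]
            i = int(np.argmin(pvals))
            return cuts[i], pvals[i]

        rng = np.random.default_rng(6)
        biomarker = rng.normal(size=200)
        outcome = 1.0 * (biomarker > 0.3) + rng.normal(scale=2.0, size=200)  # true cutpoint 0.3

        cut, p_min = min_p(biomarker, outcome)
        perm = [min_p(biomarker, rng.permutation(outcome))[1] for _ in range(500)]
        p_adj = np.mean(np.array(perm) <= p_min)            # permutation-adjusted p-value
        print("cutpoint:", cut, "min p:", p_min, "adjusted p:", p_adj)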

  6. Likelihood Inference of Nonlinear Models Based on a Class of Flexible Skewed Distributions

    Directory of Open Access Journals (Sweden)

    Xuedong Chen

    2014-01-01

    Full Text Available This paper deals with likelihood inference for nonlinear models with a flexible skew-t-normal (FSTN) distribution, which is proposed within a general framework of flexible skew-symmetric (FSS) distributions by combining it with the skew-t-normal (STN) distribution. In comparison with the common skewed distributions such as skew normal (SN) and skew-t (ST), as well as scale mixtures of skew normal (SMSN), the FSTN distribution can accommodate more flexibility and robustness in the presence of skewed, heavy-tailed, especially multimodal outcomes. However, for this distribution, the usual approach of maximum likelihood estimation based on the EM algorithm becomes unavailable and an alternative is to return to the original Newton-Raphson type method. In order to improve the estimation as well as the confidence estimation and hypothesis tests for the parameters of interest, a modified Newton-Raphson iterative algorithm is presented in this paper, based on the profile likelihood for nonlinear regression models with the FSTN distribution, and then the confidence interval and hypothesis test are also developed. Furthermore, a real example and simulation are conducted to demonstrate the usefulness and the superiority of our approach.

  7. Empirical likelihood-based evaluations of Value at Risk models

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Value at Risk (VaR) is a basic and very useful tool in measuring market risks. Numerous VaR models have been proposed in the literature. Therefore, it is of great interest to evaluate the efficiency of these models, and to select the most appropriate one. In this paper, we propose to use the empirical likelihood approach to evaluate these models. Simulation results and real life examples show that the empirical likelihood method is more powerful and more robust than some of the asymptotic methods available in the literature.

  8. Empirical likelihood-based inference in a partially linear model for longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A partially linear model with longitudinal data is considered, empirical likelihood inference for the regression coefficients and the baseline function is investigated, the empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. Also, by means of the empirical likelihood ratio functions, we can obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function, and prove their asymptotic normality. Numerical results are presented to compare the performance of the empirical likelihood and the normal approximation-based method, and a real example is analysed.

  9. Empirical likelihood-based inference in a partially linear model for longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A partially linear model with longitudinal data is considered, empirical likelihood inference for the regression coefficients and the baseline function is investigated, the empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. Also, by means of the empirical likelihood ratio functions, we can obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function, and prove their asymptotic normality. Numerical results are presented to compare the performance of the empirical likelihood and the normal approximation-based method, and a real example is analysed.

  10. An Efficient Frequency Recognition Method Based on Likelihood Ratio Test for SSVEP-Based BCI

    Directory of Open Access Journals (Sweden)

    Yangsong Zhang

    2014-01-01

    Full Text Available An efficient frequency recognition method is very important for SSVEP-based BCI systems to improve the information transfer rate (ITR). To address this aspect, for the first time, the likelihood ratio test (LRT) was utilized to propose a novel multichannel frequency recognition method for SSVEP data. The essence of this new method is to calculate, with the LRT, the association between multichannel EEG signals and the reference signals which were constructed according to the stimulus frequency. For the simulation and real SSVEP data, the proposed method yielded higher recognition accuracy with shorter time window length and was more robust against noise in comparison with the popular canonical correlation analysis (CCA)-based method and the least absolute shrinkage and selection operator (LASSO)-based method. The recognition accuracy and information transfer rate (ITR) obtained by the proposed method were higher than those of the CCA-based method and LASSO-based method. The superior results indicate that the LRT method is a promising candidate for reliable frequency recognition in future SSVEP-BCI.
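
    The CCA baseline that the proposed LRT method is compared against can be sketched as follows (synthetic single-trial data, hypothetical sampling rate and frequencies; this is not the paper's LRT recognizer): the recognized frequency is the one whose sine/cosine reference set attains the largest first canonical correlation with the multichannel EEG.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        fs, dur, n_ch = 250, 2.0, 8                        # sampling rate, window, channels
        t = np.arange(int(fs * dur)) / fs
        cands = [8.0, 10.0, 12.0, 15.0]                    # candidate stimulus frequencies

        rng = np.random.default_rng(7)
        true_f = 12.0
        eeg = (0.5 * np.sin(2 * np.pi * true_f * t)[:, None] @ rng.uniform(0.5, 1.0, (1, n_ch))
               + rng.normal(scale=1.0, size=(t.size, n_ch)))   # synthetic SSVEP plus noise

        def reference(f, harmonics=2):
            cols = []
            for h in range(1, harmonics + 1):
                cols += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
            return np.column_stack(cols)

        def first_canonical_corr(X, Y):
            u, v = CCA(n_components=1).fit_transform(X, Y)
            return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

        scores = [first_canonical_corr(eeg, reference(f)) for f in cands]
        print("recognized frequency:", cands[int(np.argmax(scores))], "scores:", scores)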

  11. Measures of family resemblance for binary traits: likelihood based inference.

    Science.gov (United States)

    Shoukri, Mohamed M; ElDali, Abdelmoneim; Donner, Allan

    2012-07-24

    Detection and estimation of measures of familial aggregation is considered the first step to establish whether a certain disease has a genetic component. Such measures are usually estimated from observational studies on siblings, parent-offspring, extended pedigrees or twins. When the trait of interest is quantitative (e.g. blood pressure, body mass index, blood glucose levels, etc.), efficient likelihood estimation of such measures is feasible under the assumption of multivariate normality of the distributions of the traits. In this case the intra-class and inter-class correlations are used to assess the similarities among family members. When the trait is measured on the binary scale, we establish full likelihood inference on such measures among siblings, parents, and parent-offspring. We illustrate the methodology on nuclear family data where the trait is the presence or absence of hypertension.

  12. Improved Likelihood Function in Particle-based IR Eye Tracking

    DEFF Research Database (Denmark)

    Satria, R.; Sorensen, J.; Hammoud, R.

    2005-01-01

    In this paper we propose a log likelihood-ratio function of foreground and background models used in a particle filter to track the eye region in dark-bright pupil image sequences. This model fuses information from both dark and bright pupil images and their difference image into one model. Our ... performance in challenging sequences with test subjects showing large head movements and under significant light conditions.

  13. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Full Text Available Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining as tablets, mobile applications and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions of which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained by using all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide some data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. Also, a real-life example is carried out to illustrate the new methodology.

  14. Discriminative likelihood score weighting based on acoustic-phonetic classification for speaker identification

    Science.gov (United States)

    Suh, Youngjoo; Kim, Hoirin

    2014-12-01

    In this paper, a new discriminative likelihood score weighting technique is proposed for speaker identification. The proposed method employs a discriminative weighting of frame-level log-likelihood scores with acoustic-phonetic classification in the Gaussian mixture model (GMM)-based speaker identification. Experiments performed on the Aurora noise-corrupted TIMIT database showed that the proposed approach provides meaningful performance improvement with an overall relative error reduction of 15.8% over the maximum likelihood-based baseline GMM approach.
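
    A minimal sketch of the GMM-based identification baseline referred to above (uniform frame weighting, synthetic feature frames standing in for MFCCs; the proposed discriminative weighting is not implemented here):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(8)
        n_dim, n_frames = 12, 400                      # MFCC-like feature frames (synthetic)
        means = [rng.normal(scale=2.0, size=n_dim) for _ in range(3)]   # three "speakers"
        train = [m + rng.normal(size=(n_frames, n_dim)) for m in means]

        models = [GaussianMixture(n_components=4, covariance_type="diag",
                                  random_state=0).fit(x) for x in train]

        test = means[1] + rng.normal(size=(200, n_dim))        # utterance from speaker 1
        scores = [gmm.score_samples(test).sum() for gmm in models]  # summed frame log-likelihoods
        print("identified speaker:", int(np.argmax(scores)), "log-likelihoods:", scores)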

  15. Likelihood-based association analysis for nuclear families and unrelated subjects with missing genotype data.

    Science.gov (United States)

    Dudbridge, Frank

    2008-01-01

    Missing data occur in genetic association studies for several reasons including missing family members and uncertain haplotype phase. Maximum likelihood is a commonly used approach to accommodate missing data, but it can be difficult to apply to family-based association studies, because of possible loss of robustness to confounding by population stratification. Here a novel likelihood for nuclear families is proposed, in which distinct sets of association parameters are used to model the parental genotypes and the offspring genotypes. This approach is robust to population structure when the data are complete, and has only minor loss of robustness when there are missing data. It also allows a novel conditioning step that gives valid analysis for multiple offspring in the presence of linkage. Unrelated subjects are included by regarding them as the children of two missing parents. Simulations and theory indicate similar operating characteristics to TRANSMIT, but with no bias with missing data in the presence of linkage. In comparison with FBAT and PCPH, the proposed model is slightly less robust to population structure but has greater power to detect strong effects. In comparison to APL and MITDT, the model is more robust to stratification and can accommodate sibships of any size. The methods are implemented for binary and continuous traits in software, UNPHASED, available from the author.

  16. Marginal likelihood estimate comparisons to obtain optimal species delimitations in Silene sect. Cryptoneurae (Caryophyllaceae).

    Directory of Open Access Journals (Sweden)

    Zeynep Aydin

    Full Text Available Coalescent-based inference of phylogenetic relationships among species takes into account gene tree incongruence due to incomplete lineage sorting, but for such methods to make sense species have to be correctly delimited. Because alternative assignments of individuals to species result in different parametric models, model selection methods can be applied to optimise the model of species classification. In a Bayesian framework, Bayes factors (BF), based on marginal likelihood estimates, can be used to test a range of possible classifications for the group under study. Here, we explore BF and the Akaike Information Criterion (AIC) to discriminate between different species classifications in the flowering plant lineage Silene sect. Cryptoneurae (Caryophyllaceae). We estimated marginal likelihoods for different species classification models via the Path Sampling (PS), Stepping Stone sampling (SS), and Harmonic Mean Estimator (HME) methods implemented in BEAST. To select among alternative species classification models, a posterior simulation-based analog of the AIC through Markov chain Monte Carlo analysis (AICM) was also performed. The results are compared to outcomes from the software BP&P. Our results agree with another recent study that marginal likelihood estimates from the PS and SS methods are useful for comparing different species classifications, and strongly support the recognition of the newly described species S. ertekinii.
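
    Of the marginal likelihood estimators mentioned, the harmonic mean estimator is the simplest to write down from posterior log-likelihood samples, and its well-known instability is one reason such studies favour path sampling and stepping-stone sampling. A small sketch with synthetic log-likelihood values:

        import numpy as np
        from scipy.special import logsumexp

        def log_marginal_hme(loglik_samples):
            loglik_samples = np.asarray(loglik_samples)
            n = loglik_samples.size
            # log( n / sum_i exp(-loglik_i) ), computed stably
            return np.log(n) - logsumexp(-loglik_samples)

        # toy example: log-likelihood values recorded along an MCMC run (synthetic here)
        rng = np.random.default_rng(9)
        loglik_samples = -50.0 + rng.normal(scale=2.0, size=5000)
        print("log marginal likelihood (HME):", log_marginal_hme(loglik_samples))
        # Bayes factors between classification models are differences of log marginal likelihoods.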

  17. Comparison of sinogram- and image-domain penalized-likelihood image reconstruction estimators.

    Science.gov (United States)

    Vargas, Phillip A; La Rivière, Patrick J

    2011-08-01

    In recent years, the authors and others have been exploring the use of penalized-likelihood sinogram-domain smoothing and restoration approaches for emission and transmission tomography. The motivation for this strategy was initially pragmatic: to provide a more computationally feasible alternative to fully iterative penalized-likelihood image reconstruction involving expensive backprojections and reprojections, while still obtaining some of the benefits of the statistical modeling employed in penalized-likelihood approaches. In this work, the authors seek to compare the two approaches in greater detail. The sinogram-domain strategy entails estimating the "ideal" line integrals needed for reconstruction of an activity or attenuation distribution from the set of noisy, potentially degraded tomographic measurements by maximizing a penalized-likelihood objective function. The objective function models the data statistics as well as any degradation that can be represented in the sinogram domain. The estimated line integrals can then be input to analytic reconstruction algorithms such as filtered backprojection (FBP). The authors compare this to fully iterative approaches maximizing similar objective functions. The authors present mathematical analyses based on so-called equivalent optimization problems that establish that the approaches can be made precisely equivalent under certain restrictive conditions. More significantly, by use of resolution-variance tradeoff studies, the authors show that they can yield very similar performance under more relaxed, realistic conditions. The sinogram- and image-domain approaches are equivalent under certain restrictive conditions and can perform very similarly under more relaxed conditions. The match is particularly good for fully sampled, high-resolution CT geometries. One limitation of the sinogram-domain approach relative to the image-domain approach is the difficulty of imposing additional constraints, such as image non-negativity.

  18. Likelihood-Based Cointegration Analysis in Panels of Vector Error Correction Models

    NARCIS (Netherlands)

    J.J.J. Groen (Jan); F.R. Kleibergen (Frank)

    1999-01-01

    We propose in this paper a likelihood-based framework for cointegration analysis in panels of a fixed number of vector error correction models. Maximum likelihood estimators of the cointegrating vectors are constructed using iterated Generalized Method of Moments estimators. Using these ...

  19. Maximum likelihood based classification of electron tomographic data.

    Science.gov (United States)

    Stölken, Michael; Beck, Florian; Haller, Thomas; Hegerl, Reiner; Gutsche, Irina; Carazo, Jose-Maria; Baumeister, Wolfgang; Scheres, Sjors H W; Nickell, Stephan

    2011-01-01

    Classification and averaging of sub-tomograms can improve the fidelity and resolution of structures obtained by electron tomography. Here we present a three-dimensional (3D) maximum likelihood algorithm--MLTOMO--which is characterized by integrating 3D alignment and classification into a single, unified processing step. The novelty of our approach lies in the way we calculate the probability of observing an individual sub-tomogram for a given reference structure. We assume that the reference structure is affected by a 'compound wedge', resulting from the summation of many individual missing wedges in distinct orientations. The distance metric underlying our probability calculations effectively down-weights Fourier components that are observed less frequently. Simulations demonstrate that MLTOMO clearly outperforms the 'constrained correlation' approach and has advantages over existing approaches in cases where the sub-tomograms adopt preferred orientations. Application of our approach to cryo-electron tomographic data of ice-embedded thermosomes revealed distinct conformations that are in good agreement with results obtained by previous single particle studies.

  20. Inherent Difficulties of Non-Bayesian Likelihood-based Inference, as Revealed by an Examination of a Recent Book by Aitkin

    OpenAIRE

    Gelman, Andrew; Robert, Christian P.; Rousseau, Judith

    2010-01-01

    For many decades, statisticians have made attempts to prepare the Bayesian omelette without breaking the Bayesian eggs; that is, to obtain probabilistic likelihood-based inferences without relying on informative prior distributions. A recent example is Murray Aitkin's book, Statistical Inference, which presents an approach to statistical hypothesis testing based on comparisons of posterior distributions of likelihoods under competing models. Aitkin develops and illustrates his method ...

  1. A family-based likelihood ratio test for general pedigree structures that allows for genotyping error and missing data.

    Science.gov (United States)

    Yang, Yang; Wise, Carol A; Gordon, Derek; Finch, Stephen J

    2008-01-01

    The purpose of this work is the development of a family-based association test that allows for random genotyping errors and missing data and makes use of information on affected and unaffected pedigree members. We derive the conditional likelihood functions of the general nuclear family for the following scenarios: complete parental genotype data and no genotyping errors; only one genotyped parent and no genotyping errors; no parental genotype data and no genotyping errors; and no parental genotype data with genotyping errors. We find maximum likelihood estimates of the marker locus parameters, including the penetrances and population genotype frequencies under the null hypothesis that all penetrance values are equal and under the alternative hypothesis. We then compute the likelihood ratio test. We perform simulations to assess the adequacy of the central chi-square distribution approximation when the null hypothesis is true. We also perform simulations to compare the power of the TDT and this likelihood-based method. Finally, we apply our method to 23 SNPs genotyped in nuclear families from a recently published study of idiopathic scoliosis (IS). Our simulations suggest that this likelihood ratio test statistic follows a central chi-square distribution with 1 degree of freedom under the null hypothesis, even in the presence of missing data and genotyping errors. The power comparison shows that this likelihood ratio test is more powerful than the original TDT for the simulations considered. For the IS data, the marker rs7843033 shows the most significant evidence for our method (p = 0.0003), which is consistent with a previous report, which found rs7843033 to be the 2nd most significant TDTae p value among a set of 23 SNPs.

  2. Likelihood based inference for partially observed renewal processes

    NARCIS (Netherlands)

    Lieshout, van M.N.M.

    2016-01-01

    This paper is concerned with inference for renewal processes on the real line that are observed in a broken interval. For such processes, the classic history-based approach cannot be used. Instead, we adapt tools from sequential spatial point process theory to propose a Monte Carlo maximum likelihood ...

  3. Likelihood-Based Association Analysis for Nuclear Families and Unrelated Subjects with Missing Genotype Data

    OpenAIRE

    Dudbridge, Frank

    2008-01-01

    Missing data occur in genetic association studies for several reasons including missing family members and uncertain haplotype phase. Maximum likelihood is a commonly used approach to accommodate missing data, but it can be difficult to apply to family-based association studies, because of possible loss of robustness to confounding by population stratification. Here a novel likelihood for nuclear families is proposed, in which distinct sets of association parameters are used to model the parental genotypes and the offspring genotypes ...

  4. Generalized Correlation Coefficient Based on Log Likelihood Ratio Test Statistic

    Directory of Open Access Journals (Sweden)

    Liu Hsiang-Chuan

    2016-01-01

    Full Text Available In this paper, I point out that both Joe's and Ding's strength statistics can only be used for testing pair-wise independence, and I propose a novel G-square based strength statistic, called Liu's generalized correlation coefficient, which can be used to detect and compare the strength of not only the pair-wise independence but also the mutual independence of any multivariate variables. Furthermore, I prove that only Liu's generalized correlation coefficient is strictly increasing in its number of variables, making it more sensitive and useful than Cramer's V coefficient. In other words, Liu's generalized correlation coefficient is not only a G-square based strength statistic, but also an improved statistic for detecting and comparing the strengths of different associations of any two or more sets of multivariate variables; moreover, this new strength statistic can also be tested by G2.
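
    The G-square (log likelihood ratio) statistic that the proposed coefficient builds on is available directly in SciPy through the power-divergence family; a small sketch with hypothetical counts:

        import numpy as np
        from scipy.stats import chi2_contingency

        table = np.array([[30, 10, 5],
                          [10, 25, 20]])                     # hypothetical 2x3 contingency counts
        g2, p, dof, expected = chi2_contingency(table, lambda_="log-likelihood")
        print("G-square:", g2, "df:", dof, "p-value:", p)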

  5. Comparison between artificial neural networks and maximum likelihood classification in digital soil mapping

    Directory of Open Access Journals (Sweden)

    César da Silva Chagas

    2013-04-01

    Full Text Available Soil surveys are the main source of spatial information on soils and have a range of different applications, mainly in agriculture. The continuity of this activity has however been severely compromised, mainly due to a lack of governmental funding. The purpose of this study was to evaluate the feasibility of two different classifiers (artificial neural networks and a maximum likelihood algorithm) in the prediction of soil classes in the northwest of the state of Rio de Janeiro. Terrain attributes such as elevation, slope, aspect, plan curvature and compound topographic index (CTI), and indices of clay minerals, iron oxide and the Normalized Difference Vegetation Index (NDVI), derived from Landsat 7 ETM+ sensor imagery, were used as discriminating variables. The two classifiers were trained and validated for each soil class using 300 and 150 samples respectively, representing the characteristics of these classes in terms of the discriminating variables. According to the statistical tests, the accuracy of the classifier based on artificial neural networks (ANNs) was greater than that of the classic Maximum Likelihood Classifier (MLC). Comparing the results with 126 points of reference showed that the resulting ANN map (73.81 %) was superior to the MLC map (57.94 %). The main errors when using the two classifiers were caused by: (a) the geological heterogeneity of the area coupled with problems related to the geological map; (b) the depth of lithic contact and/or rock exposure; and (c) problems with the environmental correlation model used due to the polygenetic nature of the soils. This study confirms that the use of terrain attributes together with remote sensing data by an ANN approach can be a tool to facilitate soil mapping in Brazil, primarily due to the availability of low-cost remote sensing data and the ease by which terrain attributes can be obtained.
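
    As a schematic analogue of the comparison in this record (synthetic features rather than the study's terrain and spectral covariates), a Gaussian maximum likelihood classifier, here quadratic discriminant analysis, is contrasted with a small neural network:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
        from sklearn.neural_network import MLPClassifier

        # synthetic stand-in for per-pixel covariates and soil-class labels
        X, y = make_classification(n_samples=1200, n_features=8, n_informative=6,
                                   n_classes=4, n_clusters_per_class=1, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        mlc = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)   # Gaussian ML classifier
        ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                            random_state=0).fit(X_tr, y_tr)

        print("maximum likelihood classifier accuracy:", mlc.score(X_te, y_te))
        print("neural network accuracy:", ann.score(X_te, y_te))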

  6. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    Science.gov (United States)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with an aim to improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) The Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.

  7. In all likelihood statistical modelling and inference using likelihood

    CERN Document Server

    Pawitan, Yudi

    2001-01-01

    Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates, to complex studies that require generalised linear or semiparametric models ...
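
    The book's opening kind of example, a likelihood comparison of two accident rates, can be worked through in a few lines; the counts and exposures below are hypothetical.

        import numpy as np
        from scipy import stats

        x1, t1 = 12, 100.0          # accidents and exposure (e.g. vehicle-years), group 1
        x2, t2 = 25, 120.0          # accidents and exposure, group 2

        lam1, lam2 = x1 / t1, x2 / t2
        lam0 = (x1 + x2) / (t1 + t2)                        # common rate under H0

        def loglik(x, t, lam):
            return x * np.log(lam * t) - lam * t            # Poisson log-likelihood (up to a constant)

        lr = 2 * (loglik(x1, t1, lam1) + loglik(x2, t2, lam2)
                  - loglik(x1, t1, lam0) - loglik(x2, t2, lam0))
        print("rates:", lam1, lam2, " LR statistic:", lr,
              " p =", stats.chi2.sf(lr, df=1))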

  8. Experimental demonstration of the maximum likelihood-based chromatic dispersion estimator for coherent receivers

    DEFF Research Database (Denmark)

    Borkowski, Robert; Johannisson, Pontus; Wymeersch, Henk;

    2014-01-01

    We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR) ...

  9. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  10. Likelihood-based scoring rules for comparing density forecasts in tails

    NARCIS (Netherlands)

    Diks, C.; Panchenko, V.; van Dijk, D.

    2011-01-01

    We propose new scoring rules based on conditional and censored likelihood for assessing the predictive accuracy of competing density forecasts over a specific region of interest, such as the left tail in financial risk management. These scoring rules can be interpreted in terms of Kullback-Leibler divergence ...
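
    Fixing a left-tail region of interest y < r gives conditional- and censored-likelihood-type scores simple closed forms; the sketch below evaluates both for two normal density forecasts. The threshold, forecast densities and data are illustrative assumptions rather than the paper's setup, and the definitions follow the usual conditional/censored constructions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
r = -1.5                      # assumed left-tail threshold defining the region of interest
y = rng.standard_normal(500)  # toy realizations
# two competing density forecasts for the same data
forecasts = {"N(0,1)": norm(0, 1), "N(0,1.5)": norm(0, 1.5)}

def censored_score(f, y, r):
    """Censored-likelihood-type score: full density in the tail, one lump outside."""
    in_tail = y < r
    return np.where(in_tail, f.logpdf(y), np.log1p(-f.cdf(r)))

def conditional_score(f, y, r):
    """Conditional-likelihood-type score: density renormalized to the tail region."""
    in_tail = y < r
    return np.where(in_tail, f.logpdf(y) - np.log(f.cdf(r)), 0.0)

for name, f in forecasts.items():
    print(name,
          "censored:", censored_score(f, y, r).mean().round(4),
          "conditional:", conditional_score(f, y, r).mean().round(4))
```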

  11. Empirical Likelihood Based Variable Selection for Varying Coefficient Partially Linear Models with Censored Data

    Institute of Scientific and Technical Information of China (English)

    Peixin ZHAO

    2013-01-01

    In this paper, we consider variable selection for the parametric components of varying coefficient partially linear models with censored data. By constructing a penalized auxiliary vector ingeniously, we propose an empirical likelihood based variable selection procedure and show that it is consistent and satisfies the sparsity property. Simulation studies show that the proposed variable selection method is workable.

  12. Philosophy and phylogenetic inference: a comparison of likelihood and parsimony methods in the context of Karl Popper's writings on corroboration.

    Science.gov (United States)

    de Queiroz, K; Poe, S

    2001-06-01

    Advocates of cladistic parsimony methods have invoked the philosophy of Karl Popper in an attempt to argue for the superiority of those methods over phylogenetic methods based on Ronald Fisher's statistical principle of likelihood. We argue that the concept of likelihood in general, and its application to problems of phylogenetic inference in particular, are highly compatible with Popper's philosophy. Examination of Popper's writings reveals that his concept of corroboration is, in fact, based on likelihood. Moreover, because probabilistic assumptions are necessary for calculating the probabilities that define Popper's corroboration, likelihood methods of phylogenetic inference--with their explicit probabilistic basis--are easily reconciled with his concept. In contrast, cladistic parsimony methods, at least as described by certain advocates of those methods, are less easily reconciled with Popper's concept of corroboration. If those methods are interpreted as lacking probabilistic assumptions, then they are incompatible with corroboration. Conversely, if parsimony methods are to be considered compatible with corroboration, then they must be interpreted as carrying implicit probabilistic assumptions. Thus, the non-probabilistic interpretation of cladistic parsimony favored by some advocates of those methods is contradicted by an attempt by the same authors to justify parsimony methods in terms of Popper's concept of corroboration. In addition to being compatible with Popperian corroboration, the likelihood approach to phylogenetic inference permits researchers to test the assumptions of their analytical methods (models) in a way that is consistent with Popper's ideas about the provisional nature of background knowledge.

  13. Integration based profile likelihood calculation for PDE constrained parameter estimation problems

    Science.gov (United States)

    Boiger, R.; Hasenauer, J.; Hroß, S.; Kaltenbacher, B.

    2016-12-01

    Partial differential equation (PDE) models are widely used in engineering and natural sciences to describe spatio-temporal processes. The parameters of the considered processes are often unknown and have to be estimated from experimental data. Due to partial observations and measurement noise, these parameter estimates are subject to uncertainty. This uncertainty can be assessed using profile likelihoods, a reliable but computationally intensive approach. In this paper, we present the integration based approach for the profile likelihood calculation developed by Chen and Jennrich (2002 J. Comput. Graph. Stat. 11 714-32) and adapt it to inverse problems with PDE constraints. While existing methods for profile likelihood calculation in parameter estimation problems with PDE constraints rely on repeated optimization, the proposed approach exploits a dynamical system evolving along the likelihood profile. We derive the dynamical system for the unreduced estimation problem, prove convergence and study the properties of the integration based approach for the PDE case. To evaluate the proposed method, we compare it with state-of-the-art algorithms for a simple reaction-diffusion model for a cellular patterning process. We observe a good accuracy of the method as well as a significant speed up as compared to established methods. Integration based profile calculation facilitates rigorous uncertainty analysis for computationally demanding parameter estimation problems with PDE constraints.
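
    For context, the profile likelihood that the integration-based method traces is conventionally computed by repeated optimization: fix the parameter of interest on a grid and re-optimize the remaining parameters at each grid point. The sketch below does exactly that for a toy two-parameter exponential-decay model; it illustrates the optimization-based baseline, not the ODE-based method proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# toy data from y = a * exp(-k * t) + noise
t = np.linspace(0, 5, 40)
a_true, k_true, sigma = 2.0, 0.7, 0.1
y = a_true * np.exp(-k_true * t) + rng.normal(0, sigma, t.size)

def neg2loglik(a, k):
    """-2 log-likelihood for Gaussian errors with known sigma (up to a constant)."""
    return np.sum((y - a * np.exp(-k * t)) ** 2) / sigma**2

# profile over k: for each fixed k, minimize over the nuisance parameter a
k_grid = np.linspace(0.3, 1.2, 50)
profile = np.array([
    minimize_scalar(lambda a: neg2loglik(a, k), bounds=(0.0, 10.0), method="bounded").fun
    for k in k_grid
])

# pointwise 95% profile-likelihood confidence region: within 3.84 of the minimum
inside = profile - profile.min() < 3.84
print("approx. 95% CI for k:", k_grid[inside].min(), "to", k_grid[inside].max())
```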

  14. Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures

    Science.gov (United States)

    Atar, Burcu; Kamata, Akihito

    2011-01-01

    The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…

  15. Comparison of standard maximum likelihood classification and polytomous logistic regression used in remote sensing

    Science.gov (United States)

    John Hogland; Nedret Billor; Nathaniel Anderson

    2013-01-01

    Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...
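
    Operationally, "maximum likelihood classification" in this sense fits one multivariate Gaussian per class to training pixels and assigns each new pixel to the class with the largest (log-)likelihood. The sketch below is a minimal illustration with synthetic three-band pixels and two made-up land-cover classes, not code from the study.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)

# synthetic training pixels: 3 spectral bands, 2 land-cover classes
classes = {
    "forest": rng.multivariate_normal([0.2, 0.5, 0.3], 0.01 * np.eye(3), 200),
    "water":  rng.multivariate_normal([0.1, 0.2, 0.6], 0.01 * np.eye(3), 200),
}

# fit one Gaussian per class (the "maximum likelihood classification" model)
models = {name: multivariate_normal(X.mean(axis=0), np.cov(X, rowvar=False))
          for name, X in classes.items()}

def classify(pixels):
    """Assign each pixel to the class with the largest Gaussian log-likelihood."""
    names = list(models)
    scores = np.column_stack([models[n].logpdf(pixels) for n in names])
    return [names[i] for i in scores.argmax(axis=1)]

test = rng.multivariate_normal([0.2, 0.5, 0.3], 0.01 * np.eye(3), 5)
print(classify(test))
```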

  17. Conditional likelihood methods for haplotype-based association analysis using matched case-control data.

    Science.gov (United States)

    Chen, Jinbo; Rodriguez, Carmen

    2007-12-01

    Genetic epidemiologists routinely assess disease susceptibility in relation to haplotypes, that is, combinations of alleles on a single chromosome. We study statistical methods for inferring haplotype-related disease risk using single nucleotide polymorphism (SNP) genotype data from matched case-control studies, where controls are individually matched to cases on some selected factors. Assuming a logistic regression model for haplotype-disease association, we propose two conditional likelihood approaches that address the issue that haplotypes cannot be inferred with certainty from SNP genotype data (phase ambiguity). One approach is based on the likelihood of disease status conditioned on the total number of cases, genotypes, and other covariates within each matching stratum, and the other is based on the joint likelihood of disease status and genotypes conditioned only on the total number of cases and other covariates. The joint-likelihood approach is generally more efficient, particularly for assessing haplotype-environment interactions. Simulation studies demonstrated that the first approach was more robust to model assumptions on the diplotype distribution conditioned on environmental risk variables and matching factors in the control population. We applied the two methods to analyze a matched case-control study of prostate cancer.

  18. Algorithms, data structures, and numerics for likelihood-based phylogenetic inference of huge trees

    Directory of Open Access Journals (Sweden)

    Izquierdo-Carrasco Fernando

    2011-12-01

    Full Text Available Abstract Background The rapid accumulation of molecular sequence data, driven by novel wet-lab sequencing technologies, poses new challenges for large-scale maximum likelihood-based phylogenetic analyses on trees with more than 30,000 taxa and several genes. The three main computational challenges are: numerical stability, the scalability of search algorithms, and the high memory requirements for computing the likelihood. Results We introduce methods for solving these three key problems and provide respective proof-of-concept implementations in RAxML. The mechanisms presented here are not RAxML-specific and can thus be applied to any likelihood-based (Bayesian or maximum likelihood) tree inference program. We develop a new search strategy that can reduce the time required for tree inferences by more than 50% while yielding equally good trees (in the statistical sense) for well-chosen starting trees. We present an adaptation of the Subtree Equality Vector technique for phylogenomic datasets with missing data (already available in RAxML v728) that can reduce execution times and memory requirements by up to 50%. Finally, we discuss issues pertaining to the numerical stability of the Γ model of rate heterogeneity on very large trees and argue in favor of rate heterogeneity models that use a single rate or rate category for each site to resolve these problems. Conclusions We address three major issues pertaining to large scale tree reconstruction under maximum likelihood and propose respective solutions. Respective proof-of-concept/production-level implementations of our ideas are made available as open-source code.

  19. Comparisons of Maximum Likelihood Estimates and Bayesian Estimates for the Discretized Discovery Process Model

    Institute of Scientific and Technical Information of China (English)

    Gao Chunwen; Xu Jingzhen; Richard Sinding-Larsen

    2005-01-01

    A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same data of the North Sea on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach has really improved the overly pessimistic results and downward bias of the maximum likelihood procedure.

  20. Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes

    Science.gov (United States)

    Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen

    2016-06-01

    Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique behind LBS is map building for unknown environments, also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan with the occupancy likelihood map learned from all previous scans at multiple scales to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, scan distortion is unavoidable to varying extents. To compensate for the scan distortion caused by the motion of the UGV, we integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. Moreover, to reduce the effect of dynamic objects, such as walking pedestrians, that are common in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.

  1. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    Directory of Open Access Journals (Sweden)

    Ross S Williamson

    2015-04-01

    Full Text Available Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.

  2. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    Science.gov (United States)

    Williamson, Ross S; Sahani, Maneesh; Pillow, Jonathan W

    2015-04-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.

  3. Applications of Likelihood-Based Methods for the Reliability Parameter of the Location and Scale Exponential Distribution

    NARCIS (Netherlands)

    van der Duyn Schouten, F.A.; Bar-Lev, S.K.

    2003-01-01

    Based on a type-2 censored sample we consider a likelihood-based inference for the reliability parameter R(t) of the location and scale exponential distribution. More specifically, we derive the profile and marginal likelihoods of R(t). A numerical example is presented demonstrating the flavor of ...

  4. Estimation and Model Selection for Model-Based Clustering with the Conditional Classification Likelihood

    CERN Document Server

    Baudry, Jean-Patrick

    2012-01-01

    The Integrated Completed Likelihood (ICL) criterion has been proposed by Biernacki et al. (2000) in the model-based clustering framework to select a relevant number of classes and has been used by statisticians in various application areas. A theoretical study of this criterion is proposed. A contrast related to the clustering objective is introduced: the conditional classification likelihood. This yields an estimator and a class of model selection criteria. The properties of these new procedures are studied, and ICL is proved to be an approximation of one of these criteria. We contrast these results with the current leading point of view about ICL, namely that it would not be consistent. Moreover, these results give insight into the class notion underlying ICL and feed a reflection on the class notion in clustering. General results on penalized minimum contrast criteria and on mixture models are derived, which are interesting in their own right.

  5. Robust maximum-likelihood parameter estimation of stochastic state-space systems based on EM algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper addresses the problem of parameter estimation of multivariable stationary stochastic systems on the basis of observed output data. The main contribution is to employ the expectation-maximisation (EM) method as a means for computing the maximum-likelihood (ML) parameter estimates of the system. A closed form of the expectation for the studied system subject to Gaussian noise is derived, and a parameter choice that maximizes the expectation is also proposed. This results in an iterative algorithm for parameter estimation, and a robust implementation of the algorithm based on QR-factorization and Cholesky factorization is also discussed. Moreover, algorithmic properties such as the non-decreasing likelihood value, necessary and sufficient conditions for the algorithm to arrive at a local stationary point, the convergence rate and the factors affecting the convergence rate are analyzed. A simulation study shows that the proposed algorithm has attractive properties such as numerical stability and avoidance of difficult initial conditions.

  6. Maximum likelihood-based iterated divided difference filter for nonlinear systems from discrete noisy measurements.

    Science.gov (United States)

    Wang, Changyuan; Zhang, Jing; Mu, Jing

    2012-01-01

    A new filter named the maximum likelihood-based iterated divided difference filter (MLIDDF) is developed to improve the low state estimation accuracy of nonlinear state estimation due to large initial estimation errors and the nonlinearity of measurement equations. The MLIDDF algorithm is derivative-free and implemented only by calculating functional evaluations. The MLIDDF algorithm involves the use of the iterated measurement update and the current measurement, and an iteration termination criterion based on maximum likelihood is introduced in the measurement update step, so the MLIDDF is guaranteed to produce a sequence of estimates that moves up the maximum likelihood surface. In a simulation, its performance is compared against that of the unscented Kalman filter (UKF), divided difference filter (DDF), iterated unscented Kalman filter (IUKF) and iterated divided difference filter (IDDF), with the latter two using a traditional iteration strategy. Simulation results demonstrate that the accumulated root-mean-square error in position for the MLIDDF algorithm is reduced by 63% compared to that of the UKF and DDF algorithms, and by 7% compared to that of the IUKF and IDDF algorithms. The new algorithm thus has better state estimation accuracy and a fast convergence rate.

  7. Maximum Likelihood-Based Iterated Divided Difference Filter for Nonlinear Systems from Discrete Noisy Measurements

    Directory of Open Access Journals (Sweden)

    Changyuan Wang

    2012-06-01

    Full Text Available A new filter named the maximum likelihood-based iterated divided difference filter (MLIDDF) is developed to improve the low state estimation accuracy of nonlinear state estimation due to large initial estimation errors and the nonlinearity of measurement equations. The MLIDDF algorithm is derivative-free and implemented only by calculating functional evaluations. The MLIDDF algorithm involves the use of the iterated measurement update and the current measurement, and an iteration termination criterion based on maximum likelihood is introduced in the measurement update step, so the MLIDDF is guaranteed to produce a sequence of estimates that moves up the maximum likelihood surface. In a simulation, its performance is compared against that of the unscented Kalman filter (UKF), divided difference filter (DDF), iterated unscented Kalman filter (IUKF) and iterated divided difference filter (IDDF), with the latter two using a traditional iteration strategy. Simulation results demonstrate that the accumulated root-mean-square error in position for the MLIDDF algorithm is reduced by 63% compared to that of the UKF and DDF algorithms, and by 7% compared to that of the IUKF and IDDF algorithms. The new algorithm thus has better state estimation accuracy and a fast convergence rate.

  8. An Efficient UD-Based Algorithm for the Computation of Maximum Likelihood Sensitivity of Continuous-Discrete Systems

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik;

    2016-01-01

    This paper addresses maximum likelihood parameter estimation of continuous-time nonlinear systems with discrete-time measurements. We derive an efficient algorithm for the computation of the log-likelihood function and its gradient, which can be used in gradient-based optimization algorithms...

  9. Addressing Item-Level Missing Data: A Comparison of Proration and Full Information Maximum Likelihood Estimation.

    Science.gov (United States)

    Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S

    2015-01-01

    Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002; Graham, 2009; Enders, 2010). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program.

  10. Quantitative comparison of OSEM and penalized likelihood image reconstruction using relative difference penalties for clinical PET.

    Science.gov (United States)

    Ahn, Sangtae; Ross, Steven G; Asma, Evren; Miao, Jun; Jin, Xiao; Cheng, Lishui; Wollenweber, Scott D; Manjeshwar, Ravindra M

    2015-08-07

    Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise, and it does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty, with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated that visual image quality, including lesion conspicuity, in images reconstructed by the PL algorithm is better than or at least as good as that in OSEM images. In this paper we evaluate the lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets, including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL over OSEM in lesion quantitation accuracy, with a particular improvement in cold background regions such as the lungs.
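
    As background for the OSEM-versus-penalized-likelihood comparison, the sketch below shows the basic MLEM update that OSEM accelerates by applying it to subsets of the data; the random system matrix stands in for a real scanner geometry, and no penalty term or relative difference prior is included.

```python
import numpy as np

rng = np.random.default_rng(4)

n_pixels, n_bins = 64, 256
A = rng.uniform(0.0, 1.0, (n_bins, n_pixels))   # toy system matrix (detection probabilities)
x_true = rng.gamma(2.0, 1.0, n_pixels)          # toy emission image
y = rng.poisson(A @ x_true)                     # simulated sinogram counts

# MLEM: x <- x / (A^T 1) * A^T (y / (A x)), which monotonically increases the Poisson likelihood
x = np.ones(n_pixels)
sens = A.T @ np.ones(n_bins)                    # sensitivity image
for it in range(50):
    ratio = y / np.maximum(A @ x, 1e-12)
    x = x / sens * (A.T @ ratio)

print("relative error after 50 iterations:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```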

  11. Quasi-likelihood estimation of average treatment effects based on model information

    Institute of Scientific and Technical Information of China (English)

    Zhi-hua SUN

    2007-01-01

    In this paper, the estimation of average treatment effects is considered when we have model information on the conditional mean and conditional variance of the responses given the covariates. The quasi-likelihood method adapted to treatment effects data is developed to estimate the parameters in the conditional mean and conditional variance models. Based on the model information, we define three estimators by imputation, regression and inverse probability weighted methods. All the estimators are shown to be asymptotically normal. Our simulation results show that by using the model information, substantial efficiency gains are obtained compared with the existing estimators.

  13. Likelihood-based inference for cointegration with nonlinear error-correction

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders Christian

    2010-01-01

    We consider a class of nonlinear vector error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties ... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study ...

  14. A comparison of likelihood ratio tests and Rao's score test for three separable covariance matrix structures.

    Science.gov (United States)

    Filipiak, Katarzyna; Klein, Daniel; Roy, Anuradha

    2017-01-01

    The problem of testing the separability of a covariance matrix against an unstructured variance-covariance matrix is studied in the context of multivariate repeated measures data using Rao's score test (RST). The RST statistic is developed with the first component of the separable structure as a first-order autoregressive (AR(1)) correlation matrix or an unstructured (UN) covariance matrix under the assumption of multivariate normality. It is shown that the distribution of the RST statistic under the null hypothesis of any separability does not depend on the true values of the mean or the unstructured components of the separable structure. A significant advantage of the RST is that it can be performed for small samples, even smaller than the dimension of the data, where the likelihood ratio test (LRT) cannot be used, and it outperforms the standard LRT in a number of contexts. Monte Carlo simulations are then used to study the comparative behavior of the null distribution of the RST statistic, as well as that of the LRT statistic, in terms of sample size considerations, and for the estimation of the empirical percentiles. Our findings are compared with existing results where the first component of the separable structure is a compound symmetry (CS) correlation matrix. It is also shown by simulations that the empirical null distribution of the RST statistic converges faster than the empirical null distribution of the LRT statistic to the limiting χ² distribution. The tests are implemented on a real dataset from medical studies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Unconditional efficient one-sided confidence limits for the odds ratio based on conditional likelihood.

    Science.gov (United States)

    Lloyd, Chris J; Moldovan, Max V

    2007-12-10

    We compare various one-sided confidence limits for the odds ratio in a 2 x 2 table. The first group of limits relies on first-order asymptotic approximations and includes limits based on the (signed) likelihood ratio, score and Wald statistics. The second group of limits is based on the conditional tilted hypergeometric distribution, with and without mid-P correction. All these limits have poor unconditional coverage properties and so we apply the general transformation of Buehler (J. Am. Statist. Assoc. 1957; 52:482-493) to obtain limits which are unconditionally exact. The performance of these competing exact limits is assessed across a range of sample sizes and parameter values by looking at their mean size. The results indicate that Buehler limits generated from the conditional likelihood have the best performance, with a slight preference for the mid-P version. This confidence limit has not been proposed before and is recommended for general use, especially when the underlying probabilities are not extreme.

  16. Maximum likelihood-based analysis of photon arrival trajectories in single-molecule FRET

    Energy Technology Data Exchange (ETDEWEB)

    Waligorska, Marta [Adam Mickiewicz University, Faculty of Chemistry, Grunwaldzka 6, 60-780 Poznan (Poland); Molski, Andrzej, E-mail: amolski@amu.edu.pl [Adam Mickiewicz University, Faculty of Chemistry, Grunwaldzka 6, 60-780 Poznan (Poland)

    2012-07-25

    Highlights: ► We study model selection and parameter recovery from single-molecule FRET experiments. ► We examine the maximum likelihood-based analysis of two-color photon trajectories. ► The number of observed photons determines the performance of the method. ► For long trajectories, one can extract mean dwell times that are comparable to inter-photon times. -- Abstract: When two fluorophores (donor and acceptor) are attached to an immobilized biomolecule, anti-correlated fluctuations of the donor and acceptor fluorescence caused by Förster resonance energy transfer (FRET) report on the conformational kinetics of the molecule. Here we assess the maximum likelihood-based analysis of donor and acceptor photon arrival trajectories as a method for extracting the conformational kinetics. Using computer generated data we quantify the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We find that the number of observed photons is the key parameter determining parameter estimation and model selection. For long trajectories, one can extract mean dwell times that are comparable to inter-photon times.

  17. A time-based likelihood approach for the PANDA Barrel DIRC detector

    Energy Technology Data Exchange (ETDEWEB)

    Dzhygadlo, Roman; Goetzen, Klaus; Schwarz, Carsten; Schwiening, Jochen [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Kalicy, Grzegorz; Patsyuk, Maria; Peters, Klaus; Zuehlsdorf, Marko [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Goethe-Universitaet Frankfurt (Germany); Kumawat, Harphool [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Bhabha Atomic Research Centre, Mumbai (India); Collaboration: PANDA-Collaboration

    2014-07-01

    The PANDA experiment at the future Facility for Antiproton and Ion Research in Europe GmbH (FAIR) at GSI, Darmstadt will study fundamental questions of hadron physics and QCD using high-intensity cooled antiproton beams with momenta between 1.5 and 15 GeV/c. Efficient Particle Identification (PID) for a wide momentum range and the full solid angle is required for reconstructing the various physics channels of the PANDA program. Hadronic PID in the barrel region of the PANDA detector will be provided by a DIRC (Detection of Internally Reflected Cherenkov light) counter. The design is based on the successful BABAR DIRC with several key improvements. This contribution presents simulation studies of a barrel DIRC design based on wide radiator plates instead of narrow bars and a PID method using a time-based likelihood approach to make optimum use of the precision timing of this new counter.

  18. Off-Grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function

    Science.gov (United States)

    LIU, Liang; WEI, Ping; LIAO, Hong Shu

    Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to its advantages over conventional approaches. However, the performance of compressive sensing (CS)-based estimation methods decreases when the true DOAs are not exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. In particular, under the condition of a large array, we search for an approximately convex range around the true DOAs within which the DML function is guaranteed to be convex. Based on the convexity of the DML function, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function for a large array and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness and speed.

  19. An extended-source spatial acquisition process based on maximum likelihood criterion for planetary optical communications

    Science.gov (United States)

    Yan, Tsun-Yee

    1992-01-01

    This paper describes an extended-source spatial acquisition process based on the maximum likelihood criterion for interplanetary optical communications. The objective is to use the sun-lit Earth image as a receiver beacon and point the transmitter laser to the Earth-based receiver to establish a communication path. The process assumes the existence of a reference image. The uncertainties between the reference image and the received image are modeled as additive white Gaussian disturbances. It has been shown that the optimal spatial acquisition requires solving two nonlinear equations to estimate the coordinates of the transceiver from the received camera image in the transformed domain. The optimal solution can be obtained iteratively by solving two linear equations. Numerical results using a sample sun-lit Earth as a reference image demonstrate that sub-pixel resolutions can be achieved in a high disturbance environment. Spatial resolution is quantified by Cramer-Rao lower bounds.

  20. Approximate Likelihood

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...

  1. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
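
    To make the core construction concrete, the empirical log-likelihood ratio for a univariate mean can be computed by solving the standard Lagrange-multiplier equation numerically; the sketch below is a minimal illustration of that idea under IID sampling, not code from the book.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

rng = np.random.default_rng(5)
x = rng.exponential(1.0, size=80)   # toy data; true mean = 1

def el_log_ratio(mu):
    """-2 * empirical log-likelihood ratio for H0: E[X] = mu."""
    z = x - mu
    # lambda solves sum(z / (1 + lambda * z)) = 0; bracket it so all weights stay positive
    lo = (-1.0 + 1e-10) / z.max()
    hi = (-1.0 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo + 1e-12, hi - 1e-12)
    return 2.0 * np.sum(np.log1p(lam * z))

# the statistic is compared to a chi-square with 1 degree of freedom
for mu in (0.8, 1.0, 1.3):
    stat = el_log_ratio(mu)
    print(f"mu={mu}: -2 log R = {stat:.3f}, p = {chi2.sf(stat, df=1):.3f}")
```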

  2. Hypothesis support measured by likelihood in the presence of nuisance parameters, multiple studies, and multiple comparisons

    CERN Document Server

    Bickel, David R

    2011-01-01

    By leveraging recent advances in J. Rissanen's information-theoretic approach to model selection that has historical roots in the minimum description length (MDL) of a message, the Bayes-compatibility criterion of A. W. F. Edwards is recast in terms of predictive distributions to make the measure of support robust not only in the presence of nuisance parameters but also in the presence of multiple studies and multiple comparisons within each study. To qualify as a measure of support, a statistic must asymptotically approach the difference between the posterior and prior log-odds, where the parameter distributions considered are physical in the empirical Bayes or random effects sense that they correspond to real frequencies or proportions, e.g., of hypotheses, whether or not the distributions can be estimated and even whether or not data pertaining to the hypotheses are available. Because that Bayes-compatibility condition is weak, an optimality criterion is needed to uniquely specify a measure of support. Two...

  3. Empirical likelihood based detection procedure for change point in mean residual life functions under random censorship.

    Science.gov (United States)

    Chen, Ying-Ju; Ning, Wei; Gupta, Arjun K

    2016-05-01

    The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis; it describes the expected remaining lifetime of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us identify factors, such as age and gender, that may influence the remaining lifetimes of patients after receiving a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for changes in MRL functions with right censored data. Two real examples, the Veterans' Administration lung cancer study and the Stanford heart transplant data, are given to illustrate the detection procedure. Copyright © 2016 John Wiley & Sons, Ltd.

  4. A Fast Algorithm for Maximum Likelihood-based Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2015-01-01

    Periodic signals are encountered in many applications. Such signals can be modelled by a weighted sum of sinusoidal components whose frequencies are integer multiples of a fundamental frequency. Given a data set, the fundamental frequency can be estimated in many ways... including a maximum likelihood (ML) approach. Unfortunately, the ML estimator has a very high computational complexity, and the more inaccurate, but faster correlation-based estimators are therefore often used instead. In this paper, we propose a fast algorithm for the evaluation of the ML cost function... for complex-valued data over all frequencies on a Fourier grid and up to a maximum model order. The proposed algorithm significantly reduces the computational complexity to a level not far from the complexity of the popular harmonic summation method which is an approximate ML estimator....
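
    The harmonic summation estimator mentioned as the fast approximate-ML baseline is easy to state: compute the periodogram on an FFT grid and, for each candidate fundamental frequency, sum the periodogram over its first few harmonics. The sketch below illustrates this with a synthetic complex-valued signal; it is not the exact-ML algorithm proposed in the paper, and the signal parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic complex harmonic signal: fundamental f0 with L harmonics plus noise
N, L, f0_true = 512, 4, 0.083
n = np.arange(N)
x = sum(np.exp(2j * np.pi * l * f0_true * n) for l in range(1, L + 1))
x = x + 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# periodogram on a zero-padded FFT grid
K = 8 * N
P = np.abs(np.fft.fft(x, K)) ** 2 / N
grid = np.arange(K) / K

# harmonic summation: approximate ML cost for each candidate fundamental frequency
candidates = np.arange(1, K // (2 * L))   # keep all L harmonics within the grid
cost = np.array([P[l * candidates] for l in range(1, L + 1)]).sum(axis=0)
f0_hat = grid[candidates[np.argmax(cost)]]
print(f"true f0 = {f0_true}, estimated f0 = {f0_hat:.5f}")
```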

  5. Maximum-likelihood detection based on branch and bound algorithm for MIMO systems

    Institute of Scientific and Technical Information of China (English)

    LI Zi; CAI YueMing

    2008-01-01

    Maximum likelihood detection for MIMO systems can be formulated as an integer quadratic programming problem. In this paper, we introduce a depth-first branch and bound algorithm with variable dichotomy into MIMO detection. More nodes may be pruned with this structure. At each stage of the branch and bound algorithm, an active set algorithm is adopted to solve the dual subproblem. In order to reduce the complexity further, a Cholesky factorization update is presented to solve the linear system at each iteration of the active set algorithm efficiently. By relaxing the pruning conditions, we also present a quasi branch and bound algorithm which implements a good tradeoff between performance and complexity. Numerical results show that the complexity of MIMO detection based on the branch and bound algorithm is very low, especially at low SNR and with large constellations.

  6. Likelihood based observability analysis and confidence intervals for predictions of dynamic models

    CERN Document Server

    Kreutz, Clemens; Timmer, Jens

    2011-01-01

    Mechanistic dynamic models of biochemical networks such as Ordinary Differential Equations (ODEs) contain unknown parameters like the reaction rate constants and the initial concentrations of the compounds. The large number of parameters as well as their nonlinear impact on the model responses hamper the determination of confidence regions for parameter estimates. At the same time, classical approaches translating the uncertainty of the parameters into confidence intervals for model predictions are hardly feasible. In this article it is shown that a so-called prediction profile likelihood yields reliable confidence intervals for model predictions, despite arbitrarily complex and high-dimensional shapes of the confidence regions for the estimated parameters. Prediction confidence intervals of the dynamic states allow a data-based observability analysis. The approach renders the issue of sampling a high-dimensional parameter space into evaluating one-dimensional prediction spaces. The method is also applicable ...

  7. Maximum Likelihood A Priori Knowledge Interpolation-Based Handset Mismatch Compensation for Robust Speaker Identification

    Institute of Scientific and Technical Information of China (English)

    LIAO Yuanfu; ZHUANG Zhixian; YANG Jyhher

    2008-01-01

    Unseen handset mismatch is the major source of performance degradation in speaker identification in telecommunication environments. To alleviate the problem, a maximum likelihood a priori knowledge interpolation (ML-AKI)-based handset mismatch compensation approach is proposed. It first collects a set of handset characteristics of seen handsets to use as a priori knowledge for representing the space of handsets. During evaluation, the characteristics of an unknown test handset are optimally estimated by interpolation from the set of a priori knowledge. Experimental results on the HTIMIT database show that the ML-AKI method can improve the average speaker identification rate from 60.0% to 74.6% compared with conventional maximum a posteriori-adapted Gaussian mixture models. The proposed ML-AKI method is a promising method for robust speaker identification.

  8. Maximum likelihood-based analysis of photon arrival trajectories in single-molecule FRET

    Science.gov (United States)

    Waligórska, Marta; Molski, Andrzej

    2012-07-01

    When two fluorophores (donor and acceptor) are attached to an immobilized biomolecule, anti-correlated fluctuations of the donor and acceptor fluorescence caused by Förster resonance energy transfer (FRET) report on the conformational kinetics of the molecule. Here we assess the maximum likelihood-based analysis of donor and acceptor photon arrival trajectories as a method for extracting the conformational kinetics. Using computer generated data we quantify the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We find that the number of observed photons is the key parameter determining parameter estimation and model selection. For long trajectories, one can extract mean dwell times that are comparable to inter-photon times.

  9. Efficient Maximum Likelihood Estimation of a 2-D Complex Sinusoidal Based on Barycentric Interpolation

    CERN Document Server

    Selva, J

    2011-01-01

    This paper presents an efficient method to compute the maximum likelihood (ML) estimation of the parameters of a complex 2-D sinusoidal, with the complexity order of the FFT. The method is based on an accurate barycentric formula for interpolating band-limited signals, and on the fact that the ML cost function can be viewed as a signal of this type, if the time and frequency variables are switched. The method consists in first computing the DFT of the data samples, and then locating the maximum of the cost function by means of Newton's algorithm. The fact is that the complexity of the latter step is small and independent of the data size, since it makes use of the barycentric formula for obtaining the values of the cost function and its derivatives. Thus, the total complexity order is that of the FFT. The method is validated in a numerical example.

  10. Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.

    Science.gov (United States)

    Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J

    2017-03-03

    We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.

  11. Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods

    Directory of Open Access Journals (Sweden)

    Anthony Hoak

    2017-03-01

    Full Text Available We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.

  12. Driving the Model to Its Limit: Profile Likelihood Based Model Reduction.

    Science.gov (United States)

    Maiwald, Tim; Hass, Helge; Steiert, Bernhard; Vanlier, Joep; Engesser, Raphael; Raue, Andreas; Kipkeew, Friederike; Bock, Hans H; Kaschek, Daniel; Kreutz, Clemens; Timmer, Jens

    2016-01-01

    In systems biology, one of the major tasks is to tailor model complexity to information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small of a model will not be able to describe the data whereas a model which is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data has to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source modelling environment Data2Dynamics based on MATLAB available at http://www.data2dynamics.org/, as well as the R packages dMod/cOde available at https://github.com/dkaschek/. Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood.

  13. Penalized likelihood PET image reconstruction using patch-based edge-preserving regularization.

    Science.gov (United States)

    Wang, Guobao; Qi, Jinyi

    2012-12-01

    Iterative image reconstruction for positron emission tomography (PET) can improve image quality by using spatial regularization that penalizes image intensity difference between neighboring pixels. The most commonly used quadratic penalty often oversmoothes edges and fine features in reconstructed images. Nonquadratic penalties can preserve edges but often introduce piece-wise constant blocky artifacts and the results are also sensitive to the hyper-parameter that controls the shape of the penalty function. This paper presents a patch-based regularization for iterative image reconstruction that uses neighborhood patches instead of individual pixels in computing the nonquadratic penalty. The new regularization is more robust than the conventional pixel-based regularization in differentiating sharp edges from random fluctuations due to noise. An optimization transfer algorithm is developed for the penalized maximum likelihood estimation. Each iteration of the algorithm can be implemented in three simple steps: an EM-like image update, an image smoothing and a pixel-by-pixel image fusion. Computer simulations show that the proposed patch-based regularization can achieve higher contrast recovery for small objects without increasing background variation compared with the quadratic regularization. The reconstruction is also more robust to the hyper-parameter than conventional pixel-based nonquadratic regularizations. The proposed regularization method has been applied to real 3-D PET data.

  14. A likelihood-based method for haplotype association studies of case-control data with genotyping uncertainty

    Institute of Scientific and Technical Information of China (English)

    ZHU; Wensheng; GUO; Jianhua

    2006-01-01

    This paper discusses the associations between traits and haplotypes based on FI (fluorescent intensity) data sets. We consider a clustering algorithm based on mixtures of t distributions to obtain all possible genotypes of each individual (i.e. the "GenoSpectrum"). We then propose a likelihood-based approach that incorporates the genotyping uncertainty when assessing the associations between traits and haplotypes through a haplotype-based logistic regression model. Simulation studies show that our likelihood-based method can reduce the impact induced by genotyping errors.

  15. Likelihood-Based Association Analysis for Nuclear Families and Unrelated Subjects with Missing Genotype Data

    National Research Council Canada - National Science Library

    Dudbridge, Frank

    2008-01-01

    ... by population stratification. Here a novel likelihood for nuclear families is proposed, in which distinct sets of association parameters are used to model the parental genotypes and the offspring genotypes...

  16. Debris Likelihood, based on GhostNet, NASA Aqua MODIS, and GOES Imager, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Debris Likelihood Index (Estimated) is calculated from GhostNet, NASA Aqua MODIS Chl a and NOAA GOES Imager SST data. THIS IS AN EXPERIMENTAL PRODUCT: intended...

  17. Semi-empirical Likelihood Confidence Intervals for the Differences of Two Populations Based on Fractional Imputation

    Institute of Scientific and Technical Information of China (English)

    BAI YUN-XIA; QIN YONG-SONG; WANG LI-RONG; LI LING

    2009-01-01

    Suppose that there are two populations x and y with missing data on both of them, where x has a distribution function F(.) which is unknown and y has a distribution whose form depends on some unknown parameter θ. Fractional imputation is used to fill in the missing data. The asymptotic distributions of the semi-empirical likelihood ratio statistic are obtained under some mild conditions. Then, empirical likelihood confidence intervals for the difference between x and y are constructed.

  18. Estimating nonlinear dynamic equilibrium economies: a likelihood approach

    OpenAIRE

    2004-01-01

    This paper presents a framework to undertake likelihood-based inference in nonlinear dynamic equilibrium economies. The authors develop a sequential Monte Carlo algorithm that delivers an estimate of the likelihood function of the model using simulation methods. This likelihood can be used for parameter estimation and for model comparison. The algorithm can deal both with nonlinearities of the economy and with the presence of non-normal shocks. The authors show consistency of the estimate and...

  19. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes.

    Science.gov (United States)

    Balakrishnan, Narayanaswamy; Pal, Suvra

    2016-08-01

    Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored, and the expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and, assuming the time-to-event to follow the Weibull distribution, derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence.

  20. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    Science.gov (United States)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.

  1. Pseudo-empirical Likelihood-Based Method Using Calibration for Longitudinal Data with Drop-Out.

    Science.gov (United States)

    Chen, Baojiang; Zhou, Xiao-Hua; Chan, Kwun Chuen Gary

    2015-01-01

    In observational studies, interest mainly lies in estimation of the population-level relationship between the explanatory variables and dependent variables, and the estimation is often undertaken using a sample of longitudinal data. In some situations, the longitudinal data sample features biases and loss of estimation efficiency due to non-random drop-out. However, inclusion of population-level information can increase estimation efficiency. In this paper we propose an empirical likelihood-based method to incorporate population-level information in a longitudinal study with drop-out. The population-level information is incorporated via constraints on functions of the parameters, and non-random drop-out bias is corrected by using a weighted generalized estimating equations method. We provide a three-step estimation procedure that makes computation easier. Some commonly used methods are compared in simulation studies, which demonstrate that our proposed method can correct the non-random drop-out bias and increase the estimation efficiency, especially for small sample size or when the missing proportion is high. In some situations, the efficiency improvement is substantial. Finally, we apply this method to an Alzheimer's disease study.

  2. The optical synthetic aperture image restoration based on the improved maximum-likelihood algorithm

    Science.gov (United States)

    Geng, Zexun; Xu, Qing; Zhang, Baoming; Gong, Zhihui

    2012-09-01

    Optical synthetic aperture imaging (OSAI) can be envisaged in the future for improving the image resolution from high altitude orbits. Several future projects are based on optical synthetic aperture for science or earth observation. Compared with equivalent monolithic telescopes, however, the partly filled aperture of OSAI attenuates the modulation transfer function of the system. Consequently, images acquired by an OSAI instrument have to be post-processed to restore images equivalent in resolution to those of a single filled aperture. The maximum-likelihood (ML) algorithm proposed by Benvenuto performed better than the traditional Wiener filter did, but it did not work stably, and the point spread function (PSF) was assumed to be known and unchanged during iterative restoration. In fact, the PSF is unknown in most cases, and its estimate should be updated alternately during optimization. To address these limitations, an improved ML (IML) reconstruction algorithm was proposed in this paper, which incorporates PSF estimation by means of parameter identification into ML and updates the PSF successively during iteration. Accordingly, the IML algorithm converged stably and reached better results. Experimental results showed that the proposed algorithm performed much better than ML in terms of the peak signal-to-noise ratio, mean square error and average contrast evaluation indexes.

  3. A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution

    Directory of Open Access Journals (Sweden)

    Huanxin Zou

    2016-07-01

    Full Text Available The simple linear iterative clustering (SLIC) method is a recently proposed popular superpixel algorithm. However, this method may generate bad superpixels for synthetic aperture radar (SAR) images due to effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected components algorithm. To estimate the likelihood information of SAR image clusters, we incorporated a generalized gamma distribution (GΓD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images.

  4. Empirical Likelihood based Confidence Regions for first order parameters of a heavy tailed distribution

    CERN Document Server

    Worms, Julien

    2010-01-01

    Let $X_1, \ldots, X_n$ be some i.i.d. observations from a heavy tailed distribution $F$, i.e. such that the common distribution of the excesses over a high threshold $u_n$ can be approximated by a Generalized Pareto Distribution $G_{\gamma,\sigma_n}$ with $\gamma > 0$. This work is devoted to the problem of finding confidence regions for the couple $(\gamma,\sigma_n)$: combining the empirical likelihood methodology with estimation equations (close but not identical to the likelihood equations) introduced by J. Zhang (Australian and New Zealand J. Stat. n.49(1), 2007), asymptotically valid confidence regions for $(\gamma,\sigma_n)$ are obtained and proved to perform better than Wald-type confidence regions (especially those derived from the asymptotic normality of the maximum likelihood estimators). By profiling out the scale parameter, confidence intervals for the tail index are also derived.

  5. Likelihood-Based Hypothesis Tests for Brain Activation Detection From MRI Data Disturbed by Colored Noise: A Simulation Study

    NARCIS (Netherlands)

    Den Dekker, A.J.; Poot, D.H.J.; Bos, R.; Sijbers, J.

    2009-01-01

    Functional magnetic resonance imaging (fMRI) data that are corrupted by temporally colored noise are generally preprocessed (i.e., prewhitened or precolored) prior to functional activation detection. In this paper, we propose likelihood-based hypothesis tests that account for colored noise directly

  6. An Efficient UD-Based Algorithm for the Computation of Maximum Likelihood Sensitivity of Continuous-Discrete Systems

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik

    2016-01-01

    . This algorithm uses UD decomposition of symmetric matrices and the array algorithm for covariance update and gradient computation. We test our algorithm on the Lotka-Volterra equations. Compared to the maximum likelihood estimation based on finite difference gradient computation, we get a significant speedup...

  7. Seizure detection in adult ICU patients based on changes in EEG synchronization likelihood

    NARCIS (Netherlands)

    Slooter, A. J. C.; Vriens, E. M.; Spijkstra, J. J.; Girbes, A. R. J.; van Huffelen, A. C.; Stam, C. J.

    2006-01-01

    Introduction: Seizures are common in Intensive Care Unit (ICU) patients, and may increase neuronal injury. Purpose: To explore the possible value of synchronization likelihood (SL) for the automatic detection of seizures in adult ICU patients. Methods: We included EEGs from ICU patients with a varie

  8. Weighted profile likelihood-based confidence interval for the difference between two proportions with paired binomial data.

    Science.gov (United States)

    Pradhan, Vivek; Saha, Krishna K; Banerjee, Tathagata; Evans, John C

    2014-07-30

    Inference on the difference between two binomial proportions in the paired binomial setting is often an important problem in many biomedical investigations. Tang et al. (2010, Statistics in Medicine) discussed six methods to construct confidence intervals (henceforth, we abbreviate it as CI) for the difference between two proportions in the paired binomial setting using the method of variance estimates recovery. In this article, we propose weighted profile likelihood-based CIs for the difference between proportions of a paired binomial distribution. However, instead of the usual likelihood, we use a weighted likelihood that essentially makes adjustments to the cell frequencies of a 2 × 2 table in the spirit of Agresti and Min (2005, Statistics in Medicine). We then conduct numerical studies to compare the performances of the proposed CIs with those of Tang et al. and Agresti and Min in terms of coverage probabilities and expected lengths. Our numerical study clearly indicates that the weighted profile likelihood-based intervals and the Jeffreys interval (cf. Tang et al.) are superior in terms of achieving the nominal level, and in terms of expected lengths, they are competitive. Finally, we illustrate the use of the proposed CIs with real-life examples.

  9. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    Science.gov (United States)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.

  10. Performance comparison of various maximum likelihood nonlinear mixed-effects estimation methods for dose-response models.

    Science.gov (United States)

    Plan, Elodie L; Maloney, Alan; Mentré, France; Karlsson, Mats O; Bertrand, Julie

    2012-09-01

    Estimation methods for nonlinear mixed-effects modelling have considerably improved over the last decades. Nowadays, several algorithms implemented in different software are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid E(max) model, with varying sigmoidicity and residual error models. One hundred simulated datasets for each scenario were generated. One hundred individuals with observations at four doses constituted the rich design and at two doses, the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). All approaches started first from initial estimates set to the true values and second, using altered values. Results were examined through relative root mean squared error (RRMSE) of the estimates. With true initial conditions, full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, and then FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of estimation methods available in current software, giving material to modellers to identify suitable approaches based on an accuracy-versus-runtime trade-off.

  11. Simultaneous maximum-likelihood reconstruction for x-ray grating based phase-contrast tomography avoiding intermediate phase retrieval

    CERN Document Server

    Ritter, André; Durst, Jürgen; Gödel, Karl; Haas, Wilhelm; Michel, Thilo; Rieger, Jens; Weber, Thomas; Wucherer, Lukas; Anton, Gisela

    2013-01-01

    Phase-wrapping artifacts, statistical image noise and the need for a minimum amount of phase steps per projection limit the practicability of x-ray grating based phase-contrast tomography, when using filtered back projection reconstruction. For conventional x-ray computed tomography, the use of statistical iterative reconstruction algorithms has successfully reduced artifacts and statistical issues. In this work, an iterative reconstruction method for grating based phase-contrast tomography is presented. The method avoids the intermediate retrieval of absorption, differential phase and dark field projections. It directly reconstructs tomographic cross sections from phase stepping projections by the use of a forward projecting imaging model and an appropriate likelihood function. The likelihood function is then maximized with an iterative algorithm. The presented method is tested with tomographic data obtained through a wave field simulation of grating based phase-contrast tomography. The reconstruction result...

  12. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  13. Evaluation of Bayesian source estimation methods with Prairie Grass observations and Gaussian plume model: A comparison of likelihood functions and distance measures

    Science.gov (United States)

    Wang, Yan; Huang, Hong; Huang, Lida; Ristic, Branko

    2017-03-01

    Source term estimation for atmospheric dispersion deals with estimation of the emission strength and location of an emitting source using all available information, including site description, meteorological data, concentration observations and prior information. In this paper, Bayesian methods for source term estimation are evaluated using Prairie Grass field observations. The methods include those that require the specification of the likelihood function and those which are likelihood free, also known as approximate Bayesian computation (ABC) methods. The performances of five different likelihood functions in the former and six different distance measures in the latter case are compared for each component of the source parameter vector based on the Nemenyi test over all the 68 data sets available in the Prairie Grass field experiment. Several likelihood functions and distance measures are introduced to source term estimation for the first time. Also, the ABC method is improved in many aspects. Results show that the discrepancy measures (a term covering both the likelihood functions and the distance measures) have a significant influence on source estimation. There is no single winning algorithm, but these methods can be used collectively to provide more robust estimates.
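
    To make the likelihood-free branch of this comparison concrete, the Python sketch below runs a basic ABC rejection scheme against a deliberately simplified ground-level Gaussian plume forward model: source parameters are drawn from the prior, concentrations are simulated at the receptors, and only the draws closest to the observations under a chosen distance measure are kept. The dispersion coefficients, the log-based distance and the 1% acceptance fraction are illustrative assumptions, not the configurations evaluated in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def plume(q, x_src, y_src, receptors, u=3.0):
            """Toy ground-level Gaussian plume: emission rate q at (x_src, y_src), wind
            along +x with speed u; the linear dispersion coefficients are illustrative."""
            dx = receptors[:, 0] - x_src
            dy = receptors[:, 1] - y_src
            conc = np.zeros(len(receptors))
            down = dx > 0                               # only downwind receptors see the plume
            sig_y, sig_z = 0.16 * dx[down], 0.12 * dx[down]
            conc[down] = q / (np.pi * u * sig_y * sig_z) * np.exp(-dy[down] ** 2 / (2 * sig_y ** 2))
            return conc

        def distance(sim, obs, eps=1e-6):
            """One possible discrepancy measure: RMS difference of log concentrations."""
            return np.sqrt(np.mean((np.log(sim + eps) - np.log(obs + eps)) ** 2))

        # Synthetic "observations" from a true source (q = 50, x = 0, y = 5) with noise
        receptors = np.column_stack([100.0 + 20.0 * np.arange(20), np.linspace(-50, 50, 20)])
        obs = plume(50.0, 0.0, 5.0, receptors) * rng.lognormal(0.0, 0.2, len(receptors))

        # ABC rejection: sample from the prior and keep the best 1% of draws
        n_draws = 20000
        prior = np.column_stack([rng.uniform(1.0, 200.0, n_draws),     # emission rate q
                                 rng.uniform(-50.0, 50.0, n_draws),    # source x
                                 rng.uniform(-50.0, 50.0, n_draws)])   # source y
        d = np.array([distance(plume(q, x0, y0, receptors), obs) for q, x0, y0 in prior])
        posterior = prior[d <= np.quantile(d, 0.01)]
        print("ABC posterior mean (q, x, y):", posterior.mean(axis=0))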

  14. Survey of Branch Support Methods Demonstrates Accuracy, Power, and Robustness of Fast Likelihood-based Approximation Schemes

    Science.gov (United States)

    Anisimova, Maria; Gil, Manuel; Dufayard, Jean-François; Dessimoz, Christophe; Gascuel, Olivier

    2011-01-01

    Phylogenetic inference and evaluating support for inferred relationships is at the core of many studies testing evolutionary hypotheses. Despite the popularity of nonparametric bootstrap frequencies and Bayesian posterior probabilities, the interpretation of these measures of tree branch support remains a source of discussion. Furthermore, both methods are computationally expensive and become prohibitive for large data sets. Recent fast approximate likelihood-based measures of branch supports (approximate likelihood ratio test [aLRT] and Shimodaira–Hasegawa [SH]-aLRT) provide a compelling alternative to these slower conventional methods, offering not only speed advantages but also excellent levels of accuracy and power. Here we propose an additional method: a Bayesian-like transformation of aLRT (aBayes). Considering both probabilistic and frequentist frameworks, we compare the performance of the three fast likelihood-based methods with the standard bootstrap (SBS), the Bayesian approach, and the recently introduced rapid bootstrap. Our simulations and real data analyses show that with moderate model violations, all tests are sufficiently accurate, but aLRT and aBayes offer the highest statistical power and are very fast. With severe model violations aLRT, aBayes and Bayesian posteriors can produce elevated false-positive rates. With data sets for which such violation can be detected, we recommend using SH-aLRT, the nonparametric version of aLRT based on a procedure similar to the Shimodaira–Hasegawa tree selection. In general, the SBS seems to be excessively conservative and is much slower than our approximate likelihood-based methods. PMID:21540409

  15. Evidence Based Medicine; Positive and Negative Likelihood Ratios of Diagnostic Tests

    Directory of Open Access Journals (Sweden)

    Alireza Baratloo

    2015-10-01

    Full Text Available In the previous two parts of educational manuscript series in Emergency, we explained some screening characteristics of diagnostic tests including accuracy, sensitivity, specificity, and positive and negative predictive values. In the 3rd part we aimed to explain positive and negative likelihood ratio (LR) as one of the most reliable performance measures of a diagnostic test. To better understand this characteristic of a test, it is first necessary to fully understand the concept of sensitivity and specificity. So we strongly advise you to review the 1st part of this series again. In short, the likelihood ratios are about the percentage of people with and without a disease but having the same test result. The prevalence of a disease can directly influence screening characteristics of a diagnostic test, especially its sensitivity and specificity. Trying to eliminate this effect, LR was developed. Pre-test probability of a disease multiplied by positive or negative LR can estimate post-test probability. Therefore, LR is the most important characteristic of a test to rule out or rule in a diagnosis. A positive likelihood ratio > 1 means a higher probability of the disease being present in a patient with a positive test. The further from 1, either higher or lower, the stronger the evidence to rule in or rule out the disease, respectively. It is obvious that tests with LR close to one are less practical. On the other hand, LR further from one will have more value for application in medicine. Usually, tests with LR < 0.1 or LR > 10 are considered suitable for implementation in routine practice.
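
    The quantities described above can be summarized in a few lines of Python; the sensitivity, specificity and pre-test probability below are purely illustrative numbers, not values from the article.

        def likelihood_ratios(sensitivity, specificity):
            """Positive and negative likelihood ratios of a diagnostic test."""
            lr_pos = sensitivity / (1.0 - specificity)
            lr_neg = (1.0 - sensitivity) / specificity
            return lr_pos, lr_neg

        def post_test_probability(pre_test_prob, lr):
            """Post-test probability via odds: post-odds = pre-odds * LR."""
            pre_odds = pre_test_prob / (1.0 - pre_test_prob)
            post_odds = pre_odds * lr
            return post_odds / (1.0 + post_odds)

        # Illustrative test with 90% sensitivity and 80% specificity
        lr_pos, lr_neg = likelihood_ratios(0.90, 0.80)        # LR+ = 4.5, LR- = 0.125
        print(post_test_probability(0.30, lr_pos))            # about 0.66 after a positive result
        print(post_test_probability(0.30, lr_neg))            # about 0.05 after a negative result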

  16. Maximum likelihood based multi-channel isotropic reverberation reduction for hearing aids

    DEFF Research Database (Denmark)

    Kuklasiński, Adam; Doclo, Simon; Jensen, Søren Holdt;

    2014-01-01

    We propose a multi-channel Wiener filter for speech dereverberation in hearing aids. The proposed algorithm uses joint maximum likelihood estimation of the speech and late reverberation spectral variances, under the assumption that the late reverberant sound field is cylindrically isotropic. ... The dereverberation performance of the algorithm is evaluated using computer simulations with realistic hearing aid microphone signals including head-related effects. The algorithm is shown to work well with signals reverberated both by synthetic and by measured room impulse responses, achieving improvements...

  18. Genetic algorithm-based wide-band deterministic maximum likelihood direction finding algorithm

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Wide-band direction finding is one of the hot and difficult tasks in array signal processing. This paper generalizes the narrow-band deterministic maximum likelihood direction finding algorithm to the wide-band case and constructs an objective function, then utilizes a genetic algorithm for nonlinear global optimization. The direction of arrival is estimated without preprocessing of the array data, so the algorithm eliminates the effect of pre-estimation on the final result. The algorithm is applied to a uniform linear array, and extensive simulation results prove its efficacy. In the process of simulation, we obtain the relation between the estimation error and the parameters of the genetic algorithm.
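
    A minimal Python sketch of this construction is given below, under assumptions that are not from the paper: an 8-element uniform linear array, three frequency bins, noise-loaded but otherwise exact covariance matrices, and SciPy's differential evolution standing in for the genetic algorithm. The wideband deterministic ML objective maximized here is the sum over bins of the trace of the per-bin covariance projected onto the candidate steering subspace.

        import numpy as np
        from scipy.optimize import differential_evolution

        def steering_matrix(theta_deg, freq, m=8, d=0.5, c=1.0):
            """ULA steering matrix; element spacing d is half a wavelength at freq = 1
            (illustrative geometry and normalized units)."""
            theta = np.deg2rad(np.atleast_1d(theta_deg))
            k = 2.0 * np.pi * freq / c
            return np.exp(1j * k * d * np.outer(np.arange(m), np.sin(theta)))

        def wideband_dml_cost(theta_deg, cov_per_bin, freqs):
            """Negative wideband deterministic ML objective: minus the summed trace of
            each per-bin covariance projected onto span(A(theta, f))."""
            cost = 0.0
            for r_hat, f in zip(cov_per_bin, freqs):
                a = steering_matrix(theta_deg, f)
                proj = a @ np.linalg.pinv(a)            # projector onto the steering subspace
                cost -= np.real(np.trace(proj @ r_hat))
            return cost

        # Two sources at -10 and 25 degrees observed over three frequency bins
        freqs = [0.9, 1.0, 1.1]
        true_doa = np.array([-10.0, 25.0])
        cov_per_bin = []
        for f in freqs:
            a = steering_matrix(true_doa, f)
            r_hat = a @ np.diag([1.0, 0.7]) @ a.conj().T + 0.05 * np.eye(8)   # source powers + noise
            cov_per_bin.append(r_hat)

        res = differential_evolution(wideband_dml_cost, bounds=[(-90.0, 90.0)] * 2,
                                     args=(cov_per_bin, freqs), seed=1)
        print(np.sort(res.x))   # should land near [-10, 25]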

  19. Adaptive speckle reduction of ultrasound images based on maximum likelihood estimation

    Institute of Scientific and Technical Information of China (English)

    Xu Liu(刘旭); Yongfeng Huang(黄永锋); Wende Shou(寿文德); Tao Ying(应涛)

    2004-01-01

    A method has been developed in this paper to gain effective speckle reduction in medical ultrasound images. To exploit full knowledge of the speckle distribution, maximum likelihood was used here to estimate the speckle parameters corresponding to its statistical mode. The results were then incorporated into the nonlinear anisotropic diffusion to achieve adaptive speckle reduction. Verified with simulated and ultrasound images, we show that this algorithm is capable of enhancing features of clinical interest and reduces speckle noise more efficiently than just applying classical filters. To avoid edge contribution, changes of the contrast-to-noise ratio of different regions are also compared to investigate the performance of this approach.

  20. A likelihood ratio test for species membership based on DNA sequence data

    DEFF Research Database (Denmark)

    Matz, Mikhail V.; Nielsen, Rasmus

    2005-01-01

    DNA barcoding as an approach for species identification is rapidly increasing in popularity. However, it remains unclear which statistical procedures should accompany the technique to provide a measure of uncertainty. Here we describe a likelihood ratio test which can be used to test if a sampled sequence is a member of an a priori specified species. We investigate the performance of the test using coalescence simulations, as well as using real data from butterflies and frogs representing two kinds of challenge for DNA barcoding: extremely low and extremely high levels of sequence variability.

  1. Inference for the Sharpe Ratio Using a Likelihood-Based Approach

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2012-01-01

    Full Text Available The Sharpe ratio is the prominent risk-adjusted performance measure used by practitioners. Statistical testing of this ratio using its asymptotic distribution has lagged behind its use. In this paper, highly accurate likelihood analysis is applied for inference on the Sharpe ratio. Both the one- and two-sample problems are considered. The methodology has O(n^(−3/2)) distributional accuracy and can be implemented using any parametric return distribution structure. Simulations are provided to demonstrate the method's superior accuracy over existing methods used for testing in the literature.
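
    For orientation, the Python sketch below computes the sample Sharpe ratio and a conventional first-order z-test based on the iid-normal standard error sqrt((1 + SR^2/2)/n) commonly attributed to Lo (2002); this is the kind of asymptotic benchmark the higher-order likelihood method improves upon, not the paper's own procedure, and the return series is simulated for illustration.

        import numpy as np

        def sharpe_ratio(returns, rf=0.0):
            """Sample Sharpe ratio of a series of periodic returns."""
            excess = np.asarray(returns) - rf
            return excess.mean() / excess.std(ddof=1)

        def sharpe_z_test(returns, rf=0.0, sr0=0.0):
            """First-order z-statistic for H0: SR = sr0, using the iid-normal
            asymptotic standard error sqrt((1 + SR^2 / 2) / n)."""
            r = np.asarray(returns)
            sr = sharpe_ratio(r, rf)
            se = np.sqrt((1.0 + 0.5 * sr ** 2) / len(r))
            return sr, (sr - sr0) / se

        rng = np.random.default_rng(0)
        monthly = rng.normal(0.01, 0.04, 120)        # ten years of simulated monthly returns
        print(sharpe_z_test(monthly, rf=0.002))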

  2. Estimating sampling error of evolutionary statistics based on genetic covariance matrices using maximum likelihood.

    Science.gov (United States)

    Houle, D; Meyer, K

    2015-08-01

    We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. © 2015 European Society For Evolutionary Biology.
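
    A schematic Python version of the REML-MVN resampling step is sketched below: half-vectorize the estimated G, draw replicates from a multivariate normal whose covariance is the sampling covariance of those elements (in practice taken from the inverse information matrix reported by the REML program), and push each draw through a statistic of interest. The 3 × 3 G matrix, its sampling covariance and the average-evolvability statistic (trace(G)/k) are placeholders rather than values from the Drosophila analysis.

        import numpy as np

        rng = np.random.default_rng(0)

        def vech(m):
            """Half-vectorize a symmetric matrix (lower triangle, row by row)."""
            return m[np.tril_indices(m.shape[0])]

        def unvech(v, k):
            """Rebuild a symmetric k x k matrix from its half-vectorization."""
            m = np.zeros((k, k))
            m[np.tril_indices(k)] = v
            return m + np.tril(m, -1).T

        # Placeholder REML output: estimated G and the sampling covariance of vech(G)
        g_hat = np.array([[1.0, 0.3, 0.1],
                          [0.3, 0.8, 0.2],
                          [0.1, 0.2, 0.5]])
        v_vech = 0.01 * np.eye(6)

        def mean_evolvability(g):
            """Average evolvability over random directions equals trace(G)/k."""
            return np.trace(g) / g.shape[0]

        # REML-MVN: sample vech(G) and propagate each draw through the statistic
        draws = rng.multivariate_normal(vech(g_hat), v_vech, size=5000)
        stats = np.array([mean_evolvability(unvech(d, 3)) for d in draws])
        print(stats.mean(), stats.std())   # point estimate and approximate standard error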

  3. Mixture model for inferring susceptibility to mastitis in dairy cattle: a procedure for likelihood-based inference

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2004-01-01

    Full Text Available Abstract A Gaussian mixture model with a finite number of components and correlated random effects is described. The ultimate objective is to model somatic cell count information in dairy cattle and to develop criteria for genetic selection against mastitis, an important udder disease. Parameter estimation is by maximum likelihood or by an extension of restricted maximum likelihood. A Monte Carlo expectation-maximization algorithm is used for this purpose. The expectation step is carried out using Gibbs sampling, whereas the maximization step is deterministic. Ranking rules based on the conditional probability of membership in a putative group of uninfected animals, given the somatic cell information, are discussed. Several extensions of the model are suggested.

  4. Topologies of the conditional ancestral trees and full-likelihood-based inference in the general coalescent tree framework.

    Science.gov (United States)

    Sargsyan, Ori

    2010-08-01

    The general coalescent tree framework is a family of models for determining ancestries among random samples of DNA sequences at a nonrecombining locus. The ancestral models included in this framework can be derived under various evolutionary scenarios. Here, a computationally tractable full-likelihood-based inference method for neutral polymorphisms is presented, using the general coalescent tree framework and the infinite-sites model for mutations in DNA sequences. First, an exact sampling scheme is developed to determine the topologies of conditional ancestral trees. However, this scheme has some computational limitations and to overcome these limitations a second scheme based on importance sampling is provided. Next, these schemes are combined with Monte Carlo integrations to estimate the likelihood of full polymorphism data, the ages of mutations in the sample, and the time of the most recent common ancestor. In addition, this article shows how to apply this method for estimating the likelihood of neutral polymorphism data in a sample of DNA sequences completely linked to a mutant allele of interest. This method is illustrated using the data in a sample of DNA sequences at the APOE gene locus.

  5. Conditional-likelihood approach to material decomposition in spectral absorption-based or phase-contrast CT

    Science.gov (United States)

    Baturin, Pavlo

    2015-03-01

    Material decomposition in absorption-based X-ray CT imaging suffers certain inefficiencies when differentiating among soft tissue materials. To address this problem, decomposition techniques turn to spectral CT, which has gained popularity over the last few years. Although proven to be more effective, such techniques are primarily limited to the identification of contrast agents and soft and bone-like materials. In this work, we introduce a novel conditional likelihood, material-decomposition method capable of identifying any type of material objects scanned by spectral CT. The method takes advantage of the statistical independence of spectral data to assign likelihood values to each of the materials on a pixel-by-pixel basis. It results in likelihood images for each material, which can be further processed by setting certain conditions or thresholds, to yield a final material-diagnostic image. The method can also utilize phase-contrast CT (PCI) data, where measured absorption and phase-shift information can be treated as statistically independent datasets. In this method, the following cases were simulated: (i) single-scan PCI CT, (ii) spectral PCI CT, (iii) absorption-based spectral CT, and (iv) single-scan PCI CT with an added tumor mass. All cases were analyzed using a digital breast phantom, although any other objects or materials could be used instead. As a result, all materials were identified, as expected, according to their assignment in the digital phantom. Materials with similar attenuation or phase-shift values (e.g., glandular tissue, skin, and tumor masses) were differentiated especially successfully by the likelihood approach.
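
    The pixel-wise logic can be sketched in a few lines of Python: treat the reconstructed absorption and phase-shift values as statistically independent Gaussian measurements, score every candidate material's log-likelihood at each pixel, and keep the best candidate only when it beats the runner-up by a margin. The reference values, noise levels, margin threshold and the tiny test image are illustrative assumptions, not the paper's settings.

        import numpy as np

        # Illustrative reference (attenuation, phase shift) pairs and channel noise SDs
        materials = {"adipose": (0.22, 1.10), "glandular": (0.27, 1.35), "tumor": (0.28, 1.42)}
        sigma_mu, sigma_phi = 0.01, 0.05

        def log_likelihood(mu_img, phi_img, ref_mu, ref_phi):
            """Pixel-wise Gaussian log-likelihood with independent absorption and phase channels."""
            return (-(mu_img - ref_mu) ** 2 / (2 * sigma_mu ** 2)
                    - (phi_img - ref_phi) ** 2 / (2 * sigma_phi ** 2))

        def classify(mu_img, phi_img, min_margin=0.0):
            """Label each pixel with the most likely material; pixels whose best candidate
            does not beat the runner-up by min_margin are marked -1 (undecided)."""
            names = list(materials)
            ll = np.stack([log_likelihood(mu_img, phi_img, *materials[n]) for n in names])
            label = ll.argmax(axis=0)
            ranked = np.sort(ll, axis=0)
            margin = ranked[-1] - ranked[-2]
            return np.where(margin >= min_margin, label, -1), names

        mu_img = np.array([[0.221, 0.272], [0.279, 0.268]])   # toy 2 x 2 "reconstructions"
        phi_img = np.array([[1.12, 1.33], [1.43, 1.36]])
        labels, names = classify(mu_img, phi_img, min_margin=0.5)
        print(labels, names)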

  6. Characterization of a likelihood based method and effects of markers informativeness in evaluation of admixture and population group assignment

    Directory of Open Access Journals (Sweden)

    Kranzler Henry R

    2005-10-01

    Full Text Available Abstract Background Detection and evaluation of population stratification are crucial issues in the conduct of genetic association studies. Statistical approaches useful for understanding these issues have been proposed; these methods rely on information gained from genotyping sets of markers that reflect population ancestry. Before using these methods, a set of markers informative for differentiating population genetic substructure (PGS) is necessary. We have previously evaluated the performance of a Bayesian clustering method implemented in the software STRUCTURE in detecting PGS with a particular informative marker set. In this study, we implemented a likelihood based method (LBM) in evaluating the informativeness of the same selected marker panel, with respect to assessing potential for stratification in samples of European Americans (EAs) and African Americans (AAs), which are known to be admixed. LBM calculates the probability of a set of genotypes based on observations in a reference population with known specific allele frequencies for each marker, assuming Hardy Weinberg equilibrium (HWE) for each marker and linkage equilibrium among markers. Results In EAs, the assignment accuracy by LBM exceeded 99% using the most efficient marker FY, and reached perfect assignment accuracy using the 10 most efficient markers excluding FY. In AAs, the assignment accuracy reached 96.4% using FY, and >95% when using at least the 9 most efficient markers. The comparison of the observed and reference allele frequencies (which were derived from previous publications and public databases) shows that allele frequencies observed in EAs matched the reference group more accurately than allele frequencies observed in AAs. As a result, the LBM performed better in EAs than AAs, as might be expected given the dependence of LBMs on prior knowledge of allele frequencies. Performance was not dependent on sample size. Conclusion The performance of the LBM depends on the

  7. A biclustering algorithm for binary matrices based on penalized Bernoulli likelihood

    KAUST Repository

    Lee, Seokho

    2013-01-31

    We propose a new biclustering method for binary data matrices using the maximum penalized Bernoulli likelihood estimation. Our method applies a multi-layer model defined on the logits of the success probabilities, where each layer represents a simple bicluster structure and the combination of multiple layers is able to reveal complicated, multiple biclusters. The method allows for non-pure biclusters, and can simultaneously identify the 1-prevalent blocks and 0-prevalent blocks. A computationally efficient algorithm is developed and guidelines are provided for specifying the tuning parameters, including initial values of model parameters, the number of layers, and the penalty parameters. Missing-data imputation can be handled in the EM framework. The method is tested using synthetic and real datasets and shows good performance. © 2013 Springer Science+Business Media New York.

  8. Regression analysis based on conditional likelihood approach under semi-competing risks data.

    Science.gov (United States)

    Hsieh, Jin-Jian; Huang, Yu-Ting

    2012-07-01

    Medical studies often involve semi-competing risks data, which consist of two types of events, namely a terminal event and a non-terminal event. Because the non-terminal event may be dependently censored by the terminal event, it is not possible to make inference on the non-terminal event without extra assumptions. Therefore, this study assumes that the dependence structure on the non-terminal event and the terminal event follows a copula model, and lets the marginal regression models of the non-terminal event and the terminal event both follow time-varying effect models. This study uses a conditional likelihood approach to estimate the time-varying coefficient of the non-terminal event, and proves the large sample properties of the proposed estimator. Simulation studies show that the proposed estimator performs well. This study also uses the proposed method to analyze data from the AIDS Clinical Trial Group study (ACTG 320).

  9. Forensic Automatic Speaker Recognition Based on Likelihood Ratio Using Acoustic-phonetic Features Measured Automatically

    Directory of Open Access Journals (Sweden)

    Huapeng Wang

    2015-01-01

    Full Text Available Forensic speaker recognition is experiencing a remarkable paradigm shift in terms of the evaluation framework and presentation of voice evidence. This paper proposes a new method of forensic automatic speaker recognition using the likelihood ratio framework to quantify the strength of voice evidence. The proposed method uses a reference database to calculate the within- and between-speaker variability. Some acoustic-phonetic features are extracted automatically using the software VoiceSauce. The effectiveness of the approach was tested using two Mandarin databases: a mobile telephone database and a landline database. The experimental results indicate that these acoustic-phonetic features do have some discriminating potential and are worth trying in discrimination. The automatic acoustic-phonetic features have acceptable discriminative performance and can provide more reliable results in evidence analysis when fused with other kinds of voice features.

  10. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    Science.gov (United States)

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE) that is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.

  11. Segmentation of brain magnetic resonance images based on multi-atlas likelihood fusion: testing using data with a broad range of anatomical and photometric profiles

    Directory of Open Access Journals (Sweden)

    Xiaoying eTang

    2015-03-01

    Full Text Available We propose a hierarchical pipeline for skull-stripping and segmentation of anatomical structures of interest from T1-weighted images of the human brain. The pipeline is constructed based on a two-level Bayesian parameter estimation algorithm called multi-atlas likelihood fusion (MALF). In MALF, estimation of the parameter of interest is performed via maximum a posteriori estimation using the expectation-maximization (EM) algorithm. The likelihoods of multiple atlases are fused in the E-step while the optimal estimator, a single maximizer of the fused likelihoods, is then obtained in the M-step. There are two stages in the proposed pipeline: first, the input T1-weighted image is automatically skull-stripped via a fast MALF, then internal brain structures of interest are automatically extracted using a regular MALF. We assess the performance of each of the two modules in the pipeline based on two sets of images with markedly different anatomical and photometric contrasts: 3T MPRAGE scans of pediatric subjects with developmental disorders versus 1.5T SPGR scans of elderly subjects with dementia. Evaluation is performed quantitatively using the Dice overlap as well as qualitatively via visual inspections. As a result, we demonstrate subject-level differences in the performance of the proposed pipeline, which may be accounted for by age, diagnosis, or the imaging parameters (particularly the field strength). For the subcortical and ventricular structures of the two datasets, the hierarchical pipeline is capable of producing automated segmentations with Dice overlaps ranging from 0.8 to 0.964 when compared with the gold standard. Comparisons with other representative segmentation algorithms are presented, relative to which the proposed hierarchical pipeline demonstrates comparable or superior accuracy.

  12. Molecular systematics of the Jacks (Perciformes: Carangidae) based on mitochondrial cytochrome b sequences using parsimony, likelihood, and Bayesian approaches.

    Science.gov (United States)

    Reed, David L; Carpenter, Kent E; deGravelle, Martin J

    2002-06-01

    The Carangidae represent a diverse family of marine fishes that include both ecologically and economically important species. Currently, there are four recognized tribes within the family, but phylogenetic relationships among them based on morphology are not resolved. In addition, the tribe Carangini contains species with a variety of body forms and no study has tried to interpret the evolution of this diversity. We used DNA sequences from the mitochondrial cytochrome b gene to reconstruct the phylogenetic history of 50 species from each of the four tribes of Carangidae and four carangoid outgroup taxa. We found support for the monophyly of three tribes within the Carangidae (Carangini, Naucratini, and Trachinotini); however, monophyly of the fourth tribe (Scomberoidini) remains questionable. A sister group relationship between the Carangini and the Naucratini is well supported. This clade is apparently sister to the Trachinotini plus Scomberoidini but there is uncertain support for this relationship. Additionally, we examined the evolution of body form within the tribe Carangini and determined that each of the predominant clades has a distinct evolutionary trend in body form. We tested three methods of phylogenetic inference, parsimony, maximum-likelihood, and Bayesian inference. Whereas the three analyses produced largely congruent hypotheses, they differed in several important relationships. Maximum-likelihood and Bayesian methods produced hypotheses with higher support values for deep branches. The Bayesian analysis was computationally much faster and yet produced phylogenetic hypotheses that were very similar to those of the maximum-likelihood analysis. (c) 2002 Elsevier Science (USA).

  13. A comparison of least squares and conditional maximum likelihood estimators under volume endpoint censoring in tumor growth experiments.

    Science.gov (United States)

    Roy Choudhury, Kingshuk; O'Sullivan, Finbarr; Kasman, Ian; Plowman, Greg D

    2012-12-20

    Measurements in tumor growth experiments are stopped once the tumor volume exceeds a preset threshold: a mechanism we term volume endpoint censoring. We argue that this type of censoring is informative. Further, least squares (LS) parameter estimates are shown to suffer a bias in a general parametric model for tumor growth with an independent and identically distributed measurement error, both theoretically and in simulation experiments. In a linear growth model, the magnitude of bias in the LS growth rate estimate increases with the growth rate and the standard deviation of measurement error. We propose a conditional maximum likelihood estimation procedure, which is shown both theoretically and in simulation experiments to yield approximately unbiased parameter estimates in linear and quadratic growth models. Both LS and maximum likelihood estimators have similar variance characteristics. In simulation studies, these properties appear to extend to the case of moderately dependent measurement error. The methodology is illustrated by application to a tumor growth study for an ovarian cancer cell line.
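
    The informativeness of this censoring mechanism can be seen in a small simulation, sketched in Python below under a linear growth model with iid Gaussian measurement error (a simplification, not the paper's exact model): measurement stops at the first observation above the volume threshold, so the retained points are selected in a way that depends on the errors, and the average least squares slope drifts away from the true growth rate as the error standard deviation increases.

        import numpy as np

        rng = np.random.default_rng(0)

        def ls_slope_with_endpoint_censoring(b_true=10.0, a=100.0, sigma=15.0,
                                             threshold=300.0, n_max=60):
            """Simulate one tumor with linear growth and iid measurement error, stop at
            the first observation exceeding the threshold (that point is kept), and
            return the least squares slope fitted to the retained observations."""
            t = np.arange(n_max, dtype=float)
            v = a + b_true * t + rng.normal(0.0, sigma, n_max)
            over = np.nonzero(v > threshold)[0]
            stop = over[0] + 1 if len(over) else n_max
            return np.polyfit(t[:stop], v[:stop], 1)[0]

        for sigma in (5.0, 15.0, 30.0):
            slopes = [ls_slope_with_endpoint_censoring(sigma=sigma) for _ in range(2000)]
            print(f"sigma = {sigma:4.1f}   mean LS slope = {np.mean(slopes):6.3f}   (true slope 10.0)")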

  14. Optimal likelihood-based matching of volcanic sources and deposits in the Auckland Volcanic Field

    Science.gov (United States)

    Kawabata, Emily; Bebbington, Mark S.; Cronin, Shane J.; Wang, Ting

    2016-09-01

    In monogenetic volcanic fields, where each eruption forms a new volcano, focusing and migration of activity over time is a very real possibility. In order for hazard estimates to reflect future, rather than past, behavior, it is vital to assemble as much reliable age data as possible on past eruptions. Multiple swamp/lake records have been extracted from the Auckland Volcanic Field, underlying the 1.4 million-population city of Auckland. We examine here the problem of matching these dated deposits to the volcanoes that produced them. The simplest issue is separation in time, which is handled by simulating prior volcano age sequences from direct dates where known, thinned via ordering constraints between the volcanoes. The subproblem of varying deposition thicknesses (which may be zero) at five locations of known distance and azimuth is quantified using a statistical attenuation model for the volcanic ash thickness. These elements are combined with other constraints, from widespread fingerprinted ash layers that separate eruptions and time-censoring of the records, into a likelihood that was optimized via linear programming. A second linear program was used to optimize over the Monte-Carlo simulated set of prior age profiles to determine the best overall match and consequent volcano age assignments. Considering all 20 matches, and the multiple factors of age, direction, and size/distance simultaneously, results in some non-intuitive assignments which would not be produced by single factor analyses. Compared with earlier work, the results provide better age control on a number of smaller centers such as Little Rangitoto, Otuataua, Taylors Hill, Wiri Mountain, Green Hill, Otara Hill, Hampton Park and Mt Cambria. Spatio-temporal hazard estimates are updated on the basis of the new ordering, which suggest that the scale of the 'flare-up' around 30 ka, while still highly significant, was less than previously thought.

  15. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    Science.gov (United States)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing the maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely code-word and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.

  16. Likelihood-based genetic mark-recapture estimates when genotype samples are incomplete and contain typing errors.

    Science.gov (United States)

    Macbeth, Gilbert M; Broderick, Damien; Ovenden, Jennifer R; Buckworth, Rik C

    2011-11-01

    Genotypes produced from samples collected non-invasively in harsh field conditions often lack the full complement of data from the selected microsatellite loci. The application to genetic mark-recapture methodology in wildlife species can therefore be prone to misidentifications leading to both 'true non-recaptures' being falsely accepted as recaptures (Type I errors) and 'true recaptures' being undetected (Type II errors). Here we present a new likelihood method that allows every pairwise genotype comparison to be evaluated independently. We apply this method to determine the total number of recaptures by estimating and optimising the balance between Type I errors and Type II errors. We show through simulation that the standard error of recapture estimates can be minimised through our algorithms. Interestingly, the precision of our recapture estimates actually improved when we included individuals with missing genotypes, as this increased the number of pairwise comparisons, potentially uncovering more recaptures. Simulations suggest that the method is tolerant to genotyping error rates of up to 5% per locus and can theoretically work in datasets with as little as 60% of loci genotyped. Our methods can be implemented in datasets where standard mismatch analyses fail to distinguish recaptures. Finally, we show that by assigning a low Type I error rate to our matching algorithms we can generate a dataset of individuals of known capture histories that is suitable for the downstream analysis with traditional mark-recapture methods.
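
    A simplified Python sketch of one pairwise evaluation is shown below: every typed locus contributes to a log likelihood ratio of "same individual" versus "different individuals", mismatches under the first hypothesis are attributed to typing error, and untyped loci are skipped so that incomplete genotypes can still be compared. The single per-locus error rate and the locus-wise random match probabilities are illustrative stand-ins for the fuller model in the paper.

        import numpy as np

        def pair_log_lr(g1, g2, p_match, err=0.02):
            """Log likelihood ratio that two multilocus genotypes belong to the same
            individual rather than to different individuals. Genotype codes use
            np.nan for untyped loci; p_match holds locus-specific random match
            probabilities derived from allele frequencies (simplified error model)."""
            g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
            typed = ~np.isnan(g1) & ~np.isnan(g2)          # compare typed loci only
            same = g1[typed] == g2[typed]
            p_m = np.asarray(p_match)[typed]
            log_same = np.sum(np.where(same, np.log(1.0 - err), np.log(err)))
            log_diff = np.sum(np.where(same, np.log(p_m), np.log(1.0 - p_m)))
            return log_same - log_diff

        # Two 8-locus genotypes that agree at every typed locus; one locus is missing
        g_a = [12, 7, 33, 5, np.nan, 21, 9, 14]
        g_b = [12, 7, 33, 5, 18,     21, 9, 14]
        p_m = [0.05, 0.08, 0.04, 0.10, 0.06, 0.07, 0.09, 0.05]
        print(pair_log_lr(g_a, g_b, p_m))   # large positive value suggests a recapture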

  17. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  18. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    Full Text Available A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computation savings can be obtained--albeit, of course, with some information loss--suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
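
    The core idea can be sketched for a single susceptible individual under a simple distance-kernel ILM, as in the Python fragment below: the exact one-step infection probability sums the infectious pressure over every infectious individual, while the approximation sums over a random subsample of size m scaled up by N/m. The kernel form, parameter values and subsample size are illustrative assumptions, not the settings used for the FMD analysis.

        import numpy as np

        rng = np.random.default_rng(0)

        def infection_prob(dists, beta, kappa):
            """Exact ILM-style one-step infection probability for a susceptible with the
            given distances to all currently infectious individuals."""
            pressure = beta * np.sum(dists ** (-kappa))
            return 1.0 - np.exp(-pressure)

        def approx_infection_prob(dists, beta, kappa, m=50):
            """Sampling-based approximation: estimate the total infectious pressure from a
            random subsample of m infectious individuals, scaled up by N/m."""
            n = len(dists)
            if n <= m:
                return infection_prob(dists, beta, kappa)
            sample = rng.choice(dists, size=m, replace=False)
            pressure = beta * (n / m) * np.sum(sample ** (-kappa))
            return 1.0 - np.exp(-pressure)

        # One susceptible surrounded by 2000 infectious premises (distances in km)
        d = rng.uniform(0.5, 30.0, 2000)
        exact = infection_prob(d, beta=0.002, kappa=1.5)
        approx = np.mean([approx_infection_prob(d, 0.002, 1.5, m=50) for _ in range(200)])
        print(exact, approx)   # the two should be close, at a fraction of the cost per call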

  19. Two-sample density-based empirical likelihood tests for incomplete data in application to a pneumonia study.

    Science.gov (United States)

    Vexler, Albert; Yu, Jihnhee

    2011-07-01

    In clinical trials examining the incidence of pneumonia, it is common practice to measure infection via both invasive and non-invasive procedures. In the context of a recently completed randomized trial comparing two treatments, the invasive procedure was only utilized in certain scenarios, owing to the added risk involved, and only when the level of the non-invasive procedure surpassed a given threshold. Hence, what was observed was bivariate data with a pattern of missingness in the invasive variable dependent upon the value of the observed non-invasive observation within a given pair. In order to compare two treatments with bivariate observed data exhibiting this pattern of missingness, we developed a semi-parametric methodology utilizing the density-based empirical likelihood approach in order to provide a non-parametric approximation to Neyman-Pearson-type test statistics. This novel empirical likelihood approach has both parametric and non-parametric components. The non-parametric component utilizes the observations for the non-missing cases, while the parametric component is utilized to tackle the case where observations are missing with respect to the invasive variable. The method is illustrated through its application to the actual data obtained in the pneumonia study and is shown to be an efficient and practical method. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Maximum likelihood amplitude scale estimation for quantization-based watermarking in the presence of Dither

    NARCIS (Netherlands)

    Shterev, I.D.; Lagendijk, R.L.

    2005-01-01

    Quantization-based watermarking schemes comprise a class of watermarking schemes that achieves the channel capacity in terms of additive noise attacks [1]. The existence of good high dimensional lattices that can be efficiently implemented [2–4] and incorporated into watermarking structures, made quantiza

  1. Probabilistic evaluation of n traces with no putative source: A likelihood ratio based approach in an investigative framework.

    Science.gov (United States)

    De March, I; Sironi, E; Taroni, F

    2016-09-01

    Analysis of marks recovered from different crime scenes can be useful to detect a linkage between criminal cases, even though a putative source for the recovered traces is not available. This particular circumstance is often encountered in the early stages of investigations, and thus the evaluation of evidence association may provide useful information for the investigators. This association is evaluated here from a probabilistic point of view: a likelihood ratio based approach is suggested in order to quantify the strength of the evidence of trace association in the light of two mutually exclusive propositions, namely that the n traces come from a common source or from an unspecified number of sources. To deal with this kind of problem, probabilistic graphical models are used, in the form of Bayesian networks and object-oriented Bayesian networks, allowing users to intuitively handle the uncertainty related to the inferential problem.

  2. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    Science.gov (United States)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

    Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.

  3. Assessing the likelihood and magnitude of volcanic explosions based on seismic quiescence

    Science.gov (United States)

    Roman, Diana C.; Rodgers, Mel; Geirsson, Halldor; LaFemina, Peter C.; Tenorio, Virginia

    2016-09-01

    Volcanic eruptions are generally forecast based on strong increases in monitoring parameters such as seismicity or gas emissions above a relatively low background level (e.g., Voight, 1988; Sparks, 2003). Because of this, forecasting individual explosions during an ongoing eruption, or at persistently restless volcanoes, is difficult as seismicity, gas emissions, and other indicators of unrest are already in a heightened state. Therefore, identification of short-term precursors to individual explosions at volcanoes already in heightened states of unrest, and an understanding of explosion trigger mechanisms, is important for the reduction of volcanic risk worldwide. Seismic and visual observations at Telica Volcano, Nicaragua, demonstrate that a) episodes of seismic quiescence reliably preceded explosions during an eruption in May 2011 and b) the duration of precursory quiescence and the energy released in the ensuing explosion were strongly correlated. Precursory seismic quiescence is interpreted as the result of sealing of shallow gas pathways, leading to pressure accumulation and eventual catastrophic failure of the system, culminating in an explosion. Longer periods of sealing and pressurization lead to greater energy release in the ensuing explosion. Near-real-time observations of seismic quiescence at restless or erupting volcanoes can thus be useful for both timely eruption warnings and for forecasting the energy of impending explosions.

  4. A Clustered Multiclass Likelihood-Ratio Ensemble Method for Family-Based Association Analysis Accounting for Phenotypic Heterogeneity.

    Science.gov (United States)

    Wen, Yalu; Lu, Qing

    2016-09-01

    Although compelling evidence suggests that the genetic etiology of complex diseases could be heterogeneous in subphenotype groups, little attention has been paid to phenotypic heterogeneity in genetic association analysis of complex diseases. Simply ignoring phenotypic heterogeneity in association analysis could result in attenuated estimates of genetic effects and low power of association tests if subphenotypes with similar clinical manifestations have heterogeneous underlying genetic etiologies. To facilitate the family-based association analysis allowing for phenotypic heterogeneity, we propose a clustered multiclass likelihood-ratio ensemble (CMLRE) method. The proposed method provides an alternative way to model the complex relationship between disease outcomes and genetic variants. It allows for heterogeneous genetic causes of disease subphenotypes and can be applied to various pedigree structures. Through simulations, we found CMLRE outperformed the commonly adopted strategies in a variety of underlying disease scenarios. We further applied CMLRE to a family-based dataset from the International Consortium to Identify Genes and Interactions Controlling Oral Clefts (ICOC) to investigate the genetic variants and interactions predisposing to subphenotypes of oral clefts. The analysis suggested that two subphenotypes, nonsyndromic cleft lip without palate (CL) and cleft lip with palate (CLP), shared similar genetic etiologies, while cleft palate only (CP) had its own genetic mechanism. The analysis further revealed that rs10863790 (IRF6), rs7017252 (8q24), and rs7078160 (VAX1) were jointly associated with CL/CLP, while rs7969932 (TBK1), rs227731 (17q22), and rs2141765 (TBK1) jointly contributed to CP.

  5. Model-Based Iterative Reconstruction for Dual-Energy X-Ray CT Using a Joint Quadratic Likelihood Model.

    Science.gov (United States)

    Zhang, Ruoqiao; Thibault, Jean-Baptiste; Bouman, Charles A; Sauer, Ken D; Hsieh, Jiang

    2014-01-01

    Dual-energy X-ray CT (DECT) has the potential to improve contrast and reduce artifacts as compared to traditional CT. Moreover, by applying model-based iterative reconstruction (MBIR) to dual-energy data, one might also expect to reduce noise and improve resolution. However, the direct implementation of dual-energy MBIR requires the use of a nonlinear forward model, which increases both complexity and computation. Alternatively, simplified forward models have been used which treat the material-decomposed channels separately, but these approaches do not fully account for the statistical dependencies in the channels. In this paper, we present a method for joint dual-energy MBIR (JDE-MBIR), which simplifies the forward model while still accounting for the complete statistical dependency in the material-decomposed sinogram components. The JDE-MBIR approach works by using a quadratic approximation to the polychromatic log-likelihood and a simple but exact nonnegativity constraint in the image domain. We demonstrate that our method is particularly effective when the DECT system uses fast kVp switching, since in this case the model accounts for the inaccuracy of interpolated sinogram entries. Both phantom and clinical results show that the proposed model produces images that compare favorably in quality to previous decomposition-based methods, including FBP and other statistical iterative approaches.

  6. Searching for first-degree familial relationships in California's offender DNA database: validation of a likelihood ratio-based approach.

    Science.gov (United States)

    Myers, Steven P; Timken, Mark D; Piucci, Matthew L; Sims, Gary A; Greenwald, Michael A; Weigand, James J; Konzak, Kenneth C; Buoncristiani, Martin R

    2011-11-01

    A validation study was performed to measure the effectiveness of using a likelihood ratio-based approach to search for possible first-degree familial relationships (full-sibling and parent-child) by comparing an evidence autosomal short tandem repeat (STR) profile to California's ∼1,000,000-profile State DNA Index System (SDIS) database. Test searches used autosomal STR and Y-STR profiles generated for 100 artificial test families. When the test sample and the first-degree relative in the database were characterized at the 15 Identifiler(®) (Applied Biosystems(®), Foster City, CA) STR loci, the search procedure included 96% of the fathers and 72% of the full-siblings. When the relative profile was limited to the 13 Combined DNA Index System (CODIS) core loci, the search procedure included 93% of the fathers and 61% of the full-siblings. These results, combined with those of functional tests using three real families, support the effectiveness of this tool. Based upon these results, the validated approach was implemented as a key, pragmatic and demonstrably practical component of the California Department of Justice's Familial Search Program. An investigative lead created through this process recently led to an arrest in the Los Angeles Grim Sleeper serial murders.

  7. FPGA-Based Implementation of All-Digital QPSK Carrier Recovery Loop Combining Costas Loop and Maximum Likelihood Frequency Estimator

    Directory of Open Access Journals (Sweden)

    Kaiyu Wang

    2014-01-01

    This paper presents an efficient all-digital carrier recovery loop (ADCRL) for quadrature phase shift keying (QPSK). The ADCRL combines a classic closed-loop carrier recovery circuit, the all-digital Costas loop (ADCOL), with a frequency feedforward loop, the maximum likelihood frequency estimator (MLFE), so as to make the best use of the advantages of the two types of carrier recovery loops and obtain a more robust performance in the procedure of carrier recovery. Besides, considering that, for the MLFE, accurate estimation of the frequency offset depends on the linear characteristic of its frequency discriminator (FD), the Coordinate Rotation Digital Computer (CORDIC) algorithm is introduced into the MLFE-based FD to linearly unwrap the phase difference. The frequency offset contained within the unwrapped phase difference is estimated by the MLFE, implemented using only shift and multiply-accumulate units, to assist the ADCOL to lock quickly and precisely. The joint simulation results of ModelSim and MATLAB show that the performance of the proposed ADCRL in lock-in time and range is superior to that of the ADCOL. In addition, a systematic FPGA-based design procedure for the proposed ADCRL is also presented.

  8. Tolerance to missing data using a likelihood ratio based classifier for computer-aided classification of breast cancer

    Energy Technology Data Exchange (ETDEWEB)

    Bilska-Wolak, Anna O [Department of Biomedical Engineering, Duke University, 2623 DUMC, Durham, NC 27708 (United States); Floyd, Carey E Jr [Department of Biomedical Engineering, Duke University, 2623 DUMC, Durham, NC 27708 (United States)

    2004-09-21

    While mammography is a highly sensitive method for detecting breast tumours, its ability to differentiate between malignant and benign lesions is low, which may result in as many as 70% of unnecessary biopsies. The purpose of this study was to develop a highly specific computer-aided diagnosis algorithm to improve classification of mammographic masses. A classifier based on the likelihood ratio was developed to accommodate cases with missing data. Data for development included 671 biopsy cases (245 malignant), with biopsy-proved outcome. Sixteen features based on the BI-RADS™ lexicon and patient history had been recorded for the cases, with 1.3 ± 1.1 missing feature values per case. Classifier evaluation methods included receiver operating characteristic and leave-one-out bootstrap sampling. The classifier achieved 32% specificity at 100% sensitivity on the 671 cases with 16 features that had missing values. Utilizing just the seven features present for all cases resulted in decreased performance at 100% sensitivity with average 19% specificity. No cases and no feature data were omitted during classifier development, showing that it is more beneficial to utilize cases with missing values than to discard incomplete cases that cannot be handled by many algorithms. Classification of mammographic masses was commendable at high sensitivity levels, indicating that benign cases could be potentially spared from biopsy.
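
    A minimal sketch of the core idea — a likelihood-ratio classifier that simply skips missing feature values instead of discarding the case — is given below in Python. The feature names and conditional probability tables are invented for illustration and are not the BI-RADS features or probabilities used in the study.

        import math

        # Hypothetical per-feature conditional probability tables (illustrative only).
        P_MALIGNANT = {"margin": {"spiculated": 0.6, "circumscribed": 0.1},
                       "shape":  {"irregular": 0.7, "oval": 0.2}}
        P_BENIGN    = {"margin": {"spiculated": 0.1, "circumscribed": 0.6},
                       "shape":  {"irregular": 0.2, "oval": 0.7}}

        def likelihood_ratio(case, eps=1e-6):
            """Product of per-feature likelihood ratios; missing features (None) are skipped."""
            log_lr = 0.0
            for feature, value in case.items():
                if value is None:          # missing value contributes no evidence either way
                    continue
                p_m = P_MALIGNANT[feature].get(value, eps)
                p_b = P_BENIGN[feature].get(value, eps)
                log_lr += math.log(p_m) - math.log(p_b)
            return math.exp(log_lr)

        # A case with a missing feature is still scored on whatever features are present.
        print(likelihood_ratio({"margin": "spiculated", "shape": None}))   # ~6.0

    In this toy version, a decision threshold on the ratio plays the role of the operating point chosen to reach 100% sensitivity in the study.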

  9. Experimental demonstration of a digital maximum likelihood based feedforward carrier recovery scheme for phase-modulated radio-over-fibre links

    DEFF Research Database (Denmark)

    Guerrero Gonzalez, Neil; Zibar, Darko; Yu, Xianbin

    2008-01-01

    Maximum likelihood based feedforward RF carrier synchronization scheme is proposed for a coherently detected phase-modulated radio-over-fiber link. Error-free demodulation of 100 Mbit/s QPSK modulated signal is experimentally demonstrated after 25 km of fiber transmission.

  10. Functional brain networks in healthy subjects under acupuncture stimulation: An EEG study based on nonlinear synchronization likelihood analysis

    Science.gov (United States)

    Yu, Haitao; Liu, Jing; Cai, Lihui; Wang, Jiang; Cao, Yibin; Hao, Chongqing

    2017-02-01

    Electroencephalogram (EEG) signal evoked by acupuncture stimulation at "Zusanli" acupoint is analyzed to investigate the modulatory effect of manual acupuncture on the functional brain activity. Power spectral density of EEG signal is first calculated based on the autoregressive Burg method. It is shown that the EEG power is significantly increased during and after acupuncture in delta and theta bands, but decreased in alpha band. Furthermore, synchronization likelihood is used to estimate the nonlinear correlation between each pairwise EEG signals. By applying a threshold to resulting synchronization matrices, functional networks for each band are reconstructed and further quantitatively analyzed to study the impact of acupuncture on network structure. Graph theoretical analysis demonstrates that the functional connectivity of the brain undergoes obvious change under different conditions: pre-acupuncture, acupuncture, and post-acupuncture. The minimum path length is largely decreased and the clustering coefficient keeps increasing during and after acupuncture in delta and theta bands. It is indicated that acupuncture can significantly modulate the functional activity of the brain, and facilitate the information transmission within different brain areas. The obtained results may facilitate our understanding of the long-lasting effect of acupuncture on the brain function.

  11. Mapping grey matter reductions in schizophrenia: an anatomical likelihood estimation analysis of voxel-based morphometry studies.

    Science.gov (United States)

    Fornito, A; Yücel, M; Patti, J; Wood, S J; Pantelis, C

    2009-03-01

    Voxel-based morphometry (VBM) is a popular tool for mapping neuroanatomical changes in schizophrenia patients. Several recent meta-analyses have identified the brain regions in which patients most consistently show grey matter reductions, although they have not examined whether such changes reflect differences in grey matter concentration (GMC) or grey matter volume (GMV). These measures assess different aspects of grey matter integrity, and may therefore reflect different pathological processes. In this study, we used the Anatomical Likelihood Estimation procedure to analyse significant differences reported in 37 VBM studies of schizophrenia patients, incorporating data from 1646 patients and 1690 controls, and compared the findings of studies using either GMC or GMV to index grey matter differences. Analysis of all studies combined indicated that grey matter reductions in a network of frontal, temporal, thalamic and striatal regions are among the most frequently reported in the literature. GMC reductions were generally larger and more consistent than GMV reductions, and were more frequent in the insula, medial prefrontal, medial temporal and striatal regions. GMV reductions were more frequent in dorso-medial frontal cortex, and lateral and orbital frontal areas. These findings support the primacy of frontal, limbic, and subcortical dysfunction in the pathophysiology of schizophrenia, and suggest that the grey matter changes observed with MRI may not necessarily result from a unitary pathological process.

  12. A likelihood-based approach for assessment of extra-pair paternity and conspecific brood parasitism in natural populations

    Science.gov (United States)

    Lemons, Patrick R.; Marshall, T.C.; McCloskey, Sarah E.; Sethi, S.A.; Schmutz, Joel A.; Sedinger, James S.

    2015-01-01

    Genotypes are frequently used to assess alternative reproductive strategies such as extra-pair paternity and conspecific brood parasitism in wild populations. However, such analyses are vulnerable to genotyping error or molecular artifacts that can bias results. For example, when using multilocus microsatellite data, a mismatch at a single locus, suggesting the offspring was not directly related to its putative parents, can occur quite commonly even when the offspring is truly related. Some recent studies have advocated an ad-hoc rule that offspring must differ at more than one locus in order to conclude that they are not directly related. While this reduces the frequency with which true offspring are identified as not directly related young, it also introduces bias in the opposite direction, wherein not directly related young are categorized as true offspring. More importantly, it ignores the additional information on allele frequencies which would reduce overall bias. In this study, we present a novel technique for assessing extra-pair paternity and conspecific brood parasitism using a likelihood-based approach in a new version of program cervus. We test the suitability of the technique by applying it to a simulated data set and then present an example to demonstrate its influence on the estimation of alternative reproductive strategies.

  13. Phylogenetic estimation with partial likelihood tensors

    CERN Document Server

    Sumner, J G

    2008-01-01

    We present an alternative method for calculating likelihoods in molecular phylogenetics. Our method is based on partial likelihood tensors, which are generalizations of partial likelihood vectors, as used in Felsenstein's approach. Exploiting a lexicographic sorting and partial likelihood tensors, it is possible to obtain significant computational savings. We show this on a range of simulated data by enumerating all numerical calculations that are required by our method and the standard approach.

  14. Empirical likelihood based inference for second-order diffusion models

    Institute of Scientific and Technical Information of China (English)

    王允艳; 张立新; 王汉超

    2012-01-01

    In this paper, we develop an empirical likelihood method to construct empirical likelihood estimators for the nonparametric drift and diffusion functions in the second-order diffusion model, and the consistency and asymptotic normality of the empirical likelihood estimators are obtained. Moreover, non-symmetric confidence intervals for the drift and diffusion functions based on the empirical likelihood method are obtained, and the adjusted empirical log-likelihood ratio is proved to be asymptotically standard chi-squared under some mild conditions.

  15. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Background: Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results: The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
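
    The phrase "takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set" can be illustrated with a deliberately simple stand-in model: each genomic region contributes one Poisson-distributed summary count governed by a shared parameter, and the composite estimate maximizes the sum of per-region log-likelihoods. The Poisson marginal and the grid search are illustrative assumptions, not the coalescent likelihood used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        theta_true = 4.2
        region_counts = rng.poisson(theta_true, size=200)    # one summary count per genomic region

        def composite_loglik(theta, counts):
            # Sum of per-region Poisson marginal log-likelihoods (theta-independent terms dropped).
            counts = np.asarray(counts)
            return float(np.sum(counts * np.log(theta) - theta))

        grid = np.linspace(1.0, 10.0, 901)
        theta_hat = grid[np.argmax([composite_loglik(t, region_counts) for t in grid])]
        print(round(float(theta_hat), 2))                    # close to the generating value 4.2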

  16. Performance and Complexity Analysis of Blind FIR Channel Identification Algorithms Based on Deterministic Maximum Likelihood in SIMO Systems

    DEFF Research Database (Denmark)

    De Carvalho, Elisabeth; Omar, Samir; Slock, Dirk

    2013-01-01

    We analyze two algorithms that have been introduced previously for Deterministic Maximum Likelihood (DML) blind estimation of multiple FIR channels. The first one is a modification of the Iterative Quadratic ML (IQML) algorithm. IQML gives biased estimates of the channel and performs poorly at lo...

  17. Rising Above Chaotic Likelihoods

    CERN Document Server

    Du, Hailiang

    2014-01-01

    Berliner (Likelihood and Bayesian prediction for chaotic systems, J. Am. Stat. Assoc. 1991) identified a number of difficulties in using the likelihood function within the Bayesian paradigm for state estimation and parameter estimation of chaotic systems. Even when the equations of the system are given, he demonstrated "chaotic likelihood functions" of initial conditions and parameter values in the 1-D Logistic Map. Chaotic likelihood functions, while ultimately smooth, have such complicated small scale structure as to cast doubt on the possibility of identifying high likelihood estimates in practice. In this paper, the challenge of chaotic likelihoods is overcome by embedding the observations in a higher dimensional sequence-space, which is shown to allow good state estimation with finite computational power. An Importance Sampling approach is introduced, where Pseudo-orbit Data Assimilation is employed in the sequence-space in order first to identify relevant pseudo-orbits and then relevant trajectories. Es...
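
    The "chaotic likelihood function" of the initial condition is easy to reproduce: iterate the logistic map from a candidate initial condition, score the noisy observations under a Gaussian error model, and scan a fine grid. The map parameter, noise level, and series length below are illustrative choices rather than those of the cited study.

        import numpy as np

        rng = np.random.default_rng(1)
        r, sigma, n_steps = 4.0, 0.05, 20

        def logistic_orbit(x0, n):
            xs, x = np.empty(n), x0
            for i in range(n):
                x = r * x * (1.0 - x)
                xs[i] = x
            return xs

        true_x0 = 0.3
        observations = logistic_orbit(true_x0, n_steps) + rng.normal(0.0, sigma, n_steps)

        def log_likelihood(x0):
            # Gaussian observation noise around the deterministic orbit started at x0.
            residuals = observations - logistic_orbit(x0, n_steps)
            return -0.5 * float(np.sum((residuals / sigma) ** 2))

        grid = np.linspace(0.0, 1.0, 20001)
        values = np.array([log_likelihood(x0) for x0 in grid])
        print(grid[np.argmax(values)])   # near 0.3, but the surface is extremely jagged

    Plotting `values` against `grid` shows the complicated small-scale structure that motivates the sequence-space embedding described above.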

  18. Empirical likelihood-based dimension reduction inference for linear error-in-responses models with validation study

    Institute of Scientific and Technical Information of China (English)

    2004-01-01


  19. A composite-conditional-likelihood approach for gene mapping based on linkage disequilibrium in windows of marker loci.

    Science.gov (United States)

    Larribe, Fabrice; Lessard, Sabin

    2008-01-01

    A composite-conditional-likelihood (CCL) approach is proposed to map the position of a trait-influencing mutation (TIM) using the ancestral recombination graph (ARG) and importance sampling to reconstruct the genealogy of DNA sequences with respect to windows of marker loci and predict the linkage disequilibrium pattern observed in a sample of cases and controls. The method is designed to fine-map the location of a disease mutation, not as an association study. The CCL function proposed for the position of the TIM is a weighted product of conditional likelihood functions for windows of a given number of marker loci that encompass the TIM locus, given the sample configuration at the marker loci in those windows. A rare recessive allele is assumed for the TIM and single nucleotide polymorphisms (SNPs) are considered as markers. The method is applied to a range of simulated data sets. Not only do the CCL profiles converge more rapidly with smaller window sizes as the number of simulated histories of the sampled sequences increases, but the maximum-likelihood estimates for the position of the TIM remain as satisfactory, while requiring significantly less computing time. The simulations also suggest that non-random samples, more precisely, a non-proportional number of controls versus the number of cases, has little effect on the estimation procedure as well as sample size and marker density beyond some threshold values. Moreover, when compared with some other recent methods under the same assumptions, the CCL approach proves to be competitive.

  20. Obtaining reliable Likelihood Ratio tests from simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed...

  1. Accurate recapture identification for genetic mark-recapture studies with error-tolerant likelihood-based match calling and sample clustering.

    Science.gov (United States)

    Sethi, Suresh A; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick; Fuller, Angela; Hare, Matthew P

    2016-12-01

    Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark-recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark-recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark-recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark-recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark-recapture studies. Moderately sized SNP (64+) and MSAT (10-15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.
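
    For a single biallelic (SNP) locus, the error-tolerant match-calling idea reduces to a likelihood ratio comparing "same individual" against "two unrelated individuals" while marginalizing over the unknown true genotype. The sketch below assumes Hardy-Weinberg genotype priors and a simple symmetric genotyping-error model; both are illustrative simplifications, not the published model.

        import numpy as np

        def genotype_priors(q):
            """Hardy-Weinberg probabilities of genotypes 0, 1, 2 for alt-allele frequency q."""
            return np.array([(1 - q) ** 2, 2 * q * (1 - q), q ** 2])

        def obs_given_true(error_rate):
            """P(observed genotype | true genotype): correct with prob 1 - e, else either other genotype."""
            m = np.full((3, 3), error_rate / 2.0)
            np.fill_diagonal(m, 1.0 - error_rate)
            return m

        def locus_match_lr(obs1, obs2, q, error_rate=0.02):
            priors = genotype_priors(q)
            emit = obs_given_true(error_rate)                                # emit[true, observed]
            p_same = float(np.sum(priors * emit[:, obs1] * emit[:, obs2]))   # shared latent genotype
            p_diff = float(np.sum(priors * emit[:, obs1])) * float(np.sum(priors * emit[:, obs2]))
            return p_same / p_diff

        # Multiply per-locus ratios across an independent SNP panel to get the overall match score.
        print(locus_match_lr(obs1=2, obs2=2, q=0.1))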

  2. On the likelihood of forests

    Science.gov (United States)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  3. A likelihood-based approach to capture-recapture estimation of demographic parameters under the robust design.

    Science.gov (United States)

    Kendall, W L; Pollock, K H; Brownie, C

    1995-03-01

    The Jolly-Seber method has been the traditional approach to the estimation of demographic parameters in long-term capture-recapture studies of wildlife and fish species. This method involves restrictive assumptions about capture probabilities that can lead to biased estimates, especially of population size and recruitment. Pollock (1982, Journal of Wildlife Management 46, 752-757) proposed a sampling scheme in which a series of closely spaced samples were separated by longer intervals such as a year. For this "robust design," Pollock suggested a flexible ad hoc approach that combines the Jolly-Seber estimators with closed population estimators, to reduce bias caused by unequal catchability, and to provide estimates for parameters that are unidentifiable by the Jolly-Seber method alone. In this paper we provide a formal modelling framework for analysis of data obtained using the robust design. We develop likelihood functions for the complete data structure under a variety of models and examine the relationship among the models. We compute maximum likelihood estimates for the parameters by applying a conditional argument, and compare their performance against those of ad hoc and Jolly-Seber approaches using simulation.

  4. Dwarf spheroidal J-factors without priors: A likelihood-based analysis for indirect dark matter searches

    CERN Document Server

    Chiappo, A; Conrad, J; Strigari, L E; Anderson, B; Sanchez-Conde, M A

    2016-01-01

    Line-of-sight integrals of the squared density, commonly called the J-factor, are essential for inferring dark matter annihilation signals. The J-factors of dark matter-dominated dwarf spheroidal satellite galaxies (dSphs) have typically been derived using Bayesian techniques, which for small data samples implies that a choice of priors constitutes a non-negligible systematic uncertainty. Here we report the development of a new fully frequentist approach to construct the profile likelihood of the J-factor. Using stellar kinematic data from several classical and ultra-faint dSphs, we derive the maximum likelihood value for the J-factor and its confidence intervals. We validate this method, in particular its bias and coverage, using simulated data from the Gaia Challenge. We find that the method possesses good statistical properties. The J-factors and their uncertainties are generally in good agreement with the Bayesian-derived values, with the largest deviations restricted to the systems with the smallest kine...

  5. Dwarf spheroidal J-factors without priors: A likelihood-based analysis for indirect dark matter searches

    Science.gov (United States)

    Chiappo, A.; Cohen-Tanugi, J.; Conrad, J.; Strigari, L. E.; Anderson, B.; Sánchez-Conde, M. A.

    2017-04-01

    Line-of-sight integrals of the squared density, commonly called the J-factor, are essential for inferring dark matter (DM) annihilation signals. The J-factors of DM-dominated dwarf spheroidal satellite galaxies (dSphs) have typically been derived using Bayesian techniques, which for small data samples implies that a choice of priors constitutes a non-negligible systematic uncertainty. Here we report the development of a new fully frequentist approach to construct the profile likelihood of the J-factor. Using stellar kinematic data from several classical and ultra-faint dSphs, we derive the maximum likelihood value for the J-factor and its confidence intervals. We validate this method, in particular its bias and coverage, using simulated data from the Gaia Challenge. We find that the method possesses good statistical properties. The J-factors and their uncertainties are generally in good agreement with the Bayesian-derived values, with the largest deviations restricted to the systems with the smallest kinematic data sets. We discuss improvements, extensions, and future applications of this technique.
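
    The profile-likelihood construction itself is generic and can be shown with a toy Gaussian stand-in: for each candidate value of the parameter of interest, the nuisance parameter is maximized out, and an approximate 95% interval is read off where the profile log-likelihood drops by 1.92 (half the chi-square 95% quantile with one degree of freedom). The Gaussian toy model below is only an illustration, not the dSph kinematic likelihood.

        import numpy as np

        rng = np.random.default_rng(2)
        data = rng.normal(loc=18.5, scale=0.4, size=40)     # stand-in "log J-like" measurements

        def profile_loglik(mu, x):
            # Profile out the nuisance sigma: its conditional MLE is the RMS deviation from mu.
            sigma_hat = np.sqrt(np.mean((x - mu) ** 2))
            return float(np.sum(-np.log(sigma_hat) - 0.5 * ((x - mu) / sigma_hat) ** 2))

        grid = np.linspace(18.0, 19.0, 2001)
        prof = np.array([profile_loglik(m, data) for m in grid])
        mle = grid[np.argmax(prof)]
        inside = grid[prof >= prof.max() - 1.92]            # ~95% profile-likelihood interval
        print(mle, inside.min(), inside.max())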

  6. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    Science.gov (United States)

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  7. Development and Performance of Detectors for the Cryogenic Dark Matter Search Experiment with an Increased Sensitivity Based on a Maximum Likelihood Analysis of Beta Contamination

    Energy Technology Data Exchange (ETDEWEB)

    Driscoll, Donald D.; /Case Western Reserve U.

    2004-01-01

    The Cryogenic Dark Matter Search (CDMS) uses cryogenically-cooled detectors made of germanium and silicon in an attempt to detect dark matter in the form of Weakly-Interacting Massive Particles (WIMPs). The expected interaction rate of these particles is on the order of 1/kg/day, far below the 200/kg/day expected rate of background interactions after passive shielding and an active cosmic ray muon veto. Our detectors are instrumented to make a simultaneous measurement of both the ionization energy and thermal energy deposited by the interaction of a particle with the crystal substrate. A comparison of these two quantities allows for the rejection of a background of electromagnetically-interacting particles at a level of better than 99.9%. The dominant remaining background at a depth of ~11 m below the surface comes from fast neutrons produced by cosmic ray muons interacting in the rock surrounding the experiment. Contamination of our detectors by a beta emitter can add an unknown source of unrejected background. In the energy range of interest for a WIMP study, electrons will have a short penetration depth and preferentially interact near the surface. Some of the ionization signal can be lost to the charge contacts there and a decreased ionization signal relative to the thermal signal will cause a background event which interacts at the surface to be misidentified as a signal event. We can use information about the shape of the thermal signal pulse to discriminate against these surface events. Using a subset of our calibration set which contains a large fraction of electron events, we can characterize the expected behavior of surface events and construct a cut to remove them from our candidate signal events. This thesis describes the development of the 6 detectors (4 x 250 g Ge and 2 x 100 g Si) used in the 2001-2002 CDMS data run at the Stanford Underground Facility with a total of 119 livedays of data. The preliminary results presented are based on the

  8. Development and Performance of Detectors for the Cryogenic Dark Matter Search Experiment with an Increased Sensitivity Based on a Maximum Likelihood Analysis of Beta Contamination

    Energy Technology Data Exchange (ETDEWEB)

    Driscoll, Donald D [Case Western Reserve Univ., Cleveland, OH (United States)

    2004-05-01

    The Cryogenic Dark Matter Search (CDMS) uses cryogenically-cooled detectors made of germanium and silicon in an attempt to detect dark matter in the form of Weakly-Interacting Massive Particles (WIMPs). The expected interaction rate of these particles is on the order of 1/kg/day, far below the 200/kg/day expected rate of background interactions after passive shielding and an active cosmic ray muon veto. Our detectors are instrumented to make a simultaneous measurement of both the ionization energy and thermal energy deposited by the interaction of a particle with the crystal substrate. A comparison of these two quantities allows for the rejection of a background of electromagnetically-interacting particles at a level of better than 99.9%. The dominant remaining background at a depth of ~ 11 m below the surface comes from fast neutrons produced by cosmic ray muons interacting in the rock surrounding the experiment. Contamination of our detectors by a beta emitter can add an unknown source of unrejected background. In the energy range of interest for a WIMP study, electrons will have a short penetration depth and preferentially interact near the surface. Some of the ionization signal can be lost to the charge contacts there and a decreased ionization signal relative to the thermal signal will cause a background event which interacts at the surface to be misidentified as a signal event. We can use information about the shape of the thermal signal pulse to discriminate against these surface events. Using a subset of our calibration set which contains a large fraction of electron events, we can characterize the expected behavior of surface events and construct a cut to remove them from our candidate signal events. This thesis describes the development of the 6 detectors (4 x 250 g Ge and 2 x 100 g Si) used in the 2001-2002 CDMS data run at the Stanford Underground Facility with a total of 119 livedays of data. The preliminary results presented are based on the first use

  9. STELLS2: fast and accurate coalescent-based maximum likelihood inference of species trees from gene tree topologies.

    Science.gov (United States)

    Pei, Jingwen; Wu, Yufeng

    2017-06-15

    It is well known that gene trees and species trees may have different topologies. One explanation is incomplete lineage sorting, which is commonly modeled by the coalescent process. In multispecies coalescent, a gene tree topology is observed with some probability (called the gene tree probability) for a given species tree. Gene tree probability is the main tool for the program STELLS, which finds the maximum likelihood estimate of the species tree from the given gene tree topologies. However, STELLS becomes slow when data size increases. Recently, several fast species tree inference methods have been developed, which can handle large data. However, these methods often do not fully utilize the information in the gene trees. In this paper, we present an algorithm (called STELLS2) for computing the gene tree probability more efficiently than the original STELLS. The key idea of STELLS2 is taking some 'shortcuts' during the computation and computing the gene tree probability approximately. We apply the STELLS2 algorithm in the species tree inference approach in the original STELLS, which leads to a new maximum likelihood species tree inference method (also called STELLS2). Through simulation we demonstrate that the gene tree probabilities computed by STELLS2 and STELLS have strong correlation. We show that STELLS2 is almost as accurate in species tree inference as STELLS. Also STELLS2 is usually more accurate than several existing methods when there is one allele per species, although STELLS2 is slower than these methods. STELLS2 outperforms these methods significantly when there are multiple alleles per species. The program STELLS2 is available for download at: https://github.com/yufengwudcs/STELLS2. yufeng.wu@uconn.edu. Supplementary data are available at Bioinformatics online.

  10. Diagnostic Measures for Nonlinear Regression Models Based on the Empirical Likelihood Method

    Institute of Scientific and Technical Information of China (English)

    丁先文; 徐亮; 林金官

    2012-01-01

    The empirical likelihood method has been extensively applied to linear regression and generalized linear regression models. In this paper, diagnostic measures for nonlinear regression models are studied based on the empirical likelihood method. First, the maximum empirical likelihood estimates of the parameters are obtained. Then, three different measures of influence curvature are studied. Finally, a real data analysis is given to illustrate the validity of the statistical diagnostic measures.

  11. Maximum likelihood model based on minor allele frequencies and weighted Max-SAT formulation for haplotype assembly.

    Science.gov (United States)

    Mousavi, Sayyed R; Khodadadi, Ilnaz; Falsafain, Hossein; Nadimi, Reza; Ghadiri, Nasser

    2014-06-07

    Human haplotypes include essential information about SNPs, which in turn provide valuable information for such studies as finding relationships between some diseases and their potential genetic causes, e.g., for Genome Wide Association Studies. Due to the expense of directly determining haplotypes and recent progress in high-throughput sequencing, there has been increasing motivation for haplotype assembly, which is the problem of finding a pair of haplotypes from a set of aligned fragments. Although the problem has been extensively studied and a number of algorithms have already been proposed for it, more accurate methods are still beneficial because of the high importance of haplotype information. In this paper, we first develop a probabilistic model that incorporates the Minor Allele Frequency (MAF) of SNP sites, which is missing from the existing maximum likelihood models. Then, we show that the probabilistic model reduces to the Minimum Error Correction (MEC) model when the MAF information is omitted and some approximations are made. This result provides novel theoretical support for the MEC, despite some criticisms against it in the recent literature. Next, under the same approximations, we simplify the model to an extension of the MEC in which the MAF information is used. Finally, we extend the haplotype assembly algorithm HapSAT by developing a weighted Max-SAT formulation for the simplified model, which is evaluated empirically with positive results.
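
    The Minimum Error Correction (MEC) objective referred to above is compact enough to state in code: each aligned fragment is charged the smaller of its mismatch counts against the two candidate haplotypes, and the total is the number of base corrections required. The fragment encoding is a toy illustration and does not reproduce the MAF-weighted Max-SAT formulation of the paper.

        def mec_score(fragments, h1, h2):
            """fragments: list of {SNP index: allele 0/1}; h1, h2: candidate haplotypes (lists of 0/1)."""
            total = 0
            for frag in fragments:
                mism1 = sum(allele != h1[pos] for pos, allele in frag.items())
                mism2 = sum(allele != h2[pos] for pos, allele in frag.items())
                total += min(mism1, mism2)    # assign the fragment to the closer haplotype
            return total

        fragments = [{0: 0, 1: 0, 2: 1},      # toy aligned reads covering subsets of four SNP sites
                     {1: 1, 2: 0, 3: 0},
                     {2: 0, 3: 1}]
        h1, h2 = [0, 0, 1, 1], [1, 1, 0, 0]
        print(mec_score(fragments, h1, h2))   # -> 1 correction needed for this phasing

    A maximum-likelihood variant replaces the raw mismatch count with a probability-weighted score, which is where allele-frequency information such as the MAF can enter.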

  12. Likelihood analysis of earthquake focal mechanism distributions

    CERN Document Server

    Kagan, Y Y

    2014-01-01

    In our paper published earlier we discussed forecasts of earthquake focal mechanisms and ways to test forecast efficiency. Several verification methods were proposed, but they were based on ad-hoc, empirical assumptions, so their performance is questionable. In this work we apply a conventional likelihood method to measure the skill of a forecast. The advantage of such an approach is that earthquake rate prediction can in principle be adequately combined with a focal mechanism forecast, if both are based on likelihood scores, resulting in a general forecast optimization. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random. For double-couple source orientation the random probability distribution function is not uniform, which complicates the calculation of the likelihood value. To better understand the resulting complexities we calculate the information (likelihood) score for two rota...

  13. Equalized near maximum likelihood detector

    OpenAIRE

    2012-01-01

    This paper presents a new detector that is used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.

  14. [Effects of attitude formation, persuasive message, and source expertise on attitude change: an examination based on the Elaboration Likelihood Model and the Attitude Formation Theory].

    Science.gov (United States)

    Nakamura, M; Saito, K; Wakabayashi, M

    1990-04-01

    The purpose of this study was to investigate how attitude change is generated by the recipient's degree of attitude formation, evaluative-emotional elements contained in the persuasive messages, and source expertise as a peripheral cue in the persuasion context. Hypotheses based on the Attitude Formation Theory of Mizuhara (1982) and the Elaboration Likelihood Model of Petty and Cacioppo (1981, 1986) were examined. Eighty undergraduate students served as subjects in the experiment, the first stage of which involved manipulating the degree of attitude formation with respect to nuclear power development. Then, the experimenter presented persuasive messages with varying combinations of evaluative-emotional elements from a source with either high or low expertise on the subject. Results revealed a significant interaction effect on attitude change among attitude formation, persuasive message, and the expertise of the message source. That is, high attitude formation subjects resisted evaluative-emotional persuasion from the high expertise source, while low attitude formation subjects changed their attitude when exposed to the same persuasive message from a low expertise source. Results exceeded initial predictions based on the Attitude Formation Theory and the Elaboration Likelihood Model.

  15. Gender variation in self-reported likelihood of HIV infection in comparison with HIV test results in rural and urban Nigeria

    Directory of Open Access Journals (Sweden)

    Fagbamigbe Adeniyi F

    2011-12-01

    Background: Behaviour change, which is highly influenced by risk perception, is a major challenge that HIV prevention efforts need to confront. In this study, we examined the validity of self-reported likelihood of HIV infection among rural and urban Nigerians of reproductive age. Methods: This is a cross-sectional study of a nationally representative sample of Nigerians. We investigated the concordance between self-reported likelihood of HIV infection and actual HIV test results. Multivariate logistic regression analysis was used to assess whether selected respondents' characteristics affect the validity of self-reports. Results: The HIV prevalence was 3.8% in the urban population (3.1% among males and 4.6% among females) and 3.5% in rural areas (3.4% among males and 3.7% among females). Almost all the respondents who claimed they had high chances of being infected with HIV actually tested negative (91.6% in urban and 97.9% in rural areas). In contrast, only 8.5% in urban areas and 2.1% in rural areas of those who claimed high chances of being HIV infected were actually HIV positive. About 2.9% and 4.3% from urban and rural areas, respectively, tested positive although they claimed very low chances of HIV infection. Age, gender, education, and residence are factors associated with the validity of respondents' self-perceived risk of HIV infection. Conclusion: Self-perceived HIV risk is poorly sensitive and moderately specific in the prediction of HIV status. There are differences in the validity of self-perceived risk of HIV across rural and urban populations.
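
    The validity measures quoted in the conclusion are ordinary sensitivity and specificity of the binary self-report against the HIV test result, which can be computed from a 2x2 table as sketched below; the cell counts are hypothetical and are not taken from the survey.

        def sensitivity_specificity(tp, fn, tn, fp):
            """Treat the HIV test as ground truth and the self-reported high likelihood as the 'test'."""
            sensitivity = tp / (tp + fn)    # HIV-positive respondents who reported high likelihood
            specificity = tn / (tn + fp)    # HIV-negative respondents who reported low likelihood
            return sensitivity, specificity

        # Hypothetical counts for one stratum (not the survey's actual cell counts).
        print(sensitivity_specificity(tp=12, fn=130, tn=3100, fp=320))   # low sensitivity, moderate specificity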

  16. Weighted Centroid Localization Algorithm Based on Maximum Likelihood Estimation

    Institute of Scientific and Technical Information of China (English)

    卢先领; 夏文瑞

    2016-01-01

    In solving the problem of localizing nodes in a wireless sensor network, we propose a weighted centroid localization algorithm based on maximum likelihood estimation, with the specific goal of addressing the large received signal strength indication (RSSI) ranging error and the low accuracy of the centroid localization algorithm. Firstly, the maximum likelihood estimate between the estimated distance and the actual distance is calculated and used as a weight. Then, a parameter k is introduced into the weight model to optimize the weights between the anchor nodes and the unknown nodes. Finally, the locations of the unknown nodes are calculated and corrected by using the proposed algorithm. The simulation results show that the weighted centroid algorithm based on maximum likelihood estimation has high localization accuracy and low cost, and performs better than the inverse-distance-based and inverse-RSSI-based weighted centroid algorithms. Hence, the proposed algorithm is more suitable for indoor localization over large areas.
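
    A generic weighted-centroid step looks like the sketch below, in which each anchor's weight decreases with its RSSI-estimated distance. The inverse-distance weight is only a stand-in: the maximum-likelihood weighting and the tuning parameter k described above are specific to the proposed algorithm and are not reproduced here.

        import numpy as np

        def weighted_centroid(anchor_positions, estimated_distances, eps=1e-9):
            """Estimate an unknown node position as a weighted mean of anchor positions."""
            anchors = np.asarray(anchor_positions, dtype=float)
            d = np.asarray(estimated_distances, dtype=float)
            weights = 1.0 / (d + eps)                  # closer anchors get larger weights
            weights /= weights.sum()
            return weights @ anchors

        anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
        distances = [3.0, 8.0, 8.0, 11.0]              # RSSI-derived distance estimates
        print(weighted_centroid(anchors, distances))   # pulled toward the nearest anchor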

  17. Parallel Likelihood Function Evaluation on Heterogeneous Many-core Systems

    CERN Document Server

    Jarp, Sverre; Leduc, Julien; Nowak, Andrzej; Sneen Lindal, Yngve

    2011-01-01

    This paper describes a parallel implementation that allows the evaluation of the likelihood function used in data analysis methods to run cooperatively on heterogeneous computational devices (i.e. CPU and GPU) belonging to a single computational node. The implementation is able to split and balance the workload needed for the evaluation of the likelihood function into corresponding sub-workloads to be executed in parallel on each computational device. The CPU parallelization is implemented using OpenMP, while the GPU implementation is based on OpenCL. Comparisons of the performance of these implementations for different configurations and different hardware systems are reported. Tests are based on a real data analysis carried out in the high energy physics community.
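
    The workload-splitting idea, evaluating partial sums of the log-likelihood on separate processing units and combining them, can be mimicked on CPU cores alone with Python's multiprocessing, as in the sketch below. The Gaussian likelihood and the equal-size chunking are illustrative; the OpenMP/OpenCL implementation and the CPU/GPU load balancing of the paper are not reproduced.

        import numpy as np
        from multiprocessing import Pool

        def partial_loglik(chunk, mu=0.0, sigma=1.0):
            """Log-likelihood contribution of one data chunk under a Gaussian model."""
            chunk = np.asarray(chunk)
            return float(np.sum(-0.5 * ((chunk - mu) / sigma) ** 2
                                - np.log(sigma) - 0.5 * np.log(2.0 * np.pi)))

        if __name__ == "__main__":
            data = np.random.default_rng(3).normal(size=1_000_000)
            chunks = np.array_split(data, 8)           # one sub-workload per worker
            with Pool(processes=8) as pool:
                total = sum(pool.map(partial_loglik, chunks))
            print(total)                               # matches the single-process sum up to rounding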

  18. Maximum Likelihood Associative Memories

    OpenAIRE

    Gripon, Vincent; Rabbat, Michael

    2013-01-01

    Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...

  19. Grip-Pattern Recognition in Smart Gun Based on Likelihood-Ratio Classifier and Support Vector Machine

    NARCIS (Netherlands)

    Shang, Xiaoxin; Veldhuis, Raymond N.J.

    2008-01-01

    In the biometric verification system of a smart gun, the rightful user of a gun is recognized based on grip-pattern recognition. It was found that the verification performance of this system degrades strongly when the data for training and testing have been recorded in different sessions with a time

  20. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, K.

    2010-01-01

    This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point ... with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analysing Peter Diggle's heather data set, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  1. A user-operated audiometry method based on the maximum likelihood principle and the two-alternative forced-choice paradigm

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Brandt, Christian; Pedersen, Ellen Raben

    2014-01-01

    Objective: To create a user-operated pure-tone audiometry method based on the method of maximum likelihood (MML) and the two-alternative forced-choice (2AFC) paradigm with high test-retest reliability, without the need of an external operator and with minimal influence of subjects' fluctuating response criteria. User-operated audiometry was developed as an alternative to traditional audiometry for research purposes among musicians. Design: Test-retest reliability of the user-operated audiometry system was evaluated and the user-operated audiometry system was compared with traditional audiometry. Study sample: Test-retest reliability of user-operated 2AFC audiometry was tested with 38 naïve listeners. User-operated 2AFC audiometry was compared to traditional audiometry in 41 subjects. Results: The repeatability of user-operated 2AFC audiometry was comparable to traditional audiometry
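
    A stripped-down version of the MML/2AFC procedure can be simulated in a few lines: maintain a log-likelihood over candidate hearing thresholds, score each correct/incorrect 2AFC response with a psychometric function that has a 50% guessing floor, and present the next tone at the current maximum-likelihood threshold. The logistic psychometric shape, the fixed slope, and the placement rule are simplifying assumptions, not the exact procedure of the study.

        import numpy as np

        rng = np.random.default_rng(4)
        slope = 2.0                                    # assumed psychometric slope (dB)
        candidates = np.arange(-10.0, 80.5, 0.5)       # candidate hearing thresholds (dB HL)
        log_lik = np.zeros_like(candidates)
        true_threshold = 37.0                          # simulated listener

        def p_correct(level, threshold):
            """2AFC psychometric function: 50% guessing floor rising towards 100%."""
            return 0.5 + 0.5 / (1.0 + np.exp(-(level - threshold) / slope))

        level = 40.0                                   # arbitrary starting presentation level
        for trial in range(30):
            correct = rng.random() < float(p_correct(level, true_threshold))
            p = np.clip(p_correct(level, candidates), 1e-9, 1.0 - 1e-9)
            log_lik += np.log(p) if correct else np.log(1.0 - p)
            level = float(candidates[np.argmax(log_lik)])   # next tone at the current ML threshold
        print(level)                                   # approaches the simulated 37 dB HL

    A real implementation would add a stopping rule and limits on early extreme presentations, but the likelihood bookkeeping is the same.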

  2. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    Science.gov (United States)

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhancing system reliability and availability. Moreover, knowledge about fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, obtained from a coupled electromagnetic circuits approach in which air-gap eccentricity emulates bearing faults. Then, experimental data are used for validation purposes.

  3. Threshold Setting for Likelihood Function for Elasticity-Based Tissue Classification of Arterial Walls by Evaluating Variance in Measurement of Radial Strain

    Science.gov (United States)

    Tsuzuki, Kentaro; Hasegawa, Hideyuki; Kanai, Hiroshi; Ichiki, Masataka; Tezuka, Fumiaki

    2008-05-01

    Pathologic changes in arterial walls significantly influence their mechanical properties. We have developed a correlation-based method, the phased tracking method [H. Kanai et al.: IEEE Trans. Ultrason. Ferroelectr. Freq. Control 43 (1996) 791], for measurement of the regional elasticity of the arterial wall. Using this method, elasticity distributions of lipids, blood clots, fibrous tissue, and calcified tissue were measured in vitro by experiments on excised arteries (mean±SD: lipids 89±47 kPa, blood clots 131±56 kPa, fibrous tissue 1022±1040 kPa, calcified tissue 2267±1228 kPa) [H. Kanai et al.: Circulation 107 (2003) 3018; J. Inagaki et al.: Jpn. J. Appl. Phys. 44 (2005) 4593]. It was found that arterial tissues can be classified into soft tissues (lipids and blood clots) and hard tissues (fibrous tissue and calcified tissue) on the basis of their elasticity. However, there are large overlaps between elasticity distributions of lipids and blood clots and those of fibrous tissue and calcified tissue. Thus, it was difficult to differentiate lipids from blood clots and fibrous tissue from calcified tissue by simply thresholding the elasticity value. Therefore, we previously proposed a method of classifying the elasticity distribution in each region of interest (ROI) (not a single pixel) in an elasticity image into lipids, blood clots, fibrous tissue, or calcified tissue based on a likelihood function for each tissue [J. Inagaki et al.: Jpn. J. Appl. Phys. 44 (2006) 4732]. In our previous study, the optimum size of an ROI was determined to be 1,500 µm in the arterial radial direction and 1,500 µm in the arterial longitudinal direction [K. Tsuzuki et al.: Ultrasound Med. Biol. 34 (2008) 573]. In this study, the threshold for the likelihood function used in the tissue classification was set by evaluating the variance in the ultrasonic measurement of radial strain. The recognition rate was improved from 50 to 54% by the proposed thresholding.
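
    Using the in vitro elasticity statistics quoted above as stand-in class models, a likelihood-based ROI classification with a rejection threshold can be sketched as follows. The Gaussian class-conditional densities and the threshold value are simplifications for illustration; the study uses empirically measured likelihood functions and a threshold derived from the strain-measurement variance.

        import numpy as np

        # Illustrative class models built from the quoted in vitro means and SDs (kPa).
        TISSUES = {"lipid": (89.0, 47.0), "blood clot": (131.0, 56.0),
                   "fibrous": (1022.0, 1040.0), "calcified": (2267.0, 1228.0)}

        def classify_roi(elasticities_kpa, reject_below=-10.0):
            """Classify an ROI by the mean Gaussian log-likelihood of its elasticity values."""
            x = np.asarray(elasticities_kpa, dtype=float)
            scores = {tissue: float(np.mean(-0.5 * ((x - mu) / sd) ** 2 - np.log(sd)))
                      for tissue, (mu, sd) in TISSUES.items()}
            best = max(scores, key=scores.get)
            label = best if scores[best] > reject_below else "unclassified"
            return label, scores

        label, scores = classify_roi([70.0, 95.0, 120.0, 88.0])
        print(label)    # "lipid" for this soft, low-elasticity ROI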

  4. Population stratification in Argentina strongly influences likelihood ratio estimates in paternity testing as revealed by a simulation-based approach.

    Science.gov (United States)

    Toscanini, Ulises; Salas, Antonio; García-Magariños, Manuel; Gusmão, Leonor; Raimondi, Eduardo

    2010-01-01

    A simulation-based analysis was carried out to investigate the potential effects of population substructure in paternity testing in Argentina. The study was performed by evaluating paternity indexes (PI) calculated from different simulated pedigree scenarios and using 15 autosomal short tandem repeats (STRs) from eight Argentinean databases. The results show important statistically significant differences between PI values depending on the dataset employed. These differences are more dramatic when considering Native American versus urban populations. This study also indicates that the use of Fst to correct for the effect of population stratification on PI might be inappropriate because it cannot account for the particularities of single paternity cases.

  5. Statistical Inference for Autoregressive Conditional Duration Models Based on Empirical Likelihood

    Institute of Scientific and Technical Information of China (English)

    韩玉; 金应华; 吴武清

    2013-01-01

    This paper addresses statistical testing for the parameters of autoregressive conditional duration (ACD) models using an empirical likelihood method. The log empirical likelihood ratio statistic for the parameters of the ACD model is constructed, and it is shown that the proposed statistic asymptotically follows a χ2-distribution. Numerical simulations demonstrate that the empirical likelihood method performs better than the quasi-likelihood method.

  6. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    Science.gov (United States)

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. RAPID COMMUNICATION: Wavelet transform-based prediction of the likelihood of successful defibrillation for patients exhibiting ventricular fibrillation

    Science.gov (United States)

    Watson, J. N.; Addison, P. S.; Clegg, G. R.; Steen, P. A.; Robertson, C. E.

    2005-10-01

    We report on an improved method for the prediction of the outcome from electric shock therapy for patients in ventricular fibrillation: the primary arrhythmia associated with sudden cardiac death. Our wavelet transform-based marker, COP (cardioversion outcome prediction), is compared to three other well-documented shock outcome predictors: median frequency (MF) of fibrillation, spectral energy (SE) and AMSA (amplitude spectrum analysis). Optimum specificities for sensitivities around 95% for the four reported methods are 63 ± 4% at 97 ± 2% (COP), 42 ± 15% at 90 ± 7% (MF), 12 ± 3% at 94 ± 5% (SE) and 56 ± 5% at 94 ± 5% (AMSA), with successful defibrillation defined as rapid (within 30 s) return of spontaneous circulation. This marked increase in performance by COP at specificity values around 95%, required for implementation of the technique in practice, is achieved by its enhanced ability to partition pertinent information in the time-frequency plane. COP therefore provides an optimal index for the identification of patients for whom shocking would be futile and for whom an alternative therapy should be considered.

  8. Orders of magnitude extension of the effective dynamic range of TDC-based TOFMS data through maximum likelihood estimation.

    Science.gov (United States)

    Ipsen, Andreas; Ebbels, Timothy M D

    2014-10-01

    In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time, tasks that may be best addressed through engineering efforts.

  9. Neural signatures of social conformity: A coordinate-based activation likelihood estimation meta-analysis of functional brain imaging studies.

    Science.gov (United States)

    Wu, Haiyan; Luo, Yi; Feng, Chunliang

    2016-12-01

    People often align their behaviors with group opinions, a phenomenon known as social conformity. Many neuroscience studies have explored the neuropsychological mechanisms underlying social conformity. Here we employed a coordinate-based meta-analysis of neuroimaging studies of social conformity with the purpose of revealing the convergence of the underlying neural architecture. We identified a convergence of reported activation foci in regions associated with normative decision-making, including the ventral striatum (VS), dorsal posterior medial frontal cortex (dorsal pMFC), and anterior insula (AI). Specifically, consistent deactivation of VS and activation of dorsal pMFC and AI are identified when people's responses deviate from group opinions. In addition, the deviation-related responses in dorsal pMFC predict people's conforming behavioral adjustments. These findings are consistent with current models proposing that disagreement with others might evoke "error" signals, cognitive imbalance, and/or aversive feelings, which are plausibly detected in these brain regions as control signals to facilitate subsequent conforming behaviors. Finally, group opinions result in altered neural correlates of valuation, manifested as stronger VS responses to stimuli endorsed rather than disliked by others.

  10. Orders of Magnitude Extension of the Effective Dynamic Range of TDC-Based TOFMS Data Through Maximum Likelihood Estimation

    Science.gov (United States)

    Ipsen, Andreas; Ebbels, Timothy M. D.

    2014-10-01

    In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time—tasks that may be best addressed through engineering efforts.

  11. Likelihood particle filter based on support vector machines resampling

    Institute of Scientific and Technical Information of China (English)

    蒋蔚; 伊国兴; 曾庆双

    2011-01-01

    To cope with state estimation problems in nonlinear/non-Gaussian dynamic systems with weak measurement noise, an improved likelihood particle filter (LPF) algorithm based on support vector machine (SVM) resampling is proposed. First, the algorithm employs the likelihood function as the proposal distribution, which incorporates the most recent observation and is therefore closer to the true posterior than the transition prior commonly used as a proposal. Then, the posterior probability density model of the states is estimated by an SVM from the current particles and their importance weights at each iteration. Finally, new particles are resampled from this density model; the resulting particle diversity effectively mitigates the degeneracy problem and improves state estimation accuracy. Simulation results show the feasibility and effectiveness of the algorithm.

  12. The Failure Distribution Model of CNC Systems Based on Maximum Likelihood Estimation

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    The Weibull distribution is widely used in reliability engineering and lifetime data analysis. For the two-parameter Weibull distribution, a parameter estimation model based on maximum likelihood estimation is established, and the second-order convergent Newton-Raphson iteration method is used to solve for the scale and shape parameters. In the iteration process, the region of the likelihood-function curve near its zero point, plotted with Matlab, is preliminarily selected as the range of the initial value, and the sufficient conditions for convergence of the Newton-Raphson method are used to further narrow the range of the iterative initial value. A three-dimensional plot of the iteration trend, drawn with Matlab, is shown to be consistent with the results of the iterative calculation. Finally, by comparison, the proposed parameter estimation model and the Newton-Raphson iterative solution are shown to be more accurate and efficient.
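
    As background for the abstract above, the shape-parameter likelihood equation of the two-parameter Weibull distribution can be solved by Newton-Raphson iteration on the profile likelihood. The sketch below is an illustrative implementation of that standard formulation, not the authors' Matlab code; the starting value and tolerances are assumptions.

      import numpy as np

      def weibull_mle(x, k0=1.0, tol=1e-8, max_iter=100):
          # Two-parameter Weibull MLE (shape k, scale lam) via Newton-Raphson
          # applied to the profile likelihood equation for the shape parameter.
          x = np.asarray(x, dtype=float)
          logx = np.log(x)
          k = k0
          for _ in range(max_iter):
              xk = x ** k
              A, B = np.sum(xk * logx), np.sum(xk)
              g = A / B - 1.0 / k - logx.mean()          # score equation g(k) = 0
              dg = (np.sum(xk * logx ** 2) * B - A ** 2) / B ** 2 + 1.0 / k ** 2
              step = g / dg
              k -= step
              if abs(step) < tol:
                  break
          lam = np.mean(x ** k) ** (1.0 / k)             # scale from the profile relation
          return k, lam

      # Example: recover parameters from simulated lifetimes (shape 1.8, scale 2.5)
      rng = np.random.default_rng(0)
      data = 2.5 * rng.weibull(1.8, size=500)
      print(weibull_mle(data))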

  13. Inference in HIV dynamics models via hierarchical likelihood

    OpenAIRE

    2010-01-01

    HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelih...

  14. Product of Likelihood Ratio Scores Fusion of Dynamic Face, Text Independent Speech and On-line Signature Based Biometrics Verification Application Systems

    Directory of Open Access Journals (Sweden)

    Mohamed SOLTANE

    2015-09-01

    Full Text Available In this paper, the use of a finite Gaussian mixture model (GMM) tuned using Expectation-Maximization (EM) estimation algorithms for score-level data fusion is proposed. Automated biometric systems for human identification measure a "signature" of the human body, compare the resulting characteristic to a database, and render an application-dependent decision. These biometric systems for personal authentication and identification are based upon physiological or behavioral features which are typically distinctive. Multi-biometric systems, which consolidate information from multiple biometric sources, are gaining popularity because they are able to overcome limitations such as non-universality, noisy sensor data, large intra-user variations and susceptibility to spoof attacks that are commonly encountered in mono-modal biometric systems. Simulation results show that the finite Gaussian mixture model is quite effective in modelling the genuine and impostor score densities, and fusion based on the product of likelihood ratios achieves a significant performance on the eNTERFACE 2005 multi-biometric database based on dynamic face, on-line signature and text-independent speech modalities.
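
    A minimal sketch of the kind of GMM-based, product-of-likelihood-ratios score fusion described above, using scikit-learn's EM-fitted GaussianMixture. The three-channel scores are synthetic stand-ins, not the eNTERFACE 2005 data, and the mixture sizes and decision threshold are assumptions.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Hypothetical matching scores: columns = (face, speech, signature)
      rng = np.random.default_rng(1)
      genuine = rng.normal([0.7, 0.6, 0.8], 0.1, size=(300, 3))
      impostor = rng.normal([0.3, 0.4, 0.2], 0.1, size=(300, 3))

      # One GMM per modality and per class models the score density (fitted by EM)
      def fit(scores):
          return [GaussianMixture(n_components=2, random_state=0).fit(scores[:, [j]])
                  for j in range(scores.shape[1])]

      gen_models, imp_models = fit(genuine), fit(impostor)

      def fused_log_lr(s):
          # Sum of per-modality log likelihood ratios, i.e. the log of the
          # product-of-likelihood-ratios fusion rule; accept if above a threshold.
          total = 0.0
          for j, (g, i) in enumerate(zip(gen_models, imp_models)):
              x_j = np.array([[s[j]]])
              total += g.score_samples(x_j)[0] - i.score_samples(x_j)[0]
          return total

      print(fused_log_lr([0.65, 0.55, 0.75]))  # large positive -> accept as genuine
      print(fused_log_lr([0.30, 0.35, 0.25]))  # negative -> reject as impostor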

  15. Using Data to Tune Nearshore Dynamics Models: A Bayesian Approach with Parametric Likelihood

    CERN Document Server

    Balci, Nusret; Venkataramani, Shankar C

    2013-01-01

    We propose a modification of a maximum likelihood procedure for tuning parameter values in models, based upon the comparison of their output to field data. Our methodology, which uses polynomial approximations of the sample space to increase the computational efficiency, differs from similar Bayesian estimation frameworks in its use of an alternative likelihood distribution and is shown to better address problems in which covariance information is lacking than its more conventional counterpart. Lack of covariance information is a frequent challenge in large-scale geophysical estimation, and this is the case in the geophysical problem considered here. We use a nearshore model for longshore currents and observational data of the same to show the contrast between both maximum likelihood methodologies. Beyond a methodological comparison, this study gives estimates of parameter values for the bottom drag and surface forcing that make the particular model most consistent with data; furthermore, we also derive sensitivit...

  16. Augmented Likelihood Image Reconstruction.

    Science.gov (United States)

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The afore-mentioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporally appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.

  17. Use of Drop-In Clinic Versus Appointment-Based Care for LGBT Youth: Influences on the Likelihood to Access Different Health-Care Structures.

    Science.gov (United States)

    Newman, Bernie S; Passidomo, Kim; Gormley, Kate; Manley, Alecia

    2014-06-01

    The structure of health-care service delivery can address barriers that make it difficult for lesbian, gay, bisexual, and transgender (LGBT) adolescents to use health services. This study explores the differences among youth who access care in one of two service delivery structures in an LGBT health-care center: the drop-in clinic or the traditional appointment-based model. Analysis of 578 records of LGBT and straight youth (aged 14-24) who accessed health care either through a drop-in clinic or appointment-based care within the first year of offering the drop-in clinic reveals patterns of use when both models are available. We studied demographic variables previously shown to be associated with general health-care access to determine how each correlated with a tendency to use the drop-in structure versus routine appointments. Once the covariates were identified, we conducted a logistic regression analysis to identify its association with likelihood of using the drop-in clinic. Insurance status, housing stability, education, race, and gender identity were most strongly associated with the type of clinic used. Youth who relied on Medicaid, those in unstable housing, and African Americans were most likely to use the drop-in clinic. Transgender youth and those with higher education were more likely to use the appointment-based clinic. Although sexual orientation and HIV status were not related to type of clinic used, youth who were HIV positive used the appointment-based clinic more frequently. Both routes to health care served distinct populations who often experience barriers to accessible, affordable, and knowledgeable care. Further study of the factors related to accessing health care may clarify the extent to which drop-in hours in a youth-friendly context may increase the use of health care by the most socially marginalized youth.

  18. Phylogeny of the bee genus Halictus (Hymenoptera: halictidae) based on parsimony and likelihood analyses of nuclear EF-1alpha sequence data.

    Science.gov (United States)

    Danforth, B N; Sauquet, H; Packer, L

    1999-12-01

    We investigated higher-level phylogenetic relationships within the genus Halictus based on parsimony and maximum likelihood (ML) analysis of elongation factor-1alpha DNA sequence data. Our data set includes 41 OTUs representing 35 species of halictine bees from a diverse sample of outgroup genera and from the three widely recognized subgenera of Halictus (Halictus s.s., Seladonia, and Vestitohalictus). We analyzed 1513 total aligned nucleotide sites spanning three exons and two introns. Equal-weights parsimony analysis of the overall data set yielded 144 equally parsimonious trees. Major conclusions supported in this analysis (and in all subsequent analyses) included the following: (1) Thrincohalictus is the sister group to Halictus s.l., (2) Halictus s.l. is monophyletic, (3) Vestitohalictus renders Seladonia paraphyletic but together Seladonia + Vestitohalictus is monophyletic, (4) Michener's Groups 1 and 3 are monophyletic, and (5) Michener's Group 1 renders Group 2 paraphyletic. In order to resolve basal relationships within Halictus we applied various weighting schemes under parsimony (successive approximations character weighting and implied weights) and employed ML under 17 models of sequence evolution. Weighted parsimony yielded conflicting results but, in general, supported the hypothesis that Seladonia + Vestitohalictus is sister to Michener's Group 3 and renders Halictus s.s. paraphyletic. ML analyses using the GTR model with site-specific rates supported an alternative hypothesis: Seladonia + Vestitohalictus is sister to Halictus s.s. We mapped social behavior onto trees obtained under ML and parsimony in order to reconstruct the likely historical pattern of social evolution. Our results are unambiguous: the ancestral state for the genus Halictus is eusociality. Reversal to solitary behavior has occurred at least four times among the species included in our analysis. Copyright 1999 Academic Press.

  19. Likelihood approaches for proportional likelihood ratio model with right-censored data.

    Science.gov (United States)

    Zhu, Hong

    2014-06-30

    Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumption or lack of ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models such as generalized linear model and density ratio model and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, and the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients of acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks.

  20. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisová, Katarina

    To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled...; the model is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  1. A quantum framework for likelihood ratios

    CERN Document Server

    Bond, Rachael L; Ormerod, Thomas C

    2015-01-01

    The ability to calculate precise likelihood ratios is fundamental to many STEM areas, such as decision-making theory, biomedical science, and engineering. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes' theorem either defaults to the marginal probability driven "naive Bayes' classifier", or requires the use of compensatory expectation-maximization techniques. Equally, the use of alternative statistical approaches, such as multivariate logistic regression, may be confounded by other axiomatic conditions, e.g., low levels of co-linearity. This article takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement. In doing so, it is argued that this quantum approach demonstrates: that the likelihood ratio is a real quality of statistical systems; that the naive Bayes' classifier is a spec...

  2. Maximum Likelihood DOA Estimator based on Grid Hill Climbing Method

    Institute of Scientific and Technical Information of China (English)

    艾名舜; 马红光

    2011-01-01

    The maximum likelihood estimator for direction of arrival (DOA) has optimum theoretical performance but high computational complexity. Treating the estimation as the optimization of a high-dimensional nonlinear function, a novel algorithm is proposed to reduce the computational load. First, the beamforming method is adopted to obtain a rough estimate of the spatial spectrum, and a group of initial solutions following a "pre-estimated distribution" is generated from this spectral information; with high probability these initial solutions lie within the basin of attraction of the global optimum. The solution with the highest fitness is then selected as the starting point of a local search. The grid hill-climbing method (GHCM) is a local search method that takes a grid as the search unit; it is an improved version of the traditional hill-climbing method and is more efficient and stable, so it is adopted to obtain the global optimum solution. The proposed algorithm achieves accurate DOA estimation at a lower computational cost, and simulations show that it is more efficient than the maximum likelihood DOA estimator based on PSO.

  3. Empirical Likelihood-Based Inference with Missing and Censored Data

    Institute of Scientific and Technical Information of China (English)

    郑明; 杜玮

    2008-01-01

    In this paper, we investigate how to apply the empirical likelihood method to inference on the mean in the presence of censoring and missing data. By defining an adjusted empirical likelihood ratio, we show that the resulting statistic follows a chi-square distribution. Simulation studies are presented to compare the empirical likelihood method with the normal approximation method; the results indicate that the empirical likelihood method performs better than, or comparably to, the normal method in many settings.
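
    For reference, the chi-square calibration mentioned above can be seen in the plain (complete-data) empirical likelihood ratio for a mean; the adjusted statistic for censored and missing data from the paper is not reproduced here. A minimal sketch with an illustrative exponential sample:

      import numpy as np
      from scipy.optimize import brentq
      from scipy.stats import chi2

      def el_ratio_stat(x, mu):
          # -2 log empirical likelihood ratio for the mean at hypothesized value mu;
          # asymptotically chi-square with 1 degree of freedom.
          d = np.asarray(x, dtype=float) - mu
          if d.min() >= 0 or d.max() <= 0:
              return np.inf                       # mu outside the convex hull of the data
          lo = -(1.0 - 1e-9) / d.max()            # keep 1 + lam*d strictly positive
          hi = -(1.0 - 1e-9) / d.min()
          g = lambda lam: np.sum(d / (1.0 + lam * d))
          lam = brentq(g, lo, hi)                 # Lagrange multiplier
          return 2.0 * np.sum(np.log1p(lam * d))

      rng = np.random.default_rng(0)
      x = rng.exponential(2.0, size=80)           # skewed data with true mean 2.0
      stat = el_ratio_stat(x, mu=2.0)
      print(stat, stat < chi2.ppf(0.95, df=1))    # true mean is usually not rejected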

  4. Likelihood Analysis of Seasonal Cointegration

    DEFF Research Database (Denmark)

    Johansen, Søren; Schaumburg, Ernst

    1999-01-01

    The error correction model for seasonal cointegration is analyzed. Conditions are found under which the process is integrated of order 1 and cointegrated at seasonal frequency, and a representation theorem is given. The likelihood function is analyzed and the numerical calculation of the maximum likelihood estimators is discussed. The asymptotic distribution of the likelihood ratio test for cointegrating rank is given. It is shown that the estimated cointegrating vectors are asymptotically mixed Gaussian. The results resemble the results for cointegration at zero frequency when expressed in terms...

  5. Full likelihood analysis of genetic risk with variable age at onset disease--combining population-based registry data and demographic information.

    Directory of Open Access Journals (Sweden)

    Janne Pitkäniemi

    Full Text Available BACKGROUND: In genetic studies of rare complex diseases it is common to ascertain familial data from population based registries through all incident cases diagnosed during a pre-defined enrollment period. Such an ascertainment procedure is typically taken into account in the statistical analysis of the familial data by constructing either a retrospective or prospective likelihood expression, which conditions on the ascertainment event. Both of these approaches lead to a substantial loss of valuable data. METHODOLOGY AND FINDINGS: Here we consider instead the possibilities provided by a Bayesian approach to risk analysis, which also incorporates the ascertainment procedure and reference information concerning the genetic composition of the target population into the considered statistical model. Furthermore, the proposed Bayesian hierarchical survival model does not require the considered genotype or haplotype effects to be expressed as functions of corresponding allelic effects. Our modeling strategy is illustrated by a risk analysis of type 1 diabetes mellitus (T1D) in the Finnish population, based on the HLA-A, HLA-B and DRB1 human leucocyte antigen (HLA) information available for both ascertained sibships and a large number of unrelated individuals from the Finnish bone marrow donor registry. The heterozygous genotype DR3/DR4 at the DRB1 locus was associated with the lowest predictive probability of T1D-free survival to the age of 15, the estimate being 0.936 (0.926; 0.945, 95% credible interval), compared to the average population T1D-free survival probability of 0.995. SIGNIFICANCE: The proposed statistical method can be modified to other population-based family data ascertained from a disease registry provided that the ascertainment process is well documented, and that external information concerning the sizes of birth cohorts and a suitable reference sample are available. We confirm the earlier findings from the same data concerning the HLA-DR3

  6. Analytic Methods for Cosmological Likelihoods

    OpenAIRE

    Taylor, A. N.; Kitching, T. D.

    2010-01-01

    We present general, analytic methods for Cosmological likelihood analysis and solve the "many-parameters" problem in Cosmology. Maxima are found by Newton's Method, while marginalization over nuisance parameters, and parameter errors and covariances are estimated by analytic marginalization of an arbitrary likelihood function with flat or Gaussian priors. We show that information about remaining parameters is preserved by marginalization. Marginalizing over all parameters, we find an analytic...

  7. Empirical likelihood estimation of discretely sampled processes of OU type

    Institute of Scientific and Technical Information of China (English)

    SUN ShuGuang; ZHANG XinSheng

    2009-01-01

    This paper presents an empirical likelihood estimation procedure for parameters of the discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal. Moreover, this estimator is shown to be asymptotically efficient under some conditions. ... the intensity parameter can be exactly recovered, and we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of the proposed estimators.

  8. Inference in HIV dynamics models via hierarchical likelihood

    CERN Document Server

    Commenges, D; Putter, H; Thiebaut, R

    2010-01-01

    HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelihood estimators (MHLE) for fixed effects, a result that may be relevant in a more general setting. The MHLE are slightly biased but the bias can be made negligible by using a parametric bootstrap procedure. We propose an efficient algorithm for maximizing the h-likelihood. A simulation study, based on a classical HIV dynamical model, confirms the good properties of the MHLE. We apply it to the analysis of a clinical trial.

  9. Empirical likelihood inference for diffusion processes with jumps

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we consider the empirical likelihood inference for the jump-diffusion model. We construct the confidence intervals based on the empirical likelihood for the infinitesimal moments in the jump-diffusion models. They are better than the confidence intervals which are based on the asymptotic normality of point estimates.

  10. Maximum likelihood channel estimation based on nonlinear filter

    Institute of Scientific and Technical Information of China (English)

    沈壁川; 郑建宏; 申敏

    2008-01-01

    For a long finite channel impulse response, accurate maximum likelihood channel estimation is computationally expensive because of the high dimension of the parameter space, so approximate approaches are usually adopted in practice. By exploiting the noise suppression and signal extraction properties of the nonlinear Teager-Kaiser filter, a likelihood ratio for channel estimation is defined to represent the probability distribution of the channel parameters. Maximization of this likelihood function first searches for the extrema of the path delays and then obtains the complex attenuations. Computer simulations show that joint detection performance is improved compared with the non-likelihood approach.
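
    The channel estimator itself is not reproduced here, but the discrete Teager-Kaiser energy operator on which the likelihood ratio is built is simple to state: psi[n] = x[n]^2 - x[n-1]*x[n+1]. A minimal sketch, checked against the closed-form value A^2*sin^2(omega) for a pure sinusoid (the signal parameters are illustrative):

      import numpy as np

      def teager_kaiser(x):
          # Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
          x = np.asarray(x, dtype=float)
          return x[1:-1] ** 2 - x[:-2] * x[2:]

      # For x[n] = A*cos(omega*n) the operator equals A^2 * sin(omega)^2 exactly
      A, omega = 2.0, 0.3
      x = A * np.cos(omega * np.arange(1000))
      psi = teager_kaiser(x)
      print(psi.mean(), A ** 2 * np.sin(omega) ** 2)   # both about 0.3493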

  11. Fast maximum likelihood direction-of-arrival estimator based on ant colony optimization

    Institute of Scientific and Technical Information of China (English)

    焦亚萌; 黄建国; 侯云山

    2011-01-01

    A new maximum likelihood direction-of-arrival (DOA) estimator based on ant colony optimization (ACOML) is proposed to reduce the computational complexity of the multi-dimensional nonlinear search required by the maximum likelihood (ML) DOA estimator. By extending the pheromone deposition process of the traditional ant colony algorithm to a pheromone Gaussian-kernel probability density function in continuous space, ant colony optimization is combined with the maximum likelihood method to lighten the computational burden. Simulations show that ACOML provides estimation performance similar to that of the original ML method, while its computational cost is only 1/15 of that of ML.

  12. Comparison of Estimators for Exponentiated Inverted Weibull Distribution Based on Grouped Data

    Directory of Open Access Journals (Sweden)

    Amal S. Hassan

    2014-04-01

    Full Text Available In many situations, instead of a complete sample, data are available only in grouped form. This paper presents estimation of the population parameters of the exponentiated inverted Weibull distribution based on grouped data with equally and unequally spaced grouping. Several alternative estimation schemes, such as the method of maximum likelihood, least lines, least squares, minimum chi-square, and modified minimum chi-square, are considered. Since the different methods of estimation do not provide closed-form solutions, numerical procedures are applied. The root mean squared error of the resulting estimators is used as a comparison criterion to measure both the accuracy and the precision for each parameter.

  13. Gaussian likelihood inference on data from trans-Gaussian random fields with Matérn covariance function

    KAUST Repository

    Yan, Yuan

    2017-07-13

    Gaussian likelihood inference has been studied and used extensively in both statistical theory and applications due to its simplicity. However, in practice, the assumption of Gaussianity is rarely met in the analysis of spatial data. In this paper, we study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters of the Matérn covariance model. By using Monte Carlo simulations, we generate spatial data from a Tukey g-and-h random field, a flexible trans-Gaussian random field, with the Matérn covariance function, where g controls skewness and h controls tail heaviness. We use maximum likelihood based on the multivariate Gaussian distribution to estimate the parameters of the Matérn covariance function. We illustrate the effects of non-Gaussianity of the data on the estimated covariance function by means of functional boxplots. Thanks to our tailored simulation design, a comparison of the maximum likelihood estimator under both the increasing and fixed domain asymptotics for spatial data is performed. We find that the maximum likelihood estimator based on Gaussian likelihood is overall satisfactory and preferable to the non-distribution-based weighted least squares estimator for data from the Tukey g-and-h random field. We also present the result for Gaussian kriging based on Matérn covariance estimates with data from the Tukey g-and-h random field and observe an overall satisfactory performance.
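
    The marginal Tukey g-and-h transform used to generate the non-Gaussian data above is easy to sketch on its own; the spatial Matérn structure and kriging are omitted, and the g and h values below are illustrative assumptions, not the study's settings.

      import numpy as np
      from scipy.stats import kurtosis, skew

      def tukey_gh(z, g=0.5, h=0.1):
          # Tukey g-and-h transform of standard normal draws:
          # g controls skewness, h controls tail heaviness (g = h = 0 returns z).
          z = np.asarray(z, dtype=float)
          core = z if g == 0 else (np.exp(g * z) - 1.0) / g
          return core * np.exp(h * z ** 2 / 2.0)

      rng = np.random.default_rng(0)
      z = rng.standard_normal(100000)
      x = tukey_gh(z, g=0.5, h=0.1)
      print(skew(x), kurtosis(x))   # clearly non-Gaussian: positive skew, heavy tails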

  14. Paired Comparisons-based Interactive Differential Evolution

    CERN Document Server

    Takagi, Hideyuki

    2009-01-01

    We propose Interactive Differential Evolution (IDE) based on paired comparisons for reducing user fatigue and evaluate its convergence speed in comparison with Interactive Genetic Algorithms (IGA) and tournament IGA. User interface and convergence performance are the two main keys to reducing Interactive Evolutionary Computation (IEC) user fatigue. Unlike IGA and conventional IDE, users of the proposed IDE and of tournament IGA do not need to compare all individuals with one another but only pairs of individuals, which largely decreases user fatigue. In this paper, we design a pseudo-IEC user and evaluate the other factor, IEC convergence performance, using IEC simulators, and show that our proposed IDE converges significantly faster than IGA and tournament IGA; that is, the proposed method is superior to the others from both the user interface and the convergence performance points of view.

  15. Maximum likelihood identification of aircraft stability and control derivatives

    Science.gov (United States)

    Mehra, R. K.; Stepner, D. E.; Tyler, J. S.

    1974-01-01

    Application of a generalized identification method to flight test data analysis. The method is based on the maximum likelihood (ML) criterion and includes output error and equation error methods as special cases. Both the linear and nonlinear models with and without process noise are considered. The flight test data from lateral maneuvers of HL-10 and M2/F3 lifting bodies are processed to determine the lateral stability and control derivatives, instrumentation accuracies, and biases. A comparison is made between the results of the output error method and the ML method for M2/F3 data containing gusts. It is shown that better fits to time histories are obtained by using the ML method. The nonlinear model considered corresponds to the longitudinal equations of the X-22 VTOL aircraft. The data are obtained from a computer simulation and contain both process and measurement noise. The applicability of the ML method to nonlinear models with both process and measurement noise is demonstrated.

  16. In vitro predictability of drug-drug interaction likelihood of P-glycoprotein-mediated efflux of dabigatran etexilate based on [I]2/IC50 threshold.

    Science.gov (United States)

    Kishimoto, Wataru; Ishiguro, Naoki; Ludwig-Schwellinger, Eva; Ebner, Thomas; Schaefer, Olaf

    2014-02-01

    Dabigatran etexilate, an oral, reversible, competitive, and direct thrombin inhibitor, is an in vitro and in vivo substrate of P-glycoprotein (P-gp). Dabigatran etexilate was proposed as an in vivo probe substrate for intestinal P-gp inhibition in a recent guidance on drug-drug interactions (DDI) from the European Medicines Agency (EMA) and the Food and Drug Administration (FDA). We conducted transcellular transport studies across Caco-2 cell monolayers with dabigatran etexilate in the presence of various P-gp inhibitors to examine how well in vitro IC50 data, in combination with mathematical equations provided by regulatory guidances, predict DDI likelihood. From a set of potential P-gp inhibitors, clarithromycin, cyclosporin A, itraconazole, ketoconazole, quinidine, and ritonavir inhibited P-gp-mediated transport of dabigatran etexilate over a concentration range that may hypothetically occur in the intestine. IC50 values of P-gp inhibitors for dabigatran etexilate transport were comparable to those of digoxin, a well established in vitro and in vivo P-gp substrate. However, IC50 values varied depending whether they were calculated from efflux ratios or permeability coefficients. Prediction of DDI likelihood of P-gp inhibitors using IC50 values, the hypothetical concentration of P-gp inhibitors, and the cut-off value recommended by both the FDA and EMA were in line with the DDI occurrence in clinical studies with dabigatran etexilate. However, it has to be kept in mind that validity of the cut-off criteria proposed by the FDA and EMA depends on in vitro experimental systems and the IC50-calculation methods that are employed, as IC50 values are substantially influenced by these factors.

  17. Model Selection Through Sparse Maximum Likelihood Estimation

    CERN Document Server

    Banerjee, Onureena; D'Aspremont, Alexandre

    2007-01-01

    We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
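
    The authors' block-coordinate and Nesterov-based solvers are not reproduced here; the same l1-penalized Gaussian maximum likelihood problem is solved by scikit-learn's GraphicalLasso, which the sketch below applies to a synthetic chain-graph precision matrix (the dimension, sample size and penalty are assumptions).

      import numpy as np
      from sklearn.covariance import GraphicalLasso

      # Sparse ground-truth precision matrix (tridiagonal, i.e. a chain graph)
      p = 20
      precision = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
      cov = np.linalg.inv(precision)

      rng = np.random.default_rng(0)
      X = rng.multivariate_normal(np.zeros(p), cov, size=1000)

      # l1-penalized maximum likelihood estimate of the precision matrix
      est = GraphicalLasso(alpha=0.05).fit(X).precision_
      print("true nonzero off-diagonal entries:", int((np.abs(precision) > 0).sum()) - p)
      print("estimated nonzero off-diagonal entries:", int((np.abs(est) > 1e-3).sum()) - p)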

  18. Lessons about likelihood functions from nuclear physics

    CERN Document Server

    Hanson, Kenneth M

    2007-01-01

    Least-squares data analysis is based on the assumption that the normal (Gaussian) distribution appropriately characterizes the likelihood, that is, the conditional probability of each measurement d, given a measured quantity y, p(d | y). On the other hand, there is ample evidence in nuclear physics of significant disagreements among measurements, which are inconsistent with the normal distribution, given their stated uncertainties. In this study the histories of 99 measurements of the lifetimes of five elementary particles are examined to determine what can be inferred about the distribution of their values relative to their stated uncertainties. Taken as a whole, the variations in the data are somewhat larger than their quoted uncertainties would indicate. These data strongly support using a Student t distribution for the likelihood function instead of a normal. The most probable value for the order of the t distribution is 2.6 +/- 0.9. It is shown that analyses based on long-tailed t-distribution likelihood...
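
    The practical point of the study, that a long-tailed Student t likelihood downweights discrepant measurements where a Gaussian likelihood does not, can be sketched with a toy set of inconsistent measurements. The values, stated uncertainties and the df = 2.6 choice below are illustrative, the last echoing the abstract's estimate.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.stats import norm, t

      # Hypothetical repeated measurements of one quantity; the last one is discrepant
      y = np.array([10.1, 9.9, 10.2, 10.0, 12.5])
      sigma = np.array([0.1, 0.1, 0.1, 0.1, 0.1])

      def neg_loglike(mu, dist):
          return -np.sum(dist.logpdf((y - mu) / sigma) - np.log(sigma))

      gauss = minimize_scalar(neg_loglike, args=(norm,), bounds=(5, 15), method="bounded")
      robust = minimize_scalar(neg_loglike, args=(t(df=2.6),), bounds=(5, 15), method="bounded")
      print("Gaussian likelihood estimate:", round(gauss.x, 3))   # pulled toward 12.5
      print("Student-t (df=2.6) estimate:", round(robust.x, 3))   # stays near 10.0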

  19. Software Testing Method Based on Model Comparison

    Institute of Scientific and Technical Information of China (English)

    XIE Xiao-dong; LU Yan-sheng; MAO Cheng-yin

    2008-01-01

    A model-comparison-based software testing method (MCST) is proposed. In this method, the requirements and programs of the software under test are transformed into the same form and described by the same model description language (MDL). The requirements are transformed into a specification model and the programs into an implementation model; the elements and structures of the two models are then compared, and the differences between them are obtained. Based on these differences, a test suite is generated. Different MDLs can be chosen for the software under test. The usage of two classical MDLs in MCST, the equivalence-class model and the extended finite state machine (EFSM) model, is described with example applications. The results show that the test suites generated by MCST are more efficient and smaller than those of some other testing methods, such as the path-coverage testing method and the object state diagram testing method.

  20. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
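
    The maximum likelihood superposition and ML covariance estimator of the paper are not reproduced here; the sketch below only illustrates the downstream step, PCA of a correlation matrix computed from an (already superposed) ensemble of structures, on synthetic coordinates with one dominant collective mode.

      import numpy as np

      # Hypothetical ensemble: n_models conformations of n_atoms atoms, already superposed
      rng = np.random.default_rng(0)
      n_models, n_atoms = 50, 30
      mean_structure = rng.normal(size=(n_atoms, 3))
      mode = rng.normal(size=(n_atoms, 3))               # one collective motion
      ensemble = (mean_structure
                  + rng.normal(size=(n_models, 1, 1)) * mode
                  + 0.1 * rng.normal(size=(n_models, n_atoms, 3)))

      # Covariance of the flattened 3N coordinates, then its correlation matrix
      X = ensemble.reshape(n_models, -1)
      X = X - X.mean(axis=0)
      cov = X.T @ X / (n_models - 1)
      d = np.sqrt(np.diag(cov))
      corr = cov / np.outer(d, d)

      # Principal components of the correlation matrix: dominant correlation modes
      evals, evecs = np.linalg.eigh(corr)
      print("fraction of total correlation in the top mode:",
            round(float(evals[-1] / evals.sum()), 2))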

  1. Conditional likelihood inference in generalized linear mixed models.

    OpenAIRE

    Sartori, Nicola; Severini, T.A.

    2002-01-01

    Consider a generalized linear model with a canonical link function, containing both fixed and random effects. In this paper, we consider inference about the fixed effects based on a conditional likelihood function. It is shown that this conditional likelihood function is valid for any distribution of the random effects and, hence, the resulting inferences about the fixed effects are insensitive to misspecification of the random effects distribution. Inferences based on the conditional likelih...

  2. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    Science.gov (United States)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the cycle of water resources, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining the predictions of each plausible model, and each model is attached a weight determined by its prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE works by gradually searching the parameter space from low-likelihood to high-likelihood regions, and this evolution is carried out iteratively via a local sampling procedure, so the efficiency of NSE is dominated by the strength of that local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling; however, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, it is therefore attractive to incorporate the robust and efficient DREAMzs sampling algorithm into the local sampling step of NSE. The comparison results demonstrate that the improved NSE can improve the efficiency of marginal likelihood estimation significantly; however, both the improved and the original NSE suffer from considerable instability. In addition, the heavy computational cost of the large number of model executions is overcome by using adaptive sparse grid surrogates.
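
    A minimal sketch of the basic nested sampling recursion behind NSE, on a toy two-parameter problem with a known evidence. The prior box, likelihood, and the naive rejection step used to draw constrained samples (in place of M-H or DREAMzs) are all illustrative assumptions.

      import numpy as np
      from scipy.special import logsumexp

      rng = np.random.default_rng(0)

      # Toy problem: uniform prior on [-5, 5]^2 and a standard Gaussian likelihood
      def log_like(theta):
          return -0.5 * np.sum(theta ** 2) - np.log(2.0 * np.pi)

      def sample_prior(n):
          return rng.uniform(-5.0, 5.0, size=(n, 2))

      def nested_sampling(n_live=200, n_iter=1000):
          # Basic nested sampling estimate of the log marginal likelihood (evidence).
          # Constrained draws use naive rejection from the prior, which is only
          # workable for this toy example.
          live = sample_prior(n_live)
          live_ll = np.array([log_like(t) for t in live])
          log_z = -np.inf
          log_w = np.log(1.0 - np.exp(-1.0 / n_live))        # first prior-volume shell
          for _ in range(n_iter):
              worst = np.argmin(live_ll)
              log_z = np.logaddexp(log_z, log_w + live_ll[worst])
              while True:                                    # draw above the threshold
                  cand = sample_prior(1)[0]
                  if log_like(cand) > live_ll[worst]:
                      break
              live[worst], live_ll[worst] = cand, log_like(cand)
              log_w -= 1.0 / n_live                          # deterministic volume shrinkage
          # contribution of the remaining live points
          log_z = np.logaddexp(log_z, -n_iter / n_live + logsumexp(live_ll) - np.log(n_live))
          return log_z

      # Analytic answer: evidence ~ (prior density) * (Gaussian mass in the box) ~ 1/100
      print(nested_sampling(), np.log(1.0 / 100.0))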

  3. The Sherpa Maximum Likelihood Estimator

    Science.gov (United States)

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.

    2011-07-01

    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.

  4. Weighted Likelihood Estimation Method Based on the Multidimensional Nominal Response Model

    Institute of Scientific and Technical Information of China (English)

    孙珊珊; 陶剑

    2008-01-01

    This Monte Carlo study evaluates the relative accuracy of Warm's (1989) weighted likelihood estimate (WLE) compared to the maximum likelihood estimate (MLE) under the nominal response model. The results indicate that the WLE was more accurate than the MLE.

  5. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  6. TMT third-mirror shafting system alignment based on maximum likelihood estimation

    Institute of Scientific and Technical Information of China (English)

    安其昌; 张景旭; 孙敬伟

    2013-01-01

    In order to complete the testing and alignment of the TMT third-mirror shafting, maximum likelihood estimation was introduced. First, two intersecting fitted planes passing through fixed points were used to identify a space line. Then, taking into account the uncertain noise type of the measured data, maximum likelihood estimation was used to identify the position parameters of the third-mirror mechanical axis. On a training set with Gaussian white noise generated in MATLAB, the positions of the fixed points of the two fitted planes were optimized, reducing the angle between the fitted axis and the ideal axis from 6.29" to 5.24", an improvement of 17%. Finally, the Vantage laser tracker was selected as the testing tool for the TMT large shafting; using the preceding optimization, the residual error of the TMT third-mirror shafting alignment was 2.9", below the requirement of 4" set by the TMT project. The maximum likelihood line-fitting approach proposed here is real-time and widely applicable, and is also of value for the axis testing and adjustment of other large-aperture optical systems.

  7. Maximum likelihood estimation for semiparametric density ratio model.

    Science.gov (United States)

    Diao, Guoqing; Ning, Jing; Qin, Jing

    2012-06-27

    In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.

  8. Phylogeny of the cycads based on multiple single-copy nuclear genes: congruence of concatenated parsimony, likelihood and species tree inference methods.

    Science.gov (United States)

    Salas-Leiva, Dayana E; Meerow, Alan W; Calonje, Michael; Griffith, M Patrick; Francisco-Ortega, Javier; Nakamura, Kyoko; Stevenson, Dennis W; Lewis, Carl E; Namoff, Sandra

    2013-11-01

    Despite a recent new classification, a stable phylogeny for the cycads has been elusive, particularly regarding resolution of Bowenia, Stangeria and Dioon. In this study, five single-copy nuclear genes (SCNGs) are applied to the phylogeny of the order Cycadales. The specific aim is to evaluate several gene tree-species tree reconciliation approaches for developing an accurate phylogeny of the order, to contrast them with concatenated parsimony analysis and to resolve the erstwhile problematic phylogenetic position of these three genera. DNA sequences of five SCNGs were obtained for 20 cycad species representing all ten genera of Cycadales. These were analysed with parsimony, maximum likelihood (ML) and three Bayesian methods of gene tree-species tree reconciliation, using Cycas as the outgroup. A calibrated date estimation was developed with Bayesian methods, and biogeographic analysis was also conducted. Concatenated parsimony, ML and three species tree inference methods resolve exactly the same tree topology with high support at most nodes. Dioon and Bowenia are the first and second branches of Cycadales after Cycas, respectively, followed by an encephalartoid clade (Macrozamia-Lepidozamia-Encephalartos), which is sister to a zamioid clade, of which Ceratozamia is the first branch, and in which Stangeria is sister to Microcycas and Zamia. A single, well-supported phylogenetic hypothesis of the generic relationships of the Cycadales is presented. However, massive extinction events inferred from the fossil record that eliminated broader ancestral distributions within Zamiaceae compromise accurate optimization of ancestral biogeographical areas for that hypothesis. While major lineages of Cycadales are ancient, crown ages of all modern genera are no older than 12 million years, supporting a recent hypothesis of mostly Miocene radiations. This phylogeny can contribute to an accurate infrafamilial classification of Zamiaceae.

  9. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature, the Rhizoctonia root rot and the Rongelap datasets. ... Advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility of obtaining realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
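
    The record above turns on the Laplace approximation to an intractable marginal likelihood. The sketch below is a minimal, self-contained illustration on an assumed toy model (a Poisson response with one Gaussian random effect per observation), not the authors' algorithm or their Rhizoctonia/Rongelap analyses: the random effect is integrated out approximately by locating its conditional mode with a few Newton steps and applying the Gaussian curvature correction.

```python
# Minimal sketch (assumed toy model, not the paper's algorithm or datasets):
# Laplace approximation to the marginal log-likelihood of a Poisson model with
# one Gaussian random effect per observation,
#   y_i | u_i ~ Poisson(exp(beta + u_i)),  u_i ~ N(0, sigma2).
import numpy as np
from scipy.special import gammaln

def laplace_loglik(y, beta, sigma2):
    total = 0.0
    for yi in y:
        # Joint log-density h(u) = log p(y_i | u) + log p(u); it is concave in
        # u, so a few Newton steps find the mode u_hat reliably.
        u = 0.0
        for _ in range(50):
            grad = yi - np.exp(beta + u) - u / sigma2
            hess = -np.exp(beta + u) - 1.0 / sigma2
            step = grad / hess
            u -= step
            if abs(step) < 1e-10:
                break
        h = (yi * (beta + u) - np.exp(beta + u) - gammaln(yi + 1.0)
             - 0.5 * u**2 / sigma2 - 0.5 * np.log(2.0 * np.pi * sigma2))
        hess = -np.exp(beta + u) - 1.0 / sigma2
        # Laplace: integral over u ~ exp(h(u_hat)) * sqrt(2*pi / -h''(u_hat)).
        total += h + 0.5 * np.log(2.0 * np.pi) - 0.5 * np.log(-hess)
    return total

rng = np.random.default_rng(1)
u_true = rng.normal(scale=0.7, size=200)
y = rng.poisson(np.exp(0.5 + u_true))
print(laplace_loglik(y, beta=0.5, sigma2=0.49))
```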

  10. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  11. An improved likelihood model for eye tracking

    DEFF Research Database (Denmark)

    Hammoud, Riad I.; Hansen, Dan Witzner

    2007-01-01

    approach in such cases is to abandon the tracking routine and re-initialize eye detection. Of course, this may be a difficult process due to the missed-data problem. Accordingly, what is needed is an efficient method of reliably tracking a person's eyes between successively produced video image frames, even ... are challenging. The paper proposes a log likelihood-ratio function of foreground and background models in a particle filter-based eye tracking framework. It fuses key information from the even and odd infrared fields (dark and bright pupil) and their corresponding subtractive image into a single observation model...

  12. Section 9: Ground Water - Likelihood of Release

    Science.gov (United States)

    HRS training. The ground water pathway likelihood of release factor category reflects the likelihood that there has been, or will be, a release of hazardous substances in any of the aquifers underlying the site.

  13. A Comparison Between the Empirical Logistic Regression Method and the Maximum Likelihood Estimation Method

    Institute of Scientific and Technical Information of China (English)

    张婷婷; 高金玲

    2014-01-01

    To address the difficulty of solving the iterative algorithm used for maximum likelihood estimation in logistic regression, a simpler estimation method, empirical logistic regression, is examined from both a theoretical and an applied perspective. The analysis shows that when the sample size is very large, the empirical logistic regression method is more practical and scientifically sound than the maximum likelihood estimation method, while the two methods give consistent results for the same data. Moreover, empirical logistic regression is simpler to carry out than maximum likelihood estimation, which is very important for practitioners.
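
    To make the comparison concrete, the sketch below contrasts the two estimators on hypothetical grouped binomial data: empirical logistic regression is a weighted least-squares fit to the empirical logits, while the maximum likelihood estimate is obtained by numerically minimising the binomial negative log-likelihood. It illustrates the general idea only, not the data or code of the cited paper.

```python
# Minimal sketch (hypothetical data, not the paper's example): empirical
# logistic regression via weighted least squares on the empirical logits,
# compared with the binomial maximum-likelihood fit obtained numerically.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = np.linspace(-2, 2, 20)
n = np.full_like(x, 200)                       # large groups, as the paper assumes
p_true = 1.0 / (1.0 + np.exp(-(0.3 + 1.2 * x)))
y = rng.binomial(n.astype(int), p_true).astype(float)
X = np.column_stack([np.ones_like(x), x])

# Empirical logistic regression: z_j = log((y_j + 1/2) / (n_j - y_j + 1/2)),
# fitted by weighted least squares with weights 1 / Var(z_j).
z = np.log((y + 0.5) / (n - y + 0.5))
v = 1.0 / (y + 0.5) + 1.0 / (n - y + 0.5)
W = np.diag(1.0 / v)
beta_emp = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)

# Maximum-likelihood estimate: minimise the binomial negative log-likelihood.
def negloglik(beta):
    eta = X @ beta
    return -np.sum(y * eta - n * np.log1p(np.exp(eta)))

beta_ml = minimize(negloglik, x0=np.zeros(2), method="BFGS").x
print("empirical logit:", beta_emp, " ML:", beta_ml)
```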

  14. Comparisons of Maximum Likelihood Estimates and Bayesian Estimates for the Discretized Discovery Process Model

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same data of the North Sea on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach has really improved the overly pessimistic results and downward bias of the maximum likelihood procedure.

  15. The Change Point Identification of a Poisson Process Based on the Generalized Likelihood Ratio

    Institute of Scientific and Technical Information of China (English)

    赵俊

    2012-01-01

    Based on the generalized likelihood ratio (GLR), a GLR-based model for identifying the change point in a Poisson process with unknown parameters is proposed. Simulation experiments characterize the reliability and performance of the model for change point identification. Under the assumption that there is only one change point in the process, an in-control dataset can be obtained at the same time and used to estimate the process parameters.
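
    A minimal sketch of the underlying GLR computation, assuming simple Poisson count data and a single unknown change point (it is not the paper's implementation): for every candidate split, the unknown rates are replaced by their segment-mean MLEs and the split with the largest likelihood ratio is reported.

```python
# Minimal sketch (not the paper's implementation): generalized likelihood ratio
# statistic for a single change point in the rate of Poisson count data with
# unknown parameters.  For each candidate split the segment rates are replaced
# by their MLEs (the segment means); the split maximising the GLR is reported.
import numpy as np

def poisson_glr_changepoint(x):
    x = np.asarray(x, dtype=float)
    N = len(x)

    def seg_loglik(s):
        # Profile log-likelihood of one segment at its MLE rate; terms that
        # cancel in the likelihood ratio (factorials, -sum(x)) are dropped.
        tot = s.sum()
        return 0.0 if tot == 0 else tot * np.log(tot / len(s))

    full = seg_loglik(x)
    glr = np.array([seg_loglik(x[:k]) + seg_loglik(x[k:]) - full
                    for k in range(1, N)])
    k_hat = int(np.argmax(glr)) + 1    # first index of the new regime
    return k_hat, 2.0 * glr.max()      # 2 * log-ratio, as in a classical GLR test

rng = np.random.default_rng(3)
x = np.concatenate([rng.poisson(4.0, 120), rng.poisson(6.5, 80)])
print(poisson_glr_changepoint(x))
```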

  16. Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    2012-01-01

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters ... likelihood estimators. To this end we prove weak convergence of the conditional likelihood as a continuous stochastic process in the parameters when errors are i.i.d. with suitable moment conditions and initial values are bounded. When the limit is deterministic this implies uniform convergence in probability of the conditional likelihood function. If the true value b0 > 1/2, we prove that the limit distribution of (β...

  17. Likelihood Inference for a Nonstationary Fractional Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    values X_{-n}, n = 0, 1, ..., under the assumption that the errors are i.i.d. Gaussian. We consider the likelihood and its derivatives as stochastic processes in the parameters, and prove that they converge in distribution when the errors are i.i.d. with suitable moment conditions and the initial values ... This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d - b, where d ≥ b > 1/2 are parameters to be estimated. We model the data X_1, ..., X_T given the initial ... are bounded. We use this to prove existence and consistency of the local likelihood estimator, and to find the asymptotic distribution of the estimators and the likelihood ratio test of the associated fractional unit root hypothesis, which contains the fractional Brownian motion of type II...

  18. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    values X_{-n}, n = 0, 1, ..., under the assumption that the errors are i.i.d. Gaussian. We consider the likelihood and its derivatives as stochastic processes in the parameters, and prove that they converge in distribution when the errors are i.i.d. with suitable moment conditions and the initial values ... This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d-b; where d ≥ b > 1/2 are parameters to be estimated. We model the data X_1, ..., X_T given the initial ... are bounded. We use this to prove existence and consistency of the local likelihood estimator, and to find the asymptotic distribution of the estimators and the likelihood ratio test of the associated fractional unit root hypothesis, which contains the fractional Brownian motion of type II....

  19. Empirical likelihood estimation of discretely sampled processes of OU type

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This paper presents an empirical likelihood estimation procedure for parameters of the discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal. Moreover, this estimator is shown to be asymptotically efficient under some mild conditions. When the background driving Lévy process is of type A or B, we show that the intensity parameter can be exactly recovered, and we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of proposed estimators.

  20. cosmoabc: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation

    CERN Document Server

    Ishida, E E O; Penna-Lima, M; Cisewski, J; de Souza, R S; Trindade, A M M; Cameron, E

    2015-01-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present cosmoabc, a Python ABC sampler featuring a Population Monte Carlo (PMC) variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the user to incorporate arbitrary distance and prior functions. As an example of practical application, we coupled cosmoabc with the numcosmo library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function. cosmoabc is published under the GPLv3 license on PyPI and GitHub and documentation is available...
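
    For readers unfamiliar with ABC, the sketch below shows the plain rejection variant of the idea on a hypothetical toy problem; it is not the cosmoabc API and omits the Population Monte Carlo importance-sampling layer that the package implements.

```python
# Minimal sketch of plain rejection ABC (illustrative only; not the cosmoabc
# API).  Hypothetical toy problem: infer a Poisson mean from observed number
# counts using only forward simulation and a distance between summaries.
import numpy as np

rng = np.random.default_rng(4)
observed = rng.poisson(25.0, size=50)          # pretend these are the data

def simulator(theta):                          # forward model: mock catalogue
    return rng.poisson(theta, size=observed.size)

def distance(sim, obs):                        # user-chosen distance function
    return abs(sim.mean() - obs.mean())

def abc_rejection(n_draws=50_000, tolerance=0.5):
    posterior = []
    for _ in range(n_draws):
        theta = rng.uniform(5.0, 60.0)         # prior draw
        if distance(simulator(theta), observed) < tolerance:
            posterior.append(theta)
    return np.array(posterior)

samples = abc_rejection()
print(f"{len(samples)} accepted, posterior mean ~ {samples.mean():.2f}")
```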

  1. Workshop on Likelihoods for the LHC Searches

    CERN Document Server

    2013-01-01

    The primary goal of this 3‐day workshop is to educate the LHC community about the scientific utility of likelihoods. We shall do so by describing and discussing several real‐world examples of the use of likelihoods, including a one‐day in‐depth examination of likelihoods in the Higgs boson studies by ATLAS and CMS.

  2. Estimating dynamic equilibrium economies: linear versus nonlinear likelihood

    OpenAIRE

    2004-01-01

    This paper compares two methods for undertaking likelihood-based inference in dynamic equilibrium economies: a sequential Monte Carlo filter proposed by Fernández-Villaverde and Rubio-Ramírez (2004) and the Kalman filter. The sequential Monte Carlo filter exploits the nonlinear structure of the economy and evaluates the likelihood function of the model by simulation methods. The Kalman filter estimates a linearization of the economy around the steady state. The authors report two main results...
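
    The comparison hinges on evaluating the likelihood of a state-space model with the Kalman filter. The sketch below shows the prediction-error-decomposition likelihood for a hypothetical local-level model; it is a generic illustration, not the linearized DSGE economy or the sequential Monte Carlo filter discussed in the record.

```python
# Minimal sketch (hypothetical local-level model, not the paper's DSGE setup):
# evaluating the likelihood of a linear Gaussian state-space model with the
# Kalman filter via the prediction-error decomposition.
import numpy as np

def kalman_loglik(y, Q, R, m0=0.0, P0=1e4):
    m, P, ll = m0, P0, 0.0
    for obs in y:
        # Predict: x_t = x_{t-1} + w_t, w_t ~ N(0, Q).
        P = P + Q
        # Innovation and its variance for y_t = x_t + v_t, v_t ~ N(0, R).
        v = obs - m
        S = P + R
        ll += -0.5 * (np.log(2.0 * np.pi * S) + v * v / S)
        # Update.
        K = P / S
        m = m + K * v
        P = (1.0 - K) * P
    return ll

rng = np.random.default_rng(5)
x = np.cumsum(rng.normal(scale=0.3, size=300))      # latent random walk
y = x + rng.normal(scale=1.0, size=300)             # noisy observations
print(kalman_loglik(y, Q=0.09, R=1.0))
```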

  3. Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization

    OpenAIRE

    Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue

    2010-01-01

    This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...

  4. Generalized Likelihood Ratio Based SAR Image Speckle Suppression Algorithm in Wavelet Domain

    Institute of Scientific and Technical Information of China (English)

    侯建华; 刘欣达; 陈稳; 陈少波

    2015-01-01

    A Bayes shrinkage formula is derived within the framework of joint detection and estimation theory, and a wavelet-domain SAR image despeckling algorithm based on the generalized likelihood ratio is realized. First, a redundant wavelet transform is applied directly to the original speckled SAR image, and a binary mask is obtained for each wavelet coefficient. The likelihood conditional probabilities of the speckle noise and of the useful signal are modeled by a scaled exponential distribution and a Gamma distribution, respectively. According to the mask, the parameters of the two models are estimated by maximum likelihood, and the likelihood conditional probability ratio is then calculated. Experimental results show that the proposed method effectively filters speckle noise while preserving image detail as far as possible, and satisfactory results are obtained on both synthetically speckled images and real SAR images.

  5. An Adaptive UKF Algorithm Based on Maximum Likelihood Principle and Expectation Maximization Algorithm

    Institute of Scientific and Technical Information of China (English)

    王璐; 李光春; 乔相伟; 王兆龙; 马涛

    2012-01-01

    To solve the state estimation problem for nonlinear systems when the prior noise statistics are unknown, an adaptive unscented Kalman filter (UKF) based on the maximum likelihood principle and the expectation maximization algorithm is proposed. The maximum likelihood principle is used to construct a log-likelihood function containing the noise statistics, so that noise estimation becomes the problem of maximizing the expectation of this log-likelihood function, which is solved with the expectation maximization algorithm. The result is an adaptive UKF with a suboptimal, recursive estimator of the noise statistics. Simulation analysis shows that, compared with the traditional UKF, the proposed adaptive UKF overcomes the loss of filtering accuracy that occurs when the noise statistics are unknown, and that it estimates the noise statistics online.

  6. Chinese Word Segmentation Cognitive Model Based on a Maximum-Likelihood-Optimized EM Algorithm

    Institute of Scientific and Technical Information of China (English)

    赵越; 李红

    2016-01-01

    To address the poor convergence and limited segmentation accuracy of the standard EM algorithm in Chinese word segmentation, this paper proposes a Chinese word segmentation cognitive model based on an EM algorithm optimized with the maximum likelihood estimation rule. First, the probability of the current word is used to compute the likelihood of each possible segmentation; these likelihoods are normalized, and word counts are accumulated for every segmentation. Because the estimates produced by the standard EM algorithm are only guaranteed to converge to a stationary point of the likelihood function, not to a global or even local maximum, the maximum likelihood estimation rule is used to optimize the algorithm, which allows effective nonlinear optimization methods to be applied and accelerates convergence. Simulation experiments show that the optimized EM algorithm has better convergence and higher segmentation accuracy in the Chinese word segmentation cognitive model.

  7. CORA - emission line fitting with Maximum Likelihood

    Science.gov (United States)

    Ness, J.-U.; Wichmann, R.

    2002-07-01

    The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
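
    The sketch below illustrates the Poisson-likelihood line-fitting idea that CORA is built on, using a hypothetical constant-background-plus-Gaussian-line model and simulated low-count data; it is not the CORA code and omits details such as the fixed-point flux equation mentioned in the record.

```python
# Minimal sketch (not the CORA code): fitting an emission line to low-count
# binned spectra by maximising the Poisson likelihood instead of chi-square.
# Hypothetical model: constant background plus a Gaussian line.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(6)
wav = np.linspace(13.3, 13.7, 80)                       # wavelength bins (Angstrom)

def model(params, wav):
    bkg, flux, centre, sigma = params
    line = flux * np.exp(-0.5 * ((wav - centre) / sigma) ** 2)
    return bkg + line                                    # expected counts per bin

true = (0.5, 8.0, 13.50, 0.02)
counts = rng.poisson(model(true, wav))                   # simulated low-count spectrum

def neg_loglik(params):
    mu = model(params, wav)
    if np.any(mu <= 0):
        return np.inf
    # Poisson negative log-likelihood of the binned counts.
    return -np.sum(counts * np.log(mu) - mu - gammaln(counts + 1.0))

fit = minimize(neg_loglik, x0=(1.0, 5.0, 13.49, 0.03), method="Nelder-Mead")
print("ML estimates (bkg, flux, centre, sigma):", fit.x)
```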

  8. Empirical likelihood method in survival analysis

    CERN Document Server

    Zhou, Mai

    2015-01-01

    Add the Empirical Likelihood to Your Nonparametric Toolbox. Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method for right censored survival data. The author uses R for calculating empirical likelihood and includes many worked out examples with the associated R code. The datasets and code are available for download on his website and CRAN. The book focuses on all the standard survival analysis topics treated with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empiric...

  9. Estimation for Non-Gaussian Locally Stationary Processes with Empirical Likelihood Method

    Directory of Open Access Journals (Sweden)

    Hiroaki Ogata

    2012-01-01

    Full Text Available An application of the empirical likelihood method to non-Gaussian locally stationary processes is presented. Based on the central limit theorem for locally stationary processes, we give the asymptotic distributions of the maximum empirical likelihood estimator and the empirical likelihood ratio statistics, respectively. It is shown that the empirical likelihood method enables us to make inferences on various important indices in a time series analysis. Furthermore, we give a numerical study and investigate a finite sample property.

  10. Employee Likelihood of Purchasing Health Insurance using Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Lazim Abdullah

    2012-01-01

    Full Text Available Many believe that employees' health and economic factors play an important role in their likelihood of purchasing health insurance. However, the decision to purchase health insurance is not a trivial matter, as many risk factors influence it. This paper presents a decision model using a fuzzy inference system to identify the likelihood of purchasing health insurance based on selected risk factors. To build the likelihoods, data from one hundred and twenty-eight employees at five organizations under the purview of Kota Star Municipality, Malaysia, were collected as input. Three risk factors were considered as inputs to the system: age, salary and risk of illness. The likelihood of purchasing health insurance was the output of the system, defined by the three linguistic terms Low, Medium and High. Input and output data were governed by the Mamdani inference rules of the system to decide the best linguistic term. The linguistic terms describing the likelihood of purchasing health insurance were identified by the system from the three risk factors: twenty-seven employees were found likely to purchase health insurance at the Low level and fifty-six at the High level. The use of a fuzzy inference system offers a possible justification for a new approach to identifying prospective health insurance purchasers.

  11. A Group-Period Phase Comparison Method Based on Equivalent Phase Comparison Frequency

    Institute of Scientific and Technical Information of China (English)

    DU Bao-Qiang; ZHOU Wei; DONG Shao-Feng; ZHOU Hai-Niu

    2009-01-01

    Based on the principle of equivalent phase comparison frequency, we propose a group-period phase comparison method. This method can be used to reveal the inherent relations between periodic signals and the change laws of the phase difference. If these laws are applied in the processing of the mutual relations between frequency signals, phase comparison can be accomplished without frequency normalization. Experimental results show that the method can enhance the measurement resolution to the 10^-13/s level in the time domain.

  12. MLDS: Maximum Likelihood Difference Scaling in R

    Directory of Open Access Journals (Sweden)

    Kenneth Knoblauch

    2008-01-01

    Full Text Available The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval, (a,b) or (c,d)) is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.

  13. Generalized Likelihood Ratio Statistics and Uncertainty Adjustments in Efficient Adaptive Design of Clinical Trials

    CERN Document Server

    Bartroff, Jay

    2011-01-01

    A new approach to adaptive design of clinical trials is proposed in a general multiparameter exponential family setting, based on generalized likelihood ratio statistics and optimal sequential testing theory. These designs are easy to implement, maintain the prescribed Type I error probability, and are asymptotically efficient. Practical issues involved in clinical trials allowing mid-course adaptation and the large literature on this subject are discussed, and comparisons between the proposed and existing designs are presented in extensive simulation studies of their finite-sample performance, measured in terms of the expected sample size and power functions.

  14. Fast active contour tracking algorithm based on log-likelihood image segmentation

    Institute of Scientific and Technical Information of China (English)

    杨华; 陈善静; 曾凯; 张红

    2012-01-01

    A fast active contour tracking (ACT) algorithm based on log-likelihood image segmentation is proposed to handle scale changes of the target during tracking. The algorithm segments images by thresholding the log-likelihood image built from the colour difference between target and background, applies mathematical morphology processing, and then tracks the target with a conventional ACT algorithm combined with a Kalman filter. It segments the target accurately, yields distinct contour features and stable tracking performance, and adapts well to changes in target scale. The Kalman filter's prediction of the target position reduces the number of iterations needed for the ACT algorithm to converge, making the fast ACT algorithm about 33% more efficient than the conventional one.

  15. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.

  16. Vestige: Maximum likelihood phylogenetic footprinting

    Directory of Open Access Journals (Sweden)

    Maxwell Peter

    2005-05-01

    Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational

  17. The Laplace Likelihood Ratio Test for Heteroscedasticity

    Directory of Open Access Journals (Sweden)

    J. Martin van Zyl

    2011-01-01

    Full Text Available It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test, assuming normality, is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data is partitioned into groups.
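
    A minimal sketch under an assumed two-group setting (not the paper's simulations): the Laplace MLEs are the group median and the mean absolute deviation about it, so the profile log-likelihoods, and hence the likelihood ratio for equal scales, take a very simple form.

```python
# Minimal sketch (assumed two-group setting, not the paper's simulations): a
# likelihood-ratio test for equal scale under the Laplace distribution.  The
# Laplace MLEs are the group median (location) and the mean absolute deviation
# from it (scale), which makes the profile log-likelihood very simple.
import numpy as np
from scipy.stats import chi2

def laplace_lr_test(*groups):
    groups = [np.asarray(g, dtype=float) for g in groups]
    abs_dev = [np.abs(g - np.median(g)) for g in groups]
    n = np.array([len(g) for g in groups])

    # Alternative: each group has its own scale b_g = mean |x - median|.
    b_alt = np.array([d.mean() for d in abs_dev])
    ll_alt = np.sum(-n * np.log(2.0 * b_alt) - n)

    # Null: a single common scale (group-specific locations kept).
    b_null = np.concatenate(abs_dev).mean()
    ll_null = np.sum(-n * np.log(2.0 * b_null) - n)

    lr = 2.0 * (ll_alt - ll_null)
    return lr, chi2.sf(lr, df=len(groups) - 1)   # approximate chi-square p-value

rng = np.random.default_rng(7)
a = rng.laplace(scale=1.0, size=150)
b = rng.laplace(scale=1.6, size=150)
print(laplace_lr_test(a, b))
```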

  18. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

    Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is assessed using historical data for the domestic economy and for the foreign economy, represented by the countries of the Eurozone. Because the forecast accuracy of the models differs, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix and model ranks are used to combine the models, with an equal-weight scheme as a simple benchmark combination. The results show that the optimally combined densities are comparable to the best individual models.

  19. Parameter likelihood of intrinsic ellipticity correlations

    CERN Document Server

    Capranico, Federica; Schaefer, Bjoern Malte

    2012-01-01

    Subject of this paper are the statistical properties of ellipticity alignments between galaxies evoked by their coupled angular momenta. Starting from physical angular momentum models, we bridge the gap towards ellipticity correlations, ellipticity spectra and derived quantities such as aperture moments, comparing the intrinsic signals with those generated by gravitational lensing, with the projected galaxy sample of EUCLID in mind. We investigate the dependence of intrinsic ellipticity correlations on cosmological parameters and show that intrinsic ellipticity correlations give rise to non-Gaussian likelihoods as a result of nonlinear functional dependencies. Comparing intrinsic ellipticity spectra to weak lensing spectra we quantify the magnitude of their contaminating effect on the estimation of cosmological parameters and find that biases on dark energy parameters are very small in an angular-momentum based model in contrast to the linear alignment model commonly used. Finally, we quantify whether intrins...

  20. Epoch-based likelihood models reveal no evidence for accelerated evolution of viviparity in squamate reptiles in response to cenozoic climate change.

    Science.gov (United States)

    King, Benedict; Lee, Michael S Y

    2015-09-01

    A broad scale analysis of the evolution of viviparity across nearly 4,000 species of squamates revealed that origins increase in frequency toward the present, raising the question of whether rates of change have accelerated. We here use simulations to show that the increased frequency is within the range expected given that the number of squamate lineages also increases with time. Novel, epoch-based methods implemented in BEAST (which allow rates of discrete character evolution to vary across time-slices) also give congruent results, with recent epochs having very similar rates to older epochs. Thus, contrary to expectations, there was no accelerated burst of origins of viviparity in response to global cooling during the Cenozoic or glacial cycles during the Plio-Pleistocene. However, if one accepts the conventional view that viviparity is more likely to evolve than to be lost, and also the evidence here that viviparity has evolved with similar regularity throughout the last 200 Ma, then the absence of large, ancient clades of viviparous squamates (analogs to therian mammals) requires explanation. Viviparous squamate lineages might be more prone to extinction than are oviparous lineages, due to their prevalence at high elevations and latitudes and thus greater susceptibility to climate fluctuations. If so, the directional bias in character evolution would be offset by the bias in extinction rates.

  1. Systematic Identification of a Two-compartment Model Based on the Maximum Likelihood Method

    Institute of Scientific and Technical Information of China (English)

    张应云; 张榆锋; 王勇; 李敬敬; 施心陵

    2014-01-01

    An approach based on the maximum likelihood method is presented for identifying the parameters of a two-compartment model. To verify its performance, the parameter estimates of the two-compartment model and their absolute errors are compared with those obtained from the recursive augmented least-squares algorithm. The accuracy and feasibility of the parameter identification for the nonlinear two-compartment model obtained with the maximum likelihood method are clearly better than those of the recursive augmented least-squares method. The resulting parameter estimates, with their smaller errors, can be used in related clinical trials and improve the practical utility of the nonlinear two-compartment model.

  2. Likelihood Inference for a Nonstationary Fractional Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d - b, where d ≥ b > 1/2 are parameters to be estimated. We model the data X_1, ..., X_T given the initial...

  3. Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X(t) to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β...

  4. Essays in Likelihood-Based Computational Econometrics

    NARCIS (Netherlands)

    T. Salimans (Tim)

    2013-01-01

    The theory of probabilities is basically only common sense reduced to a calculus. Pierre Simon Laplace, 1812. The quote above is from Pierre Simon Laplace's introduction to his seminal work Théorie analytique des probabilités, in which he lays the groundwork for what is currently known

  5. What Is the Best Method to Fit Time-Resolved Data? A Comparison of the Residual Minimization and the Maximum Likelihood Techniques As Applied to Experimental Time-Correlated, Single-Photon Counting Data.

    Science.gov (United States)

    Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A; Vaswani, Namrata; Petrich, Jacob W

    2016-03-10

    The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as "residual minimization" (RM) and "maximum likelihood" (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of "photon counts" was approximately 20, 200, 1000, 3000, and 6000 and there were about 2-200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson's weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. The robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
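
    The sketch below reproduces the flavour of the comparison on simulated data only (no instrument-response convolution, unlike the paper's analysis): a sparse single-exponential decay is fitted both by Poisson maximum likelihood and by unweighted residual minimization, and the two lifetime estimates are compared.

```python
# Minimal sketch (simulated data, no IRF convolution, unlike the paper): fit a
# single-exponential fluorescence decay to sparse photon counts by Poisson
# maximum likelihood (ML) and by unweighted residual minimization (RM).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
t = np.linspace(0.0, 10.0, 256)                    # time bins (ns)
tau_true, amp_true = 0.53, 2.0                     # ~530 ps lifetime
counts = rng.poisson(amp_true * np.exp(-t / tau_true))   # very sparse decay

def model(params):
    amp, tau = params
    return amp * np.exp(-t / tau)

def poisson_nll(params):                           # ML objective
    if min(params) <= 0:
        return np.inf
    mu = np.clip(model(params), 1e-12, None)
    return np.sum(mu - counts * np.log(mu))

def sum_sq(params):                                # RM objective
    if min(params) <= 0:
        return np.inf
    return np.sum((counts - model(params)) ** 2)

x0 = (1.0, 1.0)
tau_ml = minimize(poisson_nll, x0, method="Nelder-Mead").x[1]
tau_rm = minimize(sum_sq, x0, method="Nelder-Mead").x[1]
print(f"tau ML = {tau_ml:.3f} ns, tau RM = {tau_rm:.3f} ns")
```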

  6. MDA based-approach for UML Models Complete Comparison

    CERN Document Server

    Chaouni, Samia Benabdellah; Mouline, Salma

    2011-01-01

    If a modeling task is distributed, it will frequently be necessary to integrate models developed by different team members. Problems occur in the model integration step and particularly in the comparison phase of the integration. This issue has been discussed in several domains and for various kinds of models. However, previous approaches have not correctly handled the semantic comparison. In the current paper, we provide an MDA-based approach for model comparison which aims at comparing UML models. We develop a hybrid approach which takes into account syntactic, semantic and structural comparison aspects. For this purpose, we use the domain ontology as well as other resources such as dictionaries. We propose a decision support system which permits the user to validate (or not) the correspondences extracted in the comparison phase. For implementation, we propose an extension of the generic correspondence metamodel AMW in order to transform UML models to the correspondence model.

  7. Improved Likelihood Ratio Tests for Cointegration Rank in the VAR Model

    DEFF Research Database (Denmark)

    Boswijk, H. Peter; Jansson, Michael; Nielsen, Morten Ørregaard

    The power gains relative to existing tests are due to two factors. First, instead of basing our tests on the conditional (with respect to the initial observations) likelihood, we follow the recent unit root literature and base our tests on the full likelihood as in, e.g., Elliott, Rothenberg, and Stock ... We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally...

  8. Semiparametric maximum likelihood for nonlinear regression with measurement errors.

    Science.gov (United States)

    Suh, Eun-Young; Schafer, Daniel W

    2002-06-01

    This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The illustration of the example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.

  9. Approximated maximum likelihood estimation in multifractal random walks

    CERN Document Server

    Løvsletten, Ola

    2011-01-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.

  10. Factors Influencing the Intended Likelihood of Exposing Sexual Infidelity.

    Science.gov (United States)

    Kruger, Daniel J; Fisher, Maryanne L; Fitzgerald, Carey J

    2015-08-01

    There is a considerable body of literature on infidelity within romantic relationships. However, there is a gap in the scientific literature on factors influencing the likelihood of uninvolved individuals exposing sexual infidelity. Therefore, we devised an exploratory study examining a wide range of potentially relevant factors. Based in part on evolutionary theory, we anticipated nine potential domains or types of influences on the likelihoods of exposing or protecting cheaters, including kinship, strong social alliances, financial support, previous relationship behaviors (including infidelity and abuse), potential relationship transitions, stronger sexual and emotional aspects of the extra-pair relationship, and disease risk. The pattern of results supported these predictions (N = 159 men, 328 women). In addition, there appeared to be a small positive bias for participants to report infidelity when provided with any additional information about the situation. Overall, this study contributes a broad initial description of factors influencing the predicted likelihood of exposing sexual infidelity and encourages further studies in this area.

  11. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.

  12. IMPROVING VOICE ACTIVITY DETECTION VIA WEIGHTING LIKELIHOOD AND DIMENSION REDUCTION

    Institute of Scientific and Technical Information of China (English)

    Wang Huanliang; Han Jiqing; Li Haifeng; Zheng Tieran

    2008-01-01

    The performance of traditional Voice Activity Detection (VAD) algorithms declines sharply in low Signal-to-Noise Ratio (SNR) environments. In this paper, a feature-weighted likelihood method is proposed for noise-robust VAD. The method increases the contribution of dynamic features to the likelihood score, which in turn improves the noise robustness of VAD. A divergence-based dimension reduction method is also proposed to save computation: feature dimensions with smaller divergence values are removed at the cost of a slight performance degradation. Experimental results on the Aurora II database show that the proposed method markedly improves detection performance in noisy environments when a model trained on clean data is used to detect speech endpoints, and that applying the weighted likelihood to the dimension-reduced features yields comparable or even better performance than the original full-dimensional features.
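
    To illustrate the feature-weighting idea in isolation (a toy two-class example, not the paper's system or the Aurora II setup), the sketch below scores a frame under diagonal-covariance Gaussian speech and noise models with a per-dimension weighted log-likelihood, so that dynamic-feature dimensions can contribute more to the decision.

```python
# Minimal sketch (toy two-class example, not the paper's system): frame-level
# voice-activity scoring with a per-dimension weighted log-likelihood under
# diagonal-covariance Gaussian models, so that "dynamic" feature dimensions
# can be given a larger weight in the decision.
import numpy as np

def weighted_loglik(frame, mean, var, w):
    # Sum of per-dimension Gaussian log-densities, each scaled by its weight.
    ll_dim = -0.5 * (np.log(2.0 * np.pi * var) + (frame - mean) ** 2 / var)
    return np.sum(w * ll_dim)

rng = np.random.default_rng(10)
dim = 6                                            # e.g. 3 static + 3 dynamic features
w = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])       # emphasise dynamic features

speech_mean, speech_var = np.ones(dim), np.full(dim, 1.0)
noise_mean, noise_var = np.zeros(dim), np.full(dim, 1.5)

frame = rng.normal(speech_mean, np.sqrt(speech_var))
score = (weighted_loglik(frame, speech_mean, speech_var, w)
         - weighted_loglik(frame, noise_mean, noise_var, w))
print("speech" if score > 0.0 else "non-speech", score)
```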

  13. Profile likelihood maps of a 15-dimensional MSSM

    NARCIS (Netherlands)

    Strege, C.; Bertone, G.; Besjes, G.J.; Caron, S.; Ruiz de Austri, R.; Strubig, A.; Trotta, R.

    2014-01-01

    We present statistically convergent profile likelihood maps obtained via global fits of a phenomenological Minimal Supersymmetric Standard Model with 15 free parameters (the MSSM-15), based on over 250M points. We derive constraints on the model parameters from direct detection limits on dark matter

  14. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  15. A new phase comparison pilot protection based on wavelet transform

    Institute of Scientific and Technical Information of China (English)

    YANG Ying; TAI Neng-ling; YU Wei-yong

    2006-01-01

    Current-phase-comparison pilot protection was generally utilized as the primary protection of transmission lines in China from the 1950s to the 1980s. Conventional phase comparison pilot protection has a long phase comparison time, which results in a longer fault-clearing time. This paper proposes a new current phase comparison pilot protection scheme that is based on the non-power-frequency fault current component. The phase of the fourth harmonic current at each end of the protected line is extracted by means of a complex wavelet transform and then compared in order to determine whether an internal fault has occurred. This greatly decreases the fault-clearing time and improves the performance of the pilot protection when faults occur under heavy load current and asymmetrical operating conditions. Extensive EMTP simulations have verified the proposed scheme's correctness and effectiveness.

  16. Phase Compensation Algorithm Based on Maximum Likelihood Estimation and DSP Technology

    Institute of Scientific and Technical Information of China (English)

    铁维昊; 王文利; 路灿

    2012-01-01

    To address the influence of the nonlinearity and lack of synchronization of the low-voltage power line channel on the phase of the carrier signal, a phase compensation algorithm based on maximum likelihood estimation is proposed. The influence of synchronization on the bit error rate is analyzed first, the principle of the phase compensation algorithm and its key techniques are then introduced, and finally the algorithm is implemented on a DSP and tested on an actual low-voltage channel. The results show that the proposed algorithm can effectively eliminate the signal distortion caused by channel nonlinearity in digital communication.

  17. Introductory statistical inference with the likelihood function

    CERN Document Server

    Rohde, Charles A

    2014-01-01

    This textbook covers the fundamentals of statistical inference and statistical theory including Bayesian and frequentist approaches and methodology possible without excessive emphasis on the underlying mathematics. This book is about some of the basic principles of statistics that are necessary to understand and evaluate methods for analyzing complex data sets. The likelihood function is used for pure likelihood inference throughout the book. There is also coverage of severity and finite population sampling. The material was developed from an introductory statistical theory course taught by the author at the Johns Hopkins University’s Department of Biostatistics. Students and instructors in public health programs will benefit from the likelihood modeling approach that is used throughout the text. This will also appeal to epidemiologists and psychometricians.  After a brief introduction, there are chapters on estimation, hypothesis testing, and maximum likelihood modeling. The book concludes with secti...

  18. Maximum-likelihood method in quantum estimation

    CERN Document Server

    Paris, M G A; Sacchi, M F

    2001-01-01

    The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.

  19. Maximum-likelihood cluster reconstruction

    CERN Document Server

    Bartelmann, M; Seitz, S; Schneider, P J; Bartelmann, Matthias; Narayan, Ramesh; Seitz, Stella; Schneider, Peter

    1996-01-01

    We present a novel method to reconstruct the mass distribution of galaxy clusters from their gravitational lens effect on background galaxies. The method is based on a least-chi-square fit of the two-dimensional gravitational cluster potential. The method combines information from shear and magnification by the cluster lens and is designed to easily incorporate possible additional information. We describe the technique and demonstrate its feasibility with simulated data. Both the cluster morphology and the total cluster mass are well reproduced.

  20. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors are proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, the structures of which depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.

  1. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

    We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ{sup 0}{sub 1}, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m{sub χ{sup 0}{sub 1}} <or similar 3 TeV after the inclusion of Sommerfeld enhancement in its annihilations, but the scalar mass m{sub 0} is poorly constrained. In the wino-LSP case, m{sub 3/2} is constrained to about 900 TeV and m{sub χ{sup 0}{sub 1}} to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case m{sub 3/2} has just a lower limit >or similar 650 TeV (>or similar 480 TeV) and m{sub χ{sup 0}{sub 1}} is constrained to 1.12 (1.13) ± 0.02 TeV in the μ > 0 (μ < 0) scenario. In neither case can the anomalous magnetic moment of the muon, (g-2){sub μ}, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the χ{sup 0}{sub 1} contributes only a fraction of the cold DM density, future LHC E{sub T}-based searches for gluinos, squarks and heavier chargino and neutralino states as well as disappearing track searches in the wino-like LSP region will be relevant, and interference effects enable BR(B{sub s,d} → μ{sup +}μ{sup -}) to agree with the data better than in the SM in the case of wino-like DM with μ > 0. (orig.)

  2. Conditional Likelihood Estimators for Hidden Markov Models and Stochastic Volatility Models

    OpenAIRE

    Genon-Catalot, Valentine; Jeantheau, Thierry; Laredo, Catherine

    2003-01-01

    ABSTRACT. This paper develops a new contrast process for parametric inference of general hidden Markov models, when the hidden chain has a non-compact state space. This contrast is based on the conditional likelihood approach, often used for ARCH-type models. We prove the strong consistency of the conditional likelihood estimators under appropriate conditions. The method is applied to the Kalman filter (for which this contrast and the exact likelihood lead to asymptotically equivalent estimat...

  3. Asymptotic behavior of the likelihood function of covariance matrices of spatial Gaussian processes

    DEFF Research Database (Denmark)

    Zimmermann, Ralf

    2010-01-01

    The covariance structure of spatial Gaussian predictors (aka Kriging predictors) is generally modeled by parameterized covariance functions; the associated hyperparameters in turn are estimated via the method of maximum likelihood. In this work, the asymptotic behavior of the maximum likelihood......: optimally trained nondegenerate spatial Gaussian processes cannot feature arbitrarily ill-conditioned correlation matrices. The implication of this theorem on Kriging hyperparameter optimization is exposed. A nonartificial example is presented, where maximum likelihood-based Kriging model training...

  4. Likelihood Analysis of Supersymmetric SU(5) GUTs

    CERN Document Server

    Bagnaschi, E.

    2017-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...

  5. Likelihood Analysis of Supersymmetric SU(5) GUTs

    CERN Document Server

    Bagnaschi, E.; Sakurai, K.; Borsato, M.; Buchmueller, O.; Cavanaugh, R.; Chobanova, V.; Citron, M.; De Roeck, A.; Dolan, M.J.; Ellis, J.R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Lucio, M.; Martínez Santos, D.; Olive, K.A.; Richards, A.; de Vries, K.J.; Weiglein, G.

    2016-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...

  6. Likelihood Analysis of Supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY; Costa, J. C. [Imperial Coll., London; Sakurai, K. [Warsaw U.; Borsato, M. [Santiago de Compostela U.; Buchmueller, O. [Imperial Coll., London; Cavanaugh, R. [Illinois U., Chicago; Chobanova, V. [Santiago de Compostela U.; Citron, M. [Imperial Coll., London; De Roeck, A. [Antwerp U.; Dolan, M. J. [Melbourne U.; Ellis, J. R. [King' s Coll. London; Flächer, H. [Bristol U.; Heinemeyer, S. [Madrid, IFT; Isidori, G. [Zurich U.; Lucio, M. [Santiago de Compostela U.; Martínez Santos, D. [Santiago de Compostela U.; Olive, K. A. [Minnesota U., Theor. Phys. Inst.; Richards, A. [Imperial Coll., London; de Vries, K. J. [Imperial Coll., London; Weiglein, G. [DESY

    2016-10-31

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel ${\\tilde u_R}/{\\tilde c_R} - \\tilde{\\chi}^0_1$ coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ${\\tilde \\nu}_\\tau$ coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.

  7. Maximum Likelihood TOA Estimation Algorithm Based on Multi-carrier Time-frequency Iteration

    Institute of Scientific and Technical Information of China (English)

    程刘胜

    2015-01-01

    On the basis of a rational layout of underground wireless network base stations, a maximum likelihood TOA (Time of Arrival) estimation algorithm based on multi-carrier time-frequency iteration is proposed for mining UWB high-accuracy positioning, where underground multipath, non-line-of-sight propagation and network time-synchronization errors cause large deviations in the estimated arrival delay. The algorithm iterates on the fractional delay to narrow the estimation error and selects a suitable search step, yielding an accurate TOA estimate of the signal. Simulation results show that the time-frequency iterative maximum likelihood TOA estimator converges faster than the non-iterative algorithm and, at low signal-to-noise ratios, improves the estimation accuracy compared with classical TOA estimation algorithms.

  8. Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.

    Science.gov (United States)

    1986-05-01

    The consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers, for example Wald (1949) and Wolfowitz (1953, 1965). Cited references: Wald, A. (1949). Note on the consistency of maximum likelihood estimates. Ann. Math. Statist., Vol. 20, 595-601; Wolfowitz, J. (1953). The method of maximum likelihood and Wald theory of decision functions. Indag. Math., Vol. 15, 114-119.

  9. Genomic comparisons of Brucella spp. and closely related bacteria using base compositional and proteome based methods

    DEFF Research Database (Denmark)

    Bohlin, Jon; Snipen, Lars; Cloeckaert, Axel

    2010-01-01

    , genomic codon and amino acid frequencies based comparisons) and proteomes (all-against-all BLAST protein comparisons and pan-genomic analyses). RESULTS: We found that the oligonucleotide based methods gave different results compared to that of the proteome based methods. Differences were also found...... than proteome comparisons between species in genus Brucella and genus Ochrobactrum. Pan-genomic analyses indicated that uptake of DNA from outside genus Brucella appears to be limited. CONCLUSIONS: While both the proteome based methods and the Markov chain based genomic signatures were able to reflect...

  10. CMB likelihood approximation by a Gaussianized Blackwell-Rao estimator

    CERN Document Server

    Rudjord, Ø; Eriksen, H K; Huey, Greg; Górski, K M; Jewell, J B

    2008-01-01

    We introduce a new CMB temperature likelihood approximation called the Gaussianized Blackwell-Rao (GBR) estimator. This estimator is derived by transforming the observed marginal power spectrum distributions obtained by the CMB Gibbs sampler into standard univariate Gaussians, and then approximating their joint transformed distribution by a multivariate Gaussian. The method is exact for full-sky coverage and uniform noise, and an excellent approximation for sky cuts and scanning patterns relevant for modern satellite experiments such as WMAP and Planck. A single evaluation of this estimator between l=2 and 200 takes ~0.2 CPU milliseconds, while for comparison, a single pixel space likelihood evaluation between l=2 and 30 for a map with ~2500 pixels requires ~20 seconds. We apply this tool to the 5-year WMAP temperature data, and re-estimate the angular temperature power spectrum, $C_{\\ell}$, and likelihood, L(C_l), for l<=200, and derive new cosmological parameters for the standard six-parameter LambdaCDM mo...
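
    The two-step construction described in this record (marginal Gaussianization of per-multipole spectra followed by a joint multivariate Gaussian) can be illustrated with a small sketch. This is only a toy rank-based version of the idea, not the WMAP/Planck implementation; the synthetic Gibbs draws, array shapes and function names are assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm

    def gaussianize(samples, x):
        """Map x to a standard-normal score using the empirical CDF of `samples`."""
        # rank-based empirical CDF, kept strictly inside (0, 1) to avoid infinities
        cdf = (np.searchsorted(np.sort(samples), x) + 0.5) / (len(samples) + 1.0)
        return norm.ppf(cdf)

    def gbr_like_loglike(gibbs_samples, cl_trial):
        """Toy Gaussianized Blackwell-Rao-style log-likelihood.

        gibbs_samples : (n_samples, n_ell) array of power-spectrum draws
        cl_trial      : (n_ell,) trial spectrum to evaluate
        """
        n_samples, n_ell = gibbs_samples.shape
        # 1) Gaussianize each multipole marginally
        z_samples = np.column_stack([
            gaussianize(gibbs_samples[:, l], gibbs_samples[:, l]) for l in range(n_ell)
        ])
        z_trial = np.array([gaussianize(gibbs_samples[:, l], cl_trial[l]) for l in range(n_ell)])
        # 2) Approximate the joint transformed distribution by a multivariate Gaussian
        cov = np.cov(z_samples, rowvar=False) + 1e-9 * np.eye(n_ell)
        sign, logdet = np.linalg.slogdet(cov)
        resid = np.linalg.solve(cov, z_trial)
        return -0.5 * (z_trial @ resid + logdet + n_ell * np.log(2 * np.pi))

    # toy usage with synthetic "Gibbs" draws standing in for C_l samples
    rng = np.random.default_rng(0)
    draws = rng.gamma(shape=5.0, scale=200.0, size=(2000, 10))
    print(gbr_like_loglike(draws, draws.mean(axis=0)))
    ```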

  11. Clock comparison based on laser ranging technologies

    Science.gov (United States)

    Samain, Etienne

    2015-06-01

    Recent progress in the domain of time and frequency standards has required some important improvements of existing time transfer links. Several time transfer by laser link (T2L2) projects have been carried out since 1972 with numerous scientific or technological objectives. Two projects are currently in operation: T2L2 and Lunar Reconnaissance Orbiter (LRO). The former is a dedicated two-way time transfer experiment embedded on the satellite Jason-2, allowing for the synchronization of remote clocks with an uncertainty of 100 ps, and the latter is a one-way link devoted to ranging a spacecraft orbiting the Moon. There is also the Laser Time Transfer (LTT) project, operated until 2012 and designed in the frame of the Chinese navigation constellation. In the context of future space missions for fundamental physics, solar system science or navigation, laser links are of prime importance and many missions based on that technology have been proposed for these purposes.

  12. Price Comparisons on the Internet Based on Computational Intelligence

    Science.gov (United States)

    Kim, Jun Woo; Ha, Sung Ho

    2014-01-01

    Information-intensive Web services such as price comparison sites have recently been gaining popularity. However, most users including novice shoppers have difficulty in browsing such sites because of the massive amount of information gathered and the uncertainty surrounding Web environments. Even conventional price comparison sites face various problems, which suggests the necessity of a new approach to address these problems. Therefore, for this study, an intelligent product search system was developed that enables price comparisons for online shoppers in a more effective manner. In particular, the developed system adopts linguistic price ratings based on fuzzy logic to accommodate user-defined price ranges, and personalizes product recommendations based on linguistic product clusters, which help online shoppers find desired items in a convenient manner. PMID:25268901

  13. Price comparisons on the internet based on computational intelligence.

    Directory of Open Access Journals (Sweden)

    Jun Woo Kim

    Full Text Available Information-intensive Web services such as price comparison sites have recently been gaining popularity. However, most users including novice shoppers have difficulty in browsing such sites because of the massive amount of information gathered and the uncertainty surrounding Web environments. Even conventional price comparison sites face various problems, which suggests the necessity of a new approach to address these problems. Therefore, for this study, an intelligent product search system was developed that enables price comparisons for online shoppers in a more effective manner. In particular, the developed system adopts linguistic price ratings based on fuzzy logic to accommodate user-defined price ranges, and personalizes product recommendations based on linguistic product clusters, which help online shoppers find desired items in a convenient manner.

  14. Precise Estimation of Cosmological Parameters Using a More Accurate Likelihood Function

    Science.gov (United States)

    Sato, Masanori; Ichiki, Kiyotomo; Takeuchi, Tsutomu T.

    2010-12-01

    The estimation of cosmological parameters from a given data set requires a construction of a likelihood function which, in general, has a complicated functional form. We adopt a Gaussian copula and construct a copula likelihood function for the convergence power spectrum from a weak lensing survey. We show that the parameter estimation based on the Gaussian likelihood erroneously introduces a systematic shift in the confidence region, in particular for the dark energy equation-of-state parameter w. Thus, the copula likelihood should be used in future cosmological observations.
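
    As a rough illustration of how a Gaussian copula couples given marginal likelihoods through a correlation matrix, the sketch below evaluates a copula log-likelihood for a toy two-dimensional case. The marginals, correlation matrix and function names are assumed for illustration and are not the weak-lensing pipeline of the record.

    ```python
    import numpy as np
    from scipy.stats import norm

    def gaussian_copula_logdensity(u, corr):
        """Log-density of a Gaussian copula at marginal CDF values u (each in (0,1))."""
        z = norm.ppf(u)                       # map to standard-normal scores
        sign, logdet = np.linalg.slogdet(corr)
        inv = np.linalg.inv(corr)
        # copula density: |R|^{-1/2} exp(-0.5 z^T (R^{-1} - I) z)
        return -0.5 * logdet - 0.5 * z @ (inv - np.eye(len(u))) @ z

    def copula_loglike(x, marg_logpdf, marg_cdf, corr):
        """Full log-likelihood = copula term + sum of marginal log-densities."""
        u = np.clip(marg_cdf(x), 1e-12, 1 - 1e-12)
        return gaussian_copula_logdensity(u, corr) + np.sum(marg_logpdf(x))

    # toy usage with Gaussian marginals (in practice the marginals of the band-power
    # estimates would be calibrated from simulations)
    corr = np.array([[1.0, 0.6], [0.6, 1.0]])
    x = np.array([0.3, -0.1])
    print(copula_loglike(x, norm.logpdf, norm.cdf, corr))
    ```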

  15. Gross Error Detection for Nonlinear Dynamic Chemical Processes Based on Generalized Likelihood Ratios

    Institute of Scientific and Technical Information of China (English)

    王莉; 金思毅; 黄兆杰

    2013-01-01

    The generalized likelihood ratio (GLR) method is an effective gross error detection technique for linear, steady-state data reconciliation in chemical processes. By recasting the differential and algebraic constraints of the dynamic data reconciliation model in matrix form and linearizing the nonlinear constraints, GLR was successfully applied to a continuous stirred tank reactor (CSTR) nonlinear dynamic system, and its gross error detection performance in this system was evaluated. Statistical results show that the detection rate depends on the size of the gross error and on the length of the moving window: it improves as the gross error grows and as the window length increases.
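
    For orientation, the classical GLR test for a single measurement bias in a linear, steady-state reconciliation problem can be sketched as follows; the tiny flow-split example and variable names are assumptions, not the CSTR model or the moving-window implementation described above.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def glr_gross_error_test(A, Sigma, y, alpha=0.05):
        """Classical GLR test for a single measurement bias in linear reconciliation.

        A     : (m, n) linear constraint matrix (A @ x_true = 0)
        Sigma : (n, n) measurement error covariance
        y     : (n,) measured values
        """
        r = A @ y                          # constraint residuals
        V = A @ Sigma @ A.T                # residual covariance under the no-error hypothesis
        Vinv = np.linalg.inv(V)
        stats = []
        for j in range(A.shape[1]):
            f = A[:, j]                    # signature of a bias in measurement j
            stats.append((f @ Vinv @ r) ** 2 / (f @ Vinv @ f))
        T = np.array(stats)
        crit = chi2.ppf(1 - alpha, df=1)   # threshold (multiple-test corrections omitted)
        best = int(np.argmax(T))
        return best, T[best], T[best] > crit

    # toy flow network: stream 1 splits into streams 2 and 3 (y1 - y2 - y3 = 0)
    A = np.array([[1.0, -1.0, -1.0]])
    Sigma = np.diag([0.1, 0.1, 0.1])
    y = np.array([10.0, 4.0, 7.5])        # stream 3 carries a gross error
    print(glr_gross_error_test(A, Sigma, y))
    ```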

  16. Community Level Disadvantage and the Likelihood of First Ischemic Stroke

    Directory of Open Access Journals (Sweden)

    Bernadette Boden-Albala

    2012-01-01

    Full Text Available Background and Purpose. Residing in “disadvantaged” communities may increase morbidity and mortality independent of individual social resources and biological factors. This study evaluates the impact of population-level disadvantage on incident ischemic stroke likelihood in a multiethnic urban population. Methods. A population based case-control study was conducted in an ethnically diverse community of New York. First ischemic stroke cases and community controls were enrolled and a stroke risk assessment performed. Data regarding population level economic indicators for each census tract was assembled using geocoding. Census variables were also grouped together to define a broader measure of collective disadvantage. We evaluated the likelihood of stroke for population-level variables controlling for individual social (education, social isolation, and insurance) and vascular risk factors. Results. We age-, sex-, and race-ethnicity-matched 687 incident ischemic stroke cases to 1153 community controls. The mean age was 69 years: 60% women; 22% white, 28% black, and 50% Hispanic. After adjustment, the index of community level disadvantage (OR 2.0, 95% CI 1.7–2.1) was associated with increased stroke likelihood overall and among all three race-ethnic groups. Conclusion. Social inequalities measured by census tract data including indices of community disadvantage confer a significant likelihood of ischemic stroke independent of conventional risk factors.

  17. Maximum likelihood Jukes-Cantor triplets: analytic solutions.

    Science.gov (United States)

    Chor, Benny; Hendy, Michael D; Snir, Sagi

    2006-03-01

    Maximum likelihood (ML) is a popular method for inferring a phylogenetic tree of the evolutionary relationship of a set of taxa, from observed homologous aligned genetic sequences of the taxa. Generally, the computation of the ML tree is based on numerical methods, which in a few cases are known to converge to a local maximum on a tree, which is suboptimal. The extent of this problem is unknown; one approach is to attempt to derive algebraic equations for the likelihood function and find the maximum points analytically. This approach has so far only been successful in the very simplest cases, of three or four taxa under the Neyman model of evolution of two-state characters. In this paper we extend this approach, for the first time, to four-state characters, the Jukes-Cantor model under a molecular clock, on a tree T on three taxa, a rooted triple. We employ spectral methods (Hadamard conjugation) to express the likelihood function parameterized by the path-length spectrum. Taking partial derivatives, we derive a set of polynomial equations whose simultaneous solution contains all critical points of the likelihood function. Using tools of algebraic geometry (the resultant of two polynomials) in a computer algebra package (Maple), we are able to find all turning points analytically. We then employ this method on real sequence data and obtain realistic results on the primate-rodents divergence time.
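
    The object being maximized can be written down directly: the sketch below numerically evaluates the log-likelihood of a rooted triple ((A,B),C) under the Jukes-Cantor model with a molecular clock by summing over the unobserved internal states. It is a plain numerical evaluation under assumed branch lengths and toy sequences, not the analytic Hadamard-conjugation solution of the record.

    ```python
    import numpy as np
    from itertools import product

    BASES = "ACGT"

    def jc_prob(i, j, t):
        """Jukes-Cantor transition probability over a branch of length t (subs/site)."""
        e = np.exp(-4.0 * t / 3.0)
        return 0.25 + 0.75 * e if i == j else 0.25 - 0.25 * e

    def triplet_site_likelihood(a, b, c, t1, t2):
        """P(a, b, c) on the clock tree ((A:t1, B:t1):t2, C:t1+t2)."""
        like = 0.0
        for root, mid in product(range(4), repeat=2):   # sum over unobserved states
            like += (0.25                               # uniform root distribution
                     * jc_prob(root, mid, t2)
                     * jc_prob(mid, a, t1) * jc_prob(mid, b, t1)
                     * jc_prob(root, c, t1 + t2))
        return like

    def triplet_loglike(seq_a, seq_b, seq_c, t1, t2):
        idx = {ch: k for k, ch in enumerate(BASES)}
        return sum(np.log(triplet_site_likelihood(idx[x], idx[y], idx[z], t1, t2))
                   for x, y, z in zip(seq_a, seq_b, seq_c))

    # toy data: A and B are more similar to each other than either is to C
    print(triplet_loglike("ACGTACGT", "ACGTACGA", "ATGTTCGA", t1=0.05, t2=0.15))
    ```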

  18. Planck 2013 results. XV. CMB power spectra and likelihood

    CERN Document Server

    Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Hurier, G.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Le Jeune, M.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P.B.; Lindholm, V.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Orieux, F.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; 
Wehus, I.K.; White, M.; White, S.D.M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-01-01

    We present the Planck likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations. We use this likelihood to derive the Planck CMB power spectrum over three decades in l, covering 2 <= l <= 2500. At l >= 50, we employ a correlated Gaussian likelihood approximation based on angular cross-spectra derived from the 100, 143 and 217 GHz channels. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on cosmological parameters. We find good internal agreement among the high-l cross-spectra with residuals of a few uK^2 at l <= 1000. We compare our results with foreground-cleaned CMB maps, and with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. The best-fit LCDM cosmology is in excellent agreement with preliminary Planck polarisation spectra. The standard LCDM cosmology is well constrained b...

  19. Bayesian experimental design for models with intractable likelihoods.

    Science.gov (United States)

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables.
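
    The ABC rejection step on pre-computed simulations mentioned above can be sketched generically as follows; the toy Poisson model, summary statistics and tolerance are assumptions chosen only to keep the example self-contained.

    ```python
    import numpy as np

    def abc_rejection(pre_sims, pre_params, observed_summary, tol):
        """Approximate posterior sample via ABC rejection on pre-computed simulations.

        pre_sims         : (n, k) summary statistics of pre-computed model simulations
        pre_params       : (n, p) parameter values used for those simulations
        observed_summary : (k,) summary statistic of the (real or hypothetical) data
        tol              : distance tolerance for acceptance
        """
        dist = np.linalg.norm(pre_sims - observed_summary, axis=1)
        return pre_params[dist <= tol]

    # toy stochastic model: Poisson counts; summary = (mean, variance) of 20 observations
    rng = np.random.default_rng(1)
    n = 50_000
    prior_lambda = rng.uniform(0.1, 20.0, size=n)          # prior draws
    sims = rng.poisson(prior_lambda[:, None], size=(n, 20))
    summaries = np.column_stack([sims.mean(axis=1), sims.var(axis=1)])

    obs = rng.poisson(6.0, size=20)                         # pretend this is the data
    obs_summary = np.array([obs.mean(), obs.var()])

    posterior = abc_rejection(summaries, prior_lambda[:, None], obs_summary, tol=1.0)
    # the spread of the accepted draws is the kind of precision measure on which a
    # design utility could be built
    print(posterior.mean(), posterior.std(), len(posterior))
    ```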

  20. Automated UMLS-based comparison of medical forms.

    Directory of Open Access Journals (Sweden)

    Martin Dugas

    Full Text Available Medical forms are very heterogeneous: on a European scale there are thousands of data items in several hundred different systems. To enable data exchange for clinical care and research purposes there is a need to develop interoperable documentation systems with harmonized forms for data capture. A prerequisite in this harmonization process is comparison of forms. So far--to our knowledge--an automated method for comparison of medical forms is not available. A form contains a list of data items with corresponding medical concepts. An automatic comparison needs data types, item names and especially items annotated with unique concept codes from medical terminologies. The scope of the proposed method is a comparison of these items by comparing their concept codes (coded in UMLS). Each data item is represented by item name, concept code and value domain. Two items are called identical, if item name, concept code and value domain are the same. Two items are called matching, if only concept code and value domain are the same. Two items are called similar, if their concept codes are the same, but the value domains are different. Based on these definitions an open-source implementation for automated comparison of medical forms in ODM format with UMLS-based semantic annotations was developed. It is available as package compareODM from http://cran.r-project.org. To evaluate this method, it was applied to a set of 7 real medical forms with 285 data items from a large public ODM repository with forms for different medical purposes (research, quality management, routine care). Comparison results were visualized with grid images and dendrograms. Automated comparison of semantically annotated medical forms is feasible. Dendrograms allow a view on clustered similar forms. The approach is scalable for a large set of real medical forms.

  1. Automated UMLS-based comparison of medical forms.

    Science.gov (United States)

    Dugas, Martin; Fritz, Fleur; Krumm, Rainer; Breil, Bernhard

    2013-01-01

    Medical forms are very heterogeneous: on a European scale there are thousands of data items in several hundred different systems. To enable data exchange for clinical care and research purposes there is a need to develop interoperable documentation systems with harmonized forms for data capture. A prerequisite in this harmonization process is comparison of forms. So far--to our knowledge--an automated method for comparison of medical forms is not available. A form contains a list of data items with corresponding medical concepts. An automatic comparison needs data types, item names and especially items annotated with unique concept codes from medical terminologies. The scope of the proposed method is a comparison of these items by comparing their concept codes (coded in UMLS). Each data item is represented by item name, concept code and value domain. Two items are called identical, if item name, concept code and value domain are the same. Two items are called matching, if only concept code and value domain are the same. Two items are called similar, if their concept codes are the same, but the value domains are different. Based on these definitions an open-source implementation for automated comparison of medical forms in ODM format with UMLS-based semantic annotations was developed. It is available as package compareODM from http://cran.r-project.org. To evaluate this method, it was applied to a set of 7 real medical forms with 285 data items from a large public ODM repository with forms for different medical purposes (research, quality management, routine care). Comparison results were visualized with grid images and dendrograms. Automated comparison of semantically annotated medical forms is feasible. Dendrograms allow a view on clustered similar forms. The approach is scalable for a large set of real medical forms.
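
    The identical/matching/similar rules stated in this record translate directly into code. The sketch below uses a made-up item structure rather than the ODM data model of the compareODM package.

    ```python
    from dataclasses import dataclass

    @dataclass
    class FormItem:
        name: str          # item name as shown on the form
        concept: str       # UMLS concept code (CUI)
        value_domain: str  # e.g. "boolean", "integer", "codelist:yes/no"

    def compare_items(a: FormItem, b: FormItem) -> str:
        """Classify a pair of items following the rules quoted in the abstract."""
        if a.concept != b.concept:
            return "different"
        if a.value_domain != b.value_domain:
            return "similar"       # same concept, different value domains
        if a.name != b.name:
            return "matching"      # same concept and value domain, different names
        return "identical"         # name, concept and value domain all agree

    # toy usage with hypothetical items and concept codes
    x = FormItem("Body weight", "C0005910", "float(kg)")
    y = FormItem("Weight", "C0005910", "float(kg)")
    z = FormItem("Weight", "C0005910", "codelist:underweight/normal/overweight")
    print(compare_items(x, y))   # matching
    print(compare_items(x, z))   # similar
    ```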

  2. Likelihood alarm displays. [for human operator

    Science.gov (United States)

    Sorkin, Robert D.; Kantowitz, Barry H.; Kantowitz, Susan C.

    1988-01-01

    In a likelihood alarm display (LAD) information about event likelihood is computed by an automated monitoring system and encoded into an alerting signal for the human operator. Operator performance within a dual-task paradigm was evaluated with two LADs: a color-coded visual alarm and a linguistically coded synthetic speech alarm. The operator's primary task was one of tracking; the secondary task was to monitor a four-element numerical display and determine whether the data arose from a 'signal' or 'no-signal' condition. A simulated 'intelligent' monitoring system alerted the operator to the likelihood of a signal. The results indicated that (1) automated monitoring systems can improve performance on primary and secondary tasks; (2) LADs can improve the allocation of attention among tasks and provide information integrated into operator decisions; and (3) LADs do not necessarily add to the operator's attentional load.

  3. An Adaptive Algorithm for Pairwise Comparison-based Preference Measurement

    DEFF Research Database (Denmark)

    Meissner, Martin; Decker, Reinhold; Scholz, Sören W.

    2011-01-01

    The Pairwise Comparison‐based Preference Measurement (PCPM) approach has been proposed for products featuring a large number of attributes. In the PCPM framework, a static two‐cyclic design is used to reduce the number of pairwise comparisons. However, adaptive questioning routines that maximize...

  4. CORA: Emission Line Fitting with Maximum Likelihood

    Science.gov (United States)

    Ness, Jan-Uwe; Wichmann, Rainer

    2011-12-01

    CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
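
    The core of such a fit is a Poisson log-likelihood for the counts in each spectral bin, maximized over the line parameters. The Gaussian line profile, flat background and optimizer below are illustrative assumptions, not the CORA implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def line_model(x, amplitude, center, sigma, background):
        """Expected counts per bin: Gaussian emission line on a flat background."""
        return background + amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2)

    def neg_poisson_loglike(params, x, counts):
        """Negative Poisson log-likelihood of the observed counts given the model."""
        mu = np.clip(line_model(x, *params), 1e-12, None)
        return np.sum(mu - counts * np.log(mu))       # constant log(counts!) dropped

    # toy low-count spectrum
    rng = np.random.default_rng(2)
    x = np.linspace(20.0, 22.0, 60)                   # wavelength grid (arbitrary units)
    true = (4.0, 21.0, 0.05, 0.3)
    counts = rng.poisson(line_model(x, *true))

    fit = minimize(neg_poisson_loglike, x0=(2.0, 21.1, 0.1, 0.5),
                   args=(x, counts), method="Nelder-Mead")
    print(fit.x)   # ML estimates of amplitude, center, width, background
    ```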

  5. Comparison of Two Distance Based Alignment Method in Medical Imaging

    Science.gov (United States)

    2001-10-25

    ...very helpful to register large datasets of contours or surfaces, commonly encountered in medical imaging. They do not require special ordering or... (G. Bulan, C. Ozturk, Institute of Biomedical Engineering, Bogazici University)

  6. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY, Hamburg (Germany); Costa, J.C. [Imperial College, London (United Kingdom). Blackett Lab.; Sakurai, K. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomonology; Warsaw Univ. (Poland). Inst. of Theoretical Physics; Collaboration: MasterCode Collaboration; and others

    2016-10-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets+E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R}-χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.

  7. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A.; De Vries, K.J. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Sakurai, K. [University of Durham, Science Laboratories, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Chobanova, V.; Lucio, M.; Martinez Santos, D. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Parkville (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Theoretical Physics Department, CERN, Geneva 23 (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Cantoblanco, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Isidori, G. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Olive, K.A. [University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)

    2017-02-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R} - χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC. (orig.)

  8. Maximum likelihood molecular clock comb: analytic solutions.

    Science.gov (United States)

    Chor, Benny; Khetan, Amit; Snir, Sagi

    2006-04-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)

  9. Global self-weighted and local quasi-maximum exponential likelihood estimators for ARMA--GARCH/IGARCH models

    CERN Document Server

    Zhu, Ke; 10.1214/11-AOS895

    2012-01-01

    This paper investigates the asymptotic theory of the quasi-maximum exponential likelihood estimators (QMELE) for ARMA--GARCH models. Under only a fractional moment condition, the strong consistency and the asymptotic normality of the global self-weighted QMELE are obtained. Based on this self-weighted QMELE, the local QMELE is shown to be asymptotically normal for the ARMA model with GARCH (finite variance) and IGARCH errors. A formal comparison of the two estimators is given for some cases. A simulation study is carried out to assess the performance of these estimators, and a real example on the world crude oil price is given.

  10. Improved likelihood ratio tests for cointegration rank in the VAR model

    NARCIS (Netherlands)

    Boswijk, H.P.; Jansson, M.; Nielsen, M.Ø.

    2012-01-01

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally. The po

  11. Improved likelihood ratio tests for cointegration rank in the VAR model

    NARCIS (Netherlands)

    Boswijk, H.P.; Jansson, M.; Nielsen, M.Ø.

    2015-01-01

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but as usual the asymptotic results do not require normally distr

  12. Fast inference in generalized linear models via expected log-likelihoods.

    Science.gov (United States)

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
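
    For a Poisson GLM with the canonical log link and Gaussian covariates, the sum over covariates can be replaced by a closed-form expectation, E[exp(x'theta)] = exp(theta'mu + theta'Sigma theta / 2), which is what the toy sketch below exploits; the data, dimensions and variable names are made up.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_exact_loglike(theta, X, y):
        """Exact (negative) Poisson GLM log-likelihood with log link, constants dropped."""
        eta = X @ theta
        return -(y @ eta - np.sum(np.exp(eta)))

    def neg_expected_loglike(theta, X, y, mu, Sigma):
        """Replace sum_i exp(x_i' theta) by n * E[exp(x' theta)] under x ~ N(mu, Sigma)."""
        n = len(y)
        expected_term = n * np.exp(theta @ mu + 0.5 * theta @ Sigma @ theta)
        return -(y @ (X @ theta) - expected_term)

    # toy data with experimenter-controlled Gaussian covariates
    rng = np.random.default_rng(3)
    d, n = 5, 4000
    mu, Sigma = np.zeros(d), 0.05 * np.eye(d)
    X = rng.multivariate_normal(mu, Sigma, size=n)
    theta_true = rng.normal(scale=0.5, size=d)
    y = rng.poisson(np.exp(X @ theta_true))

    mle = minimize(neg_exact_loglike, np.zeros(d), args=(X, y)).x
    ele = minimize(neg_expected_loglike, np.zeros(d), args=(X, y, mu, Sigma)).x
    print(np.round(theta_true, 3), np.round(mle, 3), np.round(ele, 3))
    ```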

  13. Maintaining symmetry of simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby...
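
    The symmetry-preservation point is easy to see in a small sketch: pairing each quasi-random point u with 1 - u makes simulated draws of a zero-mean mixed parameter exactly symmetric about the mean. The Halton sequence used here is just one convenient low-discrepancy choice and is an assumption, not necessarily the construction used in the paper.

    ```python
    import numpy as np
    from scipy.stats import qmc, norm

    def mixed_parameter_draws(mean, std, n, antithetic=True):
        """Simulate draws of a normally distributed mixed parameter from Halton points."""
        u = qmc.Halton(d=1, scramble=False, seed=0).random(n).ravel()
        u = np.clip(u, 1e-12, 1 - 1e-12)
        if antithetic:
            u = np.concatenate([u, 1.0 - u])     # antithetic pairing
        return mean + std * norm.ppf(u)

    plain = mixed_parameter_draws(0.0, 1.0, 500, antithetic=False)
    anti = mixed_parameter_draws(0.0, 1.0, 250, antithetic=True)
    # antithetic pairs cancel exactly, so the simulated mean stays at the true mean
    print(plain.mean(), anti.mean())             # anti.mean() is 0 up to rounding error
    ```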

  14. Likelihood analysis of the I(2) model

    DEFF Research Database (Denmark)

    Johansen, Søren

    1997-01-01

    The I(2) model is defined as a submodel of the general vector autoregressive model, by two reduced rank conditions. The model describes stochastic processes with stationary second difference. A parametrization is suggested which makes likelihood inference feasible. Consistency of the maximum like...

  15. Synthesizing Regression Results: A Factored Likelihood Method

    Science.gov (United States)

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  16. Maximum Likelihood Estimation of Search Costs

    NARCIS (Netherlands)

    J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)

    2006-01-01

    textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p

  17. Maintaining symmetry of simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby...

  18. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...

  19. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  20. Weak GPS signal C/N0 estimation algorithm based on maximum likelihood method

    Institute of Scientific and Technical Information of China (English)

    文力; 谢跃雷; 纪元法; 孙希延

    2014-01-01

    In order to estimate the carrier-to-noise ratio of satellite signals accurately in weak-signal environments, a C/N0 estimation algorithm based on the maximum likelihood criterion is proposed that can adjust its estimation update time adaptively. Starting from the GPS correlator output model, the principle and performance of the algorithm are analyzed theoretically, including the influence of the number of coherent accumulations on estimation accuracy, and the analysis is verified on a simulation platform. The simulation results agree with the theoretical derivation: when the signal is very weak, the C/N0 can still be estimated accurately by increasing the number of accumulations. Compared with traditional C/N0 estimation algorithms, the method requires a shorter estimation time while remaining accurate. The minimum number of accumulations that meets a given accuracy requirement is derived theoretically and used to adjust the estimation update time adaptively, which increases the flexibility of the algorithm.

  1. RELM (the Working Group for the Development of Region Earthquake Likelihood Models) and the Development of new, Open-Source, Java-Based (Object Oriented) Code for Probabilistic Seismic Hazard Analysis

    Science.gov (United States)

    Field, E. H.

    2001-12-01

    Given problems with virtually all previous earthquake-forecast models for southern California, and a current lack of consensus on how such models should be constructed, a joint SCEC-USGS sponsored working group for the development of Regional Earthquake Likelihood Models (RELM) has been established (www.relm.org). The goals are as follows: 1) To develop and test a range of viable earthquake-potential models for southern California (not just one "consensus" model); 2) To examine and compare the implications of each model with respect to probabilistic seismic-hazard estimates (which will not only quantify existing hazard uncertainties, but will also indicate how future research should be focused in order to reduce the uncertainties); and 3) To design and document conclusive tests of each model with respect to existing and future geophysical observations. The variety of models under development reflects the variety of geophysical constraints available; these include geological fault information, historical seismicity, geodetic observations, stress-transfer interactions, and foreshock/aftershock statistics. One reason for developing and testing a range of models is to evaluate the extent to which any one can be exported to another region where the options are more limited. RELM is not intended to be a one-time effort. Rather, we are building an infrastructure that will facilitate an ongoing incorporation of new scientific findings into seismic-hazard models. The effort involves the development of several community models and databases, one of which is new Java-based code for probabilistic seismic hazard analysis (PSHA). Although several different PSHA codes presently exist, none are open source, well documented, and written in an object-oriented programming language (which is ideally suited for PSHA). Furthermore, we need code that is flexible enough to accommodate the wide range of models currently under development in RELM. The new code is being developed under

  2. Comparison of metatranscriptomic samples based on k-tuple frequencies.

    Directory of Open Access Journals (Sweden)

    Ying Wang

    Full Text Available BACKGROUND: The comparison of samples, or beta diversity, is one of the essential problems in ecological studies. Next generation sequencing (NGS) technologies make it possible to obtain large amounts of metagenomic and metatranscriptomic short read sequences across many microbial communities. De novo assembly of the short reads can be especially challenging because the number of genomes and their sequences are generally unknown and the coverage of each genome can be very low, where the traditional alignment-based sequence comparison methods cannot be used. Alignment-free approaches based on k-tuple frequencies, on the other hand, have yielded promising results for the comparison of metagenomic samples. However, it is not known if these approaches can be used for the comparison of metatranscriptome datasets and which dissimilarity measures perform the best. RESULTS: We applied several beta diversity measures based on k-tuple frequencies to real metatranscriptomic datasets from pyrosequencing 454 and Illumina sequencing platforms to evaluate their effectiveness for the clustering of metatranscriptomic samples, including three d2-type dissimilarity measures, one dissimilarity measure in CVTree, one relative entropy based measure S2 and three classical Lp-norm distances. Results showed that the measure d2S can achieve superior performance on clustering metatranscriptomic samples into different groups under different sequencing depths for both 454 and Illumina datasets, recovering environmental gradients affecting microbial samples, classifying coexisting metagenomic and metatranscriptomic datasets, and being robust to sequencing errors. We also investigated the effects of tuple size and order of the background Markov model. A software pipeline to implement all the steps of analysis is built and is available at http://code.google.com/p/d2-tools/. CONCLUSIONS: The k-tuple based sequence signature measures can effectively reveal major groups and
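
    A minimal sketch of an alignment-free k-tuple comparison using the plain d2 dissimilarity is given below; the centred d2S and d2* variants additionally normalize the counts by their expectations under a background Markov model, which is omitted here, and the toy sequences are assumptions.

    ```python
    import itertools
    import numpy as np

    def kmer_counts(seq, k):
        """Count occurrences of every k-tuple over the alphabet ACGT."""
        tuples = ["".join(t) for t in itertools.product("ACGT", repeat=k)]
        index = {t: i for i, t in enumerate(tuples)}
        counts = np.zeros(len(tuples))
        for i in range(len(seq) - k + 1):
            w = seq[i:i + k]
            if w in index:                 # skip tuples containing ambiguous bases
                counts[index[w]] += 1
        return counts

    def d2_dissimilarity(seq_a, seq_b, k=4):
        """Plain d2 dissimilarity: half of one minus the cosine similarity of counts."""
        x, y = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
        cos = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
        return 0.5 * (1.0 - cos)

    # toy "samples": pooled reads represented here as single concatenated strings
    rng = np.random.default_rng(4)
    sample1 = "".join(rng.choice(list("ACGT"), p=[0.4, 0.1, 0.1, 0.4], size=5000))
    sample2 = "".join(rng.choice(list("ACGT"), p=[0.4, 0.1, 0.1, 0.4], size=5000))
    sample3 = "".join(rng.choice(list("ACGT"), p=[0.25, 0.25, 0.25, 0.25], size=5000))
    print(d2_dissimilarity(sample1, sample2), d2_dissimilarity(sample1, sample3))
    ```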

  3. Maximum Likelihood Estimation Based Algorithm for Tracking Cooperative Target

    Institute of Scientific and Technical Information of China (English)

    魏子翔; 崔嵬; 李霖; 吴爽; 吴嗣亮

    2015-01-01

    A scheme based on the Digital Delay Locked Loop (DDLL), Frequency Locked Loop (FLL), and Phase Locked Loop (PLL) is implemented in the microwave radar for space rendezvous and docking, and provides the delay, frequency and Direction Of Arrival (DOA) estimates of the incident direct-sequence spread-spectrum signal transmitted by the cooperative target. However, the DDLL, FLL, and PLL (DFP) based scheme does not make full use of the received signal. For this reason, a novel Maximum Likelihood Estimation (MLE) Based Tracking (MLBT) algorithm with a low computational burden is proposed. The feature that the gradients of the cost function are proportional to the parameter errors is exploited to design discriminators of the parameter errors, and three tracking loops are then set up to provide the parameter estimates. The variance characteristics of the discriminators are investigated, and lower bounds on the Root Mean Square Errors (RMSEs) of the parameter estimates are given for the MLBT algorithm. Finally, simulations and a computational-efficiency analysis are provided. The lower bounds on the RMSEs of the parameter estimates are verified, and it is shown that the MLBT algorithm achieves better estimation accuracy than the DFP based scheme with only a limited increase in computational burden.

  4. Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood

    Directory of Open Access Journals (Sweden)

    Olli Saarela

    2012-01-01

    Full Text Available Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.

  5. $\\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs

    CERN Document Server

    van de Geer, Sara

    2012-01-01

    We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than sample size $n$ but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.

  6. A model independent safeguard for unbinned Profile Likelihood

    CERN Document Server

    Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro; Budnik, Ranny

    2016-01-01

    We present a general method to include residual un-modeled background shape uncertainties in profile likelihood based statistical tests for high energy physics and astroparticle physics counting experiments. This approach provides a simple and natural protection against undercoverage, thus lowering the chances of a false discovery or of an over constrained confidence interval, and allows a natural transition to unbinned space. Unbinned likelihood enhances the sensitivity and allows optimal usage of information for the data and the models. We show that the asymptotic behavior of the test statistic can be regained in cases where the model fails to describe the true background behavior, and present 1D and 2D case studies for model-driven and data-driven background models. The resulting penalty on sensitivities follows the actual discrepancy between the data and the models, and is asymptotically reduced to zero with increasing knowledge.

  7. Maximum likelihood method and Fisher's information in physics and econophysics

    CERN Document Server

    Syska, Jacek

    2012-01-01

    Three steps in the development of the maximum likelihood (ML) method are presented. First, the application of the ML method and of the Fisher information notion in model selection analysis is described (Chapter 1). The fundamentals of differential geometry in the construction of the statistical space are introduced, illustrated also by examples of the estimation of exponential models. Second, the notions of relative entropy and information channel capacity are introduced (Chapter 2). The observed and expected structural information principle (IP) and the variational IP of the modified extremal physical information (EPI) method of Frieden and Soffer are presented and discussed (Chapter 3). The derivation of the structural IP, based on the analyticity of the logarithm of the likelihood function and on the metricity of the statistical space of the system, is given. Third, the use of the EPI method is developed (Chapters 4-5). The information channel capacity is used for the field theory models cl...

  8. Quantum Private Comparison Protocol Based on Bell Entangled States

    Institute of Scientific and Technical Information of China (English)

    刘文; 王永滨; 崔巍

    2012-01-01

    In this paper, a quantum private comparison protocol is proposed based on Bell entangled states. In our protocol, two parties can compare the equality of their information with the help of a semi-honest third party. The correctness and security of the protocol are discussed. Neither party can learn the other's private information, and the third party cannot learn anything about either party's private information.

  9. Fertilization response likelihood for the interpretation of leaf analyses

    Directory of Open Access Journals (Sweden)

    Celsemy Eleutério Maia

    2012-04-01

    Full Text Available Leaf analysis is the chemical evaluation of plant nutritional status, in which the nutrient concentrations found in the tissue reflect the nutritional status of the plants. Thus, a correct interpretation of the results of leaf analysis is fundamental for an effective use of this tool. The purpose of this study was to propose the method of Fertilization Response Likelihood (FRL) for the interpretation of leaf analysis and to compare it with the Diagnosis and Recommendation Integrated System (DRIS). The database consisted of 157 analyses of the N, P, K, Ca, Mg, S, Cu, Fe, Mn, Zn, and B concentrations in coffee leaves, which were divided into two groups: low yield (<30 bags ha-1) and high yield (>30 bags ha-1). The DRIS indices were calculated using the method proposed by Jones (1981). The fertilization response likelihood was computed based on the normal distribution approximation. It was found that the FRL allowed an evaluation of the nutritional status of coffee trees, coinciding with the DRIS-based diagnoses in 84.96 % of the crops.
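
    The record above states only that the response likelihood is computed from a normal approximation; the exact formula is not given. The following Python sketch shows one plausible reading, assuming FRL is the normal-approximation probability that an observed leaf concentration falls below a high-yield reference mean; the function name, reference values, and example numbers are hypothetical.

        from math import erf, sqrt

        def fertilization_response_likelihood(x, ref_mean, ref_sd):
            """Hypothetical FRL: probability, under a normal approximation built
            from a high-yield reference mean/SD, that the observed concentration
            x lies below the reference mean (nutrient likely deficient)."""
            z = (ref_mean - x) / ref_sd
            return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF at z

        # Example: leaf N of 2.6 dag/kg against an assumed reference of 3.0 +/- 0.3
        print(round(fertilization_response_likelihood(2.6, 3.0, 0.3), 3))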

  10. Likelihood inference for a fractionally cointegrated vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X_{t} to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β′X_{t} is fractional of order d-b. The parameters d and b satisfy either d≥b≥1/2, d=b≥1/2, or d=d_{0}≥b≥1/2. Our main technical contribution is the proof of consistency of the maximum likelihood estimators on the set 1/2≤b≤d≤d_{1} for any d_{1}≥d_{0}. To this end, we consider the conditional likelihood as a stochastic process in the parameters, and prove that it converges in distribution when errors are i.i.d. with suitable moment conditions and initial values are bounded. We then prove that the estimator of β is asymptotically mixed Gaussian and estimators of the remaining parameters are asymptotically Gaussian. We...

  11. Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X(t) to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β′X(t) is fractional of order d-b. The parameters d and b satisfy either d≥b≥1/2, d=b≥1/2, or d=d0≥b≥1/2. Our main technical contribution is the proof of consistency of the maximum likelihood estimators on the set 1/2≤b≤d≤d1 for any d1≥d0. To this end, we consider the conditional likelihood as a stochastic process in the parameters, and prove that it converges in distribution when errors are i.i.d. with suitable moment conditions and initial values are bounded. We then prove that the estimator of β is asymptotically mixed Gaussian and estimators of the remaining parameters are asymptotically Gaussian. We also find...

  12. Empirical likelihood method for non-ignorable missing data problems.

    Science.gov (United States)

    Guan, Zhong; Qin, Jing

    2017-01-01

    The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness, in which the missingness of a response depends on its own value, is the most difficult missing data problem. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available except for fully parametric model based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method we obtain the constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of a real AIDS trial dataset shows that the missingness of CD4 counts at around two years is non-ignorable and that the sample mean based on the observed data only is biased.
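
    The semiparametric estimator above builds on Owen's (1988) empirical likelihood. As an illustration of that building block only (complete data, a mean parameter, no missingness model), the following Python sketch computes the -2 log empirical likelihood ratio for a hypothesized mean; the simulated data are illustrative assumptions, not taken from the paper.

        import numpy as np
        from scipy.optimize import brentq

        def el_log_ratio(x, mu):
            """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen, 1988).
            Requires mu to lie strictly inside the range of the data."""
            z = np.asarray(x, dtype=float) - mu
            if z.min() >= 0 or z.max() <= 0:
                return np.inf                    # mu outside the convex hull of the data
            # Solve sum(z / (1 + lam*z)) = 0 for the Lagrange multiplier lam.
            lo = -1.0 / z.max() + 1e-10
            hi = -1.0 / z.min() - 1e-10
            g = lambda lam: np.sum(z / (1.0 + lam * z))
            lam = brentq(g, lo, hi)
            # Weights p_i = 1/(n(1+lam*z_i)) give -2 log R = 2*sum(log(1+lam*z_i)).
            return 2.0 * np.sum(np.log1p(lam * z))

        rng = np.random.default_rng(1)
        x = rng.normal(loc=0.3, scale=1.0, size=50)
        # Under H0: mean = 0 the statistic is asymptotically chi-square(1).
        print(round(el_log_ratio(x, 0.0), 3))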

  13. An Algorithm for Detecting the Onset of Muscle Contraction Based on Generalized Likelihood Ratio Test

    Institute of Scientific and Technical Information of China (English)

    徐琦; 程俊银; 周慧; 杨磊

    2012-01-01

    Surface electromyography (sEMG) signals from the stump of an amputee are often used to control the actions of a myoelectric prosthesis. For the low Signal to Noise Ratio (SNR) sEMG signals recorded from stump muscles, a generalized likelihood ratio (GLR) method is proposed to detect the onset of muscle contraction, in which the decision threshold depends on the SNR of the sEMG signal; an off-line simulation is used to determine this relationship. For simulated sEMG signals with a given SNR, different thresholds were tested and the optimal threshold was taken to be the one maximizing detection accuracy, yielding a fitted curve that describes the relationship between SNR and decision threshold. The sEMG signals are then analyzed on-line by the GLR test for onset detection of muscle contractions, with the decision threshold corresponding to the measured SNR chosen from the fitted curve. Compared with classical algorithms on simulated sEMG traces, the mean and standard deviation of the onset-estimation error were reduced by at least 35% and 43%, respectively; on real sEMG signals, the reductions in the error mean and standard deviation were at least 29% and 23%, respectively. The proposed GLR-based onset detection algorithm is therefore more accurate than other methods when the SNR of the sEMG signal is low.
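
    A generic GLR onset detector for a variance increase in a zero-mean Gaussian sequence, sketched below in Python, illustrates the kind of statistic involved; it is not the paper's detector, and the fixed threshold stands in for the SNR-dependent threshold the paper fits by off-line simulation.

        import numpy as np

        def glr_variance_onset(x, sigma0_sq, min_seg=10):
            """GLR statistic for a variance increase at each candidate onset k:
            H0 assumes N(0, sigma0_sq) throughout; H1 allows an unknown variance
            after k, whose MLE is the post-k mean square."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            stats = np.full(n, -np.inf)
            for k in range(min_seg, n - min_seg):
                post = x[k:]
                r = np.mean(post ** 2) / sigma0_sq      # variance ratio under H1
                stats[k] = 0.5 * len(post) * (r - np.log(r) - 1.0)
            return stats

        rng = np.random.default_rng(2)
        noise_sd, burst_sd, onset = 1.0, 3.0, 600
        x = np.concatenate([rng.normal(0, noise_sd, onset),
                            rng.normal(0, burst_sd, 400)])
        g = glr_variance_onset(x, sigma0_sq=noise_sd ** 2)
        threshold = 20.0            # would be tied to the SNR in a real application
        print("detected:", g.max() > threshold, "estimated onset:", int(np.argmax(g)))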

  14. Regions of constrained maximum likelihood parameter identifiability

    Science.gov (United States)

    Lee, C.-H.; Herget, C. J.

    1975-01-01

    This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.

  15. Composite likelihood method for inferring local pedigrees

    Science.gov (United States)

    Nielsen, Rasmus

    2017-01-01

    Pedigrees contain information about the genealogical relationships among individuals and are of fundamental importance in many areas of genetic studies. However, pedigrees are often unknown and must be inferred from genetic data. Despite the importance of pedigree inference, existing methods are limited to inferring only close relationships or analyzing a small number of individuals or loci. We present a simulated annealing method for estimating pedigrees in large samples of otherwise seemingly unrelated individuals using genome-wide SNP data. The method supports complex pedigree structures such as polygamous families, multi-generational families, and pedigrees in which many of the member individuals are missing. Computational speed is greatly enhanced by the use of a composite likelihood function which approximates the full likelihood. We validate our method on simulated data and show that it can infer distant relatives more accurately than existing methods. Furthermore, we illustrate the utility of the method on a sample of Greenlandic Inuit. PMID:28827797

  16. Factors Associated with Young Adults’ Pregnancy Likelihood

    Science.gov (United States)

    Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan

    2014-01-01

    OBJECTIVES: While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS: We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS: Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but who were not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION: These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849
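
    Odds ratios like those reported above come from exponentiating fitted logistic regression coefficients. The Python sketch below shows that step on synthetic stand-in data (not the survey data); the variable names and effect sizes are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Synthetic stand-in: binary predictors and a binary outcome
        # ("anticipates pregnancy in the next year"); names are hypothetical.
        rng = np.random.default_rng(3)
        n = 660
        df = pd.DataFrame({
            "avoiding_not_important": rng.integers(0, 2, n),
            "uninsured": rng.integers(0, 2, n),
        })
        logit = -1.0 + 1.6 * df["avoiding_not_important"] + 0.9 * df["uninsured"]
        df["anticipates_pregnancy"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X = sm.add_constant(df[["avoiding_not_important", "uninsured"]])
        fit = sm.Logit(df["anticipates_pregnancy"], X).fit(disp=0)
        odds_ratios = np.exp(fit.params)       # exponentiated coefficients = odds ratios
        conf_int = np.exp(fit.conf_int())      # 95% CIs on the odds-ratio scale
        print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))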

  17. Genomic comparisons of Brucella spp. and closely related bacteria using base compositional and proteome based methods

    Science.gov (United States)

    2010-01-01

    Background: Classification of bacteria within the genus Brucella has been difficult due in part to considerable genomic homogeneity between the different species and biovars, in spite of clear differences in phenotypes. Therefore, many different methods have been used to assess Brucella taxonomy. In the current work, we examine 32 sequenced genomes from genus Brucella representing the six classical species, as well as more recently described species, using bioinformatical methods. Comparisons were made at the level of genomic DNA using oligonucleotide based methods (Markov chain based genomic signatures, genomic codon and amino acid frequencies based comparisons) and proteomes (all-against-all BLAST protein comparisons and pan-genomic analyses). Results: We found that the oligonucleotide based methods gave different results compared to that of the proteome based methods. Differences were also found between the oligonucleotide based methods used. Whilst the Markov chain based genomic signatures grouped the different species in genus Brucella according to host preference, the codon and amino acid frequencies based methods reflected small differences between the Brucella species. Only minor differences could be detected between all genera included in this study using the codon and amino acid frequencies based methods. Proteome comparisons were found to be in strong accordance with current Brucella taxonomy indicating a remarkable association between gene gain or loss on one hand and mutations in marker genes on the other. The proteome based methods found greater similarity between Brucella species and Ochrobactrum species than between species within genus Agrobacterium compared to each other. In other words, proteome comparisons of species within genus Agrobacterium were found to be more diverse than proteome comparisons between species in genus Brucella and genus Ochrobactrum. Pan-genomic analyses indicated that uptake of DNA from outside genus Brucella appears to be

  18. Maximum likelihood continuity mapping for fraud detection

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1997-05-01

    The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where ``medical history`` is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.

  19. Likelihood methods and classical burster repetition

    CERN Document Server

    Graziani, C; Graziani, Carlo; Lamb, Donald Q

    1995-01-01

    We develop a likelihood methodology which can be used to search for evidence of burst repetition in the BATSE catalog, and to study the properties of the repetition signal. We use a simplified model of burst repetition in which a number N_{\\rm r} of sources which repeat a fixed number of times N_{\\rm rep} are superposed upon a number N_{\\rm nr} of non-repeating sources. The instrument exposure is explicitly taken into account. By computing the likelihood for the data, we construct a probability distribution in parameter space that may be used to infer the probability that a repetition signal is present, and to estimate the values of the repetition parameters. The likelihood function contains contributions from all the bursts, irrespective of the size of their positional errors --- the more uncertain a burst's position is, the less constraining is its contribution. Thus this approach makes maximal use of the data, and avoids the ambiguities of sample selection associated with data cuts on error circle size. We...

  20. Database likelihood ratios and familial DNA searching

    CERN Document Server

    Slooten, Klaas

    2012-01-01

    Familial Searching is the process of searching in a DNA database for relatives of a given individual. It is well known that in order to evaluate the genetic evidence in favour of a certain given form of relatedness between two individuals, one needs to calculate the appropriate likelihood ratio, which is in this context called a Kinship Index. Suppose that the database contains, for a given type of relative, at most one related individual. Given prior probabilities of being the relative for all persons in the database, we derive the likelihood ratio for each database member in favour of being that relative. This likelihood ratio takes all the Kinship Indices between target and members of the database into account. We also compute the corresponding posterior probabilities. We then discuss two ways of selecting a subset from the database that contains the relative with a known probability, or at least a useful lower bound thereof. We discuss the relation between these approaches and illustrate them with Familia...
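
    A simplified version of the posterior computation described above can be written in a few lines: given prior probabilities that each database member is the sought relative (with the remainder assigned to "relative not in the database", whose likelihood ratio is 1), the posterior for member i is proportional to prior_i times its Kinship Index. The Python sketch below uses made-up numbers and omits the paper's subset-selection procedures.

        import numpy as np

        def familial_search_posteriors(kinship_indices, priors):
            """Simplified posterior that each database member is the sought relative.
            priors sum to <= 1; the remainder is the prior that the relative is
            not in the database (likelihood ratio 1 by definition)."""
            ki = np.asarray(kinship_indices, dtype=float)
            pi = np.asarray(priors, dtype=float)
            p_outside = 1.0 - pi.sum()
            numer = pi * ki
            return numer / (numer.sum() + p_outside)

        # Three database members with hypothetical kinship indices and flat priors
        ki = [850.0, 2.0, 0.1]
        priors = [1e-4, 1e-4, 1e-4]
        print(familial_search_posteriors(ki, priors))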

  1. Conditional likelihood inference in a case-cohort design: an application to haplotype analysis.

    Science.gov (United States)

    Saarela, Olli; Kulathinal, Sangita

    2007-01-01

    Under the setting of a case-cohort design, covariate values are ascertained for a smaller subgroup of the original study cohort, which typically is a representative sample from a population. Individuals with a specific event outcome are selected to the second stage study group as cases and an additional subsample is selected to act as a control group. We carry out analysis of such a design using conditional likelihood, where the likelihood expression is conditioned on the ascertainment to the second stage study group. Such a likelihood expression involves the probability of ascertainment, which needs to be expressed in terms of the model parameters. We present examples of conditional likelihoods for models for categorical response and time-to-event response. We show that the conditional likelihood inference leads to valid estimation of population parameters. Our application considers joint estimation of haplotype-event association parameters and population haplotype frequencies based on SNP genotype data collected under a case-cohort design.

  2. Exact likelihood-free Markov chain Monte Carlo for elliptically contoured distributions.

    Science.gov (United States)

    Muchmore, Patrick; Marjoram, Paul

    2015-08-01

    Recent results in Markov chain Monte Carlo (MCMC) show that a chain based on an unbiased estimator of the likelihood can have a stationary distribution identical to that of a chain based on exact likelihood calculations. In this paper we develop such an estimator for elliptically contoured distributions, a large family of distributions that includes and generalizes the multivariate normal. We then show how this estimator, combined with pseudorandom realizations of an elliptically contoured distribution, can be used to run MCMC in a way that replicates the stationary distribution of a likelihood based chain, but does not require explicit likelihood calculations. Because many elliptically contoured distributions do not have closed form densities, our simulation based approach enables exact MCMC based inference in a range of cases where previously it was impossible.
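
    The core mechanism described above is the pseudo-marginal argument: replacing the exact likelihood in the Metropolis-Hastings ratio by a nonnegative unbiased estimate, and recycling the current estimate rather than recomputing it. The Python sketch below shows that skeleton with a toy unbiased estimator (the exact Gaussian likelihood times mean-one multiplicative noise); it does not reproduce the paper's estimator for elliptically contoured distributions, and the flat prior and data are illustrative assumptions.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(4)
        data = rng.normal(loc=1.5, scale=1.0, size=40)

        def noisy_likelihood(mu, s=0.3):
            """Toy nonnegative unbiased likelihood estimate: exact Gaussian
            likelihood times positive mean-one multiplicative noise."""
            exact = np.prod(norm.pdf(data, loc=mu, scale=1.0))
            return exact * np.exp(rng.normal(-0.5 * s ** 2, s))

        def pseudo_marginal_mh(n_iter=5000, step=0.3):
            mu, lhat = 0.0, noisy_likelihood(0.0)
            chain = []
            for _ in range(n_iter):
                prop = mu + step * rng.standard_normal()
                lhat_prop = noisy_likelihood(prop)
                # Usual MH ratio, but with *estimated* likelihoods; the current
                # estimate lhat is recycled, never recomputed (flat prior assumed).
                if rng.random() < lhat_prop / lhat:
                    mu, lhat = prop, lhat_prop
                chain.append(mu)
            return np.array(chain)

        chain = pseudo_marginal_mh()
        print("posterior mean ~", round(chain[1000:].mean(), 3))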

  3. Accurate determination of phase arrival times using autoregressive likelihood estimation

    Directory of Open Access Journals (Sweden)

    G. Kvaerna

    1994-06-01

    Full Text Available We have investigated the potential automatic use of an onset picker based on autoregressive likelihood estimation. Both a single component version and a three component version of this method have been tested on data from events located in the Khibiny Massif of the Kola peninsula, recorded at the Apatity array, the Apatity three component station and the ARCESS array. Using this method, we have been able to estimate onset times to an accuracy (standard deviation) of about 0.05 s for P phases and 0.15-0.20 s for S phases. These accuracies are as good as for analyst picks, and are considerably better than the accuracies of the current onset procedure used for processing of regional array data at NORSAR. In another application, we have developed a generic procedure to reestimate the onsets of all types of first arriving P phases. By again applying the autoregressive likelihood technique, we have obtained automatic onset times of a quality such that 70% of the automatic picks are within 0.1 s of the best manual pick. For the onset time procedure currently used at NORSAR, the corresponding number is 28%. Clearly, automatic reestimation of first arriving P onsets using the autoregressive likelihood technique has the potential of significantly reducing the retiming efforts of the analyst.
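
    A widely used simplification of the autoregressive-likelihood picker is the variance-based AIC picker, which splits the window at each candidate sample, models each side as white Gaussian noise with its own variance, and picks the split minimizing the combined AIC. The Python sketch below implements that simplified variant on a synthetic trace; it is not the NORSAR implementation.

        import numpy as np

        def aic_onset_pick(x, margin=20):
            """Variance-based AIC picker (an approximation to the full
            AR-likelihood picker): choose the split k minimizing
            k*log(var(pre)) + (n-k)*log(var(post))."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            aic = np.full(n, np.inf)
            for k in range(margin, n - margin):
                aic[k] = k * np.log(np.var(x[:k])) + (n - k) * np.log(np.var(x[k:]))
            return int(np.argmin(aic))

        rng = np.random.default_rng(5)
        trace = np.concatenate([rng.normal(0, 1.0, 500),     # pre-arrival noise
                                rng.normal(0, 4.0, 300)])    # P-phase energy
        print("picked onset sample:", aic_onset_pick(trace), "(true: 500)")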

  4. Corporate governance effect on financial distress likelihood: Evidence from Spain

    Directory of Open Access Journals (Sweden)

    Montserrat Manzaneque

    2016-01-01

    Full Text Available The paper explores some mechanisms of corporate governance (ownership and board characteristics) in Spanish listed companies and their impact on the likelihood of financial distress. An empirical study was conducted between 2007 and 2012 using a matched-pairs research design with 308 observations, half of them classified as distressed and half as non-distressed. Based on the previous study by Pindado, Rodrigues, and De la Torre (2008), a broader concept of bankruptcy is used to define business failure. Employing several conditional logistic models, and in line with previous studies on bankruptcy, the results confirm that in difficult situations prior to bankruptcy, the impact of board ownership and of the proportion of independent directors on business failure likelihood is similar to that exerted in more extreme situations. The results go one step further, showing a negative relationship between board size and the likelihood of financial distress. This result is interpreted as a way of creating diversity and improving access to information and resources, especially in contexts where ownership is highly concentrated and large shareholders have great power to influence the board structure. However, the results confirm that ownership concentration does not have a significant impact on financial distress likelihood in the Spanish context. It is argued that large shareholders are passive as regards enhanced monitoring of management and, alternatively, do not have enough incentives to hold back financial distress. These findings have important implications in the Spanish context, where several changes in the regulatory listing requirements have been carried out with respect to corporate governance, and where there is no empirical evidence in this regard.

  5. Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood Power Spectrum Estimation

    CERN Document Server

    Gjerløw, E; Eriksen, H K; Górski, K M; Gruppuso, A; Jewell, J B; Plaszczynski, S; Wehus, I K

    2015-01-01

    We revisit the problem of exact CMB likelihood and power spectrum estimation with the goal of minimizing computational cost through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al.\\ (1997), and here we develop it into a fully working computational framework for large-scale polarization analysis, adopting \\WMAP\\ as a worked example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked \\WMAP\\ sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8\\% at $\\ell\\le32$, and a...

  6. QR-code Recognition Method in Super-resolution Image Synthesis Based on Maximum Likelihood

    Institute of Scientific and Technical Information of China (English)

    梁华刚; 程加乐; 孙小喃

    2015-01-01

    With the advantages of large storage capacity in a small space, strong fault tolerance, and high decoding reliability, the QR code is widely applied in the circulation and logistics fields. In practical identification, however, factors such as the low resolution of the barcode captured by the camera make recognition difficult. A novel low-resolution QR-code recognition method based on super-resolution image processing is presented in this paper. Simple equipment such as a mobile phone is used to shoot a low-resolution barcode video; a maximum likelihood algorithm performs nonlinear fitting on each frame, and a super-resolution barcode image is then synthesized using the binary nature of QR codes, improving the recognition accuracy for low-resolution barcode video. Experiments show that this method can recognize QR codes that the traditional method cannot, that the recognition rate for low-resolution barcodes of 55 × 55 pixels is above 85%, and that the average recognition accuracy is improved by 10%.

  7. Hybrid pairwise likelihood analysis of animal behavior experiments.

    Science.gov (United States)

    Cattelan, Manuela; Varin, Cristiano

    2013-12-01

    The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons. © 2013, The International Biometric Society.
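
    The paired-comparison backbone of the analysis above is the Bradley-Terry model; the paper's contribution is the hybrid pairwise likelihood handling dependence between fights, which is not reproduced here. The Python sketch below fits plain Bradley-Terry abilities by the standard MM algorithm on a made-up tournament, ignoring that dependence.

        import numpy as np

        def bradley_terry(wins, n_iter=200):
            """Fit Bradley-Terry abilities by the standard MM algorithm.
            wins[i, j] = number of contests in which animal i beat animal j."""
            wins = np.asarray(wins, dtype=float)
            n = wins.shape[0]
            games = wins + wins.T                 # total fights per pair
            p = np.ones(n)
            for _ in range(n_iter):
                for i in range(n):
                    # games[i, i] is 0, so the i-vs-i term contributes nothing.
                    denom = np.sum(games[i] / (p[i] + p))
                    p[i] = wins[i].sum() / denom
                p /= p.sum()                      # fix the overall scale
            return p

        # Toy tournament among four animals
        wins = np.array([[0, 3, 2, 4],
                         [1, 0, 2, 3],
                         [1, 1, 0, 2],
                         [0, 1, 1, 0]])
        print(np.round(bradley_terry(wins), 3))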

  8. Maximum Likelihood Position Location with a Limited Number of References

    Directory of Open Access Journals (Sweden)

    D. Munoz-Rodriguez

    2011-04-01

    Full Text Available A Position Location (PL) scheme for mobile users on the outskirts of coverage areas is presented. The proposed methodology makes it possible to obtain location information with only two land-fixed references. We introduce a general formulation and show that maximum-likelihood estimation can provide adequate PL information in this scenario. The Root Mean Square (RMS) error and error-distribution characterization are obtained for different propagation scenarios. In addition, simulation results and comparisons to another method are provided showing the accuracy and the robustness of the method proposed. We study accuracy limits of the proposed methodology for different propagation environments and show that even in the case of mismatch in the error variances, good PL estimation is feasible.
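
    Under the common assumption of Gaussian range errors, maximum-likelihood position location from two fixed references reduces to nonlinear least squares on the measured ranges, as in the Python sketch below; the geometry, noise level, and initial guess are illustrative assumptions, and with only two references the initial guess also resolves the mirror ambiguity about the baseline.

        import numpy as np
        from scipy.optimize import minimize

        # Two land-fixed references and noisy range measurements to a mobile user.
        refs = np.array([[0.0, 0.0],
                         [1000.0, 0.0]])          # reference positions (m)
        true_pos = np.array([700.0, 900.0])
        sigma = 15.0                              # range-error standard deviation (m)

        rng = np.random.default_rng(6)
        ranges = np.linalg.norm(true_pos - refs, axis=1) + sigma * rng.standard_normal(2)

        def neg_log_likelihood(pos):
            """Gaussian range errors => ML reduces to weighted least squares."""
            pred = np.linalg.norm(pos - refs, axis=1)
            return 0.5 * np.sum(((ranges - pred) / sigma) ** 2)

        # The initial guess encodes which side of the baseline the user is on.
        est = minimize(neg_log_likelihood, x0=np.array([500.0, 500.0])).x
        print("estimate:", np.round(est, 1),
              "error (m):", round(float(np.linalg.norm(est - true_pos)), 1))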

  9. On divergences tests for composite hypotheses under composite likelihood

    OpenAIRE

    Martin, Nirian; Pardo, Leandro; Zografos, Konstantinos

    2016-01-01

    It is well known that in some situations it is not easy to compute the likelihood function, as the dataset might be large or the model too complex. In such contexts the composite likelihood, derived by multiplying the likelihoods of subsets of the variables, may be useful. The extension of the classical likelihood ratio test statistic to the framework of composite likelihoods is used as a procedure to solve the problem of testing in the context of composite likelihood. In this paper we intro...

  10. Dimension-Independent Likelihood-Informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-01-07

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior [distributions], we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation are used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.

  11. CMB Power Spectrum Likelihood with ILC

    CERN Document Server

    Dick, Jason; Delabrouille, Jacques

    2012-01-01

    We extend the ILC method in harmonic space to include the error in its CMB estimate. This allows parameter estimation routines to take into account the effect of the foregrounds as well as the errors in their subtraction in conjunction with the ILC method. Our method requires the use of a model of the foregrounds which we do not develop here. The reduction of the foreground level makes this method less sensitive to unaccounted for errors in the foreground model. Simulations are used to validate the calculations and approximations used in generating this likelihood function.

  12. LIKEDM: Likelihood calculator of dark matter detection

    Science.gov (United States)

    Huang, Xiaoyuan; Tsai, Yue-Lin Sming; Yuan, Qiang

    2017-04-01

    With the large progress in searches for dark matter (DM) particles with indirect and direct methods, we develop a numerical tool that enables fast calculations of the likelihoods of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool - LIKEDM, likelihood calculator for dark matter detection - is to bridge the gap between a particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0) focusing on the constraints from indirect detection of DM with charged cosmic and gamma rays. Direct detection will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.

  13. Multiplicative earthquake likelihood models incorporating strain rates

    Science.gov (United States)

    Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.

    2017-01-01

    We examine the potential for strain-rate variables to improve long-term earthquake likelihood models. We derive a set of multiplicative hybrid earthquake likelihood models in which cell rates in a spatially uniform baseline model are scaled using combinations of covariates derived from earthquake catalogue data, fault data, and strain-rates for the New Zealand region. Three components of the strain rate estimated from GPS data over the period 1991-2011 are considered: the shear, rotational and dilatational strain rates. The hybrid model parameters are optimised for earthquakes of M 5 and greater over the period 1987-2006 and tested on earthquakes from the period 2012-2015, which is independent of the strain rate estimates. The shear strain rate is overall the most informative individual covariate, as indicated by Molchan error diagrams as well as multiplicative modelling. Most models including strain rates are significantly more informative than the best models excluding strain rates in both the fitting and testing period. A hybrid that combines the shear and dilatational strain rates with a smoothed seismicity covariate is the most informative model in the fitting period, and a simpler model without the dilatational strain rate is the most informative in the testing period. These results have implications for probabilistic seismic hazard analysis and can be used to improve the background model component of medium-term and short-term earthquake forecasting models.

  14. Maximum Likelihood Analysis in the PEN Experiment

    Science.gov (United States)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
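
    The essential structure of such a fit is a mixture likelihood in which each event contributes a weighted sum of per-process probability densities and the process fractions are the fitted parameters. The Python sketch below shows a two-process, one-observable toy version of that idea; the Gaussian densities, fractions, and observable are illustrative assumptions and do not reproduce the PEN analysis.

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import norm

        # Toy stand-in: one observable per event, two processes with known PDFs,
        # and a single signal fraction f fitted by maximum likelihood.
        rng = np.random.default_rng(7)
        signal = rng.normal(70.0, 2.0, size=300)        # narrow peak (hypothetical)
        background = rng.normal(30.0, 10.0, size=2700)  # broad component (hypothetical)
        x = np.concatenate([signal, background])

        pdf_sig = lambda v: norm.pdf(v, 70.0, 2.0)
        pdf_bkg = lambda v: norm.pdf(v, 30.0, 10.0)

        def nll(f):
            # Each event contributes f*p_sig(x) + (1-f)*p_bkg(x) to the likelihood.
            return -np.sum(np.log(f * pdf_sig(x) + (1.0 - f) * pdf_bkg(x)))

        fit = minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6), method="bounded")
        print("fitted signal fraction:", round(fit.x, 4), "(true: 0.1)")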

  15. Bayesian and maximum likelihood estimation of genetic maps

    DEFF Research Database (Denmark)

    York, Thomas L.; Durrett, Richard T.; Tanksley, Steven;

    2005-01-01

    There has recently been increased interest in the use of Markov Chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods that makes the Bayesian method applicable to large data sets. We present an extensive simulation study examining the statistical properties of the method and comparing it with the likelihood method implemented in Mapmaker. We show that the Maximum A Posteriori (MAP) estimator of the genetic distances...

  16. Similar tests and the standardized log likelihood ratio statistic

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1986-01-01

    When testing an affine hypothesis in an exponential family, the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast, there is a 'primitive' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n^{-3...

  17. A Comparison of Routing Protocol for WSNs: Redundancy Based Approach

    Directory of Open Access Journals (Sweden)

    Anand Prakash

    2014-03-01

    Full Text Available Wireless Sensor Networks (WSNs), with their dynamic applications, have gained tremendous attention from researchers. Constant monitoring of critical situations has attracted researchers to utilize WSNs on a wide range of platforms. The main focus in WSNs is to enhance network localization as much as possible, for efficient and optimal utilization of resources. Different approaches based upon redundancy have been proposed for optimum functionality. Localization is always related to the redundancy of sensor nodes deployed at remote areas for constant and fault-tolerant monitoring. In this work, we present a comparison of classic flooding and the gossip protocol for homogeneous networks, which enhances stability and throughput quite significantly.

  18. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

    The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polyno

  19. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2002-01-01

    Composite likelihood; Two-stage estimation; Family studies; Copula; Optimal weights; All possible pairs

  20. Comparison of surface and hydrogel-based protein microchips.

    Science.gov (United States)

    Zubtsov, D A; Savvateeva, E N; Rubina, A Yu; Pan'kov, S V; Konovalova, E V; Moiseeva, O V; Chechetkin, V R; Zasedatelev, A S

    2007-09-15

    Protein microchips are designed for high-throughput evaluation of the concentrations and activities of various proteins. The rapid advance in microchip technology and a wide variety of existing techniques pose the problem of a unified approach to the assessment and comparison of different platforms. Here we compare the characteristics of protein microchips developed for quantitative immunoassay with those of antibodies immobilized on glass surfaces and in hemispherical gel pads. Spotting concentrations of antibodies used for manufacturing of microchips of both types and concentrations of antigen in analyte solution were identical. We compared the efficiency of antibody immobilization, the intensity of fluorescence signals for both direct and sandwich-type immunoassays, and the reaction-diffusion kinetics of the formation of antibody-antigen complexes for surface and gel-based microchips. Our results demonstrate higher capacity and sensitivity for the hydrogel-based protein microchips, while fluorescence saturation kinetics for the two types of microarrays was comparable.

  1. Choosing the observational likelihood in state-space stock assessment models

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Nielsen, Anders; Thygesen, Uffe Høgsbro

    2016-01-01

    Data used in stock assessment models result from combinations of biological, ecological, fishery, and sampling processes. Since different types of errors propagate through these processes it can be difficult to identify a particular family of distributions for modelling errors on observations a priori. By implementing several observational likelihoods, modelling both numbers- and proportions-at-age, in an age based state-space stock assessment model, we compare the model fit for each choice of likelihood along with the implications for spawning stock biomass and average fishing mortality. We propose using AIC intervals based on fitting the full observational model for comparing different observational likelihoods. Using data from four stocks, we show that the model fit is improved by modelling the correlation of observations within years. However, the best choice of observational likelihood...

  2. Choosing the observational likelihood in state-space stock assessment models

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Nielsen, Anders; Thygesen, Uffe Høgsbro

    2017-01-01

    Data used in stock assessment models result from combinations of biological, ecological, fishery, and sampling processes. Since different types of errors propagate through these processes it can be difficult to identify a particular family of distributions for modelling errors on observations a priori. By implementing several observational likelihoods, modelling both numbers- and proportions-at-age, in an age based state-space stock assessment model, we compare the model fit for each choice of likelihood along with the implications for spawning stock biomass and average fishing mortality. We propose using AIC intervals based on fitting the full observational model for comparing different observational likelihoods. Using data from four stocks, we show that the model fit is improved by modelling the correlation of observations within years. However, the best choice of observational likelihood...

  3. Empirical likelihood ratio tests for multivariate regression models

    Institute of Scientific and Technical Information of China (English)

    WU Jianhong; ZHU Lixing

    2007-01-01

    This paper proposes some diagnostic tools for checking the adequacy of multivariate regression models, including classical regression and time series autoregression. In statistical inference, the empirical likelihood ratio method is well known to be a powerful tool for constructing tests and confidence regions. For model checking, however, the naive empirical likelihood (EL) based tests do not enjoy Wilks' phenomenon. Hence, we make use of bias correction to construct EL-based score tests and derive a nonparametric version of Wilks' theorem. Moreover, by combining the advantages of the EL and score test methods, the EL-based score tests share many desirable features: they are self-scale invariant and can detect alternatives that converge to the null at rate n^{-1/2}, possibly the fastest rate for lack-of-fit testing; and they involve weight functions, which provide the flexibility to choose scores for improving power performance, especially under directional alternatives. Furthermore, when the alternatives are not directional, we construct asymptotically distribution-free maximin tests for a large class of possible alternatives. A simulation study is carried out and an application to a real dataset is analyzed.

  4. Evaluation of infrared spectra analyses using a likelihood ratio approach: A practical example of spray paint examination.

    Science.gov (United States)

    Muehlethaler, Cyril; Massonnet, Geneviève; Hicks, Tacha

    2016-03-01

    Depending on the forensic disciplines and on the analytical techniques used, Bayesian methods of evaluation have been applied both as a two-step approach (first comparison, then evaluation) and as a continuous approach (comparison and evaluation in one step). However, in order to use the continuous approach, the measurements have to be reliably summarized as a numerical value linked to the property of interest, whose occurrence can be determined (e.g., refractive index measurements of glass samples). For paint traces analyzed by Fourier transform infrared spectroscopy (FTIR), however, the statistical comparison of the spectra is generally done by a similarity measure (e.g., Pearson correlation, Euclidean distance). Although useful, these measures cannot be directly associated with frequencies of occurrence of the chemical composition (binders, extenders, pigments). The continuous approach as described above is not possible, and a two-step evaluation, 1) comparison of the spectra and 2) evaluation of the results, is therefore the common practice reported in most laboratories. Derived from a practical question that arose during casework, a way of integrating the similarity measure between spectra into a continuous likelihood ratio formula was explored. This article proposes the use of a likelihood ratio approach with the similarity measure of infrared spectra of spray paints, based on distributions over sub-populations given by the color and composition of spray paint cans. Taking into account not only the rarity of the paint composition, but also the "quality" of the analytical match, provides a more balanced evaluation given source or activity level propositions. We also demonstrate that a joint statistical-expertal methodology allows for a more transparent evaluation of the results and makes better use of current knowledge. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
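
    The generic form of such a score-based likelihood ratio is the ratio of the similarity-score density under the same-source proposition to its density under the different-source proposition. The Python sketch below estimates both densities by kernel density estimation on synthetic scores; the score distributions are illustrative assumptions, and the paper's sub-population modelling by colour and composition is not reproduced.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(8)
        # Synthetic correlation-type scores between FTIR spectra (stand-in data):
        same_source_scores = np.clip(rng.normal(0.985, 0.01, 400), 0, 1)
        diff_source_scores = np.clip(rng.normal(0.80, 0.10, 400), 0, 1)

        # Score-based LR: density of the score under "same source" divided by
        # its density under "different source", both estimated by KDE.
        f_same = gaussian_kde(same_source_scores)
        f_diff = gaussian_kde(diff_source_scores)

        def score_lr(score):
            return float(f_same(score) / f_diff(score))

        for s in (0.99, 0.95, 0.85):
            print(f"score {s:.2f} -> LR {score_lr(s):.2f}")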

  5. Statistical analysis of the Lognormal-Pareto distribution using Probability Weighted Moments and Maximum Likelihood

    OpenAIRE

    Marco Bee

    2012-01-01

    This paper deals with the estimation of the lognormal-Pareto and the lognormal-Generalized Pareto mixture distributions. The log-likelihood function is discontinuous, so that Maximum Likelihood Estimation is not asymptotically optimal. For this reason, we develop an alternative method based on Probability Weighted Moments. We show that the standard version of the method can be applied to the first distribution, but not to the latter. Thus, in the lognormal- Generalized Pareto case, we work ou...

  6. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under a model of multiple independent reader sessions with detection errors due to unreliable radio communication links. The estimation performance is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error.

  7. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    Science.gov (United States)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  8. Maximum likelihood estimation in constrained parameter spaces for mixtures of factor analyzers

    OpenAIRE

    Greselin, Francesca; Ingrassia, Salvatore

    2013-01-01

    Mixtures of factor analyzers are becoming more and more popular in the area of model based clustering of high-dimensional data. According to the likelihood approach in data modeling, it is well known that the unconstrained log-likelihood function may present spurious maxima and singularities and this is due to specific patterns of the estimated covariance structure, when their determinant approaches 0. To reduce such drawbacks, in this paper we introduce a procedure for the parameter estimati...

  9. Hierarchical Linear Modeling with Maximum Likelihood, Restricted Maximum Likelihood, and Fully Bayesian Estimation

    Science.gov (United States)

    Boedeker, Peter

    2017-01-01

    Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…

  10. A polytomous conditional likelihood approach for combining matched and unmatched case-control studies.

    Science.gov (United States)

    Gebregziabher, Mulugeta; Guimaraes, Paulo; Cozen, Wendy; Conti, David V

    2010-04-30

    In genetic association studies it is becoming increasingly imperative to have large sample sizes to identify and replicate genetic effects. To achieve these sample sizes, many research initiatives are encouraging the collaboration and combination of several existing matched and unmatched case-control studies. Thus, it is becoming more common to compare multiple sets of controls with the same case group or multiple case groups to validate or confirm a positive or negative finding. Usually, a naive approach of fitting separate models for each case-control comparison is used to make inference about disease-exposure association. But, this approach does not make use of all the observed data and hence could lead to inconsistent results. The problem is compounded when a common case group is used in each case-control comparison. An alternative to fitting separate models is to use a polytomous logistic model but, this model does not combine matched and unmatched case-control data. Thus, we propose a polytomous logistic regression approach based on a latent group indicator and a conditional likelihood to do a combined analysis of matched and unmatched case-control data. We use simulation studies to evaluate the performance of the proposed method and a case-control study of multiple myeloma and Inter-Leukin-6 as an example. Our results indicate that the proposed method leads to a more efficient homogeneity test and a pooled estimate with smaller standard error.

  11. Groups, information theory, and Einstein's likelihood principle

    Science.gov (United States)

    Sicuro, Gabriele; Tempesta, Piergiulio

    2016-04-01

    We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.

  12. Dishonestly increasing the likelihood of winning

    Directory of Open Access Journals (Sweden)

    Shaul Shalvi

    2012-05-01

    Full Text Available People not only seek to avoid losses or secure gains; they also attempt to create opportunities for obtaining positive outcomes. When distributing money between gambles with equal probabilities, people often invest in turning negative gambles into positive ones, even at a cost of reduced expected value. Results of an experiment revealed that (1) the preference to turn a negative outcome into a positive outcome exists when people's ability to do so depends on their performance levels (rather than merely on their choice), (2) this preference is amplified when the likelihood to turn negative into positive is high rather than low, and (3) this preference is attenuated when people can lie about their performance levels, allowing them to turn negative into positive not by performing better but rather by lying about how well they performed.

  13. A comparison of food crispness based on the cloud model.

    Science.gov (United States)

    Wang, Minghui; Sun, Yonghai; Hou, Jumin; Wang, Xia; Bai, Xue; Wu, Chunhui; Yu, Libo; Yang, Jie

    2017-08-23

    The cloud model is a typical model that transforms a qualitative concept into a quantitative description, but it has rarely been used in texture studies. The purpose of this study was to apply the cloud model to food crispness comparison. The acoustic signals of carrots, white radishes, potatoes, Fuji apples, and crystal pears were recorded during compression, and three time-domain signal characteristics were extracted: sound intensity, maximum short-time frame energy, and waveform index. These three signal characteristics and the cloud model were used to compare the crispness of the samples. The crispness based on the Ex value of the cloud model, in descending order, was carrot > potato > white radish > Fuji apple > crystal pear. To verify the results obtained from the acoustic signals, mechanical measurement and sensory evaluation were conducted; both verification experiments confirmed the feasibility of the cloud model. The microstructures of the five samples were also analyzed, and the microstructure parameters were negatively related to crispness (p < 0.05). The cloud model method can be used for crispness comparison of different kinds of foods. The method is more accurate than traditional methods such as mechanical measurement and sensory evaluation, and can also be applied extensively to other texture studies. © 2017 Wiley Periodicals, Inc.
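
    The numerical characteristics referred to above (Ex, En, He) are usually recovered from sample data with the standard backward cloud generator. The Python sketch below implements the commonly quoted formulas for that generator on made-up acoustic-intensity samples; it is assumed, not verified, that the paper uses this exact variant, and the sample values are hypothetical.

        import numpy as np

        def backward_cloud(samples):
            """Standard backward cloud generator (without certainty degrees):
            estimate the cloud's expectation Ex, entropy En and hyper-entropy He."""
            x = np.asarray(samples, dtype=float)
            ex = x.mean()
            en = np.sqrt(np.pi / 2.0) * np.mean(np.abs(x - ex))
            he = np.sqrt(max(x.var(ddof=1) - en ** 2, 0.0))
            return ex, en, he

        # Toy sound-intensity samples for two foods; a higher Ex reads as crisper.
        rng = np.random.default_rng(9)
        carrot = rng.normal(78.0, 4.0, 200)   # hypothetical intensity values (dB)
        pear = rng.normal(62.0, 6.0, 200)
        for name, data in (("carrot", carrot), ("pear", pear)):
            ex, en, he = backward_cloud(data)
            print(f"{name}: Ex={ex:.1f} En={en:.2f} He={he:.2f}")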

  14. Early Course in Obstetrics Increases Likelihood of Practice Including Obstetrics.

    Science.gov (United States)

    Pearson, Jennifer; Westra, Ruth

    2016-10-01

    The Department of Family Medicine and Community Health Duluth has offered the Obstetrical Longitudinal Course (OBLC) as an elective for first-year medical students since 1999. The objective of the OBLC Impact Survey was to assess the effectiveness of the course over the past 15 years. A Qualtrics survey was emailed to participants enrolled in the course from 1999-2014. Data was compiled for the respondent group as a whole as well as four cohorts based on current level of training/practice. Cross-tabulations with Fisher's exact test were applied and odds ratios calculated for factors affecting likelihood of eventual practice including obstetrics. Participation in the OBLC was successful in increasing exposure, awareness, and comfort in caring for obstetrical patients and feeling more prepared for the OB-GYN Clerkship. A total of 50.5% of course participants felt the OBLC influenced their choice of specialty. For participants who are currently physicians, 51% are practicing family medicine with obstetrics or OB-GYN. Of the cohort of family physicians, 65.2% made the decision whether to include obstetrics in practice during medical school. Odds ratios show the likelihood of practicing obstetrics is higher when participants have completed the OBLC and also are practicing in a rural community. Early exposure to obstetrics, as provided by the OBLC, appears to increase the likelihood of including obstetrics in practice, especially if eventual practice is in a rural community. This course may be a tool to help create a pipeline for future rural family physicians providing obstetrical care.

  15. The Multi-Mission Maximum Likelihood framework (3ML)

    CERN Document Server

    Vianello, Giacomo; Younk, Patrick; Tibaldo, Luigi; Burgess, James M; Ayala, Hugo; Harding, Patrick; Hui, Michelle; Omodei, Nicola; Zhou, Hao

    2015-01-01

    Astrophysical sources are now observed by many different instruments at different wavelengths, from radio to high-energy gamma-rays, with unprecedented quality. Putting all these data together to form a coherent view, however, is a very difficult task. Each instrument has its own data format, software and analysis procedure, which are difficult to combine. For example, it is very challenging to perform a broadband fit of the energy spectrum of a source. The Multi-Mission Maximum Likelihood framework (3ML) aims to solve this issue by providing a common framework that allows for coherent modeling of sources using all the available data, independent of their origin. At the same time, thanks to its plug-in based architecture, 3ML uses the existing official software of each instrument for the corresponding data in a way that is transparent to the user. 3ML is based on the likelihood formalism, in which a model summarizing our knowledge about a particular region of the sky is convolved with the instrument...
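
    The plug-in idea can be illustrated with a toy joint likelihood in which each "instrument" contributes its own log-likelihood for a shared source model; this is only a schematic sketch and does not use the actual 3ML API, so the class and parameter names below are invented for illustration.

        import numpy as np

        class GaussianPlugin:
            """Toy plugin: Gaussian flux measurements of a power-law source at given energies."""
            def __init__(self, energies, fluxes, sigmas):
                self.e, self.f, self.s = map(np.asarray, (energies, fluxes, sigmas))

            def log_like(self, norm, index):
                model = norm * self.e ** index          # shared source model
                return -0.5 * np.sum(((self.f - model) / self.s) ** 2)

        def joint_log_like(plugins, norm, index):
            # The joint likelihood is simply the sum of the per-instrument terms.
            return sum(p.log_like(norm, index) for p in plugins)

        xray = GaussianPlugin([1, 2, 4], [1.0, 0.51, 0.26], [0.05, 0.04, 0.03])
        gamma = GaussianPlugin([10, 100], [0.10, 0.011], [0.02, 0.004])
        print(joint_log_like([xray, gamma], norm=1.0, index=-1.0))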

  16. A Comparison of Moments-Based Logo Recognition Methods

    Directory of Open Access Journals (Sweden)

    Zili Zhang

    2014-01-01

    Full Text Available Logo recognition is an important problem in document imaging, advertising, and intelligent transportation. Although logos are studied in many ways in these fields, logo recognition is an essential subprocess, and within it the choice of descriptor is crucial. Moments are powerful descriptors, but their behaviour has not previously been examined for logo recognition, so it is unclear which moments are most appropriate for which kinds of logos. In this paper we investigate the relations between moments and logos under different transforms, i.e., which moments are suited to logos subjected to which transforms. Open datasets from the University of Maryland are employed. The moment-based comparisons cover logos corrupted by noise and logos subjected to rotation, scaling, and combined rotation and scaling.
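
    As one concrete example of a moment descriptor, the sketch below computes the seven Hu moments of a binarized logo image with OpenCV; the file names are placeholders, and the paper may additionally consider other moment families (e.g., Zernike moments).

        import cv2
        import numpy as np

        def hu_descriptor(path):
            """Return a log-scaled 7-element Hu-moment descriptor for a logo image."""
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            hu = cv2.HuMoments(cv2.moments(binary)).flatten()
            # Log scaling compresses the large dynamic range of the raw moments.
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

        # Logos can then be matched by, e.g., Euclidean distance between descriptors:
        # d = np.linalg.norm(hu_descriptor("logo_a.png") - hu_descriptor("logo_b.png"))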

  17. Image-based spectral transmission estimation using "sensitivity comparison".

    Science.gov (United States)

    Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani

    2017-01-20

    Although digital cameras have been used for spectral reflectance estimation, transmission measurement has rarely been considered in such studies. This study presents a method named sensitivity comparison (SC) for spectral transmission estimation. The method needs neither a priori knowledge of the samples nor statistical information from a given reflectance dataset. As with spectrophotometers, the SC method needs one shot for calibration and another shot for measurement. The method exploits the sensitivity of the camera in the absence and presence of transparent colored objects to estimate transmission. 138 Kodak Wratten gelatin filter transmissions were used to evaluate the proposed method. Using a model of the imaging system at different noise levels, the performance of the proposed method was compared with a training-based Matrix R method. To check the performance of the SC method in practice, 33 man-made colored transparent films were measured with a conventional three-channel camera. The method generated promising results using different error metrics.

  18. Comparison of three sensory profiling methods based on consumer perception

    DEFF Research Database (Denmark)

    Reinbach, Helene Christine; Giacalone, Davide; Ribeiro, Letícia Machado;

    2014-01-01

    The present study compares three profiling methods based on consumer perceptions in their ability to discriminate and describe eight beers. Consumers (N=135) evaluated eight different beers using Check-All-That-Apply (CATA) methodology in two variations, with (n=63) and without (n=73) rating...... the intensity of the checked descriptors. With CATA, consumers rated 38 descriptors grouped in 7 overall categories (berries, floral, hoppy, nutty, roasted, spicy/herbal and woody). Additionally 40 of the consumers evaluated the same samples by partial Napping® followed by Ultra Flash Profiling (UFP). ANOVA...... comparisons the RV coefficients varied between 0.90 and 0.97, indicating a very high similarity between all three methods. These results show that the precision and reproducibility of sensory information obtained by consumers by CATA is comparable to that of Napping. The choice of methodology for consumer...
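
    The RV coefficients reported above measure the similarity between two multivariate configurations (e.g., the product spaces obtained from CATA and Napping). A minimal sketch of the computation, assuming each method is summarized as a products-by-attributes (or products-by-dimensions) matrix with matching rows; the data below are random placeholders.

        import numpy as np

        def rv_coefficient(X, Y):
            """RV coefficient between two column-centred data matrices with matching rows."""
            X = X - X.mean(axis=0)
            Y = Y - Y.mean(axis=0)
            Sx, Sy = X @ X.T, Y @ Y.T       # product (configuration) matrices
            return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

        # Hypothetical example: 8 beers described by 7 CATA categories vs. 2 Napping dimensions
        rng = np.random.default_rng(0)
        cata = rng.random((8, 7))
        napping = rng.random((8, 2))
        print(rv_coefficient(cata, napping))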

  19. Comparison of Estimators for Exponentiated Inverted Weibull Distribution Based on Grouped Data

    OpenAIRE

    2014-01-01

    In many situations, instead of a complete sample, data are available only in grouped form. This paper presents estimation of population parameters for the exponentiated inverted Weibull distribution based on grouped data with equi-spaced and unequi-spaced grouping. Several alternative estimation schemes, such as the method of maximum likelihood, least lines, least squares, minimum chi-square, and modified minimum chi-square, are considered. Since the different methods of estimation didn...
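
    A sketch of maximum likelihood estimation from grouped data, assuming the two-parameter exponentiated inverted Weibull CDF F(x; alpha, beta) = [exp(-x^(-beta))]^alpha used in parts of the EIW literature (the paper's parameterization, and its other estimators, may differ). The grouped-data log-likelihood sums n_i * log(F(upper_i) - F(lower_i)) over the class intervals; the class boundaries and counts below are hypothetical.

        import numpy as np
        from scipy.optimize import minimize

        def eiw_cdf(x, alpha, beta):
            return np.exp(-alpha * x ** (-beta))     # [exp(-x^-beta)]^alpha

        def grouped_neg_loglik(params, edges, counts):
            alpha, beta = np.exp(params)             # optimize on the log scale for positivity
            cdf = eiw_cdf(np.asarray(edges, dtype=float), alpha, beta)
            probs = np.clip(np.diff(cdf), 1e-300, None)
            return -np.sum(counts * np.log(probs))

        # Hypothetical grouped sample: class boundaries and observed counts
        edges = [0.2, 0.5, 1.0, 2.0, 5.0]
        counts = [8, 25, 40, 17]
        res = minimize(grouped_neg_loglik, x0=np.log([1.0, 1.0]),
                       args=(edges, counts), method="Nelder-Mead")
        print(np.exp(res.x))   # ML estimates of (alpha, beta)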

  20. Initial application of the maximum likelihood earthquake location method to early warning system in South Korea

    Science.gov (United States)

    Sheen, D. H.; Seong, Y. J.; Park, J. H.; Lim, I. S.

    2015-12-01

    Early this year, the Korea Meteorological Administration (KMA) began to operate the first stage of an earthquake early warning system (EEWS) and to provide early warning information to the general public. The EEWS of the KMA is based on the Earthquake Alarm Systems version 2 (ElarmS-2), developed at the University of California, Berkeley. This method estimates the earthquake location using a simple grid-search algorithm that finds the location with the minimum variance of the origin time on successively finer grids. A robust maximum likelihood earthquake location (MAXEL) method for early warning, based on the equal differential times of P arrivals, was recently developed. The MAXEL has been demonstrated to determine the event location successfully, even when an outlier is included in the small number of P arrivals. This presentation details the application of the MAXEL to the EEWS of the KMA, its performance evaluation over seismic networks in South Korea with synthetic data, and a comparison of earthquake-location statistics based on ElarmS-2 and MAXEL.
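
    A schematic of likelihood-based location from differential P arrival times on a grid, assuming a homogeneous velocity model and a Laplacian error model; this only illustrates the idea of maximizing a misfit-based likelihood over candidate locations and is not the MAXEL implementation. Stations, picks, and the grid are hypothetical.

        import numpy as np
        from itertools import combinations

        V_P = 6.0  # km/s, assumed homogeneous P-wave velocity

        def travel_time(src, sta):
            return np.linalg.norm(np.asarray(src) - np.asarray(sta)) / V_P

        def edt_log_like(src, stations, picks, sigma=0.5):
            """Log-likelihood from equal-differential-time residuals (Laplace errors)."""
            ll = 0.0
            for i, j in combinations(range(len(stations)), 2):
                observed = picks[i] - picks[j]
                predicted = travel_time(src, stations[i]) - travel_time(src, stations[j])
                ll += -abs(observed - predicted) / sigma
            return ll

        # Hypothetical station coordinates (km) and P picks (s); grid search over the epicentre
        stations = [(0, 0), (30, 5), (10, 40), (45, 35)]
        picks = [4.1, 6.0, 7.2, 9.0]
        grid = [(x, y) for x in range(0, 60, 2) for y in range(0, 60, 2)]
        best = max(grid, key=lambda s: edt_log_like(s, stations, picks))
        print("ML epicentre estimate:", best)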

  1. Rayleigh-maximum-likelihood bilateral filter for ultrasound image enhancement.

    Science.gov (United States)

    Li, Haiyan; Wu, Jun; Miao, Aimin; Yu, Pengfei; Chen, Jianhua; Zhang, Yufeng

    2017-04-17

    Ultrasound imaging plays an important role in computer-aided diagnosis since it is non-invasive and cost-effective. However, ultrasound images are inevitably contaminated by noise and speckle during acquisition, which hinder the physician's interpretation of the images and decrease the accuracy of clinical diagnosis. Denoising is therefore an important component in enhancing the quality of ultrasound images; however, current denoising methods tend to remove noise while ignoring the statistical characteristics of speckle, undermining the effectiveness of despeckling, or vice versa. In addition, most existing algorithms do not identify noise, speckle or edges before removing noise or speckle, and thus they reduce noise and speckle while blurring edge details. It is therefore challenging for traditional methods to effectively remove noise and speckle in ultrasound images while preserving edge details. To overcome these limitations, a novel method, called the Rayleigh-maximum-likelihood switching bilateral filter (RSBF), is proposed to enhance ultrasound images in two steps: detection of noise, speckle and edges, followed by filtering. Firstly, a sorted quadrant median vector scheme is utilized to calculate a reference median in the filtering window, which is compared with the central pixel to classify the target pixel as noise, speckle or noise-free. Subsequently, noise is removed by a bilateral filter and speckle is suppressed by a Rayleigh-maximum-likelihood filter, while noise-free pixels are kept unchanged. To quantitatively evaluate the performance of the proposed method, synthetic ultrasound images contaminated by speckle are simulated using a speckle model that follows a Rayleigh distribution. Thereafter, corrupted synthetic images are generated by multiplying the original image with Rayleigh-distributed speckle at various signal-to-noise ratio (SNR) levels and
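
    The Rayleigh-maximum-likelihood part rests on the closed-form ML estimate of the Rayleigh scale from the pixels in a local window, sigma_hat = sqrt(sum(x_i^2) / (2n)). A minimal sketch of that estimator, checked on simulated Rayleigh speckle (the full switching bilateral filter adds the sorted-quadrant-median classification and bilateral weighting described above):

        import numpy as np

        def rayleigh_ml_scale(window):
            """Closed-form ML estimate of the Rayleigh scale parameter from a pixel window."""
            x = np.asarray(window, dtype=float).ravel()
            return np.sqrt(np.sum(x ** 2) / (2.0 * x.size))

        # Check on simulated Rayleigh-distributed speckle (true scale = 2.0)
        rng = np.random.default_rng(1)
        speckle = rng.rayleigh(scale=2.0, size=(7, 7))
        print(rayleigh_ml_scale(speckle))   # close to 2.0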

  2. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
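
    For intuition, the sketch below implements the simpler preconditioned Crank-Nicolson (pCN) proposal, which shares the dimension-independence property of function-space samplers but not the likelihood-informed (Hessian-adapted) part developed in the paper; the Gaussian prior, toy forward map, and data are purely illustrative.

        import numpy as np

        def neg_log_like(u, data, noise_sd=0.1):
            # Toy nonlinear forward map: observe the squares of the first two components
            pred = np.array([u[0] ** 2, u[1] ** 2])
            return 0.5 * np.sum((data - pred) ** 2) / noise_sd ** 2

        def pcn(data, dim=50, beta=0.2, n_steps=5000, seed=2):
            rng = np.random.default_rng(seed)
            u = rng.standard_normal(dim)              # prior is N(0, I), in any dimension
            phi = neg_log_like(u, data)
            samples = []
            for _ in range(n_steps):
                prop = np.sqrt(1 - beta ** 2) * u + beta * rng.standard_normal(dim)
                phi_prop = neg_log_like(prop, data)
                if np.log(rng.random()) < phi - phi_prop:   # prior-reversible acceptance rule
                    u, phi = prop, phi_prop
                samples.append(u[0])
            return np.array(samples)

        print(pcn(data=np.array([1.0, 0.25])).mean())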

  3. REDUCING THE LIKELIHOOD OF LONG TENNIS MATCHES

    Directory of Open Access Journals (Sweden)

    Tristan Barnett

    2006-12-01

    Full Text Available Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match

  4. Reducing the likelihood of long tennis matches.

    Science.gov (United States)

    Barnett, Tristan; Alan, Brown; Pollard, Graham

    2006-01-01

    Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key points: (i) the cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match; (ii) a final tiebreaker set, as currently used in the US Open, reduces the length of matches; (iii) a new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.
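
    A back-of-the-envelope illustration of why advantage sets inflate match length (not the generating-function machinery of the paper): after 6-6, the set ends at the end of a two-game block only if the same player wins both games of that block, so the number of extra games is twice a geometric random variable. The hold probabilities below are hypothetical.

        def expected_extra_games(p_hold_a, p_hold_b):
            """Expected number of games beyond 6-6 in an advantage set.

            A block of two games (one serve each) ends the set only when one player
            wins both games of the block; otherwise the score returns to level.
            """
            q = p_hold_a * (1 - p_hold_b) + (1 - p_hold_a) * p_hold_b
            return 2.0 / q

        # Two strong, evenly matched servers (hypothetical values)
        print(expected_extra_games(0.85, 0.85))   # roughly 7.8 extra games on average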

  5. Maximum likelihood estimates of pairwise rearrangement distances.

    Science.gov (United States)

    Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R

    2017-06-21

    Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Multiclass cancer classification based on gene expression comparison

    Science.gov (United States)

    Yang, Sitan; Naiman, Daniel Q.

    2016-01-01

    As the complexity and heterogeneity of cancer is being increasingly appreciated through genomic analyses, microarray-based cancer classification comprising multiple discriminatory molecular markers is an emerging trend. Such multiclass classification problems pose new methodological and computational challenges for developing novel and effective statistical approaches. In this paper, we introduce a new approach for classifying multiple disease states associated with cancer based on gene expression profiles. Our method focuses on detecting small sets of genes in which the relative comparison of their expression values leads to class discrimination. For an m-class problem, the classification rule typically depends on a small number of m-gene sets, which provide transparent decision boundaries and allow for potential biological interpretations. We first test our approach on seven common gene expression datasets and compare it with popular classification methods including support vector machines and random forests. We then consider an extremely large cohort of leukemia cancer to further assess its effectiveness. In both experiments, our method yields comparable or even better results to benchmark classifiers. In addition, we demonstrate that our approach can integrate pathway analysis of gene expression to provide accurate and biological meaningful classification. PMID:24918456
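
    A minimal two-class sketch in the spirit of relative-expression ("top-scoring pair") classification, where only the ordering of two genes' expression values within each sample is used; the data are synthetic, and the paper's multiclass rule based on m-gene sets is more general.

        import numpy as np
        from itertools import combinations

        def best_pair(X, y):
            """Find the gene pair (i, j) whose within-sample ordering best separates two classes."""
            best, best_score = None, -1.0
            for i, j in combinations(range(X.shape[1]), 2):
                p0 = np.mean(X[y == 0, i] < X[y == 0, j])
                p1 = np.mean(X[y == 1, i] < X[y == 1, j])
                if abs(p0 - p1) > best_score:
                    best, best_score = (i, j, p0 > p1), abs(p0 - p1)
            return best

        def predict(sample, rule):
            i, j, class0_if_less = rule
            return 0 if (sample[i] < sample[j]) == class0_if_less else 1

        rng = np.random.default_rng(3)
        X = rng.normal(size=(60, 20)); y = np.repeat([0, 1], 30)
        X[y == 0, 1] += 2.0   # class 0: gene 1 usually exceeds gene 0
        X[y == 1, 0] += 2.0   # class 1: gene 0 usually exceeds gene 1
        rule = best_pair(X, y)
        print(rule, predict(X[0], rule), predict(X[45], rule))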

  7. Empirical Likelihood for Mixed-effects Error-in-variables Model

    Institute of Scientific and Technical Information of China (English)

    Qiu-hua Chen; Ping-shou Zhong; Heng-jian Cui

    2009-01-01

    This paper mainly introduces the method of empirical likelihood and its applications to two different models. We discuss empirical likelihood inference on the fixed-effect parameters in mixed-effects models with errors in variables. We first consider a linear mixed-effects model with measurement errors in both fixed and random effects, and construct empirical likelihood confidence regions for the fixed-effects parameters and the mean parameters of the random effects. The limiting distribution of the empirical log-likelihood ratio at the true parameter is chi-squared with p+q degrees of freedom, where p and q are the dimensions of the fixed and random effects, respectively. We then discuss empirical likelihood inference in a semi-linear error-in-variables mixed-effects model. Under certain conditions, it is shown that the empirical log-likelihood ratio at the true parameter also converges to the chi-squared distribution with p+q degrees of freedom. Simulations illustrate that the proposed confidence region has a coverage probability closer to the nominal level than the normal-approximation-based confidence region.
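
    The simplest instance of this machinery is Owen's one-sample empirical likelihood for a mean, whose log-likelihood-ratio statistic is asymptotically chi-squared with one degree of freedom; a minimal sketch is given below (the mixed-effects, error-in-variables case in the paper uses more elaborate estimating equations). The data are simulated.

        import numpy as np
        from scipy.optimize import brentq

        def el_ratio_statistic(x, mu):
            """-2 log empirical likelihood ratio for H0: E[X] = mu (requires min(x) < mu < max(x))."""
            z = np.asarray(x, dtype=float) - mu
            # The Lagrange multiplier solves sum z_i / (1 + lam * z_i) = 0 on the feasible interval
            g = lambda lam: np.sum(z / (1.0 + lam * z))
            eps = 1e-8
            lo = (-1.0 + eps) / z.max()
            hi = (-1.0 + eps) / z.min()
            lam = brentq(g, lo, hi)
            return 2.0 * np.sum(np.log1p(lam * z))

        rng = np.random.default_rng(4)
        x = rng.normal(loc=1.0, scale=1.0, size=50)
        print(el_ratio_statistic(x, mu=1.0))   # typically small; compare to chi-squared(1) quantiles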

  8. PERFORMANCE COMPARISON OF CELL-BASED AND PACKET-BASED SWITCHING SCHEMES FOR SHARED MEMORY SWITCHES

    Institute of Scientific and Technical Information of China (English)

    Xi Kang; Ge Ning; Feng Chongxi

    2004-01-01

    Shared Memory (SM) switches are widely used for their high throughput, low delay and efficient use of memory. This paper compares the performance of two prominent switching schemes for SM packet switches: Cell-Based Switching (CBS) and Packet-Based Switching (PBS). Theoretical analysis is carried out to draw qualitative conclusions on the memory requirement, throughput and packet delay of the two schemes. Furthermore, simulations are carried out to obtain quantitative results of the performance comparison under various system loads, traffic patterns, and memory sizes. Simulation results show that PBS has the advantage of shorter time delay, while CBS has a lower memory requirement and outperforms PBS in throughput when the memory size is limited. The comparison can be used for trade-offs between performance and complexity in switch design.

  9. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2010-01-01

    the conditional Gaussian likelihood and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including...... d and b, and prove that they converge in distribution. We use the results to prove consistency of the maximum likelihood estimator for d,b in a large compact subset of {1/2...

  10. Likelihood ratios: Clinical application in day-to-day practice

    Directory of Open Access Journals (Sweden)

    Parikh Rajul

    2009-01-01

    Full Text Available In this article we provide an introduction to the use of likelihood ratios in clinical ophthalmology. Likelihood ratios permit the best use of clinical test results to establish diagnoses for the individual patient. Examples and step-by-step calculations demonstrate the estimation of pretest probability, pretest odds, and calculation of posttest odds and posttest probability using likelihood ratios. The benefits and limitations of this approach are discussed.
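
    The calculation described above is short enough to show in full; a minimal sketch with illustrative numbers (a hypothetical test with a positive likelihood ratio of 7.5 applied at a pretest probability of 30%):

        def posttest_probability(pretest_prob, likelihood_ratio):
            """Convert a pretest probability to a posttest probability via odds and an LR."""
            pretest_odds = pretest_prob / (1.0 - pretest_prob)
            posttest_odds = pretest_odds * likelihood_ratio
            return posttest_odds / (1.0 + posttest_odds)

        print(round(posttest_probability(0.30, 7.5), 3))   # about 0.763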

  11. Exergetic comparison of two KRW-based IGCC power plants

    Science.gov (United States)

    Tsatsaronis, G.; Tawfik, T.; Lin, L.; Gallaspy, D. T.

    1994-04-01

    In studies supported by the U.S. Department of Energy and the Electric Power Research Institute, several design configurations of Kellogg-Rust-Westinghouse (KRW)-based Integrated Gasification-Combined-Cycle (IGCC) power plants were developed. Two of these configurations are compared here from the exergetic viewpoint. The first design configuration (case 1) uses an air-blown KRW gasifier and hot gas cleanup while the second configuration (reference case) uses an oxygen-blown KRW gasifier and cold gas cleanup. Each case uses two General Electric MS7001F advanced combustion turbines. The exergetic comparison identifies the causes of performance difference between the two cases: differences in the exergy destruction of the gasification system, the gas turbine system, and the gas cooling process, as well as differences in the exergy loss accompanying the solids to disposal stream. The potential for using (a) oxygen-blown versus air-blown-KRW gasifiers, and (b) hot gas versus cold gas cleanup processes was evaluated. The results indicate that, among the available options, an oxygen-blown KRW gasifier using in-bed desulfurization combined with an optimized hot gas cleanup process has the largest potential for providing performance improvements.

  12. Comparison of wind turbines based on power curve analysis

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-02-01

    In the study measured power curves for 46 wind turbines were analyzed with the purpose to establish the basis for a consistent comparison of the efficiency of the wind turbines. Emphasis is on wind turbines above 500 kW rated power, with power curves measured after 1994 according to international recommendations. The available power curves fulfilling these requirements were smoothened according to a procedure developed for the purpose in such a way that the smoothened power curves are equally representative as the measured curves. The resulting smoothened power curves are presented in a standardized format for the subsequent processing. Using wind turbine data from the power curve documentation the analysis results in curves for specific energy production (kWh/m²/yr) versus specific rotor load (kW/m²) for a range of mean wind speeds. On this basis generalized curves for specific annual energy production versus specific rotor load are established for a number of generalized wind turbine concepts. The 46 smoothened standardized power curves presented in the report, the procedure developed to establish them, and the results of the analysis based on them aim at providers of measured power curves as well as users of them including manufacturers, advisors and decision makers. (au)
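
    The specific-energy-production figures above follow from integrating a (smoothened) power curve over a wind-speed distribution; a minimal sketch assuming a Rayleigh wind climate and an idealized power curve (all turbine numbers below are illustrative, not taken from the report):

        import numpy as np

        def annual_energy(power_curve_kw, wind_speeds, mean_wind):
            """Annual energy production (kWh/yr) from a power curve and a Rayleigh wind climate."""
            v = np.asarray(wind_speeds, dtype=float)
            scale = mean_wind * np.sqrt(2.0 / np.pi)          # Rayleigh scale from the mean speed
            pdf = (v / scale ** 2) * np.exp(-v ** 2 / (2 * scale ** 2))
            return 8760.0 * np.trapz(np.asarray(power_curve_kw) * pdf, v)

        # Idealized 600 kW turbine, 43 m rotor: cut-in 4 m/s, rated 15 m/s, cut-out 25 m/s
        v = np.linspace(0, 30, 301)
        p = np.clip(600 * ((v - 4) / 11) ** 3, 0, 600) * ((v >= 4) & (v <= 25))
        aep = annual_energy(p, v, mean_wind=7.0)
        rotor_area = np.pi * (43 / 2) ** 2
        print(aep / rotor_area)   # specific energy production in kWh/m²/yr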

  13. Transfer Entropy as a Log-likelihood Ratio

    CERN Document Server

    Barnett, Lionel

    2012-01-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the neurosciences, econometrics and the analysis of complex system dynamics in diverse fields. We show that for a class of parametrised partial Markov models for jointly stochastic processes in discrete time, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. The result generalises the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression. In the general case, an asymptotic χ² distribution for the model transfer entropy estimator is established.

  14. Transfer Entropy as a Log-Likelihood Ratio

    Science.gov (United States)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
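
    In the Gaussian case the equivalence can be made concrete: with order-1 linear models, the transfer entropy from Y to X equals half the log ratio of the restricted and full residual variances, and 2N times this estimate is the log-likelihood-ratio (Granger) test statistic. A minimal sketch on synthetic data; the lag order and coupling strength are illustrative choices.

        import numpy as np

        def gaussian_te(x, y):
            """Transfer entropy Y -> X for lag-1 linear models (nats), plus the LR statistic."""
            xt, xlag, ylag = x[1:], x[:-1], y[:-1]
            # Restricted model: x_t on its own past; full model: x_t on the past of x and y
            r_res = xt - np.polyval(np.polyfit(xlag, xt, 1), xlag)
            A = np.column_stack([xlag, ylag, np.ones_like(xlag)])
            r_full = xt - A @ np.linalg.lstsq(A, xt, rcond=None)[0]
            te = 0.5 * np.log(np.var(r_res) / np.var(r_full))
            return te, 2 * len(xt) * te      # asymptotically chi-squared(1) under H0: TE = 0

        rng = np.random.default_rng(5)
        n = 2000
        y = rng.standard_normal(n)
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = 0.4 * x[t - 1] + 0.3 * y[t - 1] + rng.standard_normal()
        print(gaussian_te(x, y))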

  15. Empirical likelihood for balanced ranked-set sampled data

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Ranked-set sampling (RSS) often provides more efficient inference than simple random sampling (SRS). In this article, we propose a systematic nonparametric technique, RSS-EL, for hypothesis testing and interval estimation with balanced RSS data using empirical likelihood (EL). We detail the approach for interval estimation and hypothesis testing in one-sample and two-sample problems and general estimating equations. In all three cases, RSS is shown to provide more efficient inference than SRS of the same size. Moreover, the RSS-EL method does not require any easily violated assumptions needed by existing rank-based nonparametric methods for RSS data, such as perfect ranking, identical ranking schemes in two groups, and location shift between two population distributions. The merit of the RSS-EL method is also demonstrated through simulation studies.

  16. Music genre classification via likelihood fusion from multiple feature models

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

    Music genre provides an efficient way to index songs in a music database and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine several different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, timbre, rhythm and temporal-variation features are considered. Then, at the second stage, these soft classification results are integrated to produce a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
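
    A sketch of the first-stage soft classification with one feature model per genre, assuming frame-level feature vectors and Gaussian mixture models (scikit-learn); the fusion stage would then combine such log-likelihoods across several feature types. Data and genre names are synthetic.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(6)
        # Synthetic frame-level timbre features (MFCC-like), one training set per genre
        train = {"rock": rng.normal(0.0, 1.0, size=(500, 13)),
                 "jazz": rng.normal(1.0, 1.2, size=(500, 13))}

        models = {g: GaussianMixture(n_components=4, covariance_type="diag",
                                     random_state=0).fit(X) for g, X in train.items()}

        def log_likelihoods(song_frames):
            """Per-genre log-likelihood of a song, summed over its frames (soft scores)."""
            return {g: m.score_samples(song_frames).sum() for g, m in models.items()}

        song = rng.normal(1.0, 1.2, size=(200, 13))        # frames from a "jazz"-like song
        scores = log_likelihoods(song)
        print(max(scores, key=scores.get), scores)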

  17. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq

    2012-06-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.

  18. Comparison of hybridization-based and sequencing-based gene expression technologies on biological replicates

    Directory of Open Access Journals (Sweden)

    Cepko Connie L

    2007-06-01

    Full Text Available Abstract Background: High-throughput systems for gene expression profiling have been developed and have matured rapidly through the past decade. Broadly, these can be divided into two categories: hybridization-based and sequencing-based approaches. With data from different technologies being accumulated, concerns and challenges are raised about the level of agreement across technologies. As part of an ongoing large-scale cross-platform data comparison framework, we report here a comparison based on identical samples between one-dye DNA microarray platforms and MPSS (Massively Parallel Signature Sequencing). Results: The DNA microarray platforms generally provided highly correlated data, while moderate correlations between microarrays and MPSS were obtained. Disagreements between the two types of technologies can be attributed to limitations inherent to both technologies. The variation found between pooled biological replicates underlines the importance of exercising caution in identification of differential expression, especially for the purposes of biomarker discovery. Conclusion: Based on different principles, hybridization-based and sequencing-based technologies should be considered complementary to each other, rather than competitive alternatives for measuring gene expression, and currently, both are important tools for transcriptome profiling.

  19. Atmospheric circulation classification comparison based on wildfires in Portugal

    Science.gov (United States)

    Pereira, M. G.; Trigo, R. M.

    2009-04-01

    Atmospheric circulation classifications are not simply descriptions of atmospheric states but a tool to understand and interpret atmospheric processes and to model the relation between atmospheric circulation and surface climate and other related variables (Huth et al., 2008). Classifications were initially developed for weather forecasting purposes; however, with the progress in computer processing capability, new and more robust objective methods were developed and applied to large datasets, making atmospheric circulation classification one of the most important fields in synoptic and statistical climatology. Classification studies have been extensively used in climate change studies (e.g. reconstructed past climates, recent observed changes and future climates), in bioclimatological research (e.g. relating human mortality to climatic factors) and in a wide variety of synoptic climatological applications (e.g. comparison between datasets, air pollution, snow avalanches, wine quality, fish captures and forest fires). Likewise, atmospheric circulation classifications are important for the study of the role of weather in wildfire occurrence in Portugal, because daily synoptic variability is the most important driver of local weather conditions (Pereira et al., 2005). In particular, the objective classification scheme developed by Trigo and DaCamara (2000) to classify the atmospheric circulation affecting Portugal has proved to be quite useful in discriminating the occurrence and development of wildfires, as well as the distribution over Portugal of surface climatic variables with impact on wildfire activity, such as maximum and minimum temperature and precipitation. This work aims to present: (i) an overview of the existing circulation classifications for the Iberian Peninsula, and (ii) the results of a comparison study between these atmospheric circulation classifications based on their relation with wildfires and relevant meteorological

  20. Cogeneration based on gasified biomass - a comparison of concepts

    Energy Technology Data Exchange (ETDEWEB)

    Olsson, Fredrik

    1999-01-01

    In this report, integration of drying and gasification of biomass into cogeneration power plants, comprising gas turbines, is investigated. The thermodynamic cycles considered are the combined cycle and the humid air turbine cycle. These are combined with either pressurised or near atmospheric gasification, and steam or exhaust gas dryer, in a number of combinations. An effort is made to facilitate a comparison of the different concepts by using, and presenting, similar assumptions and input data for all studied systems. The resulting systems are modelled using the software package ASPEN PLUS™, and for each system both the electrical efficiency and the fuel utilisation are calculated. The investigation of integrated gasification combined cycles (IGCC) reveals that systems with pressurised gasification have a potential for electrical efficiencies approaching 45% (LHV). That is 4 - 5 percentage points higher than the corresponding systems with near atmospheric gasification. The type of dryer in the system mainly influences the fuel utilisation, with an advantage of approximately 8 percentage points (LHV) for the steam dryer. The resulting values of fuel utilisation for the IGCC systems are in the range of 78 - 94% (LHV). The results for the integrated gasification humid air turbine systems (IGHAT) indicate that electrical efficiencies close to the IGCC are achievable, provided combustion of the fuel gas in highly humidified air is feasible. Reaching a high fuel utilisation is more difficult for this concept, unless the temperature levels in the district heating network are low. For comparison a conventional cogeneration plant, based on a CFB boiler and a steam turbine (Rankine cycle), is also modelled in ASPEN PLUS™. The IGCC and IGHAT show electrical efficiencies in the range of 37 - 45% (LHV), compared with a calculated value of 31% (LHV) for the Rankine cycle cogeneration plant. Apart from the electrical efficiency, also a high value of fuel

  1. Maximum likelihood pedigree reconstruction using integer linear programming.

    Science.gov (United States)

    Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A

    2013-01-01

    Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible.

  2. Improved Multichannel InSAR Height Reconstruction Method Based on Maximum Likelihood Estimation

    Institute of Scientific and Technical Information of China (English)

    袁志辉; 邓云凯; 李飞; 王宇; 柳罡

    2013-01-01

    In the application of obtaining the Earth surface's Digital Elevation Model (DEM) through InSAR technology, multichannel (multi-frequency or multi-baseline) InSAR techniques can be employed to improve the mapping capability for complex areas with steep slopes or strong height discontinuities, and to solve the ambiguity problem that exists in the single-baseline case. This paper compares the performance of Maximum Likelihood (ML) estimation with Maximum A Posteriori (MAP) estimation, and adds two steps, bad-pixel judgment and weighted filtering, after the ML estimation. Bad-pixel judgment is completed through cluster analysis and the relationship between adjacent pixels, and a special weighted mean filter is used to remove the bad pixels. In this way, the efficiency advantage of the ML method is kept, and the accuracy of the DEM is also improved. Simulation results indicate that, under the same conditions, this method can maintain good accuracy while greatly improving computational efficiency, which is advantageous for processing large data sets.

  3. Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is id...... is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model...

  4. Derivation of the Mass Distribution of Extrasolar Planets with MAXLIMA - a Maximum Likelihood Algorithm

    CERN Document Server

    Zucker, S W; Zucker, Shay; Mazeh, Tsevi

    2001-01-01

    We construct a maximum-likelihood algorithm - MAXLIMA, to derive the mass distribution of the extrasolar planets when only the minimum masses are observed. The algorithm derives the distribution by solving a numerically stable set of equations, and does not need any iteration or smoothing. Based on 50 minimum masses, MAXLIMA yields a distribution which is approximately flat in log M, and might rise slightly towards lower masses. The frequency drops off very sharply when going to masses higher than 10 Jupiter masses, although we suspect there is still a higher mass tail that extends up to probably 20 Jupiter masses. We estimate that 5% of the G stars in the solar neighborhood have planets in the range of 1-10 Jupiter masses with periods shorter than 1500 days. For comparison we present the mass distribution of stellar companions in the range of 100--1000 Jupiter masses, which is also approximately flat in log M. The two populations are separated by the "brown-dwarf desert", a fact that strongly supports the id...

  5. Bayesian penalized log-likelihood ratio approach for dose response clinical trial studies.

    Science.gov (United States)

    Tang, Yuanyuan; Cai, Chunyan; Sun, Liangrui; He, Jianghua

    2017-02-13

    In the literature, there are a few unified approaches to test proof of concept and estimate a target dose, including the multiple comparison procedure using a modeling approach, and the permutation approach proposed by Klingenberg. We discuss and compare the operating characteristics of these unified approaches and further develop an alternative approach in a Bayesian framework based on the posterior distribution of a penalized log-likelihood ratio test statistic. Our Bayesian approach is much more flexible in handling linear or nonlinear dose-response relationships and is more efficient than the permutation approach. The operating characteristics of our Bayesian approach are comparable to and sometimes better than both approaches in a wide range of dose-response relationships. It yields credible intervals as well as a predictive distribution for the response rate at a specific dose level for the target dose estimation. Our Bayesian approach can be easily extended to continuous, categorical, and time-to-event responses. We illustrate the performance of our proposed method with extensive simulations and Phase II clinical trial data examples.

  6. Planck 2013 results. XV. CMB power spectra and likelihood

    DEFF Research Database (Denmark)

    Tauber, Jan; Bartlett, J.G.; Bucher, M.;

    2014-01-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best...

  7. EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS

    Institute of Scientific and Technical Information of China (English)

    QinYongsong; JiangBo; LiYufang

    2005-01-01

    In this paper, the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.

  8. CONSTRUCTING A FLEXIBLE LIKELIHOOD FUNCTION FOR SPECTROSCOPIC INFERENCE

    Energy Technology Data Exchange (ETDEWEB)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Green, Gregory M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Hogg, David W., E-mail: iczekala@cfa.harvard.edu [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY, 10003 (United States)

    2015-10-20

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.

  9. Constructing a Flexible Likelihood Function for Spectroscopic Inference

    Science.gov (United States)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Hogg, David W.; Green, Gregory M.

    2015-10-01

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.
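
    The core computational object here is a Gaussian log-likelihood of the residual spectrum under a non-diagonal covariance matrix; a minimal sketch with a squared-exponential global kernel plus a diagonal noise term (the paper's framework adds local "outlier" kernels and much more, and the numbers below are illustrative only):

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        def gp_log_likelihood(wavelengths, residual, noise_sd, amp, length_scale):
            """log N(residual | 0, K) with K = amp^2 * squared-exponential kernel + diagonal noise."""
            dl = wavelengths[:, None] - wavelengths[None, :]
            K = amp ** 2 * np.exp(-0.5 * (dl / length_scale) ** 2)
            K[np.diag_indices_from(K)] += noise_sd ** 2
            c, low = cho_factor(K)
            alpha = cho_solve((c, low), residual)
            logdet = 2.0 * np.sum(np.log(np.diag(c)))
            n = residual.size
            return -0.5 * (residual @ alpha + logdet + n * np.log(2 * np.pi))

        wl = np.linspace(5000.0, 5100.0, 200)              # wavelength grid (illustrative)
        rng = np.random.default_rng(7)
        resid = 0.01 * rng.standard_normal(200)            # data minus model spectrum
        print(gp_log_likelihood(wl, resid, noise_sd=0.01, amp=0.02, length_scale=2.0))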

  10. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Conclusions: This work provides a novel, accelerated version of a likelihood-based parameter estimation method that can be readily applied to stochastic biochemical systems. In addition, our results suggest opportunities for added efficiency improvements that will further enhance our ability to mechanistically simulate biological processes.

  11. P300 amplitude variations, prior probabilities, and likelihoods: A Bayesian ERP study.

    Science.gov (United States)

    Kopp, Bruno; Seer, Caroline; Lange, Florian; Kluytmans, Anouck; Kolossa, Antonio; Fingscheidt, Tim; Hoijtink, Herbert

    2016-10-01

    The capability of the human brain for Bayesian inference was assessed by manipulating probabilistic contingencies in an urn-ball task. Event-related potentials (ERPs) were recorded in response to stimuli that differed in their relative frequency of occurrence (.18 to .82). Averaged ERPs with sufficient signal-to-noise ratio (relative frequency of occurrence > .5) were used for further analysis. Research hypotheses about relationships between probabilistic contingencies and ERP amplitude variations were formalized as (in-)equality constrained hypotheses. Conducting Bayesian model comparisons, we found that manipulations of prior probabilities and likelihoods were associated with separately modifiable and distinct ERP responses. P3a amplitudes were sensitive to the degree of prior certainty such that higher prior probabilities were related to larger frontally distributed P3a waves. P3b amplitudes were sensitive to the degree of likelihood certainty such that lower likelihoods were associated with larger parietally distributed P3b waves. These ERP data suggest that these antecedents of Bayesian inference (prior probabilities and likelihoods) are coded by the human brain.

  12. Comparison on Integer Wavelet Transforms in Spherical Wavelet Based Image Based Relighting

    Institute of Scientific and Technical Information of China (English)

    WANG Ze; LEE Yin; LEUNG Chising; WONG Tientsin; ZHU Yisheng

    2003-01-01

    To provide good-quality rendering in an Image-based relighting (IBL) system, a tremendous number of reference images under various illumination conditions are needed; therefore data compression is essential to enable interactive use, and rendering speed is another crucial consideration for real applications. Based on the Spherical wavelet transform (SWT), this paper presents a fast representation method with the Integer wavelet transform (IWT) for the IBL system. It focuses on a comparison of different IWTs combined with the Embedded zerotree wavelet (EZW) coding used in the IBL system. The whole compression procedure contains two major steps. Firstly, SWT is applied to exploit the correlation among different reference images. Secondly, the SW-transformed images are compressed with an IWT-based image compression approach. Two IWTs are used, and good results are shown in the simulations.

  13. A conditional likelihood is required to estimate the selection coefficient in ancient DNA

    Science.gov (United States)

    Valleriani, Angelo

    2016-08-01

    Time-series of allele frequencies are a useful and unique set of data to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available, and the coverage of the fitness landscape is very limited. In fact, one single trajectory is more representative of a process conditioned both on the initial and on the final condition than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient also when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable also when it is not correct.
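
    For concreteness, the sketch below builds the standard unconditioned likelihood for the selection coefficient from a Wright-Fisher model with binomial sampling, which is exactly the object the paper argues must be replaced by a conditional likelihood when only a single, endpoint-constrained trajectory is available; the population size, allele-count trajectory, and grid are illustrative.

        import numpy as np
        from scipy.stats import binom

        def unconditioned_log_lik(s, counts, N):
            """Log-likelihood of a selection coefficient s for allele counts under Wright-Fisher."""
            ll = 0.0
            for n_now, n_next in zip(counts[:-1], counts[1:]):
                p = n_now / N
                p_sel = p * (1 + s) / (1 + p * s)          # deterministic selection step
                ll += binom.logpmf(n_next, N, p_sel)       # binomial drift step
            return ll

        counts = [20, 26, 31, 41, 55, 70, 84, 93]          # hypothetical allele counts, N = 100
        grid = np.linspace(-0.2, 0.6, 81)
        lls = [unconditioned_log_lik(s, counts, N=100) for s in grid]
        print("unconditioned MLE of s:", grid[int(np.argmax(lls))])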

  14. Eliciting information from experts on the likelihood of rapid climate change.

    Science.gov (United States)

    Arnell, Nigel W; Tompkins, Emma L; Adger, W Neil

    2005-12-01

    The threat of so-called rapid or abrupt climate change has generated considerable public interest because of its potentially significant impacts. The collapse of the North Atlantic Thermohaline Circulation or the West Antarctic Ice Sheet, for example, would have potentially catastrophic effects on temperatures and sea level, respectively. But how likely are such extreme climatic changes? Is it possible actually to estimate likelihoods? This article reviews the societal demand for the likelihoods of rapid or abrupt climate change, and different methods for estimating likelihoods: past experience, model simulation, or through the elicitation of expert judgments. The article describes a survey to estimate the likelihoods of two characterizations of rapid climate change, and explores the issues associated with such surveys and the value of information produced. The surveys were based on key scientists chosen for their expertise in the climate science of abrupt climate change. Most survey respondents ascribed low likelihoods to rapid climate change, due either to the collapse of the Thermohaline Circulation or increased positive feedbacks. In each case one assessment was an order of magnitude higher than the others. We explore a high rate of refusal to participate in this expert survey: many scientists prefer to rely on output from future climate model simulations.

  15. FastTree 2--approximately maximum-likelihood trees for large alignments.

    Directory of Open Access Journals (Sweden)

    Morgan N Price

    Full Text Available BACKGROUND: We recently described FastTree, a tool for inferring phylogenies for alignments with up to hundreds of thousands of sequences. Here, we describe improvements to FastTree that improve its accuracy without sacrificing scalability. METHODOLOGY/PRINCIPAL FINDINGS: Where FastTree 1 used nearest-neighbor interchanges (NNIs and the minimum-evolution criterion to improve the tree, FastTree 2 adds minimum-evolution subtree-pruning-regrafting (SPRs and maximum-likelihood NNIs. FastTree 2 uses heuristics to restrict the search for better trees and estimates a rate of evolution for each site (the "CAT" approximation. Nevertheless, for both simulated and genuine alignments, FastTree 2 is slightly more accurate than a standard implementation of maximum-likelihood NNIs (PhyML 3 with default settings. Although FastTree 2 is not quite as accurate as methods that use maximum-likelihood SPRs, most of the splits that disagree are poorly supported, and for large alignments, FastTree 2 is 100-1,000 times faster. FastTree 2 inferred a topology and likelihood-based local support values for 237,882 distinct 16S ribosomal RNAs on a desktop computer in 22 hours and 5.8 gigabytes of memory. CONCLUSIONS/SIGNIFICANCE: FastTree 2 allows the inference of maximum-likelihood phylogenies for huge alignments. FastTree 2 is freely available at http://www.microbesonline.org/fasttree.

  16. A conditional likelihood is required to estimate the selection coefficient in ancient DNA.

    Science.gov (United States)

    Valleriani, Angelo

    2016-08-16

    Time-series of allele frequencies are a useful and unique set of data to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available, and the coverage of the fitness landscape is very limited. In fact, one single trajectory is more representative of a process conditioned both on the initial and on the final condition than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient also when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable also when it is not correct.

  17. Comparison between artificial neural networks and maximum likelihood classification in digital soil mapping

    Directory of Open Access Journals (Sweden)

    César da Silva Chagas

    2013-04-01

    Full Text Available Soil surveys are the main source of spatial information on soils and have a range of different applications, mainly in agriculture. The continuity of this activity has however been severely compromised, mainly due to a lack of governmental funding. The purpose of this study was to evaluate the feasibility of two different classifiers (artificial neural networks and a maximum likelihood algorithm) in the prediction of soil classes in the northwest of the state of Rio de Janeiro. Terrain attributes such as elevation, slope, aspect, plan curvature and compound topographic index (CTI), and indices of clay minerals, iron oxide and the Normalized Difference Vegetation Index (NDVI), derived from Landsat 7 ETM+ sensor imagery, were used as discriminating variables. The two classifiers were trained and validated for each soil class using 300 and 150 samples respectively, representing the characteristics of these classes in terms of the discriminating variables. According to the statistical tests, the accuracy of the classifier based on artificial neural networks (ANNs) was greater than that of the classic Maximum Likelihood Classifier (MLC). Comparing the results with 126 reference points showed that the resulting ANN map (73.81%) was superior to the MLC map (57.94%). The main errors when using the two classifiers were caused by: (a) the geological heterogeneity of the area coupled with problems related to the geological map; (b) the depth of lithic contact and/or rock exposure; and (c) problems with the environmental correlation model used, due to the polygenetic nature of the soils. This study confirms that the use of terrain attributes together with remote sensing data in an ANN approach can be a tool to facilitate soil mapping in Brazil, primarily due to the availability of low-cost remote sensing data and the ease with which terrain attributes can be obtained.
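
    The two classifiers can be contrasted in a few lines on synthetic "terrain attribute" data: a Gaussian maximum likelihood classifier assigns each pixel to the class with the highest class-conditional Gaussian likelihood, while a small multilayer perceptron stands in for the ANN. This is only a schematic comparison on made-up data, not the study's workflow.

        import numpy as np
        from scipy.stats import multivariate_normal
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(8)
        # Synthetic "terrain attributes" (e.g., elevation, slope, NDVI) for 3 soil classes
        X = np.vstack([rng.multivariate_normal(m, np.eye(3), 300)
                       for m in ([0, 0, 0], [2, 1, 0], [0, 2, 2])])
        y = np.repeat([0, 1, 2], 300)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

        # Gaussian maximum likelihood classifier: one fitted Gaussian per class
        gaussians = [multivariate_normal(Xtr[ytr == c].mean(0), np.cov(Xtr[ytr == c].T))
                     for c in range(3)]
        mlc_pred = np.argmax(np.column_stack([g.logpdf(Xte) for g in gaussians]), axis=1)

        ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                            random_state=0).fit(Xtr, ytr)
        print("MLC accuracy:", (mlc_pred == yte).mean(),
              "ANN accuracy:", ann.score(Xte, yte))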

  18. Elaboration likelihood and the perceived value of labels

    DEFF Research Database (Denmark)

    Poulsen, Carsten Stig; Juhl, Hans Jørn

    2001-01-01

    In this paper the increasingly popular method of choice-based conjoint analysis is used, and data are collected by pairwise comparisons. A latent class model is formulated so that the resulting data can be analyzed with segmentation in mind. The empirical study is on food labeling and the...

  20. Evidence for extra radiation? Profile likelihood versus Bayesian posterior

    CERN Document Server

    Hamann, Jan

    2011-01-01

    A number of recent analyses of cosmological data have reported hints for the presence of extra radiation beyond the standard model expectation. In order to test the robustness of these claims under different methods of constructing parameter constraints, we perform a Bayesian posterior-based and a likelihood profile-based analysis of current data. We confirm the presence of a slight discrepancy between posterior- and profile-based constraints, with the marginalised posterior preferring higher values of the effective number of neutrino species N_eff. This can be traced back to a volume effect occurring during the marginalisation process, and we demonstrate that the effect is related to the fact that cosmic microwave background (CMB) data constrain N_eff only indirectly via the redshift of matter-radiation equality. Once present CMB data are combined with external information about, e.g., the Hubble parameter, the difference between the methods becomes small compared to the uncertainty of N_eff. We conclude tha...

  1. Design of Simplified Maximum-Likelihood Receivers for Multiuser CPM Systems

    Directory of Open Access Journals (Sweden)

    Li Bing

    2014-01-01

    Full Text Available A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  2. Empirical Likelihood Approach for Treatment Effect in Pretest-Posttest Trial

    Institute of Scientific and Technical Information of China (English)

    Qixiang HE

    2012-01-01

    The empirical likelihood approach is suggested for the pretest-posttest trial, based on constraints that we construct to summarize all the given information. The author obtains a log-empirical likelihood ratio test statistic that has a standard chi-squared limiting distribution. Thus, in making inferences, there is no need to estimate the variance explicitly, and inferential procedures are easier to implement. Simulation results show that the approach of this paper is more efficient than ANCOVA II due to the sufficient and appropriate use of information.
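    A minimal sketch of the empirical likelihood machinery behind such a test, shown here for the simpler problem of testing a population mean (the data are simulated; the pretest-posttest constraints of the paper would add further estimating equations):

        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import chi2

        def el_log_ratio(x, theta):
            """-2 log empirical likelihood ratio for H0: E[X] = theta."""
            z = x - theta
            if z.min() >= 0 or z.max() <= 0:              # theta outside the convex hull
                return np.inf
            # Lagrange multiplier: solve sum z_i / (1 + lam * z_i) = 0
            lo = (-1 + 1e-10) / z.max()
            hi = (-1 + 1e-10) / z.min()
            lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
            w = 1.0 / (len(x) * (1 + lam * z))            # implied probability weights
            return -2 * np.sum(np.log(len(x) * w))

        rng = np.random.default_rng(0)
        x = rng.normal(loc=1.0, scale=2.0, size=80)
        stat = el_log_ratio(x, theta=0.0)
        print("-2 log R =", round(stat, 2), "p-value =", round(chi2.sf(stat, df=1), 4))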

  3. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

    Science.gov (United States)

    Gupta, N. K.; Mehra, R. K.

    1974-01-01

    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.
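    A hedged sketch of one common safeguard for the singular-Hessian problem mentioned above, a ridge-damped scoring step with a pseudo-inverse fallback; this is generic code, not the paper's method, and the score vector and information matrix are assumed to be supplied by the caller:

        import numpy as np

        def damped_score_step(score, info, ridge=1e-6):
            """Quasi-Newton ascent step that survives a singular or ill-conditioned
            information matrix: add a small ridge, fall back to the pseudo-inverse."""
            k = info.shape[0]
            try:
                return np.linalg.solve(info + ridge * np.eye(k), score)
            except np.linalg.LinAlgError:
                return np.linalg.pinv(info) @ score

        # toy usage with a rank-deficient (singular) information matrix
        info = np.array([[4.0, 2.0], [2.0, 1.0]])
        score = np.array([1.0, 0.5])
        print(damped_score_step(score, info))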

  4. A New Speaker Verification Method with GlobalSpeaker Model and Likelihood Score Normalization

    Institute of Scientific and Technical Information of China (English)

    张怡颖; 朱小燕; 张钹

    2000-01-01

    In this paper a new text-independent speaker verification method, GSMSV, based on likelihood score normalization is proposed. In this novel method a global speaker model is established to represent the universal features of speech and to normalize the likelihood score. Statistical analysis demonstrates that this normalization method can remove common factors of speech and bring the differences between speakers into prominence. As a result the equal error rate is decreased significantly, the verification procedure is accelerated, and the system's adaptability to speaking speed is improved.
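    A rough sketch of the score-normalization idea using Gaussian mixture models from scikit-learn; the feature dimensions, model sizes and the zero decision threshold are hypothetical, and this is not the GSMSV system itself:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        pool = rng.normal(0.0, 1.0, size=(2000, 12))     # features pooled over many speakers
        enrol = rng.normal(0.4, 1.0, size=(300, 12))     # enrolment data for one speaker
        test = rng.normal(0.4, 1.0, size=(50, 12))       # features of a test utterance

        global_model = GaussianMixture(n_components=4, covariance_type="diag",
                                       random_state=0).fit(pool)
        speaker_model = GaussianMixture(n_components=4, covariance_type="diag",
                                        random_state=0).fit(enrol)

        # normalized score: mean log-likelihood under the speaker model minus the
        # mean log-likelihood under the global (background) model
        score = speaker_model.score(test) - global_model.score(test)
        print("normalized log-likelihood score:", round(score, 3))
        print("accept" if score > 0.0 else "reject")     # the 0.0 threshold is illustrative only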

  5. Inferring fixed effects in a mixed linear model from an integrated likelihood

    DEFF Research Database (Denmark)

    Gianola, Daniel; Sorensen, Daniel

    2008-01-01

    A new method for likelihood-based inference of fixed effects in mixed linear models, with variance components treated as nuisance parameters, is presented. The method uses uniform-integration of the likelihood; the implementation employs the expectation-maximization (EM) algorithm for elimination of all nuisances, viewing random effects and variance components as missing data. In a simulation of a grazing trial, the procedure was compared with four widely used estimators of fixed effects in mixed models, and found to be competitive. An analysis of body weight in freshwater crayfish was conducted...

  6. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin...

  7. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    Science.gov (United States)

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  8. Blind Detection of Ultra-faint Streaks with a Maximum Likelihood Method

    CERN Document Server

    Dawson, William A; Kamath, Chandrika

    2016-01-01

    We have developed a maximum likelihood source detection method capable of detecting ultra-faint streaks with surface brightnesses approximately an order of magnitude fainter than the pixel level noise. Our maximum likelihood detection method is a model based approach that requires no a priori knowledge about the streak location, orientation, length, or surface brightness. This method enables discovery of typically undiscovered objects, and enables the utilization of low-cost sensors (i.e., higher-noise data). The method also easily facilitates multi-epoch co-addition. We will present the results from the application of this method to simulations, as well as real low earth orbit observations.

  9. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

    Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding is proposed to provide significant capacity gains over the traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  10. LASER: A Maximum Likelihood Toolkit for Detecting Temporal Shifts in Diversification Rates From Molecular Phylogenies

    Directory of Open Access Journals (Sweden)

    Daniel L. Rabosky

    2006-01-01

    Full Text Available Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time to alternative models where rates have remained constant over time. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.

  11. Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, Ch.W. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain)], E-mail: lerche@ific.uv.es; Ros, A. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain); Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain); Sanchez, F.; Benlloch, J.M. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain)

    2009-06-01

    The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42 mm x 42 mm x 10 mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.
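    The estimation step can be sketched as a small grid search: a forward model predicts the centroid and width for every candidate position and depth, and the maximum likelihood estimate is the candidate that best explains the measured pair. The forward model and noise levels below are purely illustrative, not a calibration of the H8500/LSO detector:

        import numpy as np

        def forward_model(x_true, depth, half_width=21.0):
            """Toy forward model: the centroid is compressed toward the centre near the
            crystal edges and the light-spread width grows with interaction depth
            (illustrative numbers only, not a detector calibration)."""
            centroid = x_true * (1.0 - 0.15 * (np.abs(x_true) / half_width) ** 2)
            sigma = 3.0 + 0.4 * depth
            return centroid, sigma

        def ml_position(c_meas, s_meas, noise_c=0.5, noise_s=0.3):
            """Grid-search maximum likelihood estimate of position and depth from the
            measured centroid and light-distribution width (Gaussian errors assumed)."""
            xs = np.linspace(-21.0, 21.0, 211)
            ds = np.linspace(0.0, 10.0, 51)
            X, D = np.meshgrid(xs, ds, indexing="ij")
            C, S = forward_model(X, D)
            neg_log_lik = ((C - c_meas) / noise_c) ** 2 + ((S - s_meas) / noise_s) ** 2
            i, j = np.unravel_index(np.argmin(neg_log_lik), neg_log_lik.shape)
            return xs[i], ds[j]

        print(ml_position(c_meas=12.3, s_meas=5.1))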

  12. A conditional likelihood is required to estimate the selection coefficient in ancient DNA

    CERN Document Server

    Valleriani, Angelo

    2016-01-01

    Time-series of allele frequencies are a useful and unique set of data for determining the strength of natural selection against the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and its final state than of a process free to end anywhere. Based on the Moran model of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient also when allele frequencies are close to fixation ...

  13. Term Based Comparison Metrics for Controlled and Uncontrolled Indexing Languages

    Science.gov (United States)

    Good, B. M.; Tennis, J. T.

    2009-01-01

    Introduction: We define a collection of metrics for describing and comparing sets of terms in controlled and uncontrolled indexing languages and then show how these metrics can be used to characterize a set of languages spanning folksonomies, ontologies and thesauri. Method: Metrics for term set characterization and comparison were identified and…

  14. Expert elicitation on ultrafine particles: likelihood of health effects and causal pathways

    Directory of Open Access Journals (Sweden)

    Brunekreef Bert

    2009-07-01

    Full Text Available Abstract Background Exposure to fine ambient particulate matter (PM) has consistently been associated with increased morbidity and mortality. The relationship between exposure to ultrafine particles (UFP) and health effects is less firmly established. If UFP cause health effects independently from coarser fractions, this could affect health impact assessment of air pollution, which would possibly lead to alternative policy options being considered to reduce the disease burden of PM. Therefore, we organized an expert elicitation workshop to assess the evidence for a causal relationship between exposure to UFP and health endpoints. Methods An expert elicitation on the health effects of ambient ultrafine particle exposure was carried out, focusing on: (1) the likelihood of causal relationships with key health endpoints, and (2) the likelihood of potential causal pathways for cardiac events. Based on a systematic peer-nomination procedure, fourteen European experts (epidemiologists, toxicologists and clinicians) were selected, of whom twelve attended. They were provided with a briefing book containing key literature. After a group discussion, individual expert judgments in the form of ratings of the likelihood of causal relationships and pathways were obtained using a confidence scheme adapted from the one used by the Intergovernmental Panel on Climate Change. Results The likelihood of an independent causal relationship between increased short-term UFP exposure and increased all-cause mortality, hospital admissions for cardiovascular and respiratory diseases, aggravation of asthma symptoms and lung function decrements was rated medium to high by most experts. The likelihood for long-term UFP exposure to be causally related to all-cause mortality, cardiovascular and respiratory morbidity and lung cancer was rated slightly lower, mostly medium. The experts rated the likelihood of each of the six identified possible causal pathways separately. Out of these...

  15. Maximum Likelihood Estimation of the Identification Parameters and Its Correction

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has a smaller asymptotic error than the least squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least squares methods.

  16. MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL

    Directory of Open Access Journals (Sweden)

    Vinod Kumar

    2010-01-01

    Full Text Available In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization reduces neither the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.
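    For readers who want to reproduce this kind of fit numerically, a minimal sketch using scipy's generalized gamma distribution (whose parameterization may differ from the paper's model; the simulated data and the fixed zero location are assumptions):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        data = stats.gengamma.rvs(a=2.0, c=1.5, scale=3.0, size=500, random_state=rng)

        # direct maximum likelihood fit; the location is fixed at zero so only the
        # two shape parameters and the scale are estimated
        a_hat, c_hat, loc_hat, scale_hat = stats.gengamma.fit(data, floc=0)
        print("a =", round(a_hat, 3), " c =", round(c_hat, 3), " scale =", round(scale_hat, 3))

        # log-likelihood at the estimate, e.g. for comparing alternative parameterizations
        loglik = np.sum(stats.gengamma.logpdf(data, a_hat, c_hat, loc=loc_hat, scale=scale_hat))
        print("log-likelihood:", round(loglik, 2))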

  17. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    ...insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  18. Choosing the observational likelihood in state-space stock assessment models

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Nielsen, Anders; Thygesen, Uffe Høgsbro

    By implementing different observational likelihoods in a state-space age-based stock assessment model, we are able to compare the goodness-of-fit and the effects on estimated fishing mortality for different model choices. Model fit is improved by estimating suitable correlations between age groups. We...

  19. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are

  20. Maximum likelihood PSD estimation for speech enhancement in reverberant and noisy conditions

    DEFF Research Database (Denmark)

    Kuklasinski, Adam; Doclo, Simon; Jensen, Jesper

    2016-01-01

    We propose a novel Power Spectral Density (PSD) estimator for multi-microphone systems operating in reverberant and noisy conditions. The estimator is derived using the maximum likelihood approach and is based on a blocked and pre-whitened additive signal model. The intended application......, the difference between algorithms was found to be statistically significant only in some of the experimental conditions....

  2. On penalized likelihood estimation for a non-proportional hazards regression model.

    Science.gov (United States)

    Devarajan, Karthik; Ebrahimi, Nader

    2013-07-01

    In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solution to the baseline hazard, baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times.

  3. An Item Attribute Specification Method Based On the Likelihood D2 Statistic%使用似然比D2统计量的题目属性定义方法

    Institute of Scientific and Technical Information of China (English)

    喻晓锋; 罗照盛; 高椿雷; 李喻骏; 王睿; 王钰彤

    2015-01-01

    The Q-matrix is a very important component of cognitive diagnostic assessments; it maps attributes to items. Cognitive diagnostic assessments infer the attribute mastery pattern of respondents based on item responses. In a cognitive diagnostic assessment, item responses are observable, whereas respondents' attribute mastery pattern is latent and not immediately observable. The Q-matrix plays the role of a bridge in cognitive diagnostic assessments. Therefore, the Q-matrix greatly impacts the reliability and validity of cognitive diagnostic assessments. Research on how errors in the Q-matrix affect parameter estimation and classification accuracy showed that a Q-matrix built from experts' definitions or experience is easily affected by experts' personal judgment, leading to a misspecified Q-matrix. Thus, it is important to find more objective Q-matrix inference methods. This paper was inspired by Liu, Xu and Ying's (2012) algorithm and the item-data fit statistic G2 in the item response theory framework. As further research on Q-matrix inference, an online Q-matrix estimation method based on the statistic D2 is proposed in the present study. The items on which the online algorithm is based are called base items, and it is assumed that the base items are correctly pre-specified. The online estimation algorithm can jointly estimate item parameters and item attribute vectors in an incremental manner. In the simulation studies, we considered the DINA model with different Q-matrices (3, 4 and 5 attributes), different sample sizes (400, 500, 800 and 1000), and different numbers of correct items (8, 9, 10, 11 and 12) in the initial Q-matrix. The attribute mastery patterns of the sample followed a uniform distribution, and the item parameters followed a uniform distribution on the interval [0.05, 0.25]. The results indicated that when the number of base items was not too small, the online estimation algorithm with the D2 statistic could estimate the...

  4. Probability calculus for quantitative HREM. Part II: entropy and likelihood concepts.

    Science.gov (United States)

    Möbus, G

    2000-12-01

    The technique of extracting atomic coordinates from HREM images by R-factor refinement via iterative simulation and global optimisation is described in the context of probability density estimations for unknown parameters. In the second part of this two-part paper we compare maximum likelihood and maximum entropy techniques with respect to their suitability for application within HREM. We outline practical difficulties of likelihood estimation and present a synthesis of two point-cloud techniques as a recommendable solution. This R-factor refinement with independent Monte-Carlo error calibration is a highly versatile method which allows adaptation to the special needs of HREM. Unlike with simple text-book estimation methods, there is no requirement here for the noise to be additive, uncorrelated, or Gaussian. It also becomes possible to account for a subset of systematic errors.

  5. Maximum Likelihood Learning of Conditional MTE Distributions

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2009-01-01

    We describe a procedure for inducing conditional densities within the mixtures of truncated exponentials (MTE) framework. We analyse possible conditional MTE specifications and propose a model selection scheme, based on the BIC score, for partitioning the domain of the conditioning variables....... Finally, experimental results demonstrate the applicability of the learning procedure as well as the expressive power of the conditional MTE distribution....

  6. Maximum likelihood estimation for social network dynamics

    NARCIS (Netherlands)

    Snijders, T.A.B.; Koskinen, J.; Schweinberger, M.

    2010-01-01

    A model for network panel data is discussed, based on the assumption that the observed data are discrete observations of a continuous-time Markov process on the space of all directed graphs on a given node set, in which changes in tie variables are independent conditional on the current graph. The m

  7. Comparison of Subset-Based Local and Finite Element-Based Global Digital Image Correlation

    KAUST Repository

    Pan, Bing

    2015-02-12

    Digital image correlation (DIC) techniques require an image matching algorithm to register the same physical points represented in different images. Subset-based local DIC and finite element-based (FE-based) global DIC are the two primary image matching methods that have been extensively investigated and regularly used in the field of experimental mechanics. Due to its straightforward implementation and high efficiency, subset-based local DIC has been used in almost all commercial DIC packages. However, it is argued by some researchers that FE-based global DIC offers better accuracy because of the enforced continuity between element nodes. We propose a detailed performance comparison between these different DIC algorithms both in terms of measurement accuracy and computational efficiency. Then, by measuring displacements of the same calculation points using the same calculation algorithms (e.g., correlation criterion, initial guess estimation, subpixel interpolation, optimization algorithm and convergence conditions) and identical calculation parameters (e.g., subset or element size), the performances of subset-based local DIC and two FE-based global DIC approaches are carefully compared in terms of measurement error and computational efficiency using both numerical tests and real experiments. A detailed examination of the experimental results reveals that, when subset (element) size is not very small and the local deformation within a subset (element) can be well approximated by the shape function used, standard subset-based local DIC approach not only provides better results in measured displacements, but also demonstrates much higher computation efficiency. However, several special merits of FE-based global DIC approaches are indicated.
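    The core of the subset-based local approach can be sketched in a few lines: maximize a zero-normalized cross-correlation over candidate integer displacements of a subset. Subpixel interpolation, shape functions and the FE-based global variant are omitted here, and the speckle images are synthetic:

        import numpy as np

        def zncc(a, b):
            """Zero-normalized cross-correlation between two equally sized subsets."""
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float((a * b).mean())

        def match_subset(ref, deformed, center, half=15, search=10):
            """Integer-pixel displacement of the subset around `center` that maximizes
            ZNCC; subpixel refinement is deliberately left out of this sketch."""
            cy, cx = center
            sub = ref[cy - half:cy + half + 1, cx - half:cx + half + 1]
            best, best_uv = -2.0, (0, 0)
            for v in range(-search, search + 1):
                for u in range(-search, search + 1):
                    cand = deformed[cy + v - half:cy + v + half + 1,
                                    cx + u - half:cx + u + half + 1]
                    score = zncc(sub, cand)
                    if score > best:
                        best, best_uv = score, (u, v)
            return best_uv, best

        # synthetic speckle image shifted by a known (u, v) = (3, 5) pixel displacement
        rng = np.random.default_rng(11)
        ref = rng.random((200, 200))
        deformed = np.roll(ref, shift=(5, 3), axis=(0, 1))
        print(match_subset(ref, deformed, center=(100, 100)))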

  8. Competency-based models of learning for engineers: a comparison

    Science.gov (United States)

    Lunev, Alexander; Petrova, Irina; Zaripova, Viktoria

    2013-10-01

    One of the goals of higher professional education is to develop generic student competencies across a variety of disciplines that play a crucial role in education and that provide graduates with wider opportunities for finding good jobs and better chances of promotion. In this article a list of generic competencies developed in Russian universities is compared with a similar list developed by a consortium of Russian and European universities (project TUNING-RUSSIA). A second comparison is then made with a list of competencies taken from the CDIO Syllabus. This comparison indicates the degree of similarity among the lists and the possible convergence among universities all over the world. The results are taken from a survey carried out among Russian employers, academics, and graduates. The survey asked respondents to rate each listed competence by its importance and by the degree to which it is achieved in the educational process.

  9. Maximum Likelihood Factor Structure of the Family Environment Scale.

    Science.gov (United States)

    Fowler, Patrick C.

    1981-01-01

    Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)

  10. Young adult consumers' media usage and online purchase likelihood

    African Journals Online (AJOL)

    Young adult consumers' media usage and online purchase likelihood. ... in new media applications such as the internet, email, blogging, twitter and social networks. ... Convenience sampling resulted in 1 298 completed questionnaires.

  11. Posterior distributions for likelihood ratios in forensic science.

    Science.gov (United States)

    van den Hout, Ardo; Alberink, Ivo

    2016-09-01

    Evaluation of evidence in forensic science is discussed using posterior distributions for likelihood ratios. Instead of eliminating the uncertainty by integrating (Bayes factor) or by conditioning on parameter values, uncertainty in the likelihood ratio is retained by parameter uncertainty derived from posterior distributions. A posterior distribution for a likelihood ratio can be summarised by the median and credible intervals. Using the posterior mean of the distribution is not recommended. An analysis of forensic data for body height estimation is undertaken. The posterior likelihood approach has been criticised both theoretically and with respect to applicability. This paper addresses the latter and illustrates an interesting application area. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
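    A toy sketch of reporting a posterior distribution for a likelihood ratio rather than a single number, here for a simple match-probability model with a Beta posterior (the database counts and the prior are hypothetical, not the body-height application of the paper):

        import numpy as np

        rng = np.random.default_rng(2024)

        # reference database: k "matching" profiles among n sampled individuals
        k, n = 3, 1000
        a, b = 1.0, 1.0                                    # Beta(1, 1) prior on the match probability

        theta = rng.beta(a + k, b + n - k, size=100_000)   # posterior samples of the match probability
        lr = 1.0 / theta                                   # LR = P(E | same source) / P(E | different source), numerator taken as 1

        lo, hi = np.percentile(lr, [2.5, 97.5])
        print(f"posterior median LR = {np.median(lr):.0f}, 95% credible interval = ({lo:.0f}, {hi:.0f})")
        print(f"posterior mean LR = {lr.mean():.0f}  (not recommended as a summary)")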

  12. Empirical Likelihood Ratio Confidence Interval for Positively Associated Series

    Institute of Scientific and Technical Information of China (English)

    Jun-jian Zhang

    2007-01-01

    Empirical likelihood is discussed by using the blockwise technique for strongly stationary, positively associated random variables. Our results show that the statistic is asymptotically chi-square distributed and that the corresponding confidence interval can be constructed.

  13. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn great attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
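    The fitting step can be sketched with a standard EM algorithm for a two-component univariate normal mixture (simulated data; the economic series of the paper are not reproduced):

        import numpy as np
        from scipy.stats import norm

        def em_two_normal(x, n_iter=200):
            """EM algorithm for a two-component univariate normal mixture."""
            w, mu1, mu2 = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75)
            s1 = s2 = np.std(x)
            for _ in range(n_iter):
                # E-step: posterior probability of component 1 for each observation
                d1 = w * norm.pdf(x, mu1, s1)
                d2 = (1 - w) * norm.pdf(x, mu2, s2)
                r = d1 / (d1 + d2)
                # M-step: weighted maximum likelihood updates
                w = r.mean()
                mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
                s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
                s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))
            loglik = np.sum(np.log(w * norm.pdf(x, mu1, s1) + (1 - w) * norm.pdf(x, mu2, s2)))
            return (w, mu1, s1, mu2, s2), loglik

        rng = np.random.default_rng(7)
        x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])
        params, loglik = em_two_normal(x)
        print(np.round(params, 3), round(loglik, 2))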

  14. Evaluating assessment quality in competence-based education: A qualitative comparison of two frameworks

    NARCIS (Netherlands)

    Baartman, Liesbeth; Bastiaens, Theo; Kirschner, Paul A.; Van der Vleuten, Cees

    2009-01-01

    Baartman, L. K. J., Bastiaens, T. J., Kirschner, P. A., & Van der Vleuten, C. P. M. (2007). Evaluation assessment quality in competence-based education: A qualitative comparison of two frameworks. Educational Research Review, 2, 114-129.

  15. A penalized likelihood approach for bivariate conditional normal models for dynamic co-expression analysis.

    Science.gov (United States)

    Chen, Jun; Xie, Jichun; Li, Hongzhe

    2011-03-01

    Gene co-expressions have been widely used in the analysis of microarray gene expression data. However, the co-expression patterns between two genes can be mediated by cellular states, as reflected by expression of other genes, single nucleotide polymorphisms, and activity of protein kinases. In this article, we introduce a bivariate conditional normal model for identifying the variables that can mediate the co-expression patterns between two genes. Based on this model, we introduce a likelihood ratio (LR) test and a penalized likelihood procedure for identifying the mediators that affect gene co-expression patterns. We propose an efficient computational algorithm based on iterative reweighted least squares and cyclic coordinate descent and have shown that when the tuning parameter in the penalized likelihood is appropriately selected, such a procedure has the oracle property in selecting the variables. We present simulation results to compare with existing methods and show that the LR-based approach can perform similarly or better than the existing method of liquid association and the penalized likelihood procedure can be quite effective in selecting the mediators. We apply the proposed method to yeast gene expression data in order to identify the kinases or single nucleotide polymorphisms that mediate the co-expression patterns between genes.

  16. Sieve likelihood ratio inference on general parameter space

    Institute of Scientific and Technical Information of China (English)

    SHEN Xiaotong; SHI Jian

    2005-01-01

    In this paper, a theory of sieve likelihood ratio inference on general parameter spaces (including infinite-dimensional ones) is studied. Under fairly general regularity conditions, the sieve log-likelihood ratio statistic is proved to be asymptotically χ2 distributed, which can be viewed as a generalization of the well-known Wilks' theorem. As an example, a semiparametric partial linear model is investigated.

  17. A notion of graph likelihood and an infinite monkey theorem

    CERN Document Server

    Banerji, Christopher R S; Severini, Simone

    2013-01-01

    We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present an algorithm to compute this graph invariant and closed formulas for some infinite classes. We have to leave the computational complexity of the likelihood as an open problem.

  18. A notion of graph likelihood and an infinite monkey theorem

    Science.gov (United States)

    Banerji, Christopher R. S.; Mansour, Toufik; Severini, Simone

    2014-01-01

    We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present an algorithm to compute this graph invariant and closed formulas for some infinite classes. We have to leave the computational complexity of the likelihood as an open problem.

  19. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.

    2011-05-24

    We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝ^d at p ≤ d+1 sites, d ≥ 1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p = 2 to p = 3 sites in ℝ^2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  20. Parametric likelihood inference for interval censored competing risks data.

    Science.gov (United States)

    Hudgens, Michael G; Li, Chenxi; Fine, Jason P

    2014-03-01

    Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV.

  1. Theoretical comparison between solar combisystems based on bikini tanks and tank-in-tank solar combisystems

    DEFF Research Database (Denmark)

    Yazdanshenas, Eshagh; Furbo, Simon; Bales, Chris

    2008-01-01

    Theoretical investigations have shown that solar combisystems based on bikini tanks for low energy houses perform better than solar domestic hot water systems based on mantle tanks. Tank-in-tank solar combisystems are also attractive from a thermal performance point of view. In this paper, theoretical comparisons between solar combisystems based on bikini tanks and tank-in-tank solar combisystems are presented.

  2. ON THE LIKELIHOOD OF PLANET FORMATION IN CLOSE BINARIES

    Energy Technology Data Exchange (ETDEWEB)

    Jang-Condell, Hannah, E-mail: hjangcon@uwyo.edu [Department of Physics and Astronomy, University of Wyoming, 1000 East University, Department 3905, Laramie, WY 82071 (United States)

    2015-02-01

    To date, several exoplanets have been discovered orbiting stars with close binary companions (a ≲ 30 AU). The fact that planets can form in these dynamically challenging environments implies that planet formation must be a robust process. The initial protoplanetary disks in these systems from which planets must form should be tidally truncated to radii of a few AU, which indicates that the efficiency of planet formation must be high. Here, we examine the truncation of circumstellar protoplanetary disks in close binary systems, studying how the likelihood of planet formation is affected over a range of disk parameters. If the semimajor axis of the binary is too small or its eccentricity is too high, the disk will have too little mass for planet formation to occur. However, we find that the stars in the binary systems known to have planets should have once hosted circumstellar disks that were capable of supporting planet formation despite their truncation. We present a way to characterize the feasibility of planet formation based on binary orbital parameters such as stellar mass, companion mass, eccentricity, and semimajor axis. Using this measure, we can quantify the robustness of planet formation in close binaries and better understand the overall efficiency of planet formation in general.

  3. Likelihood analysis of the Local Group acceleration revisited

    CERN Document Server

    Ciecielag, P

    2004-01-01

    We reexamine likelihood analyses of the Local Group (LG) acceleration, paying particular attention to nonlinear effects. Under the approximation that the joint distribution of the LG acceleration and velocity is Gaussian, two quantities describing nonlinear effects enter these analyses. The first one is the coherence function, i.e. the cross-correlation coefficient of the Fourier modes of gravity and velocity fields. The second one is the ratio of the velocity power spectrum to the gravity power spectrum. To date, in all analyses of the LG acceleration the second quantity was not accounted for. Extending our previous work, we study both the coherence function and the ratio of the power spectra. With the aid of numerical simulations we obtain expressions for the two as functions of wavevector and σ_8. Adopting WMAP's best determination of σ_8, we estimate the most likely value of the parameter β and its errors. As the observed values of the LG velocity and gravity, we adopt respectively a CMB-based estim...

  4. A Maximum Likelihood Approach to Least Absolute Deviation Regression

    Directory of Open Access Journals (Sweden)

    Yinbo Li

    2004-09-01

    Full Text Available Least absolute deviation (LAD) regression is an important tool used in numerous applications throughout science and engineering, mainly due to the intrinsic robust characteristics of LAD. In this paper, we show that the optimization needed to solve the LAD regression problem can be viewed as a sequence of maximum likelihood estimates (MLE) of location. The derived algorithm reduces to an iterative procedure where a simple coordinate transformation is applied during each iteration to direct the optimization procedure along edge lines of the cost surface, followed by an MLE of location which is executed by a weighted median operation. Requiring weighted medians only, the new algorithm can be easily modularized for hardware implementation, as opposed to most of the other existing LAD methods which require complicated operations such as matrix entry manipulations. One exception is Wesolowsky's direct descent algorithm, which among the top algorithms is also based on weighted median operations. Simulation shows that the new algorithm is superior in speed to Wesolowsky's algorithm, which is simple in structure as well. The new algorithm provides a better tradeoff solution between convergence speed and implementation complexity.
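    As a hedged illustration (not the authors' algorithm), the sketch below fits an LAD regression by direct minimization of the L1 cost and includes a weighted-median helper, which is the maximum likelihood location estimate under Laplace noise that the proposed method builds on:

        import numpy as np
        from scipy.optimize import minimize

        def weighted_median(values, weights):
            """Weighted median: the ML location estimate under Laplace noise."""
            order = np.argsort(values)
            v, w = values[order], weights[order]
            cum = np.cumsum(w)
            return v[np.searchsorted(cum, 0.5 * w.sum())]

        def lad_fit(X, y):
            """Least absolute deviation regression by direct minimization of the
            L1 cost (Nelder-Mead avoids needing a gradient of the absolute value)."""
            X1 = np.column_stack([np.ones(len(y)), X])
            beta0 = np.linalg.lstsq(X1, y, rcond=None)[0]          # least-squares start
            cost = lambda b: np.abs(y - X1 @ b).sum()
            return minimize(cost, beta0, method="Nelder-Mead").x

        rng = np.random.default_rng(3)
        X = rng.normal(size=(200, 2))
        y = 1.0 + X @ np.array([2.0, -1.0]) + rng.laplace(scale=0.5, size=200)
        print("LAD coefficients:", np.round(lad_fit(X, y), 3))
        print("weighted median example:",
              weighted_median(np.array([1.0, 2.0, 10.0]), np.array([1.0, 1.0, 3.0])))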

  5. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Abstract Background In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both, uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches especially when studying regions with low probability. Conclusions While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
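    A minimal sketch of the "one extra dimension" idea for a one-dimensional ODE: the log-density along a characteristic obeys the Liouville equation, so it can be integrated together with the state. The logistic vector field and the Gaussian initial density below are hypothetical choices:

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.stats import norm

        f = lambda x: x * (1.0 - x)           # logistic vector field
        df = lambda x: 1.0 - 2.0 * x          # its divergence (in 1-D: the derivative)

        def augmented(t, z):
            """State extended by one dimension carrying the log-density along the
            characteristic: d(log rho)/dt = -div f (Liouville equation)."""
            x, logrho = z
            return [f(x), -df(x)]

        def density_at(t_end, x0, logrho0):
            sol = solve_ivp(augmented, (0.0, t_end), [x0, logrho0], rtol=1e-8, atol=1e-10)
            return sol.y[0, -1], sol.y[1, -1]

        # initial condition X(0) ~ N(0.2, 0.05^2); follow a few characteristics to t = 3
        for x0 in [0.1, 0.2, 0.3]:
            x_t, logrho_t = density_at(3.0, x0, norm.logpdf(x0, 0.2, 0.05))
            print(f"x0={x0:.2f} -> x(3)={x_t:.3f}, density={np.exp(logrho_t):.3f}")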

  6. Maximum-likelihood estimation of circle parameters via convolution.

    Science.gov (United States)

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images.
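    A hedged sketch of the plain geometric circle fit that such convolution-based estimators approximate: an algebraic (Kasa-style) start refined by nonlinear least squares on the orthogonal distances, which is the usual maximum likelihood criterion under isotropic Gaussian noise (synthetic data; image handling is omitted):

        import numpy as np
        from scipy.optimize import least_squares

        def fit_circle(x, y):
            """Geometric circle fit: minimize the orthogonal distances to the circle."""
            def residuals(p):
                cx, cy, r = p
                return np.hypot(x - cx, y - cy) - r
            # algebraic (Kasa-style) initial guess via linear least squares
            A = np.column_stack([x, y, np.ones_like(x)])
            b = x ** 2 + y ** 2
            c = np.linalg.lstsq(A, b, rcond=None)[0]
            cx0, cy0 = c[0] / 2, c[1] / 2
            r0 = np.sqrt(c[2] + cx0 ** 2 + cy0 ** 2)
            return least_squares(residuals, [cx0, cy0, r0]).x

        rng = np.random.default_rng(5)
        theta = rng.uniform(0, 2 * np.pi, 200)
        x = 3.0 + 5.0 * np.cos(theta) + rng.normal(0, 0.2, 200)
        y = -1.0 + 5.0 * np.sin(theta) + rng.normal(0, 0.2, 200)
        print("center and radius:", np.round(fit_circle(x, y), 3))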

  7. A Comparison of FFD-based Nonrigid Registration and AAMs Applied to Myocardial Perfusion MRI

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Stegmann, Mikkel Bille; Ersbøll, Bjarne Kjær

    2006-01-01

    Little work has been done on comparing the performance of statistical model-based approaches and nonrigid registration algorithms. This paper deals with the qualitative and quantitative comparison of active appearance models (AAMs) and a nonrigid registration algorithm based on free-form deformations (FFDs). AAMs are known to be much faster than nonrigid registration algorithms. On the other hand nonrigid registration algorithms are independent of a training set as required to build an AAM. To obtain a further comparison of the two methods, they are both applied to automatically register multi... and qualitative and quantitative comparisons are provided. The quantitative comparison is obtained by an analysis of variance of landmark errors, i.e. point to point and point to curve errors. Even though the FFD-based approach does not include a training phase it gave similar accuracy as the AAMs in terms...

  8. Rate of strong consistency of the maximum quasi-likelihood estimator in quasi-likelihood nonlinear models

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of the strong consistency of the maximum quasi-likelihood estimation (MQLE) is obtained in QLNM. In an important case, this rate is O(n^{-1/2}(log log n)^{1/2}), which is just the rate of the LIL of partial sums for i.i.d. variables, and thus cannot be improved any further.

  9. STATISTICAL ANALYSIS OF STROKE BASED BENGALI SCRIPT IN COMPARISON WITH CURVATURE BASED TELUGU SCRIPT

    Directory of Open Access Journals (Sweden)

    Nadimapalli Ganapathi Raju,

    2011-04-01

    Full Text Available Indian languages are broadly divided into two categories: the northern languages are stroke based and the southern ones are cursive scripts. Though all Indian scripts are derived from the ancient Brahmi script, the language patterns in terms of lexical, structural, lexico-grammatical and morphological variations make every language complex in nature. The Unicode standard is used to represent all the Indic scripts. Analyzing the structure of the scripts and their structural togetherness is addressed in this paper for the Telugu and Bengali languages. A statistical analysis of a code point level comparison has been made between the two languages. This helps us in understanding the global occurrence of the code points and the structural and morphological variations. Corpora for both languages have been collected from six different categories of various Telugu and Bengali news groups to cover the entire language vocabulary. An indigenous tool has been developed using the Python scripting language for this purpose, which can be used for linguistic analysis of all Indian languages.
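    A small sketch of what a code point level comparison can look like in Python (this is not the authors' indigenous tool; the Unicode block ranges are the standard Bengali and Telugu blocks and the sample strings are placeholders for a real corpus):

        from collections import Counter

        BENGALI = range(0x0980, 0x0A00)     # Unicode Bengali block
        TELUGU = range(0x0C00, 0x0C80)      # Unicode Telugu block

        def codepoint_profile(text, block):
            """Relative frequency of each code point of the given block in `text`."""
            counts = Counter(ch for ch in text if ord(ch) in block)
            total = sum(counts.values())
            if total == 0:
                return {}
            return {f"U+{ord(ch):04X} {ch}": n / total for ch, n in counts.most_common()}

        bengali_sample = "বাংলা লিপি"        # placeholder corpus text
        telugu_sample = "తెలుగు లిపి"

        print(codepoint_profile(bengali_sample, BENGALI))
        print(codepoint_profile(telugu_sample, TELUGU))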

  10. Developing Computer Network Based on EIGRP Performance Comparison and OSPF

    Directory of Open Access Journals (Sweden)

    Lalu Zazuli Azhar Mardedi

    2015-09-01

    Full Text Available One of the computer network technologies that is growing rapidly at this time is the internet. In building networks, a routing mechanism is needed to integrate all the computers with a high degree of flexibility. Routing is a major factor in the performance of a network. With the many existing routing protocols, network administrators need a reference comparison of the performance of each type of routing protocol. Two such routing protocols are the Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF). This paper only focuses on the performance of both routing protocols on a hybrid network topology. The existing internet service has an average access speed of 8.0 KB/sec and 2 MB of bandwidth. A backbone network is used by two academies, the Academy of Information Management and Computer (AIMC) and the Academy of Secretary and Management (ASM), with 2041 clients, and this caused slow internet access. To solve the problem, an analysis and comparison of performance between the Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF) is carried out. The simulation software Cisco Packet Tracer 6.0.1 is used to obtain the values and to verify the results.

  11. A Comparison of Parametric, Semi-nonparametric, Adaptive, and Nonparametric Cointegration Tests

    NARCIS (Netherlands)

    Boswijk, H. Peter; Lucas, Andre; Taylor, Nick

    1999-01-01

    This paper provides an extensive Monte-Carlo comparison of several contemporary cointegration tests. Apart from the familiar Gaussian based tests of Johansen, we also consider tests based on non-Gaussian quasi-likelihoods. Moreover, we compare the performance of these parametric tests with tests that es...

  14. Likelihood ratio based verification in high dimensional spaces

    NARCIS (Netherlands)

    Hendrikse, Anne; Veldhuis, Raymond; Spreeuwers, Luuk

    2013-01-01

    The increase of the dimensionality of data sets often leads to problems during estimation, which are denoted as the curse of dimensionality. One of the problems of Second Order Statistics (SOS) estimation in high dimensional data is that the resulting covariance matrices are not full rank, so their i...

  16. Performance comparison of Zr-based and Bi-based erbium-doped fiber amplifiers.

    Science.gov (United States)

    Paul, M C; Harun, S W; Huri, N A D; Hamzah, A; Das, S; Pal, M; Bhadra, S K; Ahmad, H; Yoo, S; Kalita, M P; Boyland, A J; Sahu, J K

    2010-09-01

    In this Letter, we present a comprehensive comparison of the performance of a zirconia-based erbium-doped fiber amplifier (Zr-EDFA) and a bismuth-based erbium-doped fiber amplifier (Bi-EDFA). The experimental results reveal that a Zr-EDFA can achieve performance comparable to the conventional Bi-EDFA for C-band and L-band operations. With a combination of both Zr and Al, we could achieve a high erbium-doping concentration of about 2800 ppm (parts per million) in the glass host without any phase separation of the rare earths. The Zr-based erbium-doped fiber (Zr-EDF) was fabricated in a ternary glass host, a zirconia-yttria-aluminum codoped silica fiber, through a solution-doping technique along with modified chemical vapor deposition. At a high input signal of 0 dBm, a flat gain with an average value of 13 dB is obtained, with a gain variation of less than 2 dB within the wavelength region of 1530-1575 nm, using 2 m of Zr-EDF and 120 mW of pump power. The noise figures are less than 9.2 in this wavelength region. It was found that a Zr-EDFA can achieve an even better flat-gain value and bandwidth, as well as a lower noise figure, than the conventional Bi-EDFA.

  17. COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY

    Science.gov (United States)

    Villalon, Julio; Joshi, Anand A.; Toga, Arthur W.; Thompson, Paul M.

    2015-01-01

    Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic “Demons” algorithm. We performed an objective morphometric comparison, by using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effect sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future. PMID:26925198

  18. Conceptual Comparison of Population Based Metaheuristics for Engineering Problems

    Directory of Open Access Journals (Sweden)

    Oluwole Adekanmbi

    2015-01-01

    Full Text Available Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and unconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. The GDE3 metaheuristic modifies the selection process of basic differential evolution and extends the DE/rand/1/bin strategy for solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes.

  19. Finite Element Response Sensitivity Analysis: a comparison between force-based and Displacement-Based Frame Element Models

    OpenAIRE

    Barbato, Michele; Conte, J P

    2005-01-01

    This paper focuses on a comparison between displacement-based and force-based elements for static and dynamic response sensitivity analysis of frame-type structures. Previous research has shown that force-based frame elements are superior to classical displacement-based elements, enabling, at no significant additional computational cost, a drastic reduction in the number of elements required for a given level of accuracy in the simulated response. The present work shows that this advantage of...

  20. Exclusion probabilities and likelihood ratios with applications to kinship problems.

    Science.gov (United States)

    Slooten, Klaas-Jan; Egeland, Thore

    2014-05-01

    In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
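
    To make the two summary statistics concrete, the sketch below computes both for a single-locus paternity case under simplifying assumptions not taken from the paper: Hardy–Weinberg proportions, a known obligate paternal allele of frequency p, and a heterozygous alleged father. It only illustrates how the quantities are formed; it does not reproduce the paper's result about the expected LR, which is an average over cases.

```python
def rmne(p):
    """Probability that a random man is NOT excluded, i.e. carries at least
    one copy of the obligate paternal allele (frequency p), under HWE."""
    return 1.0 - (1.0 - p) ** 2

def paternity_lr_heterozygous_father(p):
    """Single-locus paternity index when the alleged father is heterozygous
    for the obligate paternal allele: LR = 1 / (2p)."""
    return 1.0 / (2.0 * p)

p = 0.1  # assumed allele frequency for the example
print(f"RMNE = {rmne(p):.4f}, 1/RMNE = {1 / rmne(p):.2f}, "
      f"LR = {paternity_lr_heterozygous_father(p):.2f}")
```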

  1. Hail the impossible: p-values, evidence, and likelihood.

    Science.gov (United States)

    Johansson, Tobias

    2011-04-01

    Significance testing based on p-values is standard in psychological research and teaching. Typically, research articles and textbooks present and use p as a measure of statistical evidence against the null hypothesis (the Fisherian interpretation), although the concepts and tools they employ are based on a completely different usage of p, as a tool for controlling long-term decision errors (the Neyman-Pearson interpretation). There are four major problems with using p as a measure of evidence, and these problems are often overlooked in the domain of psychology. First, p is uniformly distributed under the null hypothesis and can therefore never indicate evidence for the null. Second, p is conditioned solely on the null hypothesis and is therefore unsuited to quantify evidence, because evidence is always relative in the sense of being evidence for or against a hypothesis relative to another hypothesis. Third, p designates the probability of obtaining evidence (given the null), rather than the strength of evidence. Fourth, p depends on unobserved data and subjective intentions and therefore implies, given the evidential interpretation, that the evidential strength of observed data depends on things that did not happen and on subjective intentions. In sum, using p in the Fisherian sense as a measure of statistical evidence is deeply problematic, both statistically and conceptually, while the Neyman-Pearson interpretation is not about evidence at all. In contrast, the likelihood ratio escapes the above problems and is recommended as a tool for psychologists to represent the statistical evidence conveyed by obtained data relative to two hypotheses. © 2010 The Author. Scandinavian Journal of Psychology © 2010 The Scandinavian Psychological Associations.
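
    As a concrete illustration of the recommended alternative (not taken from the article), the snippet below computes a likelihood ratio for binomial data under two simple hypotheses; the data and the two hypothesised success probabilities are arbitrary placeholders.

```python
from scipy.stats import binom

k, n = 14, 20      # observed successes out of n trials (made-up data)
p0, p1 = 0.5, 0.7  # two simple hypotheses to be compared

# LR > 1 means the data favour H1 over H0; the magnitude quantifies relative support
lr = binom.pmf(k, n, p1) / binom.pmf(k, n, p0)
print(f"Likelihood ratio L(p={p1}) / L(p={p0}) = {lr:.2f}")
```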

  2. Constructing diagnostic likelihood: clinical decisions using subjective versus statistical probability.

    Science.gov (United States)

    Kinnear, John; Jackson, Ruth

    2017-07-01

    Although physicians are highly trained in the application of evidence-based medicine, and are assumed to make rational decisions, there is evidence that their decision making is prone to biases. One of the biases that has been shown to affect the accuracy of judgements is that of representativeness and base-rate neglect, where the saliency of a person's features leads to overestimation of their likelihood of belonging to a group. This results in the substitution of 'subjective' probability for statistical probability. This study examines clinicians' propensity to make estimations of subjective probability when presented with clinical information that is considered typical of a medical condition. The strength of the representativeness bias is tested by presenting choices in textual and graphic form. Understanding of statistical probability is also tested by omitting all clinical information. For the questions that included clinical information, 46.7% and 45.5% of clinicians made judgements of statistical probability, respectively. Where the question omitted clinical information, 79.9% of clinicians made a judgement consistent with statistical probability. There was a statistically significant difference in responses to the questions with and without representativeness information (χ2(1, n=254)=54.45, p<0.001): when representative clinical features were presented, clinicians were less likely to judge according to statistical probability. One of the causes of this representativeness bias may be the way clinical medicine is taught, where stereotypic presentations are emphasised in diagnostic decision making. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  3. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    Full Text Available This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the 𝒢0 law. This paper deals with amplitude data, so the 𝒢A0 distribution will be used. The literature reports that techniques for obtaining estimates of the parameters of the 𝒢A0 distribution (maximum likelihood, based on moments, and based on order statistics) require samples of hundreds, even thousands, of observations in order to yield sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternated optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to assess the quality of maximum likelihood estimators in small samples, and real data are successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.
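
    The alternated-optimization idea can be sketched generically: instead of maximizing the likelihood over both parameters at once, each parameter is updated in turn with the other held fixed. The example below applies this scheme to a placeholder two-parameter negative log-likelihood (a gamma model, not the 𝒢A0 law used in the paper) purely to illustrate the mechanics on a small sample.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gamma

rng = np.random.default_rng(1)
z = gamma.rvs(a=3.0, scale=2.0, size=30, random_state=rng)  # small simulated sample

def nll(shape, scale):
    """Negative log-likelihood of the placeholder gamma model."""
    return -np.sum(gamma.logpdf(z, a=shape, scale=scale))

# alternated (coordinate-wise) maximization of the likelihood
shape, scale = 1.0, 1.0
for _ in range(50):
    shape = minimize_scalar(lambda a: nll(a, scale), bounds=(1e-3, 50), method="bounded").x
    scale = minimize_scalar(lambda s: nll(shape, s), bounds=(1e-3, 50), method="bounded").x

print(f"alternated MLE: shape={shape:.3f}, scale={scale:.3f}")
```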

  4. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    CERN Document Server

    Hall, Alex

    2016-01-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...

  5. Supervisor Autonomy and Considerate Leadership Style are Associated with Supervisors’ Likelihood to Accommodate Back Injured Workers

    Science.gov (United States)

    McGuire, Connor; Kristman, Vicki L; Williams-Whitt, Kelly; Reguly, Paula; Shaw, William; Soklaridis, Sophie

    2015-01-01

    PURPOSE To determine the association between supervisors’ leadership style and autonomy and supervisors’ likelihood of supporting job accommodations for back-injured workers. METHODS A cross-sectional study of supervisors from Canadian and US employers was conducted using a web-based, self-report questionnaire that included a case vignette of a back-injured worker. Autonomy and two dimensions of leadership style (considerate and initiating structure) were included as exposures. The outcome, supervisors’ likeliness to support job accommodation, was measured with the Job Accommodation Scale. We conducted univariate analyses of all variables and bivariate analyses of the JAS score with each exposure and potential confounding factor. We used multivariable generalized linear models to control for confounding factors. RESULTS A total of 796 supervisors participated. Considerate leadership style (β= .012; 95% CI: .009–.016) and autonomy (β= .066; 95% CI: .025–.11) were positively associated with supervisors’ likelihood to accommodate after adjusting for appropriate confounding factors. An initiating structure leadership style was not significantly associated with supervisors’ likelihood to accommodate (β = .0018; 95% CI: −.0026–.0061) after adjusting for appropriate confounders. CONCLUSIONS Autonomy and a considerate leadership style were positively associated with supervisors’ likelihood to accommodate a back-injured worker. Providing supervisors with more autonomy over decisions of accommodation and developing their considerate leadership style may aid in increasing work accommodation for back-injured workers and preventing prolonged work disability. PMID:25595332

  6. A Bivariate Pseudo-Likelihood for Incomplete Longitudinal Binary Data with Nonignorable Non-monotone Missingness

    Science.gov (United States)

    Sinha, Sanjoy K.; Troxel, Andrea B.; Lipsitz, Stuart R.; Sinha, Debajyoti; Fitzmaurice, Garrett M.; Molenberghs, Geert; Ibrahim, Joseph G.

    2010-01-01

    Summary For analyzing longitudinal binary data with nonignorable and non-monotone missing responses, a full likelihood method is complicated algebraically, and often requires intensive computation, especially when there are many follow-up times. As an alternative, a pseudo-likelihood approach has been proposed in the literature under minimal parametric assumptions. This formulation only requires specification of the marginal distributions of the responses and missing data mechanism, and uses an independence working assumption. However, this estimator can be inefficient for estimating both time-varying and time-stationary effects under moderate to strong within-subject associations among repeated responses. In this article, we propose an alternative estimator, based on a bivariate pseudo-likelihood, and demonstrate in simulations that the proposed method can be much more efficient than the previous pseudo-likelihood obtained under the assumption of independence. We illustrate the method using longitudinal data on CD4 counts from two clinical trials of HIV-infected patients. PMID:21155748

  7. Using empirical likelihood to combine data: application to food risk assessment.

    Science.gov (United States)

    Crépet, Amélie; Harari-Kermadec, Hugo; Tressou, Jessica

    2009-03-01

    This article introduces an original methodology based on empirical likelihood, which aims at combining different food contamination and consumption surveys to provide risk managers with a risk measure, taking into account all the available information. This risk index is defined as the probability that exposure to a contaminant exceeds a safe dose. It is naturally expressed as a nonlinear functional of the different consumption and contamination distributions, more precisely as a generalized U-statistic. This nonlinearity and the huge size of the data sets make direct computation of the problem unfeasible. Using linearization techniques and incomplete versions of the U-statistic, a tractable "approximated" empirical likelihood program is solved yielding asymptotic confidence intervals for the risk index. An alternative "Euclidean likelihood program" is also considered, replacing the Kullback-Leibler distance involved in the empirical likelihood by the Euclidean distance. Both methodologies are tested on simulated data and applied to assess the risk due to the presence of methyl mercury in fish and other seafood.
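
    As a rough illustration of the risk index itself (not of the empirical likelihood machinery), the sketch below estimates P(exposure > safe dose) with an incomplete U-statistic: instead of evaluating every contamination–consumption pair, it samples a manageable number of pairs at random. All distributions, the safe dose, and the exposure formula (contamination level times consumption) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# invented survey data: contamination levels (mg/kg) and weekly consumption (kg)
contamination = rng.lognormal(mean=-1.0, sigma=0.6, size=5_000)
consumption = rng.gamma(shape=2.0, scale=0.15, size=8_000)
safe_dose = 0.7  # assumed tolerable weekly intake (mg)

# incomplete U-statistic: sample B random (i, j) pairs instead of all n*m pairs
B = 200_000
i = rng.integers(len(contamination), size=B)
j = rng.integers(len(consumption), size=B)
exposure = contamination[i] * consumption[j]    # assumed exposure model
risk_index = np.mean(exposure > safe_dose)      # estimate of P(exposure > safe dose)

print(f"estimated risk index: {risk_index:.4f}")
```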

  8. Likelihood analysis of spatial capture-recapture models for stratified or class structured populations

    Science.gov (United States)

    Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.

    2015-01-01

    We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
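
    The key device, handling individuals whose class (here, sex) is unobserved, amounts to summing each such individual's likelihood contribution over the possible classes, weighted by the class probabilities. The toy function below shows only that marginalization step for generic per-class likelihood contributions; it is not the SCR encounter model from the paper, and all names and numbers are illustrative.

```python
import numpy as np

def individual_log_lik(lik_by_class, class_probs, observed_class=None):
    """Log-likelihood contribution of one individual.

    lik_by_class  : likelihood contribution computed under each class
    class_probs   : probability of each class (e.g. a sex-ratio parameter)
    observed_class: index of the known class, or None if class is missing
    """
    if observed_class is not None:
        return np.log(class_probs[observed_class] * lik_by_class[observed_class])
    # class missing: marginalize over the possible classes
    return np.log(np.sum(class_probs * lik_by_class))

# illustrative numbers: encounter-history likelihood under female / male parameters
print(individual_log_lik(np.array([0.012, 0.030]), np.array([0.5, 0.5])))                      # sex unknown
print(individual_log_lik(np.array([0.012, 0.030]), np.array([0.5, 0.5]), observed_class=1))    # known male
```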

  9. A comparison of liquid-based cytology with conventional cytology.

    Science.gov (United States)

    Celik, C; Gezginç, K; Toy, H; Findik, S; Yilmaz, O

    2008-02-01

    The aim was to evaluate 2 methods of cytologic screening to detect abnormalities of the cervical epithelium. This study with 3 groups of women was performed at Selcuk University Meram Medical School between January 2004 and March 2006. In one group (paired sample for specimen collection) women were screened with conventional cytology; in another group (paired sample for specimen collection) they were screened with liquid-based cytology; and in the third group (split sample for specimen collection) they were screened by both methods. The rate of unsatisfactory results was lower in the liquid-based than in the conventional cytology group (6.1% vs. 2.6%). The detection rate of epithelial abnormalities was higher with the liquid-based method, but the difference was not statistically significant. Also, no statistically significant differences between liquid-based and conventional cytology were observed in the detection of other epithelial abnormalities (P>0.05). The liquid-based and conventional cytology methods were found to be equivalent in the detection of cervical epithelial abnormalities.

  10. Tradespace Assessment: Thermal Strain Modeling Comparison Of Multiple Clothing Configurations Based On Different Environmental Conditions

    Science.gov (United States)

    2017-02-01

    Only front-matter and list-of-figures fragments of this report are available. The investigators adhered to the policies for protection of human subjects prescribed in Army Regulation 70-25 and SECNAVINST 3900.39D. The report compares predicted times to reach a critical core body temperature (Tc) of 38.5°C for multiple clothing configurations, including undergarment versus no-undergarment configurations and ACS-based configurations, in three environmental conditions.

  11. Comparison of Bobath based and movement science based treatment for stroke: a randomised controlled trial.

    Science.gov (United States)

    van Vliet, P M; Lincoln, N B; Foxall, A

    2005-04-01

    Bobath based (BB) and movement science based (MSB) physiotherapy interventions are widely used for patients after stroke. There is little evidence to suggest which is most effective. This single-blind randomised controlled trial evaluated the effect of these treatments on movement abilities and functional independence. A total of 120 patients admitted to a stroke rehabilitation ward were randomised into two treatment groups to receive either BB or MSB treatment. Primary outcome measures were the Rivermead Motor Assessment and the Motor Assessment Scale. Secondary measures assessed functional independence, walking speed, arm function, muscle tone, and sensation. Measures were performed by a blinded assessor at baseline, and then at 1, 3, and 6 months after baseline. Analysis of serial measurements was performed to compare outcomes between the groups by calculating the area under the curve (AUC) and inserting AUC values into Mann-Whitney U tests. Comparison between groups showed no significant difference for any outcome measures. Significance values for the Rivermead Motor Assessment ranged from p = 0.23 to p = 0.97 and for the Motor Assessment Scale from p = 0.29 to p = 0.87. There were no significant differences in movement abilities or functional independence between patients receiving a BB or an MSB intervention. Therefore the study did not show that one approach was more effective than the other in the treatment of stroke patients.
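
    The serial-measurement analysis described, summarizing each patient's repeated scores as an area under the curve and comparing the groups with a Mann-Whitney U test, can be sketched as below with invented data; the time points and scores are placeholders, not trial data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

months = np.array([0, 1, 3, 6])  # assessment times: baseline, 1, 3, 6 months

def auc_per_patient(scores):
    """Trapezoidal area under each patient's score-versus-time curve."""
    return np.trapz(scores, months, axis=1)

rng = np.random.default_rng(3)
group_bb = rng.normal(loc=20, scale=5, size=(15, 4))    # invented motor-assessment scores
group_msb = rng.normal(loc=21, scale=5, size=(15, 4))

u, p = mannwhitneyu(auc_per_patient(group_bb), auc_per_patient(group_msb))
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```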

  12. Distributed hydrological models: comparison between TOPKAPI, a physically based model and TETIS, a conceptually based model

    Science.gov (United States)

    Ortiz, E.; Guna, V.

    2009-04-01

    The present work carries out a comparison between two distributed hydrological models, TOPKAPI (Ciarapica and Todini, 1998; Todini and Ciarapica, 2001) and TETIS (Vélez, J. J.; Vélez, J. I. and Francés, F., 2002), computing the hydrological response to the same storm events. The first model is physically based and the second one is conceptually based. The analysis was performed on the 21.4 km2 Goodwin Creek watershed, located in Panola County, Mississippi. This watershed, extensively monitored by the Agricultural Research Service (ARS) National Sediment Laboratory (NSL), was chosen because it offers a complete database compiling precipitation (16 rain gauges), runoff (6 discharge stations) and GIS data. Three storm events were chosen to evaluate the performance of the two models: the first one was used to calibrate the models, and the other two to validate them. Both models produced a satisfactory hydrological response in both the calibration and validation events. While the TOPKAPI model required no real calibration, owing to its good performance with modal parameter values derived from watershed characteristics, the TETIS model required a prior automatic calibration. This calibration was carried out using the observed hydrograph in order to adjust the model's 9 correction factors. Keywords: TETIS, TOPKAPI, distributed models, hydrological response, ungauged basins.

  13. Code Syntax-Comparison Algorithm Based on Type-Redefinition-Preprocessing and Rehash Classification

    Directory of Open Access Journals (Sweden)

    Baojiang Cui

    2011-08-01

    Full Text Available The code comparison technology plays an important role in the fields of software security protection and plagiarism detection. Nowadays, there are mainly five approaches to plagiarism detection: file-attribute-based, text-based, token-based, syntax-based, and semantic-based. The first three approaches have their own limitations, while the syntax-based technique suffers from limited detection ability and low efficiency, so none of these approaches meets the requirements of large-scale software plagiarism detection. Building on our prior research, we propose an algorithm for type-redefinition plagiarism detection, which can detect simple type redefinition, repeating-pattern redefinition, and redefinition of pointer types. This paper also proposes a code syntax-comparison algorithm based on rehash classification, which enhances the node storage structure of the syntax tree and greatly improves efficiency.
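
    To indicate the flavour of hash-based code comparison (this is a generic illustration, not the rehash-classification algorithm or the type-redefinition preprocessing proposed in the paper), the sketch below crudely normalizes identifiers in two code fragments, hashes token k-grams, and reports the overlap of the resulting hash sets.

```python
import hashlib
import re

def token_kgram_hashes(code, k=4):
    """Hash overlapping k-grams of a crudely normalized token stream."""
    tokens = re.findall(r"[A-Za-z_]\w*|\S", code)
    # crude normalization: map every identifier-like token to a single symbol
    norm = ["ID" if re.match(r"[A-Za-z_]\w*$", t) else t for t in tokens]
    grams = (" ".join(norm[i:i + k]) for i in range(max(len(norm) - k + 1, 1)))
    return {hashlib.md5(g.encode()).hexdigest() for g in grams}

def similarity(code_a, code_b):
    a, b = token_kgram_hashes(code_a), token_kgram_hashes(code_b)
    return len(a & b) / max(len(a | b), 1)  # Jaccard overlap of the hash sets

print(similarity("int sum=a+b;", "int total = x + y ;"))  # 1.0: identical normalized structure
```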

  14. Maximum likelihood for genome phylogeny on gene content.

    Science.gov (United States)

    Zhang, Hongmei; Gu, Xun

    2004-01-01

    With the rapid growth of entire-genome data, reconstructing the phylogenetic relationship among different genomes has become a hot topic in comparative genomics. The maximum likelihood approach is one of the various approaches and has been very successful. However, no application to genome tree-making has been reported, mainly owing to the lack of an analytical form of a probability model and/or the heavy computational burden. In this paper we studied the mathematical structure of a stochastic model of genome evolution, and then developed a simplified likelihood function for observing a specific phylogenetic pattern in the four-genome situation using gene content information. We use the maximum likelihood approach to identify phylogenetic trees. Simulation results indicate that the proposed method works well and can identify trees with a high correction rate. Application to real data provides satisfactory results. The approach developed in this paper can serve as the basis for reconstructing phylogenies of more than four genomes.

  15. Joint analysis of prevalence and incidence data using conditional likelihood.

    Science.gov (United States)

    Saarela, Olli; Kulathinal, Sangita; Karvanen, Juha

    2009-07-01

    Disease prevalence is the combined result of duration, disease incidence, case fatality, and other mortality. If information is available on all these factors, and on fixed covariates such as genotypes, prevalence information can be utilized in the estimation of the effects of the covariates on disease incidence. Study cohorts that are recruited as cross-sectional samples and subsequently followed up for disease events of interest produce both prevalence and incidence information. In this paper, we make use of both types of information using a likelihood, which is conditioned on survival until the cross section. In a simulation study making use of real cohort data, we compare the proposed conditional likelihood method to a standard analysis where prevalent cases are omitted and the likelihood expression is conditioned on healthy status at the cross section.

  16. Penalized maximum likelihood estimation and variable selection in geostatistics

    CERN Document Server

    Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919

    2012-01-01

    We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE_T) and its one-step sparse estimation (OSE_T). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE_T and OSE_T using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...

  17. Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning of RBMs

    CERN Document Server

    Desjardins, Guillaume; Bengio, Yoshua

    2010-01-01

    Restricted Boltzmann Machines (RBM) have attracted a lot of attention of late, as one of the principal building blocks of deep networks. Training RBMs remains problematic, however, because of the intractability of their partition function. The maximum likelihood gradient requires a very robust sampler which can accurately sample from the model despite the loss of ergodicity often incurred during learning. While using Parallel Tempering in the negative phase of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a trade-off between computational complexity and high ergodicity, and requires careful hand-tuning of the temperatures. In this paper, we show that this trade-off is unnecessary. The choice of optimal temperatures can be automated by minimizing average return time (a concept first proposed by [Katzgraber et al., 2006]) while chains can be spawned dynamically, as needed, thus minimizing the computational overhead. We show, on a synthetic dataset, that this results in better likelihood ...
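
    The core parallel-tempering move referred to above is a swap between adjacent temperature chains accepted with a Metropolis ratio. The fragment below shows only that swap rule for a generic energy function; it is a schematic piece, not the adaptive temperature selection or the RBM sampler from the paper, and all names are illustrative.

```python
import numpy as np

def maybe_swap(states, betas, energy, i, rng=np.random.default_rng()):
    """Propose swapping the states of chains i and i+1 (inverse temperatures
    betas[i] and betas[i+1]) and accept with the standard Metropolis ratio
    min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    e_i, e_j = energy(states[i]), energy(states[i + 1])
    log_accept = (betas[i] - betas[i + 1]) * (e_i - e_j)
    if np.log(rng.random()) < log_accept:
        states[i], states[i + 1] = states[i + 1], states[i]
        return True
    return False

# usage with a toy quadratic energy and two chains:
chain_states = [np.array([0.5, -0.2]), np.array([1.8, 0.9])]
print(maybe_swap(chain_states, betas=[1.0, 0.5], energy=lambda s: 0.5 * s @ s, i=0))
```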

  18. Penalized maximum likelihood estimation for generalized linear point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard

    2010-01-01

    A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces, we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper by extensions to multivariate and additive model specifications. The methods are implemented in the R package ppstat.

  19. How to Maximize the Likelihood Function for a DSGE Model

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller

    This paper extends two optimization routines to deal with objective functions for DSGE models. The optimization routines are i) a version of Simulated Annealing developed by Corana, Marchesi & Ridella (1987), and ii) the evolutionary algorithm CMA-ES developed by Hansen, Müller & Koumoutsakos (2003). Following these extensions, we examine the ability of the two routines to maximize the likelihood function for a sequence of test economies. Our results show that the CMA-ES routine clearly outperforms Simulated Annealing in its ability to find the global optimum and in efficiency. With 10 unknown structural parameters in the likelihood function, the CMA-ES routine finds the global optimum in 95% of our test economies compared to 89% for Simulated Annealing. When the number of unknown structural parameters in the likelihood function increases to 20 and 35, then the CMA-ES routine finds the global...
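
    For orientation only, here is a bare-bones simulated annealing loop maximizing a log-likelihood; it is far simpler than the Corana et al. variant or the CMA-ES routine examined in the paper, and the target (a normal-model log-likelihood with invented data) is a stand-in for a DSGE likelihood.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
data = rng.normal(loc=1.5, scale=0.8, size=200)  # invented data

def log_lik(theta):
    """Log-likelihood of a normal model; theta = (mu, log_sigma)."""
    mu, log_sigma = theta
    return np.sum(norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)))

theta = np.zeros(2)
best, best_ll = theta.copy(), log_lik(theta)
temp = 1.0
for step in range(5000):
    proposal = theta + rng.normal(scale=0.1, size=2)
    delta = log_lik(proposal) - log_lik(theta)
    if delta > 0 or rng.random() < np.exp(delta / temp):  # accept uphill moves, sometimes downhill
        theta = proposal
        if log_lik(theta) > best_ll:
            best, best_ll = theta.copy(), log_lik(theta)
    temp *= 0.999                                          # geometric cooling schedule

print(f"best mu={best[0]:.3f}, sigma={np.exp(best[1]):.3f}, logL={best_ll:.1f}")
```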

  20. Comparison of features response in texture-based iris segmentation

    CSIR Research Space (South Africa)

    Bachoo, A

    2009-03-01

    Full Text Available the Fisher linear discriminant and the iris region of interest is extracted. Four texture description methods are compared for segmenting iris texture using a region based pattern classification approach: Grey Level Co-occurrence Matrix (GLCM), Discrete...