How to Maximize the Likelihood Function for a DSGE Model
Andreasen, Martin Møller
This paper extends two optimization routines to deal with objective functions for DSGE models. The optimization routines are i) a version of Simulated Annealing developed by Corana, Marchesi & Ridella (1987), and ii) the evolutionary algorithm CMA-ES developed by Hansen, Müller & Koumoutsakos (2003). Following these extensions, we examine the ability of the two routines to maximize the likelihood function for a sequence of test economies. Our results show that the CMA-ES routine clearly outperforms Simulated Annealing in its ability to find the global optimum and in efficiency. With 10 unknown structural parameters in the likelihood function, the CMA-ES routine finds the global optimum in 95% of our test economies compared to 89% for Simulated Annealing. When the number of unknown structural parameters in the likelihood function increases to 20 and 35, then the CMA-ES routine finds the global...
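Below is a minimal sketch of the kind of global optimization these routines perform, using SciPy's dual_annealing (a Simulated Annealing variant) on a deliberately multimodal toy objective. This is an illustration only: the surface here is hypothetical and stands in for a DSGE likelihood, and it is not the authors' implementation of either routine.

```python
# Toy illustration of global likelihood maximization by annealing
# (a stand-in for the paper's routines, not their implementation).
import numpy as np
from scipy.optimize import dual_annealing

def neg_log_likelihood(theta):
    # Hypothetical multimodal surface standing in for a DSGE likelihood:
    # many local optima, one global optimum at theta = 0.
    return np.sum(theta**2 - 10.0 * np.cos(2.0 * np.pi * theta) + 10.0)

bounds = [(-5.0, 5.0)] * 10  # 10 unknown "structural parameters"
result = dual_annealing(neg_log_likelihood, bounds, seed=0)
print(result.x, result.fun)  # global optimum is theta = 0, value 0
```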
Maximizing Friend-Making Likelihood for Social Activity Organization
2015-05-22
the interplay of the group size, the constraint on existing friendships and the objective function on the likelihood of friend making. We prove that...social networks (OSNs), e.g., Facebook, Meetup, and Skout, more and more people initiate friend gatherings or group activities via these OSNs. For...example, more than 16 million events are created on Facebook each month to organize various kinds of activities, and more than 500 thousand face...
Maximum Likelihood Learning of Conditional MTE Distributions
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
We describe a procedure for inducing conditional densities within the mixtures of truncated exponentials (MTE) framework. We analyse possible conditional MTE specifications and propose a model selection scheme, based on the BIC score, for partitioning the domain of the conditioning variables. Finally, experimental results demonstrate the applicability of the learning procedure as well as the expressive power of the conditional MTE distribution.
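For reference, the BIC score used in the model selection scheme trades fit against complexity, BIC = k*ln(n) - 2*ln(L), with lower values preferred. A generic sketch of such a comparison, not the paper's MTE-specific scoring:

```python
# Generic BIC computation for comparing candidate models
# (illustrative only; the paper scores MTE partitions in this spirit).
import numpy as np
from scipy import stats

def bic(log_likelihood, n_params, n_obs):
    """BIC = k*ln(n) - 2*ln(L); lower is better."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=500)

# Candidate 1: Gaussian (2 parameters), fitted by maximum likelihood.
mu, sigma = x.mean(), x.std()
ll_gauss = stats.norm.logpdf(x, mu, sigma).sum()

# Candidate 2: Laplace (2 parameters).
loc, scale = np.median(x), np.abs(x - np.median(x)).mean()
ll_laplace = stats.laplace.logpdf(x, loc, scale).sum()

print(bic(ll_gauss, 2, x.size), bic(ll_laplace, 2, x.size))
```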
Conditional likelihood inference in generalized linear mixed models.
Sartori, Nicola; Severini, T.A.
2002-01-01
Consider a generalized linear model with a canonical link function, containing both fixed and random effects. In this paper, we consider inference about the fixed effects based on a conditional likelihood function. It is shown that this conditional likelihood function is valid for any distribution of the random effects and, hence, the resulting inferences about the fixed effects are insensitive to misspecification of the random effects distribution. Inferences based on the conditional likelih...
Joint analysis of prevalence and incidence data using conditional likelihood.
Saarela, Olli; Kulathinal, Sangita; Karvanen, Juha
2009-07-01
Disease prevalence is the combined result of duration, disease incidence, case fatality, and other mortality. If information is available on all these factors, and on fixed covariates such as genotypes, prevalence information can be utilized in the estimation of the effects of the covariates on disease incidence. Study cohorts that are recruited as cross-sectional samples and subsequently followed up for disease events of interest produce both prevalence and incidence information. In this paper, we make use of both types of information using a likelihood, which is conditioned on survival until the cross section. In a simulation study making use of real cohort data, we compare the proposed conditional likelihood method to a standard analysis where prevalent cases are omitted and the likelihood expression is conditioned on healthy status at the cross section.
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored, and the expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and, assuming the time-to-event to follow the Weibull distribution, derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence.
Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood
Yunquan Song
2014-01-01
Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions in which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained by all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide some data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. Also, a real-life example is carried out to illustrate the new methodology.
Association studies with imputed variants using expectation-maximization likelihood-ratio tests.
Kuan-Chieh Huang
Genotype imputation has become standard practice in modern genetic studies. As sequencing-based reference panels continue to grow, increasingly more markers are being well or better imputed but, at the same time, even more markers with relatively low minor allele frequency are being imputed with low imputation quality. Here, we propose new methods that incorporate imputation uncertainty for downstream association analysis, with improved power and/or computational efficiency. We consider two scenarios: (I) when posterior probabilities of all potential genotypes are estimated; and (II) when only the one-dimensional summary statistic, imputed dosage, is available. For scenario I, we have developed an expectation-maximization likelihood-ratio test (EM-LRT) for association based on posterior probabilities. When only imputed dosages are available (scenario II), we first sample the genotype probabilities from their posterior distribution given the dosages, and then apply the EM-LRT on the sampled probabilities. Our simulations show that the type I error of the proposed EM-LRT methods under both scenarios is protected. Compared with existing methods, EM-LRT-Prob (for scenario I) offers optimal statistical power across a wide spectrum of MAF and imputation quality. EM-LRT-Dose (for scenario II) achieves a similar level of statistical power as EM-LRT-Prob and outperforms the standard Dosage method, especially for markers with relatively low MAF or imputation quality. Applications to two real data sets, the Cebu Longitudinal Health and Nutrition Survey study and the Women's Health Initiative Study, provide further support to the validity and efficiency of our proposed methods.
Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood
Olli Saarela
2012-01-01
Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.
Giordan, M.; Vaggi, F.; Wehrens, R.
2017-01-01
The Levenberg–Marquardt algorithm is a flexible iterative procedure used to solve non-linear least-squares problems. In this work, we study how a class of possible adaptations of this procedure can be used to solve maximum-likelihood problems when the underlying distributions are in the exponential
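A brief sketch of a Levenberg–Marquardt least-squares fit via SciPy's least_squares with method='lm'. The exponential-decay model and the data are hypothetical, and this shows the standard procedure rather than the paper's maximum-likelihood adaptation:

```python
# Levenberg-Marquardt for a non-linear least-squares fit
# (generic illustration; the paper adapts LM to maximum-likelihood problems).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 50)
y = 2.5 * np.exp(-1.3 * t) + 0.05 * rng.normal(size=t.size)  # synthetic data

def residuals(params):
    a, b = params
    return a * np.exp(-b * t) - y

fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
print(fit.x)  # approximately [2.5, 1.3]
```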
The VGAM Package for Capture-Recapture Data Using the Conditional Likelihood
Thomas W. Yee
2015-06-01
It is well known that using individual covariate information (such as body weight or gender) to model heterogeneity in capture-recapture (CR) experiments can greatly enhance inferences on the size of a closed population. Since individual covariates are only observable for captured individuals, complex conditional likelihood methods are usually required and these do not constitute a standard generalized linear model (GLM) family. Modern statistical techniques such as generalized additive models (GAMs), which allow a relaxing of the linearity assumptions on the covariates, are readily available for many standard GLM families. Fortunately, a natural statistical framework for maximizing conditional likelihoods is available in the Vector GLM and Vector GAM classes of models. We present several new R functions (implemented within the VGAM package) specifically developed to allow the incorporation of individual covariates in the analysis of closed population CR data using a GLM/GAM-like approach and the conditional likelihood. As a result, a wide variety of practical tools are now readily available in the VGAM object oriented framework. We discuss and demonstrate their advantages, features and flexibility using the new VGAM CR functions on several examples.
Lemaire, H.; Barat, E.; Carrel, F.; Dautremer, T.; Dubos, S.; Limousin, O.; Montagu, T.; Normand, S.; Schoepff, V. [CEA, Gif-sur-Yvette, F-91191 (France); Amgarou, K.; Menaa, N. [CANBERRA, 1, rue des Herons, Saint Quentin en Yvelines, F-78182 (France); Angelique, J.-C. [LPC, 6, boulevard du Marechal Juin, F-14050 (France); Patoz, A. [CANBERRA, 10, route de Vauzelles, Loches, F-37600 (France)
2015-07-01
In this work, we tested maximum likelihood expectation-maximization (MLEM) algorithms optimized for gamma imaging applications on two recent coded mask gamma cameras. We took advantage of the respective characteristics of the GAMPIX and Caliste HD-based gamma cameras: noise reduction thanks to the mask/anti-mask procedure but limited energy resolution for GAMPIX, and high energy resolution for Caliste HD. One of our short-term perspectives is to test MAPEM algorithms integrating prior values, specific to the gamma imaging context, for the data to be reconstructed. (authors)
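For context, the textbook MLEM update multiplies the current image estimate by back-projected ratios of measured to predicted counts. A bare-bones sketch with a hypothetical random system matrix; the detector-specific modeling of GAMPIX or Caliste HD is omitted:

```python
# Bare-bones MLEM iteration for emission image reconstruction
# (textbook form; the paper's detector-specific modeling is omitted).
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(64, 16))   # hypothetical system matrix
x_true = rng.uniform(0.5, 2.0, size=16)    # hypothetical source image
y = rng.poisson(A @ x_true)                # Poisson-distributed counts

x = np.ones(16)                            # non-negative initial estimate
sensitivity = A.sum(axis=0)                # A^T 1
for _ in range(200):
    expected = A @ x
    x *= (A.T @ (y / np.maximum(expected, 1e-12))) / sensitivity

print(np.round(x, 2))  # approaches x_true as iterations increase
```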
Defining Conditions for Maximizing Bioreduction of Uranium
David C. White; Aaron D. Peacock; Yun-Juan Chang; Roland Geyer; Philip E. Long; Jonathan D. Istok; Amanda N.; R. Todd Anderson; Dora Ogles
2004-03-17
Correlations between modifying electron donor and acceptor accessibility, the in-situ microbial community, and bioreduction of uranium at the FRC and UMTRA research sites indicated that significant modifications in the rate, amount, and, by inference, the potential stability of immobilized uranium are feasible in these environments. The in-situ microbial community at these sites was assessed with a combination of lipid and real-time molecular techniques providing quantitative insights into the effects of electron donor and acceptor manipulations. Increased donor amendment (9 mM in 2003 vs. 3 mM in 2002) at the Old Rifle site resulted in the stimulation of anaerobic conditions downgradient of the injection gallery. Biomass within the test plot increased relative to the control well at 17 feet. Q-PCR specific for IRB/SRB showed increased copy numbers within the test plot and was highest at the injection gallery. Q-PCR specific for Geobacter sp. showed increased copy numbers within the test plot but further downgradient from the injection gallery than the SRB/IRB. DNA and lipid analyses confirm changes in the microbial community structure due to donor addition. See also the PNNL (Long) and UMASS (Anderson) posters for more information about this site.
Groups Satisfying the Maximal Condition on Non-modular Subgroups
Maria De Falco; Carmela Musella
2005-01-01
In this paper, (generalized) soluble groups for which the set of non-modular subgroups satisfies the maximal condition, and groups for which the set of non-permutable subgroups satisfies the same property, are classified.
Conditional Likelihood Estimators for Hidden Markov Models and Stochastic Volatility Models
Genon-Catalot, Valentine; Jeantheau, Thierry; Laredo, Catherine
2003-01-01
This paper develops a new contrast process for parametric inference of general hidden Markov models, when the hidden chain has a non-compact state space. This contrast is based on the conditional likelihood approach, often used for ARCH-type models. We prove the strong consistency of the conditional likelihood estimators under appropriate conditions. The method is applied to the Kalman filter (for which this contrast and the exact likelihood lead to asymptotically equivalent estimat...
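For context, the "exact likelihood" of a linear-Gaussian state-space model referred to here comes from the Kalman filter's prediction-error decomposition. A compact scalar sketch, with hypothetical parameter values:

```python
# Exact log-likelihood of a scalar linear-Gaussian state-space model
# via the Kalman filter prediction-error decomposition (illustrative).
import numpy as np

def kalman_loglik(y, phi, q, r):
    """x_t = phi*x_{t-1} + N(0,q);  y_t = x_t + N(0,r)."""
    m, p = 0.0, q / (1.0 - phi**2)  # stationary initial state
    ll = 0.0
    for obs in y:
        # Predict one step ahead.
        m, p = phi * m, phi**2 * p + q
        # Innovation and its variance.
        v, s = obs - m, p + r
        ll += -0.5 * (np.log(2.0 * np.pi * s) + v**2 / s)
        # Measurement update.
        k = p / s
        m, p = m + k * v, (1.0 - k) * p
    return ll

rng = np.random.default_rng(3)
x, y = 0.0, []
for _ in range(500):
    x = 0.8 * x + rng.normal(scale=np.sqrt(0.5))
    y.append(x + rng.normal(scale=1.0))
print(kalman_loglik(np.array(y), phi=0.8, q=0.5, r=1.0))
```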
Conditional likelihood inference in a case-cohort design: an application to haplotype analysis.
Saarela, Olli; Kulathinal, Sangita
2007-01-01
Under the setting of a case-cohort design, covariate values are ascertained for a smaller subgroup of the original study cohort, which typically is a representative sample from a population. Individuals with a specific event outcome are selected to the second stage study group as cases and an additional subsample is selected to act as a control group. We carry out analysis of such a design using conditional likelihood, where the likelihood expression is conditioned on the ascertainment to the second stage study group. Such a likelihood expression involves the probability of ascertainment, which needs to be expressed in terms of the model parameters. We present examples of conditional likelihoods for models for categorical response and time-to-event response. We show that the conditional likelihood inference leads to valid estimation of population parameters. Our application considers joint estimation of haplotype-event association parameters and population haplotype frequencies based on SNP genotype data collected under a case-cohort design.
Closed form maximum likelihood estimator of conditional random fields
Zhu, Zhemin; Hiemstra, Djoerd; Apers, Peter M.G.; Wombacher, Andreas
2013-01-01
Training Conditional Random Fields (CRFs) can be very slow for big data. In this paper, we present a new training method for CRFs called Empirical Training which is motivated by the concept of co-occurrence rate. We show that the standard training (unregularized) can have many maximum likeliho...
Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J
2016-02-07
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm(2)). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
Second order pseudo-maximum likelihood estimation and conditional variance misspecification
Lejeune, Bernard
1997-01-01
In this paper, we study the behavior of second order pseudo-maximum likelihood estimators under conditional variance misspecification. We determine sufficient and essentially necessary conditions for such an estimator to be, regardless of the conditional variance (mis)specification, consistent for the mean parameters when the conditional mean is correctly specified. These conditions imply that, even if mean and variance parameters vary independently, standard PML2 estimators are generally not...
Maximum likelihood PSD estimation for speech enhancement in reverberant and noisy conditions
Kuklasinski, Adam; Doclo, Simon; Jensen, Jesper
2016-01-01
We propose a novel Power Spectral Density (PSD) estimator for multi-microphone systems operating in reverberant and noisy conditions. The estimator is derived using the maximum likelihood approach and is based on a blocked and pre-whitened additive signal model. The intended application... the difference between algorithms was found to be statistically significant only in some of the experimental conditions.
On the Loss of Information in Conditional Maximum Likelihood Estimation of Item Parameters.
Eggen, Theo J. H. M.
2000-01-01
Shows that the concept of F-information, a generalization of Fisher information, is a useful tool for evaluating the loss of information in conditional maximum likelihood (CML) estimation. With the F-information concept it is possible to investigate the conditions under which there is no loss of information in CML estimation and to quantify a loss…
A conditional likelihood is required to estimate the selection coefficient in ancient DNA
Valleriani, Angelo
2016-01-01
Time-series of allele frequencies are a useful and unique set of data to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only one single such trajectory is available and the coverage of the fitness landscape is very limited. In fact, one single trajectory is more representative of a process conditioned both in the initial and in the final condition than of a process free to end anywhere. Based on the Moran model of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient also when allele frequencies are close to fixation ...
A conditional likelihood is required to estimate the selection coefficient in ancient DNA
Valleriani, Angelo
2016-08-01
Time-series of allele frequencies are a useful and unique set of data to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only one single such trajectory is available and the coverage of the fitness landscape is very limited. In fact, one single trajectory is more representative of a process conditioned both in the initial and in the final condition than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient also when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable also when it is not correct.
Chen, Jinbo; Rodriguez, Carmen
2007-12-01
Genetic epidemiologists routinely assess disease susceptibility in relation to haplotypes, that is, combinations of alleles on a single chromosome. We study statistical methods for inferring haplotype-related disease risk using single nucleotide polymorphism (SNP) genotype data from matched case-control studies, where controls are individually matched to cases on some selected factors. Assuming a logistic regression model for haplotype-disease association, we propose two conditional likelihood approaches that address the issue that haplotypes cannot be inferred with certainty from SNP genotype data (phase ambiguity). One approach is based on the likelihood of disease status conditioned on the total number of cases, genotypes, and other covariates within each matching stratum, and the other is based on the joint likelihood of disease status and genotypes conditioned only on the total number of cases and other covariates. The joint-likelihood approach is generally more efficient, particularly for assessing haplotype-environment interactions. Simulation studies demonstrated that the first approach was more robust to model assumptions on the diplotype distribution conditioned on environmental risk variables and matching factors in the control population. We applied the two methods to analyze a matched case-control study of prostate cancer.
Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi
2012-12-20
Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when the batch effect is additive and the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariate yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we also examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study.
Chen, Jun; Xie, Jichun; Li, Hongzhe
2011-03-01
Gene co-expressions have been widely used in the analysis of microarray gene expression data. However, the co-expression patterns between two genes can be mediated by cellular states, as reflected by expression of other genes, single nucleotide polymorphisms, and activity of protein kinases. In this article, we introduce a bivariate conditional normal model for identifying the variables that can mediate the co-expression patterns between two genes. Based on this model, we introduce a likelihood ratio (LR) test and a penalized likelihood procedure for identifying the mediators that affect gene co-expression patterns. We propose an efficient computational algorithm based on iterative reweighted least squares and cyclic coordinate descent and have shown that when the tuning parameter in the penalized likelihood is appropriately selected, such a procedure has the oracle property in selecting the variables. We present simulation results to compare with existing methods and show that the LR-based approach can perform similarly or better than the existing method of liquid association and the penalized likelihood procedure can be quite effective in selecting the mediators. We apply the proposed method to yeast gene expression data in order to identify the kinases or single nucleotide polymorphisms that mediate the co-expression patterns between genes.
Jäntschi, Lorentz; Bálint, Donatella; Bolboacă, Sorana D
2016-01-01
Multiple linear regression analysis is widely used to link an outcome with predictors for better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients of linear models with two predictors without any restrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as proof-of-concept using fourteen sets of compounds by investigating the link between activity/property (as outcome) and structural feature information incorporated by molecular descriptors (as predictors). The results on real data demonstrated that in all investigated cases the power of the error is significantly different from the conventional value of two when the Gauss-Laplace distribution was used to relax the restrictive assumption of normally distributed errors. Therefore, the Gauss-Laplace distribution of the errors could not be rejected, while the hypothesis that the power of the error from the Gauss-Laplace distribution is normally distributed also failed to be rejected.
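A minimal sketch of the idea of estimating the error power jointly with the coefficients, using SciPy's gennorm (the generalized Gaussian, i.e., Gauss-Laplace, family). The data are simulated and this is a generic illustration, not the authors' algorithm:

```python
# ML fit of a two-predictor linear model with generalized Gaussian
# (Gauss-Laplace) errors; the error power is estimated jointly.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gennorm

rng = np.random.default_rng(4)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
coef_true = np.array([1.0, 2.0, -0.5])
y = X @ coef_true + rng.laplace(scale=0.5, size=n)  # heavy-tailed errors

def nll(params):
    b0, b1, b2, log_scale, log_power = params
    resid = y - X @ np.array([b0, b1, b2])
    return -gennorm.logpdf(resid, beta=np.exp(log_power),
                           scale=np.exp(log_scale)).sum()

fit = minimize(nll, x0=[0.0, 0.0, 0.0, 0.0, np.log(2.0)],
               method="Nelder-Mead", options={"maxiter": 5000})
print(fit.x[:3], np.exp(fit.x[4]))  # coefficients and estimated power (~1)
```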
Baudry, Jean-Patrick
2012-01-01
The Integrated Completed Likelihood (ICL) criterion has been proposed by Biernacki et al. (2000) in the model-based clustering framework to select a relevant number of classes and has been used by statisticians in various application areas. A theoretical study of this criterion is proposed. A contrast related to the clustering objective is introduced: the conditional classification likelihood. This yields an estimator and a class of model selection criteria. The properties of these new procedures are studied and ICL is proved to be an approximation of one of these criteria. We contrast these results with the current leading point of view about ICL, that it would not be consistent. Moreover, these results give insights into the class notion underlying ICL and feed a reflection on the class notion in clustering. General results on penalized minimum contrast criteria and on mixture models are derived, which are interesting in their own right.
Liu, Zhiyong; Li, Chao; Zhou, Ping; Chen, Xiuzhi
2016-10-07
Climate change significantly impacts vegetation growth and terrestrial ecosystems. Using satellite remote sensing observations, here we focus on investigating vegetation dynamics and the likelihood of vegetation-related drought under varying climate conditions across China. We first compare temporal trends of the Normalized Difference Vegetation Index (NDVI) and climatic variables over China. We find that in fact there is no significant change in vegetation over the cold regions where warming is significant. Then, we propose a joint probability model to estimate the likelihood of vegetation-related drought conditioned on different precipitation/temperature scenarios in the growing season across China. To the best of our knowledge, this study is the first to examine the vegetation-related drought risk over China from a perspective based on joint probability. Our results demonstrate risk patterns of vegetation-related drought under both low and high precipitation/temperature conditions. We further identify the variations in vegetation-related drought risk under different climate conditions and the sensitivity of drought risk to climate variability. These findings provide insights for decision makers to evaluate drought risk and develop vegetation-related drought mitigation strategies over China in a warming world. The proposed methodology also has great potential to be applied to vegetation-related drought risk assessment in other regions worldwide.
Lloyd, Chris J; Moldovan, Max V
2007-12-10
We compare various one-sided confidence limits for the odds ratio in a 2 × 2 table. The first group of limits relies on first-order asymptotic approximations and includes limits based on the (signed) likelihood ratio, score and Wald statistics. The second group of limits is based on the conditional tilted hypergeometric distribution, with and without mid-P correction. All these limits have poor unconditional coverage properties and so we apply the general transformation of Buehler (J. Am. Statist. Assoc. 1957; 52:482-493) to obtain limits which are unconditionally exact. The performance of these competing exact limits is assessed across a range of sample sizes and parameter values by looking at their mean size. The results indicate that Buehler limits generated from the conditional likelihood have the best performance, with a slight preference for the mid-P version. This confidence limit has not been proposed before and is recommended for general use, especially when the underlying probabilities are not extreme.
Conditional maximum likelihood estimation in semiparametric transformation model with LTRC data.
Chen, Chyong-Mei; Shen, Pao-Sheng
2017-02-06
Left-truncated data often arise in epidemiology and individual follow-up studies due to a biased sampling plan, since subjects with shorter survival times tend to be excluded from the sample. Moreover, the survival times of recruited subjects are often subject to right censoring. In this article, a general class of semiparametric transformation models that includes the proportional hazards model and the proportional odds model as special cases is studied for the analysis of left-truncated and right-censored data. We propose a conditional likelihood approach and develop the conditional maximum likelihood estimators (cMLE) for the regression parameters and cumulative hazard function of these models. The derived score equations for the regression parameters and the infinite-dimensional function suggest an iterative algorithm for the cMLE. The cMLE is shown to be consistent and asymptotically normal. The limiting variances for the estimators can be consistently estimated using the inverse of the negative Hessian matrix. Intensive simulation studies are conducted to investigate the performance of the cMLE. An application to the Channing House data is given to illustrate the methodology.
Regression analysis based on conditional likelihood approach under semi-competing risks data.
Hsieh, Jin-Jian; Huang, Yu-Ting
2012-07-01
Medical studies often involve semi-competing risks data, which consist of two types of events, namely a terminal event and a non-terminal event. Because the non-terminal event may be dependently censored by the terminal event, it is not possible to make inference on the non-terminal event without extra assumptions. Therefore, this study assumes that the dependence structure of the non-terminal event and the terminal event follows a copula model, and lets the marginal regression models of the non-terminal event and the terminal event both follow time-varying effect models. This study uses a conditional likelihood approach to estimate the time-varying coefficient of the non-terminal event, and proves the large sample properties of the proposed estimator. Simulation studies show that the proposed estimator performs well. This study also uses the proposed method to analyze data from the AIDS Clinical Trials Group (ACTG 320).
Haberman, Shelby J.
2004-01-01
The usefulness of joint and conditional maximum-likelihood estimation is considered for the Rasch model under realistic testing conditions in which the number of examinees is very large and the number of items is relatively large. Conditions for consistency and asymptotic normality are explored, effects of model error are investigated, measures of prediction…
The null distribution of likelihood-ratio statistics in the conditional-logistic linkage model.
Song, Yeunjoo E; Elston, Robert C
2013-01-01
Olson's conditional-logistic model retains the nice property of the LOD score formulation and has advantages over other methods that make it an appropriate choice for complex trait linkage mapping. However, the asymptotic distribution of the conditional-logistic likelihood-ratio (CL-LR) statistic with genetic constraints on the model parameters is unknown for some analysis models, even in the case of samples comprising only independent sib pairs. We derive approximations to the asymptotic null distributions of the CL-LR statistics and compare them with the empirical null distributions by simulation using independent affected sib pairs. Generally, the empirical null distributions of the CL-LR statistics match well the known or approximated asymptotic distributions for all analysis models considered except for the covariate model with a minimum-adjusted binary covariate. This work will provide useful guidelines for linkage analysis of real data sets for the genetic analysis of complex traits, thereby contributing to the identification of genes for disease traits.
Gebregziabher, Mulugeta; Guimaraes, Paulo; Cozen, Wendy; Conti, David V
2010-04-30
In genetic association studies it is becoming increasingly imperative to have large sample sizes to identify and replicate genetic effects. To achieve these sample sizes, many research initiatives are encouraging the collaboration and combination of several existing matched and unmatched case-control studies. Thus, it is becoming more common to compare multiple sets of controls with the same case group, or multiple case groups, to validate or confirm a positive or negative finding. Usually, a naive approach of fitting separate models for each case-control comparison is used to make inference about disease-exposure association. But this approach does not make use of all the observed data and hence could lead to inconsistent results. The problem is compounded when a common case group is used in each case-control comparison. An alternative to fitting separate models is to use a polytomous logistic model, but this model does not combine matched and unmatched case-control data. Thus, we propose a polytomous logistic regression approach based on a latent group indicator and a conditional likelihood to do a combined analysis of matched and unmatched case-control data. We use simulation studies to evaluate the performance of the proposed method and a case-control study of multiple myeloma and Interleukin-6 as an example. Our results indicate that the proposed method leads to a more efficient homogeneity test and a pooled estimate with smaller standard error.
Andersen, Erling B.
A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…
Maris, E.
1998-01-01
The sampling interpretation of confidence intervals and hypothesis tests is discussed in the context of conditional maximum likelihood estimation. Three different interpretations are discussed, and it is shown that confidence intervals constructed from the asymptotic distribution under the third sampling scheme discussed are valid for the first…
Draxler, Clemens; Alexandrowicz, Rainer W
2015-12-01
This paper refers to the exponential family of probability distributions and the conditional maximum likelihood (CML) theory. It is concerned with the determination of the sample size for three groups of tests of linear hypotheses, known as the fundamental trinity of Wald, score, and likelihood ratio tests. The main practical purpose refers to the special case of tests of the class of Rasch models. The theoretical background is discussed and the formal framework for sample size calculations is provided, given a predetermined deviation from the model to be tested and the probabilities of the errors of the first and second kinds.
Li, Z; Gastwirth, J L; Gail, M H
2005-05-01
Both population-based and family-based case-control studies are used to test whether particular genotypes are associated with disease. While population-based studies have more power, cryptic population stratification can produce false-positive results. Family-based methods have been introduced to control for this problem. This paper presents the full likelihood function for family-based association studies for nuclear families ascertained on the basis of their number of affected and unaffected children. The likelihood of a family factors into the probability of parental mating type, conditional on offspring phenotypes, times the probability of offspring genotypes given their phenotypes and the parental mating type. The first factor can be influenced by population stratification, whereas the latter factor, called the conditional likelihood, is not. The conditional likelihood is used to obtain score tests with proper size in the presence of population stratification (see also Clayton (1999) and Whittemore & Tu (2000)). Under either the additive or multiplicative model, the TDT is known to be the optimal score test when the family has only one affected child. Thus, the class of score tests explored can be considered as a general family of TDT-like procedures. The relative informativeness of the various mating types is assessed using the Fisher information, which depends on the number of affected and unaffected offspring and the penetrances. When the additive model is true, families with parental mating type Aa × Aa are most informative. Under the dominant (recessive) model, however, a family with mating type Aa × aa (AA × Aa) is more informative than a family with doubly heterozygous (Aa × Aa) parents. Because we derive explicit formulae for all components of the likelihood, we are able to present tables giving required sample sizes for dominant, additive and recessive inheritance models.
Cheng, K F
2006-09-30
Given the biomedical interest in gene-environment interactions, along with the difficulties inherent in gathering genetic data from controls, epidemiologists need methodologies that can increase the precision of estimating interactions while minimizing the genotyping of controls. To achieve this purpose, many epidemiologists have suggested the case-only design. In this paper, we present a maximum likelihood method for making inference about gene-environment interactions using case-only data. The probability of disease development is described by a logistic risk model, so the interactions are model parameters measuring the departure of joint effects of exposure and genotype from multiplicative odds ratios. We extend the typical inference method, derived under the assumption of independence between genotype and exposure, to the more general assumption of conditional independence. Our maximum likelihood method can be applied to analyse both categorical and continuous environmental factors, and generalized to make inference about gene-gene-environment interactions. Moreover, with case-only data the application of this method reduces to simply fitting a multinomial logistic model, as sketched below. As a consequence, the maximum likelihood estimates of interactions and likelihood ratio tests for hypotheses concerning interactions can be easily computed. The methodology is illustrated through an example based on a study of the joint effects of XRCC1 polymorphisms and smoking on bladder cancer. We also give two simulation studies to show that the proposed method is reliable in finite-sample situations.
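A minimal sketch of that reduction, on simulated data and with the gene-environment independence assumption built in: among cases only, regress genotype on exposure with a multinomial logistic model, and the exposure coefficients estimate the interaction log-odds-ratios. Variable names and effect sizes are hypothetical.

```python
# Case-only interaction analysis via multinomial logistic regression
# (illustrative; assumes gene-environment independence in the population).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
exposure = rng.binomial(1, 0.4, size=n)                      # hypothetical exposure
genotype = rng.choice([0, 1, 2], size=n, p=[0.5, 0.4, 0.1])  # allele copies

# Simulate case status with a multiplicative interaction
# (log OR = 0.7 per allele copy among the exposed).
logit = -3.0 + 0.3 * exposure + 0.2 * genotype + 0.7 * exposure * genotype
case = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Keep cases only and fit genotype ~ exposure.
E = sm.add_constant(exposure[case == 1].astype(float))
fit = sm.MNLogit(genotype[case == 1], E).fit(disp=False)
print(fit.params)  # exposure rows estimate interaction log-ORs (~0.7, ~1.4)
```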
Larribe, Fabrice; Lessard, Sabin
2008-01-01
A composite-conditional-likelihood (CCL) approach is proposed to map the position of a trait-influencing mutation (TIM) using the ancestral recombination graph (ARG) and importance sampling to reconstruct the genealogy of DNA sequences with respect to windows of marker loci and predict the linkage disequilibrium pattern observed in a sample of cases and controls. The method is designed to fine-map the location of a disease mutation, not as an association study. The CCL function proposed for the position of the TIM is a weighted product of conditional likelihood functions for windows of a given number of marker loci that encompass the TIM locus, given the sample configuration at the marker loci in those windows. A rare recessive allele is assumed for the TIM and single nucleotide polymorphisms (SNPs) are considered as markers. The method is applied to a range of simulated data sets. Not only do the CCL profiles converge more rapidly with smaller window sizes as the number of simulated histories of the sampled sequences increases, but the maximum-likelihood estimates for the position of the TIM remain as satisfactory, while requiring significantly less computing time. The simulations also suggest that non-random samples, more precisely a non-proportional number of controls versus the number of cases, have little effect on the estimation procedure, as do sample size and marker density beyond some threshold values. Moreover, when compared with some other recent methods under the same assumptions, the CCL approach proves to be competitive.
Baturin, Pavlo
2015-03-01
Material decomposition in absorption-based X-ray CT imaging suffers certain inefficiencies when differentiating among soft tissue materials. To address this problem, decomposition techniques turn to spectral CT, which has gained popularity over the last few years. Although proven to be more effective, such techniques are primarily limited to the identification of contrast agents and soft and bone-like materials. In this work, we introduce a novel conditional likelihood, material-decomposition method capable of identifying any type of material objects scanned by spectral CT. The method takes advantage of the statistical independence of spectral data to assign likelihood values to each of the materials on a pixel-by-pixel basis. It results in likelihood images for each material, which can be further processed by setting certain conditions or thresholds, to yield a final material-diagnostic image. The method can also utilize phase-contrast CT (PCI) data, where measured absorption and phase-shift information can be treated as statistically independent datasets. In this method, the following cases were simulated: (i) single-scan PCI CT, (ii) spectral PCI CT, (iii) absorption-based spectral CT, and (iv) single-scan PCI CT with an added tumor mass. All cases were analyzed using a digital breast phantom; although, any other objects or materials could be used instead. As a result, all materials were identified, as expected, according to their assignment in the digital phantom. Materials with similar attenuation or phase-shift values (e.g., glandular tissue, skin, and tumor masses) were differentiated especially successfully by the likelihood approach.
Ruka, Dianne R; Simon, George P; Dean, Katherine M
2012-06-20
An extensive matrix of different growth conditions, including media, incubation time, inoculum volume, surface area and media volume, was investigated in order to maximize the yield of bacterial cellulose produced by Gluconacetobacter xylinus, which will be used as reinforcement material to produce fully biodegradable composites. Crystallinity was shown to be controllable depending on the media and conditions employed. Samples with significant differences in crystallinity, in a range from 50% to 95%, were produced. Through experimental design, the yield of cellulose was maximized; primarily this involved reactor surface-area design, optimized media, and the use of mannitol, the highest cellulose-producing carbon source. Increasing the volume of the media did achieve a higher cellulose yield; however, this increase was not found to be cost or time effective.
Sargsyan, Ori
2010-08-01
The general coalescent tree framework is a family of models for determining ancestries among random samples of DNA sequences at a nonrecombining locus. The ancestral models included in this framework can be derived under various evolutionary scenarios. Here, a computationally tractable full-likelihood-based inference method for neutral polymorphisms is presented, using the general coalescent tree framework and the infinite-sites model for mutations in DNA sequences. First, an exact sampling scheme is developed to determine the topologies of conditional ancestral trees. However, this scheme has some computational limitations and to overcome these limitations a second scheme based on importance sampling is provided. Next, these schemes are combined with Monte Carlo integrations to estimate the likelihood of full polymorphism data, the ages of mutations in the sample, and the time of the most recent common ancestor. In addition, this article shows how to apply this method for estimating the likelihood of neutral polymorphism data in a sample of DNA sequences completely linked to a mutant allele of interest. This method is illustrated using the data in a sample of DNA sequences at the APOE gene locus.
Chen, Ying-Xue; Zhang, Yan; Chen, Guang-Hao
2003-05-01
This study focused on the appropriate catalyst preparation and operating conditions for maximizing catalytic reduction efficiency of nitrate into nitrogen gas from groundwater. Batch experiments were conducted with prepared Pd and/or Cu catalysts with hydrogen gas supplied under specific operating conditions. It has been found that Pd-Cu combined catalysts prepared at a mass ratio of 4:1 can maximize the nitrate reduction into nitrogen gas. With an increase in the quantity of the catalysts, both nitrite intermediates and ammonia can be kept at a low level. It has also been found that the catalytic activity is mainly affected by the mass ratio of hydrogen gas to nitrate nitrogen, and the hydrogen gas gauge pressure. Appropriate operating values of the H2/NO3-N ratio, hydrogen gas gauge pressure, pH, and initial nitrate concentration have been determined to be 44.6 g H2/g N, 0.15 atm, 5.2, and 100 mg/L, respectively, for maximizing the catalytic reduction of nitrate from groundwater.
General conditions for maximal violation of non-contextuality in discrete and continuous variables
Laversanne-Finot, A.; Ketterer, A.; Barros, M. R.; Walborn, S. P.; Coudreau, T.; Keller, A.; Milman, P.
2017-04-01
The contextuality of quantum mechanics can be shown by the violation of inequalities based on measurements of well chosen observables. An important property of such observables is that their expectation value can be expressed in terms of probabilities for obtaining two exclusive outcomes. Examples of such inequalities have been constructed using either observables with a dichotomic spectrum or using periodic functions obtained from displacement operators in phase space. Here we identify the general conditions on the spectral decomposition of observables demonstrating state independent contextuality of quantum mechanics. Our results not only unify existing strategies for maximal violation of state independent non-contextuality inequalities but also lead to new scenarios enabling such violations. Among the consequences of our results is the impossibility of having a state independent maximal violation of non-contextuality in the Peres–Mermin scenario with discrete observables of odd dimensions.
王璐; 李光春; 乔相伟; 王兆龙; 马涛
2012-01-01
In order to solve the state estimation problem of nonlinear systems without knowing prior noise statistical characteristics, an adaptive unscented Kalman filter (UKF) based on the maximum likelihood principle and the expectation maximization algorithm is proposed in this paper. In our algorithm, the maximum likelihood principle is used to construct a log likelihood function involving the noise statistical characteristics. The problem of noise estimation then turns into maximizing the mean of the log likelihood function, which can be achieved by using the expectation maximization algorithm. Finally, the adaptive UKF algorithm with a suboptimal and recursive noise statistical estimator is obtained. The simulation analysis shows that the proposed adaptive UKF algorithm overcomes the degradation of filtering accuracy that the traditional UKF suffers in nonlinear filtering when prior noise statistical characteristics are unknown, and that the algorithm can estimate the noise statistical parameters online.
Lui, Kung-Jong
2015-07-15
A random effects logistic regression model is proposed for an incomplete block crossover trial comparing three treatments when the underlying patient response is dichotomous. On the basis of the conditional distributions, the conditional maximum likelihood estimator for the relative effect between treatments and its estimated asymptotic standard error are derived. An asymptotic interval estimator and an exact interval estimator are also developed. Monte Carlo simulation is used to evaluate the performance of these estimators. Both asymptotic and exact interval estimators are found to perform well in a variety of situations. When the number of patients is small, the exact interval estimator, which assures a coverage probability larger than or equal to the desired confidence level, can be especially useful. The data taken from a crossover trial comparing the low and high doses of an analgesic with a placebo for the relief of pain in primary dysmenorrhea are used to illustrate the use of the estimators and the potential usefulness of the incomplete block crossover design.
Analysis of multiple exposures in the case-crossover design via sparse conditional likelihood.
Avalos, Marta; Grandvalet, Yves; Adroher, Nuria Duran; Orriols, Ludivine; Lagarde, Emmanuel
2012-09-20
We adapt the least absolute shrinkage and selection operator (lasso) and other sparse methods (elastic net and bootstrapped versions of the lasso) to the conditional logistic regression model and provide a full R implementation. These variable selection procedures are applied in the context of case-crossover studies. We study the performance of conventional and sparse modelling strategies by simulations, then empirically compare the results of these methods on the analysis of the association between exposure to medicinal drugs and the risk of causing an injurious road traffic crash in elderly drivers. Controlling the false discovery rate of lasso-type methods is still problematic, but this problem is also present in conventional methods. The sparse methods have the ability to provide a global analysis of dependencies, and we conclude that some of the variants compared here are valuable tools in the context of case-crossover studies with a large number of variables.
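A rough sketch of the lasso mechanism in logistic regression using scikit-learn. Note the hedge: scikit-learn implements only the unconditional model, so this illustrates sparsity-based variable selection, not the paper's conditional-logistic (case-crossover) estimator or its R implementation; the data are simulated.

```python
# L1-penalized (lasso) logistic regression: many candidate exposures,
# few true effects. Stand-in for the paper's sparse *conditional*
# logistic model, which scikit-learn does not implement.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n, p = 1000, 30
X = rng.binomial(1, 0.2, size=(n, p)).astype(float)  # drug exposures
logit = -1.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1]          # two true effects
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X, y)
print(np.nonzero(lasso.coef_[0])[0])  # ideally selects columns 0 and 1
```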
Roy Choudhury, Kingshuk; O'Sullivan, Finbarr; Kasman, Ian; Plowman, Greg D
2012-12-20
Measurements in tumor growth experiments are stopped once the tumor volume exceeds a preset threshold: a mechanism we term volume endpoint censoring. We argue that this type of censoring is informative. Further, least squares (LS) parameter estimates are shown to suffer a bias in a general parametric model for tumor growth with an independent and identically distributed measurement error, both theoretically and in simulation experiments. In a linear growth model, the magnitude of bias in the LS growth rate estimate increases with the growth rate and the standard deviation of measurement error. We propose a conditional maximum likelihood estimation procedure, which is shown both theoretically and in simulation experiments to yield approximately unbiased parameter estimates in linear and quadratic growth models. Both LS and maximum likelihood estimators have similar variance characteristics. In simulation studies, these properties appear to extend to the case of moderately dependent measurement error. The methodology is illustrated by application to a tumor growth study for an ovarian cancer cell line.
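A sketch of the conditional-likelihood idea for a linear growth model: once measurement stops at the first volume above a threshold c, each recorded measurement contributes a normal density conditioned on lying below c (division by the normal CDF at the threshold). This simplified version is our own illustration under stated assumptions, not the authors' exact estimator:

```python
# Conditional ML for linear tumor growth with volume endpoint censoring:
# observations are recorded only while volume <= threshold c, so each
# contributes phi((v-mu)/s) / Phi((c-mu)/s) (a truncated-normal density).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
c = 10.0                                   # volume threshold
t_all = np.arange(0.0, 20.0)
v_all = 1.0 + 0.6 * t_all + rng.normal(scale=0.5, size=t_all.size)
stop = np.argmax(v_all > c) if (v_all > c).any() else v_all.size
t, v = t_all[:stop], v_all[:stop]          # measurement stops past c

def neg_cond_loglik(params):
    a, b, log_s = params
    mu, s = a + b * t, np.exp(log_s)
    return -(norm.logpdf(v, mu, s) - norm.logcdf((c - mu) / s)).sum()

fit = minimize(neg_cond_loglik, x0=[0.0, 1.0, 0.0], method="Nelder-Mead")
print(fit.x[:2])  # intercept and growth rate, threshold-adjusted (~1.0, ~0.6)
```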
Correspondenceless 3D-2D registration based on expectation conditional maximization
Kang, X.; Taylor, R. H.; Armand, M.; Otake, Y.; Yau, W. P.; Cheung, P. Y. S.; Hu, Y.
2011-03-01
3D-2D registration is a fundamental task in image-guided interventions. Owing to the physics of X-ray imaging, however, traditional point-based methods face new challenges: local point features are indistinguishable, which makes it difficult to establish correspondences between 2D image feature points and 3D model points. In this paper, we propose a novel method that accomplishes 3D-2D registration without known correspondences. Given a set of unmatched 3D and 2D points, this is achieved by introducing correspondence probabilities that we model as a mixture model. By casting the problem into the expectation conditional maximization framework, we can iteratively refine the registration parameters without establishing one-to-one point correspondences. The method has been tested on 100 real X-ray images. The experiments showed that the proposed method accurately estimated the rotations (< 1°) and in-plane (X-Y plane) translations (< 1 mm).
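As a hedged illustration of this kind of soft-correspondence E-step (the abstract does not give the exact mixture specification, so every symbol below is an assumption): responsibilities of 3D model points $x_j$ for 2D image points $y_i$ under the current pose $\theta$ can be computed as

\[
\gamma_{ij} \;=\; \frac{\pi_j\, \mathcal{N}\big(y_i;\, P_\theta(x_j),\, \sigma^2 I\big)}{\sum_k \pi_k\, \mathcal{N}\big(y_i;\, P_\theta(x_k),\, \sigma^2 I\big)},
\]

where $P_\theta$ is the projection of the X-ray imaging geometry and $\pi_j$ are mixture weights; the conditional maximization steps then update $\theta$ and $\sigma^2$ with the $\gamma_{ij}$ held fixed.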
SUN Churen
2005-01-01
It is difficult to judge whether a given point is a global maximizer of an unconstrained optimization problem. This paper deals with the problem by considering global information via integrals, and gives a necessary and sufficient condition for judging whether a given point is a global maximizer of an unconstrained optimization problem. An algorithm based on this condition is offered, and finally two test problems are verified via the offered algorithm.
Rising Above Chaotic Likelihoods
Du, Hailiang
2014-01-01
Berliner (Likelihood and Bayesian prediction for chaotic systems, J. Am. Stat. Assoc. 1991) identified a number of difficulties in using the likelihood function within the Bayesian paradigm for state estimation and parameter estimation of chaotic systems. Even when the equations of the system are given, he demonstrated "chaotic likelihood functions" of initial conditions and parameter values in the 1-D logistic map. Chaotic likelihood functions, while ultimately smooth, have such complicated small-scale structure as to cast doubt on the possibility of identifying high-likelihood estimates in practice. In this paper, the challenge of chaotic likelihoods is overcome by embedding the observations in a higher-dimensional sequence space, which is shown to allow good state estimation with finite computational power. An importance sampling approach is introduced, in which pseudo-orbit data assimilation is employed in the sequence space first to identify relevant pseudo-orbits and then relevant trajectories. ...
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of machine learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings, as well as by nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
Owen, Art B
2001-01-01
Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
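As a hedged sketch of the simplest case mentioned above, a confidence region for a univariate mean under IID sampling: the empirical log-likelihood ratio at a candidate mean mu is obtained by profiling out a one-dimensional Lagrange multiplier. Function names, tolerances, and the test data below are illustrative assumptions, not material from the book.

    import numpy as np

    def el_log_ratio(x, mu, tol=1e-10, max_iter=100):
        """Empirical log-likelihood ratio -2*log R(mu) for a univariate mean.

        The weights w_i = 1 / (n * (1 + lam*(x_i - mu))) maximize prod(n*w_i)
        subject to sum(w_i * (x_i - mu)) = 0; lam solves the dual equation.
        """
        z = x - mu
        if z.min() >= 0 or z.max() <= 0:
            return np.inf            # mu outside the convex hull: R(mu) = 0
        lam = 0.0
        for _ in range(max_iter):    # Newton's method on g(lam) = sum(z/(1+lam*z))
            d = 1.0 + lam * z
            step = np.sum(z / d) / (-np.sum(z**2 / d**2))
            lam -= step
            if abs(step) < tol:
                break
        return 2.0 * np.sum(np.log1p(lam * z))   # compare with a chi2(1) quantile

    rng = np.random.default_rng(0)
    x = rng.normal(5.0, 2.0, size=50)
    print(el_log_ratio(x, 5.0))      # small value: mu = 5.0 is plausible
    print(el_log_ratio(x, 7.0))      # large value: mu = 7.0 is rejected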
A quantum framework for likelihood ratios
Bond, Rachael L; Ormerod, Thomas C
2015-01-01
The ability to calculate precise likelihood ratios is fundamental to many STEM areas, such as decision-making theory, biomedical science, and engineering. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes' theorem either defaults to the marginal-probability-driven "naive Bayes' classifier", or requires the use of compensatory expectation-maximization techniques. Equally, the use of alternative statistical approaches, such as multivariate logistic regression, may be confounded by other axiomatic conditions, e.g., low levels of collinearity. This article takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement. In doing so, it is argued that this quantum approach demonstrates: that the likelihood ratio is a real quality of statistical systems; that the naive Bayes' classifier is a spec...
Likelihood approaches for proportional likelihood ratio model with right-censored data.
Zhu, Hong
2014-06-30
Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumptions or lack of a ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models, such as the generalized linear model and the density ratio model, and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, in which the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients with acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks.
Conditional maximum likelihood identification under missing data
王建宏
2014-01-01
For the conditional maximum likelihood identification of an affine-structured model under missing data, a permutation matrix is first introduced to split the random vector into observed and missing parts. The conditional mean and conditional covariance of the observed data given the missing data are then derived and used to construct a conditional likelihood function. Expressions for the derivatives of the conditional maximum likelihood function with respect to the unknown parameter vector, the unknown white-noise variance, and the missing data are given, along with a separable optimization algorithm suitable for engineering practice. Finally, simulation results show the effectiveness of the identification method.
Yasuhiro Tsubo
The brain is considered to use a relatively small amount of energy for its efficient information processing. Under a severe restriction on energy consumption, the maximization of mutual information (MMI), which is adequate for designing artificial processing machines, may not suit the brain. MMI attempts to send information as accurately as possible, and this usually requires a sufficient energy supply for establishing clearly discretized communication bands. Here, we derive an alternative hypothesis for the neural code from neuronal activities recorded juxtacellularly in the sensorimotor cortex of behaving rats. Our hypothesis states that in vivo cortical neurons maximize the entropy of neuronal firing under two constraints, one limiting the energy consumption (as assumed previously) and one restricting the uncertainty in output spike sequences at a given firing rate. Thus, the conditional maximization of firing-rate entropy (CMFE) solves a tradeoff between the energy cost and noise in the neuronal response. In short, the CMFE sends a rich variety of information through broader communication bands (i.e., widely distributed firing rates) at the cost of accuracy. We demonstrate that the CMFE is reflected in the long-tailed, typically power-law, distributions of inter-spike intervals obtained for the majority of recorded neurons. In other words, the power-law tails are more consistent with the CMFE than with the MMI. Thus, we propose a mathematical principle by which cortical neurons may represent information about synaptic input in their output spike trains.
Fearn, T; Hill, D C; Darby, S C
2008-05-30
In epidemiology, one approach to investigating the dependence of disease risk on an explanatory variable in the presence of several confounding variables is to fit a binary regression using a conditional likelihood, thus eliminating the nuisance parameters. When the explanatory variable is measured with error, the estimated regression coefficient is biased, usually towards zero. Motivated by the need to correct for this bias in analyses that combine data from a number of case-control studies of lung cancer risk associated with exposure to residential radon, two approaches are investigated. Both employ the conditional distribution of the true explanatory variable given the measured one. The method of regression calibration uses the expected value of the true variable given the measured one as the covariate. The second approach integrates the conditional likelihood numerically by sampling from the distribution of the true explanatory variable given the measured one. The two approaches give very similar point estimates and confidence intervals not only for the motivating example but also for an artificial data set with known properties. These results, and some further simulations that demonstrate correct coverage for the confidence intervals, suggest that for studies of residential radon and lung cancer the regression calibration approach will perform very well, so that nothing more sophisticated is needed to correct for measurement error.
Stewart, Iris T.; Ficklin, Darren L.; Carrillo, Carlos A.; McIntosh, Russell
2015-10-01
Extreme hydrologic conditions, such as floods, droughts, and elevated stream temperatures, significantly impact the societal fabric and ecosystems, and there is rising concern about increases in the frequency of extreme conditions under projected climate changes. Here we ask what changes in the occurrence of extreme hydrologic conditions can be expected by the end of the century for the important water-generating, mountainous basins of the Southwestern United States, namely the Sierra Nevada and Upper Colorado River Basins. The extreme conditions considered are very high flows, low flows, and elevated stream temperature, as derived from historic and future simulations using the Soil and Water Assessment Tool (SWAT) hydrologic model and downscaled output from a General Circulation Model ensemble. Results indicate noteworthy differences in the frequency changes of extremes based on geographic region, season, elevation, and stream size. We found widespread increases in the occurrence of stream flows exceeding 150% of historic monthly averages for winter by the end of the century, and extensive increases in the occurrence of both extreme low flows and stream temperatures exceeding 3 °C above historic monthly averages during the summer months, with some basins expecting extreme conditions 90-100% of the time by the end of the century. Understanding the differences in the changes of extreme conditions can identify climate-sensitive regions and assist in targeted planning for climate change adaptation and mitigation.
2014-01-01
Active magnetic bearings used on oil-free centrifugal refrigeration compressors have lower stiffness than conventional oil-lubricated journal or rolling element bearings. The lower stiffness of these bearings makes them sensitive to internal flow instabilities that are precursors of rotating stall or compressor surge. At operating conditions far away from surge the internal flow is very stable and the magnetic bearings keep the shaft centered, resulting in a minimal bearing orbit. The interna...
Optimal initial condition of passive tracers for their maximal mixing in finite time
Farazmand, Mohammad
2016-01-01
The efficiency of a fluid mixing device is often limited by fundamental laws and/or design constraints, such that a perfectly homogeneous mixture cannot be obtained in finite time. Here, we address the natural corollary question: Given the best available mixer, what is the optimal initial tracer pattern that leads to the most homogeneous mixture after a prescribed finite time? For ideal passive tracers, we show that this optimal initial condition coincides with the right singular vector (corresponding to the smallest singular value) of a suitably truncated Koopman operator. The truncation of the Koopman operator is made under the assumption that there is a small length-scale threshold $\ell_\nu$ under which the tracer blobs are considered, for all practical purposes, completely mixed.
Yönten, Vahap; Aktaş, Nahit
2014-01-01
The optimal, cost-efficient medium composition for microbial growth of a Candida intermedia Y-1981 yeast culture growing on whey was explored by applying a multistep response surface methodology. In the first step, a Plackett-Burman (PB) design was used to determine the fermentation medium factors most significant for microbial growth. The medium temperature and the sodium chloride and lactose concentrations were identified as the most important factors. Subsequently, the optimum combinations of the selected factors were explored by steepest ascent (SA) and central composite design (CCD). The optimum values for the lactose concentration, sodium chloride concentration, and medium temperature were found to be 18.4 g/L, 0.161 g/L, and 32.4°C, respectively. Experiments carried out at the optimum conditions revealed a maximum specific growth rate of 0.090 h⁻¹; 42% total lactose removal was achieved in 24 h of fermentation time. The results were finally verified with batch reactor experiments carried out under the optimum conditions.
Optimal initial condition of passive tracers for their maximal mixing in finite time
Farazmand, Mohammad
2017-05-01
The efficiency of fluid flow for mixing passive tracers is often limited by fundamental laws and/or design constraints, such that a perfectly homogeneous mixture cannot be obtained in finite time. Here we address the natural corollary question: Given a fluid flow, what is the optimal initial tracer pattern that leads to the most homogeneous mixture after a prescribed finite time? For ideal passive tracers, we show that this optimal initial condition coincides with the right singular vector (corresponding to the smallest singular value) of a suitably truncated Perron-Frobenius (PF) operator. The truncation of the PF operator is made under the assumption that there is a small length-scale threshold ℓν under which the tracer blobs are considered, for all practical purposes, completely mixed. We demonstrate our results on two examples: a prototypical model known as the sine flow and a direct numerical simulation of two-dimensional turbulence. Evaluating the optimal initial condition through this framework requires only the position of a dense grid of fluid particles at the final instance and their preimages at the initial instance of the prescribed time interval. As such, our framework can be readily applied to flows where such data are available through numerical simulations or experimental measurements.
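A hedged numerical sketch of this recipe, using only the ingredient the abstract names (particle positions at the initial and final instances): build a discretized transfer matrix from the particle endpoints, blur it to suppress scales below the mixing threshold, and take the right singular vector belonging to the smallest singular value. The grid size, blur width, toy map, and all names are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def optimal_initial_pattern(x0, x1, n=32, blur=1.0):
        """Approximate optimal initial tracer pattern on the unit torus [0,1)^2.

        x0, x1: (N, 2) particle positions at the initial and final instances.
        Returns an (n, n) field: the right singular vector (smallest singular
        value) of a blurred, discretized transfer matrix.
        """
        c0 = np.minimum((x0 * n).astype(int), n - 1)
        c1 = np.minimum((x1 * n).astype(int), n - 1)
        r0 = c0[:, 0] * n + c0[:, 1]           # flattened initial cells
        r1 = c1[:, 0] * n + c1[:, 1]           # flattened final cells
        P = np.zeros((n * n, n * n))
        np.add.at(P, (r1, r0), 1.0)            # count cell-to-cell transitions
        P /= np.maximum(P.sum(axis=0, keepdims=True), 1.0)   # column-stochastic
        # Suppress scales below the mixing threshold by blurring output cells
        P = gaussian_filter(P.reshape(n, n, n * n), sigma=(blur, blur, 0))
        P = P.reshape(n * n, n * n)
        _, _, Vt = np.linalg.svd(P, full_matrices=False)
        return Vt[-1].reshape(n, n)

    rng = np.random.default_rng(1)
    x0 = rng.random((200_000, 2))
    x1 = (x0 + 0.3 * np.sin(2 * np.pi * x0[:, ::-1])) % 1.0  # toy one-step flow map
    pattern = optimal_initial_pattern(x0, x1)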
Holdaway, Alex S.; Owens, Julie Sarno
2015-01-01
Using a within-subjects design and validated vignettes, this study examined the relative effects of four training and consultation conditions (i.e., consultation with key opinion leaders, consultation with observation and performance feedback, consultation with motivational interviewing, and professional development-as-usual) on teachers' (N =…
Rodriguez Carlos A
2005-06-01
Background: Streptococcus pneumoniae, particularly its penicillin-resistant strains (PRSP), constitutes one of the most important causes of serious infections worldwide. It is a fastidious microorganism with exquisite nutritional and environmental requirements for growth, a characteristic that has prevented the development of useful animal models to study the biology of the microorganism. This study was designed to determine optimal conditions for culture and growth of PRSP. Results: We developed a simple and reproducible method for culture of diverse strains of PRSP representing several invasive serotypes of clinical and epidemiological importance in Colombia. Application of this 3-step culture protocol consistently produced more than 9 log10 CFU/ml of viable cells in the middle part of the logarithmic phase of their growth curve. Conclusion: A controlled inoculum size grown in 3 successive steps in supplemented agar and broth under a 5% CO2 atmosphere, with pH adjustment and specific incubation times, allowed production of large numbers of PRSP without untimely activation of autolysis mechanisms.
Poteat, V Paul; Calzo, Jerel P; Yoshikawa, Hirokazu
2016-07-01
Gay-Straight Alliances (GSAs) may promote wellbeing for sexual minority youth (e.g., lesbian, gay, bisexual, or questioning youth) and heterosexual youth. We considered this potential benefit of GSAs in the current study by examining whether three GSA functions-support/socializing, information/resource provision, and advocacy-contributed to sense of agency among GSA members while controlling for two major covariates, family support and the broader school LGBT climate. The sample included 295 youth in 33 Massachusetts GSAs (69 % LGBQ, 68 % cisgender female, 68 % white; M age = 16.06 years). Based on multilevel models, as hypothesized, youth who received more support/socializing, information/resources, and did more advocacy in their GSA reported greater agency. Support/socializing and advocacy distinctly contributed to agency even while accounting for the contribution of family support and positive LGBT school climate. Further, advocacy was associated with agency for sexual minority youth but not heterosexual youth. Greater organizational structure enhanced the association between support/socializing and agency; it also enhanced the association between advocacy and agency for sexual minority youth. These findings begin to provide empirical support for specific functions of GSAs that could promote wellbeing and suggest conditions under which their effects may be enhanced.
Chen, Jinglong; Wan, Zhiguo; Pan, Jun; Zi, Yanyang; Wang, Yu; Chen, Binqiang; Sun, Hailiang; Yuan, Jing; He, Zhengjia
2016-02-01
Timely fault identification of a rolling mill drivetrain is significant for guaranteeing product quality and realizing long-term safe operation, so a condition monitoring system for the rolling mill drivetrain was designed and developed. However, because compound-fault and weak-fault feature information is usually submerged in heavy background noise, this task still faces a challenge. This paper provides a possibility for fault identification of rolling mill drivetrains by proposing a customized maximal-overlap multiwavelet denoising method. The effectiveness of a wavelet denoising method mainly relies on the appropriate selection of the wavelet basis, the transform strategy, and the threshold rule. First, in order to realize exact matching and accurate detection of fault features, a customized multiwavelet basis function is constructed via a symmetric lifting scheme, and the vibration signal is then processed by the maximal-overlap multiwavelet transform. Next, based on the spatial dependency of multiwavelet transform coefficients, a data-driven group-threshold shrinkage strategy over spatially neighboring coefficients is developed for the denoising process, choosing the optimal group length and threshold via the minimum of Stein's Unbiased Risk Estimate. The effectiveness of the proposed method is first demonstrated through compound fault identification of the reduction gearbox on a rolling mill. It is then applied to weak fault identification of a dedusting fan bearing on a rolling mill, and the results support its feasibility.
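A hedged, much-simplified sketch of the general wavelet-denoising recipe described here: a single standard wavelet instead of the customized multiwavelet, and a plain universal threshold instead of the SURE-optimized group threshold. The signal, wavelet choice, and parameters are illustrative assumptions.

    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="sym8", level=4):
        """Soft-threshold wavelet denoising of a 1-D vibration signal."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Robust noise estimate from the finest-scale detail coefficients
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
        shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                for c in coeffs[1:]]
        return pywt.waverec(shrunk, wavelet)[: len(signal)]

    t = np.linspace(0.0, 1.0, 4096)
    impacts = np.sin(2 * np.pi * 50 * t) * (np.sin(2 * np.pi * 5 * t) > 0.95)
    noisy = impacts + 0.5 * np.random.default_rng(0).standard_normal(t.size)
    recovered = wavelet_denoise(noisy)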
Likelihood Analysis of Seasonal Cointegration
Johansen, Søren; Schaumburg, Ernst
1999-01-01
The error correction model for seasonal cointegration is analyzed. Conditions are found under which the process is integrated of order 1 and cointegrated at seasonal frequency, and a representation theorem is given. The likelihood function is analyzed, and the numerical calculation of the maximum likelihood estimators is discussed. The asymptotic distribution of the likelihood ratio test for cointegrating rank is given. It is shown that the estimated cointegrating vectors are asymptotically mixed Gaussian. The results resemble the results for cointegration at zero frequency when expressed in terms...
韩玉; 金应华; 吴武清
2013-01-01
This paper addresses statistical testing for autoregressive conditional duration (ACD) models using an empirical likelihood method. We construct the log empirical likelihood ratio statistic for the parameters of the ACD model and show that the proposed statistic asymptotically follows a χ2-distribution. Numerical simulations demonstrate that the empirical likelihood method performs better than the quasi-likelihood method.
Kerner, Boris S.
2016-09-01
We have revealed general physical conditions for the maximization of network throughput at which free flow conditions are ensured, i.e., traffic breakdown cannot occur anywhere in a traffic or transportation network. A physical measure of the network, the network capacity, is introduced that characterizes general features of the network with respect to the maximization of network throughput. The network capacity also allows us to give a general proof of the deterioration of the traffic system that occurs when dynamic traffic assignment is performed in a network on the basis of the classical Wardrop user equilibrium (UE) and system optimum (SO).
ESTIMATES FOR THE MAXIMAL SINGULAR INTEGRALS WITHOUT DOUBLING CONDITION
阮建苗; 朱相荣
2005-01-01
It is shown that the maximal singular integral operator with kernels satisfying Hörmander's condition is of weak type (1,1) and bounded on $L^p$ $(1<p<\infty)$ for a measure that need not satisfy the doubling condition. Under doubling conditions, such estimates can be obtained by using Cotlar's inequality. That inequality is not applicable here, and it is notable that Cotlar's inequality may fail under Hörmander's condition.
DeJong, Stacey L.; Lang, Catherine E.
2012-01-01
Objectives: Although healthy individuals have less force production capacity during bilateral muscle contractions compared to unilateral efforts, emerging evidence suggests that certain aspects of paretic upper limb task performance after stroke may be enhanced by moving bilaterally instead of unilaterally. We investigated whether the bilateral movement condition affects grip force differently on the paretic side of people with post-stroke hemiparesis, compared to their non-paretic side and to both sides of healthy young adults. Methods: Within a single session, we compared: 1) maximal grip force during unilateral vs. bilateral contractions on each side, and 2) force contributed by each side during a 30% submaximal bilateral contraction. Results: Healthy controls produced less grip force in the bilateral condition, regardless of side (-2.4% difference), and similar findings were observed on the non-paretic side of people with hemiparesis (-4.5% difference). On the paretic side, however, maximal grip force was increased by the bilateral condition in most participants (+11.3% difference, on average). During submaximal bilateral contractions in each group, the two sides each contributed the same percentage of unilateral maximal force. Conclusions: The bilateral condition facilitates paretic limb grip force at maximal, but not submaximal, levels. Significance: In some people with post-stroke hemiparesis, the paretic limb may benefit from bilateral training with high force requirements.
Brendle, Joerg
2016-01-01
We show that, consistently, there can be maximal subtrees of P(ω) and P(ω)/fin of arbitrary regular uncountable size below the size of the continuum. We also show that there are no maximal subtrees of P(ω)/fin with countable levels. Our results answer several questions of Campero, Cancino, Hrusak, and Miranda.
Kerner, Boris S
2016-01-01
We show that the minimization of travel times in a network, as generally accepted in classical traffic and transportation theories, deteriorates the traffic system through a considerable increase in the probability of traffic breakdown in the network. We introduce a network characteristic, the minimum network capacity, which shows that rather than the minimization of travel times in the network, it is the minimization of the probability of traffic breakdown that maximizes the network throughput at which free flow persists in the whole network.
夏天; 李友光; 王学仁
2011-01-01
Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. This paper proposes some sufficient conditions for the weak consistency of the maximum quasi-likelihood estimator (MQLE) in QLNM, in which the moment condition is weaker than that required for the strong consistency of the MQLE in the existing literature.
K B Athreya
2009-09-01
It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class, and (ii) in the class of all pdfs $f$ that satisfy $\int f h_i \, d\mu = \lambda_i$ for $i=1,2,\ldots,k$, the maximizer of entropy is an $f_0$ that is proportional to $\exp(\sum_i c_i h_i)$ for some choice of $c_i$. An extension of this to a continuum of constraints and many examples are presented.
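A standard instance of (ii), stated here for concreteness (this worked example is not taken from the paper): with $h_1(x) = x$ and $h_2(x) = x^2$, constraining the first two moments,

\[
\int f\,dx = 1, \qquad \int x f\,dx = \mu, \qquad \int x^2 f\,dx = \mu^2 + \sigma^2,
\]

the entropy maximizer has the form $f_0(x) \propto \exp(c_1 x + c_2 x^2)$; matching the constraints forces $c_2 = -1/(2\sigma^2)$ and $c_1 = \mu/\sigma^2$, so $f_0$ is the $\mathcal{N}(\mu,\sigma^2)$ density.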
Cornelis, Marilyn C; Agrawal, Arpana; Cole, John W; Hansel, Nadia N; Barnes, Kathleen C; Beaty, Terri H; Bennett, Siiri N; Bierut, Laura J; Boerwinkle, Eric; Doheny, Kimberly F; Feenstra, Bjarke; Feingold, Eleanor; Fornage, Myriam; Haiman, Christopher A; Harris, Emily L; Hayes, M Geoffrey; Heit, John A; Hu, Frank B; Kang, Jae H; Laurie, Cathy C; Ling, Hua; Manolio, Teri A; Marazita, Mary L; Mathias, Rasika A; Mirel, Daniel B; Paschall, Justin; Pasquale, Louis R; Pugh, Elizabeth W; Rice, John P; Udren, Jenna; van Dam, Rob M; Wang, Xiaojing; Wiggs, Janey L; Williams, Kayleen; Yu, Kai
2010-05-01
Genome-wide association studies (GWAS) have emerged as powerful means for identifying genetic loci related to complex diseases. However, the role of environment and its potential to interact with key loci has not been adequately addressed in most GWAS. Networks of collaborative studies involving different study populations and multiple phenotypes provide a powerful approach for addressing the challenges in analysis and interpretation shared across studies. The Gene, Environment Association Studies (GENEVA) consortium was initiated to: identify genetic variants related to complex diseases; identify variations in gene-trait associations related to environmental exposures; and ensure rapid sharing of data through the database of Genotypes and Phenotypes. GENEVA consists of several academic institutions, including a coordinating center, two genotyping centers and 14 independently designed studies of various phenotypes, as well as several Institutes and Centers of the National Institutes of Health led by the National Human Genome Research Institute. Minimum detectable effect sizes include relative risks ranging from 1.24 to 1.57 and proportions of variance explained ranging from 0.0097 to 0.02. Given the large number of research participants (N>80,000), an important feature of GENEVA is harmonization of common variables, which allow analyses of additional traits. Environmental exposure information available from most studies also enables testing of gene-environment interactions. Facilitated by its sizeable infrastructure for promoting collaboration, GENEVA has established a unified framework for genotyping, data quality control, analysis and interpretation. By maximizing knowledge obtained through collaborative GWAS incorporating environmental exposure information, GENEVA aims to enhance our understanding of disease etiology, potentially identifying opportunities for intervention.
Park, W J; Ahn, J H
2011-10-01
The objective of this study was to find optimum microwave pretreatment conditions for methane production and methane yield in anaerobic sludge digestion. The sludge was pretreated using a laboratory-scale industrial microwave unit (2450 MHz frequency). The microwave temperature increase rate (TIR) (2.9-17.1 °C/min) and final temperature (FT) (52-108 °C) significantly affected solubilization, methane production, and methane yield. The solubilization degree (soluble chemical oxygen demand (COD)/total COD) in the pretreated sludge (3.3-14.7%) was clearly higher than that in the raw sludge (2.6%). Within the design boundaries, the optimum conditions for maximum methane production (2.02 L/L) were TIR = 9.1 °C/min and FT = 90 °C, and the optimum conditions for maximum methane yield (809 mL/g VS removed) were TIR = 7.1 °C/min and FT = 92 °C.
Xiaoliang Cheng
2013-12-01
Production of biofuels via enzymatic hydrolysis of complex plant polysaccharides is a subject of intense global interest. Microbial communities are known to express a wide range of enzymes necessary for the saccharification of lignocellulosic feedstocks and serve as a powerful reservoir for enzyme discovery. However, the growth temperatures and conditions that yield high cellulase activity vary widely, and the throughput needed to identify optimal conditions has been limited by slow handling and conventional analysis. A rapid method that uses small volumes of isolate culture to resolve specific enzyme activity is needed. In this work, a high-throughput nanostructure-initiator mass spectrometry (NIMS) based approach was developed for screening a thermophilic cellulolytic actinomycete, Thermobispora bispora, for β-glucosidase production under various growth conditions. Media that produced high β-glucosidase activity were I/S + glucose or microcrystalline cellulose (MCC), Medium 84 + rolled oats, and M9TE + MCC at 45 °C. Supernatants of cell cultures grown in M9TE + 1% MCC cleaved 2.5 times more substrate at 45 °C than at all other temperatures. While T. bispora is reported to grow optimally at 60 °C in Medium 84 + rolled oats and M9TE + 1% MCC, approximately 40% more conversion was observed at 45 °C. This high-throughput NIMS approach may provide an important tool in the discovery and characterization of enzymes from environmental microbes for industrial and biofuel applications.
Hou, Wencheng; Zhang, Wei; Chen, Guode; Luo, Yanping
2016-01-01
Melaleuca bracteata is a yellow-leaved tree belonging to the Melaleuca genus. Species from this genus are known to be good sources of natural antioxidants; for example, the "tea tree oil" derived from M. alternifolia is used in food processing to extend the shelf life of products. To determine whether M. bracteata contains novel natural antioxidants, the components of M. bracteata ethanol extracts were analyzed by gas chromatography-mass spectrometry. Total phenolic and flavonoid contents were extracted and the antioxidant activities of the extracts evaluated. Single-factor experiments, central composite rotatable design (CCRD), and response surface methodology (RSM) were used to optimize the extraction conditions for total phenolic content (TPC) and total flavonoid content (TFC). Ferric reducing power (FRP) and 1,1-diphenyl-2-picrylhydrazyl radical (DPPH·) scavenging capacity were used as the evaluation indices of antioxidant activity. The results showed that the main components of M. bracteata ethanol extracts are methyl eugenol (86.86%) and trans-cinnamic acid methyl ester (6.41%). The single-factor experiments revealed that the ethanol concentration is the key factor determining the TPC, TFC, FRP, and DPPH· scavenging capacity. RSM results indicated that the optimum for all four evaluation indices was achieved by extracting for 3.65 days at 53.26°C in 34.81% ethanol. Under these conditions, the TPC, TFC, FRP, and DPPH· scavenging capacity reached 88.6 ± 1.3 mg GAE/g DW, 19.4 ± 0.2 mg RE/g DW, 2.37 ± 0.01 mM Fe2+/g DW, and 86.0 ± 0.3%, respectively, values higher than those of the positive control, methyl eugenol (FRP 0.97 ± 0.02 mM, DPPH· scavenging capacity 58.6 ± 0.7%), at comparable concentrations. Therefore, the extracts of M. bracteata leaves have high antioxidant activity, which is not attributable solely to methyl eugenol. Further research could lead to the development of a potent new natural antioxidant.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector for mitigating the intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
Likelihood inference for unions of interacting discs
Møller, Jesper; Helisova, K.
2010-01-01
This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analysing Peter Diggle's heather data set, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.
Inference in HIV dynamics models via hierarchical likelihood
Commenges, D; Putter, H; Thiebaut, R
2010-01-01
HIV dynamical models are often based on nonlinear systems of ordinary differential equations (ODEs), which have no analytical solution. Introducing random effects in such models leads to very challenging nonlinear mixed-effects models. To avoid the numerical computation of the multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelihood estimators (MHLE) for the fixed effects, a result that may be relevant in a more general setting. The MHLE are slightly biased, but the bias can be made negligible by using a parametric bootstrap procedure. We propose an efficient algorithm for maximizing the h-likelihood. A simulation study, based on a classical HIV dynamical model, confirms the good properties of the MHLE. We apply the method to the analysis of a clinical trial.
Likelihood inference for a nonstationary fractional autoregressive model
Johansen, Søren; Ørregård Nielsen, Morten
2010-01-01
the conditional Gaussian likelihood, and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including d and b, and prove that they converge in distribution. We use the results to prove consistency of the maximum likelihood estimator for d, b in a large compact subset of {1/2...
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has seen an increased prevalence of irregularly spaced longitudinal data in the social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the stored data comes from a uniform binary source. Second, we determine the minimum amo...
MAXIMS VIOLATIONS IN LITERARY WORK
Widya Hanum Sari Pertiwi
2015-12-01
This study was a qualitative investigation focused on identifying the flouting of Gricean maxims, and the functions of that flouting, in the tales included in the collection of children's literature entitled My Giant Treasury of Stories and Rhymes. The objective of the study is to identify violations of the maxims of quantity, quality, relevance, and manner in the data sources and to analyze the use of flouting in the tales included in the book. A qualitative design using categorizing strategies, specifically a coding strategy, was applied. Thus the researcher, as the instrument in this investigation, selected the tales, read them, and gathered every item reflecting the violation of Gricean maxims based on the conditions for flouting maxims. On the basis of the data analysis, it was found that some utterances in the tales, in both narration and conversation, flout the four maxims of conversation, namely the maxims of quality, quantity, relevance, and manner. The researcher also found that the flouting of maxims has one basic function: to encourage the readers' imagination toward the tales. This basic function is developed by six other functions: (1) generating specific situations, (2) developing the plot, (3) enlivening the characters' utterances, (4) implicating messages, (5) indirectly characterizing characters, and (6) creating ambiguous settings. Keywords: children literature, tales, flouting maxims
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem for general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability, and the CML estimation sequence will converge to the true parameters.
2007-01-01
This paper addresses the problem of parameter estimation for multivariable stationary stochastic systems on the basis of observed output data. The main contribution is to employ the expectation-maximisation (EM) method as a means of computing the maximum-likelihood (ML) parameter estimates of the system. A closed form of the expectation for the studied system subject to Gaussian-distributed noise is derived, and the parameter choice that maximizes the expectation is also proposed. This results in an iterative algorithm for parameter estimation, and a robust implementation based on QR factorization and Cholesky factorization is also discussed. Moreover, algorithmic properties such as the non-decreasing likelihood value, necessary and sufficient conditions for the algorithm to arrive at a locally stationary parameter, the convergence rate, and the factors affecting the convergence rate are analyzed. A simulation study shows that the proposed algorithm has attractive properties such as numerical stability and avoidance of difficult initial conditions.
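The EM structure described here (closed-form E-step, deterministic M-step, non-decreasing likelihood) can be illustrated on a far simpler model. The sketch below fits a two-component Gaussian mixture rather than the paper's multivariable stochastic system; all names and settings are illustrative assumptions.

    import numpy as np
    from scipy.stats import norm

    def em_two_gaussians(x, n_iter=50):
        """EM for a two-component univariate Gaussian mixture.

        Each iteration cannot decrease the log-likelihood, mirroring the
        monotonicity property analyzed in the paper."""
        pi = 0.5
        mu = np.array([x.min(), x.max()])
        sd = np.array([x.std(), x.std()])
        for _ in range(n_iter):
            # E-step: posterior responsibility of component 1 for each point
            p1 = pi * norm.pdf(x, mu[0], sd[0])
            p2 = (1.0 - pi) * norm.pdf(x, mu[1], sd[1])
            r = p1 / (p1 + p2)
            loglik = np.sum(np.log(p1 + p2))     # monitor: non-decreasing
            # M-step: closed-form maximizers of the expected log-likelihood
            pi = r.mean()
            mu = np.array([np.average(x, weights=r),
                           np.average(x, weights=1.0 - r)])
            sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=r),
                          np.average((x - mu[1]) ** 2, weights=1.0 - r)])
        return pi, mu, sd, loglik

    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 700)])
    print(em_two_gaussians(x))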
曹添建; 凌能祥
2012-01-01
In this paper, empirical likelihood is used to construct confidence intervals for the conditional quantile, both without and with auxiliary information, when the response variable is missing at random (MAR). It is also shown that the asymptotic efficacy of the test does not decrease as information is added. This extends the corresponding results in the related literature.
Comparison of sinogram- and image-domain penalized-likelihood image reconstruction estimators.
Vargas, Phillip A; La Rivière, Patrick J
2011-08-01
In recent years, the authors and others have been exploring the use of penalized-likelihood sinogram-domain smoothing and restoration approaches for emission and transmission tomography. The motivation for this strategy was initially pragmatic: to provide a more computationally feasible alternative to fully iterative penalized-likelihood image reconstruction involving expensive backprojections and reprojections, while still obtaining some of the benefits of the statistical modeling employed in penalized-likelihood approaches. In this work, the authors seek to compare the two approaches in greater detail. The sinogram-domain strategy entails estimating the "ideal" line integrals needed for reconstruction of an activity or attenuation distribution from the set of noisy, potentially degraded tomographic measurements by maximizing a penalized-likelihood objective function. The objective function models the data statistics as well as any degradation that can be represented in the sinogram domain. The estimated line integrals can then be input to analytic reconstruction algorithms such as filtered backprojection (FBP). The authors compare this to fully iterative approaches maximizing similar objective functions. The authors present mathematical analyses based on so-called equivalent optimization problems that establish that the approaches can be made precisely equivalent under certain restrictive conditions. More significantly, by use of resolution-variance tradeoff studies, the authors show that they can yield very similar performance under more relaxed, realistic conditions. The sinogram- and image-domain approaches are equivalent under certain restrictive conditions and can perform very similarly under more relaxed conditions. The match is particularly good for fully sampled, high-resolution CT geometries. One limitation of the sinogram-domain approach relative to the image-domain approach is the difficulty of imposing additional constraints, such as image non-negativity.
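A hedged one-dimensional caricature of the sinogram-domain strategy (Poisson counts, quadratic roughness penalty, diagonally preconditioned gradient ascent); it is not the authors' estimator, and the penalty weight, step rule, and names are illustrative assumptions.

    import numpy as np

    def penalized_poisson_smooth(y, beta=5.0, n_iter=500):
        """Estimate smooth Poisson means from noisy counts y by maximizing
        sum_i [y_i*theta_i - exp(theta_i)] - beta*sum_i (theta_{i+1}-theta_i)^2
        over theta = log(mean)."""
        theta = np.log(np.maximum(y, 1.0))        # initialize at the data
        for _ in range(n_iter):
            grad = y - np.exp(theta)              # Poisson log-likelihood gradient
            d = np.diff(theta)
            grad[:-1] += 2.0 * beta * d           # roughness-penalty gradient
            grad[1:] -= 2.0 * beta * d
            theta += grad / (np.exp(theta) + 4.0 * beta)   # preconditioned ascent
        return np.exp(theta)                      # smoothed "ideal" line integrals

    rng = np.random.default_rng(3)
    truth = 100.0 * np.exp(-np.linspace(-2, 2, 200) ** 2)
    smoothed = penalized_poisson_smooth(rng.poisson(truth).astype(float))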
Augmented Likelihood Image Reconstruction.
Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M
2016-01-01
The presence of high-density objects remains an open problem in medical CT imaging. Data from projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim of reducing these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem, and the aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporarily appearing artifacts are reduced with a bilateral filter, and new projection values are calculated for use later in the reconstruction. A detailed evaluation, in cooperation with radiologists, is performed on software and hardware phantoms, as well as on clinically relevant patient data from subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.
Jensen Just
2004-01-01
A Gaussian mixture model with a finite number of components and correlated random effects is described. The ultimate objective is to model somatic cell count information in dairy cattle and to develop criteria for genetic selection against mastitis, an important udder disease. Parameter estimation is by maximum likelihood or by an extension of restricted maximum likelihood. A Monte Carlo expectation-maximization algorithm is used for this purpose. The expectation step is carried out using Gibbs sampling, whereas the maximization step is deterministic. Ranking rules based on the conditional probability of membership in a putative group of uninfected animals, given the somatic cell information, are discussed. Several extensions of the model are suggested.
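For concreteness, the conditional membership probability underlying such ranking rules takes the usual two-component mixture form (a generic statement, not the authors' exact model, which also carries correlated random effects):

\[
\Pr(\text{uninfected} \mid y) \;=\; \frac{\pi_0\,\phi(y;\mu_0,\sigma_0^2)}{\pi_0\,\phi(y;\mu_0,\sigma_0^2) + (1-\pi_0)\,\phi(y;\mu_1,\sigma_1^2)},
\]

where $\phi$ denotes the normal density, $\pi_0$ the prior proportion of uninfected animals, and $y$ the (suitably transformed) somatic cell count; animals are then ranked by this posterior probability.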
Likelihood analysis of the I(2) model
Johansen, Søren
1997-01-01
The I(2) model is defined as a submodel of the general vector autoregressive model by two reduced rank conditions. The model describes stochastic processes with stationary second differences. A parametrization is suggested which makes likelihood inference feasible. Consistency of the maximum like...
Community detection in networks: Modularity optimization and maximum likelihood are equivalent
Newman, M E J
2016-01-01
We demonstrate an exact equivalence between two widely used methods of community detection in networks: the method of modularity maximization in its generalized form, which incorporates a resolution parameter controlling the size of the communities discovered, and the method of maximum likelihood applied to the special case of the stochastic block model known as the planted partition model, in which all communities in a network are assumed to have statistically similar properties. Among other things, this equivalence provides a mathematically principled derivation of the modularity function, clarifies the conditions and assumptions of its use, and gives an explicit formula for the optimal value of the resolution parameter.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM) and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction -- important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Likelihood methods and classical burster repetition
Graziani, Carlo; Lamb, Donald Q
1995-01-01
We develop a likelihood methodology which can be used to search for evidence of burst repetition in the BATSE catalog and to study the properties of the repetition signal. We use a simplified model of burst repetition in which a number $N_{\rm r}$ of sources that repeat a fixed number of times $N_{\rm rep}$ are superposed upon a number $N_{\rm nr}$ of non-repeating sources. The instrument exposure is explicitly taken into account. By computing the likelihood for the data, we construct a probability distribution in parameter space that may be used to infer the probability that a repetition signal is present and to estimate the values of the repetition parameters. The likelihood function contains contributions from all the bursts, irrespective of the size of their positional errors -- the more uncertain a burst's position is, the less constraining is its contribution. Thus this approach makes maximal use of the data and avoids the ambiguities of sample selection associated with data cuts on error circle size. We...
Likelihood inference for unions of interacting discs
Møller, Jesper; Helisová, Katarina
To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.
Likelihood alarm displays. [for human operator
Sorkin, Robert D.; Kantowitz, Barry H.; Kantowitz, Susan C.
1988-01-01
In a likelihood alarm display (LAD) information about event likelihood is computed by an automated monitoring system and encoded into an alerting signal for the human operator. Operator performance within a dual-task paradigm was evaluated with two LADs: a color-coded visual alarm and a linguistically coded synthetic speech alarm. The operator's primary task was one of tracking; the secondary task was to monitor a four-element numerical display and determine whether the data arose from a 'signal' or 'no-signal' condition. A simulated 'intelligent' monitoring system alerted the operator to the likelihood of a signal. The results indicated that (1) automated monitoring systems can improve performance on primary and secondary tasks; (2) LADs can improve the allocation of attention among tasks and provide information integrated into operator decisions; and (3) LADs do not necessarily add to the operator's attentional load.
BOUNDEDNESS OF MAXIMAL SINGULAR INTEGRALS
CHEN JIECHENG; ZHU XIANGRONG
2005-01-01
The authors study singular integrals under the Hörmander condition and a measure not satisfying the doubling condition. First, if the corresponding singular integral is bounded from L2 to itself, it is proved that the maximal singular integral is bounded from L∞ to RBMO except that it is infinite μ-a.e. on Rd. A sufficient condition and a necessary condition such that the maximal singular integral is bounded from L2 to itself are also obtained. There is a small gap between the two conditions.
In all likelihood statistical modelling and inference using likelihood
Pawitan, Yudi
2001-01-01
Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates to complex studies that require generalised linear or semiparametric models.
Practical likelihood analysis for spatial generalized linear mixed models
Bonat, W. H.; Ribeiro, Paulo Justiniano
2016-01-01
We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches on two datasets from the literature: the Rhizoctonia root rot and the Rongelap datasets. The advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility of obtaining realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
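For reference, the Laplace approximation evaluates the marginal likelihood by a second-order expansion of the integrand around the mode of the random effects (a generic statement of the device, in notation of our own choosing rather than the authors'):

\[
L(\theta; y) \;=\; \int e^{\,h(u;\theta)}\,du \;\approx\; e^{\,h(\hat u;\theta)}\,(2\pi)^{q/2}\,\big|{-h''(\hat u;\theta)}\big|^{-1/2},
\]

where $h(u;\theta) = \log f(y \mid u,\theta) + \log f(u \mid \theta)$, $\hat u$ maximizes $h$ for the given $\theta$, $q$ is the dimension of the random effects $u$, and $|\cdot|$ denotes a determinant.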
Empirical likelihood estimation of discretely sampled processes of OU type
SUN ShuGuang; ZHANG XinSheng
2009-01-01
This paper presents an empirical likelihood estimation procedure for the parameters of a discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal; moreover, this estimator is shown to be asymptotically efficient under certain conditions. When the intensity parameter can be exactly recovered, we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of the proposed estimators.
Profit maximization mitigates competition
Dierker, Egbert; Grodal, Birgit
1996-01-01
We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in the case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution of profits among consumers fully into account, and partial equilibrium analysis suffices.
Analytic Methods for Cosmological Likelihoods
Taylor, A. N.; Kitching, T. D.
2010-01-01
We present general, analytic methods for Cosmological likelihood analysis and solve the "many-parameters" problem in Cosmology. Maxima are found by Newton's Method, while marginalization over nuisance parameters, and parameter errors and covariances are estimated by analytic marginalization of an arbitrary likelihood function with flat or Gaussian priors. We show that information about remaining parameters is preserved by marginalization. Marginalizing over all parameters, we find an analytic...
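A short illustration of the maximization step described in this abstract: a minimal sketch of Newton's method ascending to a likelihood maximum. The quadratic toy log-likelihood, its parameters, and the tolerances below are assumptions for illustration, not the authors' code.

```python
import numpy as np

def newton_maximize(grad, hess, theta0, tol=1e-10, max_iter=50):
    """Ascend to a stationary point of a log-likelihood via Newton steps."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hess(theta), grad(theta))
        theta = theta - step  # Newton update: theta - H^{-1} g
        if np.linalg.norm(step) < tol:
            break
    return theta

# Toy Gaussian log-likelihood in two parameters (assumed for illustration):
# logL(theta) = -0.5 * (theta - m)^T A (theta - m)
m = np.array([1.0, -2.0])
A = np.array([[3.0, 0.5], [0.5, 2.0]])
grad = lambda t: -A @ (t - m)
hess = lambda t: -A
print(newton_maximize(grad, hess, np.zeros(2)))  # converges to m in one step
```

For a quadratic (Gaussian) log-likelihood the Newton step is exact, which is why analytic marginalization with Gaussian priors combines naturally with this optimizer.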
Maximum likelihood estimation for semiparametric density ratio model.
Diao, Guoqing; Ning, Jing; Qin, Jing
2012-06-27
In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.
Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model
Johansen, Søren; Nielsen, Morten Ørregaard
2012-01-01
We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters...... likelihood estimators. To this end we prove weak convergence of the conditional likelihood as a continuous stochastic process in the parameters when errors are i.i.d. with suitable moment conditions and initial values are bounded. When the limit is deterministic this implies uniform convergence in probability of the conditional likelihood function. If the true value b0 > 1/2, we prove that the limit distribution of (ß...
Tapered composite likelihood for spatial max-stable models
Sang, Huiyan
2014-05-01
Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
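The weighting idea in this abstract is easy to sketch. Below is a minimal tapered pairwise composite log-likelihood: pairs farther apart than the taper range get weight zero. The paper uses bivariate max-stable densities; here a zero-mean bivariate Gaussian pair density with an exponential correlation model is an assumed stand-in so that only the tapering logic is illustrated.

```python
import numpy as np
from scipy.stats import multivariate_normal

def tapered_pairwise_loglik(theta, data, coords, taper_range):
    """Sum weighted bivariate log-densities over pairs within the taper range."""
    sigma2, phi = theta  # variance and correlation range (assumed model)
    n = coords.shape[0]
    ll = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(coords[i] - coords[j])
            if d > taper_range:      # taper weight w_ij = 0: skip distant pair
                continue
            rho = np.exp(-d / phi)   # exponential correlation (assumption)
            cov = sigma2 * np.array([[1.0, rho], [rho, 1.0]])
            ll += multivariate_normal.logpdf(data[[i, j]], cov=cov)
    return ll

rng = np.random.default_rng(0)
coords = rng.uniform(0, 1, size=(15, 2))
data = rng.normal(size=15)
print(tapered_pairwise_loglik((1.0, 0.3), data, coords, taper_range=0.4))
```

Shrinking the taper range removes pairs from the double loop, which is the source of the computational savings the abstract describes.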
Shang, Yilun
2016-08-01
How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.
Obtaining reliable Likelihood Ratio tests from simulated likelihood functions
Andersen, Laura Mørch
It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed...
Recent developments in maximum likelihood estimation of MTMM models for categorical data
Minjeong Jeon
2014-04-01
Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: variational maximization-maximization, alternating imputation posterior, and Monte Carlo local likelihood. Each method is briefly described and its applicability for MTMM models with categorical data is discussed. An illustration is provided using an empirical example.
Maximally incompatible quantum observables
Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)
2014-05-01
The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.
Operational Modal Analysis using Expectation Maximization Algorithm
Cara Cañas, Francisco Javier; Carpio Huertas, Jaime; Juan Ruiz, Jesús; Alarcón Álvarez, Enrique
2011-01-01
This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the benchmark structure have been estimated applying the proposed identification method...
INTERACTING MULTIPLE MODEL ALGORITHM BASED ON JOINT LIKELIHOOD ESTIMATION
Sun Jie; Jiang Chaoshu; Chen Zhuming; Zhang Wei
2011-01-01
A novel approach is proposed for the estimation of likelihood on the Interacting Multiple-Model (IMM) filter. In this approach, the actual innovation, based on a mismatched model, can be formulated as the sum of the theoretical innovation based on a matched model and the distance between the matched and mismatched models, whose probability distributions are known. The joint likelihood of the innovation sequence can be estimated by convolution of the two known probability density functions. The likelihood of tracking models can be calculated by the conditional probability formula. Compared with the conventional likelihood estimation method, the proposed method improves the estimation accuracy of the likelihood and the robustness of IMM, especially when a maneuver occurs.
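The convolution step in this abstract can be checked numerically in a few lines. In the sketch below, the density of a sum of two independent terms (theoretical innovation plus model distance) is obtained by convolving their densities; the two Gaussian component densities are assumed purely for illustration.

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
p_innov = norm.pdf(x, 0.0, 1.0)   # theoretical innovation density
p_dist = norm.pdf(x, 0.0, 0.5)    # model-distance density (assumed known)

# Numerical convolution gives the density of the actual innovation:
p_actual = np.convolve(p_innov, p_dist, mode="same") * dx

# Sanity check against the exact N(0, 1^2 + 0.5^2) density of the sum:
print(np.abs(p_actual - norm.pdf(x, 0.0, np.sqrt(1.25))).max())  # near zero
```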
Lessons about likelihood functions from nuclear physics
Hanson, Kenneth M
2007-01-01
Least-squares data analysis is based on the assumption that the normal (Gaussian) distribution appropriately characterizes the likelihood, that is, the conditional probability of each measurement d, given a measured quantity y, p(d | y). On the other hand, there is ample evidence in nuclear physics of significant disagreements among measurements, which are inconsistent with the normal distribution, given their stated uncertainties. In this study the histories of 99 measurements of the lifetimes of five elementary particles are examined to determine what can be inferred about the distribution of their values relative to their stated uncertainties. Taken as a whole, the variations in the data are somewhat larger than their quoted uncertainties would indicate. These data strongly support using a Student t distribution for the likelihood function instead of a normal. The most probable value for the order of the t distribution is 2.6 +/- 0.9. It is shown that analyses based on long-tailed t-distribution likelihood...
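The practical point of this study is easy to demonstrate: a long-tailed Student t likelihood penalizes a discrepant measurement far less than a normal likelihood does. The data values below are invented for illustration; the order 2.6 is taken from the abstract.

```python
import numpy as np
from scipy.stats import norm, t

d = np.array([9.8, 10.1, 10.0, 9.9, 13.5])   # one discrepant measurement
sigma = 0.2                                   # stated uncertainties (assumed)
y = 10.0                                      # hypothesized true value

logL_normal = norm.logpdf(d, loc=y, scale=sigma).sum()
logL_t = t.logpdf(d, df=2.6, loc=y, scale=sigma).sum()  # order ~2.6 per the study
print(logL_normal, logL_t)  # the t likelihood is far less damaged by the outlier
```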
Parker, Andrew M.; Wandi Bruine de Bruin; Baruch Fischhoff
2007-01-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
Sieve likelihood ratio inference on general parameter space
SHEN Xiaotong; SHI Jian
2005-01-01
In this paper, a theory on sieve likelihood ratio inference on general parameter spaces (including infinite-dimensional ones) is studied. Under fairly general regularity conditions, the sieve log-likelihood ratio statistic is proved to be asymptotically χ² distributed, which can be viewed as a generalization of the well-known Wilks' theorem. As an example, a semiparametric partial linear model is investigated.
Maximal inequalities for demimartingales and their applications
WANG XueJun; HU ShuHe
2009-01-01
In this paper, we establish some maximal inequalities for demimartingales which generalize and improve the results of Christofides. The maximal inequalities for demimartingales are used as key inequalities to establish other results, including Doob's type maximal inequality for demimartingales, strong laws of large numbers and growth rates for demimartingales and associated random variables. Finally, we give an equivalent condition for uniform integrability of demisubmartingales.
Ming Yi WANG; Guo ZHAO
2005-01-01
A right R-module E over a ring R is said to be maximally injective in case for any maximal right ideal m of R, every R-homomorphism f : m → E can be extended to an R-homomorphism f' : R → E. In this paper, we first construct an example to show that maximal injectivity is a proper generalization of injectivity. Then we prove that any right R-module over a left perfect ring R is maximally injective if and only if it is injective. We also give a partial affirmative answer to Faith's conjecture by further investigating the property of maximally injective rings. Finally, we get an approximation to Faith's conjecture, which asserts that every injective right R-module over any left perfect right self-injective ring R is the injective hull of a projective submodule.
Brüstle, Thomas; Pérotin, Matthieu
2012-01-01
Maximal green sequences are particular sequences of quiver mutations which were introduced by Keller in the context of quantum dilogarithm identities and independently by Cecotti-Cordova-Vafa in the context of supersymmetric gauge theory. Our aim is to initiate a systematic study of these sequences from a combinatorial point of view. Interpreting maximal green sequences as paths in various natural posets arising in representation theory, we prove the finiteness of the number of maximal green sequences for cluster finite quivers, affine quivers and acyclic quivers with at most three vertices. We also give results concerning the possible numbers and lengths of these maximal green sequences. Finally we describe an algorithm for computing maximal green sequences for arbitrary valued quivers which we used to obtain numerous explicit examples that we present.
Anonymous
2008-01-01
Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimator (MQLE) in QLNM is obtained. In an important case, this rate is O(n^{-1/2}(log log n)^{1/2}), which is just the rate given by the law of the iterated logarithm (LIL) for partial sums of i.i.d. variables and thus cannot be improved.
Song, M K; Kim, H W; Rhee, M S
2016-06-01
We previously reported that a combination of heat and relative humidity (RH) had a marked bactericidal effect on Escherichia coli O157:H7 on radish seeds. Here, response surface methodology with a Box-Behnken design was used to build a model to predict reductions in E. coli O157:H7 populations based on three independent variables: heating temperature (55 °C, 60 °C, or 65 °C), RH (40%, 60%, and 80%), and holding time (8, 15, or 22 h). Optimum treatment conditions were selected using a desirability function. The predictive model for microbial reduction had a high regression coefficient (R(2) = 0.97), and the accuracy of the model was verified using validation data (R(2) = 0.95). Among the three variables examined, heating temperature had the greatest effect on microbial reduction and seed germination. The optimum conditions for microbial reduction (6.6 log reduction) determined by ridge analysis were as follows: 64.5 °C and 63.2% RH for 17.7 h. However, when both microbial reduction and germination rate were taken into consideration, the desirability function yielded optimal conditions of 65 °C and 40% RH for 8 h (6.6 log reduction in the bacterial population; 94.4% of seeds germinated). This study provides comprehensive data that improve our understanding of the effects of heating temperature, RH, and holding time on the E. coli O157:H7 population on radish seeds. Radish seeds can be exposed to these conditions before sprouting, which greatly increases the microbiological safety of the products.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
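The background-only versus background-plus-source comparison described here reduces, in its simplest form, to a Poisson likelihood ratio. The sketch below is schematic, not the Sherpa/MLE tool itself; the counts and background rate are invented numbers.

```python
from scipy.stats import poisson

counts = 9   # observed photons in the candidate source region (assumed)
b = 2.0      # expected background counts from the background fit (assumed)

# ML source rate under the background-plus-source hypothesis, with a
# non-negativity floor: s_hat = max(0, counts - b).
s_hat = max(0.0, counts - b)
logL0 = poisson.logpmf(counts, b)          # background-only hypothesis
logL1 = poisson.logpmf(counts, b + s_hat)  # background-plus-source hypothesis
TS = 2 * (logL1 - logL0)                   # likelihood-ratio test statistic
print(TS)  # larger TS => the detection is less likely a background fluctuation
```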
Likelihood estimators for multivariate extremes
Huser, Raphaël
2015-11-17
The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.
Likelihood inference for a nonstationary fractional autoregressive model
Johansen, Søren; Nielsen, Morten Ørregaard
This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d-b, where d ≥ b > 1/2 are parameters to be estimated. We model the data X1,...,XT given the initial values X0-n, n = 0, 1,..., under the assumption that the errors are i.i.d. Gaussian. We consider the likelihood and its derivatives as stochastic processes in the parameters, and prove that they converge in distribution when the errors are i.i.d. with suitable moment conditions and the initial values are bounded. We use this to prove existence and consistency of the local likelihood estimator, and to find the asymptotic distribution of the estimators and the likelihood ratio test of the associated fractional unit root hypothesis, which contains the fractional Brownian motion of type II.
Posterior distributions for likelihood ratios in forensic science.
van den Hout, Ardo; Alberink, Ivo
2016-09-01
Evaluation of evidence in forensic science is discussed using posterior distributions for likelihood ratios. Instead of eliminating the uncertainty by integrating (Bayes factor) or by conditioning on parameter values, uncertainty in the likelihood ratio is retained by parameter uncertainty derived from posterior distributions. A posterior distribution for a likelihood ratio can be summarised by the median and credible intervals. Using the posterior mean of the distribution is not recommended. An analysis of forensic data for body height estimation is undertaken. The posterior likelihood approach has been criticised both theoretically and with respect to applicability. This paper addresses the latter and illustrates an interesting application area.
Section 9: Ground Water - Likelihood of Release
HRS training. The ground water pathway likelihood of release factor category reflects the likelihood that there has been, or will be, a release of hazardous substances in any of the aquifers underlying the site.
Phylogenetic estimation with partial likelihood tensors
Sumner, J G
2008-01-01
We present an alternative method for calculating likelihoods in molecular phylogenetics. Our method is based on partial likelihood tensors, which are generalizations of partial likelihood vectors, as used in Felsenstein's approach. Exploiting a lexicographic sorting and partial likelihood tensors, it is possible to obtain significant computational savings. We show this on a range of simulated data by enumerating all numerical calculations that are required by our method and the standard approach.
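Since this entry builds on Felsenstein's partial likelihood vectors, a minimal worked example of those vectors may help. The sketch below computes one site likelihood on a two-leaf tree under a symmetric two-state model; the branch lengths and root distribution are made-up assumptions, and the tensor generalization in the paper is not shown.

```python
import numpy as np

def transition(t, rate=1.0):
    """2-state symmetric model: P(same) = 0.5 + 0.5*exp(-2*rate*t)."""
    p_same = 0.5 + 0.5 * np.exp(-2 * rate * t)
    return np.array([[p_same, 1 - p_same], [1 - p_same, p_same]])

# Leaf partial likelihood vectors: indicators of the observed states.
L_left = np.array([1.0, 0.0])   # leaf observed in state 0
L_right = np.array([0.0, 1.0])  # leaf observed in state 1

# The root partial likelihood combines the children through their branches.
L_root = (transition(0.3) @ L_left) * (transition(0.7) @ L_right)
site_likelihood = np.array([0.5, 0.5]) @ L_root  # uniform root distribution
print(site_likelihood)
```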
Rudiger Bubner
1998-12-01
Even though the theory of maxims is not at the center of Kant's ethics, it is the unavoidable basis of the formulation of the categorical imperative. Kant leans on the transmitted representations of modern moral theory. During the last decades, the notion of maxims has deserved more attention, due to the philosophy of language's debates on rules, and due to action theory's interest in this notion. I hereby briefly expound my views in these discussions.
Workshop on Likelihoods for the LHC Searches
2013-01-01
The primary goal of this 3-day workshop is to educate the LHC community about the scientific utility of likelihoods. We shall do so by describing and discussing several real-world examples of the use of likelihoods, including a one-day in-depth examination of likelihoods in the Higgs boson studies by ATLAS and CMS.
1979-01-01
The computer program Linear SCIDNT, which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data, is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Empirical likelihood method in survival analysis
Zhou, Mai
2015-01-01
Add the Empirical Likelihood to Your Nonparametric Toolbox. Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method for right-censored survival data. The author uses R for calculating empirical likelihood and includes many worked-out examples with the associated R code. The datasets and code are available for download on his website and CRAN. The book focuses on all the standard survival analysis topics treated with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empirical...
Fast inference in generalized linear models via expected log-likelihoods.
Ramirez, Alexandro D; Paninski, Liam
2014-04-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
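The core substitution in this paper can be sketched concretely for a Poisson GLM with an exponential nonlinearity: the data term is kept, while the covariate sum over exp(x_i^T w) is replaced by N times its expectation, which has a closed form for Gaussian covariates. The covariate distribution, dimensions, and true weights below are assumptions for illustration.

```python
import numpy as np

def exact_loglik(w, X, y):
    eta = X @ w
    return y @ eta - np.exp(eta).sum()   # up to a w-independent constant

def expected_loglik(w, X, y, C):
    eta = X @ w
    # E[exp(x^T w)] = exp(w^T C w / 2) when x ~ N(0, C) (assumed design)
    return y @ eta - X.shape[0] * np.exp(0.5 * w @ C @ w)

rng = np.random.default_rng(0)
C = np.eye(3) * 0.1
X = rng.multivariate_normal(np.zeros(3), C, size=5000)
w_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(X @ w_true))
print(exact_loglik(w_true, X, y), expected_loglik(w_true, X, y, C))  # close
```

The expected version replaces an O(N) sum inside every optimization step with a single quadratic form, which is where the reported speedups come from.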
Applications of expectation maximization algorithm for coherent optical communication
Carvalho, L.; Oliveira, J.; Zibar, Darko
2014-01-01
In this invited paper, we present powerful statistical signal processing methods, used by the machine learning community, and link them to current problems in optical communication. In particular, we look into iterative maximum likelihood parameter estimation based on the expectation maximization algorithm...
Joint Iterative Carrier Synchronization and Signal Detection Employing Expectation Maximization
Zibar, Darko; de Carvalho, Luis Henrique Hecker; Estaran Tolosa, Jose Manuel
2014-01-01
In this paper, joint estimation of carrier frequency, phase, signal means and noise variance, in a maximum likelihood sense, is performed iteratively by employing expectation maximization. The parameter estimation is soft decision driven and allows joint carrier synchronization and data detection...
MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Bagchi, Arunabha; ten Brummelhuis, P.G.J.; ten Brummelhuis, P.G.J.
1990-01-01
A method to estimate simultaneously states and parameters of a discrete-time hyperbolic system with noisy boundary conditions is presented. This method is based on maximization of a likelihood (ML) function. The ML function leads to a two-point boundary value problem of considerable complexity.
Maximal Hypersurfaces in Spacetimes with Translational Symmetry
Bulawa, Andrew
2016-01-01
We consider four-dimensional vacuum spacetimes which admit a free isometric spacelike R-action. Taking a quotient with respect to the R-action produces a three-dimensional quotient spacetime. We establish several results regarding maximal hypersurfaces (spacelike hypersurfaces of zero mean curvature) in quotient spacetimes. First, we show that complete noncompact maximal hypersurfaces must either be flat cylinders S^1 x R or conformal to the Euclidean plane. Second, we establish a positive mass theorem for certain maximal hypersurfaces. Finally, while it is meaningful to use a bounded lapse when adopting the maximal hypersurface gauge condition in the four-dimensional (asymptotically flat) setting, it is shown here that nontrivial quotient spacetimes admit the maximal hypersurface gauge only with an unbounded lapse.
Louis de Grange
2010-09-01
Maximum entropy models are often used to describe supply and demand behavior in urban transportation and land use systems. However, they have been criticized for not representing behavioral rules of system agents and because their parameters seem to adjust only to modeler-imposed constraints. In response, it is demonstrated that the solution to the entropy maximization problem with linear constraints is a multinomial logit model whose parameters solve the likelihood maximization problem of this probabilistic model. But this result neither provides a microeconomic interpretation of the entropy maximization problem nor explains the equivalence of these two optimization problems. This work demonstrates that an analysis of the dual of the entropy maximization problem yields two useful alternative explanations of its solution. The first shows that the maximum entropy estimators of the multinomial logit model parameters reproduce rational user behavior, while the second shows that the likelihood maximization problem for multinomial logit models is the dual of the entropy maximization problem.
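The primal-dual equivalence stated here admits a small numerical check: the entropy-maximizing distribution under a linear cost constraint has the multinomial logit (softmax) form p_i ∝ exp(-λ c_i). The costs and constraint level below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize, brentq

c = np.array([1.0, 2.0, 4.0])   # alternative costs (assumed)
C_bar = 2.0                     # required mean cost (assumed)

# Primal: maximize entropy subject to sum(p) = 1 and p @ c = C_bar.
neg_entropy = lambda p: np.sum(p * np.log(p))
cons = ({"type": "eq", "fun": lambda p: p.sum() - 1},
        {"type": "eq", "fun": lambda p: p @ c - C_bar})
res = minimize(neg_entropy, np.ones(3) / 3, constraints=cons,
               bounds=[(1e-9, 1)] * 3)

# Dual/logit form: fit the multiplier so the softmax reproduces the mean cost.
def mean_cost(lam):
    p = np.exp(-lam * c)
    p /= p.sum()
    return p @ c

lam = brentq(lambda l: mean_cost(l) - C_bar, -10, 10)
p_logit = np.exp(-lam * c) / np.exp(-lam * c).sum()
print(res.x, p_logit)  # the two solutions coincide up to solver tolerance
```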
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational...
The Laplace Likelihood Ratio Test for Heteroscedasticity
J. Martin van Zyl
2011-01-01
It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test, assuming normality, is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data is partitioned into groups.
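A minimal sketch of such a test, under the simplifying assumptions of known zero location and two groups (both assumptions mine, not necessarily the paper's setup): the Laplace scale MLE is the mean absolute deviation, so the statistic compares a pooled scale against group-specific scales.

```python
import numpy as np
from scipy.stats import chi2

def laplace_lr_stat(groups):
    """2*(logL with group scales - logL with one pooled scale), location 0."""
    x_all = np.concatenate(groups)
    b_pooled = np.mean(np.abs(x_all))
    # Maximized Laplace logL is -n*log(2b) - n, so the constants cancel:
    return 2 * (len(x_all) * np.log(b_pooled)
                - sum(len(g) * np.log(np.mean(np.abs(g))) for g in groups))

rng = np.random.default_rng(1)
g1 = rng.laplace(scale=1.0, size=200)
g2 = rng.laplace(scale=2.0, size=200)   # inflated scale: heteroscedastic
stat = laplace_lr_stat([g1, g2])
print(stat, chi2.sf(stat, df=1))        # small p-value flags heteroscedasticity
```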
Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model
Johansen, Søren; Nielsen, Morten Ørregaard
We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X(t) to be fractional of order d and cofractional of order d-b; that is, there exist vectors ß for which ß...
Maximum likelihood estimation for life distributions with competing failure modes
Sidik, S. M.
1979-01-01
The general model for competing failure modes is presented, assuming that the location parameters for each mode are expressible as linear functions of the stress variables and that the failure modes act independently. The general form of the likelihood function and the likelihood equations are derived for the extreme value distributions, and solving these equations using nonlinear least squares techniques provides an estimate of the asymptotic covariance matrix of the estimators. Monte Carlo results indicate that, under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
On Groups Whose Proper Quotients Are FCc-Groups with the Maximal Condition
张志让; 邢世奇
2012-01-01
An FCc-group satisfying the maximal condition is called an (FCc)max-group. A group G is said to be a just non-(FCc)max-group if all of its proper quotients are (FCc)max-groups but G itself is not. The main purpose of this article is to give a description of the structure of just non-(FCc)max-groups by making use of the results on just non-FNc-groups; finiteness conditions on the upper and lower central series of groups are also generalized.
Are all maximally entangled states pure?
Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.
2005-10-01
We study, through several entanglement monotones, whether all maximally entangled states are pure. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized of any other system.
Maximum likelihood tuning of a vehicle motion filter
Trankle, Thomas L.; Rabin, Uri H.
1990-01-01
This paper describes the use of maximum likelihood estimation of unknown parameters appearing in a nonlinear vehicle motion filter. The filter uses the kinematic equations of motion of a rigid body in motion over a spherical earth. The nine states of the filter represent vehicle velocity, attitude, and position. The inputs to the filter are three components of translational acceleration and three components of angular rate. Measurements used to update states include air data, altitude, position, and attitude. Expressions are derived for the elements of filter matrices needed to use air data in a body-fixed frame with filter states expressed in a geographic frame. An expression for the likelihood function of the data is given, along with accurate approximations for the function's gradient and Hessian with respect to unknown parameters. These are used by a numerical quasi-Newton algorithm for maximizing the likelihood function of the data in order to estimate the unknown parameters. The parameter estimation algorithm is useful for processing data from aircraft flight tests or for tuning inertial navigation systems.
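The quasi-Newton maximization step described above looks roughly like the following sketch, which fits a single noise parameter by minimizing the negative log-likelihood with BFGS. The scalar measurement model is an invented stand-in for the full motion filter.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 0.7, size=500)   # filter innovations (simulated)

def neg_loglik(log_sigma):
    """Gaussian negative log-likelihood; sigma parameterized to stay positive."""
    s2 = np.exp(2 * log_sigma[0])
    return 0.5 * (len(residuals) * np.log(2 * np.pi * s2)
                  + np.sum(residuals**2) / s2)

res = minimize(neg_loglik, x0=[0.0], method="BFGS")  # quasi-Newton search
print(np.exp(res.x[0]))                              # recovers sigma ≈ 0.7
```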
Cherchi, Elisabetta; Guevara, Cristian
2012-01-01
In a series of Monte Carlo experiments, evidence suggested four main conclusions: (a) efficiency increased when the true variance-covariance matrix became diagonal, (b) EM was more robust to the curse of dimensionality in regard to efficiency and estimation time, (c) EM did not recover the true scale...
Janusz Brzozowski
2014-05-01
The atoms of a regular language are non-empty intersections of complemented and uncomplemented quotients of the language. Tight upper bounds on the number of atoms of a language and on the quotient complexities of atoms are known. We introduce a new class of regular languages, called the maximally atomic languages, consisting of all languages meeting these bounds. We prove the following result: if L is a regular language of quotient complexity n and G is the subgroup of permutations in the transition semigroup T of the minimal DFA of L, then L is maximally atomic if and only if G is transitive on k-subsets of 1,...,n for 0 <= k <= n and T contains a transformation of rank n-1.
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline with...... to the saline/oil emulsion. Placing of the challenge patches affected the response, as simultaneous chlorocresol challenge on the flank located 2 cm closer to the abdomen than the usual challenge site gave decreased reactions....
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid: a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Then the larger values of this function will have the higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the TSP (Traveling Salesman Problem).
Likelihood analysis of earthquake focal mechanism distributions
Kagan, Y Y
2014-01-01
In our earlier paper we discussed forecasts of earthquake focal mechanisms and ways to test forecast efficiency. Several verification methods were proposed, but they were based on ad-hoc, empirical assumptions, so their performance is questionable. In this work we apply a conventional likelihood method to measure the skill of a forecast. The advantage of such an approach is that earthquake rate prediction can in principle be adequately combined with focal mechanism forecasts, if both are based on likelihood scores, resulting in a general forecast optimization. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random. For double-couple source orientation the random probability distribution function is not uniform, which complicates the calculation of the likelihood value. To better understand the resulting complexities we calculate the information (likelihood) score for two rota...
Maximal elements of non necessarily acyclic binary relations
Josep Enric Peris Ferrando; Begoña Subiza Martínez
1992-01-01
The existence of maximal elements for binary preference relations is analyzed without imposing transitivity or convexity conditions. From each preference relation a new acyclic relation is defined in such a way that some maximal elements of this new relation characterize maximal elements of the original one. The result covers the case whereby the relation is acyclic.
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Daigle Bernie J
2012-05-01
Background: A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results: We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.
Asymptotic behavior of the likelihood function of covariance matrices of spatial Gaussian processes
Zimmermann, Ralf
2010-01-01
The covariance structure of spatial Gaussian predictors (aka Kriging predictors) is generally modeled by parameterized covariance functions; the associated hyperparameters in turn are estimated via the method of maximum likelihood. In this work, the asymptotic behavior of the maximum likelihood......: optimally trained nondegenerate spatial Gaussian processes cannot feature arbitrary ill-conditioned correlation matrices. The implication of this theorem on Kriging hyperparameter optimization is exposed. A nonartificial example is presented, where maximum likelihood-based Kriging model training...
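The link between the Kriging likelihood and correlation-matrix conditioning can be made concrete. The sketch below evaluates the Gaussian process log marginal likelihood for a squared-exponential correlation model and reports the condition number of the correlation matrix alongside; the 1-D design, zero mean, unit variance, and nugget value are all assumptions for illustration.

```python
import numpy as np

def gp_loglik_and_cond(theta, x, y):
    """Log marginal likelihood (zero mean, unit variance) and cond(R)."""
    R = np.exp(-theta * (x[:, None] - x[None, :]) ** 2)
    R += 1e-8 * np.eye(len(x))          # tiny nugget for numerical stability
    L = np.linalg.cholesky(R)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    ll = (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
          - 0.5 * len(x) * np.log(2 * np.pi))
    return ll, np.linalg.cond(R)

x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x)
for theta in (100.0, 1.0, 0.01):        # smaller theta => flatter correlation
    print(theta, gp_loglik_and_cond(theta, x, y))  # cond(R) blows up as theta -> 0
```

This mirrors the theorem's message: pushing the hyperparameters toward a nearly singular correlation matrix is visible directly in the likelihood evaluation.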
Social group utility maximization
Gong, Xiaowen; Yang, Lei; Zhang, Junshan
2014-01-01
This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy.This brief also investigates SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b
Brandes, U; Gaertler, M; Goerke, R; Hoefer, M; Nikoloski, Z; Wagner, D
2006-01-01
Several algorithms have been proposed to compute partitions of networks into communities that score high on a graph clustering index called modularity. While publications on these algorithms typically contain experimental evaluations to emphasize the plausibility of results, none of these algorithms has been shown to actually compute optimal partitions. We here settle the unknown complexity status of modularity maximization by showing that the corresponding decision version is NP-complete in the strong sense. As a consequence, any efficient, i.e. polynomial-time, algorithm is only heuristic and yields suboptimal partitions on many instances.
杜智涛; 霍国庆; 刘丽红
2011-01-01
National information resources (NIR) are a strategic resource for promoting national competitiveness. The content system of NIR consists of five layers: the physical layer, the metadata layer, the national basic data layer, the classified information layer, and the national decision information layer. This paper discusses the marginal conditions for maximizing the value of NIR, depicts the marginal cost and marginal benefit curves of NIR, and establishes an investment optimization decision model for NIR based on goal programming.
$\\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs
van de Geer, Sara
2012-01-01
We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than sample size $n$ but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.
Natural selection and the maximization of fitness.
Birch, Jonathan
2016-08-01
The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the 'new' interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits.
Maximizing without difficulty: A modified maximizing scale and its correlates
Linda Lai
2010-01-01
This article presents several studies that replicate and extend previous research on maximizing. A modified scale for measuring individual maximizing tendency is introduced. The scale has adequate psychometric properties and reflects maximizers' aspirations for high standards and their preference for extensive alternative search, but not the decision difficulty aspect included in several previous studies. Based on this scale, maximizing is positively correlated with optimism, need for cogniti...
Dimension-independent likelihood-informed MCMC
Cui, Tiangang
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
Likelihood Analysis of Supersymmetric SU(5) GUTs
Bagnaschi, E.; Sakurai, K.; Borsato, M.; Buchmueller, O.; Cavanaugh, R.; Chobanova, V.; Citron, M.; De Roeck, A.; Dolan, M.J.; Ellis, J.R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Lucio, M.; Martínez Santos, D.; Olive, K.A.; Richards, A.; de Vries, K.J.; Weiglein, G.
2016-01-01
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...
Likelihood Analysis of Supersymmetric SU(5) GUTs
Bagnaschi, E. [DESY; Costa, J. C. [Imperial Coll., London; Sakurai, K. [Warsaw U.; Borsato, M. [Santiago de Compostela U.; Buchmueller, O. [Imperial Coll., London; Cavanaugh, R. [Illinois U., Chicago; Chobanova, V. [Santiago de Compostela U.; Citron, M. [Imperial Coll., London; De Roeck, A. [Antwerp U.; Dolan, M. J. [Melbourne U.; Ellis, J. R. [King's Coll. London; Flächer, H. [Bristol U.; Heinemeyer, S. [Madrid, IFT; Isidori, G. [Zurich U.; Lucio, M. [Santiago de Compostela U.; Martínez Santos, D. [Santiago de Compostela U.; Olive, K. A. [Minnesota U., Theor. Phys. Inst.; Richards, A. [Imperial Coll., London; de Vries, K. J. [Imperial Coll., London; Weiglein, G. [DESY
2016-10-31
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel ${\tilde u_R}/{\tilde c_R} - \tilde{\chi}^0_1$ coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of $\tilde \nu_\tau$ coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.
Improved Likelihood Ratio Tests for Cointegration Rank in the VAR Model
Boswijk, H. Peter; Jansson, Michael; Nielsen, Morten Ørregaard
We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally...... The power gains relative to existing tests are due to two factors. First, instead of basing our tests on the conditional (with respect to the initial observations) likelihood, we follow the recent unit root literature and base our tests on the full likelihood as in, e.g., Elliott, Rothenberg, and Stock......
Introductory statistical inference with the likelihood function
Rohde, Charles A
2014-01-01
This textbook covers the fundamentals of statistical inference and statistical theory including Bayesian and frequentist approaches and methodology possible without excessive emphasis on the underlying mathematics. This book is about some of the basic principles of statistics that are necessary to understand and evaluate methods for analyzing complex data sets. The likelihood function is used for pure likelihood inference throughout the book. There is also coverage of severity and finite population sampling. The material was developed from an introductory statistical theory course taught by the author at the Johns Hopkins University’s Department of Biostatistics. Students and instructors in public health programs will benefit from the likelihood modeling approach that is used throughout the text. This will also appeal to epidemiologists and psychometricians. After a brief introduction, there are chapters on estimation, hypothesis testing, and maximum likelihood modeling. The book concludes with secti...
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
Krings, Franciska; Facchin, Stephanie
2009-01-01
This study demonstrated relations between men's perceptions of organizational justice and increased sexual harassment proclivities. Respondents reported a higher likelihood to sexually harass under conditions of low interactional justice, suggesting that sexual harassment likelihood may increase as a response to perceived injustice. Moreover, the…
HEMI: Hyperedge Majority Influence Maximization
Gangal, Varun; Narayanam, Ramasuri
2016-01-01
In this work, we consider the problem of influence maximization on a hypergraph. We first extend the Independent Cascade (IC) model to hypergraphs, and prove that the traditional influence maximization problem remains submodular. We then present a variant of the influence maximization problem (HEMI) where one seeks to maximize the number of hyperedges, a majority of whose nodes are influenced. We prove that HEMI is non-submodular under the diffusion model proposed.
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...... with 30% (v/v) ethanol or saline, respectively. Relative viscosity was used as one measure of the physical properties of the emulsions. Higher degrees of sensitization (but not rates) were obtained at the 48 h challenge reading with the oil/propylene glycol and oil/saline + ethanol emulsions compared...... to the saline/oil emulsion. Placement of the challenge patches affected the response, as simultaneous chlorocresol challenge on the flank located 2 cm closer to the abdomen than the usual challenge site gave decreased reactions....
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The receiver structures are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structures depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
Improved Likelihood Function in Particle-based IR Eye Tracking
Satria, R.; Sorensen, J.; Hammoud, R.
2005-01-01
In this paper we propose a log likelihood-ratio function of foreground and background models used in a particle filter to track the eye region in dark-bright pupil image sequences. This model fuses information from both dark and bright pupil images and their difference image into one model. Our...... performance in challenging sequences with test subjects showing large head movements and under significant light conditions....
Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets
Litvinenko, Alexander
2017-09-03
We use available measurements to estimate the unknown parameters (variance, smoothness parameter, and covariance length) of a covariance function by maximizing the joint Gaussian log-likelihood function. To overcome cubic complexity in the linear algebra, we approximate the discretized covariance function in the hierarchical (H-) matrix format. The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations. The H-matrix technique allows us to work with general covariance matrices in an efficient way, since H-matrices can approximate inhomogeneous covariance functions, with a fairly general mesh that is not necessarily axes-parallel, and neither the covariance matrix itself nor its inverse have to be sparse. We demonstrate our method with Monte Carlo simulations and an application to soil moisture data. The C, C++ codes and data are freely available.
Inferring fixed effects in a mixed linear model from an integrated likelihood
Gianola, Daniel; Sorensen, Daniel
2008-01-01
A new method for likelihood-based inference of fixed effects in mixed linear models, with variance components treated as nuisance parameters, is presented. The method uses uniform-integration of the likelihood; the implementation employs the expectation-maximization (EM) algorithm for elimination...... of all nuisances, viewing random effects and variance components as missing data. In a simulation of a grazing trial, the procedure was compared with four widely used estimators of fixed effects in mixed models, and found to be competitive. An analysis of body weight in freshwater crayfish was conducted......
Likelihood inference for a fractionally cointegrated vector autoregressive model
Johansen, Søren; Nielsen, Morten Ørregaard
We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X_{t} to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β......′X_{t} is fractional of order d-b. The parameters d and b satisfy either d≥b≥1/2, d=b≥1/2, or d=d_{0}≥b≥1/2. Our main technical contribution is the proof of consistency of the maximum likelihood estimators on the set 1/2≤b≤d≤d_{1} for any d_{1}≥d_{0}. To this end, we consider the conditional likelihood as a stochastic...... process in the parameters, and prove that it converges in distribution when errors are i.i.d. with suitable moment conditions and initial values are bounded. We then prove that the estimator of β is asymptotically mixed Gaussian and estimators of the remaining parameters are asymptotically Gaussian. We...
Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model
Johansen, Søren; Nielsen, Morten Ørregaard
We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X(t) to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β......'X(t) is fractional of order d-b. The parameters d and b satisfy either d≥b≥1/2, d=b≥1/2, or d=d0≥b≥1/2. Our main technical contribution is the proof of consistency of the maximum likelihood estimators on the set 1/2≤b≤d≤d1 for any d1≥d0. To this end, we consider the conditional likelihood as a stochastic process...... in the parameters, and prove that it converges in distribution when errors are i.i.d. with suitable moment conditions and initial values are bounded. We then prove that the estimator of β is asymptotically mixed Gaussian and estimators of the remaining parameters are asymptotically Gaussian. We also find...
Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.
1986-05-01
The consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers; see, for example, Wald (1949) and Wolfowitz (1953, 1965)...
Swanepoel, Konrad J
2011-01-01
A subset of a normed space X is called equilateral if the distance between any two points is the same. Let m(X) be the smallest possible size of an equilateral subset of X maximal with respect to inclusion. We first observe that Petty's construction of a d-dimensional X of any finite dimension d >= 4 with m(X)=4 can be generalised to show that m(X\oplus_1\R)=4 for any X of dimension at least 2 which has a smooth point on its unit sphere. By a construction involving Hadamard matrices we then show that both m(\ell_p) and m(\ell_p^d) are finite and bounded above by a function of p, for all p >= 1, and that there exists c > 1 such that m(X) <= d+1 for all d-dimensional X with Banach-Mazur distance less than c from \ell_p^d. Using Brouwer's fixed-point theorem we show that m(X) <= d+1 for all d-dimensional X with Banach-Mazur distance less than 3/2 from \ell_\infty^d. A graph-theoretical argument furthermore shows that m(\ell_\infty^d)=d+1. The above results lead us to conjecture that m(X) <= 1+\dim X.
Unified Maximally Natural Supersymmetry
Huang, Junwu
2016-01-01
Maximally Natural Supersymmetry, an unusual weak-scale supersymmetric extension of the Standard Model based upon the inherently higher-dimensional mechanism of Scherk-Schwarz supersymmetry breaking (SSSB), possesses remarkably good fine tuning given present LHC limits. Here we construct a version with precision $SU(2)_{\rm L} \times U(1)_{\rm Y}$ unification: $\sin^2 \theta_W(M_Z) \simeq 0.231$ is predicted to $\pm 2\%$ by unifying $SU(2)_{\rm L} \times U(1)_{\rm Y}$ into a 5D $SU(3)_{\rm EW}$ theory at a Kaluza-Klein scale of $1/R_5 \sim 4.4\,{\rm TeV}$, where SSSB is simultaneously realised. Full unification with $SU(3)_{\rm C}$ is accommodated by extending the 5D theory to a $N=4$ supersymmetric $SU(6)$ gauge theory on a 6D rectangular orbifold at $1/R_6 \sim 40\,{\rm TeV}$. TeV-scale states beyond the SM include exotic charged fermions implied by $SU(3)_{\rm EW}$ with masses lighter than $\sim 1.2\,{\rm TeV}$, and squarks in the mass range $1.4\,{\rm TeV} - 2.3\,{\rm TeV}$, providing distinct signature...
Conditional log-likelihood MDL and Evolutionary MCMC
Drugan, M.M.
2006-01-01
In today's society there is increasing interest in intelligent techniques that can automatically process, analyze, and summarize the ever-growing amount of data. Artificial intelligence is a research field that studies intelligent algorithms to support people in making decisions. Algorithms t...
Likelihood analysis of supersymmetric SU(5) GUTs
Bagnaschi, E. [DESY, Hamburg (Germany); Costa, J.C. [Imperial College, London (United Kingdom). Blackett Lab.; Sakurai, K. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Warsaw Univ. (Poland). Inst. of Theoretical Physics; Collaboration: MasterCode Collaboration; and others
2016-10-15
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets+E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R}-χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.
Likelihood analysis of supersymmetric SU(5) GUTs
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A.; De Vries, K.J. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Sakurai, K. [University of Durham, Science Laboratories, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Chobanova, V.; Lucio, M.; Martinez Santos, D. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Parkville (Australia); Ellis, J.R. [King's College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Theoretical Physics Department, CERN, Geneva 23 (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Cantoblanco, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Isidori, G. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Olive, K.A. [University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)
2017-02-15
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R} - χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC. (orig.)
Cycle-maximal triangle-free graphs
Durocher, Stephane; Gunderson, David S.; Li, Pak Ching;
2015-01-01
We conjecture that the balanced complete bipartite graph K_{⌊n/2⌋,⌈n/2⌉} contains more cycles than any other n-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds...
Average utility maximization: A preference foundation
A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)
2014-01-01
This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the richness structure naturally provided by the variable length of the sequen...
Maximizing oil yields may not optimize economics
1987-03-01
The Los Alamos National Laboratory has used the ASPEN computer code to calculate the economics of different hydroretorting conditions. When the oil yield was maximized and an oil shale plant designed around this process, the costs turned out much higher than expected. However, calculations based on runs at less than maximum yields gave lower cost estimates. It is recommended that future efforts be concentrated on minimizing production costs rather than maximizing yields. An oil shale plant has been designed around minimum production cost, but it has not yet been tested experimentally.
Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik
2008-01-01
estimate of the associated uncertainty. This uncertainty arises from incomplete process representation, uncertainty in initial conditions, input, output and parameter error. The generalized likelihood uncertainty estimation (GLUE) framework was one of the first attempts to represent prediction uncertainty...
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
Sums of magnetic eigenvalues are maximal on rotationally symmetric domains
Laugesen, Richard S; Roy, Arindam
2011-01-01
The sum of the first n energy levels of the planar Laplacian with constant magnetic field of given total flux is shown to be maximal among triangles for the equilateral triangle, under normalization of the ratio (moment of inertia)/(area)^3 on the domain. The result holds for both Dirichlet and Neumann boundary conditions, with an analogue for Robin (or de Gennes) boundary conditions too. The square similarly maximizes the eigenvalue sum among parallelograms, and the disk maximizes among ellipses. More generally, a domain with rotational symmetry will maximize the magnetic eigenvalue sum among all linear images of that domain. These results are new even for the ground state energy (n=1).
Maximal subgroups of finite groups
S. Srinivasan
1990-01-01
In finite groups maximal subgroups play a very important role. Results in the literature show that if a maximal subgroup has a very small index in the whole group then it influences the structure of the group itself. In this paper we study the case when the index of the maximal subgroups of a group has a special type of relation with the Fitting subgroup of the group.
A composite likelihood approach for spatially correlated survival data.
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.
Finding Maximal Quasiperiodicities in Strings
Brodal, Gerth Stølting; Pedersen, Christian N. S.
2000-01-01
Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log² n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string...... of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes......
Maximizing Entropy over Markov Processes
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2013-01-01
computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...
Maximizing entropy over Markov processes
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2014-01-01
computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...
Composite likelihood estimation of demographic parameters
Garrigan Daniel
2009-11-01
Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable...
Maximizing Complementary Quantities by Projective Measurements
M. Souza, Leonardo A.; Bernardes, Nadja K.; Rossi, Romeu
2017-04-01
In this work, we study the so-called quantitative complementarity quantities. We focus on the following physical situation: two qubits (q_A and q_B) are initially in a maximally entangled state. One of them (q_B) interacts with an N-qubit system (R). After the interaction, projective measurements are performed on each of the qubits of R, in a basis that is chosen after independent optimization procedures: maximization of the visibility, the concurrence, and the predictability. For a specific maximization procedure, we study in detail how each of the complementary quantities behaves, conditioned on the intensity of the coupling between q_B and the N qubits. We show that, if the coupling is sufficiently "strong," independent of the maximization procedure, the concurrence tends to decay quickly. Interestingly enough, the behavior of the concurrence in this model is similar to the entanglement dynamics of a two-qubit system subjected to a thermal reservoir, despite the fact that we consider finite N. However, the visibility shows a different behavior: its maximization is more efficient for stronger coupling constants. Moreover, we investigate how the distinguishability, or the information stored in different parts of the system, is distributed for different couplings.
Generalized linear models with random effects unified analysis via H-likelihood
Lee, Youngjo; Pawitan, Yudi
2006-01-01
Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...
Mean likelihood estimation of target micro-motion parameters in laser detection
Guo, Liren; Hu, Yihua; Wang, Yunpeng
2016-10-01
Maximum Likelihood Estimation (MLE) is the optimal estimator for micro-Doppler feature extraction. However, the enormous computational burden of the grid search and the existence of many local maxima of the highly nonlinear cost function are harmful to accurate estimation. A new method combining Mean Likelihood Estimation (MELE) and the Monte Carlo (MC) approach is proposed to solve this problem. A closed-form expression for the parameters that maximize the cost function is derived. Then the compressed likelihood function is designed to obtain the global maximum. Finally the parameters are estimated by calculating the circular mean of the samples obtained from the MC method. The high dependence on accurate initial values and the computational complexity of iterative algorithms are avoided in this method. Applied to simulated and experimental data, the proposed method achieves performance similar to MLE at a lower computational cost. Meanwhile, this method guarantees global convergence and joint parameter estimation.
Maintaining symmetry of simulated likelihood functions
Andersen, Laura Mørch
This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby...
Synthesizing Regression Results: A Factored Likelihood Method
Wu, Meng-Jia; Becker, Betsy Jane
2013-01-01
Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p...
Likelihood based testing for no fractional cointegration
Lasak, Katarzyna
We consider two likelihood ratio tests, so-called maximum eigenvalue and trace tests, for the null of no cointegration when fractional cointegration is allowed under the alternative, which is a first step to generalize the so-called Johansen's procedure to the fractional cointegration case. The s...
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
夏天; 孔繁超
2008-01-01
This paper proposes some regularity conditions. On the basis of the proposed regularity conditions, we show the strong consistency of maximum quasi-likelihood estimation (MQLE) in quasi-likelihood nonlinear models (QLNM). Our results may be regarded as a further generalization of the relevant results in Ref. [4].
Maximally entangled states in pseudo-telepathy games
Mančinska, Laura
2015-01-01
A pseudo-telepathy game is a nonlocal game which can be won with probability one using some finite-dimensional quantum strategy but not using a classical one. Our central question is whether there exist two-party pseudo-telepathy games which cannot be won with probability one using a maximally entangled state. Towards answering this question, we develop conditions under which maximally entangled states suffice. In particular, we show that maximally entangled states suffice for weak projection...
Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.
Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man
2009-10-01
In this paper, we carry out an in-depth theoretical investigation for existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975) both in the full data setting as well as in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.
Gonzalez-Sanchez, Jon
2010-01-01
Let $w = w(x_1,..., x_n)$ be a word, i.e. an element of the free group $F = \langle x_1,..., x_n \rangle$ on $n$ generators $x_1,..., x_n$. The verbal subgroup $w(G)$ of a group $G$ is the subgroup generated by the set $\{w(g_1,...,g_n)^{\pm 1} \mid g_i \in G, 1\leq i\leq n\}$ of all $w$-values in $G$. We say that a (finite) group $G$ is $w$-maximal if $|G:w(G)| > |H:w(H)|$ for all proper subgroups $H$ of $G$ and that $G$ is hereditarily $w$-maximal if every subgroup of $G$ is $w$-maximal. In this text we study $w$-maximal and hereditarily $w$-maximal (finite) groups.
Bias-reduced and separation-proof conditional logistic regression with small or sparse data sets.
Heinze, Georg; Puhr, Rainer
2010-03-30
Conditional logistic regression is used for the analysis of binary outcomes when subjects are stratified into several subsets, e.g. matched pairs or blocks. Log odds ratio estimates are usually found by maximizing the conditional likelihood. This approach eliminates all strata-specific parameters by conditioning on the number of events within each stratum. However, in the analyses of both an animal experiment and a lung cancer case-control study, conditional maximum likelihood (CML) resulted in infinite odds ratio estimates and monotone likelihood. Estimation can be improved by using Cytel Inc.'s well-known LogXact software, which provides a median unbiased estimate and exact or mid-p confidence intervals. Here, we suggest and outline point and interval estimation based on maximization of a penalized conditional likelihood in the spirit of Firth's (Biometrika 1993; 80:27-38) bias correction method (CFL). We present comparative analyses of both studies, demonstrating some advantages of CFL over competitors. We report on a small-sample simulation study where CFL log odds ratio estimates were almost unbiased, whereas LogXact estimates showed some bias and CML estimates exhibited serious bias. Confidence intervals and tests based on the penalized conditional likelihood had close-to-nominal coverage rates and yielded highest power among all methods compared, respectively. Therefore, we propose CFL as an attractive solution to the stratified analysis of binary data, irrespective of the occurrence of monotone likelihood. A SAS program implementing CFL is available at: http://www.muw.ac.at/msi/biometrie/programs.
Corporate governance effect on financial distress likelihood: Evidence from Spain
Montserrat Manzaneque
2016-01-01
The paper explores some mechanisms of corporate governance (ownership and board characteristics) in Spanish listed companies and their impact on the likelihood of financial distress. An empirical study was conducted between 2007 and 2012 using a matched-pairs research design with 308 observations, half classified as distressed and half as non-distressed. Based on the previous study by Pindado, Rodrigues, and De la Torre (2008), a broader concept of bankruptcy is used to define business failure. Employing several conditional logistic models, and in line with other previous studies on bankruptcy, the results confirm that in difficult situations prior to bankruptcy, the impact of board ownership and the proportion of independent directors on the likelihood of business failure is similar to that exerted in more extreme situations. The results go one step further, offering a negative relationship between board size and the likelihood of financial distress. This result is interpreted as a form of creating diversity and improving access to information and resources, especially in contexts where ownership is highly concentrated and large shareholders have great power to influence the board structure. However, the results confirm that ownership concentration does not have a significant impact on the likelihood of financial distress in the Spanish context. It is argued that large shareholders are passive with regard to enhanced monitoring of management and, alternatively, do not have sufficient incentives to hold back financial distress. These findings have important implications in the Spanish context, where several changes in the regulatory listing requirements regarding corporate governance have been carried out, and where there is no empirical evidence on this issue.
Maximizing without difficulty: A modified maximizing scale and its correlates
Lai, Linda
2010-01-01
... included in several previous studies. Based on this scale, maximizing is positively correlated with optimism, need for cognition, desire for consistency, risk aversion, intrinsic motivation, self-efficacy and perceived workload, whereas...
Maximizing and customer loyalty: Are maximizers less loyal?
Linda Lai
2011-06-01
Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.
Are maximizers really unhappy? The measurement of maximizing tendency,
Dalia L. Diab
2008-06-01
Recent research suggesting that people who maximize are less happy than those who satisfice has received considerable fanfare. The current study investigates whether this conclusion reflects the construct itself or rather how it is measured. We developed an alternative measure of maximizing tendency that is theory-based, has good psychometric properties, and predicts behavioral outcomes. In contrast to the existing maximization measure, our new measure did not correlate with life (dis)satisfaction, nor with most maladaptive personality and decision-making traits. We conclude that the interpretation of maximizers as unhappy may be due to poor measurement of the construct. We present a more reliable and valid measure for future researchers to use.
Principles of maximally classical and maximally realistic quantum mechanics
S M Roy
2002-08-01
Recently Auberson, Mahoux, Roy and Singh have proved a long-standing conjecture of Roy and Singh: in 2N-dimensional phase space, a maximally realistic quantum mechanics can have quantum probabilities of no more than N + 1 complete commuting sets (CCS) of observables coexisting as marginals of one positive phase space density. Here I formulate a stationary principle which gives a nonperturbative definition of a maximally classical as well as maximally realistic phase space density. I show that the maximally classical trajectories are in fact exactly classical in the simple examples of coherent states and bound states of an oscillator and Gaussian free particle states. In contrast, it is known that the de Broglie–Bohm realistic theory gives highly nonclassical trajectories.
M. Venkatesulu
1996-01-01
Solutions of initial value problems associated with a pair of ordinary differential systems (L1, L2) defined on two adjacent intervals I1 and I2 and satisfying certain interface-spatial conditions at the common end (interface) point are studied.
ESTIMATES FOR THE MAXIMAL MULTILINEAR SINGULAR INTEGRAL OPERATORS
Yulan Jiao
2010-01-01
In this paper, some mapping properties are considered for the maximal multilinear singular integral operator whose kernel satisfies a certain minimum regularity condition. It is proved that a certain uniform local estimate for doubly truncated operators implies the L^p(R^n) (1 < p < ∞) boundedness of the maximal operator.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Composite likelihood method for inferring local pedigrees
Nielsen, Rasmus
2017-01-01
Pedigrees contain information about the genealogical relationships among individuals and are of fundamental importance in many areas of genetic studies. However, pedigrees are often unknown and must be inferred from genetic data. Despite the importance of pedigree inference, existing methods are limited to inferring only close relationships or analyzing a small number of individuals or loci. We present a simulated annealing method for estimating pedigrees in large samples of otherwise seemingly unrelated individuals using genome-wide SNP data. The method supports complex pedigree structures such as polygamous families, multi-generational families, and pedigrees in which many of the member individuals are missing. Computational speed is greatly enhanced by the use of a composite likelihood function which approximates the full likelihood. We validate our method on simulated data and show that it can infer distant relatives more accurately than existing methods. Furthermore, we illustrate the utility of the method on a sample of Greenlandic Inuit. PMID:28827797
Sums of Laplace eigenvalues - rotationally symmetric maximizers in the plane
Laugesen, R S
2010-01-01
The sum of the first $n \\geq 1$ eigenvalues of the Laplacian is shown to be maximal among triangles for the equilateral triangle, maximal among parallelograms for the square, and maximal among ellipses for the disk, provided the ratio $\\text{(area)}^3/\\text{(moment of inertia)}$ for the domain is fixed. This result holds for both Dirichlet and Neumann eigenvalues, and similar conclusions are derived for Robin boundary conditions and Schr\\"odinger eigenvalues of potentials that grow at infinity. A key ingredient in the method is the tight frame property of the roots of unity. For general convex plane domains, the disk is conjectured to maximize sums of Neumann eigenvalues.
Factors Associated with Young Adults’ Pregnancy Likelihood
Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan
2014-01-01
OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849
Simplified likelihood for the re-interpretation of public CMS results
The CMS Collaboration
2017-01-01
In this note, a procedure for the construction of simplified likelihoods for the re-interpretation of the results of CMS searches for new physics is presented. The procedure relies on the use of a reduced set of information on the background models used in these searches which can readily be provided by the CMS collaboration. A toy example is used to demonstrate the procedure and its accuracy in reproducing the full likelihood for setting limits in models for physics beyond the standard model. Finally, two representative searches from the CMS collaboration are used to demonstrate the validity of the simplified likelihood approach under realistic conditions.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Database likelihood ratios and familial DNA searching
Slooten, Klaas
2012-01-01
Familial Searching is the process of searching in a DNA database for relatives of a given individual. It is well known that in order to evaluate the genetic evidence in favour of a certain given form of relatedness between two individuals, one needs to calculate the appropriate likelihood ratio, which is in this context called a Kinship Index. Suppose that the database contains, for a given type of relative, at most one related individual. Given prior probabilities of being the relative for all persons in the database, we derive the likelihood ratio for each database member in favour of being that relative. This likelihood ratio takes all the Kinship Indices between target and members of the database into account. We also compute the corresponding posterior probabilities. We then discuss two ways of selecting a subset from the database that contains the relative with a known probability, or at least a useful lower bound thereof. We discuss the relation between these approaches and illustrate them with Familia...
Molecular clock fork phylogenies: closed form analytic maximum likelihood solutions.
Chor, Benny; Snir, Sagi
2004-12-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Quoting Ziheng Yang, who initiated the analytic approach, "this seems to be the simplest case, but has many of the conceptual and statistical complexities involved in phylogenetic estimation." In this work, we give general analytic solutions for a family of trees with four taxa, two-state characters, under a molecular clock. The change from three to four taxa incurs a major increase in the complexity of the underlying algebraic system, and requires novel techniques and approaches. We start by presenting the general maximum likelihood problem on phylogenetic trees as a constrained optimization problem, and the resulting system of polynomial equations. In full generality, it is infeasible to solve this system, therefore specialized tools for the molecular clock case are developed. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). We combine the ultrametric properties of molecular clock fork trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations for the fork. We finally employ symbolic algebra software to obtain closed form analytic solutions (expressed parametrically in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that each fork topology has a unique (local and global) ML point.
Similar tests and the standardized log likelihood ratio statistic
Jensen, Jens Ledet
1986-01-01
When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. In contrast to this there is a 'primitive......' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n -3...
Maximizing ROI with yield management
Neil Snyder
2001-01-01
.... the technology is based on the concept of yield management, which aims to sell the right product to the right customer at the right price and the right time therefore maximizing revenue, or yield...
Are CEOs Expected Utility Maximizers?
John List; Charles Mason
2009-01-01
Are individuals expected utility maximizers? This question represents much more than academic curiosity. In a normative sense, at stake are the fundamental underpinnings of the bulk of the last half-century's models of choice under uncertainty. From a positive perspective, the ubiquitous use of benefit-cost analysis across government agencies renders the expected utility maximization paradigm literally the only game in town. In this study, we advance the literature by exploring CEO's preferen...
Gaussian maximally multipartite entangled states
Facchi, Paolo; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio
2009-01-01
We introduce the notion of maximally multipartite entangled states (MMES) in the context of Gaussian continuous variable quantum systems. These are bosonic multipartite states that are maximally entangled over all possible bipartitions of the system. By considering multimode Gaussian states with constrained energy, we show that perfect MMESs, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of MMESs and their frustration for n <= 7.
All maximally entangling unitary operators
Cohen, Scott M. [Department of Physics, Duquesne University, Pittsburgh, Pennsylvania 15282 (United States); Department of Physics, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213 (United States)
2011-11-15
We characterize all maximally entangling bipartite unitary operators, acting on systems A and B of arbitrary finite dimensions d_A ≤ d_B, when ancillary systems are available to both parties. Several useful and interesting consequences of this characterization are discussed, including an understanding of why the entangling and disentangling capacities of a given (maximally entangling) unitary can differ and a proof that these capacities must be equal when d_A = d_B.
Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo
2016-01-01
Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\gamma\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.
Witten spinors on maximal, conformally flat hypersurfaces
Frauendiener, Jörg; Szabados, László B
2011-01-01
The boundary conditions that exclude zeros of the solutions of the Witten equation (and hence guarantee the existence of a 3-frame satisfying the so-called special orthonormal frame gauge conditions) are investigated. We determine the general form of the conformally invariant boundary conditions for the Witten equation, and find the boundary conditions that characterize the constant and the conformally constant spinor fields among the solutions of the Witten equations on compact domains in extrinsically and intrinsically flat, and on maximal, intrinsically globally conformally flat spacelike hypersurfaces, respectively. We also provide a number of exact solutions of the Witten equation with various boundary conditions (both at infinity and on inner or outer boundaries) that single out nowhere vanishing spinor fields on the flat, non-extreme Reissner-Nordström and Brill-Lindquist data sets. Our examples show that there is an interplay between the boundary conditions, the global topology of the hypersurface...
BAI YUN-XIA; QIN YONG-SONG; WANG LI-RONG; LI LING
2009-01-01
Suppose that there are two populations x and y with missing data on both of them, where x has a distribution function F(.) which is unknown and y has a distribution of known form depending on some unknown parameter θ. Fractional imputation is used to fill in missing data. The asymptotic distributions of the semi-empirical likelihood ratio statistic are obtained under some mild conditions. Then, empirical likelihood confidence intervals on the difference between x and y are constructed.
On divergences tests for composite hypotheses under composite likelihood
Martin, Nirian; Pardo, Leandro; Zografos, Konstantinos
2016-01-01
It is well known that in some situations it is not easy to compute the likelihood function, as the dataset might be large or the model too complex. In such contexts composite likelihood, derived by multiplying the likelihoods of subsets of the variables, may be useful. The extension of the classical likelihood ratio test statistics to the framework of composite likelihoods is used as a procedure to solve the problem of testing in the context of composite likelihood. In this paper we intro...
Dimension-Independent Likelihood-Informed MCMC
Cui, Tiangang
2015-01-07
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior [distributions], we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation are used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.
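The likelihood-informed samplers themselves are too involved for a snippet, but as an illustration of the function-space MCMC family they belong to, here is the preconditioned Crank-Nicolson (pCN) algorithm, whose acceptance rate does not degenerate as the discretization dimension grows; the Gaussian prior and toy likelihood are assumptions of the sketch, not the paper's setup.

```python
# pCN sampler sketch: prior N(0, I), toy log-likelihood; dimension-robust
# because the proposal preserves the prior and the acceptance ratio
# involves only the likelihood.
import numpy as np

rng = np.random.default_rng(1)
d, beta, n_steps = 100, 0.2, 5000   # dimension, step size, iterations

def log_like(u):                    # toy stand-in for a PDE-based likelihood
    return -0.5 * np.sum((u[:5] - 1.0) ** 2)

u = rng.standard_normal(d)
for _ in range(n_steps):
    v = np.sqrt(1 - beta**2) * u + beta * rng.standard_normal(d)
    if np.log(rng.random()) < log_like(v) - log_like(u):
        u = v                       # accept; otherwise keep current state
```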
CMB Power Spectrum Likelihood with ILC
Dick, Jason; Delabrouille, Jacques
2012-01-01
We extend the ILC method in harmonic space to include the error in its CMB estimate. This allows parameter estimation routines to take into account the effect of the foregrounds as well as the errors in their subtraction in conjunction with the ILC method. Our method requires the use of a model of the foregrounds which we do not develop here. The reduction of the foreground level makes this method less sensitive to unaccounted for errors in the foreground model. Simulations are used to validate the calculations and approximations used in generating this likelihood function.
An improved likelihood model for eye tracking
Hammoud, Riad I.; Hansen, Dan Witzner
2007-01-01
approach in such cases is to abandon the tracking routine and re-initialize eye detection. Of course this may be a difficult process due to the missing-data problem. Accordingly, what is needed is an efficient method of reliably tracking a person's eyes between successively produced video image frames, even...... are challenging. It proposes a log likelihood-ratio function of foreground and background models in a particle filter-based eye tracking framework. It fuses key information from even and odd infrared fields (dark and bright pupil) and their corresponding subtractive image into one single observation model...
A. Garmroodi Asil
2017-09-01
To further reduce the sulfur dioxide emission of the entire refining process, two scenarios, acid-gas preheating and air preheating, are investigated, with either used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and the highest combustion chamber temperature are slightly higher for acid-gas preheating, but air preheating is more favorable because it is more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.
Maximal regularity of second order delay equations in Banach spaces
Anonymous
2010-01-01
We give necessary and sufficient conditions for L^p-maximal regularity (resp. B^s_{p,q}-maximal regularity or F^s_{p,q}-maximal regularity) of the second order delay equations u″(t) = Au(t) + Gu′_t + Fu_t + f(t), t ∈ [0, 2π], with periodic boundary conditions u(0) = u(2π), u′(0) = u′(2π), where A is a closed operator in a Banach space X, and F and G are delay operators on L^p([-2π, 0]; X) (resp. B^s_{p,q}([-2π, 0]; X) or F^s_{p,q}([-2π, 0]; X)).
Maximum likelihood reconstruction for Ising models with asynchronous updates
Zeng, Hong-Li; Aurell, Erik; Hertz, John; Roudi, Yasser
2012-01-01
We describe how the couplings in a non-equilibrium Ising model can be inferred from observing the model history. Two cases of an asynchronous update scheme are considered: one in which we know both the spin history and the update times (times at which an attempt was made to flip a spin) and one in which we only know the spin history (i.e., the times at which spins were actually flipped). In both cases, maximizing the likelihood of the data leads to exact learning rules for the couplings in the model. For the first case, we show that one can average over all possible choices of update times to obtain a learning rule that depends only on spin correlations and not on the specific spin history. For the second case, the same rule can be derived within a further decoupling approximation. We study all methods numerically for fully asymmetric Sherrington-Kirkpatrick models, varying the data length, system size, temperature, and external field. Good convergence is observed in accordance with the theoretical expectatio...
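For the first case the abstract describes (spin history plus known update times), the per-attempt likelihood has logistic form, so the exact learning rule is a logistic-regression-style gradient ascent on the couplings; the sketch below illustrates this on data simulated from asynchronous Glauber dynamics (sizes and rates are arbitrary choices, not the paper's settings).

```python
# Sketch: ML coupling inference for an asynchronous Ising model when
# update times are known. Per attempt on spin i with field h = J[i] @ s,
# log-likelihood = new*h - log(2*cosh(h)); gradient = (new - tanh(h)) * s.
import numpy as np

rng = np.random.default_rng(2)
N, T = 10, 5000
J_true = rng.standard_normal((N, N)) / np.sqrt(N)

# simulate: one randomly chosen spin attempts an update per step
s = np.where(rng.random(N) < 0.5, -1.0, 1.0)
events = []                                  # (state before, spin, new value)
for _ in range(T):
    i = rng.integers(N)
    h = J_true[i] @ s
    new = 1.0 if rng.random() < 1 / (1 + np.exp(-2 * h)) else -1.0
    events.append((s.copy(), i, new))
    s[i] = new

# gradient ascent on the exact log-likelihood (untuned sketch)
J, eta = np.zeros((N, N)), 0.5
for _ in range(100):
    grad = np.zeros_like(J)
    for s_t, i, new in events:
        grad[i] += (new - np.tanh(J[i] @ s_t)) * s_t
    J += eta * grad / T
print("mean squared coupling error:", np.mean((J - J_true) ** 2))
```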
LIKEDM: Likelihood calculator of dark matter detection
Huang, Xiaoyuan; Tsai, Yue-Lin Sming; Yuan, Qiang
2017-04-01
With the large progress in searches for dark matter (DM) particles with indirect and direct methods, we develop a numerical tool that enables fast calculations of the likelihoods of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool - LIKEDM, likelihood calculator for dark matter detection - is to bridge the gap between a particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0) focusing on the constraints from indirect detection of DM with charged cosmic and gamma rays. Direct detection will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.
Multiplicative earthquake likelihood models incorporating strain rates
Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.
2017-01-01
We examine the potential for strain-rate variables to improve long-term earthquake likelihood models. We derive a set of multiplicative hybrid earthquake likelihood models in which cell rates in a spatially uniform baseline model are scaled using combinations of covariates derived from earthquake catalogue data, fault data, and strain rates for the New Zealand region. Three components of the strain rate estimated from GPS data over the period 1991-2011 are considered: the shear, rotational and dilatational strain rates. The hybrid model parameters are optimised for earthquakes of M 5 and greater over the period 1987-2006 and tested on earthquakes from the period 2012-2015, which is independent of the strain rate estimates. The shear strain rate is overall the most informative individual covariate, as indicated by Molchan error diagrams as well as multiplicative modelling. Most models including strain rates are significantly more informative than the best models excluding strain rates in both the fitting and testing period. A hybrid that combines the shear and dilatational strain rates with a smoothed seismicity covariate is the most informative model in the fitting period, and a simpler model without the dilatational strain rate is the most informative in the testing period. These results have implications for probabilistic seismic hazard analysis and can be used to improve the background model component of medium-term and short-term earthquake forecasting models.
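The multiplicative hybrid form lends itself to a compact sketch: a baseline cell rate scaled by a power of each covariate, with parameters fitted by maximizing the Poisson likelihood of catalogue counts per cell. Everything below (the single shear covariate, the power-law scaling, the synthetic counts) is a simplifying assumption, not the paper's full specification.

```python
# Sketch: fit a one-covariate multiplicative hybrid rate model
# lambda_cell = exp(a) * baseline * covariate**b by Poisson ML.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
ncell = 500
baseline = np.full(ncell, 0.05)              # spatially uniform baseline
shear = rng.lognormal(0.0, 1.0, ncell)       # shear strain-rate covariate
counts = rng.poisson(baseline * shear**0.5)  # synthetic catalogue counts

def nll(theta):
    a, b = theta
    lam = np.exp(a) * baseline * shear**b    # multiplicative hybrid rate
    return np.sum(lam - counts * np.log(lam))

res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
print("normalization exp(a):", np.exp(res.x[0]), " exponent b:", res.x[1])
```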
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often removes the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
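To make the core idea concrete (though this is not CORA's implementation), the sketch below fits a Gaussian line plus flat background to low-count data by maximizing the Poisson likelihood directly, rather than minimizing chi-square; the line shape and all numbers are illustrative.

```python
# Poisson maximum likelihood fit of an emission line in a low-count spectrum.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
lam_grid = np.linspace(13.0, 14.0, 200)     # wavelength bins (Angstrom)

def model(p):
    flux, center, sigma, bkg = p
    return flux * np.exp(-0.5 * ((lam_grid - center) / sigma) ** 2) + bkg

counts = rng.poisson(model([30.0, 13.45, 0.02, 1.0]))  # simulated spectrum

def nll(p):   # negative Poisson log-likelihood (constant log(n!) dropped)
    m = model(p)
    if np.any(m <= 0):
        return np.inf
    return np.sum(m - counts * np.log(m))

fit = minimize(nll, x0=[20.0, 13.5, 0.03, 0.8], method="Nelder-Mead")
print("fitted line center:", fit.x[1])
```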
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
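Schematically, an analysis of this kind maximizes a mixture likelihood: each event's observables get a probability density under each process, and the process fractions are the fit parameters. A two-process, one-observable toy, with Gaussian PDFs standing in for the Monte Carlo-derived ones, is sketched below.

```python
# Toy mixture-likelihood fit for a process fraction.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
x = np.r_[rng.normal(70, 3, 980), rng.normal(30, 8, 20)]  # toy observable
pdf_a, pdf_b = norm(70, 3).pdf, norm(30, 8).pdf           # per-process PDFs

def nll(f):   # f = fraction of process A events
    return -np.sum(np.log(f * pdf_a(x) + (1 - f) * pdf_b(x)))

res = minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6), method="bounded")
print("fitted fraction of process A:", res.x)
```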
Williamson, Ross S; Sahani, Maneesh; Pillow, Jonathan W
2015-04-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
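The paper's central equivalence (MID as a maximum-likelihood estimator for an LNP model) can be stated compactly: the single-spike information is, up to normalization, the Poisson log-likelihood of the spike train given a filtered stimulus. The sketch below writes down that LNP log-likelihood, with a fixed softplus nonlinearity standing in for the paper's non-parametric one and simulated data.

```python
# LNP model log-likelihood sketch: filter k, softplus nonlinearity,
# Poisson spike counts in bins of width dt.
import numpy as np

rng = np.random.default_rng(6)
T, D, dt = 5000, 20, 0.1
X = rng.standard_normal((T, D))             # stimulus design matrix
k_true = rng.standard_normal(D)
rate = np.log1p(np.exp(X @ k_true))         # softplus firing rate
y = rng.poisson(rate * dt)                  # spike counts per bin

def lnp_log_like(k):
    r = np.log1p(np.exp(X @ k)) * dt        # expected counts per bin
    return np.sum(y * np.log(r + 1e-12) - r)

print(lnp_log_like(k_true))                 # maximized near the true filter
```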
Algebraic curves of maximal cyclicity
Caubergh, Magdalena; Dumortier, Freddy
2006-01-01
The paper deals with analytic families of planar vector fields, studying methods to detect the cyclicity of a non-isolated closed orbit, i.e. the maximum number of limit cycles that can locally bifurcate from it. It is known that this multi-parameter problem can be reduced to a single-parameter one, in the sense that there exist analytic curves in parameter space along which the maximal cyclicity can be attained. In that case one speaks about a maximal cyclicity curve (mcc) in case only the number is considered and of a maximal multiplicity curve (mmc) in case the multiplicity is also taken into account. In view of obtaining efficient algorithms for detecting the cyclicity, we investigate whether such mcc or mmc can be algebraic or even linear depending on certain general properties of the families or of their associated Bautin ideal. In any case by well chosen examples we show that prudence is appropriate.
CUSUM control charts based on likelihood ratio for preliminary analysis
Yi DAI; Zhao-jun WANG; Chang-liang ZOU
2007-01-01
To detect and estimate a shift in either the mean or the deviation, or both, in preliminary analysis, the control chart based on the likelihood ratio test (LRT) is the most popular statistical process control (SPC) tool. Sullivan and Woodall pointed out that the test statistic lrt(n1, n2) is approximately distributed as χ2(2) when the sample sizes n, n1 and n2 are very large, where n1 = 2, 3, ..., n-2 and n2 = n-n1. So it is inevitable that n1 or n2 is not large. In this paper the limit distribution of lrt(n1, n2) for fixed n1 or n2 is derived, and exact analytic formulae for evaluating the expectation and the variance of the limit distribution are obtained. In addition, the properties of the standardized likelihood ratio statistic slr(n1, n) are discussed. Although slr(n1, n) contains the most important information, slr(i, n) (i ≠ n1) also contains much information. The cumulative sum (CUSUM) control chart can exploit this additional information. We therefore propose two CUSUM control charts based on the likelihood ratio statistics for preliminary analysis of individual observations. One focuses on detecting shifts in location in the historical data; the other is more general, detecting a shift in either the location or the scale, or both. Moreover, simulated results show that the two proposed control charts are superior to their competitors, not only in detecting sustained shifts but also in detecting some other out-of-control situations considered in this paper.
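For readers who want the flavor of the CUSUM side, the classical recursion for a standardized mean shift takes a few lines; the reference value k and decision limit h below are conventional illustrative choices, not the tuned constants of the proposed charts.

```python
# Classical two-sided CUSUM recursion for a mean shift.
import numpy as np

def cusum_signal(x, k=0.5, h=4.0):
    """Return the index of the first out-of-control signal, or None."""
    z = (x - x.mean()) / x.std(ddof=1)   # standardized observations
    c_up = c_dn = 0.0
    for t, zt in enumerate(z):
        c_up = max(0.0, c_up + zt - k)
        c_dn = max(0.0, c_dn - zt - k)
        if c_up > h or c_dn > h:
            return t
    return None

rng = np.random.default_rng(7)
x = np.r_[rng.normal(0, 1, 50), rng.normal(1.5, 1, 50)]  # shift at t = 50
print("signal at index:", cusum_signal(x))
```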
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polyno...
Composite likelihood and two-stage estimation in family studies
Andersen, Elisabeth Anne Wreford
2002-01-01
Composite likelihood; Two-stage estimation; Family studies; Copula; Optimal weights; All possible pairs
Inference in HIV dynamics models via hierarchical likelihood
2010-01-01
HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which have no analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelih...
On the Existence and Uniqueness of Maximum-Likelihood Estimates in the Rasch Model.
Fischer, Gerhard H.
1981-01-01
Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called "unconditional" and the "conditional" maximum-likelihood estimation equations in the dichotomous Rasch model are given. It is shown how to apply the results in practical uses of the Rasch model. (Author/JKS)
Nonparametric likelihood based estimation of linear filters for point processes
Hansen, Niels Richard
2015-01-01
result is a representation of the gradient of the log-likelihood, which we use to derive computable approximations of the log-likelihood and the gradient by time discretization. These approximations are then used to minimize the approximate penalized log-likelihood. For time and memory efficiency...
On the maximal efficiency of the collisional Penrose process
Leiderschneider, Elly
2015-01-01
The center of mass (CM) energy in a collisional Penrose process - a collision taking place within the ergosphere of a Kerr black hole - can diverge under suitable extreme conditions (maximal Kerr, near horizon collision and suitable impact parameters). We present an analytic expression for the CM energy, refining expressions given in the literature. Even though the CM energy diverges, we show that the maximal energy attained by a particle that escapes the black hole's gravitational pull and reaches infinity is modest. We obtain an analytic expression for the energy of an escaping particle resulting from a collisional Penrose process, and apply it to derive the maximal energy and the maximal efficiency for several physical scenarios: pair annihilation, Compton scattering, and the elastic scattering of two massive particles. In all physically reasonable cases (in which the incident particles initially fall from infinity towards the black hole) the maximal energy (and the corresponding efficiency) are only one o...
Understanding maximal repetitions in strings
Crochemore, Maxime
2008-01-01
The cornerstone of any algorithm computing all repetitions in a string of length n in O(n) time is the fact that the number of runs (or maximal repetitions) is O(n). We give a simple proof of this result. As a consequence of our approach, the stronger result concerning the linearity of the sum of exponents of all runs follows easily.
Maximizing the Probability of Detecting an Electromagnetic Counterpart of Gravitational-wave Events
Coughlin, Michael W
2016-01-01
Compact binary coalescences are a promising source of gravitational waves for second-generation interferometric gravitational-wave detectors such as advanced LIGO and advanced Virgo. These are among the most promising sources for joint detection of electromagnetic (EM) and gravitational-wave (GW) emission. To maximize the science performed with these objects, it is essential to undertake a follow-up observing strategy that maximizes the likelihood of detecting the EM counterpart. We present a follow-up strategy that maximizes the counterpart detection probability, given a fixed investment of telescope time. We show how the prior assumption on the luminosity function of the electromagnetic counterpart impacts the optimized follow-up strategy. Our results suggest that if the goal is to detect an EM counterpart from among a succession of GW triggers, the optimal strategy is to perform long integrations in the highest likelihood regions, with a time investment that is proportional to the $2/3$ power of the surface...
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
A Predictive Likelihood Approach to Bayesian Averaging
Tomáš Jeřábek
2015-01-01
Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is evaluated on historical data covering the domestic economy and a foreign economy, represented by the countries of the Eurozone. Because the forecast accuracies of the models differ, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme is used as a simple combination scheme. The results show that optimally combined densities are comparable to the best individual models.
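A predictive-likelihood weighting scheme of the kind compared here has a one-line core: each model's weight is proportional to the exponential of its accumulated log predictive score over a training window. The scores in the sketch are hypothetical placeholders.

```python
# Combination weights from log predictive likelihoods (hypothetical values).
import numpy as np

log_pred = np.array([-410.2, -407.8, -409.1, -412.5])  # one per model
w = np.exp(log_pred - log_pred.max())   # subtract max for numerical stability
weights = w / w.sum()
print(weights)                          # weights for the density combination
```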
Groups, information theory, and Einstein's likelihood principle
Sicuro, Gabriele; Tempesta, Piergiulio
2016-04-01
We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.
Parameter likelihood of intrinsic ellipticity correlations
Capranico, Federica; Schaefer, Bjoern Malte
2012-01-01
Subject of this paper are the statistical properties of ellipticity alignments between galaxies evoked by their coupled angular momenta. Starting from physical angular momentum models, we bridge the gap towards ellipticity correlations, ellipticity spectra and derived quantities such as aperture moments, comparing the intrinsic signals with those generated by gravitational lensing, with the projected galaxy sample of EUCLID in mind. We investigate the dependence of intrinsic ellipticity correlations on cosmological parameters and show that intrinsic ellipticity correlations give rise to non-Gaussian likelihoods as a result of nonlinear functional dependencies. Comparing intrinsic ellipticity spectra to weak lensing spectra we quantify the magnitude of their contaminating effect on the estimation of cosmological parameters and find that biases on dark energy parameters are very small in an angular-momentum based model in contrast to the linear alignment model commonly used. Finally, we quantify whether intrins...
Dishonestly increasing the likelihood of winning
Shaul Shalvi
2012-05-01
People not only seek to avoid losses or secure gains; they also attempt to create opportunities for obtaining positive outcomes. When distributing money between gambles with equal probabilities, people often invest in turning negative gambles into positive ones, even at a cost of reduced expected value. Results of an experiment revealed that (1) the preference to turn a negative outcome into a positive outcome exists when people's ability to do so depends on their performance levels (rather than merely on their choice), (2) this preference is amplified when the likelihood to turn negative into positive is high rather than low, and (3) this preference is attenuated when people can lie about their performance levels, allowing them to turn negative into positive not by performing better but rather by lying about how well they performed.
Rius, Jordi
2006-09-01
The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].
FINDING REGULATORY ELEMENTS USING JOINT LIKELIHOODS FOR SEQUENCE AND EXPRESSION PROFILE DATA.
Holmes, Ian (UC Berkeley); Bruno, William J. (LANL)
2000-08-20
A recent, popular method of finding promoter sequences is to look for conserved motifs upstream of genes clustered on the basis of expression data. This method presupposes that the clustering is correct. Theoretically, one should be better able to find promoter sequences and create more relevant gene clusters by taking a unified approach to these two problems. We present a likelihood function for a sequence-expression model giving a joint likelihood for a promoter sequence and its corresponding expression levels. An algorithm to estimate sequence-expression model parameters using Gibbs sampling and Expectation/Maximization is described. A program, called kimono, that implements this algorithm has been developed and the source code is freely available over the internet.
Empirical Likelihood for Mixed-effects Error-in-variables Model
Qiu-hua Chen; Ping-shou Zhong; Heng-jian Cui
2009-01-01
This paper mainly introduces the method of empirical likelihood and its applications to two different models. We discuss empirical likelihood inference on the fixed-effect parameters in mixed-effects models with errors in variables. We first consider a linear mixed-effects model with measurement errors in both fixed and random effects. We construct the empirical likelihood confidence regions for the fixed-effects parameters and the mean parameters of the random effects. The limiting distribution of the empirical log likelihood ratio at the true parameter is χ²(p+q), where p and q are the dimensions of the fixed and random effects, respectively. Then we discuss empirical likelihood inference in a semi-linear error-in-variables mixed-effects model. Under certain conditions, it is shown that the empirical log likelihood ratio at the true parameter also converges to χ²(p+q). Simulations illustrate that the proposed confidence region has a coverage probability closer to the nominal level than the normal approximation based confidence region.
Cycle-maximal triangle-free graphs
Durocher, Stephane; Gunderson, David S.; Li, Pak Ching
2015-01-01
We conjecture that the balanced complete bipartite graph K_{⌊n/2⌋,⌈n/2⌉} contains more cycles than any other n-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds...... on the numbers of cycles in graphs depending on numbers of vertices and edges, girth, and homomorphisms to small fixed graphs; and use the bounds to show that among regular graphs, the conjecture holds. We also consider graphs that are close to being regular, with the minimum and maximum degrees differing...
Berthon, P; Fellmann, N
2002-09-01
The maximal aerobic velocity concept, developed since the eighties, is considered either as the minimal velocity which elicits maximal oxygen consumption or as the "velocity associated with maximal oxygen consumption". Different methods for measuring maximal aerobic velocity on a treadmill in laboratory conditions have been elaborated, but all these specific protocols measure V(amax) either during a maximal oxygen consumption test or in association with such a test. An inaccurate method presents a certain number of problems in the subsequent use of the results, for example in the elaboration of training programs, in the study of repeatability or in the determination of individual limit time. This study analyzes 14 different methods to understand their interests and limits, with a view to proposing a general methodology for measuring V(amax). In brief, the test should be progressive and maximal without any rest period and of 17 to 20 min total duration. It should begin with a five min warm-up at 60-70% of the maximal aerobic power of the subjects. The beginning of the trial should be fixed so that four or five steps have to be run. The duration of the steps should be three min with a 1% slope and an increasing speed of 1.5 km x h(-1) until complete exhaustion. The last steps could be reduced to two min for a 1 km x h(-1) increment. The maximal aerobic velocity is adjusted in relation to the duration of the last step.
Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm.
Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z
2007-08-15
Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for maximum likelihood estimation is developed that uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.
Note on maximal distance separable codes
YANG Jian-sheng; WANG De-xiu; JIN Qing-fang
2009-01-01
In this paper, the maximal length of maximal distance separable (MDS) codes is studied, and a new upper bound formula for the maximal length of MDS codes is obtained. In particular, the exact values of the maximal length of MDS codes for some parameters are given.
Analytical maximum likelihood estimation of stellar magnetic fields
González, M J Martínez; Ramos, A Asensio; Belluzzi, L
2011-01-01
The polarised spectrum of stellar radiation encodes valuable information on the conditions of stellar atmospheres and the magnetic fields that permeate them. In this paper, we give explicit expressions to estimate the magnetic field vector and its associated error from the observed Stokes parameters. We study the solar case, where specific intensities are observed, and then the stellar case, where we receive the polarised flux. In this second case, we concentrate on the explicit expression for the case of a slow rotator with a dipolar magnetic field geometry. Moreover, we also give explicit formulae to retrieve the magnetic field vector from the LSD profiles without assuming mean values for the LSD artificial spectral line. The formulae have been obtained assuming that the spectral lines can be described in the weak field regime and using a maximum likelihood approach. The errors are recovered by means of the Hermitian matrix. The biases of the estimators are analysed in depth.
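In the weak-field regime the paper assumes, Stokes V is proportional to the wavelength derivative of Stokes I, with the line-of-sight field as the proportionality constant, so under Gaussian noise the ML estimate reduces to a linear regression of V on dI/dλ. The sketch below illustrates this with a toy line profile and an invented calibration constant.

```python
# Weak-field ML estimate of the line-of-sight magnetic field:
# model V = -C * B * dI/dlambda + noise  =>  B_hat = -sum(V*dI)/(C*sum(dI^2)).
import numpy as np

rng = np.random.default_rng(8)
lam = np.linspace(-1.0, 1.0, 400)            # wavelength offset (toy units)
I = 1.0 - 0.5 * np.exp(-lam**2 / 0.05)       # toy intensity profile
dI = np.gradient(I, lam)

C, B_true = 1.0e-3, 800.0                    # invented calibration, field (G)
V = -C * B_true * dI + rng.normal(0, 1e-4, lam.size)

B_hat = -np.sum(V * dI) / (C * np.sum(dI**2))
print("estimated field:", B_hat, "G")
```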
Likelihood analysis of the minimal AMSB model
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)
2017-04-15
We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ⁰₁, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m(χ⁰₁)...
Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems
Hakan A. Çırpan
2002-05-01
Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
Reducing the likelihood of long tennis matches.
Barnett, Tristan; Brown, Alan; Pollard, Graham
2006-01-01
Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key Points: The cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match. A final tiebreaker set reduces the length of matches, as currently used in the US Open. A new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked and all computation is made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered.
Cardiorespiratory Coordination in Repeated Maximal Exercise.
Garcia-Retortillo, Sergi; Javierre, Casimiro; Hristovski, Robert; Ventura, Josep L; Balagué, Natàlia
2017-01-01
Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC defined by the number of PCs in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC evaluation in
Maximum-likelihood estimation of haplotype frequencies in nuclear families.
Becker, Tim; Knapp, Michael
2004-07-01
The importance of haplotype analysis in the context of association fine mapping of disease genes has grown steadily over the last years. Since experimental methods to determine haplotypes on a large scale are not available, phase has to be inferred statistically. For individual genotype data, several reconstruction techniques and many implementations of the expectation-maximization (EM) algorithm for haplotype frequency estimation exist. Recent research work has shown that incorporating available genotype information of related individuals largely increases the precision of haplotype frequency estimates. We, therefore, implemented a highly flexible program written in C, called FAMHAP, which calculates maximum likelihood estimates (MLEs) of haplotype frequencies from general nuclear families with an arbitrary number of children via the EM-algorithm for up to 20 SNPs. For more loci, we have implemented a locus-iterative mode of the EM-algorithm, which gives reliable approximations of the MLEs for up to 63 SNP loci, or less when multi-allelic markers are incorporated into the analysis. Missing genotypes can be handled as well. The program is able to distinguish cases (haplotypes transmitted to the first affected child of a family) from pseudo-controls (non-transmitted haplotypes with respect to the child). We tested the performance of FAMHAP and the accuracy of the obtained haplotype frequencies on a variety of simulated data sets. The implementation proved to work well when many markers were considered and no significant differences between the estimates obtained with the usual EM-algorithm and those obtained in its locus-iterative mode were observed. We conclude from the simulations that the accuracy of haplotype frequency estimation and reconstruction in nuclear families is very reliable in general and robust against missing genotypes.
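As background for readers new to haplotype-frequency estimation, the textbook EM iteration for unrelated individuals (which FAMHAP's family-based estimator builds on) is short enough to sketch: the E-step splits phase-ambiguous double heterozygotes between the two explanations in proportion to the current frequencies, and the M-step renormalizes the expected haplotype counts. Two biallelic SNPs and made-up genotype counts are assumed.

```python
# Textbook EM for two-SNP haplotype frequencies from unphased genotypes
# of unrelated individuals. Haplotype order: 00, 01, 10, 11.
import numpy as np

# g[i, j] = number of individuals carrying i copies of allele 1 at SNP1
# and j copies at SNP2 (made-up counts); only g[1, 1] is phase-ambiguous.
g = np.array([[20, 10, 2],
              [15, 30, 8],
              [ 3, 12, 10]])

f = np.full(4, 0.25)
for _ in range(100):
    n = np.zeros(4)                      # expected haplotype counts (E-step)
    n[0] = 2*g[0, 0] + g[0, 1] + g[1, 0]
    n[1] = 2*g[0, 2] + g[0, 1] + g[1, 2]
    n[2] = 2*g[2, 0] + g[1, 0] + g[2, 1]
    n[3] = 2*g[2, 2] + g[2, 1] + g[1, 2]
    # double heterozygotes: phases 00/11 vs 01/10 in proportion to f
    p_cis = f[0] * f[3] / (f[0] * f[3] + f[1] * f[2])
    n[[0, 3]] += g[1, 1] * p_cis
    n[[1, 2]] += g[1, 1] * (1 - p_cis)
    f = n / n.sum()                      # M-step: renormalize
print("haplotype frequencies (00, 01, 10, 11):", f)
```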
Vilanova, Pedro
2016-01-07
In this work, we present an extension of the forward-reverse representation introduced in "Simulation of forward-reverse stochastic representations for conditional diffusions", a 2014 paper by Bayer and Schoenmakers, to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, i.e., SRNs conditional on their values in the extremes of given time-intervals. We then employ this SRN bridge-generation technique to the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equations approximation; then, during phase II, we apply the Monte Carlo version of the Expectation-Maximization algorithm to the phase I output. By selecting a set of over-dispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.
Asymptotics of robust utility maximization
Knispel, Thomas
2012-01-01
For a stochastic factor model we maximize the long-term growth rate of robust expected power utility with parameter $\lambda \in (0,1)$. Using duality methods the problem is reformulated as an infinite time horizon, risk-sensitive control problem. Our results characterize the optimal growth rate, an optimal long-term trading strategy and an asymptotic worst-case model in terms of an ergodic Bellman equation. With these results we propose a duality approach to a "robust large deviations" criterion for optimal long-term investment.
Multivariate residues and maximal unitarity
Søgaard, Mads; Zhang, Yang
2013-12-01
We extend the maximal unitarity method to amplitude contributions whose cuts define multidimensional algebraic varieties. The technique is valid to all orders and is explicitly demonstrated at three loops in gauge theories with any number of fermions and scalars in the adjoint representation. Deca-cuts realized by replacement of real slice integration contours by higher-dimensional tori encircling the global poles are used to factorize the planar triple box onto a product of trees. We apply computational algebraic geometry and multivariate complex analysis to derive unique projectors for all master integral coefficients and obtain compact analytic formulae in terms of tree-level data.
Beeping a Maximal Independent Set
Afek, Yehuda; Alon, Noga; Bar-Joseph, Ziv; Cornejo, Alejandro; Haeupler, Bernhard; Kuhn, Fabian
2012-01-01
We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot...
Maximal Congruences on Some Semigroups
Jintana Sanwong; R.P. Sullivan
2007-01-01
In 1976 Howie proved that a finite congruence-free semigroup is a simple group if it has at least three elements but no zero element. Infinite congruence-free semigroups are far more complicated to describe, but some have been constructed using semigroups of transformations (for example, by Howie in 1981 and by Marques in 1983). Here, for certain semigroups S of numbers and of transformations, we determine all congruences ρ on S such that S/ρ is congruence-free, that is, we describe all maximal congruences on such semigroups S.
Estimating nonlinear dynamic equilibrium economies: a likelihood approach
2004-01-01
This paper presents a framework to undertake likelihood-based inference in nonlinear dynamic equilibrium economies. The authors develop a sequential Monte Carlo algorithm that delivers an estimate of the likelihood function of the model using simulation methods. This likelihood can be used for parameter estimation and for model comparison. The algorithm can deal both with nonlinearities of the economy and with the presence of non-normal shocks. The authors show consistency of the estimate and...
Likelihood ratios: Clinical application in day-to-day practice
Parikh Rajul
2009-01-01
In this article we provide an introduction to the use of likelihood ratios in clinical ophthalmology. Likelihood ratios permit the best use of clinical test results to establish diagnoses for the individual patient. Examples and step-by-step calculations demonstrate the estimation of pretest probability, pretest odds, and calculation of posttest odds and posttest probability using likelihood ratios. The benefits and limitations of this approach are discussed.
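The core calculation such articles walk through condenses to a few lines; the sketch below uses made-up numbers, not values from the article:

```python
# Pretest probability -> odds, multiply by the likelihood ratio,
# then odds -> posttest probability.
def posttest_probability(pretest_prob, likelihood_ratio):
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# e.g. 30% pretest probability and a positive test with LR+ = 8
print(posttest_probability(0.30, 8.0))   # ~0.77
```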
Kiviet, J.F.; Phillips, G.D.A.
2014-01-01
In dynamic regression models, conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. Using expansion techniques, an approximation to the bias in variance estimation is obtained, yielding a bias-corrected variance estimator. This is achieved for both the standard
Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures
Atar, Burcu; Kamata, Akihito
2011-01-01
The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
Likelihood-based scoring rules for comparing density forecasts in tails
Diks, C.; Panchenko, V.; van Dijk, D.
2011-01-01
We propose new scoring rules based on conditional and censored likelihood for assessing the predictive accuracy of competing density forecasts over a specific region of interest, such as the left tail in financial risk management. These scoring rules can be interpreted in terms of Kullback-Leibler d
Jie Li DING; Xi Ru CHEN
2006-01-01
For generalized linear models (GLM), in the case where the regressors are stochastic and have different distributions, the asymptotic properties of the maximum likelihood estimate (MLE) $\hat{\beta}_n$ of the parameters are studied. Under reasonable conditions, we prove the weak and strong consistency and asymptotic normality of $\hat{\beta}_n$.
Inkmann, J.
2005-01-01
The inverse probability weighted Generalised Empirical Likelihood (IPW-GEL) estimator is proposed for the estimation of the parameters of a vector of possibly non-linear unconditional moment functions in the presence of conditionally independent sample selection or attrition. The estimator is applied
Knowledge discovery by accuracy maximization.
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-04-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy, and this integrated validation ensures the robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning.
Inapproximability of maximal strip recovery
Jiang, Minghui
2009-01-01
In comparative genomics, the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks that are segments of homologous chromosomes. For the reliable recovery of syntenic blocks, noise and ambiguities in the genomic maps need to be removed first. Maximal Strip Recovery (MSR) is an optimization problem proposed by Zheng, Zhu, and Sankoff for reliably recovering syntenic blocks from genomic maps in the midst of noise and ambiguities. Given $d$ genomic maps as sequences of gene markers, the objective of MSR-$d$ is to find $d$ subsequences, one from each genomic map, such that the total length of syntenic blocks in these subsequences is maximized. For any constant $d \ge 2$, a polynomial-time 2d-approximation for MSR-$d$ was previously known. In this paper, we show that for any $d \ge 2$, MSR-$d$ is APX-hard, even for the most basic version of the problem in which all gene markers are distinct and appear in positive orientation in each genomic map. Moreover, we provi...
Maximal right smooth extension chains
Huang, Yun Bao
2010-01-01
If $w=u\\alpha$ for $\\alpha\\in \\Sigma=\\{1,2\\}$ and $u\\in \\Sigma^*$, then $w$ is said to be a \\textit{simple right extension}of $u$ and denoted by $u\\prec w$. Let $k$ be a positive integer and $P^k(\\epsilon)$ denote the set of all $C^\\infty$-words of height $k$. Set $u_{1},\\,u_{2},..., u_{m}\\in P^{k}(\\epsilon)$, if $u_{1}\\prec u_{2}\\prec ...\\prec u_{m}$ and there is no element $v$ of $P^{k}(\\epsilon)$ such that $v\\prec u_{1}\\text{or} u_{m}\\prec v$, then $u_{1}\\prec u_{2}\\prec...\\prec u_{m}$ is said to be a \\textit{maximal right smooth extension (MRSE) chains}of height $k$. In this paper, we show that \\textit{MRSE} chains of height $k$ constitutes a partition of smooth words of height $k$ and give the formula of the number of \\textit{MRSE} chains of height $k$ for each positive integer $k$. Moreover, since there exist the minimal height $h_1$ and maximal height $h_2$ of smooth words of length $n$ for each positive integer $n$, we find that \\textit{MRSE} chains of heights $h_1-1$ and $h_2+1$ are good candidates t...
Qibing GAO; Yaohua WU; Chunhua ZHU; Zhanfeng WANG
2008-01-01
In generalized linear models with fixed design, under the assumption $\underline{\lambda}_n \to \infty$ and other regularity conditions, the asymptotic normality of the maximum quasi-likelihood estimator $\hat{\beta}_n$, which is the root of the quasi-likelihood equation with natural link function $\sum_{i=1}^{n} X_i (y_i - \mu(X_i' \beta)) = 0$, is obtained, where $\underline{\lambda}_n$ denotes the minimum eigenvalue of $\sum_{i=1}^{n} X_i X_i'$, $X_i$ are bounded $p \times q$ regressors, and $y_i$ are $q \times 1$ responses.
Study on the Hungarian algorithm for the maximum likelihood data association problem
Wang Jianguo; He Peikun; Cao Wei
2007-01-01
A specialized Hungarian algorithm was developed for the maximum likelihood data association problem, with two implementation versions to handle false alarms and missed detections. The maximum likelihood data association problem is formulated as a bipartite weighted matching problem, and its duality and optimality conditions are given. The Hungarian algorithm is presented with its computational steps, data structure, and computational complexity. The two implementation versions, the Hungarian forest (HF) algorithm and the Hungarian tree (HT) algorithm, and their combination with naïve auction initialization are discussed. The computational results show that the HT algorithm is slightly faster than the HF algorithm, and both are superior to the classic Munkres algorithm.
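As a sketch of the bipartite-matching formulation (not the paper's HF/HT implementations), a small assignment problem can be solved with SciPy's Hungarian-style solver; the cost values and the miss-column padding below are illustrative assumptions:

```python
# Costs are negative log-likelihoods of assigning measurement j to
# track i; the extra "miss" columns are a padding device for missed
# detections, chosen here for illustration only.
import numpy as np
from scipy.optimize import linear_sum_assignment

neg_log_lik = np.array([[1.2, 4.0, 6.1],
                        [3.5, 0.8, 5.0]])          # tracks x measurements
miss_cost = 5.5                                     # cost of declaring a miss
padded = np.hstack([neg_log_lik,
                    np.full((2, 2), miss_cost)])    # one miss column per track
rows, cols = linear_sum_assignment(padded)          # Hungarian-type solver
for i, j in zip(rows, cols):
    print(f"track {i} ->", "missed" if j >= 3 else f"measurement {j}")
```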
Litvinenko, Alexander
2017-09-26
The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used to estimate unknown parameters such as the covariance length, variance, and smoothness of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck is the expensive linear algebra on large, dense covariance matrices. Therefore, covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k < 25$), $p$ is the number of cores, and $n$ is the number of locations on a fairly general mesh. We demonstrate the method on a synthetic example where the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
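For orientation, a dense-matrix baseline of the Gaussian log-likelihood being maximized might look as follows; HLIBCov's contribution is to replace the O(n^3) Cholesky step below with an H-matrix approximation, which this sketch does not attempt (covariance model and data are illustrative):

```python
# Negative Gaussian log-likelihood with an exponential covariance
# (the Matern family at smoothness nu = 1/2), dense O(n^3) version.
import numpy as np
from scipy.spatial.distance import cdist

def neg_log_likelihood(params, locs, z):
    length, variance = params
    C = variance * np.exp(-cdist(locs, locs) / length)  # covariance matrix
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(z)))  # jitter for stability
    alpha = np.linalg.solve(L, z)                       # L^{-1} z
    log_det = 2.0 * np.log(np.diag(L)).sum()
    return 0.5 * (log_det + alpha @ alpha + len(z) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
locs = rng.random((200, 2))          # synthetic locations in the unit square
z = rng.standard_normal(200)         # synthetic observations
print(neg_log_likelihood((0.3, 1.0), locs, z))
```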
Su, Min; Fang, Liang; Su, Zheng
2013-05-01
Dichotomizing a continuous biomarker is a common practice in medical research, and various methods exist in the literature for doing so. The most widely adopted minimum p-value approach uses a sequence of test statistics for all possible dichotomizations of a continuous biomarker and chooses the cutpoint associated with the maximum test statistic, or equivalently, the minimum p-value of the test. We herein propose a likelihood- and resampling-based approach to dichotomizing a continuous biomarker. In this approach, the cutpoint is treated as an unknown variable in addition to the unknown outcome variables, and the likelihood function is maximized with respect to the cutpoint variable as well as the outcome variables to obtain the optimal cutpoint for the continuous biomarker. The significance of the test for whether a cutpoint exists is assessed via a permutation test using the maximum likelihood values calculated on the original as well as the permuted data sets. Numerical comparisons showed that the proposed approach was not only more powerful in detecting the cutpoint but also provided markedly more accurate estimates of the cutpoint than the minimum p-value approach in all the simulation scenarios considered.
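A minimal sketch of the minimum p-value baseline that the proposed method is compared against (candidate grid, test choice, and data are illustrative; note the selected p-value is not corrected for the search, which is one motivation for the paper's permutation test):

```python
# Scan candidate cutpoints, run a two-sample test at each, keep the
# cutpoint with the smallest p-value.
import numpy as np
from scipy import stats

def min_p_cutpoint(biomarker, outcome):
    best = (None, 1.0)
    for c in np.quantile(biomarker, np.linspace(0.1, 0.9, 17)):
        lo, hi = outcome[biomarker <= c], outcome[biomarker > c]
        p = stats.ttest_ind(lo, hi).pvalue
        if p < best[1]:
            best = (c, p)
    return best   # this p-value is inflated by the multiple looks

rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = rng.normal(size=300) + (x > 0.5)   # true cutpoint at 0.5
print(min_p_cutpoint(x, y))
```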
K. Yao
2007-12-01
We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first algorithm is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show that the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge, and both the SC-ML and the approximately concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.
Meyer, Karin
2016-08-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty (derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated) rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.
Regularization parameter selection for penalized-likelihood list-mode image reconstruction in PET
Zhang, Mengxi; Zhou, Jian; Niu, Xiaofeng; Asma, Evren; Wang, Wenli; Qi, Jinyi
2017-06-01
Penalized likelihood (PL) reconstruction has demonstrated potential to improve image quality of positron emission tomography (PET) over the unregularized ordered-subsets expectation-maximization (OSEM) algorithm. However, selecting proper regularization parameters in PL reconstruction has been challenging due to the lack of ground truth and variation of penalty functions. Here we present a method to choose regularization parameters using a cross-validation log-likelihood (CVLL) function. This new method does not require any knowledge of the true image and is directly applicable to list-mode PET data. We performed statistical analysis of the mean and variance of the CVLL. The results show that the CVLL provides an unbiased estimate of the log-likelihood function calculated using the noise-free data. The predicted variance can be used to verify the statistical significance of the difference between CVLL values. The proposed method was validated using simulation studies and also applied to real patient data. The reconstructed images using optimum parameters selected by the proposed method show good image quality visually.
The maximal D = 4 supergravities
Wit, Bernard de [Institute for Theoretical Physics and Spinoza Institute, Utrecht University, Postbus 80.195, NL-3508 TD Utrecht (Netherlands); Samtleben, Henning [Laboratoire de Physique, ENS Lyon, 46 allee d' Italie, F-69364 Lyon CEDEX 07 (France); Trigiante, Mario [Dept. of Physics, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Turin (Italy)
2007-06-15
All maximal supergravities in four space-time dimensions are presented. The ungauged Lagrangians can be encoded in an E_{7(7)}-Sp(56;R)/GL(28) matrix associated with the freedom of performing electric/magnetic duality transformations. The gauging is defined in terms of an embedding tensor θ which encodes the subgroup of E_{7(7)} that is realized as a local invariance. This embedding tensor may imply the presence of magnetic charges which require corresponding dual gauge fields. The latter can be incorporated by using a recently proposed formulation that involves tensor gauge fields in the adjoint representation of E_{7(7)}. In this formulation the results take a universal form irrespective of the electric/magnetic duality basis. We present the general class of supersymmetric and gauge invariant Lagrangians and discuss a number of applications.
Maximizing profit using recommender systems
Das, Aparna; Ricketts, Daniel
2009-01-01
Traditional recommendation systems make recommendations based solely on the customer's past purchases, product ratings, and demographic data, without considering the profitability of the items being recommended. In this work we study how a vendor can directly incorporate the profitability of items into its recommender so as to maximize its expected profit while still providing accurate recommendations. Our approach takes the output of any traditional recommender system and adjusts it according to item profitabilities. The approach is parameterized so the vendor can control how much the profit-aware recommendation may deviate from the traditional recommendation. We study the approach under two settings and show that it achieves approximately 22% more profit than traditional recommendations.
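The general idea admits a very small sketch: blend relevance with profit under a vendor-chosen parameter. The linear blending rule below is our assumption for illustration, not the paper's exact adjustment scheme:

```python
# Re-rank a traditional recommender's output with a profit term;
# alpha in [0, 1] bounds how far profit can pull the ranking away
# from the relevance-only ordering.
def profit_aware_rank(items, alpha=0.2):
    # items: list of (name, relevance_score, profit)
    return sorted(items,
                  key=lambda it: (1 - alpha) * it[1] + alpha * it[2],
                  reverse=True)

catalog = [("A", 0.9, 1.0), ("B", 0.8, 5.0), ("C", 0.7, 2.0)]
print([name for name, *_ in profit_aware_rank(catalog, alpha=0.3)])
```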
The maximal D=5 supergravities
de Wit, Bernard; Samtleben, Henning; Trigiante, Mario
2007-01-01
The general Lagrangian for maximal supergravity in five spacetime dimensions is presented with vector potentials in the \\bar{27} and tensor fields in the 27 representation of E_6. This novel tensor-vector system is subject to an intricate set of gauge transformations, describing 3(27-t) massless helicity degrees of freedom for the vector fields and 3t massive spin degrees of freedom for the tensor fields, where the (even) value of t depends on the gauging. The kinetic term of the tensor fields is accompanied by a unique Chern-Simons coupling which involves both vector and tensor fields. The Lagrangians are completely encoded in terms of the embedding tensor which defines the E_6 subgroup that is gauged by the vectors. The embedding tensor is subject to two constraints which ensure the consistency of the combined vector-tensor gauge transformations and the supersymmetry of the full Lagrangian. This new formulation encompasses all possible gaugings.
Constraint Propagation as Information Maximization
Abdallah, A Nait
2012-01-01
Dana Scott used the partial order among partial functions for his mathematical model of recursively defined functions. He interpreted the partial order as one of information content. In this paper we elaborate on Scott's suggestion of regarding computation as a process of information maximization by applying it to the solution of constraint satisfaction problems. Here the method of constraint propagation can be interpreted as decreasing uncertainty about the solution, that is, as a gain in information about the solution. As an illustrative example, we choose numerical constraint satisfaction problems to be solved by interval constraints. To facilitate this approach to constraint solving, we formulate constraint satisfaction problems as formulas in predicate logic. This necessitates extending the usual semantics for predicate logic so that meaning is assigned not only to sentences but also to formulas with free variables.
Planck 2013 results. XV. CMB power spectra and likelihood
Tauber, Jan; Bartlett, J.G.; Bucher, M.;
2014-01-01
This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best...
EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS
Qin Yongsong; Jiang Bo; Li Yufang
2005-01-01
In this paper, the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.
Empirical likelihood inference for diffusion processes with jumps
2010-01-01
In this paper, we consider empirical likelihood inference for the jump-diffusion model. We construct confidence intervals based on the empirical likelihood for the infinitesimal moments in jump-diffusion models. These intervals outperform the confidence intervals based on the asymptotic normality of point estimates.
On the shape and likelihood of oceanic rogue waves.
Benetazzo, Alvise; Ardhuin, Fabrice; Bergamasco, Filippo; Cavaleri, Luigi; Guimarães, Pedro Veras; Schwendeman, Michael; Sclavo, Mauro; Thomson, Jim; Torsello, Andrea
2017-08-15
We consider the observation and analysis of oceanic rogue waves collected within spatio-temporal (ST) records of 3D wave fields. This class of records, allowing a sea surface region to be retrieved, is appropriate for the observation of rogue waves, which occur as a random phenomenon at any time and location of the sea surface. To verify this aspect, we used three stereo wave imaging systems to gather ST records of the sea surface elevation, collected in different sea conditions. The wave with the ST maximum elevation (larger than the rogue threshold 1.25H_s) was then isolated within each record, along with its temporal profile. The rogue waves show similar profiles, in agreement with the theory of extreme wave groups. We analyze the rogue wave probability of occurrence, also in the context of ST extreme value distributions, and we conclude that rogue waves are more likely than previously reported; the key point is coming across them, in space as well as in time. The dependence of the rogue wave profile and likelihood on the sea state conditions is also investigated. Results may prove useful in predicting extreme wave occurrence probability and strength during oceanic storms.
Plan, Elodie L; Maloney, Alan; Mentré, France; Karlsson, Mats O; Bertrand, Julie
2012-09-01
Estimation methods for nonlinear mixed-effects modelling have considerably improved over the last decades. Nowadays, several algorithms implemented in different software are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid E_max model, with varying sigmoidicity and residual error models. One hundred simulated datasets for each scenario were generated. One hundred individuals with observations at four doses constituted the rich design and at two doses, the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). All approaches started first from initial estimates set to the true values and second, using altered values. Results were examined through relative root mean squared error (RRMSE) of the estimates. With true initial conditions, full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, and then FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of estimation methods available in current software, giving material to modellers to identify suitable approaches based on an accuracy-versus-runtime trade-off.
YIN Changming; ZHAO Lincheng; WEI Chengdong
2006-01-01
In a generalized linear model with $q \times 1$ responses, bounded and fixed (or adaptive) $p \times q$ regressors $Z_i$, and a general link function, under the most general assumption on the minimum eigenvalue of $\sum_{i=1}^{n} Z_i Z_i'$, a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates of the regression parameter vector are asymptotically normal and strongly consistent.
Maximum likelihood estimation for social network dynamics
Snijders, T.A.B.; Koskinen, J.; Schweinberger, M.
2010-01-01
A model for network panel data is discussed, based on the assumption that the observed data are discrete observations of a continuous-time Markov process on the space of all directed graphs on a given node set, in which changes in tie variables are independent conditional on the current graph. The m
Optimal bounded control for maximizing reliability of Duhem hysteretic systems
Ming XU; Xiaoling JIN; Yong WANG; Zhilong HUANG
2015-01-01
The optimal bounded control of stochastically excited systems with Duhem hysteretic components for maximizing system reliability is investigated. The Duhem hysteretic force is transformed to energy-depending damping and stiffness by the energy dissipation balance technique, and the controlled system is transformed to an equivalent non-hysteretic system. Stochastic averaging is then implemented to obtain the Itô stochastic equation associated with the total energy of the vibrating system, appropriate for evaluating system responses. Dynamical programming equations for maximizing system reliability are formulated by the dynamical programming principle. The optimal bounded control is derived from the maximization condition in the dynamical programming equation. Finally, the conditional reliability function and mean time of first-passage failure of the optimal Duhem systems are numerically solved from the Kolmogorov equations. The proposed procedure is illustrated with a representative example.
Expectation Maximization for Hard X-ray Count Modulation Profiles
Benvenuto, Federico; Piana, Michele; Massone, Anna Maria
2013-01-01
This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized for the analysis of count modulation profiles in solar hard X-ray imaging based on Rotating Modulation Collimators. The algorithm described in this paper solves the maximum likelihood problem iteratively, encoding a positivity constraint into the iterative optimization scheme. The result is therefore a classical Expectation Maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, ...
Bayer, Christian
2016-02-20
In this work, we present an extension of the forward-reverse representation introduced by Bayer and Schoenmakers (Annals of Applied Probability, 24(5):1994-2032, 2014) to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, that is, SRNs conditional on their values in the extremes of given time intervals. We then employ this SRN bridge-generation technique to the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equations approximation; then, during phase II, we apply the Monte Carlo version of the expectation-maximization algorithm to the phase I output. By selecting a set of overdispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.
A degree condition for maximal cycles in bipartite digraphs
Adamus, Janusz
2011-01-01
We prove a sharp Ore-type criterion for hamiltonicity of balanced bipartite digraphs: A bipartite digraph D, with colour classes of cardinality N, is hamiltonian if, for every pair of vertices u and v from opposite colour classes of D such that the arc u->v is not in D, the sum of the positive half-degree of u and the negative half-degree of v is greater than or equal to N+2.
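The criterion is easy to check mechanically; a small sketch (the vertex encoding and the example are ours, and the criterion is sufficient for hamiltonicity, not necessary):

```python
# Check the Ore-type degree condition for a balanced bipartite digraph.
# Vertices are ("U", i) and ("V", j); `arcs` is a set of ordered pairs
# between opposite colour classes.
from itertools import product

def satisfies_ore_condition(N, arcs):
    out_deg, in_deg = {}, {}
    for a, b in arcs:
        out_deg[a] = out_deg.get(a, 0) + 1
        in_deg[b] = in_deg.get(b, 0) + 1
    U = [("U", i) for i in range(N)]
    V = [("V", j) for j in range(N)]
    for x, y in list(product(U, V)) + list(product(V, U)):
        if (x, y) not in arcs:   # non-arc pair: test the degree sum
            if out_deg.get(x, 0) + in_deg.get(y, 0) < N + 2:
                return False
    return True

# Complete bipartite digraph on 3+3 vertices: no non-arcs, so the
# condition holds vacuously.
K33 = {(u, v) for u in [("U", i) for i in range(3)]
              for v in [("V", j) for j in range(3)]}
K33 |= {(v, u) for (u, v) in set(K33)}
print(satisfies_ore_condition(3, K33))   # True
```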
Beeping a Maximal Independent Set
Afek, Yehuda; Bar-Joseph, Ziv; Cornejo, Alejandro; Haeupler, Bernhard; Kuhn, Fabian
2012-01-01
We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot, beeping nodes receive no feedback, while silent nodes can only differentiate between none of its neighbors beeping, or at least one of its neighbors beeping. We start by proving a lower bound that shows that in this model, it is not possible to locally converge to an MIS in sub-polynomial time. We then study four different relaxations of the model which allow us to circumvent the lower bound and find an MIS in polylogarithmic time. First, we show that if a polynomial upper bound on the network size is known, it is possi...
Maximal switchability of centralized networks
Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu
2016-08-01
We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number of N_s weakly connected satellites, a property that we call n/N_s-centrality. If the hub dynamics is slow, we obtain that the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
A Maximally Supersymmetric Kondo Model
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC
2012-02-17
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
Simplicity and maximal commutative subalgebras of twisted generalized Weyl algebras
Hartwig, J.T.; Öinert, Per Johan
2013-01-01
conditions for certain TGWAs to be simple, in the case when R is commutative. We illustrate our theorems by considering some special classes of TGWAs and providing concrete examples. We also discuss how simplicity of a TGWA is related to the maximal commutativity of R and the (non-)existence of non...
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimate of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least-squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least-squares methods.
MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL
Vinod Kumar
2010-01-01
In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.
Quantifying uncertainty, variability and likelihood for ordinary differential equation models
Weisse, Andrea Y
2010-10-28
Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
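The one-extra-dimension device can be shown in a few lines: along a trajectory of x' = f(x), the transported density satisfies d(log p)/dt = -div f(x). A sketch for a 1-D linear ODE (example and coefficients are ours):

```python
# Augment the ODE with one state for log p; for x' = -a*x the
# divergence is -a, so (log p)' = a along the trajectory.
import numpy as np
from scipy.integrate import solve_ivp

a = 0.8

def extended_rhs(t, state):
    x, logp = state
    return [-a * x, a]          # x' = f(x);  (log p)' = -f'(x) = a

sol = solve_ivp(extended_rhs, (0.0, 2.0), [1.0, np.log(1.0)])
x_T, logp_T = sol.y[:, -1]
print(x_T, np.exp(logp_T))      # density grows by e^{a T} as mass contracts
```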
Quasi-likelihood estimation of average treatment effects based on model information
Zhi-hua SUN
2007-01-01
In this paper, the estimation of average treatment effects is considered when we have model information on the conditional mean and conditional variance of the responses given the covariates. The quasi-likelihood method adapted to treatment-effects data is developed to estimate the parameters in the conditional mean and conditional variance models. Based on the model information, we define three estimators by imputation, regression, and inverse probability weighted methods. All the estimators are shown to be asymptotically normal. Our simulation results show that by using the model information, substantial efficiency gains are obtained, comparable with the existing estimators.
Weighted Inequalities for the Generalized Maximal Operator in Martingale Spaces
Wei CHEN; Peide LIU
2011-01-01
The generalized maximal operator $M$ in martingale spaces is considered. For $1 < p \le q < \infty$, the authors give a necessary and sufficient condition on the pair $(\mu, v)$ for $M$ to be a bounded operator from the martingale space $L^p(v)$ into $L^q(\mu)$ or weak-$L^q(\mu)$, where $\mu$ is a measure on $\Omega \times \mathbb{N}$ and $v$ a weight on $\Omega$. Moreover, similar inequalities for the usual maximal operator are discussed.
Simulating Entangling Unitary Operator Using Non-maximally Entangled States
LI Chun-Xian; WANG Cheng-Zhi; NIE Liu-Ying; LI Jiang-Fan
2009-01-01
We use non-maximally entangled states (NMESs) to simulate an entangling unitary operator (EUO) with a certain probability. Given entanglement resources, the probability of success we achieve is a decreasing function of the parameters of the EUO. Given an EUO, for certain entanglement resources the result is optimal, i.e., the probability attains a maximal value, and for the optimal result higher parameters of the EUO require a greater amount of entanglement resources. The probability of success we achieve is higher than the known results under some conditions.
Maximum Likelihood Factor Structure of the Family Environment Scale.
Fowler, Patrick C.
1981-01-01
Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion versus conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)
Young adult consumers' media usage and online purchase likelihood
The study examines young adult consumers' usage of new media applications such as the internet, email, blogging, Twitter and social networks, and their online purchase likelihood. Convenience sampling resulted in 1 298 completed questionnaires.
Empirical Likelihood Ratio Confidence Interval for Positively Associated Series
Jun-jian Zhang
2007-01-01
Empirical likelihood is discussed by using the blockwise technique for strongly stationary, positively associated random variables. Our results show that the statistic is asymptotically chi-square distributed and that the corresponding confidence interval can be constructed.
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. Such models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia.
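A minimal sketch of fitting such a two-component normal mixture by EM-based maximum likelihood (synthetic data; the paper's economic series are not reproduced):

```python
# EM for a two-component univariate normal mixture.
import numpy as np

def fit_two_normal_mixture(x, n_iter=200):
    w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
    for _ in range(n_iter):
        # E-step: responsibilities of component 1
        d1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        d2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        r = d1 / (d1 + d2)
        # M-step: responsibility-weighted updates
        w = r.mean()
        mu1, mu2 = (r * x).sum() / r.sum(), ((1 - r) * x).sum() / (1 - r).sum()
        s1 = np.sqrt((r * (x - mu1) ** 2).sum() / r.sum())
        s2 = np.sqrt(((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum())
    return w, mu1, mu2, s1, s2

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 0.5, 600)])
print(fit_two_normal_mixture(x))
```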
金丽萍; 崔世茂; 杜金伟; 金彩霞; 吴玉峰; 其日格
2009-01-01
Plant cell walls thicken under stress to withstand adverse environments, a response preliminarily attributed to the deposition of lignin in the cell wall. Lignin is currently thought to be synthesized through more than one pathway, but enzymes such as phenylalanine ammonia-lyase (PAL) and cinnamic acid 4-hydroxylase (C_4H) play a very important role in its synthesis. In this experiment, wild Prunus mongolica Maxim. was therefore used to study how drought stress under different ecological conditions affects PAL and C_4H activity. The results showed that under drought stress, PAL and C_4H activity in the leaves increased gradually with the degree of stress, and leaf PAL and C_4H activity was higher in arid regions than in relatively non-arid regions. PAL and C_4H activity was positively correlated with plant drought resistance.
A notion of graph likelihood and an infinite monkey theorem
Banerji, Christopher R S; Severini, Simone
2013-01-01
We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present an algorithm to compute this graph invariant and closed formulas for some infinite classes. We have to leave the computational complexity of the likelihood as an open problem.
A notion of graph likelihood and an infinite monkey theorem
Banerji, Christopher R. S.; Mansour, Toufik; Severini, Simone
2014-01-01
We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present an algorithm to compute this graph invariant and closed formulas for some infinite classes. We have to leave the computational complexity of the likelihood as an open problem.
On the likelihood function of Gaussian max-stable processes
Genton, M. G.
2011-05-24
We derive a closed-form expression for the likelihood function of a Gaussian max-stable process indexed by $\mathbb{R}^d$ at $p \le d+1$ sites, $d \ge 1$. We demonstrate the gain in efficiency of the maximum composite likelihood estimators of the covariance matrix from $p=2$ to $p=3$ sites in $\mathbb{R}^2$ by means of a Monte Carlo simulation study.
Estimating dynamic equilibrium economies: linear versus nonlinear likelihood
2004-01-01
This paper compares two methods for undertaking likelihood-based inference in dynamic equilibrium economies: a sequential Monte Carlo filter proposed by Fernández-Villaverde and Rubio-Ramírez (2004) and the Kalman filter. The sequential Monte Carlo filter exploits the nonlinear structure of the economy and evaluates the likelihood function of the model by simulation methods. The Kalman filter estimates a linearization of the economy around the steady state. The authors report two main results...
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA and AOA measurements, the proposed technique can rely on only two base stations (BSs) and achieves better performance than the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
Eng, Kenny; Carlisle, Daren M.; Wolock, David M.; Falcone, James A.
2013-01-01
An approach is presented in this study to aid water-resource managers in characterizing streamflow alteration at ungauged rivers. Such approaches can be used to take advantage of the substantial amounts of biological data collected at ungauged rivers to evaluate the potential ecological consequences of altered streamflows. National-scale random forest statistical models are developed to predict the likelihood that ungauged rivers have altered streamflows (relative to expected natural condition) for five hydrologic metrics (HMs) representing different aspects of the streamflow regime. The models use human disturbance variables, such as number of dams and road density, to predict the likelihood of streamflow alteration. For each HM, separate models are derived to predict the likelihood that the observed metric is greater than (‘inflated’) or less than (‘diminished’) natural conditions. The utility of these models is demonstrated by applying them to all river segments in the South Platte River in Colorado, USA, and for all 10-digit hydrologic units in the conterminous United States. In general, the models successfully predicted the likelihood of alteration to the five HMs at the national scale as well as in the South Platte River basin. However, the models predicting the likelihood of diminished HMs consistently outperformed models predicting inflated HMs, possibly because of fewer sites across the conterminous United States where HMs are inflated. The results of these analyses suggest that the primary predictors of altered streamflow regimes across the Nation are (i) the residence time of annual runoff held in storage in reservoirs, (ii) the degree of urbanization measured by road density and (iii) the extent of agricultural land cover in the river basin.
Network channel allocation and revenue maximization
Hamalainen, Timo; Joutsensalo, Jyrki
2002-09-01
This paper introduces a model that can be used to share link capacity among customers under different kinds of traffic conditions. The model is suitable for different kinds of networks, such as 4G networks (fast wireless access to a wired network), to support connections of given duration that require a certain quality of service. We study different types of network traffic mixed on the same communication link. A single link is considered as a bottleneck, and the goal is to find customer traffic profiles that maximize the revenue of the link. The presented allocation system accepts every call and there is no absolute blocking, but the offered data rate per user depends on the network load. The data arrival rate depends on the current link utilization, the user's payment (selected CoS class), and delay. The arrival rate is (i) increasing with respect to the offered data rate, (ii) decreasing with respect to the price, (iii) decreasing with respect to the network load, and (iv) decreasing with respect to the delay. As an example, an explicit formula obeying these conditions is given and analyzed.
Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation
Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.
2016-12-01
With the growing impacts of climate change and human activities on the water cycle, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model carries a weight determined by its prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE searches the parameter space from the low-likelihood area to the high-likelihood area gradually, and this evolution is carried out iteratively via a local sampling procedure, so the efficiency of NSE is dominated by the strength of that local sampling. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling; however, M-H is not an efficient sampler for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, it is therefore natural to incorporate the robust and efficient sampling algorithm DREAMzs into the local sampling step. The comparison results demonstrated that the improved NSE raises the efficiency of marginal likelihood estimation significantly; however, both the improved and the original NSE suffer from heavy instability. In addition, the heavy computational cost of the large number of model executions is overcome by using adaptive sparse-grid surrogates.
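A bare-bones sketch of the nested sampling estimator may help fix ideas: the constrained replacement draw below uses naive rejection from the prior, which is precisely the local sampling step the abstract proposes to strengthen with DREAMzs (likelihood, prior, and settings are illustrative):

```python
# Nested sampling estimate of the marginal likelihood (evidence)
# for a uniform prior on the unit square; the final live-point
# remainder term is omitted for brevity.
import numpy as np

rng = np.random.default_rng(3)
log_lik = lambda th: -0.5 * np.sum((th - 0.5) ** 2, axis=-1) / 0.01
K, n_iter = 100, 600
live = rng.random((K, 2))                    # K live points from the prior
live_ll = log_lik(live)
log_Z, log_X = -np.inf, 0.0
for i in range(n_iter):
    worst = np.argmin(live_ll)
    log_X_new = -(i + 1) / K                 # expected log prior mass left
    log_w = np.log(np.exp(log_X) - np.exp(log_X_new))
    log_Z = np.logaddexp(log_Z, live_ll[worst] + log_w)
    # replace the worst point by a prior draw with higher likelihood
    while True:
        cand = rng.random(2)
        if log_lik(cand) > live_ll[worst]:
            live[worst], live_ll[worst] = cand, log_lik(cand)
            break
    log_X = log_X_new
print(log_Z)   # ~ log(2*pi*0.01) for this toy likelihood
```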
Employee Likelihood of Purchasing Health Insurance using Fuzzy Inference System
Lazim Abdullah
2012-01-01
Many believe that employees' health and economic factors play an important role in their likelihood to purchase health insurance. However, the decision to purchase health insurance is not trivial, as many risk factors influence it. This paper presents a decision model using a fuzzy inference system to identify the likelihood of purchasing health insurance based on selected risk factors. To build the likelihoods, data from one hundred and twenty-eight employees at five organizations under the purview of Kota Star Municipality, Malaysia, were collected as input data. Three risk factors were considered as inputs to the system: age, salary, and risk of illness. The likelihood of purchasing health insurance was the output of the system, defined in three linguistic terms: Low, Medium, and High. Input and output data were governed by the Mamdani inference rules of the system to decide the best linguistic term. The linguistic terms that describe the likelihood of purchasing health insurance were identified by the system based on the three risk factors. It was found that twenty-seven employees showed a Low likelihood of purchasing health insurance, and fifty-six a High likelihood. Fuzzy inference systems thus offer a new approach to identifying prospective health insurance purchasers.
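A minimal sketch of a Mamdani-style inference step of the kind described (membership functions, rule base, and output levels are illustrative assumptions, not the paper's calibrated system):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def purchase_likelihood(age, salary, illness_risk):
    # Fuzzify the three risk factors (supports chosen for illustration)
    older = tri(age, 30, 55, 80)
    well_paid = tri(salary, 2000, 6000, 10000)
    risky = tri(illness_risk, 0.3, 0.7, 1.1)
    # One illustrative rule: older AND (well_paid OR risky) -> High,
    # with min for AND and max for OR; Low / High output levels are
    # taken as 0.2 and 0.8 on the likelihood axis.
    fire_high = min(older, max(well_paid, risky))
    fire_low = 1.0 - fire_high
    # Weighted-average defuzzification
    return (fire_low * 0.2 + fire_high * 0.8) / (fire_low + fire_high)

print(purchase_likelihood(age=50, salary=5500, illness_risk=0.6))  # ~0.68
```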
Parametric likelihood inference for interval censored competing risks data.
Hudgens, Michael G; Li, Chenxi; Fine, Jason P
2014-03-01
Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV.
Task-oriented maximally entangled states
Agrawal, Pankaj; Pradhan, B, E-mail: agrawal@iopb.res.i, E-mail: bpradhan@iopb.res.i [Institute of Physics, Sachivalaya Marg, Bhubaneswar, Orissa 751 005 (India)
2010-06-11
We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.
Inflation in maximal gauged supergravities
Kodama, Hideo [Theory Center, KEK,Tsukuba 305-0801 (Japan); Department of Particles and Nuclear Physics,The Graduate University for Advanced Studies,Tsukuba 305-0801 (Japan); Nozawa, Masato [Dipartimento di Fisica, Università di Milano, and INFN, Sezione di Milano,Via Celoria 16, 20133 Milano (Italy)
2015-05-18
We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes the value in the sum of the 36 and 36' representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic forms for the SO(3)×SO(3)-invariant subsectors of SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of 6-dimensional scalar fields in this sector near the Dall'Agata-Inverso de Sitter critical point at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s_c. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall'Agata-Inverso critical point. It turns out that the spectral index n_s of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n_s = 0.9639 ± 0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-to-scalar ratio predicted by this model is around 10^{-3} and is close to the value in the Starobinsky model.
Maximal supports and Schur-positivity among connected skew shapes
McNamara, Peter R W
2011-01-01
The Schur-positivity order on skew shapes is defined by B \\leq A if the difference s_A - s_B is Schur-positive. It is an open problem to determine those connected skew shapes that are maximal with respect to this ordering. A strong sufficient condition for the Schur-positivity of s_A - s_B is that the support of B is contained in that of A, where the support of B is defined to be the set of partitions lambda for which s_lambda appears in the Schur expansion of s_B. We show that to determine the maximal connected skew shapes in the Schur-positivity order and this support containment order, it suffices to consider a special class of ribbon shapes. We explicitly determine the support for these ribbon shapes, thereby determining the maximal connected skew shapes in the support containment order.
Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood Power Spectrum Estimation
Gjerløw, E; Eriksen, H K; Górski, K M; Gruppuso, A; Jewell, J B; Plaszczynski, S; Wehus, I K
2015-01-01
We revisit the problem of exact CMB likelihood and power spectrum estimation with the goal of minimizing computational cost through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al. (1997), and here we develop it into a fully working computational framework for large-scale polarization analysis, adopting WMAP as a worked example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at $\ell\le32$, and a...
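The signal-to-noise eigenvector compression described here can be sketched in a few lines: solve the generalized eigenproblem S v = λ N v and keep the high-S/N modes. In the sketch below the covariances are random stand-ins, not WMAP products, and the retention threshold is arbitrary.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
npix = 200
A = rng.standard_normal((npix, npix))
S = A @ A.T / npix                          # stand-in signal covariance (SPD)
N = np.diag(rng.uniform(0.5, 2.0, npix))    # stand-in (diagonal) noise covariance

# Generalized symmetric eigenproblem S v = lam N v; eigenvalues come out ascending.
lam, V = eigh(S, N)
keep = lam > 0.1                            # retain only high signal-to-noise modes
B = V[:, keep]                              # compression matrix (npix x nmodes)

d = rng.multivariate_normal(np.zeros(npix), S + N)  # simulated data map
y = B.T @ d                                 # compressed data vector
print(npix, '->', y.size, 'modes')
```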
Saavedra, Serguei; Rohr, Rudolf P; Fortuna, Miguel A; Selva, Nuria; Bascompte, Jordi
2016-04-01
Many of the observed species interactions embedded in ecological communities are not permanent, but are characterized by temporal changes that are observed along with abiotic and biotic variations. While work has been done describing and quantifying these changes, little is known about their consequences for species coexistence. Here, we investigate the extent to which changes of species composition impact the likelihood of persistence of the predator-prey community in the highly seasonal Białowieża Primeval Forest (northeast Poland), and the extent to which seasonal changes of species interactions (predator diet) modulate the expected impact. This likelihood is estimated by extending recent developments on the study of structural stability in ecological communities. We find that the observed species turnover strongly alters the likelihood of community persistence between summer and winter. Importantly, we demonstrate that the observed seasonal interaction changes minimize the variation in the likelihood of persistence associated with species turnover across the year. We find that these community dynamics can be explained as the coupling of individual species to their environment, which minimizes both the variation in persistence conditions and the interaction changes between seasons. Our results provide a homeostatic explanation for seasonal species interactions and suggest that monitoring the association of interaction changes with the level of variation in community dynamics can provide a good indicator of the response of species to environmental pressures.
Yang Fengfan
2004-01-01
A new technique for turbo decoders is proposed, using a local subsidiary maximum likelihood decoding and a family of probability distributions for the extrinsic information. The optimal distribution of the extrinsic information is dynamically specified for each component decoder. The simulation results show that an iterative decoder with the new technique outperforms a decoder with the traditional Gaussian approach for the extrinsic information under the same conditions.
ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS
YUE LI; CHEN XIRU
2005-01-01
For the Generalized Linear Model (GLM), under some conditions, including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) in case the specification of the covariance matrix is correct.
2005-01-01
The inverse probability weighted Generalised Empirical Likelihood (IPW-GEL) estimator is proposed for the estimation of the parameters of a vector of possibly non-linear unconditional moment functions in the presence of conditionally independent sample selection or attrition. The estimator is applied to the estimation of the firm size elasticity of product and process R&D expenditures using a panel of German manufacturing firms, which is affected by attrition and selection into R&D activities....
D. L. Bricker
1997-01-01
The problem of assigning cell probabilities to maximize a multinomial likelihood with order restrictions on the probabilities and/or restrictions on the local odds ratios is modeled as a posynomial geometric program (GP), a class of nonlinear optimization problems with a well-developed duality theory and collection of algorithms. (Local odds ratios provide a measure of association between categorical random variables.) A constrained multinomial MLE example from the literature is solved, and the quality of the solution is compared with that obtained by the iterative method of El Barmi and Dykstra, which is based upon Fenchel duality. Exploiting the proximity of the GP model of MLE problems to linear programming (LP) problems, we also describe, as an alternative in the absence of special-purpose GP software, an easily implemented successive LP approximation method for solving this class of MLE problems using one of the readily available LP solvers.
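For intuition, the order-restricted multinomial MLE can also be attacked directly as a convex program; the sketch below uses CVXPY rather than the GP dual or successive-LP schemes discussed in the abstract, and the counts and monotonicity restriction are invented for illustration.

```python
import cvxpy as cp
import numpy as np

counts = np.array([12, 7, 15, 30])       # observed cell counts (assumed)
p = cp.Variable(4)

# Maximize the multinomial log-likelihood sum_i n_i * log(p_i), which is concave.
objective = cp.Maximize(cp.sum(cp.multiply(counts, cp.log(p))))
constraints = [cp.sum(p) == 1,
               p >= 1e-9,
               p[0] <= p[1], p[1] <= p[2], p[2] <= p[3]]  # order restriction

prob = cp.Problem(objective, constraints)
prob.solve()
print(np.round(p.value, 4))              # order-restricted MLE of the cell probabilities
```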
Benvenuto, Federico
2012-01-01
In this paper we propose a new statistical stopping rule for constrained maximum likelihood iterative algorithms applied to ill-posed inverse problems. To this aim we extend the definition of Tikhonov regularization in a statistical framework and prove that the application of the proposed stopping rule to the Iterative Space Reconstruction Algorithm (ISRA) in the Gaussian case and Expectation Maximization (EM) in the Poisson case leads to well-defined regularization methods according to the given definition. We also prove that, if an inverse problem is genuinely ill-posed in the sense of Tikhonov, the same definition is not satisfied when ISRA and EM are stopped by classical rules such as Morozov's discrepancy principle, Pearson's test and the Poisson discrepancy principle. The stopping rule is illustrated in the case of image reconstruction from data recorded by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). First, by using a simulated image consisting of structures analogous to those ...
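A minimal sketch of the Poisson-case EM iteration (the Richardson-Lucy update) stopped by the classical Poisson discrepancy principle is given below; the paper's own proposed statistical rule is not reproduced here, and the operator and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 100
A = np.abs(rng.standard_normal((n, m)))   # stand-in imaging operator
A /= A.sum(axis=0)                        # column-normalized, nonnegative
x_true = np.abs(rng.standard_normal(m)) * 10
y = rng.poisson(A @ x_true)               # Poisson-distributed data

x = np.ones(m)                            # flat, positive starting image
norm = A.T @ np.ones(n)
for it in range(500):
    Ax = A @ x
    # EM multiplicative update (Richardson-Lucy).
    x *= (A.T @ (y / np.maximum(Ax, 1e-12))) / norm
    # Poisson discrepancy: E[(y - Ax)^2 / Ax] is about 1 per bin at the noise level.
    disc = np.mean((y - Ax) ** 2 / np.maximum(Ax, 1e-12))
    if disc <= 1.0:                       # stop before the iterates fit the noise
        break
print('stopped at iteration', it, 'with discrepancy', round(disc, 3))
```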
Computing Maximally Supersymmetric Scattering Amplitudes
Stankowicz, James Michael, Jr.
This dissertation reviews work in computing N = 4 super-Yang-Mills (sYM) and N = 8 maximally supersymmetric gravity (mSUGRA) scattering amplitudes in D = 4 spacetime dimensions in novel ways. After a brief introduction and overview in Ch. 1, the various techniques used to construct amplitudes in the remainder of the dissertation are discussed in Ch. 2. This includes several new concepts such as d log and pure integrand bases, as well as how to construct the amplitude using exactly one kinematic point where it vanishes. Also included in this chapter is an outline of the Mathematica package on shell diagrams and numerics.m (osdn) that was developed for the computations herein. The rest of the dissertation is devoted to explicit examples. In Ch. 3, the starting point is tree-level sYM amplitudes that have integral representations with residues that obey amplitude relations. These residues are shown to have corresponding residue numerators that allow a double copy prescription that results in mSUGRA residues. In Ch. 4, the two-loop four-point sYM amplitude is constructed in several ways, showcasing many of the techniques of Ch. 2; this includes an example of how to use osdn. The two-loop five-point amplitude is also presented in a pure integrand representation with comments on how it was constructed from one homogeneous cut of the amplitude. On-going work on the two-loop n-point amplitude is presented at the end of Ch. 4. In Ch. 5, the three-loop four-point amplitude is presented in the d log representation and in the pure integrand representation. In Ch. 6, there are several examples of four- through seven-loop planar diagrams that illustrate how considerations of the singularity structure of the amplitude underpin dual-conformal invariance. Taken with the previous examples, this is additional evidence that the structure known to exist in the planar sector extends to the full theory. At the end of this chapter is a proof that all mSUGRA amplitudes have a pole at
Are all maximally entangled states pure?
Cavalcanti, D; Terra-Cunha, M O
2005-01-01
In this Letter we study whether all maximally entangled states are pure through several entanglement monotones. Our conclusions allow us to generalize the idea of monogamy of entanglement. We then propose a polygamy of entanglement, which expresses that if a general multipartite state is maximally entangled, it is necessarily factorized from any other system.
Sampling and Representation Complexity of Revenue Maximization
Dughmi, Shaddin; Han, Li; Nisan, Noam
2014-01-01
We consider (approximate) revenue maximization in auctions where the distribution on input valuations is given via "black box" access to samples from the distribution. We observe that the number of samples required -- the sample complexity -- is tightly related to the representation complexity of an approximately revenue-maximizing auction. Our main results are upper bounds and an exponential lower bound on these complexities.
Lisonek, Petr
1996-01-01
our classifications confirm the maximality of previously known sets; the results in E^7 and E^8 are new. Their counterpart in dimension larger than 10 is a set of unit vectors with only two values of inner products in the Lorentz space R^{d,1}. The maximality of this set again follows from a bound due...
Alternative trailer configurations for maximizing payloads
Jason D. Thompson; Dana Mitchell; John Klepac
2017-01-01
In order for harvesting contractors to stay ahead of increasing costs, it is imperative that they employ all options to maximize productivity and efficiency. Transportation can account for half the cost to deliver wood to a mill. Contractors seek to maximize truck payload to increase productivity. The Forest Operations Research Unit, Southern Research Station, USDA...
Cohomology of Weakly Reducible Maximal Triangular Algebras
董浙; 鲁世杰
2000-01-01
In this paper, we introduce the concept of weakly reducible maximal triangular algebras φ, which form a large class of maximal triangular algebras. Let B be a weakly closed algebra containing φ; we prove that the cohomology spaces H^n(φ, B) (n ≥ 1) are trivial.
Ritter, André; Durst, Jürgen; Gödel, Karl; Haas, Wilhelm; Michel, Thilo; Rieger, Jens; Weber, Thomas; Wucherer, Lukas; Anton, Gisela
2013-01-01
Phase-wrapping artifacts, statistical image noise and the need for a minimum amount of phase steps per projection limit the practicability of x-ray grating based phase-contrast tomography, when using filtered back projection reconstruction. For conventional x-ray computed tomography, the use of statistical iterative reconstruction algorithms has successfully reduced artifacts and statistical issues. In this work, an iterative reconstruction method for grating based phase-contrast tomography is presented. The method avoids the intermediate retrieval of absorption, differential phase and dark field projections. It directly reconstructs tomographic cross sections from phase stepping projections by the use of a forward projecting imaging model and an appropriate likelihood function. The likelihood function is then maximized with an iterative algorithm. The presented method is tested with tomographic data obtained through a wave field simulation of grating based phase-contrast tomography. The reconstruction result...
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-07
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.
Maximum-Likelihood Semiblind Equalization of Doubly Selective Channels Using the EM Algorithm
Gideon Kutz
2010-01-01
Maximum-likelihood semi-blind joint channel estimation and equalization for doubly selective channels and single-carrier systems is proposed. We model the doubly selective channel as an FIR filter where each filter tap is modeled as a linear combination of basis functions. This channel description is then integrated in an iterative scheme based on the expectation-maximization (EM) principle that converges to an estimate of the channel description vector. We discuss the selection of the basis functions and compare various function sets. To alleviate the problem of convergence to a local maximum, we propose an initialization scheme for the EM iterations based on a small number of pilot symbols. We further derive a pilot positioning scheme targeted to reduce the probability of convergence to a local maximum. Our pilot positioning analysis reveals that for high Doppler rates it is better to spread the pilots evenly throughout the data block (and not to group them) even for frequency-selective channels. The resulting equalization algorithm is shown to be superior to previously proposed equalization schemes and to perform in many cases close to the maximum-likelihood equalizer with perfect channel knowledge. Our proposed method is also suitable for coded systems and as a building block for Turbo equalization algorithms.
Application of Artificial Bee Colony Algorithm to Maximum Likelihood DOA Estimation
Zhicheng Zhang; Jun Lin; Yaowu Shi
2013-01-01
The Maximum Likelihood (ML) method has excellent performance for Direction-Of-Arrival (DOA) estimation, but it requires a multidimensional nonlinear search, which complicates the computation and prevents the method from practical use. To reduce the high computational burden of the ML method and make it more suitable for engineering applications, we apply the Artificial Bee Colony (ABC) algorithm to maximize the likelihood function for DOA estimation. As a recently proposed bio-inspired computing algorithm, the ABC algorithm was originally used to optimize multivariable functions by imitating the behavior of bee colonies finding excellent nectar sources in the natural environment. It offers an excellent alternative to conventional methods in ML-DOA estimation. The performance of ABC-based ML and other popular metaheuristic-based ML methods for DOA estimation is compared for various scenarios of convergence, Signal-to-Noise Ratio (SNR), and number of iterations. The computational loads of ABC-based ML and the conventional ML methods for DOA estimation are also investigated. Simulation results demonstrate that the proposed ABC-based method is more efficient in computation and statistical performance than other ML-based DOA estimation methods.
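To make the objective concrete, the sketch below writes down the single-source deterministic-ML DOA criterion for a uniform linear array and maximizes it with SciPy's differential evolution as a generic metaheuristic stand-in (the ABC algorithm itself is not shipped with SciPy); the array geometry and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

M, T, theta_true = 8, 200, np.deg2rad(17.0)   # sensors, snapshots, true DOA
rng = np.random.default_rng(2)

def steering(theta):
    # Half-wavelength element spacing for a uniform linear array.
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))

s = rng.standard_normal(T) + 1j * rng.standard_normal(T)        # source signal
noise = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = np.outer(steering(theta_true), s) + noise
R = X @ X.conj().T / T                         # sample covariance matrix

def neg_loglike(th):
    a = steering(th[0])
    # For one source, ML reduces to maximizing the projection a^H R a / (a^H a).
    return -np.real(a.conj() @ R @ a) / M

res = differential_evolution(neg_loglike, bounds=[(-np.pi / 2, np.pi / 2)])
print('estimated DOA:', round(np.rad2deg(res.x[0]), 2), 'deg')
```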
Hail the impossible: p-values, evidence, and likelihood.
Johansson, Tobias
2011-04-01
Significance testing based on p-values is standard in psychological research and teaching. Typically, research articles and textbooks present and use p as a measure of statistical evidence against the null hypothesis (the Fisherian interpretation), although using concepts and tools based on a completely different usage of p as a tool for controlling long-term decision errors (the Neyman-Pearson interpretation). There are four major problems with using p as a measure of evidence and these problems are often overlooked in the domain of psychology. First, p is uniformly distributed under the null hypothesis and can therefore never indicate evidence for the null. Second, p is conditioned solely on the null hypothesis and is therefore unsuited to quantify evidence, because evidence is always relative in the sense of being evidence for or against a hypothesis relative to another hypothesis. Third, p designates probability of obtaining evidence (given the null), rather than strength of evidence. Fourth, p depends on unobserved data and subjective intentions and therefore implies, given the evidential interpretation, that the evidential strength of observed data depends on things that did not happen and subjective intentions. In sum, using p in the Fisherian sense as a measure of statistical evidence is deeply problematic, both statistically and conceptually, while the Neyman-Pearson interpretation is not about evidence at all. In contrast, the likelihood ratio escapes the above problems and is recommended as a tool for psychologists to represent the statistical evidence conveyed by obtained data relative to two hypotheses. © 2010 The Author. Scandinavian Journal of Psychology © 2010 The Scandinavian Psychological Associations.
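The contrast is easy to demonstrate numerically: for the same sample, compute the two-sided p-value against the null alone and the likelihood ratio between two explicit hypotheses. All numbers below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, mu1 = 25, 0.5
x = rng.normal(0.3, 1.0, n)          # data generated by neither hypothesis exactly
xbar = x.mean()

# Two-sided p-value from the z statistic (sigma known and equal to 1);
# it is conditioned on H0: mu = 0 only.
z = xbar * np.sqrt(n)
p_value = 2 * stats.norm.sf(abs(z))

# Likelihood ratio of H1: mu = 0.5 against H0: mu = 0, i.e. relative evidence
# for one hypothesis over another, as the abstract recommends.
loglr = stats.norm.logpdf(x, mu1, 1).sum() - stats.norm.logpdf(x, 0, 1).sum()
print(f'p = {p_value:.4f}, LR(H1:H0) = {np.exp(loglr):.2f}')
```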
Kinnear, John; Jackson, Ruth
2017-07-01
Although physicians are highly trained in the application of evidence-based medicine, and are assumed to make rational decisions, there is evidence that their decision making is prone to biases. One of the biases that has been shown to affect the accuracy of judgements is representativeness and base-rate neglect, where the salience of a person's features leads to overestimation of their likelihood of belonging to a group. This results in the substitution of 'subjective' probability for statistical probability. This study examines clinicians' propensity to make estimations of subjective probability when presented with clinical information that is considered typical of a medical condition. The strength of the representativeness bias is tested by presenting choices in textual and graphic form. Understanding of statistical probability is also tested by omitting all clinical information. For the questions that included clinical information, 46.7% and 45.5% of clinicians, respectively, made judgements of statistical probability. Where the question omitted clinical information, 79.9% of clinicians made a judgement consistent with statistical probability. There was a statistically significant difference in responses to the questions with and without representativeness information (χ² (1, n=254) = 54.45, p < ...), with more judgements consistent with statistical probability when clinical information was omitted. One of the causes of this representativeness bias may be the way clinical medicine is taught, where stereotypic presentations are emphasised in diagnostic decision making. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Exclusion probabilities and likelihood ratios with applications to kinship problems.
Slooten, Klaas-Jan; Egeland, Thore
2014-05-01
In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
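A single-locus toy calculation makes the relationship concrete. Under standard formulas, a heterozygous alleged father carrying the paternal allele c gives LR = 1/(2 p_c), while RMNE = 1 - (1 - p_c)^2; the genotype configuration and allele frequencies below are assumptions for illustration only.

```python
# Toy setup (assumed): mother a/b, child b/c, so the paternal allele is c;
# the alleged father is heterozygous with one copy of c.
def paternity_summaries(p_c: float):
    lr = 1.0 / (2.0 * p_c)            # alleged father transmits c with prob. 1/2
    rmne = 1.0 - (1.0 - p_c) ** 2     # random man not excluded if he has >= 1 c allele
    return lr, rmne

for p_c in (0.01, 0.05, 0.20):
    lr, rmne = paternity_summaries(p_c)
    # The paper's inequality concerns expectations: on average the LR in favour of
    # the true hypothesis exceeds 1/RMNE, though individual cases can fall below.
    print(f'p_c={p_c:.2f}  LR={lr:7.1f}  1/RMNE={1 / rmne:7.1f}')
```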
Ways to maximize effective collections.
Spears, Tracy L
2009-01-01
Medical practices across the country face a variety of collection challenges, especially when considering the condition of today's economy. It is now more important than ever for a practice to establish proactive collection procedures and learn the keys to minimizing collection problems. This starts with educating patients about your payment terms prior to appointments and educating your staff to be aware of early warning signs when an account could become a problem. Taking steps that lead to quick resolution, while retaining patients, is a vital component to increased cash flow and fewer aging accounts in receivables. Careful review of your practice's policies on billing and collections can lead to a greater knowledge on how healthy the practice really is. This article provides key strategies that will help streamline your billing and collections process and recover money owed to you while maintaining those ever so important patient relationships.
Maximizing hydrogen production by cyanobacteria
Bothe, H.; Winkelmann, S.; Boison, G. [Botanical Inst., The Univ. of Cologne, Cologne (Germany)
2008-03-15
When incubated anaerobically, in the light, in the presence of C{sub 2}H{sub 2} and high concentrations of H{sub 2}, both Mo-grown Anabaena variabilis and either Mo- or V-grown Anabaena azotica produce large amounts of H{sub 2} in addition to the H{sub 2} initially added. In contrast, C{sub 2}H{sub 2}-reduction is diminished under these conditions. The additional H{sub 2}-production mainly originates from nitrogenase with the V-enzyme being more effective than the Mo-protein. This enhanced H{sub 2}-production in the presence of added H{sub 2} and C{sub 2}H{sub 2} should be of interest in approaches to commercially exploit solar energy conversion by cyanobacterial photosynthesis for the generation of molecular hydrogen as a clean energy source. (orig.)
Maximum likelihood for genome phylogeny on gene content.
Zhang, Hongmei; Gu, Xun
2004-01-01
With the rapid growth of entire genome data, reconstructing the phylogenetic relationship among different genomes has become a hot topic in comparative genomics. Maximum likelihood approach is one of the various approaches, and has been very successful. However, there is no reported study for any applications in the genome tree-making mainly due to the lack of an analytical form of a probability model and/or the complicated calculation burden. In this paper we studied the mathematical structure of the stochastic model of genome evolution, and then developed a simplified likelihood function for observing a specific phylogenetic pattern under four genome situation using gene content information. We use the maximum likelihood approach to identify phylogenetic trees. Simulation results indicate that the proposed method works well and can identify trees with a high correction rate. Real data application provides satisfied results. The approach developed in this paper can serve as the basis for reconstructing phylogenies of more than four genomes.
Factors Influencing the Intended Likelihood of Exposing Sexual Infidelity.
Kruger, Daniel J; Fisher, Maryanne L; Fitzgerald, Carey J
2015-08-01
There is a considerable body of literature on infidelity within romantic relationships. However, there is a gap in the scientific literature on factors influencing the likelihood of uninvolved individuals exposing sexual infidelity. Therefore, we devised an exploratory study examining a wide range of potentially relevant factors. Based in part on evolutionary theory, we anticipated nine potential domains or types of influences on the likelihoods of exposing or protecting cheaters, including kinship, strong social alliances, financial support, previous relationship behaviors (including infidelity and abuse), potential relationship transitions, stronger sexual and emotional aspects of the extra-pair relationship, and disease risk. The pattern of results supported these predictions (N = 159 men, 328 women). In addition, there appeared to be a small positive bias for participants to report infidelity when provided with any additional information about the situation. Overall, this study contributes a broad initial description of factors influencing the predicted likelihood of exposing sexual infidelity and encourages further studies in this area.
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\\mathrm{T}}$ and OSE$_{\\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
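The smoothly clipped absolute deviation (SCAD) penalty emphasized here has a standard closed form (the Fan-Li definition), sketched below; λ and a are user-chosen tuning constants, with a = 3.7 the conventional default.

```python
import numpy as np

def scad(theta, lam, a=3.7):
    """Standard SCAD penalty, evaluated elementwise."""
    t = np.abs(np.asarray(theta, dtype=float))
    return np.where(
        t <= lam,
        lam * t,                                               # L1-like near zero
        np.where(
            t <= a * lam,
            -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1)),  # quadratic blend
            (a + 1) * lam**2 / 2,                              # flat: no bias on large effects
        ),
    )

print(scad([0.05, 0.5, 5.0], lam=0.2))   # small coefficients shrunk, large ones untouched
```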
Generalized empirical likelihood methods for analyzing longitudinal data
Wang, S.
2010-02-16
Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.
Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning of RBMs
Desjardins, Guillaume; Bengio, Yoshua
2010-01-01
Restricted Boltzmann Machines (RBM) have attracted a lot of attention of late, as one of the principal building blocks of deep networks. Training RBMs remains problematic however, because of the intractability of their partition function. The maximum likelihood gradient requires a very robust sampler which can accurately sample from the model despite the loss of ergodicity often incurred during learning. While using Parallel Tempering in the negative phase of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a trade-off between computational complexity and high ergodicity, and requires careful hand-tuning of the temperatures. In this paper, we show that this trade-off is unnecessary. The choice of optimal temperatures can be automated by minimizing average return time (a concept first proposed by [Katzgraber et al., 2006]) while chains can be spawned dynamically, as needed, thus minimizing the computational overhead. We show on a synthetic dataset, that this results in better likelihood ...
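The parallel-tempering ingredient can be sketched generically: neighbouring chains at inverse temperatures β_i exchange states with the usual replica-exchange acceptance probability. The toy quadratic energy below stands in for an RBM free energy, and the hand-set temperature ladder is exactly what the paper proposes to automate.

```python
import numpy as np

rng = np.random.default_rng(4)

def energy(x):
    return 0.5 * np.sum(x ** 2)            # toy stand-in for an RBM free energy

betas = np.array([1.0, 0.7, 0.4, 0.1])     # hand-set inverse-temperature ladder
states = [rng.standard_normal(10) for _ in betas]

def swap_sweep(states, betas):
    accepted = 0
    for i in range(len(betas) - 1):
        # Metropolis acceptance for exchanging replicas i and i+1:
        # A = min(1, exp((beta_i - beta_j) * (E_i - E_j))).
        log_a = (betas[i] - betas[i + 1]) * (energy(states[i]) - energy(states[i + 1]))
        if np.log(rng.uniform()) < min(0.0, log_a):
            states[i], states[i + 1] = states[i + 1], states[i]
            accepted += 1
    return accepted

print('swaps accepted:', swap_sweep(states, betas))
```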
IMPROVING VOICE ACTIVITY DETECTION VIA WEIGHTING LIKELIHOOD AND DIMENSION REDUCTION
Wang Huanliang; Han Jiqing; Li Haifeng; Zheng Tieran
2008-01-01
The performance of traditional Voice Activity Detection (VAD) algorithms declines sharply in low Signal-to-Noise Ratio (SNR) environments. In this paper, a feature-weighted likelihood method is proposed for noise-robust VAD. The contribution of dynamic features to the likelihood score can be increased via the method, which consequently improves the noise robustness of VAD. A divergence-based dimension reduction method is proposed to save computation, which removes the feature dimensions with smaller divergence values at the cost of slightly degrading performance. Experimental results on the Aurora II database show that the detection performance in noisy environments can be remarkably improved by the proposed method when a model trained on clean data is used to detect speech endpoints. Using the weighted likelihood on the dimension-reduced features obtains comparable, even better, performance compared to the original full-dimensional features.
Penalized maximum likelihood estimation for generalized linear point processes
Hansen, Niels Richard
2010-01-01
A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood....... Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient...... of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper by extensions to multivariate and additive model specifications. The methods are implemented in the R-package ppstat....
On the local existence of maximal slicings in spherically symmetric spacetimes
Cordero-Carrion, Isabel; Ibanez, Jose MarIa; Morales-Lladosa, Juan Antonio, E-mail: isabel.cordero@uv.e, E-mail: jose.m.ibanez@uv.e, E-mail: antonio.morales@uv.e [Departamento de AstronomIa y Astrofisica, Universidad de Valencia, C/ Dr. Moliner 50, E-46100 Burjassot, Valencia (Spain)
2010-05-01
In this talk we show that any spherically symmetric spacetime locally admits a maximal spacelike slicing. The above condition is reduced to solving a decoupled system of first-order quasi-linear partial differential equations. The solution may be obtained analytically or numerically. We provide a general procedure to construct such maximal slicings.
Empirical likelihood-based evaluations of Value at Risk models
2009-01-01
Value at Risk (VaR) is a basic and very useful tool in measuring market risks. Numerous VaR models have been proposed in the literature. Therefore, it is of great interest to evaluate the efficiency of these models, and to select the most appropriate one. In this paper, we propose to use the empirical likelihood approach to evaluate these models. Simulation results and real life examples show that the empirical likelihood method is more powerful and more robust than some of the asymptotic methods available in the literature.
LIKELIHOOD ESTIMATION OF PARAMETERS USING SIMULTANEOUSLY MONITORED PROCESSES
Friis-Hansen, Peter; Ditlevsen, Ove Dalager
2004-01-01
The topic is maximum likelihood inference from several simultaneously monitored response processes of a structure to obtain knowledge about the parameters of other not monitored but important response processes when the structure is subject to some Gaussian load field in space and time. The considered example is a ship sailing with a given speed through a Gaussian wave field.
Unbinned likelihood maximisation framework for neutrino clustering in Python
Coenders, Stefan [Technische Universitaet Muenchen, Boltzmannstr. 2, 85748 Garching (Germany)
2016-07-01
Although an astrophysical neutrino flux has been detected with IceCube, the sources of astrophysical neutrinos remain hidden up to now. A detection of a neutrino point source would be a smoking gun for hadronic processes and the acceleration of cosmic rays. The search for neutrino sources has many degrees of freedom, for example steady versus transient, point-like versus extended sources, et cetera. Here, we introduce a Python framework designed for unbinned likelihood maximisations as used in searches for neutrino point sources by IceCube. By implementing source scenarios in a modular way, likelihood searches of various kinds can be implemented in a user-friendly way, without sacrificing speed and memory management.
Semiparametric maximum likelihood for nonlinear regression with measurement errors.
Suh, Eun-Young; Schafer, Daniel W
2002-06-01
This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.
Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis
Jansson, Michael; Nielsen, Morten Ørregaard
Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We...... show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations....
Modified maximum likelihood registration based on information fusion
Yongqing Qi; Zhongliang Jing; Shiqiang Hu
2007-01-01
The bias estimation of passive sensors is considered based on information fusion in multi-platform multisensor tracking system. The unobservable problem of bearing-only tracking in blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of unobservable problem in the blind spot and can estimate the biases more rapidly and accurately than maximum likelihood method. It is statistically efficient since the standard deviation of bias estimation errors meets the theoretical lower bounds.
Likelihood-based inference for clustered line transect data
Waagepetersen, Rasmus; Schweder, Tore
2006-01-01
The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...... is implemented using Markov chain Monte Carlo (MCMC) methods to obtain efficient estimates of spatial clustering parameters. Uncertainty is addressed using parametric bootstrap or by consideration of posterior distributions in a Bayesian setting. Maximum likelihood estimation and Bayesian inference are compared...
Approximated maximum likelihood estimation in multifractal random walks
Løvsletten, Ola
2011-01-01
We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
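A minimal version of such a Poisson-likelihood fit minimizes the Cash statistic C = 2 Σ_i (m_i - n_i ln m_i), equal to -2 log(Poisson likelihood) up to a constant; the power-law count model and binning below are illustrative assumptions, not a specific instrument's response.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
E = np.linspace(1.0, 10.0, 50)                 # energy bin centres (keV, assumed)

def model(params):
    norm, index = params
    return norm * E ** (-index)                # expected counts per bin

n = rng.poisson(model([200.0, 1.7]))           # simulated observed counts

def cash(params):
    m = np.maximum(model(params), 1e-12)
    return 2.0 * np.sum(m - n * np.log(m))     # Cash statistic (Poisson ML objective)

fit = minimize(cash, x0=[100.0, 1.0], method='Nelder-Mead')
print('ML estimates (norm, index):', np.round(fit.x, 3))
```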
An ethical justification of profit maximization
Koch, Carsten Allan
2010-01-01
In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing...... behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain form of profit (and...
Robust utility maximization in a discontinuous filtration
Jeanblanc, Monique; Ngoupeyou, Armand
2012-01-01
We study a problem of utility maximization under model uncertainty with information including jumps. We prove first that the value process of the robust stochastic control problem is described by the solution of a quadratic-exponential backward stochastic differential equation with jumps. Then, we establish a dynamic maximum principle for the optimal control of the maximization problem. The characterization of the optimal model and the optimal control (consumption-investment) is given via a forward-backward system which generalizes the result of Duffie and Skiadas (1994) and El Karoui, Peng and Quenez (2001) in the case of maximization of recursive utilities including model with jumps.
Maximal annuli with parallel planar boundaries in the 3-dimensional Lorentz-Minkowski space
Pyo, Juncheol
2009-01-01
We prove that maximal annuli in $\mathbb{L}^{3}$ bounded by circles, straight lines or cone points in a pair of parallel spacelike planes are part of either a Lorentzian catenoid or a Lorentzian Riemann's example. We show that under the same boundary condition, the same conclusion holds even when the maximal annuli have a planar end. Moreover, we extend Shiffman's convexity result to maximal annuli, but, using Perron's method, we construct a maximal annulus with a planar end for which the Shiffman-type result fails.
REDUNDANCY OF EMPIRICAL LIKELIHOOD
祝丽萍
2012-01-01
The concepts of redundancy and partial redundancy of empirical likelihood are introduced, and the corresponding equivalent conditions for redundancy are obtained, extending the redundancy results of GMM to empirical likelihood estimation. Simulation results also confirm the influence of the redundancy and partial redundancy of empirical likelihood on the efficiency of the estimators.
Empirical Likelihood Analysis of Longitudinal Data Involving Within-subject Correlation
Shuang HU; Lu LIN
2012-01-01
In this paper we use profile empirical likelihood to construct confidence regions for regression coefficients in a partially linear model with longitudinal data. The main contribution is that the within-subject correlation is considered to improve estimation efficiency. We suppose a semi-parametric structure for the covariances of the observation errors in each subject and employ both the first-order and the second-order moment conditions of the observation errors to construct the estimating equations. Although there are nonparametric estimators, the empirical log-likelihood ratio statistic still tends in distribution to a standard χ² variable with p degrees of freedom after the nuisance parameters are profiled away. A simulation study is also conducted.
WEIGHTED INEQUALITIES FOR THE GEOMETRIC MAXIMAL OPERATOR ON MARTINGALE SPACES
Anonymous
2008-01-01
In this article, the authors introduce two operators: the geometric maximal operator M_0 and the closely related limiting operator M_0^*. They give sufficient conditions under which the equality M_0 = M_0^* holds, and characterize the equivalent relations between the weak or strong type weighted inequality and the property of W_∞-weight or W_∞^*-weight for the geometric maximal operator in the case of a two-weight condition. It should be mentioned that the new operator, the geometric minimal operator, is equal to the limit of the minimal operator sequence, and the results for the minimal operator proved in [12] make the proof elegant and evident.
HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL
HR Division
2000-01-01
Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maximal, has changed significantly. An adjustment of the amounts of the reimbursement maximal and the fixed contributions is therefore necessary, as from 1 January 2000.
Reimbursement maximal: The revised reimbursement maximal will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN.
Fixed contributions: The fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions):
voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999)
voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999)
voluntarily insured no longer dependent child: 326,- (was 321...
Maximizing throughput by evaluating critical utilization paths
Weeda, P.J.
1991-01-01
Recently the relationship between batch structure, bottleneck machine and maximum throughput has been explored for serial, convergent and divergent process configurations consisting of two machines and three processes. In three of the seven possible configurations a multiple batch structure maximize
Relationship between maximal exercise parameters and individual ...
Relationship between maximal exercise parameters and individual time trial ... It is widely accepted that the ventilatory threshold (VT) is an important ... This study investigated whether the physiological responses during a 20km time trial (TT) ...
Maximizing the probability of detecting an electromagnetic counterpart of gravitational-wave events
Coughlin, Michael; Stubbs, Christopher
2016-10-01
Compact binary coalescences are a promising source of gravitational waves for second-generation interferometric gravitational-wave detectors such as Advanced LIGO and Advanced Virgo, and they are among the most promising sources for joint detection of electromagnetic (EM) and gravitational-wave (GW) emission. To maximize the science performed with these objects, it is essential to adopt a follow-up observing strategy that maximizes the likelihood of detecting the EM counterpart. We present a follow-up strategy that maximizes the counterpart detection probability, given a fixed investment of telescope time, and we show how the prior assumption on the luminosity function of the electromagnetic counterpart impacts the optimized follow-up strategy. Our results suggest that if the goal is to detect an EM counterpart from among a succession of GW triggers, the optimal strategy is to perform long integrations in the highest-likelihood regions. For certain assumptions about the source luminosity and mass distributions, we find an optimal time investment that is proportional to the 2/3 power of the surface density of the GW location probability on the sky. In the future, this analysis framework will benefit significantly from 3-dimensional localization probability.
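The quoted scaling translates directly into an allocation rule: under a fixed budget, give each sky tile observing time proportional to the 2/3 power of its localization probability. The tile probabilities and time budget below are made up for illustration.

```python
import numpy as np

p = np.array([0.40, 0.25, 0.15, 0.10, 0.06, 0.04])  # per-tile GW probability (assumed)
T_total = 10.0                                       # total hours available (assumed)

w = p ** (2.0 / 3.0)
t = T_total * w / w.sum()        # t_i proportional to p_i^(2/3), normalized to the budget
for pi, ti in zip(p, t):
    print(f'p={pi:.2f} -> {ti:.2f} h')
```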
Optimal technique for maximal forward rotating vaults in men's gymnastics.
Hiley, Michael J; Jackson, Monique I; Yeadon, Maurice R
2015-08-01
In vaulting a gymnast must generate sufficient linear and angular momentum during the approach and table contact to complete the rotational requirements in the post-flight phase. This study investigated the optimization of table touchdown conditions and table contact technique for the maximization of rotation potential for forwards rotating vaults. A planar seven-segment torque-driven computer simulation model of the contact phase in vaulting was evaluated by varying joint torque activation time histories to match three performances of a handspring double somersault vault by an elite gymnast. The closest matching simulation was used as a starting point to maximize post-flight rotation potential (the product of angular momentum and flight time) for a forwards rotating vault. It was found that the maximized rotation potential was sufficient to produce a handspring double piked somersault vault. The corresponding optimal touchdown configuration exhibited hip flexion in contrast to the hyperextended configuration required for maximal height. Increasing touchdown velocity and angular momentum lead to additional post-flight rotation potential. By increasing the horizontal velocity at table touchdown, within limits obtained from recorded performances, the handspring double somersault tucked with one and a half twists, and the handspring triple somersault tucked became theoretically possible. Copyright © 2015 Elsevier B.V. All rights reserved.
Simple technique for maximal thoracic muscle harvest.
Marshall, M Blair; Kaiser, Larry R; Kucharczuk, John C
2004-04-01
We present a modification of technique for standard muscle flap harvest, the placement of cutaneous traction sutures. This technique allows for maximal dissection of the thoracic muscles even through minimal incisions. Through improved exposure and traction, complete dissection of the muscle bed can be performed and the tissue obtained maximized. Because more muscle bulk is obtained with this technique, the need for a second muscle may be prevented.
MAXIMAL POINTS OF A REGULAR TRUTH FUNCTION
Every canonical linearly separable truth function is a regular function, but not every regular truth function is linearly separable. The most promising method of determining which of the regular truth functions are linearly separable requires finding their maximal and minimal points. In this report a quick, systematic method of finding the maximal points of any regular truth function in terms of its arithmetic invariants is developed. (Author)
Maximal Subgroups of Skew Linear Groups
M. Mahdavi-Hezavehi
2002-01-01
Let D be an infinite division algebra of finite dimension over its centre Z(D) = F, and n a positive integer. The structure of maximal subgroups of skew linear groups is investigated. In particular, assume N is a normal subgroup of GLn(D) and M is a maximal subgroup of N containing Z(N). It is shown that if M/Z(N) is finite, then N is central.
Additive Approximation Algorithms for Modularity Maximization
Kawase, Yasushi; Matsui, Tomomi; Miyauchi, Atsushi
2016-01-01
The modularity is a quality function in community detection, which was introduced by Newman and Girvan (2004). Community detection in graphs is now often conducted through modularity maximization: given an undirected graph $G=(V,E)$, we are asked to find a partition $\\mathcal{C}$ of $V$ that maximizes the modularity. Although numerous algorithms have been developed to date, most of them have no theoretical approximation guarantee. Recently, to overcome this issue, the design of modularity max...
Computing a Clique Tree with the Algorithm Maximal Label Search
Anne Berry
2017-01-01
The algorithm MLS (Maximal Label Search) is a graph search algorithm that generalizes the algorithms Maximum Cardinality Search (MCS), Lexicographic Breadth-First Search (LexBFS), Lexicographic Depth-First Search (LexDFS) and Maximal Neighborhood Search (MNS). On a chordal graph, MLS computes a PEO (perfect elimination ordering) of the graph. We show how the algorithm MLS can be modified to compute a PMO (perfect moplex ordering), as well as a clique tree and the minimal separators of a chordal graph. We give a necessary and sufficient condition on the labeling structure of MLS for the beginning of a new clique in the clique tree to be detected by a condition on labels. MLS is also used to compute a clique tree of the complement graph, and new cliques in the complement graph can be detected by a condition on labels for any labeling structure. We provide a linear-time algorithm computing a PMO and the corresponding generators of the maximal cliques and minimal separators of the complement graph. On a non-chordal graph, the algorithm MLSM, a graph search algorithm computing an MEO (minimal elimination ordering) and a minimal triangulation of the graph, is used to compute an atom tree of the clique minimal separator decomposition of any graph.
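For a feel of the MLS family, the sketch below implements its best-known instance, Maximum Cardinality Search: repeatedly visit an unnumbered vertex with the most numbered neighbours; on a chordal graph the reverse of the visit order is a PEO. The example graph is an assumption.

```python
def mcs_peo(adj):
    """Maximum Cardinality Search. adj: dict mapping vertex -> set of neighbours.
    On a chordal graph, the returned order is a perfect elimination ordering."""
    weight = {v: 0 for v in adj}     # number of already-visited neighbours
    order = []
    unnumbered = set(adj)
    while unnumbered:
        v = max(unnumbered, key=lambda u: weight[u])  # ties broken arbitrarily
        order.append(v)
        unnumbered.remove(v)
        for w in adj[v]:
            if w in unnumbered:
                weight[w] += 1
    return order[::-1]               # reverse of the visit order is the PEO

# A chordal example: a 4-cycle with a chord.
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
print(mcs_peo(adj))
```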
A Unified Maximum Likelihood Approach to Document Retrieval.
Bodoff, David; Enache, Daniel; Kambil, Ajit; Simon, Gary; Yukhimets, Alex
2001-01-01
Addresses the query- versus document-oriented dichotomy in information retrieval. Introduces a maximum likelihood approach to utilizing feedback data that can be used to construct a concrete object function that estimates both document and query parameters in accordance with all available feedback data. (AEF)
Profile likelihood maps of a 15-dimensional MSSM
Strege, C.; Bertone, G.; Besjes, G.J.; Caron, S.; Ruiz de Austri, R.; Strubig, A.; Trotta, R.
2014-01-01
We present statistically convergent profile likelihood maps obtained via global fits of a phenomenological Minimal Supersymmetric Standard Model with 15 free parameters (the MSSM-15), based on over 250M points. We derive constraints on the model parameters from direct detection limits on dark matter
MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR
SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM
1994-01-01
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the est
A KULLBACK-LEIBLER EMPIRICAL LIKELIHOOD INFERENCE FOR CENSORED DATA
SHI Jian; Tai-Shing Lau
2002-01-01
In this paper, two kinds of Kullback-Leibler criteria with appropriate constraints are proposed to construct empirical likelihood confidence intervals for the mean of right censored data. It is shown that one of the criteria is equivalent to Adimari's (1997) procedure, and the other shares the same asymptotic behavior.
Community Level Disadvantage and the Likelihood of First Ischemic Stroke
Bernadette Boden-Albala
2012-01-01
Background and Purpose. Residing in “disadvantaged” communities may increase morbidity and mortality independent of individual social resources and biological factors. This study evaluates the impact of population-level disadvantage on incident ischemic stroke likelihood in a multiethnic urban population. Methods. A population-based case-control study was conducted in an ethnically diverse community of New York. First ischemic stroke cases and community controls were enrolled and a stroke risk assessment performed. Data regarding population-level economic indicators for each census tract were assembled using geocoding. Census variables were also grouped together to define a broader measure of collective disadvantage. We evaluated the likelihood of stroke for population-level variables controlling for individual social factors (education, social isolation, and insurance) and vascular risk factors. Results. We age-, sex-, and race-ethnicity-matched 687 incident ischemic stroke cases to 1153 community controls. The mean age was 69 years: 60% women; 22% white, 28% black, and 50% Hispanic. After adjustment, the index of community-level disadvantage (OR 2.0, 95% CI 1.7–2.1) was associated with increased stroke likelihood overall and among all three race-ethnic groups. Conclusion. Social inequalities measured by census tract data, including indices of community disadvantage, confer a significantly increased likelihood of ischemic stroke independent of conventional risk factors.
Heteroscedastic one-factor models and marginal maximum likelihood estimation
Hessen, D.J.; Dolan, C.V.
2009-01-01
In the present paper, a general class of heteroscedastic one-factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one-dimensional factor score. A marginal maximum likelihood procedure for parameter estimati
Bias Correction for Alternating Iterative Maximum Likelihood Estimators
Gang YU; Wei GAO; Ningzhong SHI
2013-01-01
In this paper, we give a definition of the alternating iterative maximum likelihood estimator (AIMLE), which is a biased estimator. Furthermore, we adjust the AIMLE to obtain asymptotically unbiased and consistent estimators by using a bootstrap iterative bias correction method as in Kuk (1995). Two examples and simulation results are reported to illustrate the performance of the bias correction for the AIMLE.
Maximum likelihood Jukes-Cantor triplets: analytic solutions.
Chor, Benny; Hendy, Michael D; Snir, Sagi
2006-03-01
Maximum likelihood (ML) is a popular method for inferring a phylogenetic tree of the evolutionary relationship of a set of taxa, from observed homologous aligned genetic sequences of the taxa. Generally, the computation of the ML tree is based on numerical methods, which in a few cases are known to converge to a local maximum on a tree, which is suboptimal. The extent of this problem is unknown; one approach is to attempt to derive algebraic equations for the likelihood equation and find the maximum points analytically. This approach has so far only been successful in the very simplest cases, of three or four taxa under the Neyman model of evolution of two-state characters. In this paper we extend this approach, for the first time, to four-state characters, the Jukes-Cantor model under a molecular clock, on a tree T on three taxa, a rooted triple. We employ spectral methods (Hadamard conjugation) to express the likelihood function parameterized by the path-length spectrum. Taking partial derivatives, we derive a set of polynomial equations whose simultaneous solution contains all critical points of the likelihood function. Using tools of algebraic geometry (the resultant of two polynomials) in the computer algebra package Maple, we are able to find all turning points analytically. We then employ this method on real sequence data and obtain realistic results on the primate-rodent divergence time.
A Monte Carlo Evaluation of Maximum Likelihood Multidimensional Scaling Methods
Bijmolt, T.H.A.; Wedel, M.
1996-01-01
We compare three alternative Maximum Likelihood Multidimensional Scaling methods for pairwise dissimilarity ratings, namely MULTISCALE, MAXSCAL, and PROSCAL, in a Monte Carlo study. The three MLMDS methods recover the true configurations very well. The recovery of the true dimensionality depends on the
Maximum likelihood estimation of phase-type distributions
Esparza, Luz Judith R
This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions ...
Likelihood Inference for a Nonstationary Fractional Autoregressive Model
Johansen, Søren; Nielsen, Morten Ørregaard
This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d - b, where d >= b > 1/2 are parameters to be estimated. We model the data X_1, ..., X_T given the initial...
Composite likelihood and two-stage estimation in family studies
Andersen, Elisabeth Anne Wreford
2004-01-01
In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...
Trimmed Likelihood-based Estimation in Binary Regression Models
Cizek, P.
2005-01-01
The binary-choice regression models such as probit and logit are typically estimated by the maximum likelihood method. To improve its robustness, various M-estimation based procedures were proposed, which however require bias corrections to achieve consistency and their resistance to outliers is rela
Planck 2013 results. XV. CMB power spectra and likelihood
Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Hurier, G.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Le Jeune, M.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P.B.; Lindholm, V.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Orieux, F.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; 
Wehus, I.K.; White, M.; White, S.D.M.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-01-01
We present the Planck likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations. We use this likelihood to derive the Planck CMB power spectrum over three decades in l, covering 2 <= l <= 2500. For l >= 50, we employ a correlated Gaussian likelihood approximation based on angular cross-spectra derived from the 100, 143 and 217 GHz channels. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on cosmological parameters. We find good internal agreement among the high-l cross-spectra with residuals of a few uK^2 at l <= 1000. We compare our results with foreground-cleaned CMB maps, and with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. The best-fit LCDM cosmology is in excellent agreement with preliminary Planck polarisation spectra. The standard LCDM cosmology is well constrained b...
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
Reconceptualizing Social Influence in Counseling: The Elaboration Likelihood Model.
McNeill, Brian W.; Stoltenberg, Cal D.
1989-01-01
Presents Elaboration Likelihood Model (ELM) of persuasion (a reconceptualization of the social influence process) as alternative model of attitude change. Contends ELM unifies conflicting social psychology results and can potentially account for inconsistent research findings in counseling psychology. Provides guidelines on integrating…
Maximum likelihood estimation of the attenuated ultrasound pulse
Rasmussen, Klaus Bolding
1994-01-01
The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...
Maximal Frequent Itemset Generation Using Segmentation Approach
M.Rajalakshmi
2011-09-01
Finding frequent itemsets in a data source is a fundamental operation behind Association Rule Mining. Generally, many algorithms use either the bottom-up or top-down approaches for finding these frequent itemsets. When the length of frequent itemsets to be found is large, the traditional algorithms find all the frequent itemsets from 1-length to n-length, which is a difficult process. This problem can be solved by mining only the Maximal Frequent Itemsets (MFS). Maximal Frequent Itemsets are frequent itemsets which have no proper frequent superset. Thus, the generation of only maximal frequent itemsets reduces the number of itemsets and also the time needed for the generation of all frequent itemsets, as each maximal itemset of length m implies the presence of 2^m - 2 frequent itemsets. Furthermore, mining only maximal frequent itemsets is sufficient in many data mining applications like minimal key discovery and theory extraction. In this paper, we suggest a novel method for finding the maximal frequent itemset from huge data sources using the concept of segmentation of the data source and prioritization of segments. Empirical evaluation shows that this method outperforms various other known methods.
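As a concrete (and deliberately naive) illustration of the object being mined, not of the paper's segmentation method, the following Python sketch enumerates maximal frequent itemsets by brute force; it also makes the 2^m - 2 subset count tangible. Data and thresholds are invented.

```python
# Brute-force maximal frequent itemsets (MFS): frequent itemsets with no
# frequent proper superset.
from itertools import combinations

def maximal_frequent_itemsets(transactions, min_support):
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            support = sum(1 for t in transactions if set(cand) <= t)
            if support >= min_support:
                frequent.append(frozenset(cand))
    # Keep only itemsets with no frequent proper superset.
    return [f for f in frequent if not any(f < g for g in frequent)]

transactions = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}]
print(maximal_frequent_itemsets(transactions, min_support=2))
# [frozenset({'a', 'b', 'c'})] -- its 2^3 - 2 = 6 proper non-empty subsets
# are all frequent too, so reporting only the maximal itemset suffices.
```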
Expectation-Maximization Binary Clustering for Behavioural Annotation.
Garriga, Joan; Palmer, John R B; Oltra, Aitana; Bartumeus, Frederic
2016-01-01
The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis.
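The closed-form E- and M-steps that make the computational cost reasonable can be illustrated with a bare-bones Gaussian-mixture EM in one dimension (a sketch of classic EMC, of which EMbC is a binary-clustering variant; the initialization and fixed iteration count are simplifications).

```python
# Sketch of EM for a two-component 1-D Gaussian mixture (classic EMC).
import numpy as np

def em_gmm_1d(x, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=2)                 # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities; the 1/sqrt(2*pi) factor cancels on normalizing.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights, means and variances.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])
print(em_gmm_1d(x))  # means near (0, 5), sds near 1, weights near 0.5
```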
Exclusion probabilities and likelihood ratios with applications to mixtures.
Slooten, Klaas-Jan; Egeland, Thore
2016-01-01
The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework, including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
Predicting crash likelihood and severity on freeways with real-time loop detector data.
Xu, Chengcheng; Tarko, Andrew P; Wang, Wei; Liu, Pan
2013-08-01
Real-time crash risk prediction using traffic data collected from loop detector stations is useful in dynamic safety management systems aimed at improving traffic safety through application of proactive safety countermeasures. The major drawback of most of the existing studies is that they focus on the crash risk without consideration of crash severity. This paper presents an effort to develop a model that predicts the crash likelihood at different levels of severity with a particular focus on severe crashes. The crash data and traffic data used in this study were collected on the I-880 freeway in California, United States. This study considers three levels of crash severity: fatal/incapacitating injury crashes (KA), non-incapacitating/possible injury crashes (BC), and property-damage-only crashes (PDO). The sequential logit model was used to link the likelihood of crash occurrences at different severity levels to various traffic flow characteristics derived from detector data. The elasticity analysis was conducted to evaluate the effect of the traffic flow variables on the likelihood of crash and its severity. The results show that the traffic flow characteristics contributing to crash likelihood were quite different at different levels of severity. The PDO crashes were more likely to occur under congested traffic flow conditions with highly variable speed and frequent lane changes, while the KA and BC crashes were more likely to occur under less congested traffic flow conditions. High speed, coupled with a large speed difference between adjacent lanes under uncongested traffic conditions, was found to increase the likelihood of severe crashes (KA). This study applied the 20-fold cross-validation method to estimate the prediction performance of the developed models. The validation results show that the model's crash prediction performance at each severity level was satisfactory. The findings of this study can be used to predict the probabilities of crash at
张戈
2015-01-01
We study the issue raised by Reference [3]. Under appropriate assumptions and other smoothness conditions, we prove, by a simpler method, the asymptotic existence of solutions to the quasi-likelihood equations in the quasi-likelihood nonlinear model, and establish the rate at which the solution converges to the true value.
Rijmen, Frank
2009-01-01
Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…
Lin, Feng-Chang; Zhu, Jun
2012-01-01
We develop continuous-time models for the analysis of environmental or ecological monitoring data such that subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that the statistical inference using partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a plantation of Wisconsin.
Computation of the Likelihood in Biallelic Diffusion Models Using Orthogonal Polynomials
Claus Vogl
2014-11-01
In population genetics, parameters describing forces such as mutation, migration and drift are generally inferred from molecular data. Lately, approximate methods based on simulations and summary statistics have been widely applied for such inference, even though these methods waste information. In contrast, probabilistic methods of inference can be shown to be optimal, if their assumptions are met. In genomic regions where recombination rates are high relative to mutation rates, polymorphic nucleotide sites can be assumed to evolve independently from each other. The distribution of allele frequencies at a large number of such sites has been called the “allele-frequency spectrum” or “site-frequency spectrum” (SFS). Conditional on the allelic proportions, the likelihoods of such data can be modeled as binomial. A simple model representing the evolution of allelic proportions is the biallelic mutation-drift or mutation-directional selection-drift diffusion model. With series of orthogonal polynomials, specifically Jacobi and Gegenbauer polynomials, or the related spheroidal wave function, the diffusion equations can be solved efficiently. In the neutral case, the product of the binomial likelihoods with the sum of such polynomials leads to finite series of polynomials, i.e., relatively simple equations, from which the exact likelihoods can be calculated. In this article, the use of orthogonal polynomials for inferring population genetic parameters is investigated.
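A minimal sketch of the binomial building block described above, under the simplifying assumption of a single allelic proportion shared across independent sites (the paper instead works with the diffusion model's distribution of proportions):

```python
# Toy binomial SFS log-likelihood; a shared p across sites is a simplification.
from math import comb, log

def sfs_loglik(counts, n, p):
    """counts: derived-allele counts at independent sites in a sample of n."""
    return sum(log(comb(n, y)) + y * log(p) + (n - y) * log(1 - p)
               for y in counts)

print(sfs_loglik([2, 5, 3], n=10, p=0.3))   # compare across p to profile p
```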
MLGA: A SAS Macro to Compute Maximum Likelihood Estimators via Genetic Algorithms
Francisco Juretig
2015-08-01
Nonlinear regression is usually implemented in SAS either by using PROC NLIN or PROC NLMIXED. Apart from the model structure, initial values need to be specified for each parameter. And after some convergence criteria are fulfilled, the second order conditions need to be analyzed. But numerical problems are expected to appear in case the likelihood is nearly discontinuous, has plateaus, multiple maxima, or the initial values are distant from the true parameter estimates. The usual solution consists of using a grid, and then choosing the set of parameters reporting the highest log-likelihood. However, if the amount of parameters or grid points is large, the computational burden will be excessive. Furthermore, there is no guarantee that, as the number of grid points increases, an equal or better set of points will be found. Genetic algorithms can overcome these problems by replicating how nature optimizes its processes. The MLGA macro is presented; it solves a maximum likelihood estimation problem under normality through PROC GA, and the resulting values can later be used as the starting values in SAS nonlinear procedures. As will be demonstrated, this macro can avoid the usual trial and error approach that is needed when convergence problems arise. Finally, it will be shown how this macro can deal with complicated restrictions involving multiple parameters.
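PROC GA itself is SAS-specific, but the underlying idea, using a genetic algorithm to locate good starting values for a likelihood maximizer, can be sketched in a few lines of Python (an illustrative toy with a normal likelihood; population size, crossover and mutation rules are arbitrary choices):

```python
# Toy genetic algorithm maximizing a normal log-likelihood in (mu, log sigma).
import numpy as np

def loglik(theta, x):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                     # keeps sigma > 0
    return -len(x) * np.log(sigma) - 0.5 * np.sum(((x - mu) / sigma) ** 2)

def ga_mle(x, pop_size=50, n_gen=200, sigma_mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 3.0, size=(pop_size, 2))         # initial population
    for _ in range(n_gen):
        fit = np.array([loglik(ind, x) for ind in pop])
        top = pop[np.argsort(fit)[-pop_size // 2:]]        # selection: best half
        parents = top[rng.integers(0, len(top), size=(pop_size, 2))]
        children = parents.mean(axis=1)                    # crossover: averaging
        pop = children + rng.normal(0.0, sigma_mut, children.shape)  # mutation
    fit = np.array([loglik(ind, x) for ind in pop])
    return pop[np.argmax(fit)]                             # best individual

x = np.random.default_rng(1).normal(2.0, 0.5, size=500)
mu_hat, log_sigma_hat = ga_mle(x)
print(mu_hat, np.exp(log_sigma_hat))   # roughly (2.0, 0.5); good starting values
```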
Trujillo, B. M.
1986-01-01
This paper presents the technique and results of maximum likelihood estimation used to determine lift and drag characteristics of the Space Shuttle Orbiter. Maximum likelihood estimation uses measurable parameters to estimate nonmeasurable parameters. The nonmeasurable parameters for this case are elements of a nonlinear, dynamic model of the orbiter. The estimated parameters are used to evaluate a cost function that computes the differences between the measured and estimated longitudinal parameters. The case presented is a dynamic analysis. This places less restriction on pitching motion and can provide additional information about the orbiter such as lift and drag characteristics at conditions other than trim, instrument biases, and pitching moment characteristics. In addition, an output of the analysis is an estimate of the values for the individual components of lift and drag that contribute to the total lift and drag. The results show that maximum likelihood estimation is a useful tool for analysis of Space Shuttle Orbiter performance and is also applicable to parameter analysis of other types of aircraft.
1984-05-01
The disposal of radioactive wastes in deep geologic formations provides a means of isolating the waste from people until the radioactivity has decayed to safe levels. However, isolating people from the wastes is a different problem, since we do not know what the future condition of society will be. The Human Interference Task Force was convened by the US Department of Energy to determine whether reasonable means exist (or could be developed) to reduce the likelihood of future humans unintentionally intruding on radioactive waste isolation systems. The task force concluded that significant reductions in the likelihood of human interference could be achieved, for perhaps thousands of years into the future, if appropriate steps are taken to communicate the existence of the repository. Consequently, for two years the task force directed most of its study toward the area of long-term communication. Methods are discussed for achieving long-term communication by using permanent markers and widely disseminated records, with various steps taken to provide multiple levels of protection against loss, destruction, and major language/societal changes. Also developed is the concept of a universal symbol to denote Caution - Biohazardous Waste Buried Here. If used for the thousands of non-radioactive biohazardous waste sites in this country alone, a symbol could transcend generations and language changes, thereby vastly improving the likelihood of successful isolation of all buried biohazardous wastes.
Welfare-maximizing and revenue-maximizing tariffs with a few domestic firms
Bruno Larue; Jean-Philippe Gervais
2002-01-01
In this paper we compare the orthodox optimal tariff formula with the appropriate welfare-maximizing tariff when there are a few producing or importing firms. The welfare-maximizing tariff can be very low, or even negative in some cases, while in others it can even exceed the maximum-revenue tariff. The relationship between the welfare-maximizing tariff and the number of firms need not be monotonically increasing, because the tariff is not strictly used to internalize the terms-of-trade externality...
On entire f-maximal graphs in the Lorentzian product Gn ×R1
An, H. V. Q.; Cuong, D. V.; Duyen, N. T. M.; Hieu, D. T.; Nam, T. L.
2017-04-01
In the Lorentzian product Gn × R1, we give a comparison theorem between the f-volume of an entire f-maximal graph and the f-volume of the hyperbolic H^r_+ under the condition that the gradient of the function defining the graph is bounded away from 1. This condition comes from an example of a non-planar entire f-maximal graph in Gn × R1 and is equivalent to the hyperbolic angle function of the graph being bounded. As a consequence, we obtain a Calabi-Bernstein type theorem for f-maximal graphs in Gn × R1.
Makram KRIT
2016-01-01
This paper presents several iterative methods based on the Stochastic Expectation-Maximization (EM) methodology for estimating parametric reliability models from random lifetime data. The methodology is related to Maximum Likelihood Estimates (MLE) in the case of missing data. A bathtub form of the failure intensity of a repairable system's reliability is presented, and the estimation of its parameters is considered through the EM algorithm. Field failure data from an industrial site are used to fit the model. Finally, the interval estimation based on large-sample theory in the literature is discussed, and the actual coverage probabilities of these confidence intervals are examined using the Monte Carlo simulation method.
Acceleration of Expectation-Maximization algorithm for length-biased right-censored data.
Chan, Kwun Chuen Gary
2017-01-01
Vardi's Expectation-Maximization (EM) algorithm is frequently used for computing the nonparametric maximum likelihood estimator of length-biased right-censored data, which does not admit a closed-form representation. The EM algorithm may converge slowly, particularly for heavily censored data. We studied two algorithms for accelerating the convergence of the EM algorithm, based on iterative convex minorant and Aitken's delta squared process. Numerical simulations demonstrate that the acceleration algorithms converge more rapidly than the EM algorithm in terms of number of iterations and actual timing. The acceleration method based on a modification of Aitken's delta squared performed the best under a variety of settings.
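A hedged sketch of the second acceleration idea: Aitken's delta-squared extrapolation, applied here in restarted (Steffensen-style) form to a generic linearly converging fixed-point iteration; in the EM use case, f would be one EM update of the parameter. The function and tolerances below are illustrative.

```python
# Restarted Aitken delta-squared acceleration of a fixed-point iteration
# x <- f(x); for EM, f is one EM update of the parameter.
def aitken(f, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x1, x2 = f(x), f(f(x))
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < tol:            # converged; extrapolation is unstable
            return x2
        x = x - (x1 - x) ** 2 / denom   # Aitken's delta-squared step
    return x

import math
# Example: the linearly converging iteration x <- cos(x) is accelerated sharply.
print(aitken(math.cos, 1.0))  # ~0.739085, the fixed point of cos
```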
Zhou, Si-Da; Heylen, Ward; Sas, Paul; Liu, Li
2014-05-01
This paper investigates the problem of modal parameter estimation of time-varying structures under unknown excitation. A time-frequency-domain maximum likelihood estimator of modal parameters for linear time-varying structures is presented by adapting the frequency-domain maximum likelihood estimator to the time-frequency domain. The proposed estimator is parametric, that is, the linear time-varying structures are represented by a time-dependent common-denominator model. To adapt the existing frequency-domain estimator for time-invariant structures to the time-frequency methods for time-varying cases, a hybrid basis function combining orthogonal polynomials and z-domain mapping is presented, which has advantageous numerical conditioning and makes it convenient to calculate the modal parameters. A series of numerical examples evaluates and illustrates the performance of the proposed maximum likelihood estimator, and a group of laboratory experiments further validates the proposed estimator.
Shih, Weichung Joe; Li, Gang; Wang, Yining
2016-03-01
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one.
Maximizing production rates of the Linde Hampson machine
Maytal, B.-Z.
2006-01-01
In contrast to the ideal case of an unlimited-size recuperator, any real Linde-Hampson machine with a finite-size recuperator can be optimized to reach its extreme rates of performance. The group of cryocoolers sharing the same size recuperator is optimized in closed form by determining the corresponding flow rate which maximizes the rate of cold production. For a similar group of liquefiers, an optimal flow rate is derived to maximize the rate of production of liquid cryogen. The group of cryocoolers sharing a constant and given flow rate is optimized by shortening the recuperator to reach maximum compactness, measured by the cooling power per unit size of the recuperator. The optimum conditions are developed for nitrogen and argon. The relevance of this analysis is discussed in the context of the practice of fast-cooldown Joule-Thomson cryocooling.
Maximizing Cloud Providers Revenues via Energy Aware Allocation Policies
Mazzucco, Michele; Deters, Ralph
2011-01-01
Cloud providers, like Amazon, offer their data centers' computational and storage capacities for lease to paying customers. High electricity consumption, associated with running a data center, not only reflects on its carbon footprint, but also increases the costs of running the data center itself. This paper addresses the problem of maximizing the revenues of Cloud providers by trimming down their electricity costs. As a solution, allocation policies based on dynamically powering servers on and off are introduced and evaluated. The policies aim at satisfying the conflicting goals of maximizing the users' experience while minimizing the amount of consumed electricity. The results of numerical experiments and simulations are described, showing that the proposed scheme performs well under different traffic conditions.
de Molière, Laura; Harris, Adam J L
2016-04-01
Previous research suggests that people systematically overestimate the occurrence of both positive and negative events, compared with neutral future events, and that these biases are due to a misattribution of arousal elicited by utility (stake-likelihood hypothesis; SLH; Vosgerau, 2010). However, extant research has provided only indirect support for these arousal misattribution processes. In the present research, we initially aimed to provide a direct test of the SLH by measuring arousal with galvanic skin responses to examine the mediating role of arousal. We observed no evidence that measured arousal mediated the impact of utility on probability estimates. Given the lack of direct support for the SLH in Experiment 1, Experiments 2-5 aimed to assess the SLH by replicating some of the original findings that provided support for arousal misattribution as a mechanism. Despite our best efforts to create experimental conditions under which we would be able to demonstrate the stake-likelihood effect, we were unable to replicate previous results, with a Bayesian meta-analysis demonstrating support for the null hypothesis. We propose that accounts based on imaginability and loss function asymmetry are currently better candidate explanations for the influence of outcome utility on probability estimates.
Parallel Likelihood Function Evaluation on Heterogeneous Many-core Systems
Jarp, Sverre; Leduc, Julien; Nowak, Andrzej; Sneen Lindal, Yngve
2011-01-01
This paper describes a parallel implementation that allows the evaluations of the likelihood function for data analysis methods to run cooperatively on heterogeneous computational devices (i.e. CPU and GPU) belonging to a single computational node. The implementation is able to split and balance the workload needed for the evaluation of the likelihood function in corresponding sub-workloads to be executed in parallel on each computational device. The CPU parallelization is implemented using OpenMP, while the GPU implementation is based on OpenCL. The comparison of the performance of these implementations for different configurations and different hardware systems are reported. Tests are based on a real data analysis carried out in the high energy physics community.
Maximum-likelihood fits to histograms for improved parameter estimation
Fowler, Joseph W
2013-01-01
Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
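The remedy described above can be sketched generically: minimize the Poisson negative log-likelihood of the bin counts rather than chi^2. The model, simulated data, and choice of optimizer below are illustrative, not the authors' modified Levenberg-Marquardt procedure.

```python
# Poisson maximum-likelihood fit of a Gaussian peak to histogram counts.
import numpy as np
from scipy.optimize import minimize

def poisson_nll(params, counts, centers):
    amp, mean, width = params
    mu = amp * np.exp(-0.5 * ((centers - mean) / width) ** 2)
    mu = np.maximum(mu, 1e-12)                    # guard the log
    # -log L = sum(mu - n log mu) + const; unbiased where naive chi^2 is not
    return np.sum(mu - counts * np.log(mu))

rng = np.random.default_rng(0)
centers = np.linspace(-5, 5, 50)
truth = 100 * np.exp(-0.5 * ((centers - 0.5) / 1.2) ** 2)
counts = rng.poisson(truth)                       # simulated bin counts

res = minimize(poisson_nll, x0=[80.0, 0.0, 1.0], args=(counts, centers),
               method="Nelder-Mead")
print(res.x)   # close to the true (100, 0.5, 1.2)
```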
Measures of family resemblance for binary traits: likelihood based inference.
Shoukri, Mohamed M; ElDali, Abdelmoneim; Donner, Allan
2012-07-24
Detection and estimation of measures of familial aggregation is considered the first step in establishing whether a certain disease has a genetic component. Such measures are usually estimated from observational studies on siblings, parent-offspring, extended pedigrees or twins. When the trait of interest is quantitative (e.g. blood pressure, body mass index, blood glucose levels, etc.), efficient likelihood estimation of such measures is feasible under the assumption of multivariate normality of the distributions of the traits. In this case the intra-class and inter-class correlations are used to assess the similarities among family members. When the trait is measured on the binary scale, we establish full likelihood inference on such measures among siblings, parents, and parent-offspring. We illustrate the methodology on nuclear family data where the trait is the presence or absence of hypertension.
Applications of the Likelihood Theory in Finance: Modelling and Pricing
Janssen, Arnold
2012-01-01
This paper discusses the connection between mathematical finance and statistical modelling, which turns out to be more than a formal mathematical correspondence. We show how common results and notions in statistics, and their meaning, can be translated to the world of mathematical finance and vice versa. A lot of similarities can be expressed in terms of LeCam's theory for statistical experiments, which is the theory of the behaviour of likelihood processes. For positive prices, the arbitrage-free financial assets fit into filtered experiments. It is shown that they are given by filtered likelihood ratio processes. From the statistical point of view, martingale measures, completeness and pricing formulas are revisited. The pricing formulas for various options are connected with the power functions of tests. For instance, the Black-Scholes price of a European option has an interpretation as the Bayes risk of a Neyman-Pearson test. Under contiguity, the convergence of financial experiments and option prices ...
GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS
S. Sridevi
2013-02-01
Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and obscures pixel intensities. In fetal ultrasound images, edges and local fine details are essential for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter therefore has to be devised that proficiently suppresses speckle noise while preserving these features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical tools as tuning parameters and using different shapes of quadrilateral kernels to estimate the noise-free pixel from its neighborhood. The performance of various filters, namely Median, Kuwahara, Frost, Homogeneous mask filter and Rayleigh maximum likelihood filter, is compared with the proposed filter in terms of PSNR and image profile. The proposed filter surpasses the conventional filters.
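For orientation only, here is the baseline Rayleigh maximum-likelihood idea in a plain square window, the starting point that the paper generalizes with quadrilateral kernels and statistical tuning parameters (which are not reproduced here):

```python
# Baseline Rayleigh ML despeckling in a square window (no quadrilateral kernels).
import numpy as np

def rayleigh_ml_filter(img, half=1):
    out = np.empty(img.shape, dtype=float)
    padded = np.pad(img.astype(float), half, mode="reflect")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + 2 * half + 1, j:j + 2 * half + 1]
            sigma = np.sqrt(np.mean(win ** 2) / 2.0)   # Rayleigh MLE of the scale
            out[i, j] = sigma * np.sqrt(np.pi / 2.0)   # Rayleigh mean as estimate
    return out

speckled = np.random.default_rng(0).rayleigh(scale=10.0, size=(32, 32))
print(rayleigh_ml_filter(speckled).mean())   # near 10 * sqrt(pi/2) ~ 12.53
```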
Smoothed log-concave maximum likelihood estimation with applications
Chen, Yining
2011-01-01
We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.
Gaussian maximum likelihood and contextual classification algorithms for multicrop classification
Di Zenzo, Silvano; Bernstein, Ralph; Kolsky, Harwood G.; Degloria, Stephen D.
1987-01-01
The paper reviews some of the ways in which context has been handled in the remote-sensing literature, and additional possibilities are introduced. The problem of computing exhaustive and normalized class-membership probabilities from the likelihoods provided by the Gaussian maximum likelihood classifier (to be used as initial probability estimates to start relaxation) is discussed. An efficient implementation of probabilistic relaxation is proposed, suiting the needs of actual remote-sensing applications. A modified fuzzy-relaxation algorithm using generalized operations between fuzzy sets is presented. Combined use of the two relaxation algorithms is proposed to exploit context in multispectral classification of remotely sensed data. Results on both one artificially created image and one MSS data set are reported.
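The first step discussed, converting per-class Gaussian likelihoods into exhaustive, normalized class-membership probabilities that can seed relaxation, can be sketched as follows (a toy two-class example; all parameters are invented):

```python
# Normalized class-membership probabilities from Gaussian likelihoods.
import numpy as np

def class_posteriors(x, means, covs, priors):
    """x: feature vector of one pixel; means/covs/priors: per-class parameters."""
    liks = []
    for mu, cov, prior in zip(means, covs, priors):
        d = x - mu
        quad = d @ np.linalg.solve(cov, d)                # Mahalanobis term
        norm = np.sqrt(np.linalg.det(2.0 * np.pi * cov))  # Gaussian normalizer
        liks.append(prior * np.exp(-0.5 * quad) / norm)
    liks = np.array(liks)
    return liks / liks.sum()        # exhaustive and normalized over classes

means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), 2.0 * np.eye(2)]
print(class_posteriors(np.array([1.0, 1.5]), means, covs, priors=[0.5, 0.5]))
```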
A Weighted Likelihood Ratio of Two Related Negative Hypergeometric Distributions
Titi Obilade
2004-01-01
In this paper we consider some related negative hypergeometric distributions arising from the problem of sampling without replacement from an urn containing balls of different colours and in different proportions, but stopping only after some specific number of balls of different colours have been obtained. With the aid of some simple recurrence relations and identities we obtain, in the case of two colours, the moments for the maximum negative hypergeometric distribution, the minimum negative hypergeometric distribution, the likelihood ratio negative hypergeometric distribution and consequently the likelihood proportional negative hypergeometric distribution. To the extent that the sampling scheme is applicable to modelling data, as illustrated with a biological example, and in fact many situations of estimating Bernoulli parameters for binary traits within a finite population, these are important first-step results.
A model independent safeguard for unbinned Profile Likelihood
Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro; Budnik, Ranny
2016-01-01
We present a general method to include residual un-modeled background shape uncertainties in profile likelihood based statistical tests for high energy physics and astroparticle physics counting experiments. This approach provides a simple and natural protection against undercoverage, thus lowering the chances of a false discovery or of an over constrained confidence interval, and allows a natural transition to unbinned space. Unbinned likelihood enhances the sensitivity and allows optimal usage of information for the data and the models. We show that the asymptotic behavior of the test statistic can be regained in cases where the model fails to describe the true background behavior, and present 1D and 2D case studies for model-driven and data-driven background models. The resulting penalty on sensitivities follows the actual discrepancy between the data and the models, and is asymptotically reduced to zero with increasing knowledge.
Bayesian experimental design for models with intractable likelihoods.
Drovandi, Christopher C; Pettitt, Anthony N
2013-12-01
In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables.
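The ABC rejection ingredient, forming an approximate posterior from pre-computed model simulations without any likelihood evaluations, can be sketched generically (a toy Poisson-rate example, not the paper's epidemic or macroparasite models; prior, summary, and tolerance are arbitrary choices):

```python
# ABC rejection with pre-computed simulations (toy Poisson-rate example).
import numpy as np

rng = np.random.default_rng(0)
y_obs = rng.poisson(4.0, size=50)            # "observed" data, true rate 4
s_obs = y_obs.mean()                         # summary statistic

thetas = rng.uniform(0.0, 10.0, size=100_000)             # prior draws
sims = rng.poisson(thetas[:, None], size=(100_000, 50))   # pre-computed runs
summaries = sims.mean(axis=1)

accepted = thetas[np.abs(summaries - s_obs) < 0.1]   # rejection step
print(accepted.mean(), accepted.std())               # approx. posterior mean/sd
```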
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Maximum likelihood method and Fisher's information in physics and econophysics
Syska, Jacek
2012-01-01
Three steps in the development of the maximum likelihood (ML) method are presented. First, the application of the ML method and the Fisher information notion in model selection analysis is described (Chapter 1). The fundamentals of differential geometry in the construction of the statistical space are introduced, illustrated also by examples of the estimation of the exponential models. Second, the notions of relative entropy and information channel capacity are introduced (Chapter 2). The observed and expected structural information principle (IP) and the variational IP of the modified extremal physical information (EPI) method of Frieden and Soffer are presented and discussed (Chapter 3). The derivation of the structural IP, based on the analyticity of the logarithm of the likelihood function and on the metricity of the statistical space of the system, is given. Third, the use of the EPI method is developed (Chapters 4-5). The information channel capacity is used for the field theory models cl...
Polyploidy Induction of Pteroceltis tatarinowii Maxim
Lin ZHANG; Feng WANG; Zhongkui SUN; Cuicui ZHU; Rongwei CHEN
2015-01-01
[Objective] This study was conducted to obtain tetraploid Pteroceltis tatarinowii Maxim. with excellent ornamental traits. [Method] The stem apex growing points of Pteroceltis tatarinowii Maxim. were treated with different concentrations of colchicine solution for different durations to determine a proper method and obtain polyploids. [Result] The most effective induction was obtained by treatment with 0.6%-0.8% colchicine for 72 h, with a 34.2% mutation rate. Flow cytometry and chromosome observation of the stem apex growing point of P. tatarinowii Maxim. proved that tetraploid plants were successfully obtained, with chromosome number 2n=4x=36. [Conclusion] The result not only fills the blank of polyploid breeding of P. tatarinowii, but also provides an effective way to broaden the methods of cultivating fast-growing, high-quality, disease-resilient new varieties of Pteroceltis.
Quantum theory allows for absolute maximal contextuality
Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán
2015-12-01
Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for absolute maximal contextuality if it has scenarios in which ϑ /α approaches n . Here we show that quantum theory allows for absolute maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios. Nevertheless, we identify scenarios in which quantum theory allows for almost-absolute-maximal contextuality.
The maximal process of nonlinear shot noise
Eliazar, Iddo; Klafter, Joseph
2009-05-01
In the nonlinear shot noise system-model, shots’ statistics are governed by general Poisson processes, and shots’ decay-dynamics are governed by general nonlinear differential equations. In this research we consider a nonlinear shot noise system and explore the process tracking, along time, the system’s maximal shot magnitude. This ‘maximal process’ is a stationary Markov process following a decay-surge evolution; it is highly robust, and it is capable of displaying both a wide spectrum of statistical behaviors and a rich variety of random decay-surge sample-path trajectories. A comprehensive analysis of the maximal process is conducted, including its Markovian structure, its decay-surge structure, and its correlation structure. All results are obtained analytically and in closed-form.
Energy Band Calculations for Maximally Even Superlattices
Krantz, Richard; Byrd, Jason
2007-03-01
Superlattices are multiple-well, semiconductor heterostructures that can be described by one-dimensional potential wells separated by potential barriers. We refer to a distribution of wells and barriers based on the theory of maximally even sets as a maximally even superlattice. The prototypical example of a maximally even set is the distribution of white and black keys on a piano keyboard. Black keys may represent wells and the white keys represent barriers. As the number of wells and barriers increase, efficient and stable methods of calculation are necessary to study these structures. We have implemented a finite-element method using the discrete variable representation (FE-DVR) to calculate E versus k for these superlattices. Use of the FE-DVR method greatly reduces the amount of calculation necessary for the eigenvalue problem.
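The maximally even distribution itself is easy to generate; a small Python sketch (our formulation of the standard floor-function construction, not code from the paper) reproduces, up to rotation, the 5-black/7-white piano-key pattern cited as the prototypical example:

```python
# Maximally even placement of k wells among n sites via the floor construction.
def maximally_even(k, n):
    wells = {(i * n) // k for i in range(k)}
    return ["W" if p in wells else "B" for p in range(n)]  # W = well, B = barrier

print(maximally_even(5, 12))
# ['W','B','W','B','W','B','B','W','B','W','B','B'] -- the black-key pattern
# of the piano keyboard, up to rotation.
```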
Maximum-likelihood estimation prevents unphysical Mueller matrices
Aiello, A; Voigt, D; Woerdman, J P
2005-01-01
We show that the method of maximum-likelihood estimation, recently introduced in the context of quantum process tomography, can be applied to the determination of Mueller matrices characterizing the polarization properties of classical optical systems. Contrary to linear reconstruction algorithms, the proposed method yields physically acceptable Mueller matrices even in the presence of uncontrolled experimental errors. We illustrate the method on the case of an unphysical measured Mueller matrix taken from the literature.
Maximum Likelihood Under Response Biased Sampling
Chambers, Raymond; Dorfman, Alan; Wang, Suojin
2003-01-01
Informative sampling occurs when the probability of inclusion in the sample depends on the value of the survey response variable. Response or size biased sampling is a particular case of informative sampling where the inclusion probability is proportional to the value of this variable. In this paper we describe a general model for response biased sampling, which we call array sampling, and develop maximum likelihood and estimating equation theory appropriate to this situation. The ...
Likelihood-based inference for clustered line transect data
Waagepetersen, Rasmus Plenge; Schweder, Tore
The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...... in an example concerning minke whales in the North Atlantic. Our modelling and computational approach is flexible but demanding in terms of computing time....
Forecasting New Product Sales from Likelihood of Purchase Ratings
William J. Infosino
1986-01-01
This paper compares consumer likelihood of purchase ratings for a proposed new product to their actual purchase behavior after the product was introduced. The ratings were obtained from a mail survey a few weeks before the product was introduced. The analysis leads to a model for forecasting new product sales. The model is supported by both empirical evidence and a reasonable theoretical foundation. In addition to calibrating the relationship between questionnaire ratings and actual purchases...
Australian food life style segments and elaboration likelihood differences
Brunsø, Karen; Reid, Mike
As the global food marketing environment becomes more competitive, the international and comparative perspective of consumers' attitudes and behaviours becomes more important for both practitioners and academics. This research employs the Food-Related Life Style (FRL) instrument in Australia...... insights into cross-cultural similarities and differences, into elaboration likelihood differences among consumer segments, and show how the involvement construct may be used as a basis for communication development....
Penalized maximum likelihood estimation for generalized linear point processes
2010-01-01
A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is the case where the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using the fact that Sobolev spaces are reproducing kernel Hilbert spaces, we...
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
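To make the multimodality concrete: for a single complex tone in white Gaussian noise, the ML frequency estimate maximizes the periodogram, a cost with many local maxima. The sketch below shows the standard grid-plus-refinement baseline that the paper's convex (SDP) approach is designed to avoid; it is not the paper's method, and the signal parameters are arbitrary.

```python
# ML frequency estimation for a single complex tone: maximize the
# (multimodal) periodogram via a dense FFT grid, then refine locally.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n, f_true = 128, 0.1234
t = np.arange(n)
x = np.exp(2j * np.pi * f_true * t) + 0.3 * (rng.standard_normal(n)
                                             + 1j * rng.standard_normal(n))

def neg_periodogram(f):
    return -np.abs(np.sum(x * np.exp(-2j * np.pi * f * t))) ** 2 / n

grid = np.fft.fftfreq(8 * n)                      # zero-padded coarse grid
f0 = grid[np.argmin([neg_periodogram(f) for f in grid])]
res = minimize_scalar(neg_periodogram, bounds=(f0 - 1 / n, f0 + 1 / n),
                      method="bounded")
print(f_true, res.x)  # refined estimate close to 0.1234
```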
Maximum Likelihood Sequence Detection Receivers for Nonlinear Optical Channels
2015-01-01
The space-time whitened matched filter (ST-WMF) maximum likelihood sequence detection (MLSD) architecture has been recently proposed (Maggio et al., 2014). Its objective is to reduce implementation complexity in transmissions over nonlinear dispersive channels. The ST-WMF-MLSD receiver (i) drastically reduces the number of states of the Viterbi decoder (VD) and (ii) offers a smooth trade-off between performance and complexity. In this work the ST-WMF-MLSD receiver is investigated in detail. We...
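For readers unfamiliar with MLSD, a textbook-style sketch of Viterbi sequence detection over a known two-tap dispersive channel follows; this is the generic building block, not the ST-WMF receiver, and the channel taps and noise level are illustrative assumptions.

```python
# Minimal MLSD sketch: Viterbi detection of BPSK over a known two-tap
# ISI channel h = [1.0, 0.5] in Gaussian noise.  State = previous symbol,
# branch metric = squared Euclidean distance.
import numpy as np

rng = np.random.default_rng(3)
h = np.array([1.0, 0.5])
bits = rng.integers(0, 2, 200)
s = 2.0 * bits - 1.0                               # BPSK symbols +-1
r = np.convolve(s, h)[:s.size] + 0.4 * rng.standard_normal(s.size)

symbols = np.array([-1.0, 1.0])
metric = np.zeros(2)                               # one state per previous symbol
paths = [[], []]
for k, rk in enumerate(r):
    new_metric = np.full(2, np.inf)
    new_paths = [None, None]
    for prev in range(2):
        for cur in range(2):
            prev_sym = symbols[prev] if k > 0 else 0.0
            pred = h[0] * symbols[cur] + h[1] * prev_sym
            m = metric[prev] + (rk - pred) ** 2
            if m < new_metric[cur]:
                new_metric[cur] = m
                new_paths[cur] = paths[prev] + [symbols[cur]]
    metric, paths = new_metric, new_paths

detected = np.array(paths[int(np.argmin(metric))])
print(np.mean(detected != s))                      # symbol error rate
```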
Influence functions of trimmed likelihood estimators for lifetime experiments
2015-01-01
We provide a general approach for deriving the influence function for trimmed likelihood estimators using the implicit function theorem. The approach is applied to lifetime models with exponential or lognormal distributions possessing a linear or nonlinear link function. A side result is that the functional form of the trimmed estimator for location and linear regression used by Bednarski and Clarke (1993, 2002) and Bednarski et al. (2010) is not in general the correct fu...
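A minimal sketch of the trimmed likelihood idea itself, for a normal location parameter (the lifetime-model settings of the paper are more involved): minimize the sum of the h smallest negative log-likelihood contributions, which adaptively discards the n − h most outlying observations.

```python
# Trimmed maximum likelihood for a normal location parameter: keep the
# h smallest negative log-likelihood contributions, discarding outliers.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(5.0, 1.0, 90), rng.normal(40.0, 1.0, 10)])
h = 80                                             # contributions kept

def trimmed_nll(mu):
    contrib = -norm.logpdf(x, loc=mu, scale=1.0)
    return np.sort(contrib)[:h].sum()

fit = minimize_scalar(trimmed_nll, bounds=(x.min(), x.max()), method="bounded")
print(x.mean(), fit.x)                             # ~8.5 (contaminated) vs ~5.0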
Absence of parasympathetic reactivation after maximal exercise.
de Oliveira, Tiago Peçanha; de Alvarenga Mattos, Raphael; da Silva, Rhenan Bartels Ferreira; Rezende, Rafael Andrade; de Lima, Jorge Roberto Perrout
2013-03-01
The ability of the human organism to recover its autonomic balance soon after physical exercise cessation has an important impact on the individual's health status. Although the dynamics of heart rate recovery after maximal exercise has been studied, little is known about heart rate variability after this type of exercise. The aim of this study is to analyse the dynamics of heart rate and heart rate variability recovery after maximal exercise in healthy young men. Fifteen healthy male subjects (21.7 ± 3.4 years; 24.0 ± 2.1 kg m-2) participated in the study. The experimental protocol consisted of an incremental maximal exercise test on a cycle ergometer, until maximal voluntary exhaustion. After the test, recovery R-R intervals were recorded for 5 min. From the absolute differences between peak heart rate values and the heart rate values at 1 and 5 min of the recovery, the heart rate recovery was calculated. Postexercise heart rate variability was analysed from calculations of the SDNN and RMSSD indexes, in 30-s windows (SDNN(30s) and RMSSD(30s)) throughout recovery. One and 5 min after maximal exercise cessation, the heart rate had recovered by 34.7 (±6.6) and 75.5 (±6.1) bpm, respectively. With regard to HRV recovery, while the SDNN(30s) index showed a slight increase, the RMSSD(30s) index remained totally suppressed throughout the recovery, suggesting an absence of vagal modulation reactivation and, possibly, a discrete sympathetic withdrawal. Therefore, it is possible that the main mechanism associated with the fall of HR after maximal exercise is sympathetic withdrawal or a vagal tone restoration without vagal modulation recovery.
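A sketch of the two recovery metrics described above, computed on a synthetic R-R series (the ramp shape and noise level are assumptions for illustration, not the study's data):

```python
# Heart rate recovery (HRR) at 1 and 5 min as the drop from peak HR, and
# RMSSD in 30-s windows across recovery, from R-R intervals in ms.
import numpy as np

def heart_rate(rr_ms):
    return 60_000.0 / rr_ms

def rmssd(rr_ms):
    return np.sqrt(np.mean(np.diff(rr_ms) ** 2))

rng = np.random.default_rng(5)
# Synthetic R-R rising from ~320 ms (peak HR ~188 bpm) to ~550 ms over 5 min
t, rr = 0.0, []
while t < 300.0:
    mean_rr = 320.0 + (550.0 - 320.0) * min(t / 300.0, 1.0)
    rr.append(mean_rr + rng.normal(0.0, 3.0))
    t += rr[-1] / 1000.0
rr = np.array(rr)
beat_times = np.cumsum(rr) / 1000.0

hr_peak = heart_rate(rr[0])
for minute in (1, 5):
    idx = np.searchsorted(beat_times, 60.0 * minute) - 1
    print(f"HRR {minute} min: {hr_peak - heart_rate(rr[idx]):.1f} bpm")

for w0 in np.arange(0.0, 300.0, 30.0):             # RMSSD(30s) windows
    win = rr[(beat_times >= w0) & (beat_times < w0 + 30.0)]
    if win.size > 2:
        print(f"RMSSD [{w0:.0f}-{w0 + 30:.0f} s]: {rmssd(win):.1f} ms")
```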
Fertilization response likelihood for the interpretation of leaf analyses
Celsemy Eleutério Maia
2012-04-01
Leaf analysis is the chemical evaluation of plant nutritional status, in which the nutrient concentrations found in the tissue reflect the nutritional status of the plants. Thus, a correct interpretation of the results of leaf analysis is fundamental for an effective use of this tool. The purpose of this study was to propose and compare the method of Fertilization Response Likelihood (FRL) for interpretation of leaf analysis with that of the Diagnosis and Recommendation Integrated System (DRIS). The database consisted of 157 analyses of the N, P, K, Ca, Mg, S, Cu, Fe, Mn, Zn, and B concentrations in coffee leaves, which were divided into two groups: low yield (< 30 bags ha-1) and high yield (> 30 bags ha-1). The DRIS indices were calculated using the method proposed by Jones (1981). The fertilization response likelihood was computed based on the approximation of the normal distribution. It was found that the Fertilization Response Likelihood (FRL) allowed an evaluation of the nutritional status of coffee trees, coinciding with the DRIS-based diagnoses in 84.96 % of the crops.
CMB likelihood approximation by a Gaussianized Blackwell-Rao estimator
Rudjord, Ø; Eriksen, H K; Huey, Greg; Górski, K M; Jewell, J B
2008-01-01
We introduce a new CMB temperature likelihood approximation called the Gaussianized Blackwell-Rao (GBR) estimator. This estimator is derived by transforming the observed marginal power spectrum distributions obtained by the CMB Gibbs sampler into standard univariate Gaussians, and then approximating their joint transformed distribution by a multivariate Gaussian. The method is exact for full-sky coverage and uniform noise, and an excellent approximation for sky cuts and scanning patterns relevant for modern satellite experiments such as WMAP and Planck. A single evaluation of this estimator between l=2 and 200 takes ~0.2 CPU milliseconds, while for comparison, a single pixel-space likelihood evaluation between l=2 and 30 for a map with ~2500 pixels requires ~20 seconds. We apply this tool to the 5-year WMAP temperature data, and re-estimate the angular temperature power spectrum, C_l, and likelihood, L(C_l), for l<=200, and derive new cosmological parameters for the standard six-parameter LambdaCDM mo...
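A schematic of the Gaussianization step in the same spirit (a Gaussian-copula construction): map each sampled marginal to a standard normal through its empirical CDF, then model the transformed samples as jointly Gaussian. Everything here (toy dimensions, synthetic stand-in for Gibbs samples, and the omission of Jacobian terms) is an illustrative assumption, not the GBR estimator itself.

```python
# Gaussian-copula sketch: probit-transform each marginal via its empirical
# CDF, fit a multivariate Gaussian in the transformed space, and evaluate
# an approximate log-likelihood (up to Jacobian and normalization terms).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
nsamp, ndim = 5_000, 4
samples = rng.gamma(shape=3.0, scale=1.0, size=(nsamp, ndim))  # skewed marginals
samples[:, 1] += 0.5 * samples[:, 0]                           # add correlation

sorted_s = np.sort(samples, axis=0)
probit = norm.ppf((np.arange(1, nsamp + 1) - 0.5) / nsamp)

def gaussianize(point):
    """Marginal empirical-CDF -> probit transform, per dimension."""
    return np.array([np.interp(point[d], sorted_s[:, d], probit)
                     for d in range(ndim)])

z = np.column_stack([np.interp(samples[:, d], sorted_s[:, d], probit)
                     for d in range(ndim)])
cov = np.cov(z, rowvar=False)

def approx_loglike(point):
    zx = gaussianize(point)
    return -0.5 * zx @ np.linalg.solve(cov, zx)

print(approx_loglike(np.median(samples, axis=0)))
```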
Local solutions of Maximum Likelihood Estimation in Quantum State Tomography
Gonçalves, Douglas S; Lavor, Carlile; Farías, Osvaldo Jiménez; Ribeiro, P H Souto
2011-01-01
Maximum likelihood estimation is one of the most used methods in quantum state tomography, where the aim is to find the best density matrix for the description of a physical system. Results of measurements on the system should match the expected values produced by the density matrix. In some cases however, if the matrix is parameterized to ensure positivity and unit trace, the negative log-likelihood function may have several local minima. In several papers in the field, authors attribute a source of errors to the possibility that most of these local minima are not global, so that optimization methods can be trapped in the wrong minimum, leading to a wrong density matrix. Here we show that, for convex negative log-likelihood functions, all local minima are global. We also show that a practical source of errors is in fact the use of optimization methods that do not have the global convergence property or that present numerical instabilities. The clarification of this point has important repercussions for quantum informat...
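A single-qubit sketch of the setting discussed above, using the positivity-ensuring Cholesky-style parameterization ρ = T†T / tr(T†T); the measurement settings, counts, and optimizer choice are illustrative assumptions, not taken from the paper.

```python
# Maximum-likelihood state tomography sketch for one qubit with a
# Cholesky-style parameterization of the density matrix.
import numpy as np
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2, dtype=complex)
# Projectors for +1/-1 outcomes along x, y, z
projectors = [0.5 * (I2 + s * p) for p in (sx, sy, sz) for s in (+1, -1)]

rho_true = 0.5 * (I2 + 0.6 * sz + 0.3 * sx)        # mixed target state
rng = np.random.default_rng(7)
shots = 2_000
counts = [rng.binomial(shots, np.real(np.trace(rho_true @ projectors[2 * k])))
          for k in range(3)]
data = []                                          # (projector, count) pairs
for k in range(3):
    data += [(projectors[2 * k], counts[k]),
             (projectors[2 * k + 1], shots - counts[k])]

def rho_from(p):
    """rho = T^dag T / tr, with T lower triangular: positive, unit trace."""
    T = np.array([[p[0], 0.0], [p[2] + 1j * p[3], p[1]]], complex)
    rho = T.conj().T @ T
    return rho / np.real(np.trace(rho))

def nll(p):
    rho = rho_from(p)
    return -sum(n * np.log(max(np.real(np.trace(rho @ P)), 1e-12))
                for P, n in data)

res = minimize(nll, x0=np.array([1.0, 1.0, 0.0, 0.0]), method="Nelder-Mead",
               options={"maxiter": 2000})
print(np.round(rho_from(res.x), 3))                # close to rho_true
```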
Accurate determination of phase arrival times using autoregressive likelihood estimation
G. Kvaerna
1994-06-01
We have investigated the potential automatic use of an onset picker based on autoregressive likelihood estimation. Both a single-component version and a three-component version of this method have been tested on data from events located in the Khibiny Massif of the Kola peninsula, recorded at the Apatity array, the Apatity three-component station and the ARCESS array. Using this method, we have been able to estimate onset times to an accuracy (standard deviation) of about 0.05 s for P phases and 0.15-0.20 s for S phases. These accuracies are as good as for analyst picks, and are considerably better than the accuracies of the current onset procedure used for processing of regional array data at NORSAR. In another application, we have developed a generic procedure to reestimate the onsets of all types of first-arriving P phases. By again applying the autoregressive likelihood technique, we have obtained automatic onset times of a quality such that 70% of the automatic picks are within 0.1 s of the best manual pick. For the onset time procedure currently used at NORSAR, the corresponding number is 28%. Clearly, automatic reestimation of first-arriving P onsets using the autoregressive likelihood technique has the potential of significantly reducing the retiming efforts of the analyst.
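A simplified relative of this picker, shown as a sketch: the variance-based AIC split (Maeda-style), where AIC(k) = k·log var(x[:k]) + (N−k)·log var(x[k:]) is minimized at the onset. The full autoregressive likelihood version would replace the segment variances with AR prediction-error variances; the synthetic trace below is an assumption for illustration.

```python
# Simplified AIC onset picker: split the record into two quasi-stationary
# segments and pick the split minimizing the two-segment AIC.
import numpy as np

rng = np.random.default_rng(8)
N, onset_true = 1000, 600
x = rng.normal(0.0, 1.0, N)
x[onset_true:] += rng.normal(0.0, 6.0, N - onset_true)  # arriving phase

def aic_pick(x, guard=20):
    ks = np.arange(guard, x.size - guard)
    aic = np.array([k * np.log(np.var(x[:k]))
                    + (x.size - k) * np.log(np.var(x[k:])) for k in ks])
    return ks[np.argmin(aic)]

print(onset_true, aic_pick(x))                     # picked onset near 600
```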
Empirical likelihood method for non-ignorable missing data problems.
Guan, Zhong; Qin, Jing
2017-01-01
The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, where the missingness of a response depends on its own value. In the statistical literature, unlike the ignorable missing data problem, not many papers on non-ignorable missing data are available except for fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method we can obtain the constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of a real AIDS trial data set shows that the missingness of CD4 counts around two years is non-ignorable and the sample mean based on observed data only is biased.
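As background, a sketch of Owen's empirical likelihood for a mean, the building block underlying the constrained estimators above (the paper's missing-data constraints are more involved): the multiplier λ solves Σ (xᵢ − μ)/(1 + λ(xᵢ − μ)) = 0, giving weights pᵢ = 1/(n(1 + λ(xᵢ − μ))) and the log EL ratio.

```python
# Owen-style empirical likelihood ratio for a mean; -2*logR(mu) is
# asymptotically chi-squared with 1 d.f. under the true mean.
import numpy as np
from scipy.optimize import brentq

def log_el_ratio(x, mu):
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return -np.inf                             # mu outside convex hull
    lo = (-1.0 + 1e-10) / d.max()                  # keep 1 + lam*d > 0
    hi = (-1.0 + 1e-10) / d.min()
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return -np.sum(np.log(1.0 + lam * d))          # log R(mu)

rng = np.random.default_rng(9)
x = rng.exponential(2.0, 200)
for mu in (1.8, 2.0, x.mean(), 2.4):
    print(mu, -2.0 * log_el_ratio(x, mu))
```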
Maximizing band gaps in plate structures
Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard
2006-01-01
Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite...... periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated...
Maximal and Minimal Congruences on Some Semigroups
Jintana SANWONG; Boorapa SINGHA; R.P.SULLIVAN
2009-01-01
In 2006, Sanwong and Sullivan described the maximal congruences on the semigroup N consisting of all non-negative integers under standard multiplication, and on the semigroup T(X) consisting of all total transformations of an infinite set X under composition. Here, we determine all maximal congruences on the semigroup Zn under multiplication modulo n. And, when Y ⊆ X, we do the same for the semigroup T(X,Y) consisting of all elements of T(X) whose range is contained in Y. We also characterise the minimal congruences on T(X,Y).
Maximal Inequalities for Dependent Random Variables
Hoffmann-Jorgensen, Jorgen
2016-01-01
Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a......
Singularity Structure of Maximally Supersymmetric Scattering Amplitudes
Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy
2014-01-01
We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic ...... singularities and is free of any poles at infinity—properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA)....
Generalized Empirical Likelihood Inference in Semiparametric Regression Model for Longitudinal Data
Gao Rong LI; Ping TIAN; Liu Gen XUE
2008-01-01
In this paper, we consider the semiparametric regression model for longitudinal data. Due to the correlation within groups, a generalized empirical log-likelihood ratio statistic for the unknown parameters in the model is suggested by introducing the working covariance matrix. It is proved that the proposed statistic is asymptotically standard chi-squared under some suitable conditions, and hence it can be used to construct the confidence regions of the parameters. A simulation study is conducted to compare the proposed method with the generalized least squares method in terms of coverage accuracy and average lengths of the confidence intervals.
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
YUE Li; CHEN Xiru
2004-01-01
Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, and under some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as specified in the LIL for iid partial sums and thus cannot be improved.
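A minimal sketch of solving the quasi-likelihood equation Σᵢ Xᵢ(yᵢ − μ(Xᵢ'β)) = 0 by Fisher scoring, assuming the log-link case μ(u) = exp(u) purely for illustration:

```python
# Fisher-scoring solve of the quasi-likelihood equation for a log-link
# GLM: score = X'(y - mu), information = sum_i mu_i * x_i x_i'.
import numpy as np

rng = np.random.default_rng(10)
n, beta0 = 500, np.array([0.5, -1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(np.exp(X @ beta0))

beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    score = X.T @ (y - mu)                          # quasi-score
    fisher = (X * mu[:, None]).T @ X                # sum mu_i x_i x_i'
    step = np.linalg.solve(fisher, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

print(beta0, beta)                                  # QMLE near the truth
```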
Empirical likelihood confidence regions of the parameters in a partially linear single-index model
XUE Liugen; ZHU Lixing
2005-01-01
In this paper, a partially linear single-index model is investigated, and three empirical log-likelihood ratio statistics for the unknown parameters in the model are suggested. It is proved that the proposed statistics are asymptotically standard chi-square under some suitable conditions, and hence can be used to construct the confidence regions of the parameters. Our methods can also deal with the confidence region construction for the index in the pure single-index model. A simulation study indicates that, in terms of coverage probabilities and average areas of the confidence regions, the proposed methods perform better than the least-squares method.
Comparisons of likelihood and machine learning methods of individual classification
Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.
2002-01-01
Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin (“assignment tests”). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative to likelihood estimators for empirical data sets, suggesting an ability to “learn” and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin due to the intricacies in development and evaluation of artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of
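A sketch of the likelihood assignment test itself: each multilocus genotype is scored under every candidate population's allele frequencies (assuming Hardy-Weinberg proportions and independent loci, as is standard) and assigned to the population with the highest log-likelihood. The two-population setup and frequencies below are synthetic.

```python
# Likelihood-based population assignment: assign each genotype to the
# population maximizing its Hardy-Weinberg multilocus log-likelihood.
import numpy as np

rng = np.random.default_rng(11)
n_loci, n_alleles = 10, 3
freqs = {pop: rng.dirichlet(np.ones(n_alleles), size=n_loci)
         for pop in ("A", "B")}                    # loci x alleles per population

def sample_genotype(pop):
    f = freqs[pop]
    return [(rng.choice(n_alleles, p=f[l]), rng.choice(n_alleles, p=f[l]))
            for l in range(n_loci)]

def log_likelihood(genotype, pop):
    f, ll = freqs[pop], 0.0
    for l, (a1, a2) in enumerate(genotype):
        p = f[l, a1] * f[l, a2] * (2.0 if a1 != a2 else 1.0)  # HW genotype prob
        ll += np.log(p)
    return ll

correct = 0
for _ in range(200):
    g = sample_genotype("A")
    assigned = max(("A", "B"), key=lambda pop: log_likelihood(g, pop))
    correct += assigned == "A"
print(correct / 200)                               # assignment accuracy
```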
Communicating likelihoods and probabilities in forecasts of volcanic eruptions
Doyle, Emma E. H.; McClure, John; Johnston, David M.; Paton, Douglas
2014-02-01
The issuing of forecasts and warnings of natural hazard events, such as volcanic eruptions, earthquake aftershock sequences and extreme weather often involves the use of probabilistic terms, particularly when communicated by scientific advisory groups to key decision-makers, who can differ greatly in relative expertise and function in the decision making process. Recipients may also differ in their perception of relative importance of political and economic influences on interpretation. Consequently, the interpretation of these probabilistic terms can vary greatly due to the framing of the statements, and whether verbal or numerical terms are used. We present a review from the psychology literature on how the framing of information influences communication of these probability terms. It is also unclear how people rate their perception of an event's likelihood throughout a time frame when a forecast time window is stated. Previous research has identified that, when presented with a 10-year time window forecast, participants viewed the likelihood of an event occurring ‘today’ as being less than that in year 10. Here we show that this skew in perception also occurs for short-term time windows (under one week) that are of most relevance for emergency warnings. In addition, unlike the long-time window statements, the use of the phrasing “within the next…” instead of “in the next…” does not mitigate this skew, nor do we observe significant differences between the perceived likelihoods of scientists and non-scientists. This finding suggests that effects occurring due to the shorter time window may be ‘masking’ any differences in perception due to wording or career background observed for long-time window forecasts. These results have implications for scientific advice, warning forecasts, emergency management decision-making, and public information as any skew in perceived event likelihood towards the end of a forecast time window may result in
ZHANG SanGuo; LIAO Yuan
2008-01-01
In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation ∑ⁿᵢ₌₁ Xᵢ(yᵢ − μ(Xᵢ'β)) = 0 for the univariate generalized linear model E(y|X) = μ(X'β). Given uncorrelated residuals {eᵢ = yᵢ − μ(Xᵢ'β₀), 1 ≤ i ≤ n} and other conditions, we prove that β̂ₙ − β₀ = Op(λₙ^(-1/2)) holds, where β̂ₙ is a root of the above equation, β₀ is the true value of the parameter β and λₙ denotes the smallest eigenvalue of the matrix Sₙ = ∑ⁿᵢ₌₁ XᵢXᵢ'. We also show that the convergence rate above is sharp, provided an independent non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of QMLE is Sₙ⁻¹ → 0 as the sample size n → ∞.
Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.
2014-09-01
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and
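A toy 1D analogue of the alternating registration/reconstruction idea (emphatically not the dPIRPLE algorithm: the deformation is reduced to an integer shift, the system matrix is random, and all dimensions and weights are illustrative assumptions):

```python
# Alternate (i) registering a prior signal to the current estimate and
# (ii) gradient steps on the penalized objective
#   ||A x - y||^2 + lam * ||x - shifted_prior||^2.
import numpy as np

rng = np.random.default_rng(12)
n = 200
prior = np.zeros(n); prior[60:90] = 1.0            # prior-study "anatomy"
x_true = np.roll(prior, 15)                        # patient moved between scans
A = rng.normal(size=(120, n)) / np.sqrt(n)         # undersampled measurement model
y = A @ x_true + 0.02 * rng.normal(size=120)

x, lam = A.T @ y, 2.0                              # crude backprojection start
for outer in range(10):
    # Registration update: best integer shift of the prior onto current x
    t = min(range(-30, 31),
            key=lambda s: float(np.sum((x - np.roll(prior, s)) ** 2)))
    moved = np.roll(prior, t)
    # Reconstruction update: gradient descent on the penalized likelihood
    for _ in range(50):
        grad = 2.0 * A.T @ (A @ x - y) + 2.0 * lam * (x - moved)
        x = x - 0.05 * grad

print(t, np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```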