WorldWideScience

Sample records for approximately maximum-likelihood trees

  1. Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization

    OpenAIRE

    Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue

    2010-01-01

    This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
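
    A minimal sketch of the general idea rather than the paper's AML algorithm: with independent Gaussian range (TOA) and bearing (AOA) errors, maximum likelihood localization reduces to weighted nonlinear least squares. All positions, measurements, and noise levels below are invented for illustration.

    ```python
    # Hypothetical ML-style TOA/AOA fusion from two base stations under
    # independent Gaussian noise; not the paper's AML algorithm.
    import numpy as np
    from scipy.optimize import least_squares

    bs = np.array([[0.0, 0.0], [1000.0, 0.0]])  # base station positions (m)
    r_meas = np.array([711.0, 707.0])           # measured TOA ranges (m)
    a_meas = np.array([0.79, 2.36])             # measured AOA bearings (rad)
    sigma_r, sigma_a = 10.0, 0.02               # assumed noise std devs

    def residuals(p):
        d = p - bs                              # vectors from each BS to mobile
        r = np.hypot(d[:, 0], d[:, 1])          # predicted ranges
        a = np.arctan2(d[:, 1], d[:, 0])        # predicted bearings
        da = np.angle(np.exp(1j * (a - a_meas)))  # wrap angle error to (-pi, pi]
        return np.concatenate([(r - r_meas) / sigma_r, da / sigma_a])

    fit = least_squares(residuals, x0=np.array([400.0, 400.0]))
    print("estimated mobile position:", fit.x)
    ```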

  2. Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation

    Science.gov (United States)

    Lui, Kenneth W. K.; So, H. C.

    2009-12-01

    We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
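
    To see why the exact ML cost is multimodal, consider the simplest case the abstract mentions, a single complex tone in white Gaussian noise: the ML frequency estimate maximizes the periodogram, which has many local maxima. The sketch below (a conventional grid-plus-refinement baseline, not the paper's semidefinite relaxation) uses invented signal parameters.

    ```python
    # Baseline ML frequency estimation for one complex tone: maximize the
    # periodogram by dense grid search, then refine locally. This is the
    # multimodal problem the paper's SDP relaxation is designed to avoid.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    N, w_true = 64, 1.3
    n = np.arange(N)
    x = np.exp(1j * w_true * n) + 0.3 * (rng.standard_normal(N)
                                         + 1j * rng.standard_normal(N))

    def neg_periodogram(w):
        return -np.abs(np.exp(-1j * w * n) @ x) ** 2 / N

    grid = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
    w0 = grid[np.argmin([neg_periodogram(w) for w in grid])]
    step = 2 * np.pi / 4096
    res = minimize_scalar(neg_periodogram, bounds=(w0 - step, w0 + step),
                          method="bounded")
    print("ML frequency estimate (rad/sample):", res.x)
    ```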

  3. Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Kenneth W. K. Lui

    2009-01-01

    We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.

  4. Approximated maximum likelihood estimation in multifractal random walks

    CERN Document Server

    Løvsletten, Ola

    2011-01-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.

  5. AN EFFICIENT APPROXIMATE MAXIMUM LIKELIHOOD SIGNAL DETECTION FOR MIMO SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Cao Xuehong

    2007-01-01

    This paper proposes an efficient approximate Maximum Likelihood (ML) detection method for Multiple-Input Multiple-Output (MIMO) systems, which searches a local area instead of performing an exhaustive search and selects valid search points in each transmit antenna's signal constellation instead of the whole hyperplane. Both the selection and the search complexity can thereby be reduced significantly. The method trades off computational complexity against system performance by adjusting the neighborhood size used to select the valid search points. Simulation results show that the performance is comparable to that of ML detection while the complexity is only a small fraction of it.
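
    A rough sketch of the neighborhood-search idea under simplifying assumptions (2x2 MIMO, QPSK, perfectly known channel); the paper's selection rule may differ in detail. The zero-forcing solution seeds the search, and only the k nearest constellation points per transmit antenna are examined.

    ```python
    # Approximate ML MIMO detection: search only the k constellation points
    # nearest the zero-forcing estimate on each antenna. k trades complexity
    # against performance; k = |constellation| recovers exact ML.
    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    nt = nr = 2
    H = (rng.standard_normal((nr, nt))
         + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    s_true = rng.choice(qpsk, nt)
    y = H @ s_true + 0.1 * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))

    k = 2                                        # neighborhood size per antenna
    s_zf = np.linalg.pinv(H) @ y                 # zero-forcing starting point
    cands = [qpsk[np.argsort(np.abs(qpsk - z))[:k]] for z in s_zf]
    best = min(itertools.product(*cands),
               key=lambda s: np.linalg.norm(y - H @ np.array(s)) ** 2)
    print("detected:", np.array(best), "transmitted:", s_true)
    ```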

  6. Approximate Maximum Likelihood Commercial Bank Loan Management Model

    Directory of Open Access Journals (Sweden)

    Godwin N.O. Asemota

    2009-01-01

    Problem statement: Loan management is a complex yet vitally important aspect of any commercial bank's operations. The balance sheet position shows the main sources of funds as deposits and shareholders' contributions. Approach: In order to operate profitably, remain solvent and consequently grow, a commercial bank needs to properly manage its excess cash to yield returns in the form of loans. Results: The above are achieved if the bank can honor depositors' withdrawals at all times and also grant loans to credible borrowers, because loans are the main portfolios of a commercial bank that yield the highest rate of return. Commercial banks and the environment in which they operate are dynamic, so any attempt to model their behavior without including some element of uncertainty would be less than desirable. The inclusion of an uncertainty factor is now possible with the advent of stochastic optimal control theories. Thus, an approximate maximum likelihood algorithm with variable forgetting factor was used to model the loan management behavior of a commercial bank in this study. Conclusion: The results showed that the uncertainty factor employed in the stochastic modeling enables us to adaptively control loan demand as well as fluctuating cash balances in the bank. The model can also visually aid commercial bank managers' planning decisions by allowing them to competently determine excess cash and invest it as loans to earn more assets without jeopardizing public confidence.

  7. Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation

    OpenAIRE

    2009-01-01

    We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...

  8. Equalized near maximum likelihood detector

    OpenAIRE

    2012-01-01

    This paper presents a new detector for mitigating the intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the equalized near maximum likelihood detector performs better than the nonlinear equalizer alone but worse than the near maximum likelihood detector.

  9. Maximum Likelihood Associative Memories

    OpenAIRE

    Gripon, Vincent; Rabbat, Michael

    2013-01-01

    Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...

  10. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    Directory of Open Access Journals (Sweden)

    Kodner Robin B

    2010-10-01

    Abstract Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It can also inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.

  11. Vestige: Maximum likelihood phylogenetic footprinting

    Directory of Open Access Journals (Sweden)

    Maxwell Peter

    2005-05-01

    Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational

  12. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  13. The Sherpa Maximum Likelihood Estimator

    Science.gov (United States)

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.

    2011-07-01

    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
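
    The following toy calculation (not the CSC tool itself, which fits spatial models with PSF convolution via Sherpa) illustrates the two-hypothesis comparison the abstract describes, with scalar Poisson counts and invented numbers.

    ```python
    # Compare Poisson log-likelihoods of background-only vs background-plus-
    # source models for counts in a candidate source region; the background
    # rate is fit from an off-source region.
    from scipy.stats import poisson

    n_src, area_src = 14, 1.0     # counts and area of candidate source region
    n_bkg, area_bkg = 200, 100.0  # counts and area of background region

    b_rate = n_bkg / area_bkg                     # fitted background rate
    ll_bkg = poisson.logpmf(n_src, b_rate * area_src)
    s_hat = max(n_src - b_rate * area_src, 0.0)   # ML source counts (>= 0)
    ll_src = poisson.logpmf(n_src, b_rate * area_src + s_hat)
    TS = 2 * (ll_src - ll_bkg)                    # likelihood-ratio statistic
    print(f"test statistic TS = {TS:.2f}")        # larger => more source-like
    ```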

  14. Maximum likelihood molecular clock comb: analytic solutions.

    Science.gov (United States)

    Chor, Benny; Khetan, Amit; Snir, Sagi

    2006-04-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)

  15. Maximum-likelihood method in quantum estimation

    CERN Document Server

    Paris, M G A; Sacchi, M F

    2001-01-01

    The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.

  16. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  17. Parameter estimation in X-ray astronomy using maximum likelihood

    Science.gov (United States)

    Wachter, K.; Leach, R.; Kellogg, E.

    1979-01-01

    Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
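
    A compact sketch of the advocated approach under simplifying assumptions (a two-parameter power-law spectrum, simulated counts, no instrument response): minimize the Poisson negative log-likelihood (the Cash statistic) instead of chi-squared.

    ```python
    # Poisson maximum likelihood spectral fit: minimize the Cash statistic
    # C = 2 * sum(m - n*log(m)) over the model parameters.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    E = np.linspace(1.0, 10.0, 30)            # energy bin centers (keV)

    def model(p):                             # expected counts per bin
        log_norm, gamma = p
        return np.exp(log_norm) * E ** (-gamma)

    n = rng.poisson(model([np.log(200.0), 1.7]))   # simulated observed counts

    def cash(p):
        m = model(p)
        return 2.0 * np.sum(m - n * np.log(m))

    fit = minimize(cash, x0=[np.log(100.0), 1.0], method="Nelder-Mead")
    print("ML estimates (log-norm, photon index):", fit.x)
    ```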

  18. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointe- gration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...

  19. Maximum Likelihood Estimation of Search Costs

    NARCIS (Netherlands)

    J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)

    2006-01-01

    In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p

  20. Regions of constrained maximum likelihood parameter identifiability

    Science.gov (United States)

    Lee, C.-H.; Herget, C. J.

    1975-01-01

    This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.

  1. Penalized maximum likelihood estimation and variable selection in geostatistics

    CERN Document Server

    Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919

    2012-01-01

    We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...

  2. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
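
    As a hedged illustration of this family of estimators (not the paper's exact method), the harmonic-summation sketch below approximates multi-channel ML pitch estimation by summing periodogram energy at the harmonics of each candidate fundamental, which leaves per-channel amplitudes and phases free; all signal parameters are invented.

    ```python
    # Approximate ML (nonlinear least squares) pitch estimation across
    # channels via harmonic summation on a frequency grid.
    import numpy as np

    rng = np.random.default_rng(3)
    fs, f0_true, L = 8000.0, 220.0, 5          # sample rate, pitch (Hz), harmonics
    t = np.arange(2048) / fs
    channels = [sum(np.cos(2 * np.pi * f0_true * (h + 1) * t
                           + rng.uniform(0, 2 * np.pi)) for h in range(L))
                + 0.5 * rng.standard_normal(t.size) for _ in range(3)]

    def score(f0):
        s = 0.0
        for x in channels:                     # channels may differ in phase/noise
            for h in range(1, L + 1):
                e = np.exp(-2j * np.pi * f0 * h * t)
                s += np.abs(e @ x) ** 2 / x.size
        return s

    grid = np.arange(80.0, 400.0, 0.5)         # candidate fundamentals (Hz)
    print("estimated pitch (Hz):", grid[np.argmax([score(f) for f in grid])])
    ```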

  3. Maximum-likelihood fits to histograms for improved parameter estimation

    CERN Document Server

    Fowler, Joseph W

    2013-01-01

    Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.

  4. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
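
    A schematic of the final analysis step only: PCA of a correlation matrix estimated from an ensemble of already superposed structures. The paper's key contribution is the maximum likelihood superposition and covariance estimate preceding this step; a plain sample correlation on synthetic coordinates stands in for it here.

    ```python
    # PCA of a structural correlation matrix from an ensemble of models.
    import numpy as np

    rng = np.random.default_rng(4)
    n_models, n_atoms = 50, 20
    X = rng.standard_normal((n_models, n_atoms, 3))  # toy superposed ensemble

    flat = X.reshape(n_models, -1)              # models x (3 * atoms)
    C = np.corrcoef(flat, rowvar=False)         # sample correlation matrix
    evals, evecs = np.linalg.eigh(C)            # eigenvalues in ascending order
    pc1 = evecs[:, -1].reshape(n_atoms, 3)      # dominant correlation mode
    print("fraction of variance in PC1:", evals[-1] / evals.sum())
    ```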

  5. Maximum Likelihood Analysis in the PEN Experiment

    Science.gov (United States)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.

  6. Molecular clock fork phylogenies: closed form analytic maximum likelihood solutions.

    Science.gov (United States)

    Chor, Benny; Snir, Sagi

    2004-12-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Quoting Ziheng Yang, who initiated the analytic approach, "this seems to be the simplest case, but has many of the conceptual and statistical complexities involved in phylogenetic estimation." In this work, we give general analytic solutions for a family of trees with four taxa, two-state characters, under a molecular clock. The change from three to four taxa incurs a major increase in the complexity of the underlying algebraic system, and requires novel techniques and approaches. We start by presenting the general maximum likelihood problem on phylogenetic trees as a constrained optimization problem, and the resulting system of polynomial equations. In full generality, it is infeasible to solve this system, therefore specialized tools for the molecular clock case are developed. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). We combine the ultrametric properties of molecular clock fork trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations for the fork. We finally employ symbolic algebra software to obtain closed form analytic solutions (expressed parametrically in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that each fork topology has a unique (local and global) ML point.

  7. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

    The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polyno

  8. Tree wavelet approximations with applications

    Institute of Scientific and Technical Information of China (English)

    XU Yuesheng; ZOU Qingsong

    2005-01-01

    We construct a tree wavelet approximation by using a constructive greedy scheme (CGS). We define a function class which contains the functions whose piecewise polynomial approximations generated by the CGS have a prescribed global convergence rate and establish embedding properties of this class. We provide sufficient conditions on a tree index set and on bi-orthogonal wavelet bases which ensure optimal order of convergence for the wavelet approximations encoded on the tree index set using the bi-orthogonal wavelet bases. We then show that if we use the tree index set associated with the partition generated by the CGS to encode a wavelet approximation, it gives optimal order of convergence.

  9. On approximate equivalence of the generalized least squares estimate and the maximum likelihood estimate in a growth curve model

    Institute of Scientific and Technical Information of China (English)

    王理同

    2012-01-01

    In a growth curve model, the generalized least squares (GLS) estimator of the parameter matrix is a linear function of the response variables, whereas the maximum likelihood estimator is nonlinear, so statistical inference based on the maximum likelihood estimate is more complicated. To simplify that inference, some authors have studied conditions under which the maximum likelihood estimator is completely equivalent to the GLS estimator. Unfortunately, such complete equivalence rarely holds. An approximate equivalence is therefore considered instead: the ratio of the norms (in the Euclidean sense) of the two estimators is examined, and the maximum likelihood estimator is regarded as approximately equivalent to the GLS estimator whenever this ratio lies within a given tolerance, which simplifies the statistical inference of the maximum likelihood estimator.

  10. On the Performance of Maximum Likelihood Inverse Reinforcement Learning

    CERN Document Server

    Ratia, Héctor; Martinez-Cantin, Ruben

    2012-01-01

    Inverse reinforcement learning (IRL) addresses the problem of recovering a task description given a demonstration of the optimal policy used to solve such a task. The optimal policy is usually provided by an expert or teacher, making IRL especially suitable for the problem of apprenticeship learning. The task description is encoded in the form of a reward function of a Markov decision process (MDP). Several algorithms have been proposed to find the reward function corresponding to a set of demonstrations. One of the algorithms that has provided the best results in different applications is a gradient method to optimize a policy squared error criterion. On a parallel line of research, other authors have recently presented a gradient approximation of the maximum likelihood estimate of the reward signal. In general, both approaches approximate the gradient estimate and the criteria at different stages to make the algorithm tractable and efficient. In this work, we provide a detailed description of the different metho...

  11. Maximum Likelihood Estimation of the Identification Parameters and Its Correction

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    By taking the subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation is of higher approximating precision to the true parameters than the least square methods.

  12. Study on the Hungarian algorithm for the maximum likelihood data association problem

    Institute of Scientific and Technical Information of China (English)

    Wang Jianguo; He Peikun; Cao Wei

    2007-01-01

    A specialized Hungarian algorithm was developed for the maximum likelihood data association problem, with two implementation versions owing to the presence of false alarms and missed detections. The maximum likelihood data association problem is formulated as a bipartite weighted matching problem. Its duality and the optimality conditions are given. The Hungarian algorithm with its computational steps, data structure and computational complexity is presented. The two implementation versions, the Hungarian forest (HF) algorithm and the Hungarian tree (HT) algorithm, and their combination with the naïve auction initialization are discussed. The computational results show that the HT algorithm is slightly faster than the HF algorithm and that both are superior to the classic Munkres algorithm.
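
    The core formulation is easy to reproduce with an off-the-shelf assignment solver. In the sketch below, scipy's linear_sum_assignment stands in for the paper's specialized Hungarian variants, the cost matrix is a negative log-likelihood, and false alarms and missed detections (which the paper handles) are ignored.

    ```python
    # ML data association as bipartite weighted matching: assign each
    # measurement to the track that maximizes the joint likelihood.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.stats import multivariate_normal

    tracks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # predicted positions
    meas = np.array([[9.8, 0.3], [0.2, -0.1], [0.4, 10.2]])    # measurements
    R = np.eye(2) * 0.25                                       # measurement covariance

    # cost[i, j] = negative log-likelihood of measurement j given track i
    cost = np.array([[-multivariate_normal.logpdf(m, t, R) for m in meas]
                     for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    print("ML association (track -> measurement):", list(zip(rows, cols)))
    ```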

  13. Maximum Likelihood Factor Structure of the Family Environment Scale.

    Science.gov (United States)

    Fowler, Patrick C.

    1981-01-01

    Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)

  14. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
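
    A minimal reproduction of the model class on synthetic data (the stock and rubber price series themselves are not reproduced here): a two-component normal mixture fit by maximum likelihood via the EM algorithm, using scikit-learn.

    ```python
    # Two-component Gaussian mixture fitted by maximum likelihood (EM).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])

    gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
    print("weights:", gm.weights_)
    print("means:", gm.means_.ravel())
    print("stds:", np.sqrt(gm.covariances_).ravel())
    ```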

  15. Maximum likelihood tuning of a vehicle motion filter

    Science.gov (United States)

    Trankle, Thomas L.; Rabin, Uri H.

    1990-01-01

    This paper describes the use of maximum likelihood estimation of unknown parameters appearing in a nonlinear vehicle motion filter. The filter uses the kinematic equations of motion of a rigid body in motion over a spherical earth. The nine states of the filter represent vehicle velocity, attitude, and position. The inputs to the filter are three components of translational acceleration and three components of angular rate. Measurements used to update states include air data, altitude, position, and attitude. Expressions are derived for the elements of filter matrices needed to use air data in a body-fixed frame with filter states expressed in a geographic frame. An expression for the likelihood function of the data is given, along with accurate approximations for the function's gradient and Hessian with respect to unknown parameters. These are used by a numerical quasi-Newton algorithm for maximizing the likelihood function of the data in order to estimate the unknown parameters. The parameter estimation algorithm is useful for processing data from aircraft flight tests or for tuning inertial navigation systems.

  16. Blind Detection of Ultra-faint Streaks with a Maximum Likelihood Method

    CERN Document Server

    Dawson, William A; Kamath, Chandrika

    2016-01-01

    We have developed a maximum likelihood source detection method capable of detecting ultra-faint streaks with surface brightnesses approximately an order of magnitude fainter than the pixel level noise. Our maximum likelihood detection method is a model based approach that requires no a priori knowledge about the streak location, orientation, length, or surface brightness. This method enables discovery of typically undiscovered objects, and enables the utilization of low-cost sensors (i.e., higher-noise data). The method also easily facilitates multi-epoch co-addition. We will present the results from the application of this method to simulations, as well as real low earth orbit observations.

  17. Modified maximum likelihood registration based on information fusion

    Institute of Scientific and Technical Information of China (English)

    Yongqing Qi; Zhongliang Jing; Shiqiang Hu

    2007-01-01

    The bias estimation of passive sensors is considered based on information fusion in a multi-platform multi-sensor tracking system. The unobservable problem of bearing-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservable problem in the blind spot and can estimate the biases more rapidly and accurately than the maximum likelihood method. It is statistically efficient since the standard deviation of the bias estimation errors meets the theoretical lower bounds.

  18. Semiparametric maximum likelihood for nonlinear regression with measurement errors.

    Science.gov (United States)

    Suh, Eun-Young; Schafer, Daniel W

    2002-06-01

    This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The illustration of the example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.

  19. Maximum-likelihood estimation of haplotype frequencies in nuclear families.

    Science.gov (United States)

    Becker, Tim; Knapp, Michael

    2004-07-01

    The importance of haplotype analysis in the context of association fine mapping of disease genes has grown steadily over the last years. Since experimental methods to determine haplotypes on a large scale are not available, phase has to be inferred statistically. For individual genotype data, several reconstruction techniques and many implementations of the expectation-maximization (EM) algorithm for haplotype frequency estimation exist. Recent research work has shown that incorporating available genotype information of related individuals largely increases the precision of haplotype frequency estimates. We, therefore, implemented a highly flexible program written in C, called FAMHAP, which calculates maximum likelihood estimates (MLEs) of haplotype frequencies from general nuclear families with an arbitrary number of children via the EM-algorithm for up to 20 SNPs. For more loci, we have implemented a locus-iterative mode of the EM-algorithm, which gives reliable approximations of the MLEs for up to 63 SNP loci, or less when multi-allelic markers are incorporated into the analysis. Missing genotypes can be handled as well. The program is able to distinguish cases (haplotypes transmitted to the first affected child of a family) from pseudo-controls (non-transmitted haplotypes with respect to the child). We tested the performance of FAMHAP and the accuracy of the obtained haplotype frequencies on a variety of simulated data sets. The implementation proved to work well when many markers were considered and no significant differences between the estimates obtained with the usual EM-algorithm and those obtained in its locus-iterative mode were observed. We conclude from the simulations that the accuracy of haplotype frequency estimation and reconstruction in nuclear families is very reliable in general and robust against missing genotypes.
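
    The core EM step is easy to state in code. The sketch below is a toy for unrelated individuals at two biallelic SNPs with invented genotype counts; it is not FAMHAP itself, which additionally handles nuclear families, up to 20 (or, iteratively, 63) loci, multi-allelic markers, and missing genotypes.

    ```python
    # EM for haplotype frequencies from unphased two-SNP genotypes.
    import itertools
    import numpy as np

    haps = list(itertools.product((0, 1), repeat=2))   # the 4 possible haplotypes
    freq = np.full(4, 0.25)                            # initial frequencies
    # genotypes as minor-allele counts (0/1/2) at each SNP, per individual
    genos = [(1, 1)] * 40 + [(0, 0)] * 30 + [(2, 2)] * 20 + [(1, 0)] * 10

    def compatible(g):
        """Ordered haplotype pairs consistent with genotype g."""
        return [(i, j) for i, hi in enumerate(haps) for j, hj in enumerate(haps)
                if hi[0] + hj[0] == g[0] and hi[1] + hj[1] == g[1]]

    for _ in range(100):
        counts = np.zeros(4)
        for g in genos:
            pairs = compatible(g)
            w = np.array([freq[i] * freq[j] for i, j in pairs])
            w /= w.sum()                               # E-step: pair posteriors
            for (i, j), p in zip(pairs, w):
                counts[i] += p
                counts[j] += p
        freq = counts / counts.sum()                   # M-step: expected counts
    print(dict(zip(haps, np.round(freq, 3))))
    ```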

  20. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  1. Bias Correction for Alternating Iterative Maximum Likelihood Estimators

    Institute of Scientific and Technical Information of China (English)

    Gang YU; Wei GAO; Ningzhong SHI

    2013-01-01

    In this paper, we give a definition of the alternating iterative maximum likelihood estimator (AIMLE), which is a biased estimator. Furthermore, we adjust the AIMLE to obtain asymptotically unbiased and consistent estimators by using a bootstrap iterative bias correction method as in Kuk (1995). Two examples and simulation results are reported to illustrate the performance of the bias correction for the AIMLE.
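
    For orientation, here is the standard single-step bootstrap bias correction applied to a deliberately simple biased estimator (the ML variance estimator); Kuk's (1995) method iterates corrections of this kind, and the paper's AIMLE setting is more involved.

    ```python
    # Bootstrap bias correction: theta_corrected = 2*theta - mean(theta_boot).
    import numpy as np

    rng = np.random.default_rng(6)
    x = rng.normal(0.0, 2.0, 25)           # sample; true variance is 4
    theta = np.var(x)                      # biased MLE (divides by n, not n-1)

    boot = np.array([np.var(rng.choice(x, x.size, replace=True))
                     for _ in range(2000)])
    theta_bc = 2 * theta - boot.mean()     # bias-corrected estimate
    print(f"MLE {theta:.3f} -> bias-corrected {theta_bc:.3f}")
    ```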

  2. Maximum likelihood estimation of phase-type distributions

    DEFF Research Database (Denmark)

    Esparza, Luz Judith R

    This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions ...

  3. A Monte Carlo Evaluation of Maximum Likelihood Multidimensional Scaling Methods

    NARCIS (Netherlands)

    Bijmolt, T.H.A.; Wedel, M.

    1996-01-01

    We compare three alternative Maximum Likelihood Multidimensional Scaling methods for pairwise dissimilarity ratings, namely MULTISCALE, MAXSCAL, and PROSCAL, in a Monte Carlo study. The three MLMDS methods recover the true configurations very well. The recovery of the true dimensionality depends on the

  4. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    Science.gov (United States)

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  5. A Unified Maximum Likelihood Approach to Document Retrieval.

    Science.gov (United States)

    Bodoff, David; Enache, Daniel; Kambil, Ajit; Simon, Gary; Yukhimets, Alex

    2001-01-01

    Addresses the query- versus document-oriented dichotomy in information retrieval. Introduces a maximum likelihood approach to utilizing feedback data that can be used to construct a concrete object function that estimates both document and query parameters in accordance with all available feedback data. (AEF)

  6. Heteroscedastic one-factor models and marginal maximum likelihood estimation

    NARCIS (Netherlands)

    Hessen, D.J.; Dolan, C.V.

    2009-01-01

    In the present paper, a general class of heteroscedastic one-factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one-dimensional factor score. A marginal maximum likelihood procedure for parameter estimati

  7. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    1994-01-01

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the est

  9. The Multivariate Watson Distribution: Maximum-Likelihood Estimation and other Aspects

    CERN Document Server

    Sra, Suvrit

    2011-01-01

    This paper studies fundamental aspects of modelling data using multivariate Watson distributions. Although these distributions are natural for modelling axially symmetric data (i.e., unit vectors where $\pm x$ are equivalent), using them in high dimensions can be difficult. Why so? Largely because for Watson distributions even basic tasks such as maximum-likelihood estimation are numerically challenging. To tackle the numerical difficulties some approximations have been derived, but these are either grossly inaccurate in high dimensions (Directional Statistics, Mardia & Jupp, 2000) or, when reasonably accurate (J. Machine Learning Research, W. & C.P., v2, Bijral et al., 2007, pp. 35-42), they lack theoretical justification. We derive new approximations to the maximum-likelihood estimates; our approximations are theoretically well-defined, numerically accurate, and easy to compute. We build on our parameter estimation and discuss mixture-modelling with Watson distributions; here we uncover...

  10. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    Directory of Open Access Journals (Sweden)

    S. Sridevi

    2013-02-01

    Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and obscures the true pixel intensities. In fetal ultrasound images, edges and local fine details are especially important for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter must therefore suppress speckle noise efficiently while preserving these features. The proposed filter generalizes the Rayleigh maximum likelihood filter by using statistical tools as tuning parameters and by using quadrilateral kernels of different shapes to estimate the noise-free pixel from its neighborhood. The performance of several filters, namely the Median, Kuwahara, Frost, Homogeneous mask and Rayleigh maximum likelihood filters, is compared with the proposed filter in terms of PSNR and image profile. The proposed filter surpasses these conventional filters.
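
    A simplified sliding-window version of the underlying estimate (square windows instead of the paper's shaped quadrilateral kernels, and no statistical tuning): under a Rayleigh speckle model, the ML scale estimate from window pixels x_1..x_N is sqrt(sum(x_i^2) / (2N)).

    ```python
    # Windowed Rayleigh maximum likelihood despeckling on a toy image.
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(7)
    clean = np.ones((64, 64))
    clean[16:48, 16:48] = 3.0                    # piecewise-constant toy image
    speckled = rng.rayleigh(scale=clean)         # Rayleigh-distributed pixels

    mean_sq = uniform_filter(speckled ** 2, size=5)  # windowed mean of x^2
    despeckled = np.sqrt(mean_sq / 2.0)              # Rayleigh ML scale estimate
    print("center estimate:", despeckled[32, 32], "(true scale 3.0)")
    ```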

  11. $\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs

    CERN Document Server

    van de Geer, Sara

    2012-01-01

    We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than sample size $n$ but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.

  12. Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning of RBMs

    CERN Document Server

    Desjardins, Guillaume; Bengio, Yoshua

    2010-01-01

    Restricted Boltzmann Machines (RBM) have attracted a lot of attention of late, as one of the principal building blocks of deep networks. Training RBMs remains problematic however, because of the intractability of their partition function. The maximum likelihood gradient requires a very robust sampler which can accurately sample from the model despite the loss of ergodicity often incurred during learning. While using Parallel Tempering in the negative phase of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a trade-off between computational complexity and high ergodicity, and requires careful hand-tuning of the temperatures. In this paper, we show that this trade-off is unnecessary. The choice of optimal temperatures can be automated by minimizing average return time (a concept first proposed by [Katzgraber et al., 2006]) while chains can be spawned dynamically, as needed, thus minimizing the computational overhead. We show on a synthetic dataset, that this results in better likelihood ...

  13. Smoothed log-concave maximum likelihood estimation with applications

    CERN Document Server

    Chen, Yining

    2011-01-01

    We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.

  14. Maximum Likelihood Sequence Detection Receivers for Nonlinear Optical Channels

    OpenAIRE

    2015-01-01

    The space-time whitened matched filter (ST-WMF) maximum likelihood sequence detection (MLSD) architecture has been recently proposed (Maggio et al., 2014). Its objective is reducing implementation complexity in transmissions over nonlinear dispersive channels. The ST-WMF-MLSD receiver (i) drastically reduces the number of states of the Viterbi decoder (VD) and (ii) offers a smooth trade-off between performance and complexity. In this work the ST-WMF-MLSD receiver is investigated in detail. We...

  15. Maximum-likelihood estimation prevents unphysical Mueller matrices

    CERN Document Server

    Aiello, A; Voigt, D; Woerdman, J P

    2005-01-01

    We show that the method of maximum-likelihood estimation, recently introduced in the context of quantum process tomography, can be applied to the determination of Mueller matrices characterizing the polarization properties of classical optical systems. Contrary to linear reconstruction algorithms, the proposed method yields physically acceptable Mueller matrices even in the presence of uncontrolled experimental errors. We illustrate the method on the case of an unphysical measured Mueller matrix taken from the literature.

  16. Maximum Likelihood Under Response Biased Sampling

    OpenAIRE

    Chambers, Raymond; Dorfman, Alan; Wang, Suojin

    2003-01-01

    Informative sampling occurs when the probability of inclusion in the sample depends on the value of the survey response variable. Response or size biased sampling is a particular case of informative sampling where the inclusion probability is proportional to the value of this variable. In this paper we describe a general model for response biased sampling, which we call array sampling, and develop maximum likelihood and estimating equation theory appropriate to this situation. The ...

  17. Microarray background correction: maximum likelihood estimation for the normal-exponential convolution

    DEFF Research Database (Denmark)

    Silver, Jeremy D; Ritchie, Matthew E; Smyth, Gordon K

    2009-01-01

    is developed for exact maximum likelihood estimation (MLE) using high-quality optimization software and using the saddle-point estimates as starting values. "MLE" is shown to outperform heuristic estimators proposed by other authors, both in terms of estimation accuracy and in terms of performance on real data. ... The saddle-point approximation is an adequate replacement in most practical situations. The performance of normexp for assessing differential expression is improved by adding a small offset to the corrected intensities...
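
    A sketch of exact MLE for the normal-exponential convolution (normal background plus exponential signal) using its closed-form density; the optimizer settings and starting values below are illustrative, not the saddle-point initialization the abstract describes.

    ```python
    # Normexp model: x = N(mu, sigma^2) + Exp(mean alpha). Closed-form density:
    # f(x) = (1/alpha) * exp(sigma^2/(2 alpha^2) - (x-mu)/alpha)
    #        * Phi((x-mu)/sigma - sigma/alpha)
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(8)
    x = rng.normal(100.0, 15.0, 5000) + rng.exponential(200.0, 5000)

    def negloglik(p):
        mu, log_s, log_a = p                  # log-parametrize sigma and alpha
        s, a = np.exp(log_s), np.exp(log_a)
        z = x - mu
        ll = -log_a + s**2 / (2 * a**2) - z / a + norm.logcdf(z / s - s / a)
        return -ll.sum()

    x0 = [np.quantile(x, 0.05), np.log(x.std() / 2), np.log(x.std())]
    fit = minimize(negloglik, x0, method="Nelder-Mead")
    mu, s, a = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
    print(f"mu={mu:.1f} sigma={s:.1f} alpha={a:.1f}")
    ```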

  18. SPDE Approximation for Random Trees

    CERN Document Server

    Bakhtin, Yuri

    2009-01-01

    We consider the genealogy tree for a critical branching process conditioned on non-extinction. We enumerate vertices in each generation of the tree so that for each two generations one can define a monotone map describing the ancestor-descendant relation between their vertices. We show that under appropriate rescaling this family of monotone maps converges in distribution in a special topology to a limiting flow of discontinuous monotone maps which can be seen as a continuum tree. This flow is a solution of an SPDE with respect to a Brownian sheet.

  19. Maximum-likelihood analysis of the COBE angular correlation function

    Science.gov (United States)

    Seljak, Uros; Bertschinger, Edmund

    1993-01-01

    We have used maximum-likelihood estimation to determine the quadrupole amplitude Q(sub rms-PS) and the spectral index n of the density fluctuation power spectrum at recombination from the COBE DMR data. We find a strong correlation between the two parameters of the form Q(sub rms-PS) = (15.7 +/- 2.6) exp (0.46(1 - n)) microK for fixed n. Our result is slightly smaller than and has a smaller statistical uncertainty than the 1992 estimate of Smoot et al.

  20. Maximum likelihood characterization of rotationally symmetric distributions on the sphere

    OpenAIRE

    Duerinckx, Mitia; Ley, Christophe

    2012-01-01

    A classical characterization result, which can be traced back to Gauss, states that the maximum likelihood estimator (MLE) of the location parameter equals the sample mean for any possible univariate samples of any possible sizes n if and only if the samples are drawn from a Gaussian population. A similar result, in the two-dimensional case, is given in von Mises (1918) for the Fisher-von Mises-Langevin (FVML) distribution, the equivalent of the Gaussian law on the unit circle. Half a century...

  1. Efficient maximum likelihood parameterization of continuous-time Markov processes

    CERN Document Server

    McGibbon, Robert T

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is drastically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations.
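
    As a rough illustration of the general task (not the estimator introduced in this record), a naive plug-in approach counts transitions at the sampling interval tau, row-normalizes to get a transition matrix, and takes a matrix logarithm; all names and data here are illustrative:

        import numpy as np
        from scipy.linalg import logm

        def estimate_rate_matrix(states, n_states, tau):
            # Count observed transitions between consecutive samples.
            C = np.zeros((n_states, n_states))
            for a, b in zip(states[:-1], states[1:]):
                C[a, b] += 1.0
            # Row-normalize; assumes every state is visited at least once.
            P = C / C.sum(axis=1, keepdims=True)
            # logm(P)/tau is only a candidate generator: it may need projection
            # onto valid rate matrices (non-negative off-diagonals, zero row sums).
            return logm(P).real / tau

    Unlike the estimator described above, this plug-in cannot enforce constraints such as detailed balance and may return an invalid generator.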

  2. MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL

    Directory of Open Access Journals (Sweden)

    Vinod Kumar

    2010-01-01

    Full Text Available In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.
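
    For readers who want to reproduce the flavor of this analysis, SciPy's three-parameter generalized gamma family supports direct numerical MLE; note that its parameterization (shapes a and c plus a scale) need not match the model of the paper, and the data below are synthetic:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = stats.gengamma.rvs(a=2.0, c=1.5, scale=3.0, size=500, random_state=rng)

        # Fix the location at zero so only the two shape parameters and the
        # scale are estimated by maximum likelihood.
        a_hat, c_hat, loc_hat, scale_hat = stats.gengamma.fit(data, floc=0)
        print(a_hat, c_hat, scale_hat)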

  3. Maximum Likelihood Localization of Radiation Sources with unknown Source Intensity

    CERN Document Server

    Baidoo-Williams, Henry E

    2016-01-01

    In this paper, we consider a novel and robust maximum likelihood approach to localizing radiation sources with unknown statistics of the source signal strength. The result utilizes the smallest number of sensors required theoretically to localize the source. It is shown that, should the source lie in the open convex hull of the sensors, precisely $N+1$ sensors are required in $\mathbb{R}^N, ~N \in \{1,\cdots,3\}$. It is further shown that the region of interest, the open convex hull of the sensors, is entirely devoid of false stationary points. An augmented gradient ascent algorithm that applies random projections whenever an estimate escapes the convex hull is presented.
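
    A minimal sketch of the underlying idea, assuming Poisson sensor counts under an inverse-square intensity model; the paper's specific treatment (unknown source statistics, the augmented gradient ascent with random projections) is not reproduced, and the sensor positions and counts below are invented:

        import numpy as np
        from scipy.optimize import minimize

        sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        counts = np.array([120.0, 40.0, 35.0, 18.0])   # illustrative measurements

        def neg_log_lik(params):
            x, y, log_A = params
            d2 = np.sum((sensors - np.array([x, y]))**2, axis=1) + 1e-6
            lam = np.exp(log_A) / d2               # expected count per sensor
            return np.sum(lam - counts * np.log(lam))

        res = minimize(neg_log_lik, x0=[5.0, 5.0, np.log(100.0)], method="Nelder-Mead")
        print(res.x[:2])   # estimated source position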

  4. Maximum Likelihood Joint Tracking and Association in Strong Clutter

    Directory of Open Access Journals (Sweden)

    Leonid I. Perlovsky

    2013-01-01

    Full Text Available We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non‐combinatorial algorithm for this problem is developed in the case of strong clutter for radar data. By using an iterative procedure of the dynamic logic process “from vague‐to‐crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly‐cluttered scenarios and results in an orders‐of‐magnitude improvement in signal‐to‐clutter ratio.

  5. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    Science.gov (United States)

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  6. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
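
    To make the marginal-likelihood idea concrete, here is a sketch for the simpler dichotomous Rasch model, integrating the latent ability out with Gauss-Hermite quadrature; it does not reproduce the generalized partial credit code of the paper, and the response data are random:

        import numpy as np
        from scipy.optimize import minimize

        def marginal_nll(beta, X, nodes, weights):
            # theta ~ N(0, 1): transform physicists' Gauss-Hermite nodes/weights.
            theta = nodes * np.sqrt(2.0)
            w = weights / np.sqrt(np.pi)
            logit = theta[:, None] - beta[None, :]   # quad points x items
            p = 1.0 / (1.0 + np.exp(-logit))
            ll = 0.0
            for x in X:                              # one person per row
                like_q = np.prod(np.where(x == 1, p, 1.0 - p), axis=1)
                ll += np.log(np.dot(w, like_q))      # marginal likelihood
            return -ll

        nodes, weights = np.polynomial.hermite.hermgauss(21)
        X = (np.random.rand(200, 5) < 0.6).astype(int)   # toy response matrix
        res = minimize(marginal_nll, np.zeros(5), args=(X, nodes, weights))
        print(res.x)   # item difficulty estimates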

  7. Maximum likelihood identification of aircraft stability and control derivatives

    Science.gov (United States)

    Mehra, R. K.; Stepner, D. E.; Tyler, J. S.

    1974-01-01

    Application of a generalized identification method to flight test data analysis. The method is based on the maximum likelihood (ML) criterion and includes output error and equation error methods as special cases. Both the linear and nonlinear models with and without process noise are considered. The flight test data from lateral maneuvers of HL-10 and M2/F3 lifting bodies are processed to determine the lateral stability and control derivatives, instrumentation accuracies, and biases. A comparison is made between the results of the output error method and the ML method for M2/F3 data containing gusts. It is shown that better fits to time histories are obtained by using the ML method. The nonlinear model considered corresponds to the longitudinal equations of the X-22 VTOL aircraft. The data are obtained from a computer simulation and contain both process and measurement noise. The applicability of the ML method to nonlinear models with both process and measurement noise is demonstrated.

  8. Penalized maximum likelihood estimation for generalized linear point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard

    2010-01-01

    A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood....... Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient...... of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper by extensions to multivariate and additive model specifications. The methods are implemented in the R-package ppstat....

  9. Analytical maximum likelihood estimation of stellar magnetic fields

    CERN Document Server

    González, M J Martínez; Ramos, A Asensio; Belluzzi, L

    2011-01-01

    The polarised spectrum of stellar radiation encodes valuable information on the conditions of stellar atmospheres and the magnetic fields that permeate them. In this paper, we give explicit expressions to estimate the magnetic field vector and its associated error from the observed Stokes parameters. We study the solar case, where specific intensities are observed, and then the stellar case, where we receive the polarised flux. In this second case, we concentrate on the explicit expression for the case of a slow rotator with a dipolar magnetic field geometry. Moreover, we also give explicit formulae to retrieve the magnetic field vector from the LSD profiles without assuming mean values for the LSD artificial spectral line. The formulae have been obtained assuming that the spectral lines can be described in the weak field regime and using a maximum likelihood approach. The errors are recovered by means of the Hermitian matrix. The biases of the estimators are analysed in depth.
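
    The weak-field regime admits a closed-form longitudinal-field estimator of the kind this record alludes to. The sketch below uses the standard relation B proportional to -sum(V * dI/dlambda) / sum((dI/dlambda)^2), with the usual constant for wavelengths in Angstrom; read it as a schematic, not the paper's exact formulae:

        import numpy as np

        def blos_weak_field(wavelength, stokes_I, stokes_V, lambda0, geff):
            # Weak-field ML/least-squares estimate of the line-of-sight
            # field in Gauss; lambda0 in Angstrom, geff = effective Lande factor.
            C = 4.6686e-13 * lambda0**2 * geff
            dI = np.gradient(stokes_I, wavelength)
            return -np.sum(stokes_V * dI) / (C * np.sum(dI**2))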

  10. Gaussian maximum likelihood and contextual classification algorithms for multicrop classification

    Science.gov (United States)

    Di Zenzo, Silvano; Bernstein, Ralph; Kolsky, Harwood G.; Degloria, Stephen D.

    1987-01-01

    The paper reviews some of the ways in which context has been handled in the remote-sensing literature, and additional possibilities are introduced. The problem of computing exhaustive and normalized class-membership probabilities from the likelihoods provided by the Gaussian maximum likelihood classifier (to be used as initial probability estimates to start relaxation) is discussed. An efficient implementation of probabilistic relaxation is proposed, suiting the needs of actual remote-sensing applications. A modified fuzzy-relaxation algorithm using generalized operations between fuzzy sets is presented. Combined use of the two relaxation algorithms is proposed to exploit context in multispectral classification of remotely sensed data. Results on both one artificially created image and one MSS data set are reported.
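
    A minimal sketch of the first step described above: turning Gaussian class-conditional likelihoods into exhaustive, normalized class-membership probabilities (equal priors are assumed; the relaxation stage itself is omitted):

        import numpy as np

        def class_posteriors(x, means, covs):
            # Log-likelihood of x under each class Gaussian, then a
            # numerically stable softmax to get normalized probabilities.
            logp = []
            for mu, S in zip(means, covs):
                r = x - mu
                _, logdet = np.linalg.slogdet(S)
                logp.append(-0.5 * (r @ np.linalg.solve(S, r) + logdet))
            logp = np.array(logp)
            logp -= logp.max()
            p = np.exp(logp)
            return p / p.sum()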

  11. Evaluating maximum likelihood estimation methods to determine the hurst coefficients

    Science.gov (United States)

    Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.

    1999-12-01

    A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, characterizes long memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased results of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.
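
    For orientation, exact Gaussian ML for H under an fGn model can be written directly from the known fGn autocovariance; this brute-force O(n^3) sketch illustrates the kind of estimator being evaluated but is not the S-PLUS implementation discussed in the record:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def fgn_acov(k, H):
            # Autocovariance of unit-variance fractional Gaussian noise.
            k = np.abs(k).astype(float)
            return 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))

        def hurst_mle(x):
            n = len(x)
            lags = np.arange(n)
            def nll(H):
                R = fgn_acov(lags[None, :] - lags[:, None], H)
                _, logdet = np.linalg.slogdet(R)
                quad = x @ np.linalg.solve(R, x)   # profile out the variance
                return 0.5 * (logdet + n * np.log(quad / n))
            return minimize_scalar(nll, bounds=(0.01, 0.99), method="bounded").x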

  12. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    CERN Document Server

    Agnese, R; Balakishiyeva, D; Thakur, R Basu; Bauer, D A; Billard, J; Borgland, A; Bowles, M A; Brandt, D; Brink, P L; Bunker, R; Cabrera, B; Caldwell, D O; Cerdeno, D G; Chagani, H; Chen, Y; Cooley, J; Cornell, B; Crewdson, C H; Cushman, P; Daal, M; Di Stefano, P C F; Doughty, T; Esteban, L; Fallows, S; Figueroa-Feliciano, E; Fritts, M; Godfrey, G L; Golwala, S R; Graham, M; Hall, J; Harris, H R; Hertel, S A; Hofer, T; Holmgren, D; Hsu, L; Huber, M E; Jastram, A; Kamaev, O; Kara, B; Kelsey, M H; Kennedy, A; Kiveni, M; Koch, K; Leder, A; Loer, B; Asamar, E Lopez; Mahapatra, R; Mandic, V; Martinez, C; McCarthy, K A; Mirabolfathi, N; Moffatt, R A; Moore, D C; Nelson, R H; Oser, S M; Page, K; Page, W A; Partridge, R; Pepin, M; Phipps, A; Prasad, K; Pyle, M; Qiu, H; Rau, W; Redl, P; Reisetter, A; Ricci, Y; Rogers, H E; Saab, T; Sadoulet, B; Sander, J; Schneck, K; Schnee, R W; Scorza, S; Serfass, B; Shank, B; Speller, D; Upadhyayula, S; Villano, A N; Welliver, B; Wright, D H; Yellin, S; Yen, J J; Young, B A; Zhang, J

    2014-01-01

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search (CDMS~II) experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from $^{210}$Pb decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  13. Narrow band interference cancelation in OFDM: Astructured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq

    2012-06-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.

  14. Maximum likelihood method and Fisher's information in physics and econophysics

    CERN Document Server

    Syska, Jacek

    2012-01-01

    Three steps in the development of the maximum likelihood (ML) method are presented. At first, the application of the ML method and Fisher information notion in the model selection analysis is described (Chapter 1). The fundamentals of differential geometry in the construction of the statistical space are introduced, illustrated also by examples of the estimation of the exponential models. At second, the notions of the relative entropy and the information channel capacity are introduced (Chapter 2). The observed and expected structural information principle (IP) and the variational IP of the modified extremal physical information (EPI) method of Frieden and Soffer are presented and discussed (Chapter 3). The derivation of the structural IP based on the analyticity of the logarithm of the likelihood function and on the metricity of the statistical space of the system is given. At third, the use of the EPI method is developed (Chapters 4-5). The information channel capacity is used for the field theory models cl...

  15. Application of maximum likelihood to direct methods: the probability density function of the triple-phase sums. XI.

    Science.gov (United States)

    Rius, Jordi

    2006-09-01

    The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].

  16. Two-Stage Maximum Likelihood Estimation (TSMLE) for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    Full Text Available This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for the multitone code division multiple access (MT-CDMA) system. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER) of the system over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis, and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.

  17. Local solutions of Maximum Likelihood Estimation in Quantum State Tomography

    CERN Document Server

    Gonçalves, Douglas S; Lavor, Carlile; Farías, Osvaldo Jiménez; Ribeiro, P H Souto

    2011-01-01

    Maximum likelihood estimation is one of the most used methods in quantum state tomography, where the aim is to find the best density matrix for the description of a physical system. Results of measurements on the system should match the expected values produced by the density matrix. In some cases however, if the matrix is parameterized to ensure positivity and unit trace, the negative log-likelihood function may have several local minima. In several papers in the field, authors attribute a source of errors to the possibility that most of these local minima are not global, so that optimization methods can be trapped in the wrong minimum, leading to a wrong density matrix. Here we show that, for convex negative log-likelihood functions, all local minima are global. We also show that a practical source of errors is in fact the use of optimization methods that do not have global convergence property or present numerical instabilities. The clarification of this point has important repercussion on quantum informat...

  18. Maximum likelihood estimation for cytogenetic dose-response curves

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L.; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is kappa(gamma d + g(t, tau) d^2), where t is the time and d is dose. The coefficient of the d^2 term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
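
    As a concrete, simplified instance of the fitting problem described above (acute exposure, so the expected yield is linear-quadratic in dose), one can maximize the Poisson likelihood directly; the dose-response numbers below are invented for illustration:

        import numpy as np
        from scipy.optimize import minimize

        dose = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])       # Gy
        cells = np.array([5000, 5000, 2000, 1000, 800, 600])   # cells scored
        dicentrics = np.array([12, 28, 35, 62, 110, 190])      # observed counts

        def nll(params):
            alpha, beta = params
            lam = cells * (alpha * dose + beta * dose**2)   # expected counts
            if np.any(lam <= 0):
                return np.inf
            return np.sum(lam - dicentrics * np.log(lam))   # Poisson NLL

        res = minimize(nll, x0=[0.01, 0.05], method="Nelder-Mead")
        print(res.x)   # (alpha, beta) per-cell yield coefficients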

  19. A Maximum Likelihood Approach to Least Absolute Deviation Regression

    Directory of Open Access Journals (Sweden)

    Yinbo Li

    2004-09-01

    Full Text Available Least absolute deviation (LAD) regression is an important tool used in numerous applications throughout science and engineering, mainly due to the intrinsic robust characteristics of LAD. In this paper, we show that the optimization needed to solve the LAD regression problem can be viewed as a sequence of maximum likelihood estimates (MLE) of location. The derived algorithm reduces to an iterative procedure where a simple coordinate transformation is applied during each iteration to direct the optimization procedure along edge lines of the cost surface, followed by an MLE of location which is executed by a weighted median operation. Requiring weighted medians only, the new algorithm can be easily modularized for hardware implementation, as opposed to most of the other existing LAD methods which require complicated operations such as matrix entry manipulations. One exception is Wesolowsky's direct descent algorithm, which among the top algorithms is also based on weighted median operations. Simulation shows that the new algorithm is superior in speed to Wesolowsky's algorithm, which is simple in structure as well. The new algorithm provides a better tradeoff solution between convergence speed and implementation complexity.
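
    The record's weighted-median algorithm is not reproduced here; a standard alternative route to the same L1 solution is iteratively reweighted least squares, sketched below under the assumption of a full-rank design matrix:

        import numpy as np

        def lad_irls(X, y, n_iter=50, eps=1e-8):
            # Approximate LAD regression by IRLS with weights 1/|residual|.
            beta = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares start
            for _ in range(n_iter):
                r = y - X @ beta
                w = 1.0 / np.maximum(np.abs(r), eps)      # guard small residuals
                beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            return beta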

  20. Maximum Likelihood Sequence Detection Receivers for Nonlinear Optical Channels

    Directory of Open Access Journals (Sweden)

    Gabriel N. Maggio

    2015-01-01

    Full Text Available The space-time whitened matched filter (ST-WMF) maximum likelihood sequence detection (MLSD) architecture has been recently proposed (Maggio et al., 2014). Its objective is reducing implementation complexity in transmissions over nonlinear dispersive channels. The ST-WMF-MLSD receiver (i) drastically reduces the number of states of the Viterbi decoder (VD) and (ii) offers a smooth trade-off between performance and complexity. In this work the ST-WMF-MLSD receiver is investigated in detail. We show that the space compression of the nonlinear channel is an instrumental property of the ST-WMF-MLSD which results in a major reduction of the implementation complexity in intensity modulation and direct detection (IM/DD) fiber optic systems. Moreover, we assess the performance of ST-WMF-MLSD in IM/DD optical systems with chromatic dispersion (CD) and polarization mode dispersion (PMD). Numerical results for a 10 Gb/s, 700 km IM/DD fiber-optic link with 50 ps differential group delay (DGD) show that the number of states of the VD in ST-WMF-MLSD can be reduced ~4 times compared to an oversampled MLSD. Finally, we analyze the impact of imperfect channel estimation on the performance of the ST-WMF-MLSD. Our results show that the performance degradation caused by channel estimation inaccuracies is low and similar to that achieved by existing MLSD schemes (~0.2 dB).

  1. Maximum likelihood estimation for semiparametric density ratio model.

    Science.gov (United States)

    Diao, Guoqing; Ning, Jing; Qin, Jing

    2012-06-27

    In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.

  2. The Multi-Mission Maximum Likelihood framework (3ML)

    CERN Document Server

    Vianello, Giacomo; Younk, Patrick; Tibaldo, Luigi; Burgess, James M; Ayala, Hugo; Harding, Patrick; Hui, Michelle; Omodei, Nicola; Zhou, Hao

    2015-01-01

    Astrophysical sources are now observed by many different instruments at different wavelengths, from radio to high-energy gamma-rays, with an unprecedented quality. Putting all these data together to form a coherent view, however, is a very difficult task. Each instrument has its own data format, software and analysis procedure, which are difficult to combine. It is for example very challenging to perform a broadband fit of the energy spectrum of the source. The Multi-Mission Maximum Likelihood framework (3ML) aims to solve this issue, providing a common framework which allows for a coherent modeling of sources using all the available data, independent of their origin. At the same time, thanks to its architecture based on plug-ins, 3ML uses the existing official software of each instrument for the corresponding data in a way which is transparent to the user. 3ML is based on the likelihood formalism, in which a model summarizing our knowledge about a particular region of the sky is convolved with the instrument...

  3. Maximum likelihood based classification of electron tomographic data.

    Science.gov (United States)

    Stölken, Michael; Beck, Florian; Haller, Thomas; Hegerl, Reiner; Gutsche, Irina; Carazo, Jose-Maria; Baumeister, Wolfgang; Scheres, Sjors H W; Nickell, Stephan

    2011-01-01

    Classification and averaging of sub-tomograms can improve the fidelity and resolution of structures obtained by electron tomography. Here we present a three-dimensional (3D) maximum likelihood algorithm--MLTOMO--which is characterized by integrating 3D alignment and classification into a single, unified processing step. The novelty of our approach lies in the way we calculate the probability of observing an individual sub-tomogram for a given reference structure. We assume that the reference structure is affected by a 'compound wedge', resulting from the summation of many individual missing wedges in distinct orientations. The distance metric underlying our probability calculations effectively down-weights Fourier components that are observed less frequently. Simulations demonstrate that MLTOMO clearly outperforms the 'constrained correlation' approach and has advantages over existing approaches in cases where the sub-tomograms adopt preferred orientations. Application of our approach to cryo-electron tomographic data of ice-embedded thermosomes revealed distinct conformations that are in good agreement with results obtained by previous single particle studies.

  4. Maximum likelihood pedigree reconstruction using integer linear programming.

    Science.gov (United States)

    Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A

    2013-01-01

    Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible.

  5. tmle : An R Package for Targeted Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Susan Gruber

    2012-11-01

    Full Text Available Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user supplied covariates; estimated parameters include an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.

  6. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods

  7. DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    Full Text Available The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly-used distance based methods though not as accurate as maximum likelihood methods from good quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analysis of the same dataset using conventional methods. Taken together these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.

  8. Maximum-likelihood, self-consistent side chain free energies with applications to protein molecular dynamics

    CERN Document Server

    Jumper, John M; Sosnick, Tobin R

    2016-01-01

    To address the large gap between time scales that can be easily reached by molecular simulations and those required to understand protein dynamics, we propose a new methodology that computes a self-consistent approximation of the side chain free energy at every integration step. In analogy with the adiabatic Born-Oppenheimer approximation in which the nuclear dynamics are governed by the energy of the instantaneously-equilibrated electronic degrees of freedom, the protein backbone dynamics are simulated as proceeding according to the dictates of the free energy of an instantaneously-equilibrated side chain potential. The side chain free energy is computed on the fly; hence, the protein backbone dynamics traverse a greatly smoothed energetic landscape, resulting in extremely rapid equilibration and sampling of the Boltzmann distribution. Because our method employs a reduced model involving single-bead side chains, we also provide a novel, maximum-likelihood type method to parameterize the side chain model using...

  9. Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.

    Science.gov (United States)

    1986-05-01

    The consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers, for example, Wald (1949) and Wolfowitz (1953, 1965). References: Wald, A. (1949). Note on the consistency of maximum likelihood estimates. Ann. Math. Statist., Vol. 20, 595-601; Wolfowitz, J. (1953). The method of maximum likelihood and Wald theory of decision functions. Indag. Math., Vol. 15, 114-119.

  10. Off-Grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function

    Science.gov (United States)

    LIU, Liang; WEI, Ping; LIAO, Hong Shu

    Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to advantages over conventional methods. However, the performance of compressive sensing (CS)-based estimation methods decreases when true DOAs are not exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. Especially under the condition of a large array, we search for an approximately convex range around the true DOAs to guarantee that the DML function is convex. Based on the convexity of the DML function, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function with large arrays and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness and speed.

  11. Maximum likelihood estimation of the broken power law spectral parameters with detector design applications

    CERN Document Server

    Howell, L W

    2002-01-01

    The method of Maximum Likelihood (ML) is used to estimate the spectral parameters of an assumed broken power law energy spectrum from simulated detector responses. This methodology, which requires the complete specification of all cosmic-ray detector design parameters, is shown to provide approximately unbiased, minimum variance, and normally distributed spectra information for events detected by an instrument having a wide range of commonly used detector response functions. The ML procedure, coupled with the simulated performance of a proposed space-based detector and its planned life cycle, has proved to be of significant value in the design phase of a new science instrument. The procedure helped make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope. This ML methodology is then generalized to estimate bro...
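
    Stripped of the detector response modeling that is central to this study, the core unbinned ML fit of a broken power law (continuous at the break, defined on [Emin, infinity)) can be sketched as follows; parameter names and starting values are illustrative:

        import numpy as np
        from scipy.optimize import minimize

        def nll(params, E, Emin):
            g1, g2, log_Eb = params
            Eb = np.exp(log_Eb)
            if g1 <= 1.0 or g2 <= 1.0 or Eb <= Emin:   # keep integrals finite
                return np.inf
            I1 = (Emin**(1 - g1) - Eb**(1 - g1)) / (g1 - 1)   # below the break
            I2 = Eb**(1 - g1) / (g2 - 1)                      # above the break
            logC = -np.log(I1 + I2)                           # normalization
            logf = np.where(E <= Eb,
                            -g1 * np.log(E),
                            (g2 - g1) * np.log(Eb) - g2 * np.log(E))
            return -(len(E) * logC + logf.sum())

        # E: array of measured energies, Emin: instrument threshold (assumed given)
        # res = minimize(nll, x0=[2.0, 3.0, np.log(10.0)], args=(E, Emin),
        #                method="Nelder-Mead")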

  12. On the existence of maximum likelihood estimates for presence-only data

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    Presence-only data can be used to determine resource selection and estimate a species’ distribution. Maximum likelihood is a common parameter estimation method used for species distribution models. Maximum likelihood estimates, however, do not always exist for a commonly used species distribution model – the Poisson point process.

  13. A viable method for goodness-of-fit test in maximum likelihood fit

    Institute of Scientific and Technical Information of China (English)

    ZHANG Feng; GAO Yuan-Ning; HUO Lei

    2011-01-01

    A test statistic is proposed to perform the goodness-of-fit test in the unbinned maximum likelihood fit. Without using a detailed expression of the efficiency function, the test statistic is found to be strongly correlated with the maximum likelihood function if the efficiency function varies smoothly.

  14. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    Science.gov (United States)

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  15. Maximum Likelihood Implementation of an Isolation-with-Migration Model for Three Species.

    Science.gov (United States)

    Dalquen, Daniel A; Zhu, Tianqi; Yang, Ziheng

    2016-08-02

    We develop a maximum likelihood (ML) method for estimating migration rates between species using genomic sequence data. A species tree is used to accommodate the phylogenetic relationships among three species, allowing for migration between the two sister species, while the third species is used as an out-group. A Markov chain characterization of the genealogical process of coalescence and migration is used to integrate out the migration histories at each locus analytically, whereas Gaussian quadrature is used to integrate over the coalescent times on each genealogical tree numerically. This is an extension of our early implementation of the symmetrical isolation-with-migration model for three species to accommodate arbitrary loci with two or three sequences per locus and to allow asymmetrical migration rates. Our implementation can accommodate tens of thousands of loci, making it feasible to analyze genome-scale data sets to test for gene flow. We calculate the posterior probabilities of gene trees at individual loci to identify genomic regions that are likely to have been transferred between species due to gene flow. We conduct a simulation study to examine the statistical properties of the likelihood ratio test for gene flow between the two in-group species and of the ML estimates of model parameters such as the migration rate. Inclusion of data from a third out-group species is found to increase dramatically the power of the test and the precision of parameter estimation. We compiled and analyzed several genomic data sets from the Drosophila fruit flies. Our analyses suggest no migration from D. melanogaster to D. simulans, and a significant amount of gene flow from D. simulans to D. melanogaster, at the rate of [Formula: see text] migrant individuals per generation. We discuss the utility of the multispecies coalescent model for species tree estimation, accounting for incomplete lineage sorting and migration.

  16. Maximum likelihood reconstruction for Ising models with asynchronous updates

    CERN Document Server

    Zeng, Hong-Li; Aurell, Erik; Hertz, John; Roudi, Yasser

    2012-01-01

    We describe how the couplings in a non-equilibrium Ising model can be inferred from observing the model history. Two cases of an asynchronous update scheme are considered: one in which we know both the spin history and the update times (times at which an attempt was made to flip a spin) and one in which we only know the spin history (i.e., the times at which spins were actually flipped). In both cases, maximizing the likelihood of the data leads to exact learning rules for the couplings in the model. For the first case, we show that one can average over all possible choices of update times to obtain a learning rule that depends only on spin correlations and not on the specific spin history. For the second case, the same rule can be derived within a further decoupling approximation. We study all methods numerically for fully asymmetric Sherrington-Kirkpatrick models, varying the data length, system size, temperature, and external field. Good convergence is observed in accordance with the theoretical expectatio...

  17. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Abstract Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... The performance is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error

  18. A viable method for goodness-of-fit test in maximum likelihood fit

    Institute of Scientific and Technical Information of China (English)

    ZHANG Feng; GAO Yuan-Ning; HUO Lei

    2011-01-01

    A test statistic is proposed to perform the goodness-of-fit test in the unbinned maximum likelihood fit. Without using a detailed expression of the efficiency function, the test statistic is found to be strongly correlated with the maximum likelihood function if the efficiency function varies smoothly. We point out that the correlation coefficient can be estimated by the Monte Carlo technique. With the established method, two examples are given to illustrate the performance of the test statistic.

  19. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    Full Text Available This paper is concerned with modifications of the maximum likelihood, moment and percentile estimators of the two-parameter Power function distribution. Sampling behavior of the estimators is indicated by a Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moment and percentile estimators with respect to bias, mean square error and total deviation.

  20. Maximum Likelihood DOA Estimation of Multiple Wideband Sources in the Presence of Nonuniform Sensor Noise

    Directory of Open Access Journals (Sweden)

    K. Yao

    2007-12-01

    Full Text Available We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first algorithm is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge, and both the SC-ML and the approximately-concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.

  1. Maximum likelihood versus likelihood-free quantum system identification in the atom maser

    Science.gov (United States)

    Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin

    2014-10-01

    We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle.
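
    The likelihood-free side of this comparison can be sketched generically. The rejection sampler below runs on a toy Poisson model rather than the atom maser dynamics, and the prior, summary statistic and tolerance are arbitrary choices:

        import numpy as np

        def abc_rejection(data, prior_sample, simulate, summary, tol, n_draws=10000):
            # Accept parameter draws whose simulated summary statistic lands
            # within tol of the observed one.
            s_obs = summary(data)
            accepted = []
            for _ in range(n_draws):
                theta = prior_sample()
                if abs(summary(simulate(theta)) - s_obs) < tol:
                    accepted.append(theta)
            return np.array(accepted)   # approximate posterior sample

        rng = np.random.default_rng(1)
        data = rng.poisson(4.0, size=100)
        post = abc_rejection(data,
                             prior_sample=lambda: rng.uniform(0.0, 10.0),
                             simulate=lambda lam: rng.poisson(lam, size=100),
                             summary=np.mean, tol=0.1)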

  2. Derivation of the Mass Distribution of Extrasolar Planets with MAXLIMA - a Maximum Likelihood Algorithm

    CERN Document Server

    Zucker, S W; Zucker, Shay; Mazeh, Tsevi

    2001-01-01

    We construct a maximum-likelihood algorithm - MAXLIMA, to derive the mass distribution of the extrasolar planets when only the minimum masses are observed. The algorithm derives the distribution by solving a numerically stable set of equations, and does not need any iteration or smoothing. Based on 50 minimum masses, MAXLIMA yields a distribution which is approximately flat in log M, and might rise slightly towards lower masses. The frequency drops off very sharply when going to masses higher than 10 Jupiter masses, although we suspect there is still a higher mass tail that extends up to probably 20 Jupiter masses. We estimate that 5% of the G stars in the solar neighborhood have planets in the range of 1-10 Jupiter masses with periods shorter than 1500 days. For comparison we present the mass distribution of stellar companions in the range of 100--1000 Jupiter masses, which is also approximately flat in log M. The two populations are separated by the "brown-dwarf desert", a fact that strongly supports the id...

  3. Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera

    Science.gov (United States)

    Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.

    1987-01-01

    A new approach is introduced to estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. Researchers introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and amount of processing required.

  4. Maximum-Likelihood Adaptive Filter for Partially Observed Boolean Dynamical Systems

    Science.gov (United States)

    Imani, Mahdi; Braga-Neto, Ulisses M.

    2017-01-01

    Partially-observed Boolean dynamical systems (POBDS) are a general class of nonlinear models with application in estimation and control of Boolean processes based on noisy and incomplete measurements. The optimal minimum mean square error (MMSE) algorithms for POBDS state estimation, namely, the Boolean Kalman filter (BKF) and Boolean Kalman smoother (BKS), are intractable in the case of large systems, due to computational and memory requirements. To address this, we propose approximate MMSE filtering and smoothing algorithms based on the auxiliary particle filter (APF) method from sequential Monte-Carlo theory. These algorithms are used jointly with maximum-likelihood (ML) methods for simultaneous state and parameter estimation in POBDS models. In the presence of continuous parameters, ML estimation is performed using the expectation-maximization (EM) algorithm; we develop for this purpose a special smoother which reduces the computational complexity of the EM algorithm. The resulting particle-based adaptive filter is applied to a POBDS model of Boolean gene regulatory networks observed through noisy RNA-Seq time series data, and performance is assessed through a series of numerical experiments using the well-known cell cycle gene regulatory model.

  5. Concept for estimating mitochondrial DNA haplogroups using a maximum likelihood approach (EMMA)☆

    Science.gov (United States)

    Röck, Alexander W.; Dür, Arne; van Oven, Mannis; Parson, Walther

    2013-01-01

    The assignment of haplogroups to mitochondrial DNA haplotypes contributes substantial value for quality control, not only in forensic genetics but also in population and medical genetics. The availability of Phylotree, a widely accepted phylogenetic tree of human mitochondrial DNA lineages, led to the development of several (semi-)automated software solutions for haplogrouping. However, currently existing haplogrouping tools only make use of haplogroup-defining mutations, whereas private mutations (beyond the haplogroup level) can be additionally informative allowing for enhanced haplogroup assignment. This is especially relevant in the case of (partial) control region sequences, which are mainly used in forensics. The present study makes three major contributions toward a more reliable, semi-automated estimation of mitochondrial haplogroups. First, a quality-controlled database consisting of 14,990 full mtGenomes downloaded from GenBank was compiled. Together with Phylotree, these mtGenomes serve as a reference database for haplogroup estimates. Second, the concept of fluctuation rates, i.e. a maximum likelihood estimation of the stability of mutations based on 19,171 full control region haplotypes for which raw lane data is available, is presented. Finally, an algorithm for estimating the haplogroup of an mtDNA sequence based on the combined database of full mtGenomes and Phylotree, which also incorporates the empirically determined fluctuation rates, is brought forward. On the basis of examples from the literature and EMPOP, the algorithm is not only validated, but both the strength of this approach and its utility for quality control of mitochondrial haplotypes is also demonstrated. PMID:23948335

  6. Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, Ch.W. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain)], E-mail: lerche@ific.uv.es; Ros, A. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain); Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain); Sanchez, F.; Benlloch, J.M. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain)

    2009-06-01

    The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position-sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position-sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42 mm × 42 mm × 10 mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.
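
    A minimal sketch of the estimation idea under strong simplifying assumptions: if the measured centroid is an unbiased estimate of the planar position and the light-spread width shrinks with interaction depth, then depth can be recovered by maximizing a Gaussian likelihood of the measured width. The sigma_model depth dependence and all noise figures below are invented, not the detector model from the record.

```python
import numpy as np

def sigma_model(z, thickness=10.0):
    # Hypothetical monotone depth dependence of the light-spread width (mm)
    return 2.0 + 0.35 * (thickness - z)

def ml_position(cx, cy, sigma_meas, noise_s=0.3):
    """Centroids give (x, y) directly in this toy model; depth z is the value
    maximizing the Gaussian likelihood of the measured light-spread width."""
    zs = np.linspace(0.0, 10.0, 201)
    loglik = -(sigma_meas - sigma_model(zs)) ** 2 / (2 * noise_s ** 2)
    return cx, cy, zs[np.argmax(loglik)]

print(ml_position(cx=1.2, cy=-0.4, sigma_meas=3.4))   # depth ~6 mm in this model
```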

  7. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

    Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  8. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    Energy Technology Data Exchange (ETDEWEB)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, since it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE) avoids these biases.
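
    To make the Poisson-MLE point concrete, the sketch below fits a synthetic fluorescence-decay histogram by minimizing the Poisson negative log-likelihood directly with SciPy's general-purpose optimizer. It illustrates the estimator the record advocates, not the modified Levenberg-Marquardt update itself; the decay model and all parameter values are made up.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.arange(0, 25.0, 0.5)                      # bin centers (ns)
model = lambda p: p[0] * np.exp(-t / p[1]) + p[2]  # amplitude, lifetime, background
counts = rng.poisson(model([400.0, 3.0, 5.0]))   # synthetic Poisson histogram

def poisson_nll(p):
    mu = model(p)
    if np.any(mu <= 0):
        return np.inf
    # Up to the constant log(k!), the Poisson NLL is sum(mu - k*log(mu))
    return np.sum(mu - counts * np.log(mu))

fit = minimize(poisson_nll, x0=[300.0, 2.0, 1.0], method="Nelder-Mead")
print("MLE (A, tau, bg):", np.round(fit.x, 2))
```

A derivative-free simplex search is used here purely for brevity; the paper's contribution is precisely a gradient-based L-M-style update for this same objective, which converges much faster on large problems.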

  10. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    Science.gov (United States)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

  11. Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method … is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model…

  12. Blind Joint Maximum Likelihood Channel Estimation and Data Detection for SIMO Systems

    Institute of Scientific and Technical Information of China (English)

    Sheng Chen; Xiao-Chen Yang; Lei Chen; Lajos Hanzo

    2007-01-01

    A blind adaptive scheme is proposed for joint maximum likelihood (ML) channel estimation and data detection of single-input multiple-output (SIMO) systems. The joint ML optimisation over channel and data is decomposed into an iterative optimisation loop. An efficient global optimisation algorithm called the repeated weighted boosting search is employed at the upper level to optimally identify the unknown SIMO channel model, and the Viterbi algorithm is used at the lower level to produce the maximum likelihood sequence estimation of the unknown data sequence. A simulation example is used to demonstrate the effectiveness of this joint ML optimisation scheme for blind adaptive SIMO systems.
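
    The sketch below illustrates the two-level structure in a deliberately simplified setting (flat-fading SIMO link, BPSK, random initialization, no equalizer): it alternates a least-squares channel estimate given the current symbol decisions with ML symbol detection given the channel. The repeated weighted boosting search and the Viterbi algorithm of the record are not reproduced.

```python
# Alternating channel-estimation / symbol-detection loop for a toy SIMO link.
import numpy as np

rng = np.random.default_rng(6)
L, N = 4, 100                                 # receive antennas, block length
h_true = rng.standard_normal(L) + 1j * rng.standard_normal(L)
s_true = rng.choice([-1.0, 1.0], N)
Y = np.outer(h_true, s_true) + 0.2 * (rng.standard_normal((L, N))
                                      + 1j * rng.standard_normal((L, N)))

s = rng.choice([-1.0, 1.0], N)                # random initial symbol decisions
for _ in range(10):
    h = (Y @ s) / (s @ s)                     # LS channel estimate given symbols
    metric = np.real(h.conj() @ Y)            # matched-filter statistic per symbol
    s = np.where(metric >= 0, 1.0, -1.0)      # ML detection given the channel
print("bit agreement (up to a global sign):",
      max(np.mean(s == s_true), np.mean(s == -s_true)))
```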

  13. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

    Science.gov (United States)

    Gupta, N. K.; Mehra, R. K.

    1974-01-01

    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.

  14. Estimation of bias errors in measured airplane responses using maximum likelihood method

    Science.gov (United States)

    Klein, Vladislav; Morgan, Dan R.

    1987-01-01

    A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with a simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.

  15. Design of Simplified Maximum-Likelihood Receivers for Multiuser CPM Systems

    Directory of Open Access Journals (Sweden)

    Li Bing

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  16. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    Science.gov (United States)

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  17. Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB

    CERN Document Server

    Millar, Russell B

    2011-01-01

    This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statistical paradigm.

  18. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-02-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
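
    A sketch of the basic computation under simplifying assumptions (regularly sampled data, no missing epochs): build the covariance of a white-plus-power-law noise model from Hosking's fractional-difference filter and evaluate the Gaussian log-likelihood with a Cholesky factorization. This is the quantity an MLE routine would maximize over the noise amplitudes and spectral index; none of the record's efficiency tricks are reproduced, and the data series below is a placeholder.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def powerlaw_cov(n, alpha):
    # Impulse response h of a 1/f^(alpha/2) filter; K = T T^T, T lower Toeplitz
    h = np.ones(n)
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2) / k
    T = np.zeros((n, n))
    for i in range(n):
        T[i, : i + 1] = h[i::-1]
    return T @ T.T

def loglik(x, sig_w, sig_pl, alpha):
    n = len(x)
    C = sig_w ** 2 * np.eye(n) + sig_pl ** 2 * powerlaw_cov(n, alpha)
    cf = cho_factor(C, lower=True)
    logdet = 2 * np.sum(np.log(np.diag(cf[0])))
    return -0.5 * (logdet + x @ cho_solve(cf, x) + n * np.log(2 * np.pi))

rng = np.random.default_rng(2)
x = rng.standard_normal(300)            # placeholder data series
print(loglik(x, sig_w=1.0, sig_pl=0.5, alpha=1.0))
```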

  19. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    Science.gov (United States)

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  20. Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables

    Science.gov (United States)

    Song, Xin-Yuan; Lee, Sik-Yum

    2005-01-01

    In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…

  1. A note on the maximum likelihood estimator in the gamma regression model

    Directory of Open Access Journals (Sweden)

    Jerzy P. Rydlewski

    2009-01-01

    This paper considers a nonlinear regression model in which the dependent variable has the gamma distribution. A model is considered in which the shape parameter of the random variable is the sum of continuous and algebraically independent functions. The paper proves that there is exactly one maximum likelihood estimator for the gamma regression model.
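
    For readers who want to try a gamma regression MLE in practice, the snippet below fits a standard log-link gamma GLM with statsmodels on synthetic data. Note this is the textbook model, not the paper's specific shape-parameter formulation; the coefficients and data are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.uniform(0, 2, 300)
X = sm.add_constant(x)
mu = np.exp(0.3 + 0.8 * x)                  # log-link mean
y = rng.gamma(shape=2.0, scale=mu / 2.0)    # Gamma responses with mean mu

res = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(res.params)   # MLE of intercept and slope (true values 0.3, 0.8)
```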

  2. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    Science.gov (United States)

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  3. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  4. Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction

    Science.gov (United States)

    Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.

    2009-01-01

    There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality…

  5. Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.

    2003-01-01

    The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…

  6. The Performance of the Full Information Maximum Likelihood Estimator in Multiple Regression Models with Missing Data.

    Science.gov (United States)

    Enders, Craig K.

    2001-01-01

    Examined the performance of a recently available full information maximum likelihood (FIML) estimator in a multiple regression model with missing data using Monte Carlo simulation and considering the effects of four independent variables. Results indicate that FIML estimation was superior to that of three ad hoc techniques, with less bias and less…

  7. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    Science.gov (United States)

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  8. Maximum Likelihood based comparison of the specific growth rates for P. aeruginosa and four mutator strains

    DEFF Research Database (Denmark)

    Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard

    2008-01-01

    The specific growth rate for P. aeruginosa and four mutator strains mutT, mutY, mutM and mutY–mutM is estimated by a suggested Maximum Likelihood, ML, method which takes the autocorrelation of the observation into account. For each bacteria strain, six wells of optical density, OD, measurements a...

  9. Experimental demonstration of the maximum likelihood-based chromatic dispersion estimator for coherent receivers

    DEFF Research Database (Denmark)

    Borkowski, Robert; Johannisson, Pontus; Wymeersch, Henk;

    2014-01-01

    We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR) ...

  10. Maximum likelihood PSD estimation for speech enhancement in reverberant and noisy conditions

    DEFF Research Database (Denmark)

    Kuklasinski, Adam; Doclo, Simon; Jensen, Jesper

    2016-01-01

    We propose a novel Power Spectral Density (PSD) estimator for multi-microphone systems operating in reverberant and noisy conditions. The estimator is derived using the maximum likelihood approach and is based on a blocked and pre-whitened additive signal model. The intended application… the difference between algorithms was found to be statistically significant only in some of the experimental conditions…

  11. Asymptotic Properties of the Maximum Likelihood Estimate in Generalized Linear Models with Stochastic Regressors

    Institute of Scientific and Technical Information of China (English)

    Jie Li DING; Xi Ru CHEN

    2006-01-01

    For generalized linear models (GLM), in the case where the regressors are stochastic and have different distributions, the asymptotic properties of the maximum likelihood estimate (MLE) β̂_n of the parameters are studied. Under reasonable conditions, we prove the weak and strong consistency and the asymptotic normality of β̂_n.

  12. On the Loss of Information in Conditional Maximum Likelihood Estimation of Item Parameters.

    Science.gov (United States)

    Eggen, Theo J. H. M.

    2000-01-01

    Shows that the concept of F-information, a generalization of Fisher information, is a useful tool for evaluating the loss of information in conditional maximum likelihood (CML) estimation. With the F-information concept it is possible to investigate the conditions under which there is no loss of information in CML estimation and to quantify a loss…

  13. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua;

    2015-01-01

    … noise-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed…

  14. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    … insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease, where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used…

  15. On the Existence and Uniqueness of Maximum-Likelihood Estimates in the Rasch Model.

    Science.gov (United States)

    Fischer, Gerhard H.

    1981-01-01

    Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called "unconditional" and the "conditional" maximum-likelihood estimation equations in the dichotomous Rasch model are given. It is shown how to apply the results in practical uses of the Rasch model. (Author/JKS)

  16. Finding Quantitative Trait Loci Genes with Collaborative Targeted Maximum Likelihood Learning.

    Science.gov (United States)

    Wang, Hui; Rose, Sherri; van der Laan, Mark J

    2011-07-01

    Quantitative trait loci mapping is focused on identifying the positions and effects of genes underlying an observed trait. We present a collaborative targeted maximum likelihood estimator in a semi-parametric model, using a newly proposed 2-part super learning algorithm, to find quantitative trait loci genes in Listeria data. Results are compared to the parametric composite interval mapping approach.

  17. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can be assumed neither Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which several important distributions stem. Amongst these distributions, one is regarded as the universal model for speckled data, namely the 𝒢^0 law. This paper deals with amplitude data, so the 𝒢_A^0 distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments, and based on order statistics) of the parameters of the 𝒢_A^0 distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternated optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.

  18. A comparison of least squares and conditional maximum likelihood estimators under volume endpoint censoring in tumor growth experiments.

    Science.gov (United States)

    Roy Choudhury, Kingshuk; O'Sullivan, Finbarr; Kasman, Ian; Plowman, Greg D

    2012-12-20

    Measurements in tumor growth experiments are stopped once the tumor volume exceeds a preset threshold: a mechanism we term volume endpoint censoring. We argue that this type of censoring is informative. Further, least squares (LS) parameter estimates are shown to suffer a bias in a general parametric model for tumor growth with an independent and identically distributed measurement error, both theoretically and in simulation experiments. In a linear growth model, the magnitude of bias in the LS growth rate estimate increases with the growth rate and the standard deviation of measurement error. We propose a conditional maximum likelihood estimation procedure, which is shown both theoretically and in simulation experiments to yield approximately unbiased parameter estimates in linear and quadratic growth models. Both LS and maximum likelihood estimators have similar variance characteristics. In simulation studies, these properties appear to extend to the case of moderately dependent measurement error. The methodology is illustrated by application to a tumor growth study for an ovarian cancer cell line.

  19. Machine learning approximation techniques using dual trees

    OpenAIRE

    Ergashbaev, Denis

    2015-01-01

    This master's thesis explores a dual-tree framework as applied to a particular class of machine learning problems that are collectively referred to as generalized n-body problems. It builds a new algorithm on top of this framework and improves the existing Boosted OGE classifier.

  20. Bootstrap, Bayesian probability and maximum likelihood mapping: exploring new tools for comparative genome analyses

    Directory of Open Access Journals (Sweden)

    Gogarten J Peter

    2002-02-01

    Background Horizontal gene transfer (HGT) played an important role in shaping microbial genomes. In addition to genes under sporadic selection, HGT also affects housekeeping genes and those involved in information processing, even ribosomal RNA encoding genes. Here we describe tools that provide an assessment and graphic illustration of the mosaic nature of microbial genomes. Results We adapted Maximum Likelihood (ML) mapping to the analyses of all detected quartets of orthologous genes found in four genomes. We have automated the assembly and analyses of these quartets of orthologs given the selection of four genomes. We compared the ML-mapping approach to more rigorous Bayesian probability and Bootstrap mapping techniques. The latter two approaches appear to be more conservative than the ML-mapping approach, but qualitatively all three approaches give equivalent results. All three tools were tested on mitochondrial genomes, which presumably were inherited as a single linkage group. Conclusions In some instances of interphylum relationships we find nearly equal numbers of quartets strongly supporting the three possible topologies. In contrast, our analyses of genome quartets containing the cyanobacterium Synechocystis sp. indicate that a large part of the cyanobacterial genome is related to that of low GC Gram positives. Other groups that had been suggested as sister groups to the cyanobacteria contain many fewer genes that group with the Synechocystis orthologs. Interdomain comparisons of genome quartets containing the archaeon Halobacterium sp. revealed that Halobacterium sp. shares more genes with Bacteria that live in the same environment than with Bacteria that are more closely related based on rRNA phylogeny. Many of these genes encode proteins involved in substrate transport and metabolism and in information storage and processing. The performed analyses demonstrate that relationships among prokaryotes cannot be accurately…

  1. Tree-space statistics and approximations for large-scale analysis of anatomical trees

    DEFF Research Database (Denmark)

    Feragen, Aasa; Petersen, Jens; Winkler Wille, Mathilde Marie;

    2013-01-01

    parametrize the relevant parts of tree-space well. Using the developed approximate statistics, we illustrate how the structure and geometry of airway trees vary across a population and show that airway trees with Chronic Obstructive Pulmonary Disease come from a different distribution in tree-space than...

  2. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in

    2016-09-07

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
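
    The snippet below sketches the core trick used in likelihood-based state reconstruction to guarantee positivity: parametrize ρ = T†T / tr(T†T) with a triangular T, so every candidate is automatically a valid density matrix, and fit simulated single-qubit Pauli expectations under a Gaussian noise model. The measurement values and noise level are invented; this is a generic illustration, not the authors' NMR protocol.

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
PAULIS = [X, Y, Z]

def rho_from_params(p):
    # Cholesky-like parametrization: rho is positive with unit trace by construction
    T = np.array([[p[0], 0], [p[2] + 1j * p[3], p[1]]], complex)
    R = T.conj().T @ T
    return R / np.trace(R).real

def nll(p, data, sigma=0.05):
    # Gaussian negative log-likelihood (up to a constant) of measured expectations
    rho = rho_from_params(p)
    preds = [np.trace(rho @ P).real for P in PAULIS]
    return sum((d - m) ** 2 for d, m in zip(data, preds)) / (2 * sigma ** 2)

data = [0.62, -0.05, 0.71]          # noisy measured <X>, <Y>, <Z> (made up)
fit = minimize(nll, x0=[1.0, 1.0, 0.0, 0.0], args=(data,), method="Nelder-Mead")
rho = rho_from_params(fit.x)
print(np.round(rho, 3), "eigenvalues:", np.round(np.linalg.eigvalsh(rho), 3))
```

The printed eigenvalues are nonnegative by construction, which is exactly the property a naive linear-inversion reconstruction can violate.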

  3. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    Science.gov (United States)

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
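
    The report's computational pattern, repeated Newton-Raphson iterations on a system of likelihood equations, is illustrated below on a deliberately simple stand-in problem (logistic-regression MLE on synthetic data) rather than on the NDMMF equations themselves.

```python
# Newton-Raphson on the score equations of a logistic-regression likelihood.
import numpy as np

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(200), rng.standard_normal((200, 2))])
beta_true = np.array([-0.5, 1.0, 2.0])
y = rng.random(200) < 1 / (1 + np.exp(-X @ beta_true))

beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    score = X.T @ (y - p)                        # gradient of the log-likelihood
    hess = -(X * (p * (1 - p))[:, None]).T @ X   # Hessian (negative information)
    step = np.linalg.solve(hess, score)
    beta = beta - step                           # Newton update toward the MLE
    if np.max(np.abs(step)) < 1e-10:
        break
print("MLE:", np.round(beta, 3))
```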

  4. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    Science.gov (United States)

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.

  5. Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.

    Science.gov (United States)

    Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man

    2009-10-01

    In this paper, we carry out an in-depth theoretical investigation for existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975) both in the full data setting as well as in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.

  6. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    Science.gov (United States)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.

  7. Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood

    Directory of Open Access Journals (Sweden)

    Jesser J. Marulanda-Durango

    2012-12-01

    In this paper, we present a methodology for estimating the parameters of a model of an electric arc furnace by using maximum likelihood estimation, one of the most widely employed methods for parameter estimation in practical settings. The model for the electric arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open-source MATLAB® toolbox, for solving the set of non-linear algebraic equations that relates all the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken at the furnace's most critical operating point. We show how the model of the electric arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. Results show a maximum error of 5% in the current's root-mean-square value.

  8. LASER: A Maximum Likelihood Toolkit for Detecting Temporal Shifts in Diversification Rates From Molecular Phylogenies

    Directory of Open Access Journals (Sweden)

    Daniel L. Rabosky

    2006-01-01

    Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time against alternative models where rates have remained constant. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.

  9. Combined simplified maximum likelihood and sphere decoding algorithm for MIMO system

    Institute of Scientific and Technical Information of China (English)

    ZHANG Lei; YUAN Ting-ting; ZHANG Xin; YANG Da-cheng

    2008-01-01

    In this article, a new system model for the sphere decoding (SD) algorithm is introduced. For the multiple-input multiple-output (MIMO) system, a simplified maximum likelihood (SML) decoding algorithm is proposed based on the new model. The SML algorithm achieves optimal maximum likelihood (ML) performance and drastically reduces the complexity compared with the conventional SD algorithm. An improved algorithm is presented by combining the sphere decoding algorithm based on the Schnorr-Euchner strategy (SE-SD) with the SML algorithm when the number of transmit antennas exceeds 2. Compared with conventional SD, the proposed algorithm has low complexity, especially at low signal-to-noise ratio (SNR). It is shown by simulation that the proposed algorithm has performance very close to conventional SD.
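
    For scale, the brute-force ML detector that sphere decoding accelerates can be written in a few lines for a toy 2x2 QPSK link: enumerate all candidate symbol vectors and keep the one minimizing ||y - Hs||^2. All quantities below are synthetic, and no pruning is performed.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
s = rng.choice(QPSK, 2)                          # transmitted symbol vector
y = H @ s + 0.3 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

# Exhaustive ML search over all 4^2 candidates; sphere decoding returns the
# same minimizer while visiting only candidates inside a shrinking radius.
best = min((np.array(c) for c in itertools.product(QPSK, repeat=2)),
           key=lambda c: np.linalg.norm(y - H @ c) ** 2)
print("sent:", s, "\ndetected:", best)
```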

  10. Tree-fold loop approximation of AMD

    Energy Technology Data Exchange (ETDEWEB)

    Ono, Akira [Tohoku Univ., Sendai (Japan). Faculty of Science

    1997-05-01

    AMD (antisymmetrized molecular dynamics) is a framework that describes the wave function of a nucleon many-body system by a Slater determinant of Gaussian wave packets, and a theory that gives a unified description of a wide range of nuclear reactions, such as intermediate-energy heavy-ion reactions and nucleon-induced reactions. The aim of this study is to derive an approximation for the expectation value ν of the correlation terms that can be computed in a time proportional to A^3 (or lower), so as to make AMD applicable to heavier systems such as Au+Au. Since the characteristic features of AMD must not be destroyed, only the ν-value is approximated. However, for this approximation to be meaningful, its error has to be sufficiently small in comparison with the binding energy of the nucleus, that is, smaller than 1 MeV/nucleon. Since the absolute expectation value of the correlation may be larger than 50 MeV/nucleon, the approximation is required to have a high accuracy, within 2 percent. (G.K.)

  11. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    Science.gov (United States)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  12. Maximum-likelihood Estimation of 3D Event Position in Monolithic Scintillation Crystals: Experimental Results

    OpenAIRE

    Moore, S K; Hunter, W C J; Furenlid, L.R.; Barrett, H. H.

    2007-01-01

    We present a simple 3D event position-estimation method using raw list-mode acquisition and maximum-likelihood estimation in a modular gamma camera with a thick (25 mm) monolithic scintillation crystal. This method involves measuring 2D calibration scans with a well-collimated 511 keV source and fitting each point to a simple depth-dependent light distribution model. Preliminary results show that angled collimated beams appear properly reconstructed.

  13. Second order pseudo-maximum likelihood estimation and conditional variance misspecification

    OpenAIRE

    Lejeune, Bernard

    1997-01-01

    In this paper, we study the behavior of second order pseudo-maximum likelihood estimators under conditional variance misspecification. We determine sufficient and essentially necessary conditions for such an estimator to be, regardless of the conditional variance (mis)specification, consistent for the mean parameters when the conditional mean is correctly specified. These conditions imply that, even if mean and variance parameters vary independently, standard PML2 estimators are generally not…

  14. ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS

    Institute of Scientific and Technical Information of China (English)

    YUE LI; CHEN XIRU

    2005-01-01

    For the Generalized Linear Model (GLM), under some conditions, including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE attains its minimum (in the positive-definite sense) in the case that the specification of the covariance matrix is correct.

  15. On the rate of convergence of the maximum likelihood estimator of a k-monotone density

    Institute of Scientific and Technical Information of China (English)

    WELLNER Jon A

    2009-01-01

    Bounds for the bracketing entropy of the classes of bounded k-monotone functions on [0, A] are obtained under both the Hellinger distance and the L_p(Q) distance, where 1 ≤ p < ∞ and Q is a probability measure on [0, A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a k-monotone density.

  16. On the rate of convergence of the maximum likelihood estimator of a K-monotone density

    Institute of Scientific and Technical Information of China (English)

    GAO FuChang; WELLNER Jon A

    2009-01-01

    Bounds for the bracketing entropy of the classes of bounded K-monotone functions on [0, A] are obtained under both the Hellinger distance and the LP(Q) distance, where 1 ≤ p < ∞ and Q is a probability measure on [0, A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a K-monotone density.

  17. Statistical analysis of the Lognormal-Pareto distribution using Probability Weighted Moments and Maximum Likelihood

    OpenAIRE

    Marco Bee

    2012-01-01

    This paper deals with the estimation of the lognormal-Pareto and the lognormal-Generalized Pareto mixture distributions. The log-likelihood function is discontinuous, so that Maximum Likelihood Estimation is not asymptotically optimal. For this reason, we develop an alternative method based on Probability Weighted Moments. We show that the standard version of the method can be applied to the first distribution, but not to the latter. Thus, in the lognormal-Generalized Pareto case, we work ou…

  18. TURBO DECODER USING LOCAL SUBSIDIARY MAXIMUM LIKELIHOOD DECODING IN PRIOR ESTIMATION OF THE EXTRINSIC INFORMATION

    Institute of Scientific and Technical Information of China (English)

    Yang Fengfan

    2004-01-01

    A new technique for turbo decoders is proposed, using a local subsidiary maximum likelihood decoding and a family of probability distributions for the extrinsic information. The optimal distribution of the extrinsic information is dynamically specified for each component decoder. The simulation results show that the iterative decoder with the new technique outperforms the decoder with the traditional Gaussian approach to the extrinsic information under the same conditions.

  19. Phase noise investigation of maximum likelihood estimation method for airborne multibaseline SAR interferometry

    OpenAIRE

    Magnard, Christophe; Small, David; Meier, Erich

    2015-01-01

    The phase estimation of cross-track multibaseline synthetic aperture interferometric data is usually thought to be very efficiently achieved using the maximum likelihood (ML) method. The suitability of this method is investigated here as applied to airborne single pass multibaseline data. Experimental interferometric data acquired with a Ka-band sensor were processed using (a) a ML method that fuses the complex data from all receivers and (b) a coarse-to-fine method that only uses the interme...

  20. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    CERN Document Server

    Hall, Alex

    2016-01-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...

  1. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    Science.gov (United States)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement, with weights estimated from the data; however, it is still sensitive to IO and to a patch of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem and is hard to implement, but it is effective against both types of outliers presented here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.

  2. Maximum likelihood-based iterated divided difference filter for nonlinear systems from discrete noisy measurements.

    Science.gov (United States)

    Wang, Changyuan; Zhang, Jing; Mu, Jing

    2012-01-01

    A new filter named the maximum likelihood-based iterated divided difference filter (MLIDDF) is developed to improve the low state estimation accuracy of nonlinear state estimation due to large initial estimation errors and nonlinearity of measurement equations. The MLIDDF algorithm is derivative-free and implemented only by calculating the functional evaluations. The MLIDDF algorithm involves the use of the iteration measurement update and the current measurement, and the iteration termination criterion based on maximum likelihood is introduced in the measurement update step, so the MLIDDF is guaranteed to produce a sequence estimate that moves up the maximum likelihood surface. In a simulation, its performance is compared against that of the unscented Kalman filter (UKF), divided difference filter (DDF), iterated unscented Kalman filter (IUKF) and iterated divided difference filter (IDDF) both using a traditional iteration strategy. Simulation results demonstrate that the accumulated mean-square root error for the MLIDDF algorithm in position is reduced by 63% compared to that of UKF and DDF algorithms, and by 7% compared to that of IUKF and IDDF algorithms. The new algorithm thus has better state estimation accuracy and a fast convergence rate.

  3. Determination of lift and drag characteristics of Space Shuttle Orbiter using maximum likelihood estimation technique

    Science.gov (United States)

    Trujillo, B. M.

    1986-01-01

    This paper presents the technique and results of maximum likelihood estimation used to determine lift and drag characteristics of the Space Shuttle Orbiter. Maximum likelihood estimation uses measurable parameters to estimate nonmeasurable parameters. The nonmeasurable parameters for this case are elements of a nonlinear, dynamic model of the orbiter. The estimated parameters are used to evaluate a cost function that computes the differences between the measured and estimated longitudinal parameters. The case presented is a dynamic analysis. This places less restriction on pitching motion and can provide additional information about the orbiter such as lift and drag characteristics at conditions other than trim, instrument biases, and pitching moment characteristics. In addition, an output of the analysis is an estimate of the values for the individual components of lift and drag that contribute to the total lift and drag. The results show that maximum likelihood estimation is a useful tool for analysis of Space Shuttle Orbiter performance and is also applicable to parameter analysis of other types of aircraft.

  4. Maximum Likelihood-Based Iterated Divided Difference Filter for Nonlinear Systems from Discrete Noisy Measurements

    Directory of Open Access Journals (Sweden)

    Changyuan Wang

    2012-06-01

    A new filter named the maximum likelihood-based iterated divided difference filter (MLIDDF) is developed to improve the low state estimation accuracy of nonlinear state estimation due to large initial estimation errors and nonlinearity of measurement equations. The MLIDDF algorithm is derivative-free and implemented only by calculating the functional evaluations. The MLIDDF algorithm involves the use of the iteration measurement update and the current measurement, and the iteration termination criterion based on maximum likelihood is introduced in the measurement update step, so the MLIDDF is guaranteed to produce a sequence estimate that moves up the maximum likelihood surface. In a simulation, its performance is compared against that of the unscented Kalman filter (UKF), divided difference filter (DDF), iterated unscented Kalman filter (IUKF) and iterated divided difference filter (IDDF), both using a traditional iteration strategy. Simulation results demonstrate that the accumulated mean-square root error for the MLIDDF algorithm in position is reduced by 63% compared to that of UKF and DDF algorithms, and by 7% compared to that of IUKF and IDDF algorithms. The new algorithm thus has better state estimation accuracy and a fast convergence rate.

  5. A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.

    Science.gov (United States)

    Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming

    2008-01-01

    This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. ECG signals measured from wearable dry electrodes are notoriously susceptible to interference from respiration or the wearer's motion, so the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs a subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute-value operation to reduce the high-frequency noise contamination, and finally applies maximum likelihood estimation to estimate the R-R peak interval. A parameter derived as a byproduct of the maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm from the numerical power method to realize the subspace filter and apply the fast Fourier transform (FFT) technique for realization of the correlation computation, so that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system.

  6. Comparisons of Maximum Likelihood Estimates and Bayesian Estimates for the Discretized Discovery Process Model

    Institute of Scientific and Technical Information of China (English)

    GaoChunwen; XuJingzhen; RichardSinding-Larsen

    2005-01-01

    A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters, and his degree of uncertainty about the estimates, in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same North Sea data on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach markedly improved the overly pessimistic results and downward bias of the maximum likelihood procedure.

  7. A Nuclear Ribosomal DNA Phylogeny of Acer Inferred with Maximum Likelihood, Splits Graphs, and Motif Analysis of 606 Sequences

    Directory of Open Access Journals (Sweden)

    Guido W. Grimm

    2006-01-01

    Full Text Available The multi-copy internal transcribed spacer (ITS) region of nuclear ribosomal DNA is widely used to infer phylogenetic relationships among closely related taxa. Here we use maximum likelihood (ML) and splits graph analyses to extract phylogenetic information from ~600 mostly cloned ITS sequences, representing 81 species and subspecies of Acer, and both species of its sister Dipteronia. Additional analyses compared sequence motifs in Acer and several hundred Anacardiaceae, Burseraceae, Meliaceae, Rutaceae, and Sapindaceae ITS sequences in GenBank. We also assessed the effects of using smaller data sets of consensus sequences with ambiguity coding (accounting for within-species variation) instead of the full (partly redundant) original sequences. Neighbor-nets and bipartition networks were used to visualize conflict among character state patterns. Species clusters observed in the trees and networks largely agree with morphology-based classifications; of de Jong’s (1994) 16 sections, nine are supported in neighbor-net and bipartition networks, and ten by sequence motifs and the ML tree; of his 19 series, 14 are supported in networks, motifs, and the ML tree. Most nodes had higher bootstrap support with matrices of 105 or 40 consensus sequences than with the original matrix. Within-taxon ITS divergence did not differ between diploid and polyploid Acer, and there was little evidence of differentiated parental ITS haplotypes, suggesting that concerted evolution in Acer acts rapidly.

  8. MAXIMUM LIKELIHOOD SOURCE SEPARATION FOR FINITE IMPULSE RESPONSE MULTIPLE INPUT-MULTIPLE OUTPUT CHANNELS IN THE PRESENCE OF ADDITIVE NOISE

    Institute of Scientific and Technical Information of China (English)

    Kazi Takpaya; Wei Gang

    2003-01-01

    Blind identification-blind equalization for Finite Impulse Response (FIR) Multiple Input-Multiple Output (MIMO) channels can be reformulated as a problem of blind source separation. It has been shown that the blind identification via decorrelating sub-channels method can recover the input sources. The Blind Identification via Decorrelating Sub-channels (BIDS) algorithm first constructs a set of decorrelators, which decorrelate the output signals of the sub-channels, then estimates the channel matrix using the transfer functions of the decorrelators, and finally recovers the input signal using the estimated channel matrix. In this paper, a new approximation of the input sources for FIR-MIMO channels, based on the maximum likelihood source separation method, is proposed. The proposed method outperforms BIDS in the presence of additive white Gaussian noise.

  9. MLE's bias pathology, Model Updated Maximum Likelihood Estimates and Wallace's Minimum Message Length method

    OpenAIRE

    Yatracos, Yannis G.

    2013-01-01

    The inherent bias pathology of the maximum likelihood (ML) estimation method is confirmed for models with unknown parameters $\theta$ and $\psi$ when the MLE $\hat\psi$ is a function of the MLE $\hat\theta$. To reduce $\hat\psi$'s bias, the likelihood equation to be solved for $\psi$ is updated using the model for the data $Y$ in it. The model updated (MU) MLE, $\hat\psi_{MU}$, often reduces either totally or partially $\hat\psi$'s bias when estimating the shape parameter $\psi$. For the Pareto model $\hat...
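
    The bias pathology is easy to exhibit numerically for the Pareto shape parameter. The sketch below assumes a known scale of 1 and shows the classical upward bias E[psi_hat] = n*psi/(n-1); it illustrates the pathology only, not the paper's model-updated correction.

```python
# Small simulation of MLE bias for the Pareto shape parameter (known
# scale x_m = 1). For samples x_1..x_n the MLE is n / sum(log x_i),
# and E[psi_hat] = n*psi/(n-1) > psi, i.e., it is biased upward.
# This is not the paper's model-updated (MU) correction.
import numpy as np

rng = np.random.default_rng(1)
psi, n, reps = 2.0, 10, 100_000
# Pareto(psi) on [1, inf) by inverse-CDF sampling
x = (1.0 - rng.random((reps, n))) ** (-1.0 / psi)
psi_mle = n / np.log(x).sum(axis=1)
print(psi_mle.mean())                 # ~ n*psi/(n-1) = 2.222...
print((n - 1) / n * psi_mle.mean())   # debiased, ~ 2.0
```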

  10. Genetic algorithm-based wide-band deterministic maximum likelihood direction finding algorithm

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Wide-band direction finding is one of the hot and difficult tasks in array signal processing. This paper generalizes the narrow-band deterministic maximum likelihood direction finding algorithm to the wide-band case and thus constructs an objective function, then utilizes a genetic algorithm for nonlinear global optimization. The direction of arrival is estimated without preprocessing of the array data, so the algorithm eliminates the effect of pre-estimates on the final estimation. The algorithm is applied to a uniform linear array, and extensive simulation results demonstrate its efficacy. In the course of the simulations, we obtain the relation between the estimation error and the parameters of the genetic algorithm.

  11. Maximum Likelihood PSD Estimation for Speech Enhancement in Reverberation and Noise

    DEFF Research Database (Denmark)

    Kuklasinski, Adam; Doclo, Simon; Jensen, Søren Holdt

    2016-01-01

    In this contribution we focus on the problem of power spectral density (PSD) estimation from multiple microphone signals in reverberant and noisy environments. The PSD estimation method proposed in this paper is based on the maximum likelihood (ML) methodology. In particular, we derive a novel ML...... PSD estimation scheme that is suitable for sound scenes which besides speech and reverberation consist of an additional noise component whose second-order statistics are known. The proposed algorithm is shown to outperform an existing similar algorithm in terms of PSD estimation accuracy. Moreover...

  12. Maximum-Likelihood Detection for Energy-Efficient Timing Acquisition in NB-IoT

    OpenAIRE

    2016-01-01

    Initial timing acquisition in narrow-band IoT (NB-IoT) devices is done by detecting a periodically transmitted known sequence. The detection has to be done at lowest possible latency, because the RF-transceiver, which dominates downlink power consumption of an NB-IoT modem, has to be turned on throughout this time. Auto-correlation detectors show low computational complexity from a signal processing point of view at the price of a higher detection latency. In contrast a maximum likelihood cro...

  13. A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.

    Energy Technology Data Exchange (ETDEWEB)

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V.

    2014-09-01

    In this paper, we derive a new optimal change metric to be used in synthetic aperture radar (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarms (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.

  14. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards...... model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin...

  15. Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm.

    Science.gov (United States)

    Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z

    2007-08-15

    Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for maximum likelihood estimation is developed; it uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.
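
    The E/M alternation itself can be shown on a much simpler model. The sketch below runs EM on a two-component one-dimensional Gaussian mixture with closed-form M-steps; the paper's nonlinear random effects model and sampled E-step are substantially richer.

```python
# Minimal EM for a two-component 1-D Gaussian mixture, illustrating the
# E-step (posterior membership weights) and M-step (weighted ML updates).
# The paper's model is far richer; this shows only the E/M alternation.
import numpy as np

def em_gmm(x, iters=100):
    w = 0.5
    mu = np.array([x.min(), x.max()])
    sd = np.array([x.std(), x.std()])
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        p1 = w * np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
        p2 = (1 - w) * np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
        r = p1 / (p1 + p2)
        # M-step: closed-form weighted maximum likelihood updates
        w = r.mean()
        mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
        sd = np.array([np.sqrt(np.average((x - mu[0]) ** 2, weights=r)),
                       np.sqrt(np.average((x - mu[1]) ** 2, weights=1 - r))])
    return w, mu, sd

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 0.5, 600)])
print(em_gmm(x))   # mixture weight, means, standard deviations
```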

  16. Equalization of nonlinear transmission impairments by maximum-likelihood-sequence estimation in digital coherent receivers.

    Science.gov (United States)

    Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro

    2010-03-01

    We describe a successful introduction of maximum-likelihood-sequence estimation (MLSE) into digital coherent receivers together with finite-impulse response (FIR) filters in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.
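
    The core of an MLSE equalizer is a Viterbi recursion over channel states. The sketch below decodes BPSK through an assumed two-tap channel; it illustrates the trellis search only, not the authors' 20-Gbit/s QPSK coherent receiver.

```python
# Textbook Viterbi MLSE for BPSK over a known 2-tap FIR channel.
# Channel taps and noise level are assumed for illustration.
import numpy as np

def viterbi_mlse(y, h):
    syms = np.array([1.0, -1.0])
    cost = np.zeros(2)           # path metric per state (previous symbol)
    back = []                    # backpointers, one array per received sample
    for yk in y:                 # (the first sample's fictitious previous
        newc = np.empty(2)       #  symbol is ignored in this sketch)
        ptr = np.empty(2, dtype=int)
        for s in range(2):       # hypothesis for the current symbol
            # branch metrics over the two possible previous symbols
            m = cost + (yk - h[0] * syms[s] - h[1] * syms) ** 2
            ptr[s] = np.argmin(m)
            newc[s] = m[ptr[s]]
        back.append(ptr)
        cost = newc
    s = int(np.argmin(cost))     # best final state, then trace back
    idx = [s]
    for ptr in reversed(back[1:]):
        s = int(ptr[s])
        idx.append(s)
    return syms[np.array(idx[::-1])]

rng = np.random.default_rng(3)
h = np.array([1.0, 0.5])         # assumed channel taps
bits = rng.choice([1.0, -1.0], size=200)
y = np.convolve(bits, h)[:200] + 0.3 * rng.standard_normal(200)
print(np.mean(viterbi_mlse(y, h) != bits))   # symbol error rate, ~0
```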

  17. Adaptive speckle reduction of ultrasound images based on maximum likelihood estimation

    Institute of Scientific and Technical Information of China (English)

    Xu Liu(刘旭); Yongfeng Huang(黄永锋); Wende Shou(寿文德); Tao Ying(应涛)

    2004-01-01

    A method is developed in this paper to achieve effective speckle reduction in medical ultrasound images. To exploit full knowledge of the speckle distribution, maximum likelihood is used to estimate the speckle parameters corresponding to its statistical model. The results are then incorporated into nonlinear anisotropic diffusion to achieve adaptive speckle reduction. Verified with simulated and ultrasound images, we show that this algorithm is capable of enhancing features of clinical interest and reduces speckle noise more efficiently than classical filters alone. To avoid edge contributions, changes in the contrast-to-noise ratio of different regions are also compared to investigate the performance of this approach.

  18. Maximum-Likelihood Approach to Topological Charge Fluctuations in Lattice Gauge Theory

    CERN Document Server

    Brower, R C; Fleming, G T; Lin, M F; Neil, E T; Osborn, J C; Rebbi, C; Rinaldi, E; Schaich, D; Schroeder, C; Voronov, G; Vranas, P; Weinberg, E; Witzel, O

    2014-01-01

    We present a novel technique for the determination of the topological susceptibility (related to the variance of the distribution of global topological charge) from lattice gauge theory simulations, based on maximum-likelihood analysis of the Markov-chain Monte Carlo time series. This technique is expected to be particularly useful in situations where relatively few tunneling events are observed. Restriction to a lattice subvolume on which topological charge is not quantized is explored, and may lead to further improvement when the global topology is poorly sampled. We test our proposed method on a set of lattice data, and compare it to traditional methods.

  19. %lrasch_mml: A SAS Macro for Marginal Maximum Likelihood Estimation in Longitudinal Polytomous Rasch Models

    Directory of Open Access Journals (Sweden)

    Maja Olsbjerg

    2015-10-01

    Full Text Available Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when the focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.

  20. Community detection in networks: Modularity optimization and maximum likelihood are equivalent

    CERN Document Server

    Newman, M E J

    2016-01-01

    We demonstrate an exact equivalence between two widely used methods of community detection in networks, the method of modularity maximization in its generalized form which incorporates a resolution parameter controlling the size of the communities discovered, and the method of maximum likelihood applied to the special case of the stochastic block model known as the planted partition model, in which all communities in a network are assumed to have statistically similar properties. Among other things, this equivalence provides a mathematically principled derivation of the modularity function, clarifies the conditions and assumptions of its use, and gives an explicit formula for the optimal value of the resolution parameter.
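
    For reference, the equivalence can be stated compactly. The notation below (adjacency A_ij, degrees k_i, 2m edge ends, group labels g_i, planted-partition rates omega_in and omega_out) is the standard notation of this literature and is assumed rather than quoted from the paper.

```latex
% Generalized modularity with resolution parameter \gamma (standard notation):
Q(\gamma) = \frac{1}{2m}\sum_{ij}\left(A_{ij} - \gamma\,\frac{k_i k_j}{2m}\right)\delta_{g_i g_j}
% Maximizing the planted-partition log-likelihood over the group labels g_i
% is equivalent to maximizing Q(\gamma) at the particular resolution
\gamma = \frac{\omega_{\mathrm{in}} - \omega_{\mathrm{out}}}{\ln\omega_{\mathrm{in}} - \ln\omega_{\mathrm{out}}}
```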

  1. Maximum likelihood based multi-channel isotropic reverberation reduction for hearing aids

    DEFF Research Database (Denmark)

    Kuklasiński, Adam; Doclo, Simon; Jensen, Søren Holdt;

    2014-01-01

    We propose a multi-channel Wiener filter for speech dereverberation in hearing aids. The proposed algorithm uses joint maximum likelihood estimation of the speech and late reverberation spectral variances, under the assumption that the late reverberant sound field is cylindrically isotropic....... The dereverberation performance of the algorithm is evaluated using computer simulations with realistic hearing aid microphone signals including head-related effects. The algorithm is shown to work well with signals reverberated both by synthetic and by measured room impulse responses, achieving improvements...

  2. Phase Noise Investigation of Maximum Likelihood Estimation Method for Airborne Multibaseline SAR Interferometry

    Science.gov (United States)

    Magnard, C.; Small, D.; Meier, E.

    2015-03-01

    The phase estimation of cross-track multibaseline synthetic aperture interferometric data is usually thought to be very efficiently achieved using the maximum likelihood (ML) method. The suitability of this method is investigated here as applied to airborne single pass multibaseline data. Experimental interferometric data acquired with a Ka-band sensor were processed using (a) a ML method that fuses the complex data from all receivers and (b) a coarse-to-fine method that only uses the intermediate baselines to unwrap the phase values from the longest baseline. The phase noise was analyzed for both methods: in most cases, a small improvement was found when the ML method was used.

  3. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    Science.gov (United States)

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-02-09

    The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. The choice of estimator may affect how well the Nakagami parameter detects changes in backscattered statistics. In particular, the moment-based estimator (MBE) and the maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw), for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization.
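
    The two families of estimators can be sketched using the fact that the squared envelope of a Nakagami(m, Omega) variate is gamma distributed with shape m. The closed form below is the classical Greenwood-Durand-style approximation to the gamma-shape likelihood equation, shown here as an assumed stand-in for the paper's MLE approximations.

```python
# Sketch of moment-based (MBE) vs. approximate ML Nakagami-m estimation.
# Uses the fact that y = r**2 for Nakagami(m, Omega) envelopes r is
# Gamma(shape=m, scale=Omega/m). The closed form below is the classical
# Greenwood-Durand-style approximation of the gamma-shape MLE, used as
# an assumed stand-in for the paper's MLE1/MLE2/MLEgw variants.
import numpy as np

def nakagami_m_mbe(r):
    # inverse normalized variance (moment-based) estimator
    y = r ** 2
    return y.mean() ** 2 / y.var()

def nakagami_m_mle_approx(r):
    y = r ** 2
    s = np.log(y.mean()) - np.log(y).mean()   # >= 0 by Jensen's inequality
    return (3.0 - s + np.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)

rng = np.random.default_rng(4)
m, omega, n = 1.5, 1.0, 5000
r = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=n))
print(nakagami_m_mbe(r), nakagami_m_mle_approx(r))   # both ~1.5
```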

  4. Estimating sampling error of evolutionary statistics based on genetic covariance matrices using maximum likelihood.

    Science.gov (United States)

    Houle, D; Meyer, K

    2015-08-01

    We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest.
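
    The REML-MVN idea itself is compact: draw parameter vectors from a normal distribution centered on the REML estimate with the inverse-information covariance, and propagate the draws through any function of G. The sketch below uses a hypothetical two-trait G and an assumed sampling covariance.

```python
# Minimal sketch of the REML-MVN idea: sample G matrices from a
# multivariate normal centered on the REML estimate, with covariance
# given by the inverse information matrix, then propagate the draws
# through any function of G. The 2x2 G and its sampling covariance
# below are hypothetical, not values from the paper.
import numpy as np

rng = np.random.default_rng(5)
g_hat = np.array([1.0, 0.3, 0.5])            # vech(G): [G11, G12, G22]
samp_cov = np.diag([0.02, 0.01, 0.015])      # inverse-information (assumed)
draws = rng.multivariate_normal(g_hat, samp_cov, size=10_000)

def leading_eigenvalue(vech):                # example function of G
    g = np.array([[vech[0], vech[1]], [vech[1], vech[2]]])
    return np.linalg.eigvalsh(g).max()

vals = np.apply_along_axis(leading_eigenvalue, 1, draws)
print(vals.mean(), vals.std())               # sampling error of the statistic
```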

  5. Maximum-Likelihood Semiblind Equalization of Doubly Selective Channels Using the EM Algorithm

    Directory of Open Access Journals (Sweden)

    Gideon Kutz

    2010-01-01

    Full Text Available Maximum-likelihood semi-blind joint channel estimation and equalization for doubly selective channels and single-carrier systems is proposed. We model the doubly selective channel as an FIR filter where each filter tap is modeled as a linear combination of basis functions. This channel description is then integrated in an iterative scheme based on the expectation-maximization (EM) principle that converges to the channel description vector estimate. We discuss the selection of the basis functions and compare various function sets. To alleviate the problem of convergence to a local maximum, we propose an initialization scheme for the EM iterations based on a small number of pilot symbols. We further derive a pilot positioning scheme targeted at reducing the probability of convergence to a local maximum. Our pilot positioning analysis reveals that for high Doppler rates it is better to spread the pilots evenly throughout the data block (and not to group them) even for frequency-selective channels. The resulting equalization algorithm is shown to be superior to previously proposed equalization schemes and to perform in many cases close to the maximum-likelihood equalizer with perfect channel knowledge. Our proposed method is also suitable for coded systems and as a building block for Turbo equalization algorithms.

  6. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    [1] McCullagh, P., Nelder, J. A., Generalized Linear Models, New York: Chapman and Hall, 1989.
    [2] Wedderburn, R. W. M., Quasi-likelihood functions, generalized linear models and the Gauss-Newton method, Biometrika, 1974, 61: 439-447.
    [3] Fahrmeir, L., Maximum likelihood estimation in misspecified generalized linear models, Statistics, 1990, 21: 487-502.
    [4] Fahrmeir, L., Kaufmann, H., Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models, Ann. Statist., 1985, 13: 342-368.
    [5] Nelder, J. A., Pregibon, D., An extended quasi-likelihood function, Biometrika, 1987, 74: 221-232.
    [6] Bennett, G., Probability inequalities for the sum of independent random variables, JASA, 1962, 57: 33-45.
    [7] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974.
    [8] Petrov, V. V., Sums of Independent Random Variables, Berlin, New York: Springer-Verlag, 1975.

  7. Benefits of maximum likelihood estimators for fracture attribute analysis: Implications for permeability and up-scaling

    Science.gov (United States)

    Rizzo, R. E.; Healy, D.; De Siena, L.

    2017-02-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in rocks, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture lengths and apertures are fundamental to estimating bulk permeability and therefore fluid flow, especially for rocks with low primary porosity, where most of the flow takes place within fractures. We collected outcrop data from a fractured upper Miocene biosiliceous mudstone formation (California, USA) which exhibits seepage of bitumen-rich fluids through the fractures. The dataset was analysed using maximum likelihood estimators to extract the underlying scaling parameters, and we found a log-normal distribution to be the best representative statistic for both fracture lengths and apertures in the study area. By applying maximum likelihood estimators to outcrop fracture data, we generate fracture network models with the same statistical attributes as those observed in outcrop, from which we can achieve more robust predictions of bulk permeability.
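
    For a log-normal attribute distribution the maximum likelihood estimates are available in closed form: the mean and (1/n-normalized) standard deviation of the log-values. A minimal sketch on synthetic lengths follows; all values are illustrative.

```python
# Closed-form maximum likelihood estimates for a log-normal distribution
# of fracture lengths: mu and sigma are simply the mean and standard
# deviation of the log-lengths. Synthetic data; values are illustrative.
import numpy as np

rng = np.random.default_rng(6)
lengths = rng.lognormal(mean=0.5, sigma=0.8, size=1000)  # metres, assumed

log_l = np.log(lengths)
mu_hat = log_l.mean()
sigma_hat = log_l.std()          # MLE uses the 1/n (ddof=0) variance
print(mu_hat, sigma_hat)         # ~0.5, ~0.8
```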

  8. A maximum likelihood approach to estimating articulator positions from speech acoustics

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-09-23

    This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read. 34 refs., 7 figs.

  9. Applications of non-standard maximum likelihood techniques in energy and resource economics

    Science.gov (United States)

    Moeltner, Klaus

    Two important types of non-standard maximum likelihood techniques, Simulated Maximum Likelihood (SML) and Pseudo-Maximum Likelihood (PML), have only recently found consideration in the applied economic literature. The objective of this thesis is to demonstrate how these methods can be successfully employed in the analysis of energy and resource models. Chapter I focuses on SML. It constitutes the first application of this technique in the field of energy economics. The framework is as follows: Surveys on the cost of power outages to commercial and industrial customers usually capture multiple observations on the dependent variable for a given firm. The resulting pooled data set is censored and exhibits cross-sectional heterogeneity. We propose a model that addresses these issues by allowing regression coefficients to vary randomly across respondents and by using the Geweke-Hajivassiliou-Keane simulator and Halton sequences to estimate high-order cumulative distribution terms. This adjustment requires the use of SML in the estimation process. Our framework allows for a more comprehensive analysis of outage costs than existing models, which rely on the assumptions of parameter constancy and cross-sectional homogeneity. Our results strongly reject both of these restrictions. The central topic of the second Chapter is the use of PML, a robust estimation technique, in count data analysis of visitor demand for a system of recreation sites. PML has been popular with researchers in this context, since it guards against many types of mis-specification errors. We demonstrate, however, that estimation results will generally be biased even if derived through PML if the recreation model is based on aggregate, or zonal data. To countervail this problem, we propose a zonal model of recreation that captures some of the underlying heterogeneity of individual visitors by incorporating distributional information on per-capita income into the aggregate demand function. This adjustment

  10. Tree-space statistics and approximations for large-scale analysis of anatomical trees.

    Science.gov (United States)

    Feragen, Aasa; Owen, Megan; Petersen, Jens; Wille, Mathilde M W; Thomsen, Laura H; Dirksen, Asger; de Bruijne, Marleen

    2013-01-01

    Statistical analysis of anatomical trees is hard to perform due to differences in the topological structure of the trees. In this paper we define statistical properties of leaf-labeled anatomical trees with geometric edge attributes by considering the anatomical trees as points in the geometric space of leaf-labeled trees. This tree-space is a geodesic metric space where any two trees are connected by a unique shortest path, which corresponds to a tree deformation. However, tree-space is not a manifold, and the usual strategy of performing statistical analysis in a tangent space and projecting onto tree-space is not available. Using tree-space and its shortest paths, a variety of statistical properties, such as mean, principal component, hypothesis testing and linear discriminant analysis can be defined. For some of these properties it is still an open problem how to compute them; others (like the mean) can be computed, but efficient alternatives are helpful in speeding up algorithms that use means iteratively, like hypothesis testing. In this paper, we take advantage of a very large dataset (N = 8016) to obtain computable approximations, under the assumption that the data trees parametrize the relevant parts of tree-space well. Using the developed approximate statistics, we illustrate how the structure and geometry of airway trees vary across a population and show that airway trees with Chronic Obstructive Pulmonary Disease come from a different distribution in tree-space than healthy ones. Software is available from http://image.diku.dk/aasa/software.php.

  11. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation

    Directory of Open Access Journals (Sweden)

    Shu Cai

    2016-12-01

    Full Text Available Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, it has a higher spatial resolution compared to existing methods based on the ML criterion.

  12. Regularization of constrained maximum likelihood iterative algorithms by means of statistical stopping rule

    CERN Document Server

    Benvenuto, Federico

    2012-01-01

    In this paper we propose a new statistical stopping rule for constrained maximum likelihood iterative algorithms applied to ill-posed inverse problems. To this aim, we extend the definition of Tikhonov regularization in a statistical framework and prove that the application of the proposed stopping rule to the Iterative Space Reconstruction Algorithm (ISRA) in the Gaussian case and to Expectation Maximization (EM) in the Poisson case leads to well-defined regularization methods according to the given definition. We also prove that, if an inverse problem is genuinely ill-posed in the sense of Tikhonov, the same definition is not satisfied when ISRA and EM are optimized by classical stopping rules such as Morozov's discrepancy principle, Pearson's test and the Poisson discrepancy principle. The stopping rule is illustrated in the case of image reconstruction from data recorded by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). First, by using a simulated image consisting of structures analogous to those ...

  13. On the maximum likelihood training of gradient-enhanced spatial Gaussian processes

    DEFF Research Database (Denmark)

    Zimmermann, Ralf

    2013-01-01

    Spatial Gaussian processes, alias spatial linear models or Kriging estimators, are a powerful and well-established tool for the design and analysis of computer experiments in a multitude of engineering applications. A key challenge in constructing spatial Gaussian processes is the training...... of the predictor by numerically optimizing its associated maximum likelihood function depending on so-called hyper-parameters. This is well understood for standard Kriging predictors, i.e., without considering derivative information. For gradient-enhanced Kriging predictors it is an open question of whether...... the gradient-enhanced predictor’s likelihood function. The proof works by computational rather than probabilistic arguments and exposes as a secondary effect the connection between the direct and the indirect approach to gradient-enhanced Kriging, both of which are widely used in applications...

  14. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    Science.gov (United States)

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
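
    The closing step, finding the closest probability distribution in Euclidean distance, is the classical projection onto the probability simplex. The sketch below is the common sort-based O(n log n) variant, an assumed stand-in for the paper's linear-time method.

```python
# Euclidean projection of a real vector (here summing to one, as for
# eigenvalues of a unit-trace candidate matrix) onto the probability
# simplex -- the "closest probability distribution" step. This is the
# common sort-based O(n log n) variant; the paper describes a
# linear-time method, so treat this as an illustrative stand-in.
import numpy as np

def project_simplex(v):
    u = np.sort(v)[::-1]                       # sort descending
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
    tau = (1.0 - css[rho]) / (rho + 1)         # uniform shift
    return np.maximum(v + tau, 0.0)            # shift and clip

v = np.array([0.6, 0.5, -0.2, 0.1])            # e.g. noisy eigenvalues
p = project_simplex(v)
print(p, p.sum())                              # nonnegative, sums to 1
```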

  15. Maximum Likelihood A Priori Knowledge Interpolation-Based Handset Mismatch Compensation for Robust Speaker Identification

    Institute of Scientific and Technical Information of China (English)

    LIAO Yuanfu; ZHUANG Zhixian; YANG Jyhher

    2008-01-01

    Unseen handset mismatch is the major source of performance degradation in speaker identification in telecommunication environments. To alleviate the problem, a maximum likelihood a priori knowledge interpolation (ML-AKI)-based handset mismatch compensation approach is proposed. It first collects a set of handset characteristics of seen handsets to use as the a priori knowledge for representing the space of handsets. During evaluation, the characteristics of an unknown test handset are optimally estimated by interpolation from the set of a priori knowledge. Experimental results on the HTIMIT database show that the ML-AKI method can improve the average speaker identification rate from 60.0% to 74.6% as compared with conventional maximum a posteriori-adapted Gaussian mixture models. The proposed ML-AKI method is a promising method for robust speaker identification.

  16. Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters

    CERN Document Server

    Aguglia, D

    2014-01-01

    This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent model parameter estimation of high-voltage pulse transformers. Such transformers are widely used in the pulsed-power domain, and the difficulty of deriving optimal control strategies for pulsed-power converters is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account the electric field energies represented by stray capacitances in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence the overall converter performance. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim...

  17. An extended-source spatial acquisition process based on maximum likelihood criterion for planetary optical communications

    Science.gov (United States)

    Yan, Tsun-Yee

    1992-01-01

    This paper describes an extended-source spatial acquisition process based on the maximum likelihood criterion for interplanetary optical communications. The objective is to use the sun-lit Earth image as a receiver beacon and point the transmitter laser to the Earth-based receiver to establish a communication path. The process assumes the existence of a reference image. The uncertainties between the reference image and the received image are modeled as additive white Gaussian disturbances. It has been shown that the optimal spatial acquisition requires solving two nonlinear equations to estimate the coordinates of the transceiver from the received camera image in the transformed domain. The optimal solution can be obtained iteratively by solving two linear equations. Numerical results using a sample sun-lit Earth as a reference image demonstrate that sub-pixel resolutions can be achieved in a high disturbance environment. Spatial resolution is quantified by Cramer-Rao lower bounds.

  18. Efficient Maximum Likelihood Estimation of a 2-D Complex Sinusoidal Based on Barycentric Interpolation

    CERN Document Server

    Selva, J

    2011-01-01

    This paper presents an efficient method to compute the maximum likelihood (ML) estimate of the parameters of a complex 2-D sinusoid, with the complexity order of the FFT. The method is based on an accurate barycentric formula for interpolating band-limited signals, and on the fact that the ML cost function can be viewed as a signal of this type if the time and frequency variables are switched. The method consists of first computing the DFT of the data samples, and then locating the maximum of the cost function by means of Newton's algorithm. The complexity of the latter step is small and independent of the data size, since it makes use of the barycentric formula to obtain the values of the cost function and its derivatives. Thus, the total complexity order is that of the FFT. The method is validated in a numerical example.
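
    The two-step structure, a coarse FFT search followed by Newton refinement of the ML cost, can be sketched in one dimension. Here the cost derivatives are evaluated directly, which costs O(N) per iteration; the paper's barycentric formula is precisely what removes that dependence on the data size.

```python
# 1-D sketch of the two-step ML tone frequency estimator: coarse FFT
# peak, then Newton refinement of the periodogram J(f) = |A(f)|**2 with
# A(f) = sum_n x_n exp(-2j*pi*f*n). Derivatives are evaluated directly
# (O(N) per step), unlike the paper's O(1) barycentric interpolation.
import numpy as np

def tone_freq(x, newton_iters=5):
    n = np.arange(len(x))
    f = np.argmax(np.abs(np.fft.fft(x))) / len(x)    # coarse FFT estimate
    for _ in range(newton_iters):
        e = np.exp(-2j * np.pi * f * n)
        a = np.sum(x * e)
        a1 = np.sum(x * e * (-2j * np.pi * n))        # dA/df
        a2 = np.sum(x * e * (-2j * np.pi * n) ** 2)   # d2A/df2
        j1 = 2.0 * np.real(a1 * np.conj(a))           # J'(f)
        j2 = 2.0 * np.real(a2 * np.conj(a)) + 2.0 * np.abs(a1) ** 2
        f -= j1 / j2                                  # Newton step to the peak
    return f

rng = np.random.default_rng(7)
t = np.arange(256)
x = np.exp(2j * np.pi * 0.1234 * t) + 0.1 * (rng.standard_normal(256)
                                             + 1j * rng.standard_normal(256))
print(tone_freq(x))   # ~0.1234
```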

  19. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    Science.gov (United States)

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.

  20. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    Science.gov (United States)

    Meyer, Karin

    2007-11-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from (http://agbu.une.edu.au/~kmeyer/wombat.html).

  1. A New Maximum-Likelihood Technique for Reconstructing Cosmic-Ray Anisotropy at All Angular Scales

    CERN Document Server

    Ahlers, Markus; Desiati, Paolo; Díaz-Vélez, Juan Carlos; Fiorino, Daniel W; Westerhoff, Stefan

    2016-01-01

    The arrival directions of TeV-PeV cosmic rays show weak but significant anisotropies with relative intensities at the level of one per mille. Due to the smallness of the anisotropies, quantitative studies require careful disentanglement of detector effects from the observation. We discuss an iterative maximum-likelihood reconstruction that simultaneously fits cosmic ray anisotropies and detector acceptance. The method does not rely on detector simulations and provides an optimal anisotropy reconstruction for ground-based cosmic ray observatories located in the middle latitudes. It is particularly well suited to the recovery of the dipole anisotropy, which is a crucial observable for the study of cosmic ray diffusion in our Galaxy. We also provide general analysis methods for recovering large- and small-scale anisotropies that take into account systematic effects of the observation by ground-based detectors.

  2. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely...... focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual......-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures...

  3. Estimation of Road Vehicle Speed Using Two Omnidirectional Microphones: A Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    López-Valcarce Roberto

    2004-01-01

    Full Text Available We address the problem of estimating the speed of a road vehicle from its acoustic signature, recorded by a pair of omnidirectional microphones located next to the road. This choice of sensors is motivated by their nonintrusive nature as well as low installation and maintenance costs. A novel estimation technique is proposed, which is based on the maximum likelihood principle. It directly estimates car speed without any assumptions on the acoustic signal emitted by the vehicle. This has the advantages of bypassing troublesome intermediate delay estimation steps as well as eliminating the need for an accurate yet general enough acoustic traffic model. An analysis of the estimate for narrowband and broadband sources is provided and verified with computer simulations. The estimation algorithm uses a bank of modified crosscorrelators and therefore it is well suited to DSP implementation, performing well with preliminary field data.

  4. Joint maximum likelihood estimation of carrier and sampling frequency offsets for OFDM systems

    CERN Document Server

    Kim, Y H

    2010-01-01

    In orthogonal-frequency division multiplexing (OFDM) systems, carrier and sampling frequency offsets (CFO and SFO, respectively) can destroy the orthogonality of the subcarriers and degrade system performance. In the literature, Nguyen-Le, Le-Ngoc, and Ko proposed a simple maximum-likelihood (ML) scheme using two long training symbols for estimating the initial CFO and SFO of a recursive least-squares (RLS) estimation scheme. However, the results of Nguyen-Le's ML estimation show poor performance relative to the Cramer-Rao bound (CRB). In this paper, we extend Moose's CFO estimation algorithm to joint ML estimation of CFO and SFO using two long training symbols. In particular, we derive CRBs for the mean square errors (MSEs) of CFO and SFO estimation. Simulation results show that the proposed ML scheme provides better performance than Nguyen-Le's ML scheme.
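
    Moose's estimator, which this letter extends, has a one-line closed form: the normalized CFO (in subcarrier spacings) is the angle of the correlation between the two repeated training symbols. Parameters below are illustrative; the paper's joint CFO/SFO scheme goes further.

```python
# Moose-style CFO estimation from two identical training symbols of
# length N: since r2[n] = r1[n] * exp(2j*pi*eps), the ML estimate of
# the normalized CFO eps is the correlation angle over 2*pi. FFT size,
# CFO and noise level are assumed; unambiguous only for |eps| < 0.5.
import numpy as np

rng = np.random.default_rng(8)
nfft, eps = 64, 0.13                       # assumed FFT size and CFO
s = np.exp(2j * np.pi * rng.random(nfft))  # one training symbol
n = np.arange(2 * nfft)
r = np.tile(s, 2) * np.exp(2j * np.pi * eps * n / nfft)
r += 0.05 * (rng.standard_normal(2 * nfft) + 1j * rng.standard_normal(2 * nfft))

r1, r2 = r[:nfft], r[nfft:]
eps_hat = np.angle(np.sum(r2 * np.conj(r1))) / (2 * np.pi)
print(eps_hat)   # ~0.13
```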

  5. Level spacing of U(5) \\leftrightarrow SO(6) transitional region with maximum likelihood estimation method

    CERN Document Server

    Jafarizadeh, M A; Sabric, H; Malekic, B Rashidian

    2011-01-01

    In this paper, a systematic study of the quantum phase transition within the U(5) \leftrightarrow SO(6) limits is presented in terms of an infinite-dimensional algebraic technique in the IBM framework. Energy level statistics are investigated with the maximum likelihood estimation (MLE) method in order to characterize the transitional region. Eigenvalues of these systems are obtained by solving Bethe-Ansatz equations, with least-squares fitting to experimental data to obtain the constants of the Hamiltonian. Our results verify the dependence of the nearest neighbor spacing distribution (NNSD) parameter on the control parameter (c_{s}) and also display the chaotic behavior of transitional regions in comparison with both limits. In order to compare our results for the two limits with both GUE and GOE ensembles, we have suggested a new NNSD distribution and have obtained better KLD distances for the new distribution compared with others in both limits. Also, in the case of N\to\infty, the total boson number dependence displays the univ...

  6. Plotting positions via maximum-likelihood for a non-standard situation

    Directory of Open Access Journals (Sweden)

    D. A. Jones

    1997-01-01

    Full Text Available A new approach is developed for the specification of the plotting positions used in the frequency analysis of extreme flows, rainfalls or similar data. The approach is based on the concept of maximum likelihood estimation and it is applied here to provide plotting positions for a range of problems which concern non-standard versions of annual-maximum data. This range covers the inclusion of incomplete years of data and also the treatment of cases involving regional maxima, where the number of sites considered varies from year to year. These problems, together with a not-to-be-recommended approach to using historical information, can be treated as special cases of a non-standard situation in which observations arise from different statistical distributions which vary in a simple, known, way.

  7. Galaxy and Mass Assembly (GAMA): maximum likelihood determination of the luminosity function and its evolution

    CERN Document Server

    Loveday, J; Baldry, I K; Bland-Hawthorn, J; Brough, S; Brown, M J I; Driver, S P; Kelvin, L S; Phillipps, S

    2015-01-01

    We describe modifications to the joint stepwise maximum likelihood method of Cole (2011) in order to simultaneously fit the GAMA-II galaxy luminosity function (LF), corrected for radial density variations, and its evolution with redshift. The whole sample is reasonably well fit with luminosity (Qe) and density (Pe) evolution parameters (Qe, Pe) = (1.0, 1.0), but with significant degeneracies characterized by Qe = 1.4 - 0.4Pe. Blue galaxies exhibit larger luminosity density evolution than red galaxies, as expected. We present the evolution-corrected r-band LF for the whole sample and for blue and red sub-samples, using both Petrosian and Sersic magnitudes. Petrosian magnitudes miss a substantial fraction of the flux of de Vaucouleurs profile galaxies: the Sersic LF is substantially higher than the Petrosian LF at the bright end.

  8. Maximum-likelihood detection based on branch and bound algorithm for MIMO systems

    Institute of Scientific and Technical Information of China (English)

    LI Zi; CAI YueMing

    2008-01-01

    Maximum likelihood detection for MIMO systems can be formulated as an integer quadratic programming problem. In this paper, we introduce a depth-first branch and bound algorithm with variable dichotomy into MIMO detection. More nodes may be pruned with this structure. At each stage of the branch and bound algorithm, an active set algorithm is adopted to solve the dual subproblem. In order to reduce the complexity further, a Cholesky factorization update is presented to solve the linear system at each iteration of the active set algorithm efficiently. By relaxing the pruning conditions, we also present a quasi branch and bound algorithm which implements a good tradeoff between performance and complexity. Numerical results show that the complexity of MIMO detection based on the branch and bound algorithm is very low, especially at low SNR and for large constellations.
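
    On a toy scale, ML detection is the minimization of the residual ||y - Hs||^2 over the finite symbol alphabet; the exhaustive enumeration below is exactly the search that branch and bound prunes. Sizes and noise level are assumed.

```python
# Brute-force ML detection for a small MIMO system: minimize the
# residual ||y - H s||^2 over all QPSK symbol vectors. Branch and bound
# finds the same minimizer while pruning most of this enumeration.
# Antenna count and noise level are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(9)
nt = 2                                     # transmit antennas
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
h = (rng.standard_normal((2, nt)) + 1j * rng.standard_normal((2, nt))) / np.sqrt(2)
s = rng.choice(qpsk, size=nt)              # transmitted symbols
y = h @ s + 0.1 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

best = min((np.array(cand) for cand in itertools.product(qpsk, repeat=nt)),
           key=lambda c: np.linalg.norm(y - h @ c) ** 2)
print(best, s)   # ML decision vs. transmitted symbols
```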

  9. Robust maximum-likelihood parameter estimation of stochastic state-space systems based on EM algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper addresses the problem of parameter estimation for multivariable stationary stochastic systems on the basis of observed output data. The main contribution is to employ the expectation-maximisation (EM) method as a means of computing the maximum-likelihood (ML) parameter estimates of the system. A closed form of the expectation for the studied system subject to Gaussian noise is derived, and the parameter choice that maximizes the expectation is also proposed. This results in an iterative algorithm for parameter estimation, and a robust implementation based on QR factorization and Cholesky factorization is also discussed. Moreover, algorithmic properties such as the non-decreasing likelihood value, necessary and sufficient conditions for the algorithm to arrive at a local stationary point, the convergence rate and the factors affecting the convergence rate are analyzed. A simulation study shows that the proposed algorithm has attractive properties such as numerical stability and avoidance of difficult initial conditions.

  10. Quasi-Maximum Likelihood Estimators in Generalized Linear Models with Autoregressive Processes

    Institute of Scientific and Technical Information of China (English)

    Hong Chang HU; Lei SONG

    2014-01-01

    The paper studies a generalized linear model (GLM) $y_t = h(x_t^T \beta) + \varepsilon_t$, $t = 1, 2, \ldots, n$, where $\varepsilon_1 = \eta_1$, $\varepsilon_t = \rho \varepsilon_{t-1} + \eta_t$, $t = 2, 3, \ldots, n$, $h$ is a continuously differentiable function, and the $\eta_t$ are independent and identically distributed random errors with zero mean and finite variance $\sigma^2$. Firstly, the quasi-maximum likelihood (QML) estimators of $\beta$, $\rho$ and $\sigma^2$ are given. Secondly, under mild conditions, the asymptotic properties (including existence, weak consistency and asymptotic distribution) of the QML estimators are investigated. Lastly, the validity of the method is illustrated by a simulation example.

  11. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient

    CERN Document Server

    Bian, Liheng; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-01-01

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for error removal. Results on both simulated data and real data captured using our laser FPM setup show that the proposed...

  12. Maximum likelihood q-estimator reveals nonextensivity regulated by extracellular potassium in the mammalian neuromuscular junction

    CERN Document Server

    da Silva, A J; Santos, D O C; Lima, R F

    2013-01-01

    Recently, we demonstrated the existence of nonextensivity in neuromuscular transmission [Phys. Rev. E 84, 041925 (2011)]. In the present letter, we propose a general criterion based on the q-calculus foundations and nonextensive statistics to estimate the values of both the scale factor and the q-index using the maximum likelihood q-estimation method (MLqE). We next applied our theoretical findings to electrophysiological recordings from the neuromuscular junction (NMJ), where spontaneous miniature end plate potentials (MEPP) were analyzed. These calculations were performed in both normal and high extracellular potassium concentration, [K+]o. This protocol was adopted to test the validity of the q-index in electrophysiological conditions closely resembling physiological stimuli. Surprisingly, the analysis showed a significant difference between the q-index in high and normal [K+]o, where the magnitude of nonextensivity was increased. Our letter provides a general way to obtain the best q-index from the q-Gaussian distrib...

  13. MLPnP - A Real-Time Maximum Likelihood Solution to the Perspective-n-Point Problem

    Science.gov (United States)

    Urban, S.; Leitloff, J.; Hinz, S.

    2016-06-01

    In this paper, a statistically optimal solution to the Perspective-n-Point (PnP) problem is presented. Many solutions to the PnP problem are geometrically optimal but do not consider the uncertainties of the observations. In addition, it would be desirable to have an internal estimate of the accuracy of the estimated rotation and translation parameters of the camera pose. Thus, we propose a novel maximum likelihood solution to the PnP problem that incorporates image observation uncertainties while remaining real-time capable. Further, the presented method is general, as it works with 3D direction vectors instead of 2D image points and is thus able to cope with arbitrary central camera models. This is achieved by projecting (and thus reducing) the covariance matrices of the observations to the corresponding vector tangent space.

  14. Maximum Likelihood Timing and Carrier Synchronization in Burst-Mode Satellite Transmissions

    Directory of Open Access Journals (Sweden)

    Morelli Michele

    2007-01-01

    Full Text Available This paper investigates the joint maximum likelihood (ML) estimation of the carrier frequency offset, timing error, and carrier phase in burst-mode satellite transmissions over an AWGN channel. The synchronization process is assisted by a training sequence appended in front of each burst and composed of alternating binary symbols. The use of this particular pilot pattern results in an estimation algorithm of affordable complexity that operates in a decoupled fashion. In particular, the frequency offset is measured first and independently of the other parameters. Timing and phase estimates are subsequently computed through simple closed-form expressions. The performance of the proposed scheme is investigated by computer simulation and compared with Cramer-Rao bounds. It turns out that the estimation accuracy is very close to the theoretical limits down to relatively low signal-to-noise ratios. This makes the algorithm well suited for turbo-coded transmissions operating near the Shannon limit.

  15. Comparison between artificial neural networks and maximum likelihood classification in digital soil mapping

    Directory of Open Access Journals (Sweden)

    César da Silva Chagas

    2013-04-01

    Soil surveys are the main source of spatial information on soils and have a range of different applications, mainly in agriculture. The continuity of this activity has, however, been severely compromised, mainly due to a lack of governmental funding. The purpose of this study was to evaluate the feasibility of two different classifiers (artificial neural networks and a maximum likelihood algorithm) in the prediction of soil classes in the northwest of the state of Rio de Janeiro. Terrain attributes such as elevation, slope, aspect, plan curvature and compound topographic index (CTI), and indices of clay minerals, iron oxide and the Normalized Difference Vegetation Index (NDVI), derived from Landsat 7 ETM+ sensor imagery, were used as discriminating variables. The two classifiers were trained and validated for each soil class using 300 and 150 samples respectively, representing the characteristics of these classes in terms of the discriminating variables. According to the statistical tests, the accuracy of the classifier based on artificial neural networks (ANNs) was greater than that of the classic Maximum Likelihood Classifier (MLC). Comparison with 126 reference points showed that the resulting ANN map (73.81 %) was superior to the MLC map (57.94 %). The main errors when using the two classifiers were caused by: (a) the geological heterogeneity of the area coupled with problems related to the geological map; (b) the depth of lithic contact and/or rock exposure; and (c) problems with the environmental correlation model used, due to the polygenetic nature of the soils. This study confirms that the use of terrain attributes together with remote sensing data by an ANN approach can be a tool to facilitate soil mapping in Brazil, primarily due to the availability of low-cost remote sensing data and the ease by which terrain attributes can be obtained.
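
    For reference, the maximum likelihood classifier used as the baseline in studies like this one fits a multivariate Gaussian per class and assigns each pixel to the class with the highest log-likelihood; a minimal sketch with illustrative names:

        import numpy as np
        from scipy.stats import multivariate_normal

        def train_mlc(X, labels):
            # Fit one Gaussian (mean, covariance) per class from training pixels.
            return {c: (X[labels == c].mean(axis=0),
                        np.cov(X[labels == c], rowvar=False))
                    for c in np.unique(labels)}

        def classify_mlc(X, models):
            # Assign each pixel (row of X) to the class of maximum log-likelihood.
            classes = list(models)
            ll = np.column_stack([
                multivariate_normal(mean=m, cov=S, allow_singular=True).logpdf(X)
                for m, S in models.values()])
            return np.asarray(classes)[np.argmax(ll, axis=1)]

        rng = np.random.default_rng(10)
        X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(3, 1, (200, 4))])
        labels = np.repeat([0, 1], 200)
        print((classify_mlc(X, train_mlc(X, labels)) == labels).mean())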

  16. Supervised maximum-likelihood weighting of composite protein networks for complex prediction

    Directory of Open Access Journals (Sweden)

    Yong Chern Han

    2012-12-01

    Background Protein complexes participate in many important cellular functions, so finding the set of existent complexes is essential for understanding the organization and regulation of processes in the cell. With the availability of large amounts of high-throughput protein-protein interaction (PPI) data, many algorithms have been proposed to discover protein complexes from PPI networks. However, such approaches are hindered by the high rate of noise in high-throughput PPI data, including spurious and missing interactions. Furthermore, many transient interactions are detected between proteins that are not from the same complex, while not all proteins from the same complex may actually interact. As a result, predicted complexes often do not match true complexes well, and many true complexes go undetected. Results We address these challenges by integrating PPI data with other heterogeneous data sources to construct a composite protein network, and using a supervised maximum-likelihood approach to weight each edge based on its posterior probability of belonging to a complex. We then use six different clustering algorithms, and an aggregative clustering strategy, to discover complexes in the weighted network. We test our method on Saccharomyces cerevisiae and Homo sapiens, and show that complex discovery is improved: compared to previously proposed supervised and unsupervised weighting approaches, our method recalls more known complexes, achieves higher precision at all recall levels, and generates novel complexes of greater functional similarity. Furthermore, our maximum-likelihood approach allows learned parameters to be used to visualize and evaluate the evidence of novel predictions, aiding human judgment of their credibility. Conclusions Our approach integrates multiple data sources with supervised learning to create a weighted composite protein network, and uses six clustering algorithms with an aggregative clustering strategy to

  17. Estimating parameters of generalized integrate-and-fire neurons from the maximum likelihood of spike trains.

    Science.gov (United States)

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2011-11-01

    When a neuronal spike train is observed, what can we deduce from it about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that, at least in principle, its unique global minimum can thus be found by gradient descent techniques. Many biological neurons are, however, known to generate a richer repertoire of spiking behaviors than can be explained in a simple integrate-and-fire model. For instance, such a model retains only an implicit (through spike-induced currents), not an explicit, memory of its input; an example of a physiological situation that cannot be explained is the absence of firing if the input current is increased very slowly. Therefore, we use an expanded model (Mihalas & Niebur, 2009), which is capable of generating a large number of complex firing patterns while still being linear. Linearity is important because it maintains the distribution of the random variables and still allows maximum likelihood methods to be used. In this study, we show that although convexity of the negative log-likelihood function is not guaranteed for this model, the minimum of this function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) usually reaches the global minimum.

  18. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    Energy Technology Data Exchange (ETDEWEB)

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
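
    The map-making step described here solves the normal equations (A^T N^-1 A) m = A^T N^-1 d. A toy sketch with a sparse pointing matrix, white noise (so N^-1 drops out), and conjugate gradients follows; the real code's correlated-noise handling and preconditioning are omitted.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import LinearOperator, cg

        # toy scan: each time sample observes one of npix pixels
        rng = np.random.default_rng(2)
        nsamp, npix = 100000, 500
        pix = rng.integers(0, npix, size=nsamp)
        sky = rng.normal(size=npix)
        tod = sky[pix] + 0.5 * rng.normal(size=nsamp)      # time-ordered data

        # sparse pointing matrix A; solve A^T A m = A^T d by conjugate gradients
        A = csr_matrix((np.ones(nsamp), (np.arange(nsamp), pix)), shape=(nsamp, npix))
        op = LinearOperator((npix, npix), matvec=lambda m: A.T @ (A @ m), dtype=float)
        map_ml, info = cg(op, A.T @ tod)                   # ML sky map estimate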

  19. ESTIMATION OF THE PARAMETERS: ONE-STEP ESTIMATORS ARE MORE PREFERABLE THAN MAXIMUM LIKELIHOOD ESTIMATORS

    Directory of Open Access Journals (Sweden)

    Orlov A. I.

    2015-05-01

    According to the new paradigm of applied mathematical statistics one should prefer non-parametric methods and models. However, in applied statistics we currently use a variety of parametric models. The term "parametric" means that the probabilistic-statistical model is fully described by a finite-dimensional vector of fixed dimension, and this dimension does not depend on the size of the sample. In parametric statistics the estimation problem is to estimate the unknown (to the statistician) value of the parameter by means of the best (in some sense) method. In the statistical problems of standardization and quality control we use a three-parameter family of gamma distributions. In this article, it is considered as an example of a parametric distribution family. We compare the methods for estimating the parameters. The method of moments is universal. However, the estimates obtained with the help of the method of moments have optimal properties only in rare cases. Maximum likelihood estimation (MLE) belongs to the class of the best asymptotically normal estimates. In most cases, analytical solutions do not exist; therefore, to find the MLE it is necessary to apply numerical methods. However, the use of numerical methods creates numerous problems. Convergence of iterative algorithms requires justification. In a number of examples of the analysis of real data, the likelihood function has many local maxima, and because of that natural iterative procedures do not converge. We suggest the use of one-step estimates (OS-estimates). They have equally good asymptotic properties as the maximum likelihood estimators, under the same regularity conditions as the MLE. One-step estimates are written in the form of explicit formulas. In this article it is proved that the one-step estimates are the best asymptotically normal estimates (under natural conditions). We have found OS-estimates for the gamma distribution and given the results of calculations using data on operating time
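
    A sketch of the one-step construction the abstract advocates, for the two-parameter gamma family rather than the article's three-parameter one: start from the root-n-consistent method-of-moments estimate and take a single Newton step on the log-likelihood.

        import numpy as np
        from scipy.special import polygamma, psi
        from scipy.stats import gamma

        def one_step_gamma(x):
            # theta_1 = theta_0 - H(theta_0)^{-1} score(theta_0), with theta_0
            # the method-of-moments estimate of (shape a, scale s).
            n, m, v = x.size, x.mean(), x.var()
            a, s = m * m / v, v / m
            score = np.array([np.log(x).sum() - n * np.log(s) - n * psi(a),
                              x.sum() / s**2 - n * a / s])
            hess = np.array([[-n * polygamma(1, a), -n / s],
                             [-n / s, -2 * x.sum() / s**3 + n * a / s**2]])
            return np.array([a, s]) - np.linalg.solve(hess, score)

        x = gamma.rvs(a=2.5, scale=1.3, size=2000, random_state=3)
        print(one_step_gamma(x))    # close to the MLE and to (2.5, 1.3)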

  1. Bounds for Maximum Likelihood Regular and Non-Regular DoA Estimation in K-Distributed Noise

    Science.gov (United States)

    Abramovich, Yuri I.; Besson, Olivier; Johnson, Ben A.

    2015-11-01

    We consider the problem of estimating the direction of arrival of a signal embedded in $K$-distributed noise, when secondary data which contains noise only are assumed to be available. Based upon a recent formula of the Fisher information matrix (FIM) for complex elliptically distributed data, we provide a simple expression of the FIM within the two-data-set framework. In the specific case of $K$-distributed noise, we show that, under certain conditions, the FIM for the deterministic part of the model can be unbounded, while the FIM for the covariance part of the model is always bounded. In the general case of elliptical distributions, we provide a sufficient condition for unboundedness of the FIM. Accurate approximations of the FIM for $K$-distributed noise are also derived when it is bounded. Additionally, the maximum likelihood estimator of the signal DoA and an approximated version are derived, assuming known covariance matrix: the latter is then estimated from secondary data using a conventional regularization technique. When the FIM is unbounded, an analysis of the estimators reveals a rate of convergence much faster than the usual $T^{-1}$. Simulations illustrate the different behaviors of the estimators, depending on the FIM being bounded or not.

  2. Maximum likelihood model based on minor allele frequencies and weighted Max-SAT formulation for haplotype assembly.

    Science.gov (United States)

    Mousavi, Sayyed R; Khodadadi, Ilnaz; Falsafain, Hossein; Nadimi, Reza; Ghadiri, Nasser

    2014-06-07

    Human haplotypes include essential information about SNPs, which in turn provide valuable information for studies such as finding relationships between some diseases and their potential genetic causes, e.g., Genome Wide Association Studies. Due to the expense of directly determining haplotypes and recent progress in high-throughput sequencing, there has been increasing motivation for haplotype assembly, which is the problem of finding a pair of haplotypes from a set of aligned fragments. Although the problem has been extensively studied and a number of algorithms have already been proposed, more accurate methods are still beneficial because of the high importance of haplotype information. In this paper, we first develop a probabilistic model that incorporates the Minor Allele Frequency (MAF) of SNP sites, which is missing from existing maximum likelihood models. Then, we show that the probabilistic model reduces to the Minimum Error Correction (MEC) model when the information of MAF is omitted and some approximations are made. This result provides novel theoretical support for the MEC, despite some criticisms against it in the recent literature. Next, under the same approximations, we simplify the model to an extension of the MEC in which the information of MAF is used. Finally, we extend the haplotype assembly algorithm HapSAT by developing a weighted Max-SAT formulation for the simplified model, which is evaluated empirically with positive results.
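
    For orientation, the Minimum Error Correction objective that the likelihood model reduces to can be scored in a few lines; the fragment encoding ('0'/'1' alleles, '-' for uncovered sites) is an illustrative convention.

        def mec_cost(fragments, h1, h2):
            # Each fragment is assigned to the closer haplotype; the cost is
            # the total number of allele corrections (mismatches) needed.
            def mismatches(frag, hap):
                return sum(1 for f, h in zip(frag, hap) if f != '-' and f != h)
            return sum(min(mismatches(f, h1), mismatches(f, h2)) for f in fragments)

        # toy example: four SNP sites, fragments covering subsets of them
        fragments = ["01--", "-10-", "0--1", "-01-"]
        print(mec_cost(fragments, "0101", "1010"))   # 0: fragments are consistent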

  3. Investigation of optimal parameters for penalized maximum-likelihood reconstruction applied to iodinated contrast-enhanced breast CT

    Science.gov (United States)

    Makeev, Andrey; Ikejimba, Lynda; Lo, Joseph Y.; Glick, Stephen J.

    2016-03-01

    Although digital mammography has reduced breast cancer mortality by approximately 30%, sensitivity and specificity are still far from perfect. In particular, the performance of mammography is especially limited for women with dense breast tissue. Two out of every three biopsies performed in the U.S. are unnecessary, thereby resulting in increased patient anxiety, pain, and possible complications. One promising tomographic breast imaging method that has recently been approved by the FDA is dedicated breast computed tomography (BCT). However, visualizing lesions with BCT can still be challenging for women with dense breast tissue due to the minimal contrast for lesions surrounded by fibroglandular tissue. In recent years there has been renewed interest in improving lesion conspicuity in x-ray breast imaging by administration of an iodinated contrast agent. Due to the fully 3-D imaging nature of BCT, as well as sub-optimal contrast enhancement while the breast is under compression with mammography and breast tomosynthesis, dedicated BCT of the uncompressed breast is likely to offer the best solution for injected contrast-enhanced x-ray breast imaging. It is well known that use of statistically-based iterative reconstruction in CT results in improved image quality at lower radiation dose. Here we investigate possible improvements in image reconstruction for BCT by optimizing the free regularization parameter in a penalized maximum-likelihood method and comparing its performance with the clinical cone-beam filtered backprojection (FBP) algorithm.

  4. THE GENERALIZED MAXIMUM LIKELIHOOD METHOD APPLIED TO HIGH PRESSURE PHASE EQUILIBRIUM

    Directory of Open Access Journals (Sweden)

    Lúcio CARDOZO-FILHO

    1997-12-01

    The generalized maximum likelihood method was used to determine binary interaction parameters between carbon dioxide and components of orange essential oil. Vapor-liquid equilibrium was modeled with the Peng-Robinson and Soave-Redlich-Kwong equations, using a methodology proposed in 1979 by Asselineau, Bogdanic and Vidal. Experimental vapor-liquid equilibrium data on binary mixtures formed with carbon dioxide and compounds usually found in orange essential oil were used to test the model. These systems were chosen to demonstrate that the maximum likelihood method produces binary interaction parameters for cubic equations of state capable of satisfactorily describing phase equilibrium, even for a binary such as ethanol/CO2. The results corroborate that both the Peng-Robinson and the Soave-Redlich-Kwong equations can be used to describe phase equilibrium for systems of components of orange essential oil with CO2.

  5. The Benefits of Maximum Likelihood Estimators in Predicting Bulk Permeability and Upscaling Fracture Networks

    Science.gov (United States)

    Emanuele Rizzo, Roberto; Healy, David; De Siena, Luca

    2016-04-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in fractured rock, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (lengths, apertures, orientations and densities) is fundamental to the estimation of permeability and fluid flow, which are of primary importance in a number of contexts including: hydrocarbon production from fractured reservoirs; geothermal energy extraction; and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. Our work links outcrop fracture data to modelled fracture networks in order to numerically predict bulk permeability. We collected outcrop data from a highly fractured upper Miocene biosiliceous mudstone formation, cropping out along the coastline north of Santa Cruz (California, USA). Using outcrop fracture networks as analogues for subsurface fracture systems has several advantages, because key fracture attributes such as spatial arrangements and lengths can be effectively measured only on outcrops [1]. However, a limitation when dealing with outcrop data is the relative sparseness of natural data due to the intrinsic finite size of the outcrops. We make use of a statistical approach for the overall workflow, starting from data collection with the Circular Windows Method [2]. Then we analyse the data statistically using Maximum Likelihood Estimators, which provide greater accuracy compared to the more commonly used Least Squares linear regression when investigating the distributions of fracture attributes. Finally, we estimate the bulk permeability of the fractured rock mass using Oda's tensorial approach [3]. The higher quality of this statistical analysis is fundamental: better statistics of the fracture attributes means more accurate permeability estimation, since the fracture attributes feed
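
    As an illustration of why MLE is preferred over least-squares slope fitting for attribute distributions, here is the standard continuous power-law MLE with its asymptotic standard error; treating lengths as power-law distributed above a cutoff xmin is an assumption of the example, not necessarily the authors' exact model.

        import numpy as np

        def powerlaw_mle(x, xmin):
            # MLE of alpha for p(x) ~ x^(-alpha), x >= xmin (continuous case).
            x = np.asarray(x)
            x = x[x >= xmin]
            n = x.size
            alpha = 1.0 + n / np.sum(np.log(x / xmin))
            return alpha, (alpha - 1.0) / np.sqrt(n)     # estimate, std. error

        # toy check: inverse-transform samples with alpha = 2.5, xmin = 1
        rng = np.random.default_rng(4)
        x = (1.0 - rng.uniform(size=5000)) ** (-1.0 / (2.5 - 1.0))
        print(powerlaw_mle(x, xmin=1.0))                 # roughly (2.5, 0.02)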

  6. Land Use Classification using Support Vector Machine and Maximum Likelihood Algorithms by Landsat 5 TM Images

    Directory of Open Access Journals (Sweden)

    Abbas TAATI

    2015-08-01

    Nowadays, remote sensing images have been identified and exploited as the latest source of information for studying land cover and land use. These digital images are of significant importance, since they can provide timely information and are capable of producing land use maps. The aim of this study is to create a land use classification using a support vector machine (SVM) and a maximum likelihood classifier (MLC) in Qazvin, Iran, from TM images of the Landsat 5 satellite. In the pre-processing stage, the necessary corrections were applied to the images. In order to evaluate the accuracy of the two algorithms, the overall accuracy and kappa coefficient were used. The evaluation results verified that the SVM algorithm, with an overall accuracy of 86.67 % and a kappa coefficient of 0.82, has a higher accuracy than the MLC algorithm in land use mapping. Therefore, this algorithm has been suggested as an optimal classifier for the extraction of land use maps due to its higher accuracy and better consistency within the study area.
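
    Both evaluation measures used here come straight from the confusion matrix; a minimal sketch (the matrix values are made up):

        import numpy as np

        def overall_accuracy_and_kappa(cm):
            # cm: rows are reference classes, columns are predicted classes.
            cm = np.asarray(cm, dtype=float)
            n = cm.sum()
            po = np.trace(cm) / n                                   # observed agreement
            pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
            return po, (po - pe) / (1.0 - pe)

        cm = [[50, 3, 2], [4, 45, 6], [1, 5, 44]]
        print(overall_accuracy_and_kappa(cm))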

  7. Maximum likelihood estimates of two-locus recombination fractions under some natural inequality restrictions

    Directory of Open Access Journals (Sweden)

    Guo Jianhua

    2008-01-01

    Background The goal of linkage analysis is to determine the chromosomal location of the gene(s) for a trait of interest such as a common disease. Three-locus linkage analysis is an important case of multi-locus problems. Solutions can be found analytically for the case of triple backcross mating. However, in the present study of linkage analysis and gene mapping some natural inequality restrictions on parameters have not been considered sufficiently when the maximum likelihood estimates (MLEs) of the two-locus recombination fractions are calculated. Results In this paper, we present a study of estimating the two-locus recombination fractions for the phase-unknown triple backcross with two offspring in each family in the framework of some natural and necessary parameter restrictions. A restricted expectation-maximization (EM) algorithm, called REM, is developed. We also consider some extensions in which the proposed REM can be taken as a unified method. Conclusion Our simulation work suggests that the REM performs well in the estimation of recombination fractions and outperforms the current method. We apply the proposed method to a published data set of mouse backcross families.

  8. Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code

    Directory of Open Access Journals (Sweden)

    Adel Ahmadi

    2015-01-01

    Motivated by the decompositions used in sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block codes (QOSTBC). The proposed algorithm, with a relatively simple design, exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. The search spaces of the symbols are substantially reduced by employing the independent parts and the statistics of the noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder's performance is superior to many of the recently published state-of-the-art solutions in terms of complexity level. More specifically, it was possible to verify that application of the new algorithm with 1024-QAM decreases the computational complexity compared to the state-of-the-art solution with 16-QAM.

  9. A New Maximum Likelihood Approach for Free Energy Profile Construction from Molecular Simulations.

    Science.gov (United States)

    Lee, Tai-Sung; Radak, Brian K; Pabis, Anna; York, Darrin M

    2013-01-08

    A novel variational method for construction of free energy profiles from molecular simulation data is presented. The variational free energy profile (VFEP) method uses the maximum likelihood principle applied to the global free energy profile based on the entire set of simulation data (e.g., from multiple biased simulations) that spans the free energy surface. The new method addresses common obstacles in two major problems usually observed in traditional methods for estimating free energy surfaces: the need for overlap in the re-weighting procedure and the problem of data representation. Test cases demonstrate that VFEP outperforms other methods in terms of the amount and sparsity of the data needed to construct the overall free energy profiles. For typical chemical reactions, only ~5 windows and ~20-35 independent data points per window are sufficient to obtain an overall qualitatively correct free energy profile with sampling errors an order of magnitude smaller than the free energy barrier. The proposed approach thus provides a feasible mechanism to quickly construct the global free energy profile and identify free energy barriers and basins in free energy simulations via a robust, variational procedure that determines an analytic representation of the free energy profile without the requirement of numerically unstable histograms or binning procedures. It can serve as a new framework for biased simulations and is suitable to be used together with other methods to tackle the free energy estimation problem.

  10. The optical synthetic aperture image restoration based on the improved maximum-likelihood algorithm

    Science.gov (United States)

    Geng, Zexun; Xu, Qing; Zhang, Baoming; Gong, Zhihui

    2012-09-01

    Optical synthetic aperture imaging (OSAI) can be envisaged in the future for improving the image resolution from high altitude orbits. Several future projects are based on optical synthetic apertures for science or earth observation. Compared with equivalent monolithic telescopes, however, the partly filled aperture of OSAI induces an attenuation of the modulation transfer function of the system. Consequently, images acquired by an OSAI instrument have to be post-processed to restore images equivalent in resolution to those of a single filled aperture. The maximum-likelihood (ML) algorithm proposed by Benvenuto performed better than the traditional Wiener filter did, but it did not work stably, and the point spread function (PSF) was assumed to be known and unchanged during iterative restoration. In fact, the PSF is unknown in most cases, and its estimate is expected to be updated alternately in the optimization. To address these limitations, an improved ML (IML) reconstruction algorithm is proposed in this paper, which incorporates PSF estimation by means of parameter identification into ML and updates the PSF successively during iteration. Accordingly, the IML algorithm converges stably and reaches better results. Experimental results show that the proposed algorithm performs much better than ML in terms of peak signal-to-noise ratio, mean square error and the average contrast evaluation index.
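
    The alternate-update idea can be illustrated with blind Richardson-Lucy iterations, the classical ML scheme under Poisson noise in which the same multiplicative update is applied in turn to the image and to the PSF. This is a generic stand-in, not the authors' IML algorithm.

        import numpy as np
        from scipy.signal import fftconvolve

        def blind_richardson_lucy(y, h0, n_outer=10, n_inner=5):
            # y and h0 are same-shape, nonnegative 2D arrays.
            x = np.full_like(y, y.mean())
            h = h0 / h0.sum()
            for _ in range(n_outer):
                for _ in range(n_inner):                 # image step, PSF fixed
                    ratio = y / (fftconvolve(x, h, mode="same") + 1e-12)
                    x *= fftconvolve(ratio, h[::-1, ::-1], mode="same")
                for _ in range(n_inner):                 # PSF step, image fixed
                    ratio = y / (fftconvolve(x, h, mode="same") + 1e-12)
                    h *= fftconvolve(ratio, x[::-1, ::-1], mode="same")
                    h /= h.sum()
            return x, h

        # toy usage: blur a random scene with a Gaussian PSF, restore blindly
        rng = np.random.default_rng(9)
        scene = rng.uniform(size=(64, 64))
        g = np.exp(-0.5 * ((np.arange(64) - 32) / 2.0) ** 2)
        psf = np.outer(g, g); psf /= psf.sum()
        blurred = fftconvolve(scene, psf, mode="same")
        g0 = np.exp(-0.5 * ((np.arange(64) - 32) / 4.0) ** 2)
        x_hat, h_hat = blind_richardson_lucy(blurred, np.outer(g0, g0))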

  11. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Mohammad H. Radfar; Richard M. Dansereau

    2007-01-01

    We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on underdetermined blind source separation. We compare our model with both an underdetermined blind source separation method and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  12. Addressing Item-Level Missing Data: A Comparison of Proration and Full Information Maximum Likelihood Estimation.

    Science.gov (United States)

    Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S

    2015-01-01

    Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002; Graham, 2009; Enders, 2010). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program.
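
    The heart of FIML is that each case contributes the normal log-density of whichever items it actually has. A sketch of that casewise log-likelihood follows; a real analysis would maximize it over mu and Sigma together with the substantive model.

        import numpy as np
        from scipy.stats import multivariate_normal

        def fiml_loglik(Y, mu, Sigma):
            # Y is n x p with NaN for missing items; no case is dropped and
            # nothing is prorated: each row contributes its observed margin.
            total = 0.0
            for row in Y:
                obs = ~np.isnan(row)
                if obs.any():
                    total += multivariate_normal(
                        mean=mu[obs], cov=Sigma[np.ix_(obs, obs)]).logpdf(row[obs])
            return total

        Y = np.array([[1.0, 2.0, np.nan],
                      [0.5, np.nan, 1.5],
                      [1.2, 1.8, 2.1]])
        print(fiml_loglik(Y, mu=np.nanmean(Y, axis=0), Sigma=np.eye(3)))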

  13. Maximum Likelihood Fitting of Tidal Streams With Application to the Sagittarius Dwarf Tidal Tails

    CERN Document Server

    Cole, Nathan; Magdon-Ismail, Malik; Desell, Travis; Dawsey, Kristopher; Hayashi, Warren; Liu, Xinyang; Purnell, Jonathan; Szymanski, Boleslaw; Varela, Carlos; Willett, Benjamin; Wisniewski, James

    2008-01-01

    We present a maximum likelihood method for determining the spatial properties of tidal debris and of the Galactic spheroid. With this method we characterize Sagittarius debris using stars with the colors of blue F turnoff stars in SDSS stripe 82. The debris is located at (alpha, delta, R) = (31.37 deg +/- 0.26 deg, 0.0 deg, 29.22 +/- 0.20 kpc), with a (spatial) direction given by the unit vector , in Galactocentric Cartesian coordinates, and with FWHM = 6.74 +/- 0.06 kpc. This 2.5 degree-wide stripe contains 0.892% as many F turnoff stars as the current Sagittarius dwarf galaxy. Over small spatial extent, the debris is modeled as a cylinder with a density that falls off as a Gaussian with distance from the axis, while the smooth component of the spheroid is modeled with a Hernquist profile. We assume that the absolute magnitude of F turnoff stars is distributed as a Gaussian, which is an improvement over previous methods which fixed the absolute magnitude at Mg0 = 4.2. The effectiveness and correctness of the ...

  14. Application of maximum-likelihood estimation in optical coherence tomography for nanometer-class thickness estimation

    Science.gov (United States)

    Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.

    2015-03-01

    In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.

  15. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    Science.gov (United States)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
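
    The two criteria penalize the maximized log-likelihood by the number of free parameters; a sketch with made-up log-likelihood values, assuming k(k-1) transition rates plus k intensities for a k-state model:

        import numpy as np

        def aic_bic(loglik, n_params, n_obs):
            # Lower is better for both criteria.
            return 2 * n_params - 2 * loglik, n_params * np.log(n_obs) - 2 * loglik

        # compare 2-, 3- and 4-state models fit to 2000 photons (illustrative)
        for k, ll in [(2, -5210.0), (3, -5168.0), (4, -5166.5)]:
            n_params = k * (k - 1) + k
            print(k, aic_bic(ll, n_params, n_obs=2000))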

  16. MLGA: A SAS Macro to Compute Maximum Likelihood Estimators via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Francisco Juretig

    2015-08-01

    Nonlinear regression is usually implemented in SAS either by using PROC NLIN or PROC NLMIXED. Apart from the model structure, initial values need to be specified for each parameter. And after some convergence criteria are fulfilled, the second-order conditions need to be analyzed. But numerical problems are expected to appear in case the likelihood is nearly discontinuous, has plateaus or multiple maxima, or the initial values are distant from the true parameter estimates. The usual solution consists of using a grid, and then choosing the set of parameters reporting the highest log-likelihood. However, if the number of parameters or grid points is large, the computational burden will be excessive. Furthermore, there is no guarantee that, as the number of grid points increases, an equal or better set of points will be found. Genetic algorithms can overcome these problems by replicating how nature optimizes its processes. The MLGA macro is presented; it solves a maximum likelihood estimation problem under normality through PROC GA, and the resulting values can later be used as the starting values in SAS nonlinear procedures. As will be demonstrated, this macro can avoid the usual trial-and-error approach that is needed when convergence problems arise. Finally, it will be shown how this macro can deal with complicated restrictions involving multiple parameters.

  17. Maximum likelihood identification and optimal input design for identifying aircraft stability and control derivatives

    Science.gov (United States)

    Stepner, D. E.; Mehra, R. K.

    1973-01-01

    A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. The method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.

  18. Inverse Modeling of Respiratory System during Noninvasive Ventilation by Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Esra Saatci

    2010-01-01

    We propose a procedure to estimate the model parameters of the presented nonlinear Resistance-Capacitance (RC) and the widely used linear Resistance-Inductance-Capacitance (RIC) models of the respiratory system by a Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and the kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated against the Cramér-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model compared to the nonlinear RC model. On the other hand, the patient group respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, a better-converged measurement noise shape factor, and better model parameter tracks. Also, it is observed that for the patient group the shape factor of the measurement noise converges to values between 1 and 2, whereas for the control group the shape factor values are estimated in the super-Gaussian range.
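
    For a quick look at GGD shape-factor estimation, SciPy's generalized normal family can be fit by maximum likelihood; note this fits shape, location, and scale jointly, whereas the paper estimates the shape factor by a kurtosis method.

        from scipy.stats import gennorm

        # synthetic GGD data, shape beta = 1.5 (between Laplacian 1 and Gaussian 2)
        x = gennorm.rvs(beta=1.5, loc=0.0, scale=0.8, size=5000, random_state=5)
        beta_hat, loc_hat, scale_hat = gennorm.fit(x)    # joint ML fit
        print(beta_hat, loc_hat, scale_hat)              # roughly (1.5, 0.0, 0.8)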

  19. Optimization of the Maximum Likelihood Estimator for Determining the Intrinsic Dimensionality of High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Karbauskaitė Rasa

    2015-12-01

    One of the problems in the analysis of a set of images of a moving object is to evaluate the degrees of freedom of motion and the angle of rotation. Here the intrinsic dimensionality of the multidimensional data characterizing the set of images can be used. Usually, an image may be represented by a high-dimensional point whose dimensionality depends on the number of pixels in the image. Knowledge of the intrinsic dimensionality of a data set is very useful information in exploratory data analysis, because it is possible to reduce the dimensionality of the data without losing much information. In this paper, the maximum likelihood estimator (MLE) of the intrinsic dimensionality is explored experimentally. In contrast to previous works, the radius of a hypersphere, which covers the neighbours of the analysed points, is fixed instead of the number of nearest neighbours in the MLE. A way of choosing the radius in this method is proposed. We explore which metric (Euclidean or geodesic) must be evaluated in the MLE algorithm in order to get the true estimate of the intrinsic dimensionality. The MLE method is examined using a number of artificial and real (image) data sets.
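
    A sketch of the fixed-radius variant of the Levina-Bickel MLE described above: the neighbours of each point within radius R give a local estimate, and the local estimates are averaged (finite-sample bias corrections are ignored here).

        import numpy as np
        from scipy.spatial.distance import cdist

        def intrinsic_dim_mle(X, R):
            # Local estimate at x: m(x) = [mean_j log(R / T_j)]^(-1), where the
            # T_j are distances to neighbours of x closer than R.
            D = cdist(X, X)
            np.fill_diagonal(D, np.inf)          # exclude each point itself
            est = []
            for row in D:
                t = row[row < R]
                if t.size:
                    est.append(1.0 / np.mean(np.log(R / t)))
            return np.mean(est)

        # toy check: a 2D plane embedded in 5D should give about 2
        rng = np.random.default_rng(6)
        Z = rng.uniform(size=(1500, 2))
        X = np.hstack([Z, np.zeros((1500, 3))])
        print(intrinsic_dim_mle(X, R=0.1))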

  1. Comparing Bayesian and Maximum Likelihood Predictors in Structural Equation Modeling of Children’s Lifestyle Index

    Directory of Open Access Journals (Sweden)

    Che Wan Jasimah bt Wan Mohamed Radzi

    2016-11-01

    Several factors may influence children's lifestyle. The main purpose of this study is to introduce a children's lifestyle index framework and model it based on structural equation modeling (SEM) with maximum likelihood (ML) and Bayesian predictors. This framework includes parental socioeconomic status, household food security, parental lifestyle, and children's lifestyle. The sample for this study involves 452 volunteer Chinese families with children 7–12 years old. The experimental results are compared in terms of root mean square error, coefficient of determination, mean absolute error, and mean absolute percentage error metrics. An analysis of the proposed causal model suggests there are multiple significant interconnections among the variables of interest. According to both Bayesian and ML techniques, the proposed framework illustrates that parental socioeconomic status and parental lifestyle strongly impact children's lifestyle. The impact of household food security on children's lifestyle is rejected. However, there is a strong relationship between household food security and both parental socioeconomic status and parental lifestyle. Moreover, the outputs illustrate that the Bayesian prediction model has a good fit with the data, unlike the ML approach. The reasons for this discrepancy between ML and Bayesian prediction are debated, and potential advantages and caveats with the application of the Bayesian approach in future studies are discussed.

  2. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision in the estimation of clusters than did the existing empirical cumulative distribution function statistics.
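
    The model-averaging step can be expressed with information-criterion weights; a minimal sketch with made-up criterion values and per-site clustering profiles:

        import numpy as np

        def ic_weights(ic_values):
            # w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2), delta = IC - min(IC)
            delta = np.asarray(ic_values, dtype=float)
            delta -= delta.min()
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        weights = ic_weights([1012.3, 1009.8, 1015.1])
        profiles = np.array([[0.1, 0.8, 0.2],
                             [0.2, 0.7, 0.3],
                             [0.0, 0.9, 0.1]])
        print(weights @ profiles)    # model-averaged per-site profile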

  3. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.

    Science.gov (United States)

    Meyer, Karin

    2016-08-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty, derived by assuming a Beta distribution of scale-free functions of the covariance components to be estimated, rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes that optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.

  4. On some problems of weak consistency of quasi-maximum likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    ZHANG SanGuo; LIAO Yuan

    2008-01-01

    In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation $\sum_{i=1}^n X_i(y_i-\mu(X_i'\beta))=0$ for the univariate generalized linear model $E(y|X)=\mu(X'\beta)$. Given uncorrelated residuals $\{e_i=y_i-\mu(X_i'\beta_0),\ 1\le i\le n\}$ and other conditions, we prove that $\hat\beta_n-\beta_0=O_p(\lambda_n^{-1/2})$ holds, where $\hat\beta_n$ is a root of the above equation, $\beta_0$ is the true value of the parameter $\beta$, and $\lambda_n$ denotes the smallest eigenvalue of the matrix $S_n=\sum_{i=1}^n X_iX_i'$. We also show that this convergence rate is sharp, provided an independent non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of QMLE is $S_n^{-1}\to 0$ as the sample size $n\to\infty$.
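
    The quasi-likelihood equation above can be solved by Newton iteration; a sketch for a logistic mean function, in which case the estimating equation coincides with the ML score equation of a canonical GLM (names illustrative):

        import numpy as np

        def qmle(X, y, mu, dmu, n_iter=25):
            # Newton iteration on sum_i X_i (y_i - mu(X_i' beta)) = 0.
            beta = np.zeros(X.shape[1])
            for _ in range(n_iter):
                eta = X @ beta
                score = X.T @ (y - mu(eta))
                info = (X * dmu(eta)[:, None]).T @ X   # minus the score Jacobian
                beta += np.linalg.solve(info, score)
            return beta

        rng = np.random.default_rng(8)
        X = np.hstack([np.ones((500, 1)), rng.normal(size=(500, 2))])
        p = 1.0 / (1.0 + np.exp(-(X @ np.array([0.3, -1.0, 2.0]))))
        y = rng.binomial(1, p).astype(float)
        logistic = lambda t: 1.0 / (1.0 + np.exp(-t))
        print(qmle(X, y, logistic, dmu=lambda t: logistic(t) * (1.0 - logistic(t))))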

  5. Conditional maximum likelihood estimation in semiparametric transformation model with LTRC data.

    Science.gov (United States)

    Chen, Chyong-Mei; Shen, Pao-Sheng

    2017-02-06

    Left-truncated data often arise in epidemiology and individual follow-up studies due to a biased sampling plan, since subjects with shorter survival times tend to be excluded from the sample. Moreover, the survival times of recruited subjects are often subject to right censoring. In this article, a general class of semiparametric transformation models that includes the proportional hazards model and the proportional odds model as special cases is studied for the analysis of left-truncated and right-censored data. We propose a conditional likelihood approach and develop the conditional maximum likelihood estimators (cMLE) for the regression parameters and cumulative hazard function of these models. The derived score equations for the regression parameter and the infinite-dimensional function suggest an iterative algorithm for the cMLE. The cMLE is shown to be consistent and asymptotically normal. The limiting variances of the estimators can be consistently estimated using the inverse of the negative Hessian matrix. Intensive simulation studies are conducted to investigate the performance of the cMLE. An application to the Channing House data is given to illustrate the methodology.

  6. Maximum Likelihood Estimation of Monocular Optical Flow Field for Mobile Robot Ego-motion

    Directory of Open Access Journals (Sweden)

    Huajun Liu

    2016-01-01

    This paper presents an optimized scheme of monocular ego-motion estimation to provide location and pose information for mobile robots with one fixed camera. First, a multi-scale hyper-complex wavelet phase-derived optical flow is applied to estimate the micro motion of image blocks. Optical flow computation overcomes the difficulties of unreliable feature selection and feature matching in outdoor scenes; at the same time, the multi-scale strategy overcomes the problems of road surface self-similarity and local occlusions. Secondly, a support probability of the flow vector is defined to evaluate the validity of candidate image motions, and a Maximum Likelihood Estimation (MLE) optical flow model is constructed based not only on image motion residuals but also on their distribution of inliers and outliers, together with their support probabilities, to evaluate a given transform. This yields an optimized estimation of the inlier parts of the optical flow. Thirdly, a sampling and consensus strategy is designed to estimate the ego-motion parameters. Our model and algorithms are tested on real datasets collected from an intelligent vehicle. The experimental results demonstrate that the estimated ego-motion parameters closely follow the GPS/INS ground truth in complex outdoor road scenarios.

  7. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Directory of Open Access Journals (Sweden)

    Sonali Sachin Sankpal

    2016-01-01

    Scattering and absorption of light are the main reasons for limited visibility in water. The suspended particles and dissolved chemical compounds in water are also responsible for scattering and absorption of light in water. The limited visibility in water results in degradation of underwater images. The visibility can be increased by using an artificial light source in the underwater imaging system. But the artificial light illuminates the scene in a nonuniform fashion. It produces a bright spot at the center with a dark region at the surroundings. In some cases the imaging system itself creates a dark region in the image by producing a shadow on the objects. The problem of nonuniform illumination is neglected in most image enhancement techniques for underwater images, and very few methods show results on color images. This paper suggests a method for nonuniform illumination correction for underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses maximum likelihood estimation of the scale parameter to map the distribution of the image to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and a comprehensive assessment function.
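
    The Rayleigh scale MLE underlying the mapping has a closed form, sigma^2 = sum(x^2) / (2n); a sketch on synthetic intensities (the full distribution-mapping step of the method is omitted):

        import numpy as np

        def rayleigh_scale_mle(pixels):
            # Closed-form ML estimate of the Rayleigh scale parameter.
            x = np.asarray(pixels, dtype=float).ravel()
            return np.sqrt(np.mean(x ** 2) / 2.0)

        rng = np.random.default_rng(7)
        img = rng.rayleigh(scale=30.0, size=(64, 64))
        print(rayleigh_scale_mle(img))    # close to 30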

  8. Performance of MIMO-OFDM system using Linear Maximum Likelihood Alamouti Decoder

    Directory of Open Access Journals (Sweden)

    Monika Aggarwal

    2012-06-01

    A MIMO-OFDM wireless communication system is a combination of MIMO and OFDM technology. The combination of MIMO and OFDM produces a powerful technique for providing high data rates over frequency-selective fading channels, and MIMO-OFDM has been recognized as one of the most competitive technologies for 4G mobile wireless systems. MIMO-OFDM can compensate for the deficiencies of MIMO systems while exploiting the advantages of OFDM. In this paper, the bit error rate (BER) performance of a linear maximum likelihood Alamouti combiner (LMLAC) decoding technique for space-time-frequency block code (STFBC) MIMO-OFDM systems with frequency offset (FO) is evaluated, with the goal of providing the system with low complexity and maximum diversity. The simulation results showed that the scheme has the ability to reduce ICI effectively with a low decoding complexity and maximum diversity in terms of bandwidth efficiency and also in bit error rate (BER) performance, especially at high signal-to-noise ratio.

  9. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods

    Directory of Open Access Journals (Sweden)

    Leandro de Jesus Benevides

    Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed at performing a phylogenetic study on apo E carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results with a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The accordance in results from the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups.

  11. Restricted maximum likelihood estimation of genetic principal components and smoothed covariance matrices

    Directory of Open Access Journals (Sweden)

    Kirkpatrick Mark

    2005-01-01

    Full Text Available Abstract Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that genetic principal components can be estimated directly through a simple reparameterisation of the usual linear mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k − m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given.
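
    The quoted parameter counts are easy to verify numerically; a two-line check, with k and m chosen purely for illustration:

```python
def full_rank_params(k):        # unstructured covariance: k(k + 1)/2
    return k * (k + 1) // 2

def reduced_rank_params(k, m):  # m principal components: m(2k - m + 1)/2
    return m * (2 * k - m + 1) // 2

print(full_rank_params(8), reduced_rank_params(8, 3))  # 36 vs. 21
```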

  12. Application of Artificial Bee Colony Algorithm to Maximum Likelihood DOA Estimation

    Institute of Scientific and Technical Information of China (English)

    Zhicheng Zhang; Jun Lin; Yaowu Shi

    2013-01-01

    Maximum Likelihood (ML) method has an excellent performance for Direction-Of-Arrival (DOA) estimation, but a multidimensional nonlinear solution search is required, which complicates the computation and prevents the method from practical use. To reduce the high computational burden of the ML method and make it more suitable for engineering applications, we apply the Artificial Bee Colony (ABC) algorithm to maximize the likelihood function for DOA estimation. As a recently proposed bio-inspired computing algorithm, the ABC algorithm was originally used to optimize multivariable functions by imitating the behavior of a bee colony finding excellent nectar sources in the natural environment. It offers an excellent alternative to conventional methods in ML-DOA estimation. The performance of ABC-based ML and other popular metaheuristic-based ML methods for DOA estimation is compared for various scenarios of convergence, Signal-to-Noise Ratio (SNR), and number of iterations. The computational loads of ABC-based ML and the conventional ML methods for DOA estimation are also investigated. Simulation results demonstrate that the proposed ABC-based method is more efficient in computation and statistical performance than other ML-based DOA estimation methods.
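
    The ABC optimizer itself is beyond a short sketch, but the objective it maximizes can be illustrated: for a single source on a uniform linear array, the concentrated ML criterion reduces to scanning the steering vector against the sample covariance. Here a plain grid search stands in for the ABC search; all names are illustrative:

```python
import numpy as np

def ml_doa_one_source(X, d=0.5, grid=None):
    """Concentrated ML DOA criterion for a single source on a uniform
    linear array: maximize a(theta)^H R a(theta) over the scan angle.
    X: (M, N) array of N snapshots; d: element spacing in wavelengths."""
    M, N = X.shape
    R = X @ X.conj().T / N  # sample covariance
    angles = np.linspace(-90.0, 90.0, 361) if grid is None else grid
    def power(theta):
        a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta)))
        return np.real(a.conj() @ R @ a) / M
    return max(angles, key=power)
```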

  13. Initial application of the maximum likelihood earthquake location method to early warning system in South Korea

    Science.gov (United States)

    Sheen, D. H.; Seong, Y. J.; Park, J. H.; Lim, I. S.

    2015-12-01

    Early this year, the Korea Meteorological Administration (KMA) began to operate the first stage of an earthquake early warning system (EEWS) and to provide early warning information to the general public. The EEWS of the KMA is based on the Earthquake Alarm Systems version 2 (ElarmS-2), developed at the University of California, Berkeley. This method estimates the earthquake location using a simple grid search algorithm that finds the location with the minimum variance of the origin time on successively finer grids. A robust maximum likelihood earthquake location (MAXEL) method for early warning, based on the equal differential times of P arrivals, was recently developed. The MAXEL method has been demonstrated to be successful in determining the event location, even when an outlier is included in a small number of P arrivals. This presentation details the application of the MAXEL method to the EEWS of the KMA, its performance evaluation over seismic networks in South Korea with synthetic data, and a comparison of earthquake location statistics based on ElarmS-2 and MAXEL.
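
    A minimal sketch of the kind of equal-differential-time (EDT) grid search that MAXEL builds on, assuming a uniform velocity model and Gaussian errors; the actual MAXEL likelihood and its robustness to outliers are more sophisticated:

```python
import numpy as np

def edt_grid_locate(stations, t_p, v=6.0):
    """Grid search maximizing a Gaussian equal-differential-time (EDT)
    likelihood. stations: (n, 2) epicentral coordinates in km; t_p: P
    arrival times in s; v: assumed uniform P velocity in km/s."""
    g = np.linspace(-100.0, 100.0, 101)
    best, best_ll = None, -np.inf
    for x in g:
        for y in g:
            tt = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v
            t0 = t_p - tt                  # implied origin times
            d = t0[:, None] - t0[None, :]  # pairwise EDT residuals
            ll = -0.5 * np.sum(d ** 2)     # unit error variance assumed
            if ll > best_ll:
                best, best_ll = (x, y), ll
    return best
```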

  14. Adapting Predictive Models for Cepheid Variable Star Classification Using Linear Regression and Maximum Likelihood

    Science.gov (United States)

    Gupta, Kinjal Dhar; Vilalta, Ricardo; Asadourian, Vicken; Macri, Lucas

    2014-05-01

    We describe an approach to automate the classification of Cepheid variable stars into two subtypes according to their pulsation mode. Automating such classification is relevant to obtaining a precise determination of distances to nearby galaxies, which in addition helps reduce the uncertainty in the current expansion rate of the universe. One main difficulty lies in the compatibility of models trained using different galaxy datasets; a model trained on one dataset may be ineffectual on a testing set drawn from another. A solution to such difficulty is to adapt predictive models across domains; this is necessary when the training and testing sets do not follow the same distribution. The gist of our methodology is to train a predictive model on a nearby galaxy (e.g., the Large Magellanic Cloud), followed by a model-adaptation step to make the model operable on other nearby galaxies. We follow a parametric approach to density estimation by modeling the training data (anchor galaxy) using a mixture of linear models. We then use maximum likelihood to compute the right amount of variable displacement, until the testing data closely overlaps the training data. At that point, the model can be directly used on the testing data (target galaxy).

  15. Maximum likelihood estimator of operational modal analysis for linear time-varying structures in time-frequency domain

    Science.gov (United States)

    Zhou, Si-Da; Heylen, Ward; Sas, Paul; Liu, Li

    2014-05-01

    This paper investigates the problem of modal parameter estimation of time-varying structures under unknown excitation. A time-frequency-domain maximum likelihood estimator of modal parameters for linear time-varying structures is presented by adapting the frequency-domain maximum likelihood estimator to the time-frequency domain. The proposed estimator is parametric; that is, the linear time-varying structures are represented by a time-dependent common-denominator model. To adapt the existing frequency-domain estimator for time-invariant structures to the time-frequency methods for time-varying cases, a hybrid basis function combining orthogonal polynomials and z-domain mapping is presented, which has advantageous numerical conditioning and makes the modal parameters convenient to calculate. A series of numerical examples evaluates and illustrates the performance of the proposed maximum likelihood estimator, and a group of laboratory experiments further validates it.

  16. Applying manifold learning to plotting approximate contour trees.

    Science.gov (United States)

    Takahashi, Shigeo; Fujishiro, Issei; Okada, Masato

    2009-01-01

    A contour tree is a powerful tool for delineating the topological evolution of isosurfaces of a single-valued function, and thus has been frequently used as a means of extracting features from volumes and their time-varying behaviors. Several sophisticated algorithms have been proposed for constructing contour trees, but they often complicate the software implementation, especially for higher-dimensional cases such as time-varying volumes. This paper presents a simple yet effective approach to plotting approximate contour trees in 3D space from a set of scattered samples embedded in a high-dimensional space. Our main idea is to take advantage of manifold learning so that we can elongate the distribution of high-dimensional data samples to embed it into a low-dimensional space while respecting the local proximity of sample points. The contribution of this paper lies in the introduction of new distance metrics to manifold learning, which allows us to reformulate existing algorithms as a variant of currently available dimensionality reduction schemes. Efficient reduction of data sizes together with segmentation capability is also developed to equip our approach with a coarse-to-fine analysis even for large-scale datasets. Examples are provided to demonstrate that our proposed scheme can successfully traverse the features of volumes and their temporal behaviors through the constructed contour trees.
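
    The baseline dimensionality-reduction step (before the paper's custom distance metrics) can be sketched with off-the-shelf Isomap; the scalar field on the embedded samples is then what an approximate contour tree would be traced over. Dataset and parameters are illustrative:

```python
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

# Scattered samples from a 2-manifold embedded in 3D, with a scalar field t.
X, t = make_s_curve(n_samples=2000, random_state=0)

# Unroll the manifold while preserving local proximity of sample points.
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
# An approximate contour tree of t can now be plotted over `emb`.
```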

  17. Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density.

    Science.gov (United States)

    Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A

    2009-06-01

    We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f_0 = exp(φ_0) where φ_0 is a concave function on R. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of H_k, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of φ_0 = log f_0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f_0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.

  18. FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.

    Directory of Open Access Journals (Sweden)

    Maxim Nikolaievich Shokhirev

    Full Text Available The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed, with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/p50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high throughput analysis of dye-dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.

  19. Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.

    Science.gov (United States)

    Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène

    2016-07-01

    Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models, where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood-based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. More generally, it constitutes a step toward a better integration of interspecific interactions in phylogenetic comparative methods.

  20. Predicting bulk permeability using outcrop fracture attributes: The benefits of a Maximum Likelihood Estimator

    Science.gov (United States)

    Rizzo, R. E.; Healy, D.; De Siena, L.

    2015-12-01

    The success of any model prediction is largely dependent on the accuracy with which its parameters are known. In characterising fracture networks in naturally fractured rocks, the main issues relate to the difficulty of accurately up- and down-scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (fracture lengths, apertures, orientations and densities) represents a fundamental step which can aid the estimation of permeability and fluid flow, which are of primary importance in a number of contexts ranging from hydrocarbon production in fractured reservoirs and reservoir stimulation by hydrofracturing, to geothermal energy extraction and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. This work focuses on linking fracture data collected directly from outcrops to permeability estimation and fracture network modelling. Outcrop studies can supplement the limited data inherent to natural fractured systems in the subsurface. The study area is a highly fractured upper Miocene biosiliceous mudstone formation cropping out along the coastline north of Santa Cruz (California, USA). These unique outcrops expose a recently active bitumen-bearing formation representing a geological analogue of a fractured top seal. In order to validate field observations as useful analogues of subsurface reservoirs, we describe a methodology of statistical analysis, based on Maximum Likelihood Estimators, for fitting more accurate probability distributions to fracture attributes. These procedures aim to establish whether the average permeability of a fracture network can be predicted with reduced uncertainty, and whether outcrop measurements of fracture attributes can be used directly to generate statistically identical fracture network models.
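
    The abstract does not name the distributions fitted, but fracture lengths are often modeled with a power law; for that common case the ML estimator of the exponent has a closed form (in the style of Clauset et al., 2009), avoiding the bias of least-squares fits to log-log histograms. A hedged sketch:

```python
import numpy as np

def powerlaw_mle_exponent(lengths, xmin):
    """Closed-form ML estimate of a continuous power-law exponent above a
    lower cutoff xmin: alpha = 1 + n / sum(log(x / xmin))."""
    x = np.asarray(lengths, dtype=float)
    x = x[x >= xmin]  # only data above the cutoff inform the estimate
    return 1.0 + x.size / np.sum(np.log(x / xmin))
```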

  1. Efficient Parameter Estimation of Generalizable Coarse-Grained Protein Force Fields Using Contrastive Divergence: A Maximum Likelihood Approach.

    Science.gov (United States)

    Várnai, Csilla; Burkoff, Nikolas S; Wild, David L

    2013-12-10

    Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They iteratively maximize the likelihood of some experimentally observed data with respect to the model parameters, following the gradient of the logarithm of the likelihood. Here, we employ an ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be transferred between force fields with different energy functions. The software is available at https://sites.google.com/site/crankite/.
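
    A toy sketch of the contrastive-divergence idea for a one-parameter energy model p(x) ∝ exp(−θ·φ(x)): the intractable model-side expectation in the likelihood gradient is approximated by samples from a single MCMC step started at the data (CD-1). The protein force-field setting of the paper is far richer; everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def cd1_gradient(theta, data, phi, step=0.5):
    """CD-1 estimate of d(log L)/d(theta) for p(x) ~ exp(-theta * phi(x)).
    One Metropolis step from the data stands in for model sampling."""
    prop = data + step * rng.standard_normal(data.shape)
    accept = rng.random(data.shape) < np.exp(-theta * (phi(prop) - phi(data)))
    samples = np.where(accept, prop, data)
    # Exact gradient: -E_data[phi] + E_model[phi]; CD-1 swaps in `samples`.
    return -phi(data).mean() + phi(samples).mean()

# Toy usage: phi(x) = x^2 makes theta half the precision of a Gaussian.
data = rng.normal(scale=1.0, size=5000)
theta = 0.1
for _ in range(200):
    theta += 0.05 * cd1_gradient(theta, data, lambda x: x ** 2)
print(theta)  # drifts toward 1/(2*var) = 0.5
```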

  2. Performance comparison of various maximum likelihood nonlinear mixed-effects estimation methods for dose-response models.

    Science.gov (United States)

    Plan, Elodie L; Maloney, Alan; Mentré, France; Karlsson, Mats O; Bertrand, Julie

    2012-09-01

    Estimation methods for nonlinear mixed-effects modelling have considerably improved over the last decades. Nowadays, several algorithms implemented in different software are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid E_max model, with varying sigmoidicity and residual error models. One hundred simulated datasets for each scenario were generated. One hundred individuals with observations at four doses constituted the rich design and at two doses, the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). All approaches started first from initial estimates set to the true values and second, using altered values. Results were examined through relative root mean squared error (RRMSE) of the estimates. With true initial conditions, full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, and then FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of estimation methods available in current software, giving material to modellers to identify suitable approaches based on an accuracy-versus-runtime trade-off.

  3. Computing maximum likelihood estimates of loglinear models from marginal sums with special attention to loglinear item response theory

    NARCIS (Netherlands)

    Kelderman, Henk

    1992-01-01

    In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts.

  4. Selection properties of Type II maximum likelihood (empirical bayes) linear models with individual variance components for predictors

    NARCIS (Netherlands)

    Jamil, T.; Braak, ter C.J.F.

    2012-01-01

    Maximum Likelihood (ML) in the linear model overfits when the number of predictors (M) exceeds the number of objects (N). One possible solution is the relevance vector machine (RVM), which is a form of automatic relevance determination and has gained popularity in the pattern recognition and machine learning communities.

  5. An Efficient UD-Based Algorithm for the Computation of Maximum Likelihood Sensitivity of Continuous-Discrete Systems

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik;

    2016-01-01

    This paper addresses maximum likelihood parameter estimation of continuous-time nonlinear systems with discrete-time measurements. We derive an efficient algorithm for the computation of the log-likelihood function and its gradient, which can be used in gradient-based optimization algorithms...

  6. Performance and Complexity Analysis of Blind FIR Channel Identification Algorithms Based on Deterministic Maximum Likelihood in SIMO Systems

    DEFF Research Database (Denmark)

    De Carvalho, Elisabeth; Omar, Samir; Slock, Dirk

    2013-01-01

    We analyze two algorithms that have been introduced previously for Deterministic Maximum Likelihood (DML) blind estimation of multiple FIR channels. The first one is a modification of the Iterative Quadratic ML (IQML) algorithm. IQML gives biased estimates of the channel and performs poorly at low SNR...

  7. An efficient implementation of maximum likelihood identification of LTI state-space models by local gradient search

    NARCIS (Netherlands)

    Bergboer, N.H; Verdult, V.; Verhaegen, M.H.G.

    2002-01-01

    We present a numerically efficient implementation of the nonlinear least squares and maximum likelihood identification of multivariable linear time-invariant (LTI) state-space models. This implementation is based on a local parameterization of the system and a gradient search in the resulting parameter space...

  8. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  9. Joint and Conditional Maximum Likelihood Estimation for the Rasch Model for Binary Responses. Research Report. RR-04-20

    Science.gov (United States)

    Haberman, Shelby J.

    2004-01-01

    The usefulness of joint and conditional maximum-likelihood is considered for the Rasch model under realistic testing conditions in which the number of examinees is very large and the number of items is relatively large. Conditions for consistency and asymptotic normality are explored, effects of model error are investigated, measures of prediction…

  10. On the Sampling Interpretation of Confidence Intervals and Hypothesis Tests in the Context of Conditional Maximum Likelihood Estimation.

    Science.gov (United States)

    Maris, E.

    1998-01-01

    The sampling interpretation of confidence intervals and hypothesis tests is discussed in the context of conditional maximum likelihood estimation. Three different interpretations are discussed, and it is shown that confidence intervals constructed from the asymptotic distribution under the third sampling scheme discussed are valid for the first…

  11. Approximation Algorithms for Optimal Decision Trees and Adaptive TSP Problems

    CERN Document Server

    Gupta, Anupam; Nagarajan, Viswanath; Ravi, R

    2010-01-01

    We consider the problem of constructing optimal decision trees: given a collection of tests which can disambiguate between a set of $m$ possible diseases, each test having a cost, and the a-priori likelihood of the patient having any particular disease, what is a good adaptive strategy to perform these tests to minimize the expected cost to identify the disease? We settle the approximability of this problem by giving a tight $O(\\log m)$-approximation algorithm. We also consider a more substantial generalization, the Adaptive TSP problem. Given an underlying metric space, a random subset $S$ of cities is drawn from a known distribution, but $S$ is initially unknown to us--we get information about whether any city is in $S$ only when we visit the city in question. What is a good adaptive way of visiting all the cities in the random subset $S$ while minimizing the expected distance traveled? For this problem, we give the first poly-logarithmic approximation, and show that this algorithm is best possible unless w...

  12. Maximum Likelihood Signal Extraction Method Applied to 3.4 years of CoGeNT Data

    CERN Document Server

    Aalseth, C E; Colaresi, J; Collar, J I; Leon, J Diaz; Fast, J E; Fields, N E; Hossbach, T W; Knecht, A; Kos, M S; Marino, M G; Miley, H S; Miller, M L; Orrell, J L; Yocum, K M

    2014-01-01

    CoGeNT has taken data for over 3 years, with 1136 live days of data accumulated as of April 23, 2013. We report on the results of a maximum likelihood analysis to extract any possible dark matter signal present in the collected data. The maximum likelihood signal extraction uses 2-dimensional probability density functions (PDFs) to characterize the anticipated variations in dark matter interaction rates for given observable nuclear recoil energies during differing periods of the Earth's annual orbit around the Sun. Cosmogenic and primordial radioactivity backgrounds are characterized by their energy signatures and in some cases decay half-lives. A third parameterizing variable -- pulse rise-time -- is added to the likelihood analysis to characterize slow rising pulses described in prior analyses. The contribution to each event category is analyzed for various dark matter signal hypotheses including a dark matter standard halo model and a case with free oscillation parameters (i.e., amplitude, period, and phase)...

  13. Estimation of the Unextendable Dead Time Period in a Flow of Physical Events by the Method of Maximum Likelihood

    Science.gov (United States)

    Nezhel'skaya, L. A.

    2016-09-01

    A flow of physical events (photons, electrons, and other elementary particles) is studied. One of the mathematical models of such flows is the modulated MAP flow of events operating under conditions of an unextendable dead time period. It is assumed that the dead time period is an unknown fixed value. The problem of estimating the dead time period from observations of the arrival times of events is solved by the method of maximum likelihood.

  14. Regions of constrained maximum likelihood parameter identifiability. [of discrete-time nonlinear dynamic systems with white measurement errors

    Science.gov (United States)

    Lee, C.-H.; Herget, C. J.

    1976-01-01

    This short paper considers the parameter-identification problem of general discrete-time, nonlinear, multiple input-multiple output dynamic systems with Gaussian white distributed measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.

  15. A maximum likelihood method for studying gene-environment interactions under conditional independence of genotype and exposure.

    Science.gov (United States)

    Cheng, K F

    2006-09-30

    Given the biomedical interest in gene-environment interactions along with the difficulties inherent in gathering genetic data from controls, epidemiologists need methodologies that can increase the precision of estimated interactions while minimizing the genotyping of controls. To achieve this purpose, many epidemiologists have suggested the case-only design. In this paper, we present a maximum likelihood method for making inferences about gene-environment interactions using case-only data. The probability of disease development is described by a logistic risk model, so the interactions are model parameters measuring the departure of the joint effects of exposure and genotype from multiplicative odds ratios. We extend the typical inference method derived under the assumption of independence between genotype and exposure to the more general assumption of conditional independence. Our maximum likelihood method can be applied to analyse both categorical and continuous environmental factors, and it generalizes to inference about gene-gene-environment interactions. Moreover, with case-only data its application reduces to simply fitting a multinomial logistic model. As a consequence, the maximum likelihood estimates of interactions and likelihood ratio tests for hypotheses concerning interactions can be easily computed. The methodology is illustrated through an example based on a study of the joint effects of XRCC1 polymorphisms and smoking on bladder cancer. We also give two simulation studies to show that the proposed method is reliable in finite-sample situations.
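
    The classical special case behind this design is easy to sketch: with binary genotype and exposure, and assuming G-E independence in the source population, the G-E odds ratio computed among cases alone estimates the multiplicative interaction. A minimal sketch (the paper's full ML machinery under conditional independence is more general):

```python
import numpy as np

def case_only_interaction_or(g, e):
    """Case-only estimate of the multiplicative gene-environment
    interaction odds ratio from binary genotype (g) and exposure (e)
    indicators observed in cases only."""
    g, e = np.asarray(g), np.asarray(e)
    n11 = np.sum((g == 1) & (e == 1))
    n10 = np.sum((g == 1) & (e == 0))
    n01 = np.sum((g == 0) & (e == 1))
    n00 = np.sum((g == 0) & (e == 0))
    return (n11 * n00) / (n10 * n01)  # assumes all four cells are non-zero
```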

  16. Rotorcraft Blade Mode Damping Identification from Random Responses Using a Recursive Maximum Likelihood Algorithm.

    Science.gov (United States)

    1982-09-01

    Prior knowledge of the approximate frequency band allows for pretest selection of the bandpass filter passband, which in turn permits... Only scattered nomenclature entries survive from the rest of the record: covariance of the parameter estimate corrected for nonwhite innovations; e, natural logarithm base (e = 2.71828); F_i, generalized force scaled by generalized...; process noise and measurement noise, respectively; σ, standard deviation of the damping parameter estimate; ω, frequency (rad/sec); ω_n, undamped natural frequency.

  17. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    Science.gov (United States)

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.

  18. Sample Size Determination Within the Scope of Conditional Maximum Likelihood Estimation with Special Focus on Testing the Rasch Model.

    Science.gov (United States)

    Draxler, Clemens; Alexandrowicz, Rainer W

    2015-12-01

    This paper refers to the exponential family of probability distributions and the conditional maximum likelihood (CML) theory. It is concerned with the determination of the sample size for three groups of tests of linear hypotheses, known as the fundamental trinity of Wald, score, and likelihood ratio tests. The main practical purpose refers to the special case of tests of the class of Rasch models. The theoretical background is discussed and the formal framework for sample size calculations is provided, given a predetermined deviation from the model to be tested and the probabilities of the errors of the first and second kinds.

  19. Maximum Likelihood Expectation-Maximization Algorithms Applied to Localization and Identification of Radioactive Sources with Recent Coded Mask Gamma Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Lemaire, H.; Barat, E.; Carrel, F.; Dautremer, T.; Dubos, S.; Limousin, O.; Montagu, T.; Normand, S.; Schoepff, V. [CEA, Gif-sur-Yvette, F-91191 (France); Amgarou, K.; Menaa, N. [CANBERRA, 1, rue des Herons, Saint Quentin en Yvelines, F-78182 (France); Angelique, J.-C. [LPC, 6, boulevard du Marechal Juin, F-14050 (France); Patoz, A. [CANBERRA, 10, route de Vauzelles, Loches, F-37600 (France)

    2015-07-01

    In this work, we tested maximum likelihood expectation-maximization (MLEM) algorithms optimized for gamma imaging applications on two recent coded mask gamma cameras. We took advantage of the respective characteristics of the GAMPIX and Caliste HD-based gamma cameras: noise reduction thanks to a mask/anti-mask procedure but limited energy resolution for GAMPIX, and high energy resolution for Caliste HD. One of our short-term perspectives is the testing of MAPEM algorithms that integrate priors, adapted to the gamma imaging context, on the data to be reconstructed. (authors)
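
    The core MLEM update for Poisson count data is compact and worth showing; this is the generic multiplicative iteration (system matrix and counts are placeholders), not the authors' camera-specific implementation:

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """MLEM iteration for a Poisson model y ~ Poisson(A @ x):
    x <- x / (A^T 1) * A^T (y / (A x)).  A: (n_meas, n_pix), y: counts."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                    # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)  # measured / predicted counts
        x = x * (A.T @ ratio) / np.maximum(sens, eps)
    return x
```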

  20. A polylogarithmic approximation algorithm for group Steiner tree problem

    OpenAIRE

    N. Garg; Konjevod, G.; Ravi, R.

    1997-01-01

    The group Steiner tree problem is a generalization of the Steiner tree problem where we are given several subsets (groups) of vertices in a weighted graph, and the goal is to find a minimum-weight connected subgraph containing at least one vertex from each group. The problem was introduced by Reich and Widmayer and finds applications in VLSI design. The group Steiner tree problem generalizes the set covering problem, and is therefore at least as hard. We give a randomized $O(\\log^3 n \\log k)$...

  1. Maximum likelihood estimates with order restrictions on probabilities and odds ratios: A geometric programming approach

    Directory of Open Access Journals (Sweden)

    D. L. Bricker

    1997-01-01

    Full Text Available The problem of assigning cell probabilities to maximize a multinomial likelihood with order restrictions on the probabilities and/or restrictions on the local odds ratios is modeled as a posynomial geometric program (GP), a class of nonlinear optimization problems with a well-developed duality theory and collection of algorithms. (Local odds ratios provide a measure of association between categorical random variables.) A constrained multinomial MLE example from the literature is solved, and the quality of the solution is compared with that obtained by the iterative method of El Barmi and Dykstra, which is based upon Fenchel duality. Exploiting the proximity of the GP model of MLE problems to linear programming (LP) problems, we also describe, as an alternative in the absence of special-purpose GP software, an easily implemented successive LP approximation method for solving this class of MLE problems using one of the readily available LP solvers.
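
    For intuition, the same order-restricted multinomial MLE can be posed directly to a general NLP solver; this stands in for the paper's GP/successive-LP machinery and is only a small-scale sketch:

```python
import numpy as np
from scipy.optimize import minimize

def ordered_multinomial_mle(counts):
    """Multinomial MLE under the order restriction p1 <= ... <= pk,
    solved with SciPy's SLSQP rather than geometric programming."""
    n = np.asarray(counts, dtype=float)
    k = n.size
    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]
    cons += [{"type": "ineq", "fun": lambda p, i=i: p[i + 1] - p[i]}
             for i in range(k - 1)]
    res = minimize(lambda p: -np.sum(n * np.log(np.maximum(p, 1e-12))),
                   x0=np.full(k, 1.0 / k), bounds=[(1e-9, 1.0)] * k,
                   constraints=cons)
    return res.x

print(ordered_multinomial_mle([30, 10, 20]))  # pools violators: ~[1/3]*3
```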

  2. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    Science.gov (United States)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.

  3. Maximum likelihood estimation of the negative binomial dispersion parameter for highly overdispersed data, with applications to infectious diseases.

    Directory of Open Access Journals (Sweden)

    James O Lloyd-Smith

    Full Text Available BACKGROUND: The negative binomial distribution is used commonly throughout biology as a model for overdispersed count data, with attention focused on the negative binomial dispersion parameter, k. A substantial literature exists on the estimation of k, but most attention has focused on datasets that are not highly overdispersed (i.e., those with k ≥ 1), and the accuracy of confidence intervals estimated for k is typically not explored. METHODOLOGY: This article presents a simulation study exploring the bias, precision, and confidence interval coverage of maximum-likelihood estimates of k from highly overdispersed distributions. In addition to exploring small-sample bias on negative binomial estimates, the study addresses estimation from datasets influenced by two types of event under-counting, and from disease transmission data subject to selection bias for successful outbreaks. CONCLUSIONS: Results show that maximum likelihood estimates of k can be biased upward by small sample size or under-reporting of zero-class events, but are not biased downward by any of the factors considered. Confidence intervals estimated from the asymptotic sampling variance tend to exhibit coverage below the nominal level, with overestimates of k comprising the great majority of coverage errors. Estimation from outbreak datasets does not increase the bias of k estimates, but can add significant upward bias to estimates of the mean. Because k varies inversely with the degree of overdispersion, these findings show that overestimation of the degree of overdispersion is very rare for these datasets.
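
    A minimal profile-likelihood sketch of the estimator under study: for the standard NB2 parameterization the MLE of the mean is the sample mean, leaving a one-dimensional search over k (data and bounds are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def nb_dispersion_mle(counts):
    """ML estimate of the NB dispersion k with the mean profiled out
    (mu_hat = sample mean); assumes a non-degenerate positive mean."""
    x = np.asarray(counts, dtype=float)
    mu = x.mean()
    def nll(log_k):
        k = np.exp(log_k)
        return -np.sum(gammaln(x + k) - gammaln(k) - gammaln(x + 1)
                       + k * np.log(k / (k + mu))
                       + x * np.log(mu / (k + mu)))
    res = minimize_scalar(nll, bounds=(-10.0, 10.0), method="bounded")
    return np.exp(res.x)

print(nb_dispersion_mle(np.random.negative_binomial(n=0.5, p=0.1, size=2000)))
```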

  4. Tree-space statistics and approximations for large-scale analysis of anatomical trees

    DEFF Research Database (Denmark)

    Feragen, Aasa; Owen, Megan; Petersen, Jens

    2013-01-01

    Statistical analysis of anatomical trees is hard to perform due to differences in the topological structure of the trees. In this paper we define statistical properties of leaf-labeled anatomical trees with geometric edge attributes by considering the anatomical trees as points in the geometric space of leaf-labeled trees. This tree-space is a geodesic metric space where any two trees are connected by a unique shortest path, which corresponds to a tree deformation. However, tree-space is not a manifold, and the usual strategy of performing statistical analysis in a tangent space and projecting onto tree-space is not available. Using tree-space and its shortest paths, a variety of statistical properties, such as mean, principal component, hypothesis testing and linear discriminant analysis can be defined. For some of these properties it is still an open problem how to compute them; others...

  5. Tree-space statistics and approximations for large-scale analysis of anatomical trees

    DEFF Research Database (Denmark)

    Feragen, Aasa; Owen, Megan; Petersen, Jens;

    2013-01-01

    Statistical analysis of anatomical trees is hard to perform due to differences in the topological structure of the trees. In this paper we define statistical properties of leaf-labeled anatomical trees with geometric edge attributes by considering the anatomical trees as points in the geometric space of leaf-labeled trees. ... healthy ones. Software is available from http://image.diku.dk/aasa/software.php.

  6. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  7. WOMBAT——A tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML)

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear, mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html

  8. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    Energy Technology Data Exchange (ETDEWEB)

    He, Yi; Scheraga, Harold A., E-mail: has5@cornell.edu [Department of Chemistry and Chemical Biology, Cornell University, Ithaca, New York 14853 (United States); Liwo, Adam [Faculty of Chemistry, University of Gdańsk, Wita Stwosza 63, 80-308 Gdańsk (Poland)

    2015-12-28

    Coarse-grained models are useful tools for investigating the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  9. Design optimization of the proximity focusing RICH with dual aerogel radiator using a maximum-likelihood analysis of Cherenkov rings

    Science.gov (United States)

    Pestotnik, R.; Križan, P.; Korpar, S.; Iijima, T.

    2008-09-01

    The use of a sequence of aerogel radiators with different refractive indices in a proximity focusing Cherenkov ring imaging detector has been shown to improve the resolution of the Cherenkov angle. In order to obtain further information on the capabilities of such a detector, a maximum-likelihood analysis has been performed on simulated data, with the simulation being appropriate for the upgraded Belle detector. The results show that by using a sequence of two aerogel layers with different refractive indices, the K/π separation efficiency is improved in the kinematic region above 3 GeV/c. In the low momentum region, the focusing configuration (with n1 and n2 chosen such that the Cherenkov rings from different aerogel layers at 4 GeV/c overlap) shows a better performance than the defocusing one (where the two Cherenkov rings are well separated).

  10. Resolution and signal-to-noise ratio improvement in confocal fluorescence microscopy using array detection and maximum-likelihood processing

    Science.gov (United States)

    Kakade, Rohan; Walker, John G.; Phillips, Andrew J.

    2016-08-01

    Confocal fluorescence microscopy (CFM) is widely used in the biological sciences because of its enhanced 3D resolution that allows image sectioning and removal of out-of-focus blur. This is achieved by rejection of the light outside a detection pinhole in a plane confocal with the illuminated object. In this paper, an alternative detection arrangement is examined in which the entire detection/image plane is recorded using an array detector rather than a pinhole detector. An attempt is then made to recover the object from the whole set of recorded photon array data; here maximum-likelihood estimation has been applied. The recovered object estimates are shown (through computer simulation) to have good resolution, image sectioning and signal-to-noise ratio compared with conventional pinhole CFM images.

  11. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio

    2015-11-10

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that is dependent on the smoothness of the covariance function. Due to the fast decay of the multi-level covariance matrix coefficients, only a small set is computed with a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.
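
    For scale, the plain (non-restricted) Gaussian likelihood that such methods accelerate can be written down directly for a Matérn-3/2 covariance with a nugget; this dense-Cholesky sketch only works for small n, which is exactly the bottleneck the paper's multi-level contrasts address:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def matern32(d, sigma2, rho):
    """Matern covariance with smoothness 3/2."""
    a = np.sqrt(3.0) * d / rho
    return sigma2 * (1.0 + a) * np.exp(-a)

def gp_nll(log_params, coords, z):
    """Negative Gaussian log-likelihood (up to a constant) of a
    zero-mean field with Matern-3/2 covariance plus a nugget tau2."""
    sigma2, rho, tau2 = np.exp(log_params)
    K = matern32(cdist(coords, coords), sigma2, rho) + tau2 * np.eye(len(z))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))
    return 0.5 * z @ alpha + np.log(np.diag(L)).sum()

# fit = minimize(gp_nll, x0=np.zeros(3), args=(coords, z))  # coords: (n, 2)
```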

  12. MAXIMUM LIKELIHOOD SOURCE SEPARATION FOR FINITE IMPULSE RESPONSE MULTIPLE INPUT—MULTIPLE OUTPUT CHANNELS IN THE PRESENCE OF ADDITIVE NOISE

    Institute of Scientific and Technical Information of China (English)

    AaziTakpaya; WeiGang

    2003-01-01

    Blind identification-blind equalization for Finite Impulse Response (FIR) Multiple Input-Multiple Output (MIMO) channels can be reformulated as the problem of blind source separation. It has been shown that blind identification via the decorrelating sub-channels method can recover the input sources. The Blind Identification via Decorrelating Sub-channels (BIDS) algorithm first constructs a set of decorrelators, which decorrelate the output signals of sub-channels, then estimates the channel matrix using the transfer functions of the decorrelators, and finally recovers the input signal using the estimated channel matrix. In this paper, a new approximation of the input source for FIR-MIMO channels based on the maximum likelihood source separation method is proposed. The proposed method outperforms BIDS in the presence of additive white Gaussian noise.

  13. Practical aspects of a maximum likelihood estimation method to extract stability and control derivatives from flight data

    Science.gov (United States)

    Iliff, K. W.; Maine, R. E.

    1976-01-01

    A maximum likelihood estimation method was applied to flight data, and procedures to facilitate the routine analysis of large amounts of flight data are described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.

  14. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    Science.gov (United States)

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].

  15. An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator

    Science.gov (United States)

    Galili, Tal; Meilijson, Isaac

    2016-01-01

    The Rao–Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a “better” one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao–Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao–Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.] PMID:27499547

  16. Maximum Likelihood Fusion Model

    Science.gov (United States)

    2014-08-09

    by the DLR Institute of Robotics and Mechatronics building (dataset courtesy of the University of Bremen). In contrast to the Victoria Park dataset, a camera sensor is...

  17. Terrain Classification on Venus from Maximum-Likelihood Inversion of Parameterized Models of Topography, Gravity, and their Relation

    Science.gov (United States)

    Eggers, G. L.; Lewis, K. W.; Simons, F. J.; Olhede, S.

    2013-12-01

    Venus does not possess a plate-tectonic system like that observed on Earth, and many surface features--such as tesserae and coronae--lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere, requiring a study of topography and gravity, and how they relate. Past studies of topography dealt with mapping and classification of visually observed features, and studies of gravity dealt with inverting the relation between topography and gravity anomalies to recover surface density and elastic thickness in either the space (correlation) or the spectral (admittance, coherence) domain. In the former case, geological features could be delineated but not classified quantitatively. In the latter case, rectangular or circular data windows were used, lacking geological definition. While the estimates of lithospheric strength on this basis were quantitative, they lacked robust error estimates. Here, we remapped the surface into 77 regions visually and qualitatively defined from a combination of Magellan topography, gravity, and radar images. We parameterize the spectral covariance of the observed topography, treating it as a Gaussian process assumed to be stationary over the mapped regions, using a three-parameter isotropic Matérn model, and perform maximum-likelihood based inversions for the parameters. We discuss the parameter distribution across the Venusian surface and across terrain types such as coronae, dorsae, tesserae, and their relation with mean elevation and latitudinal position. We find that the three-parameter model, while mathematically established and applicable to Venus topography, is overparameterized, and thus reduce the results to a two-parameter description of the peak spectral variance and the range-to-half-peak variance (as a function of wavenumber). With this reduction, the clustering of geological region types in two-parameter space becomes promising. Finally, we perform inversions for the joint spectral variance of...

  18. Use of Maximum Likelihood-Mixed Models to select stable reference genes: a case of heat stress response in sheep

    Directory of Open Access Journals (Sweden)

    Salces Judit

    2011-08-01

    Full Text Available Abstract Background Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software, uses a model-based method. The pairwise comparison procedure implemented in geNorm is a simpler procedure but one of the most extensively used. In the present work a statistical approach based on Maximum Likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm softwares. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results A model including gene and treatment as fixed effects, and sample (animal), gene by treatment, gene by sample and treatment by sample interactions as random effects, with heteroskedastic residual variance in gene by treatment levels, was selected using goodness of fit and predictive ability criteria among a variety of models. The Mean Square Error obtained under the selected model was used as an indicator of gene expression stability. Genes top and bottom ranked by the three approaches were similar; however, notable differences were found for the best pair of genes selected by each method and for the remaining genes in the rankings. Differences among the expression values of normalized targets for each statistical approach were also found. Conclusions The optimal statistical properties of Maximum Likelihood estimation joined to mixed model flexibility allow for more accurate estimation of the expression stability of genes under many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental conditions.

  19. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement.

    Science.gov (United States)

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-06-16

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event-related potential (ERP) signal, which represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ from trial to trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
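
    The joint ML delay search can be approximated by alternating between re-estimating the template (the aligned average) and re-aligning each trial against it. The sketch below implements this alternating scheme on synthetic ERP-like data; it is an illustrative approximation, not the paper's exact estimators.

    ```python
    import numpy as np

    def align_trials(trials, max_lag, n_iter=10):
        """Alternating estimate of per-trial integer delays: re-align each trial
        to the current average, then rebuild the average from aligned trials."""
        delays = np.zeros(len(trials), dtype=int)
        lags = np.arange(-max_lag, max_lag + 1)
        for _ in range(n_iter):
            template = np.mean([np.roll(t, -d) for t, d in zip(trials, delays)], axis=0)
            for i, t in enumerate(trials):
                scores = [np.dot(np.roll(t, -l), template) for l in lags]
                delays[i] = lags[int(np.argmax(scores))]
        return delays

    # Synthetic ERP: a Gaussian bump with a random delay per trial plus noise
    rng = np.random.default_rng(2)
    n, n_trials = 256, 40
    erp = np.exp(-0.5 * ((np.arange(n) - 100) / 8.0) ** 2)
    true_delays = rng.integers(-10, 11, n_trials)
    trials = np.array([np.roll(erp, d) + 0.5 * rng.standard_normal(n)
                       for d in true_delays])
    est = align_trials(trials, max_lag=15)
    print("mean absolute delay error:", np.mean(np.abs(est - true_delays)))
    ```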

  20. Galaxy and Mass Assembly (GAMA): The halo mass of galaxy groups from maximum-likelihood weak lensing

    CERN Document Server

    Han, Jiaxin; Frenk, Carlos S; Mandelbaum, Rachel; Norberg, Peder; Schneider, Michael D; Peacock, John A; Jing, Yipeng; Baldry, Ivan; Bland-Hawthorn, Joss; Brough, Sarah; Brown, Michael J I; Loveday, Jon

    2014-01-01

    We present a maximum-likelihood weak lensing analysis of the mass distribution in optically selected spectroscopic Galaxy Groups (G3Cv1) in the Galaxy And Mass Assembly (GAMA) survey, using background Sloan Digital Sky Survey (SDSS) photometric galaxies. The scaling of halo mass, $M_h$, with various group observables is investigated. Our main results are: 1) the measured relations of halo mass with group luminosity, virial volume and central galaxy stellar mass, $M_\star$, agree very well with predictions from mock group catalogues constructed from a GALFORM semi-analytical galaxy formation model implemented in the Millennium $\Lambda$CDM N-body simulation; 2) the measured relations of halo mass with velocity dispersion and projected half-abundance radius show weak tension with mock predictions, hinting at problems in the mock galaxy dynamics and their small-scale distribution; 3) the median $M_h|M_\star$ measured from weak lensing depends more sensitively on the dispersion in $M_\star$ at fixed $M_h$ than it ...

  1. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Directory of Open Access Journals (Sweden)

    Kyungsoo Kim

    2016-06-01

    Full Text Available Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event-related potential (ERP) signal, which represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ from trial to trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.

  2. A topological restricted maximum likelihood (TopREML) approach to regionalize trended runoff signatures in stream networks

    Directory of Open Access Journals (Sweden)

    M. F. Müller

    2015-01-01

    Full Text Available We introduce TopREML as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross-validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. TopREML's ability to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable thanks to remote sensing technology.
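
    A minimal numerical sketch of the BLUP machinery underlying this kind of approach, assuming a plain exponential covariance with fixed (not REML-estimated) parameters and ignoring the network-constrained covariance of flow-connected basins that distinguishes TopREML:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    xy = rng.uniform(0, 100, size=(15, 2))             # hypothetical gauged sites
    y = 5 + 0.02 * xy[:, 0] + rng.standard_normal(15)  # runoff signature with a trend
    target = np.array([50.0, 50.0])                    # ungauged site

    def expcov(d, sill=1.0, rng_par=30.0):             # assumed covariance parameters
        return sill * np.exp(-d / rng_par)

    D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = expcov(D) + 1e-6 * np.eye(15)
    c0 = expcov(np.linalg.norm(xy - target, axis=-1))
    X = np.column_stack([np.ones(15), xy[:, 0]])       # covariates: intercept, easting
    x0 = np.array([1.0, target[0]])

    Ci = np.linalg.inv(C)
    beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)  # GLS estimate of the trend
    pred = x0 @ beta + c0 @ Ci @ (y - X @ beta)         # universal-kriging BLUP
    print("BLUP at the ungauged site:", round(float(pred), 3))
    ```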

  3. Adaptive Population Sizing Genetic Algorithm Assisted Maximum Likelihood Detection of OFDM Symbols in the Presence of Nonlinear Distortions

    Directory of Open Access Journals (Sweden)

    K. Seshadri Sastry

    2013-06-01

    Full Text Available This paper presents Adaptive Population Sizing Genetic Algorithm (AGA) assisted Maximum Likelihood (ML) estimation of Orthogonal Frequency Division Multiplexing (OFDM) symbols in the presence of nonlinear distortions. The proposed algorithm is simulated in MATLAB and compared with existing estimation algorithms such as iterative DAR, decision feedback clipping removal, iteration decoder, Genetic Algorithm (GA) assisted ML estimation and theoretical ML estimation. Simulation results show that the performance of the proposed AGA-assisted ML estimation algorithm is superior to that of the existing estimation algorithms. Further, the computational complexity of GA-assisted ML estimation increases with the number of generations and/or the population size; in the proposed AGA-assisted ML estimation algorithm the population size is adaptive and depends on the best fitness. The population size in GA-assisted ML estimation is fixed, and a sufficiently large population is taken to ensure good performance of the algorithm, whereas in the proposed AGA-assisted ML estimation algorithm the population size changes as required in an adaptive manner, thus reducing the complexity of the algorithm.
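
    A toy version of the idea, assuming a diagonal (per-subcarrier) channel, no explicit nonlinear-distortion model, and a simple stall-based rule for resizing the population; all of these are placeholders rather than the paper's actual scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    N = 16                                           # subcarriers
    true = rng.integers(0, 4, N)                     # transmitted symbol indices
    H = np.diag(rng.standard_normal(N) + 1j * rng.standard_normal(N))
    y = H @ QPSK[true] + 0.3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

    def fitness(pop):                                # negative ML metric ||y - H s||^2
        return -np.sum(np.abs(y - QPSK[pop] @ H.T) ** 2, axis=1)

    pop_size, pop, prev_best = 40, rng.integers(0, 4, (40, N)), -np.inf
    for gen in range(300):
        f = fitness(pop)
        i = int(np.argmax(f))
        best, best_f = pop[i].copy(), f[i]
        # adaptive sizing: grow the population when the best fitness stalls
        pop_size = min(80, pop_size + 4) if best_f <= prev_best else max(16, pop_size - 2)
        prev_best = max(prev_best, best_f)
        parents = pop[np.argsort(f)[::-1][: max(2, len(pop) // 2)]]
        # gene-wise uniform crossover among parents, then random mutation
        children = parents[rng.integers(0, len(parents), (pop_size - 1, N)), np.arange(N)]
        mutate = rng.random(children.shape) < 0.05
        children[mutate] = rng.integers(0, 4, int(mutate.sum()))
        pop = np.vstack([best[None, :], children])   # elitism: keep the best
    print("symbol errors:", int(np.sum(pop[0] != true)))
    ```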

  4. FPGA-Based Implementation of All-Digital QPSK Carrier Recovery Loop Combining Costas Loop and Maximum Likelihood Frequency Estimator

    Directory of Open Access Journals (Sweden)

    Kaiyu Wang

    2014-01-01

    Full Text Available This paper presents an efficient all-digital carrier recovery loop (ADCRL) for quadrature phase shift keying (QPSK). The ADCRL combines a classic closed-loop carrier recovery circuit, the all-digital Costas loop (ADCOL), with a frequency feedforward loop, the maximum likelihood frequency estimator (MLFE), so as to make the best use of the advantages of the two types of carrier recovery loops and obtain a more robust performance in the procedure of carrier recovery. Besides, considering that, for the MLFE, the accurate estimation of the frequency offset is associated with the linear characteristic of its frequency discriminator (FD), the Coordinate Rotation Digital Computer (CORDIC) algorithm is introduced into the FD based on the MLFE to linearly unwrap the phase difference. The frequency offset contained in the unwrapped phase difference is estimated by the MLFE, implemented using only shift and multiply-accumulate units, to assist the ADCOL to lock quickly and precisely. The joint simulation results of ModelSim and MATLAB show that the performance of the proposed ADCRL in lock-in time and range is superior to that of the ADCOL. On the other hand, a systematic design procedure based on FPGA for the proposed ADCRL is also presented.
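
    The MLFE core, estimating the frequency offset of a noisy complex exponential by maximizing the ML metric (the periodogram), can be sketched off-line as below. The unmodulated tone is an assumption for simplicity; the paper's FPGA implementation with a CORDIC-linearized discriminator works quite differently.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, f_true = 512, 0.0123                 # normalized frequency (cycles/sample)
    t = np.arange(n)
    r = np.exp(2j * np.pi * f_true * t) + 0.2 * (rng.standard_normal(n)
                                                 + 1j * rng.standard_normal(n))

    # Coarse ML estimate: maximize the periodogram on a zero-padded FFT grid.
    pad = 8 * n
    f_hat = np.fft.fftfreq(pad)[int(np.argmax(np.abs(np.fft.fft(r, pad)) ** 2))]

    # Fine search around the coarse peak by direct evaluation of the ML metric.
    grid = f_hat + np.linspace(-1.0 / pad, 1.0 / pad, 201)
    metric = [abs(np.vdot(np.exp(2j * np.pi * f * t), r)) for f in grid]
    f_hat = grid[int(np.argmax(metric))]
    print(f"true {f_true:.6f}, ML estimate {f_hat:.6f}")
    ```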

  5. Maximum-likelihood method identifies meiotic restitution mechanism from heterozygosity transmission of centromeric loci: application in citrus

    Science.gov (United States)

    Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick

    2015-01-01

    Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation leading to unreduced gametes as its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism both at the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulated data demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at the individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At the population level, SDR was the predominant mechanism for the 19 parental mandarins. PMID:25894579
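
    The heart of the method is a likelihood-ratio comparison between the two restitution hypotheses. The sketch below is schematic, with hypothetical gamete counts and placeholder transmission probabilities (FDR assumed to restitute centromeric heterozygosity with probability about 1 - r and SDR with probability about r, r being the marker-centromere recombination rate); the paper's meiotic model is more careful.

    ```python
    import numpy as np
    from scipy.stats import binom

    # Hypothetical data: per centromeric marker, how many of n scored unreduced
    # gametes retained the parental heterozygosity.
    het_counts = {"m1": (3, 50), "m2": (6, 50), "m3": (2, 50)}
    rec = {"m1": 0.02, "m2": 0.05, "m3": 0.03}  # assumed recombination rates

    def loglik(mechanism):
        ll = 0.0
        for m, (het, n) in het_counts.items():
            # Placeholder transmission model: FDR keeps centromeric
            # heterozygosity (prob ~ 1 - r); SDR loses it (prob ~ r).
            p = 1 - rec[m] if mechanism == "FDR" else rec[m]
            ll += binom.logpmf(het, n, p)
        return ll

    lr10 = (loglik("SDR") - loglik("FDR")) / np.log(10)
    print(f"log10 likelihood ratio in favour of SDR: {lr10:.1f}")
    ```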

  6. Maximum-likelihood method identifies meiotic restitution mechanism from heterozygosity transmission of centromeric loci: application in citrus.

    Science.gov (United States)

    Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick

    2015-04-20

    Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation leading to unreduced gametes as its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism both at the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulated data demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at the individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At the population level, SDR was the predominant mechanism for the 19 parental mandarins.

  7. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Science.gov (United States)

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event-related potential (ERP) signal, which represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ from trial to trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267

  8. Constrained Maximum Likelihood Estimation for Model Calibration Using Summary-level Information from External Big Data Sources.

    Science.gov (United States)

    Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J

    2016-03-01

    Information from various public and private data sources of extremely large sample sizes is now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an "internal" study while utilizing summary-level information, such as information on parameters for reduced models, from an "external" big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature.
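
    A minimal sketch of the constraint idea, assuming a full logistic model with two covariates in the internal study and externally reported coefficients for a reduced logistic model in one covariate. The constraint forces the reduced-model score, evaluated at the external coefficients over the internal covariate sample, to vanish under the full model; the names and data below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(7)
    n = 500
    x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
    yobs = rng.binomial(1, expit(-0.5 + 1.0 * x1 + 0.8 * x2))

    theta_ext = np.array([-0.4, 1.2])        # reduced-model (intercept, x1)
                                             # coefficients from the external source

    def nll(beta):                           # internal full-model likelihood
        eta = beta[0] + beta[1] * x1 + beta[2] * x2
        return -np.sum(yobs * eta - np.log1p(np.exp(eta)))

    def calib(beta):
        # Reduced-model score at theta_ext must be zero under the full model,
        # with the covariate distribution taken from the internal sample.
        resid = (expit(beta[0] + beta[1] * x1 + beta[2] * x2)
                 - expit(theta_ext[0] + theta_ext[1] * x1))
        return np.array([resid.sum(), (resid * x1).sum()]) / n

    res = minimize(nll, x0=np.zeros(3), method="SLSQP",
                   constraints=[{"type": "eq", "fun": calib}])
    print("constrained MLE:", np.round(res.x, 3))
    ```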

  9. Error Rates of the Maximum-Likelihood Detector for Arbitrary Constellations: Convex/Concave Behavior and Applications

    CERN Document Server

    Loyka, Sergey; Gagnon, Francois

    2009-01-01

    Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...

  10. BER and optimal power allocation for amplify-and-forward relaying using pilot-aided maximum likelihood estimation

    KAUST Repository

    Wang, Kezhi

    2014-10-01

    Bit error rate (BER) and outage probability for amplify-and-forward (AF) relaying systems with two different channel estimation methods, disintegrated channel estimation and cascaded channel estimation, using pilot-aided maximum likelihood method in slowly fading Rayleigh channels are derived. Based on the BERs, the optimal values of pilot power under the total transmitting power constraints at the source and the optimal values of pilot power under the total transmitting power constraints at the relay are obtained, separately. Moreover, the optimal power allocation between the pilot power at the source, the pilot power at the relay, the data power at the source and the data power at the relay are obtained when their total transmitting power is fixed. Numerical results show that the derived BER expressions match with the simulation results. They also show that the proposed systems with optimal power allocation outperform the conventional systems without power allocation under otherwise identical conditions. In some cases, the gain could be as large as several dB in effective signal-to-noise ratio.

  11. Experimental demonstration of a digital maximum likelihood based feedforward carrier recovery scheme for phase-modulated radio-over-fibre links

    DEFF Research Database (Denmark)

    Guerrero Gonzalez, Neil; Zibar, Darko; Yu, Xianbin

    2008-01-01

    A maximum likelihood based feedforward RF carrier synchronization scheme is proposed for a coherently detected phase-modulated radio-over-fiber link. Error-free demodulation of a 100 Mbit/s QPSK modulated signal is experimentally demonstrated after 25 km of fiber transmission.

  12. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    Science.gov (United States)

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-07

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
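
    The MLEM update at the heart of the method is the standard multiplicative correction. A minimal sketch, with a random nonnegative matrix standing in for the ray-traced system model (an assumption for illustration) and Poisson-noised fluence data:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_src, n_det = 32, 64
    A = np.abs(rng.standard_normal((n_det, n_src)))   # stand-in system matrix
    truth = np.exp(-0.5 * ((np.arange(n_src) - 16) / 3.0) ** 2)  # source profile
    y = rng.poisson(A @ truth * 50) / 50.0            # noisy fluence measurements

    x = np.ones(n_src)                                # flat initial estimate
    sens = A.sum(axis=0)                              # sensitivity (column sums)
    for _ in range(200):
        proj = A @ x                                  # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens   # MLEM update
    print("correlation with truth:", round(float(np.corrcoef(x, truth)[0, 1]), 3))
    ```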

  13. Maximum Likelihood SNR Estimation of Linearly-Modulated Signals Over Time-Varying Flat-Fading SIMO Channels

    Science.gov (United States)

    Bellili, Faouzi; Meftehi, Rabii; Affes, Sofiene; Stephenne, Alex

    2015-01-01

    In this paper, we tackle for the first time the problem of maximum likelihood (ML) estimation of the signal-to-noise ratio (SNR) parameter over time-varying single-input multiple-output (SIMO) channels. Both the data-aided (DA) and the non-data-aided (NDA) schemes are investigated. Unlike classical techniques where the channel is assumed to be slowly time-varying and, therefore, considered as constant over the entire observation period, we address the more challenging problem of instantaneous (i.e., short-term or local) SNR estimation over fast time-varying channels. The channel variations are tracked locally using a polynomial-in-time expansion. First, we derive closed-form expressions for the DA ML estimator and its bias. The latter is subsequently subtracted in order to obtain a new unbiased DA estimator whose variance and the corresponding Cramér-Rao lower bound (CRLB) are also derived in closed form. Due to the extreme nonlinearity of the log-likelihood function (LLF) in the NDA case, we resort to the expectation-maximization (EM) technique to iteratively obtain the exact NDA ML SNR estimates within very few iterations. Most remarkably, the new EM-based NDA estimator is applicable to any linearly-modulated signal and provides sufficiently accurate soft estimates (i.e., soft detection) for each of the unknown transmitted symbols. Therefore, hard detection can be easily embedded in the iteration loop in order to improve its performance at low to moderate SNR levels. We show by extensive computer simulations that the new estimators are able to accurately estimate the instantaneous per-antenna SNRs as they coincide with the DA CRLB over a wide range of practical SNRs.
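
    For the simplest DA special case, a channel held constant over the observation window (the paper's interest is precisely the harder, polynomially time-varying case), the ML estimate reduces to a least-squares channel fit followed by a residual noise-power estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n = 200
    pilots = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), n)
    h, snr_true = 0.8 * np.exp(1j * 0.3), 10.0       # flat channel, linear SNR
    sigma2 = np.abs(h) ** 2 / snr_true
    y = h * pilots + np.sqrt(sigma2 / 2) * (rng.standard_normal(n)
                                            + 1j * rng.standard_normal(n))

    h_hat = np.vdot(pilots, y) / np.vdot(pilots, pilots)   # LS/ML channel estimate
    sigma2_hat = np.mean(np.abs(y - h_hat * pilots) ** 2)  # residual noise power
    snr_hat = np.abs(h_hat) ** 2 * np.mean(np.abs(pilots) ** 2) / sigma2_hat
    print(f"true SNR {10 * np.log10(snr_true):.1f} dB, "
          f"estimate {10 * np.log10(snr_hat):.1f} dB")
    ```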

  14. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Directory of Open Access Journals (Sweden)

    Kaarina Matilainen

    Full Text Available Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  15. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast

    Science.gov (United States)

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M.

    2016-04-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.

  16. Improving soil moisture profile reconstruction from ground-penetrating radar data: a maximum likelihood ensemble filter approach

    Directory of Open Access Journals (Sweden)

    A. P. Tran

    2013-07-01

    Full Text Available The vertical profile of shallow unsaturated zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR) data. A hydrodynamic model is used to propagate the system state in time, and a radar electromagnetic model and petrophysical relationships to link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows us to use the information of the whole soil moisture profile for the assimilation. We validated our approach through a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of the soil type on the data assimilation by considering 3 soil types, namely, loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Decreasing the update interval from 60 down to 10 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand, but significantly improves it for the clay soil. The proposed approach appears to be promising to improve real-time prediction of soil moisture profiles as well as to provide effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.

  17. On the identification problem of the maximum likelihood algorithm

    Institute of Scientific and Technical Information of China (English)

    徐敏

    2013-01-01

    The controlled objects of traditional control systems are mostly linear time-invariant systems. With the rapid development of industry, control systems face great changes: controlled objects now exhibit nonlinearity, time-varying behavior, time delays and external disturbances, and the system model is not easily determined. We must therefore first identify the system and determine its model before it can be controlled effectively. This article uses the maximum likelihood algorithm to identify nonlinear systems, then gives the identification method and its steps, and finally presents simulation results of the identification.

  18. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast.

    Science.gov (United States)

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M

    2016-04-21

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.

  19. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Science.gov (United States)

    Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin

    2013-01-01

    Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  20. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  1. Approximation Algorithms for Optimization Problems in Graphs with Superlogarithmic Treewidth

    DEFF Research Database (Denmark)

    Czumaj, Artur; Halldórsson, Magnús Már; Lingas, Andrzej

    2005-01-01

    We present a generic scheme for approximating NP-hard problems on graphs of treewidth k = ω(log n). When a tree-decomposition of width ℓ is given, the scheme typically yields an ℓ/log n approximation factor; otherwise, an extra log k factor is incurred. Our method applies to several basic subgraph and partitioning problems, including the maximum independent set problem.

  2. Maximum likelihood channel estimation based on a nonlinear filter

    Institute of Scientific and Technical Information of China (English)

    沈壁川; 郑建宏; 申敏

    2008-01-01

    For a long finite channel impulse response, accurate maximum likelihood channel estimation is computationally expensive due to the high dimension of the parameter space, and approximate approaches are usually adopted. By exploiting the noise suppression and signal extraction of the nonlinear Teager-Kaiser filter, a likelihood ratio for channel estimation is defined to represent the probability distribution of the channel parameters. Maximization of this likelihood function first searches the extrema of the path delays and then obtains the complex attenuations. Computer simulation is conducted and the results show performance improvements in joint detection compared to the non-likelihood approach.

  3. Modeling the impact of hepatitis C viral clearance on end-stage liver disease in an HIV co-infected cohort with targeted maximum likelihood estimation.

    Science.gov (United States)

    Schnitzer, Mireille E; Moodie, Erica E M; van der Laan, Mark J; Platt, Robert W; Klein, Marina B

    2014-03-01

    Despite modern effective HIV treatment, hepatitis C virus (HCV) co-infection is associated with a high risk of progression to end-stage liver disease (ESLD) which has emerged as the primary cause of death in this population. Clinical interest lies in determining the impact of clearance of HCV on risk for ESLD. In this case study, we examine whether HCV clearance affects risk of ESLD using data from the multicenter Canadian Co-infection Cohort Study. Complications in this survival analysis arise from the time-dependent nature of the data, the presence of baseline confounders, loss to follow-up, and confounders that change over time, all of which can obscure the causal effect of interest. Additional challenges included non-censoring variable missingness and event sparsity. In order to efficiently estimate the ESLD-free survival probabilities under a specific history of HCV clearance, we demonstrate the double-robust and semiparametric efficient method of Targeted Maximum Likelihood Estimation (TMLE). Marginal structural models (MSM) can be used to model the effect of viral clearance (expressed as a hazard ratio) on ESLD-free survival and we demonstrate a way to estimate the parameters of a logistic model for the hazard function with TMLE. We show the theoretical derivation of the efficient influence curves for the parameters of two different MSMs and how they can be used to produce variance approximations for parameter estimates. Finally, the data analysis evaluating the impact of HCV on ESLD was undertaken using multiple imputations to account for the non-monotone missing data.

  4. On a Berry-Esseen type bound for the maximum likelihood estimator of a parameter for some stochastic partial differential equations

    Directory of Open Access Journals (Sweden)

    M. N. Mishra

    2004-01-01

    Full Text Available This paper is concerned with the study of the rate of convergence of the distribution of the maximum likelihood estimator of a parameter appearing linearly in the drift coefficients of two types of stochastic partial differential equations (SPDEs).

  5. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    Science.gov (United States)

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  6. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    Science.gov (United States)

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  7. A Combined Maximum-likelihood Analysis of the High-energy Astrophysical Neutrino Flux Measured with IceCube

    Science.gov (United States)

    Aartsen, M. G.; Abraham, K.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Altmann, D.; Anderson, T.; Archinger, M.; Arguelles, C.; Arlen, T. C.; Auffenberg, J.; Bai, X.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; Beiser, E.; BenZvi, S.; Berghaus, P.; Berley, D.; Bernardini, E.; Bernhard, A.; Besson, D. Z.; Binder, G.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Brayeur, L.; Bretz, H.-P.; Brown, A. M.; Buzinsky, N.; Casey, J.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Christy, B.; Clark, K.; Classen, L.; Coenders, S.; Cowen, D. F.; Cruz Silva, A. H.; Daughhetee, J.; Davis, J. C.; Day, M.; de André, J. P. A. M.; De Clercq, C.; Dembinski, H.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; Dumm, J. P.; Dunkman, M.; Eagan, R.; Eberhardt, B.; Ehrhardt, T.; Eichmann, B.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fahey, S.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Felde, J.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Flis, S.; Fuchs, T.; Gaisser, T. K.; Gaior, R.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Gier, D.; Gladstone, L.; Glagla, M.; Glüsenkamp, T.; Goldschmidt, A.; Golup, G.; Gonzalez, J. G.; Goodman, J. A.; Góra, D.; Grant, D.; Gretskov, P.; Groh, J. C.; Gross, A.; Ha, C.; Haack, C.; Haj Ismail, A.; Hallgren, A.; Halzen, F.; Hansmann, B.; Hanson, K.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Hellwig, D.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Holzapfel, K.; Homeier, A.; Hoshina, K.; Huang, F.; Huber, M.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; In, S.; Ishihara, A.; Jacobi, E.; Japaridze, G. S.; Jero, K.; Jurkovic, M.; Kaminsky, B.; Kappes, A.; Karg, T.; Karle, A.; Kauer, M.; Keivani, A.; Kelley, J. L.; Kemp, J.; Kheirandish, A.; Kiryluk, J.; Kläs, J.; Klein, S. R.; Kohnen, G.; Kolanoski, H.; Konietz, R.; Koob, A.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krings, K.; Kroll, G.; Kroll, M.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Lanfranchi, J. L.; Larson, M. J.; Lesiak-Bzdak, M.; Leuermann, M.; Leuner, J.; Lünemann, J.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Maruyama, R.; Mase, K.; Matis, H. S.; Maunu, R.; McNally, F.; Meagher, K.; Medici, M.; Meli, A.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Middell, E.; Middlemas, E.; Miller, J.; Mohrmann, L.; Montaruli, T.; Morse, R.; Nahnhauer, R.; Naumann, U.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke, A.; Olivas, A.; Omairat, A.; O'Murchadha, A.; Palczewski, T.; Paul, L.; Pepper, J. A.; Pérez de los Heros, C.; Pfendner, C.; Pieloth, D.; Pinat, E.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Pütz, J.; Quinnan, M.; Rädel, L.; Rameez, M.; Rawlins, K.; Redl, P.; Reimann, R.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Richter, S.; Riedel, B.; Robertson, S.; Rongen, M.; Rott, C.; Ruhe, T.; Ruzybayev, B.; Ryckbosch, D.; Saba, S. M.; Sabbatini, L.; Sander, H.-G.; Sandrock, A.; Sandroos, J.; Sarkar, S.; Schatto, K.; Scheriau, F.; Schimp, M.; Schmidt, T.; Schmitz, M.; Schoenen, S.; Schöneberg, S.; Schönwald, A.; Schukraft, A.; Schulte, L.; Seckel, D.; Seunarine, S.; Shanidze, R.; Smith, M. W. E.; Soldin, D.; Spiczak, G. M.; Spiering, C.; Stahlberg, M.; Stamatikos, M.; Stanev, T.; Stanisha, N. A.; Stasik, A.; Stezelberger, T.; Stokstad, R. G.; Stössl, A.; Strahler, E. A.; Ström, R.; Strotjohann, N. 
L.; Sullivan, G. W.; Sutherland, M.; Taavola, H.; Taboada, I.; Ter-Antonyan, S.; Terliuk, A.; Tešić, G.; Tilav, S.; Toale, P. A.; Tobin, M. N.; Tosi, D.; Tselengidou, M.; Unger, E.; Usner, M.; Vallecorsa, S.; Vandenbroucke, J.; van Eijndhoven, N.; Vanheule, S.; van Santen, J.; Veenkamp, J.; Vehring, M.; Voge, M.; Vraeghe, M.; Walck, C.; Wallace, A.; Wallraff, M.; Wandkowsky, N.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whelan, B. J.; Whitehorn, N.; Wichary, C.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Zoll, M.; IceCube Collaboration

    2015-08-01

    Evidence for an extraterrestrial flux of high-energy neutrinos has now been found in multiple searches with the IceCube detector. The first solid evidence was provided by a search for neutrino events with deposited energies ≳ 30 TeV and interaction vertices inside the instrumented volume. Recent analyses suggest that the extraterrestrial flux extends to lower energies and is also visible with throughgoing, νμ-induced tracks from the Northern Hemisphere. Here, we combine the results from six different IceCube searches for astrophysical neutrinos in a maximum-likelihood analysis. The combined event sample features high-statistics samples of shower-like and track-like events. The data are fit in up to three observables: energy, zenith angle, and event topology. Assuming the astrophysical neutrino flux to be isotropic and to consist of equal flavors at Earth, the all-flavor spectrum with neutrino energies between 25 TeV and 2.8 PeV is well described by an unbroken power law with best-fit spectral index -2.50 ± 0.09 and a flux at 100 TeV of $(6.7^{+1.1}_{-1.2}) \times 10^{-18}\,\mathrm{GeV}^{-1}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}\,\mathrm{cm}^{-2}$. Under the same assumptions, an unbroken power law with index -2 is disfavored with a significance of 3.8σ (p = 0.0066%) with respect to the best fit. This significance is reduced to 2.1σ (p = 1.7%) if instead we compare the best fit to a spectrum with index -2 that has an exponential cut-off at high energies. Allowing the electron-neutrino flux to deviate from the other two flavors, we find a νe fraction of 0.18 ± 0.11 at Earth. The sole production of electron neutrinos, which would be characteristic of neutron-decay-dominated sources, is rejected with a significance of 3.6σ (p = 0.014%).
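
    For a single unbroken power law above a threshold energy, the ML estimate of the spectral index has a familiar closed form, which the sketch below verifies on synthetic events. This is only the textbook special case; the IceCube fit is a forward-folded multi-sample likelihood in energy, zenith angle, and topology.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    e_min, gamma_true = 25.0, 2.5            # threshold (TeV) and spectral index
    u = rng.random(5000)
    energies = e_min * (1 - u) ** (-1.0 / (gamma_true - 1.0))  # inverse-CDF sampling

    # Closed-form ML index for a power law above a threshold:
    # gamma_hat = 1 + n / sum_i ln(E_i / e_min)
    gamma_hat = 1.0 + len(energies) / np.sum(np.log(energies / e_min))
    print(f"true index {gamma_true}, ML estimate {gamma_hat:.3f}")
    ```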

  8. A maximum likelihood QTL analysis reveals common genome regions controlling resistance to Salmonella colonization and carrier-state

    Directory of Open Access Journals (Sweden)

    Thanh-Son Tran

    2012-05-01

    Full Text Available Abstract Background The serovars Enteritidis and Typhimurium of the Gram-negative bacterium Salmonella enterica are significant causes of human food poisoning. Fowl carrying these bacteria often show no clinical disease, with detection only established post-mortem. Increased resistance to the carrier state in commercial poultry could be a way to improve food safety by reducing the spread of these bacteria in poultry flocks. Previous studies identified QTLs for both resistance to carrier state and resistance to Salmonella colonization in the same White Leghorn inbred lines. Until now, none of the QTLs identified was common to the two types of resistance. All these analyses were performed using the F2 inbred or backcross option of the QTLExpress software based on linear regression. In the present study, QTL analysis was achieved using Maximum Likelihood with QTLMap software, in order to test the effect of the QTL analysis method on QTL detection. We analyzed the same phenotypic and genotypic data as those used in previous studies, which were collected on 378 animals genotyped with 480 genome-wide SNP markers. To enrich these data, we added eleven SNP markers located within QTLs controlling resistance to colonization and we looked for potential candidate genes co-localizing with QTLs. Results In our case the QTL analysis method had an important impact on QTL detection. We were able to identify new genomic regions controlling resistance to carrier-state, in particular by testing the existence of two segregating QTLs. But some of the previously identified QTLs were not confirmed. Interestingly, two QTLs were detected on chromosomes 2 and 3, close to the locations of the major QTLs controlling resistance to colonization and to candidate genes involved in the immune response identified in other, independent studies. Conclusions Due to the lack of stability of the QTLs detected, we suggest that interesting regions for further studies are those that were

  9. Approximation Algorithm for Bottleneck Steiner Tree Problem in the Euclidean Plane

    Institute of Scientific and Technical Information of China (English)

    Zi-Mao Li; Da-Ming Zhu; Shao-Han Ma

    2004-01-01

    A special case of the bottleneck Steiner tree problem in the Euclidean plane was considered in this paper. The problem has applications in the design of wireless communication networks, multifacility location, VLSI routing and network routing. For the special case, which requires that there should be no edge connecting any two Steiner points in the optimal solution, a 3-restricted Steiner tree can be found, indicating the existence of the performance ratio √2. In this paper, the special case of the problem is proved to be NP-hard and cannot be approximated within ratio √2. First a simple polynomial time approximation algorithm with performance ratio √3 is presented. Then based on this algorithm and the existence of the 3-restricted Steiner tree, a polynomial time approximation algorithm with performance ratio √2 + ε is proposed, for any ε > 0.

  10. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    Science.gov (United States)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  11. An Exact Maximum Likelihood Error Registration Algorithm for Radar Network

    Institute of Scientific and Technical Information of China (English)

    丰昌政; 薛强

    2012-01-01

    To address the error registration problem of the least squares method and the Kalman filter method in radar network systems, an exact maximum likelihood error registration algorithm for radar networks is proposed. A maximum likelihood registration algorithm based on circular polar projection is adopted: using the geometric relationships among the radar stations, the systematic errors of the radar network are estimated by a maximum likelihood hybrid Gauss-Newton iterative method, and a simulation is carried out. The simulation results show that the registration method has good consistency and can be used for error registration in multi-radar networks.

  12. Approximate K-Nearest Neighbour Based Spatial Clustering Using K-D Tree

    Directory of Open Access Journals (Sweden)

    Mohammed Otair

    2013-03-01

    Full Text Available Different spatial objects that vary in their characteristics, such as in molecular biology and geography, are presented in spatial areas. Methods to organize, manage, and maintain those objects in a structured manner are required. Data mining has produced different techniques to meet these requirements. Among the major tasks of data mining, the most commonly used one is clustering. Data sets within the same cluster share common features that give each cluster its characteristics. In this paper, an implementation of an approximate kNN-based spatial clustering algorithm using the k-d tree is proposed. The major contribution of this research is the use of the k-d tree data structure for spatial clustering, and a comparison of its performance to the brute-force approach. The results of the work performed in this paper revealed better performance using the k-d tree, compared to the traditional brute-force approach.
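
    A minimal sketch of the kNN query that the clustering builds on, comparing scipy's k-d tree against a brute-force search on hypothetical 2-D points (the clustering step itself is omitted):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(11)
    pts = rng.uniform(0, 100, size=(5000, 2))   # hypothetical spatial objects
    k = 6                                       # each point plus 5 nearest neighbours

    dist_kd, idx_kd = cKDTree(pts).query(pts, k=k)   # k-d tree kNN query

    # Brute force on the first 100 points only (O(n^2) memory otherwise)
    d2 = np.sum((pts[:100, None, :] - pts[None, :, :]) ** 2, axis=-1)
    idx_bf = np.argsort(d2, axis=1)[:, :k]

    print("k-d tree agrees with brute force:",
          bool(np.all(np.sort(idx_kd[:100], axis=1) == np.sort(idx_bf, axis=1))))
    ```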

  13. Beyond the locally tree-like approximation for percolation on real networks

    CERN Document Server

    Radicchi, Filippo

    2016-01-01

    Theoretical attempts proposed so far to describe ordinary percolation processes on real-world networks rely on the locally tree-like ansatz. Such an approximation, however, holds only to a limited extent, as real graphs are often characterized by high frequencies of short loops. We present here a theoretical framework able to overcome such a limitation for the case of site percolation. Our method is based on a message passing algorithm that discounts redundant paths along triangles in the graph. We systematically test the approach on 98 real-world graphs and on synthetic networks. We find excellent accuracy in the prediction of the whole percolation diagram, with significant improvement with respect to the prediction obtained under the locally tree-like approximation. Residual discrepancies between theory and simulations do not depend on clustering and can be attributed to the presence of loops longer than three edges. We also present a method to account for clustering in bond percolation, but the improvement...

  14. Hierarchical approximate policy iteration with binary-tree state space decomposition.

    Science.gov (United States)

    Xu, Xin; Liu, Chunming; Yang, Simon X; Hu, Dewen

    2011-12-01

    In recent years, approximate policy iteration (API) has attracted increasing attention in reinforcement learning (RL), e.g., least-squares policy iteration (LSPI) and its kernelized version, the kernel-based LSPI algorithm. However, it remains difficult for API algorithms to obtain near-optimal policies for Markov decision processes (MDPs) with large or continuous state spaces. To address this problem, this paper presents a hierarchical API (HAPI) method with binary-tree state space decomposition for RL in a class of absorbing MDPs, which can be formulated as time-optimal learning control tasks. In the proposed method, after collecting samples adaptively in the state space of the original MDP, a learning-based decomposition strategy of sample sets was designed to implement the binary-tree state space decomposition process. Then, API algorithms were used on the sample subsets to approximate local optimal policies of sub-MDPs. The original MDP was decomposed into a binary-tree structure of absorbing sub-MDPs, constructed during the learning process, thus, local near-optimal policies were approximated by API algorithms with reduced complexity and higher precision. Furthermore, because of the improved quality of local policies, the combined global policy performed better than the near-optimal policy obtained by a single API algorithm in the original MDP. Three learning control problems, including path-tracking control of a real mobile robot, were studied to evaluate the performance of the HAPI method. With the same setting for basis function selection and sample collection, the proposed HAPI obtained better near-optimal policies than previous API methods such as LSPI and KLSPI.

  15. APPROXIMATION OF VOLUME AND BRANCH SIZE DISTRIBUTION OF TREES FROM LASER SCANNER DATA

    Directory of Open Access Journals (Sweden)

    P. Raumonen

    2012-09-01

    Full Text Available This paper presents an approach for automatically approximating the above-ground volume and branch size distribution of trees from dense point clouds produced by terrestrial laser scanning. The approach is based on the assumption that the point cloud is a sample of a surface in 3D space and that the surface is locally cylinder-like. The point cloud is covered with small neighborhoods which conform to the surface. Then the neighborhoods are characterized geometrically and these characterizations are used to classify the points into trunk, branch, and other points. Finally, proper subsets are determined for cylinder fitting using geometric characterizations of the subsets.

  16. Orders of Magnitude Extension of the Effective Dynamic Range of TDC-Based TOFMS Data Through Maximum Likelihood Estimation

    Science.gov (United States)

    Ipsen, Andreas; Ebbels, Timothy M. D.

    2014-10-01

    In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time—tasks that may be best addressed through engineering efforts.
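
    The article's probability model is considerably more detailed, but the core saturation idea can be illustrated with the textbook dead-time correction for a counting detector. The sketch below (Python assumed; the one-ion-per-shot Bernoulli model is a simplification, not the authors' LC/TOFMS distribution) recovers the true mean ion rate from saturated counts in closed form.

        import numpy as np

        def mle_true_rate(n_detected, n_shots):
            """Closed-form MLE of the mean ion arrival rate per shot.

            Assumes the TDC registers at most one ion per shot (saturation),
            so detections are Bernoulli with p = 1 - exp(-lam) under Poisson
            arrivals; the MLE is then lam = -log(1 - phat).
            """
            phat = n_detected / n_shots
            if phat >= 1.0:
                raise ValueError("detector fully saturated; rate unidentifiable")
            return -np.log(1.0 - phat)

        # 9500 detections in 10000 shots: naive rate 0.95, corrected ~3.0
        print(mle_true_rate(9500, 10000))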

  17. Orders of magnitude extension of the effective dynamic range of TDC-based TOFMS data through maximum likelihood estimation.

    Science.gov (United States)

    Ipsen, Andreas; Ebbels, Timothy M D

    2014-10-01

    In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time-tasks that may be best addressed through engineering efforts.

  18. Sequential and parallel subquadratic work algorithms for constructing approximately optimal binary search trees

    Energy Technology Data Exchange (ETDEWEB)

    Karpinski, M. [Univ. of Bonn (Germany)]; Larmore, L.L. [Univ. of Nevada, Las Vegas, NV (United States)]; Rytter, W. [Warsaw Univ. (Poland)]

    1996-12-31

    A sublinear time subquadratic work parallel algorithm for construction of an optimal binary search tree, in a special case of practical interest, namely where the frequencies of items to be stored are not too small, is given. A sublinear time subquadratic work parallel algorithm for construction of an approximately optimal binary search tree in the general case is also given. Subquadratic work and sublinear time are achieved using a fast parallel algorithm for the column minima problem for Monge matrices developed by Atallah and Kosaraju. The algorithms given in this paper take O(n^0.6) time with n processors in the CREW PRAM model. Our algorithms work well if every subtree of the optimal binary search tree of depth Ω(log n) has o(n) leaves. We prove that there is a sequential algorithm with subquadratic average-case complexity, by demonstrating that the "small subtree" condition holds with very high probability for a randomly permuted weight sequence. This solves a conjecture posed in the literature and breaks the quadratic time "barrier" of Knuth's algorithm. This algorithm can also be parallelized to run in average sublinear time with n processors.
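
    The parallel algorithms build on the classical sequential dynamic program, so a compact reference sketch of Knuth's O(n^2) construction (with the root-range monotonicity speedup whose quadratic "barrier" the paper refers to) may help; the frequency vector is illustrative.

        def optimal_bst_cost(p):
            """Knuth's O(n^2) dynamic program for an optimal BST.

            p[i] is the access frequency of the i-th (sorted) key. Returns the
            minimal expected comparison cost; root[i][j] restricts the root
            search range (Knuth's monotonicity speedup).
            """
            n = len(p)
            pref = [0.0] * (n + 1)
            for i in range(n):
                pref[i + 1] = pref[i] + p[i]
            cost = [[0.0] * (n + 1) for _ in range(n + 1)]
            root = [[0] * (n + 1) for _ in range(n + 1)]
            for i in range(n):
                cost[i][i + 1] = p[i]
                root[i][i + 1] = i
            for length in range(2, n + 1):
                for i in range(n - length + 1):
                    j = i + length
                    w = pref[j] - pref[i]          # total weight of keys i..j-1
                    best, best_r = float("inf"), root[i][j - 1]
                    for r in range(root[i][j - 1], min(root[i + 1][j], j - 1) + 1):
                        c = cost[i][r] + cost[r + 1][j] + w
                        if c < best:
                            best, best_r = c, r
                    cost[i][j], root[i][j] = best, best_r
            return cost[0][n]

        print(optimal_bst_cost([0.1, 0.2, 0.4, 0.3]))   # expected cost 1.7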

  19. A conceptual approach to approximate tree root architecture in infinite slope models

    Science.gov (United States)

    Schmaltz, Elmar; Glade, Thomas

    2016-04-01

    Vegetation-related properties - particularly tree root distribution and the coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear difficult to reproduce reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main issues of the tree root architecture: 1) type of rooting; 2) maximum growing distance to the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems and the connected hydrological and mechanical attributes sufficiently in a 3-dimensional slope stability model. Hereby, a spatio-dynamic vegetation module should cope with the demands of performance, computation time and significance. However, in this presentation, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that is dependent on age and species. We classified three main types of tree root systems and reproduced the species-age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope setting. Thus, two solids in a Euclidean space were distinguished to represent the three root systems: i) cylinders with radius r and height h, where the dimension of the latter defines the shape of a taproot system or a shallow-root system respectively; ii) elliptic ...

  20. Weighted Centroid Localization Algorithm Based on Maximum Likelihood Estimation

    Institute of Scientific and Technical Information of China (English)

    卢先领; 夏文瑞

    2016-01-01

    In solving the problem of localizing nodes in a wireless sensor network, we propose a weighted centroid localization algorithm based on maximum likelihood estimation, aimed specifically at the large ranging error of received signal strength indication (RSSI) and the low accuracy of the centroid localization algorithm. Firstly, the maximum likelihood estimate between the estimated distance and the actual distance is calculated and used as the weight. Then, a parameter k is introduced into the weight model to optimize the weighting between the anchor nodes and the unknown nodes. Finally, the locations of the unknown nodes are calculated and corrected by the proposed algorithm. The simulation results show that the weighted centroid algorithm based on maximum likelihood estimation offers high localization accuracy at low cost and outperforms both the inverse-distance-based and the inverse-RSSI-based weighted centroid algorithms, making it well suited for indoor localization over large areas.
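
    A minimal sketch of the weighted-centroid step is given below. The log-distance path-loss parameters and the simple 1/d**k weights are assumptions standing in for the paper's maximum-likelihood-derived weights; only the overall structure (RSSI to distance, distance to weight, weighted average of anchors) follows the abstract.

        import numpy as np

        def weighted_centroid(anchors, rssi, k=1.0, a=-45.0, n=2.5):
            """Estimate an unknown node position from RSSI to known anchors.

            anchors: (m, 2) anchor coordinates; rssi: (m,) received powers in
            dBm. Distances follow the log-distance path-loss model with
            parameters a (RSSI at 1 m) and n (path-loss exponent); the
            exponent k tunes how strongly near anchors dominate.
            """
            d = 10.0 ** ((a - np.asarray(rssi)) / (10.0 * n))
            w = 1.0 / d ** k
            return (w[:, None] * anchors).sum(axis=0) / w.sum()

        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        print(weighted_centroid(anchors, rssi=[-60, -70, -70, -78], k=1.5))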

  1. A Maximum Likelihood Method for Harmonic Impedance Estimation

    Institute of Scientific and Technical Information of China (English)

    华回春; 贾秀芳; 曹东升; 赵成勇

    2014-01-01

    In order to estimate the harmonic impedance more accurately, a complex-domain maximum likelihood estimation method is proposed. Firstly, the complex multivariate Gaussian random variable is defined by analogy with the real multivariate Gaussian definition, and the calculation formula of the complex covariance is given according to the meaning of covariance. Secondly, the probability density function of the complex Gaussian distribution is derived using algebra isomorphism theory. Data selection is performed based on statistical theory, and the complex maximum likelihood function is established for the selected data. Finally, the harmonic impedance is estimated by maximizing the complex maximum likelihood function. A case study based on the IEEE 14-bus test system shows that the proposed method gives more accurate results than traditional methods.

  2. A user-operated audiometry method based on the maximum likelihood principle and the two-alternative forced-choice paradigm

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Brandt, Christian; Pedersen, Ellen Raben

    2014-01-01

    Objective: To create a user-operated pure-tone audiometry method based on the method of maximum likelihood (MML) and the two-alternative forced-choice (2AFC) paradigm, with high test-retest reliability, without the need of an external operator, and with minimal influence of subjects' fluctuating response criteria. User-operated audiometry was developed as an alternative to traditional audiometry for research purposes among musicians. Design: Test-retest reliability of the user-operated audiometry system was evaluated and the user-operated audiometry system was compared with traditional audiometry. Study sample: Test-retest reliability of user-operated 2AFC audiometry was tested with 38 naïve listeners. User-operated 2AFC audiometry was compared to traditional audiometry in 41 subjects. Results: The repeatability of user-operated 2AFC audiometry was comparable to traditional audiometry...

  3. A maximum likelihood model for fitting power functions with data uncertainty: A case study on the relationship between body lengths and masses for Sciuridae species worldwide

    Directory of Open Access Journals (Sweden)

    Youhua Chen

    2016-09-01

    Full Text Available In this report, a maximum likelihood model is developed to incorporate data uncertainty in the response and explanatory variables when fitting power-law bivariate relationships in ecology and evolution. This simple likelihood model is applied to an empirical data set on the allometric relationship between body mass and length of Sciuridae species worldwide. The results show that the parameter values estimated by the proposed likelihood model differ substantially from those fitted by the nonlinear least-squares (NLOS) method, and accordingly the power-law models fitted by the two methods have different curvilinear shapes. These discrepancies are caused by the integration of measurement errors in the proposed likelihood model, which the NLOS method fails to do. Because the current likelihood model and the NLOS method can give different results, the inclusion of measurement errors may offer new insights into the interpretation of scaling or power laws in ecology and evolution.
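
    As a hedged sketch of the contrast the abstract draws, the snippet below fits a power law both by nonlinear least squares and by maximizing a Gaussian likelihood with an explicit noise scale. Note the assumption: error enters only through the response here, whereas the paper's model also handles uncertainty in the explanatory variable, which is where the reported discrepancies arise.

        import numpy as np
        from scipy.optimize import curve_fit, minimize

        rng = np.random.default_rng(2)

        def power(x, a, b):
            return a * x ** b

        x = rng.uniform(5, 40, 200)                      # e.g. body length
        y = power(x, 0.05, 2.7) + rng.normal(0, 5, 200)  # e.g. body mass

        # Nonlinear least squares: minimizes squared residuals only
        (a_ls, b_ls), _ = curve_fit(power, x, y, p0=(1.0, 2.0))

        # Maximum likelihood: jointly estimates a, b and the noise scale
        def negloglik(theta):
            a, b, log_s = theta
            r = y - power(x, a, b)
            return 0.5 * np.sum(r ** 2) / np.exp(2 * log_s) + 200 * log_s

        a_ml, b_ml, _ = minimize(negloglik, (1.0, 2.0, 1.0),
                                 method="Nelder-Mead").x
        print((a_ls, b_ls), (a_ml, b_ml))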

  4. A Maximum Likelihood Estimator of a Markov Model for Disease Activity in Crohn's Disease and Ulcerative Colitis for Annually Aggregated Partial Observations

    DEFF Research Database (Denmark)

    Borg, Søren; Persson, U.; Jess, T.;

    2010-01-01

    ... Hospital, Copenhagen, Denmark, during 1991 to 1993. The data were aggregated over calendar years; for each year, the number of relapses and the number of surgical operations were recorded. Our aim was to estimate Markov models for disease activity in CD and UC, in terms of relapse and remission, with a cycle length of 1 month. The purpose of these models was to enable evaluation of interventions that would shorten relapses or postpone future relapses. An exact maximum likelihood estimator was developed that disaggregates the yearly observations into monthly transition probabilities between remission ... data and has good face validity. The disease activity model is less suitable for UC due to its transient nature through the presence of curative surgery.

  5. Conditional maximum likelihood identification under missing data

    Institute of Scientific and Technical Information of China (English)

    王建宏

    2014-01-01

    For the conditional maximum likelihood identification problem of an affine structure under missing data, a permutation matrix is first introduced to split the original random vector into observed and missing parts. The conditional mean and conditional covariance of the observed data given the missing data are then determined and used to construct a conditional likelihood function. On the theoretical side, expressions for the derivatives of the conditional maximum likelihood function with respect to the unknown parameter vector, the unknown white-noise variance, and the missing data are derived, and a separable optimization algorithm suitable for engineering practice is given. Finally, a simulation example verifies the effectiveness of the identification method.
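
    The permutation-and-condition step described above corresponds to the standard Gaussian conditioning formulas. Below is a sketch (NumPy assumed, with illustrative indices) of the conditional mean and covariance that such a conditional likelihood is built on.

        import numpy as np

        def conditional_gaussian(mu, cov, obs_idx, mis_idx, x_obs):
            """Mean/covariance of the missing block given the observed block.

            For x ~ N(mu, cov) split into observed (o) and missing (m) parts:
                E[x_m | x_o]   = mu_m + C_mo C_oo^{-1} (x_o - mu_o)
                Cov[x_m | x_o] = C_mm - C_mo C_oo^{-1} C_om
            """
            mu, cov = np.asarray(mu), np.asarray(cov)
            C_oo = cov[np.ix_(obs_idx, obs_idx)]
            C_mo = cov[np.ix_(mis_idx, obs_idx)]
            gain = np.linalg.solve(C_oo, C_mo.T).T          # C_mo C_oo^{-1}
            cond_mean = mu[mis_idx] + gain @ (x_obs - mu[obs_idx])
            cond_cov = cov[np.ix_(mis_idx, mis_idx)] - gain @ C_mo.T
            return cond_mean, cond_cov

        mu = np.zeros(3)
        cov = np.array([[2.0, 0.6, 0.3], [0.6, 1.0, 0.2], [0.3, 0.2, 1.5]])
        print(conditional_gaussian(mu, cov, [0, 2], [1], np.array([1.0, -0.5])))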

  6. Maximum-likelihood cluster reconstruction

    CERN Document Server

    Bartelmann, M; Seitz, S; Schneider, P J; Bartelmann, Matthias; Narayan, Ramesh; Seitz, Stella; Schneider, Peter

    1996-01-01

    We present a novel method to reconstruct the mass distribution of galaxy clusters from their gravitational lensing effect on background galaxies. The method is based on a least-chi-square fit of the two-dimensional gravitational cluster potential. It combines information from shear and magnification by the cluster lens and is designed to easily incorporate possible additional information. We describe the technique and demonstrate its feasibility with simulated data. Both the cluster morphology and the total cluster mass are well reproduced.

  7. Joint maximum likelihood and Bayesian channel estimation

    Institute of Scientific and Technical Information of China (English)

    沈壁川; 郑建宏; 申敏

    2008-01-01

    Statistical Bayesian channel estimation is effective in suppressing the noise floor at high SNR, but its performance degrades at low SNR because the noise estimate becomes unreliable. Based on a robust nonlinear de-noising technique for small signals, a simplified joint maximum likelihood and Bayesian channel estimation is proposed and investigated. Computer simulation results and analysis show that this method improves channel estimation and joint detection performance in both low and high SNR situations.

  8. Maximum Likelihood Identification of Nonlinear Model for High-speed Train

    Institute of Scientific and Technical Information of China (English)

    衷路生; 李兵; 龚锦红; 张永贤; 祝振敏

    2014-01-01

    A maximum likelihood (ML) identification method for the nonlinear model of high-speed trains is proposed, suitable for estimating the parameters of nonlinear train models under non-Gaussian noise. First, a stochastic discrete nonlinear state-space model describing the single-point-mass dynamics of a high-speed train is constructed, and the ML estimation of the train parameters is cast as an expectation maximization (EM) optimization problem. Then, a particle filter and a particle smoother are designed for train state estimation; the conditional expectation for the train is constructed accordingly, a gradient search method for maximizing this expectation is given, and the resulting parameter identification algorithm is obtained, together with an analysis of its convergence rate. Finally, numerical comparison experiments on the estimation of the drag coefficients of a high-speed train demonstrate the effectiveness of the proposed identification method.

  9. Analysis of Reduction in Area in MIMO Receivers Using SQRD Method and Unitary Transformation with Maximum Likelihood Estimation (MLE) and Minimum Mean Square Error Estimation (MMSE) Techniques

    Directory of Open Access Journals (Sweden)

    Sabitha Gauni

    2014-03-01

    Full Text Available In the field of wireless communication, there is always a demand for reliability, improved range, and speed. Many wireless networks such as OFDM, CDMA2000, and WCDMA provide a solution to this problem when incorporated with multiple-input multiple-output (MIMO) technology. Due to the complexity of its signal processing, MIMO is highly expensive in terms of area consumption. In this paper, a MIMO receiver design method is proposed to reduce the area consumed by the processing elements involved in complex signal processing, and a solution for area reduction in the MIMO maximum likelihood (MLE) receiver using sorted QR decomposition (SQRD) and the unitary transformation method is analyzed. It provides a unified approach, reduces ISI, and gives better performance at low cost. The receiver pre-processor architecture based on minimum mean square error (MMSE) estimation is compared while using iterative SQRD and the unitary transformation method for vectoring. Unitary transformations preserve the Hermitian nature of a matrix and the multiplication and addition relationships between operators, which helps to reduce the computational complexity significantly. The dynamic range of all variables is tightly bounded and the algorithm is well suited for fixed-point arithmetic.
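
    A hedged sketch of sorted QR decomposition (SQRD) via modified Gram-Schmidt with column pivoting is shown below; the smallest-residual-norm-first ordering is the one SQRD-based detectors use to curb error propagation. This is a generic textbook formulation, not the paper's receiver architecture.

        import numpy as np

        def sqrd(H):
            """Sorted QR decomposition (modified Gram-Schmidt with pivoting).

            At each step, the not-yet-orthogonalized column with the smallest
            residual norm is processed first. Returns Q, R and the detection
            order perm such that H[:, perm] = Q @ R.
            """
            H = H.astype(complex)
            m, n = H.shape
            Q, R, perm = H.copy(), np.zeros((n, n), complex), list(range(n))
            for i in range(n):
                k = i + int(np.argmin(np.linalg.norm(Q[:, i:], axis=0)))
                Q[:, [i, k]] = Q[:, [k, i]]      # swap working columns
                R[:, [i, k]] = R[:, [k, i]]      # keep earlier coefficients aligned
                perm[i], perm[k] = perm[k], perm[i]
                R[i, i] = np.linalg.norm(Q[:, i])
                Q[:, i] /= R[i, i]
                R[i, i + 1:] = Q[:, i].conj() @ Q[:, i + 1:]
                Q[:, i + 1:] -= np.outer(Q[:, i], R[i, i + 1:])
            return Q, R, perm

        rng = np.random.default_rng(3)
        H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
        Q, R, perm = sqrd(H)
        print(np.allclose(H[:, perm], Q @ R))    # factorization check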

  10. Analysis of the Sufficient Path Elimination Window for the Maximum-Likelihood Sequential-Search Decoding Algorithm for Binary Convolutional Codes

    CERN Document Server

    Shieh, Shin-Lin; Han, Yunghsiang S

    2007-01-01

    A common problem in sequential-type decoding is that at signal-to-noise ratios (SNR) below the one corresponding to the cutoff rate, the average decoding complexity per information bit and the required stack size grow rapidly with the information length. In order to alleviate this problem in the maximum-likelihood sequential decoding algorithm (MLSDA), we propose to directly eliminate the top path whose end node is Δ trellis levels prior to the farthest node among all nodes that have been expanded thus far by the sequential search. Following a random coding argument, we analyze the early-elimination window Δ that results in negligible performance degradation for the MLSDA. Our analytical results indicate that the required early-elimination window for negligible performance degradation is just twice the constraint length for rate one-half convolutional codes. For rate one-third convolutional codes, the required early-elimination window even reduces to the constraint length. The suggestive theore...

  11. Maximum Likelihood DOA Estimator based on Grid Hill Climbing Method

    Institute of Scientific and Technical Information of China (English)

    艾名舜; 马红光

    2011-01-01

    The maximum likelihood estimator for direction of arrival (DOA) possesses optimal theoretical performance but high computational complexity. Treating the estimation as an optimization problem over a high-dimensional nonlinear function, a novel algorithm is proposed to reduce the computational load. At the beginning, the beamforming method is adopted to estimate the spatial spectrum roughly, and a group of initial solutions obeying this pre-estimated distribution is generated from the spectrum information; these initial solutions fall with high probability in the local attraction basin of the global optimum. Then the solution with the maximum fitness among this group is selected as the starting point of a local search. The grid hill-climbing method (GHCM) is a local search method that takes a grid as its search unit; it is an improved version of the traditional hill-climbing method, more efficient and stable than the traditional one, and it is adopted to obtain the global optimum solution. The proposed algorithm obtains accurate DOA estimates at lower computational cost, and simulations show that it is more efficient than the maximum likelihood DOA estimator based on PSO.
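
    For the single-source case, the ML criterion reduces to maximizing the beamformer spectrum P(theta) = a(theta)^H R a(theta), which allows a compact sketch of the coarse-scan-plus-local-hill-climbing structure described above. The array model, step sizes, and stopping rule below are assumptions; the paper's grid variant and multi-source cost are more involved.

        import numpy as np

        def ml_doa_single(X, d=0.5, coarse=181, step=0.01):
            """Single-source ML DOA for a half-wavelength ULA (sketch).

            A coarse beamforming scan gives the start point; a simple hill
            climb on a fine angular lattice refines it.
            """
            M = X.shape[0]
            R = X @ X.conj().T / X.shape[1]          # sample covariance
            a = lambda th: np.exp(-2j * np.pi * d * np.arange(M)
                                  * np.sin(np.radians(th)))
            P = lambda th: (a(th).conj() @ R @ a(th)).real
            th = max(np.linspace(-90, 90, coarse), key=P)   # coarse pre-estimate
            while True:                                      # local hill climbing
                nxt = max((th, th - step, th + step), key=P)
                if nxt == th:
                    return th
                th = nxt

        rng = np.random.default_rng(5)
        M, N, true_th = 8, 200, 17.3
        steer = np.exp(-2j * np.pi * 0.5 * np.arange(M)
                       * np.sin(np.radians(true_th)))
        s = rng.normal(size=N) + 1j * rng.normal(size=N)
        X = np.outer(steer, s) + 0.1 * (rng.normal(size=(M, N))
                                        + 1j * rng.normal(size=(M, N)))
        print(ml_doa_single(X))                    # close to 17.3 degrees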

  12. TMT third-mirror shafting system alignment based on maximum likelihood estimation

    Institute of Scientific and Technical Information of China (English)

    安其昌; 张景旭; 孙敬伟

    2013-01-01

    In order to complete the testing and alignment of the TMT third-mirror shafting, maximum likelihood estimation is introduced. First, two intersecting fitted planes passing through fixed points are used to identify a space line. Then, considering the uncertain noise type of the measured data, maximum likelihood estimation is used to estimate the position parameters of the TMT third-mirror shafting. On a training set with Gaussian white noise generated in MATLAB, the positions of the fixed points of the two fitted planes are optimized, and the angle between the fitted axis and the ideal axis is reduced from 6.29" to 5.24", an improvement of 17%. Finally, the Vantage laser tracker is selected as the testing tool for the TMT large shafting; with the above optimization, the positioning residual of the TMT third-mirror shafting is 2.9", smaller than the TMT specification of 4". This work supports the TMT third-mirror shafting alignment and offers a real-time, widely applicable method that is also of value for testing and adjusting the shafting of other large-aperture optical systems.

  13. Accuracy of land use change detection using support vector machine and maximum likelihood techniques for open-cast coal mining areas.

    Science.gov (United States)

    Karan, Shivesh Kishore; Samadder, Sukha Ranjan

    2016-08-01

    One objective of the present study was to evaluate the performance of support vector machine (SVM)-based image classification technique with the maximum likelihood classification (MLC) technique for a rapidly changing landscape of an open-cast mine. The other objective was to assess the change in land use pattern due to coal mining from 2006 to 2016. Assessing the change in land use pattern accurately is important for the development and monitoring of coalfields in conjunction with sustainable development. For the present study, Landsat 5 Thematic Mapper (TM) data of 2006 and Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) data of 2016 of a part of Jharia Coalfield, Dhanbad, India, were used. The SVM classification technique provided greater overall classification accuracy when compared to the MLC technique in classifying heterogeneous landscape with limited training dataset. SVM exceeded MLC in handling a difficult challenge of classifying features having near similar reflectance on the mean signature plot, an improvement of over 11 % was observed in classification of built-up area, and an improvement of 24 % was observed in classification of surface water using SVM; similarly, the SVM technique improved the overall land use classification accuracy by almost 6 and 3 % for Landsat 5 and Landsat 8 images, respectively. Results indicated that land degradation increased significantly from 2006 to 2016 in the study area. This study will help in quantifying the changes and can also serve as a basis for further decision support system studies aiding a variety of purposes such as planning and management of mines and environmental impact assessment.
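
    In remote sensing, the MLC is the per-class Gaussian maximum-likelihood rule, which is what quadratic discriminant analysis implements. The sketch below reproduces the comparison in spirit only, on synthetic "pixels" with a deliberately small training set (scikit-learn assumed; the study itself used Landsat imagery of Jharia Coalfield).

        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

        # Synthetic stand-in for multispectral pixels: 6 "bands", 4 classes
        X, y = make_classification(n_samples=2000, n_features=6,
                                   n_informative=4, n_classes=4,
                                   n_clusters_per_class=1, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=200,
                                              random_state=0)

        mlc = QuadraticDiscriminantAnalysis().fit(Xtr, ytr)  # Gaussian ML rule
        svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(Xtr, ytr)

        print("MLC accuracy:", mlc.score(Xte, yte))
        print("SVM accuracy:", svm.score(Xte, yte))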

  14. Maximum likelihood estimate of life expectancy in the prehistoric Jomon: Canine pulp volume reduction suggests a longer life expectancy than previously thought.

    Science.gov (United States)

    Sasaki, Tomohiko; Kondo, Osamu

    2016-09-01

    Recent theoretical progress potentially refutes past claims that paleodemographic estimations are flawed by statistical problems, including age mimicry and sample bias due to differential preservation. The life expectancy at age 15 of the Jomon period prehistoric populace in Japan was initially estimated to have been ∼16 years while a more recent analysis suggested 31.5 years. In this study, we provide alternative results based on a new methodology. The material comprises 234 mandibular canines from Jomon period skeletal remains and a reference sample of 363 mandibular canines of recent-modern Japanese. Dental pulp reduction is used as the age-indicator, which because of tooth durability is presumed to minimize the effect of differential preservation. Maximum likelihood estimation, which theoretically avoids age mimicry, was applied. Our methods also adjusted for the known pulp volume reduction rate among recent-modern Japanese to provide a better fit for observations in the Jomon period sample. Without adjustment for the known rate in pulp volume reduction, estimates of Jomon life expectancy at age 15 were dubiously long. However, when the rate was adjusted, the estimate results in a value that falls within the range of modern hunter-gatherers, with significantly better fit to the observations. The rate-adjusted result of 32.2 years more likely represents the true life expectancy of the Jomon people at age 15, than the result without adjustment. Considering ∼7% rate of antemortem loss of the mandibular canine observed in our Jomon period sample, actual life expectancy at age 15 may have been as high as ∼35.3 years.

  15. Simulation for position determination of distal and proximal edges for SOBP irradiation in hadron therapy by using the maximum likelihood estimation method.

    Science.gov (United States)

    Inaniwa, Taku; Kohno, Toshiyuki; Tomitani, Takehiro

    2005-12-21

    In radiation therapy with hadron beams, conformal irradiation to a tumour can be achieved by using the properties of incident ions such as the high dose concentration around the Bragg peak. For the effective utilization of such properties, it is necessary to evaluate the volume irradiated with hadron beams and the deposited dose distribution in a patient's body. Several methods have been proposed for this purpose, one of which uses the positron emitters generated through fragmentation reactions between incident ions and target nuclei. In the previous paper, we showed that the maximum likelihood estimation (MLE) method could be applicable to the estimation of beam end-point from the measured positron emitting activity distribution for mono-energetic beam irradiations. In a practical treatment, a spread-out Bragg peak (SOBP) beam is used to achieve a uniform biological dose distribution in the whole target volume. Therefore, in the present paper, we proposed to extend the MLE method to estimations of the position of the distal and proximal edges of the SOBP from the detected annihilation gamma ray distribution. We confirmed the effectiveness of the method by means of simulations. Although polyethylene was adopted as a substitute for a soft tissue target in validating the method, the proposed method is equally applicable to general cases, provided that the reaction cross sections between the incident ions and the target nuclei are known. The relative advantage of incident beam species to determine the position of the distal and the proximal edges was compared. Furthermore, we ascertained the validity of applying the MLE method to determinations of the position of the distal and the proximal edges of an SOBP by simulations and we gave a physical explanation of the distal and the proximal information.

  16. Maximum Likelihood Factor Analysis of the Effects of Chronic Centrifugation on the Structural Development of the Musculoskeletal System of the Rat

    Science.gov (United States)

    Amtmann, E.; Kimura, T.; Oyama, J.; Doden, E.; Potulski, M.

    1979-01-01

    At the age of 30 days female Sprague-Dawley rats were placed on a 3.66 m radius centrifuge and subsequently exposed almost continuously for 810 days to either 2.76 or 4.15 G. An age-matched control group of rats was raised near the centrifuge facility at earth gravity. Three further control groups of rats were obtained from the animal colony and sacrificed at the age of 34, 72 and 102 days. A total of 16 variables were simultaneously factor analyzed by a maximum-likelihood extraction routine and the factor loadings presented after rotation to simple structure by a varimax rotation routine. The variables include the G-load, age, body mass, femoral length and cross-sectional area, inner and outer radii, density and strength at the mid-length of the femur, and dry weight of gluteus medius, semimembranosus and triceps surae muscles. Factor analyses on A) all controls, B) all controls and the 2.76 G group, and C) all controls and centrifuged animals, produced highly similar loading structures of three common factors which accounted for 74%, 68% and 68%, respectively, of the total variance. The 3 factors were interpreted as: 1. An age and size factor which stimulates the growth in length and diameter and increases the density and strength of the femur. This factor is positively correlated with G-load but is also active in the control animals living at earth gravity. 2. A growth inhibition factor which acts on body size, femoral length and on both the outer and inner radius at mid-length of the femur. This factor is intensified by centrifugation.
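
    A small sketch of maximum-likelihood factor extraction followed by varimax rotation, using scikit-learn's ML-based FactorAnalysis (the rotation='varimax' option requires a recent scikit-learn). The synthetic data below merely echo the study's 16-variable setup and are not its measurements.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(6)
        # Synthetic stand-in: 3 latent factors driving 16 observed variables
        latent = rng.normal(size=(300, 3))
        loadings = rng.normal(size=(3, 16))
        X = latent @ loadings + 0.5 * rng.normal(size=(300, 16))

        fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
        print(np.round(fa.components_, 2))   # rotated factor loadings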

  17. A gradient Markov chain Monte Carlo algorithm for computing multivariate maximum likelihood estimates and posterior distributions: mixture dose-response assessment.

    Science.gov (United States)

    Li, Ruochen; Englehardt, James D; Li, Xiaoguang

    2012-02-01

    Multivariate probability distributions, such as may be used for mixture dose-response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose-response biomarker and genetic information. In this article, a new two-stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn-in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose-response function (DRF). Results are shown for the five-parameter common-mode and seven-parameter dissimilar-mode models, based on published data for eight benzene-toluene dose pairs. The common mode conditional DRF is obtained with a 21-fold reduction in data requirement versus MCMC. Example common-mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126-PCB 153 mixture. Applicability is analyzed and discussed. Matlab® computer programs are provided.

  18. Inference of Gene Flow in the Process of Speciation: An Efficient Maximum-Likelihood Method for the Isolation-with-Initial-Migration Model

    Science.gov (United States)

    Costa, Rui J.; Wilkinson-Herbots, Hilde

    2017-01-01

    The isolation-with-migration (IM) model is commonly used to make inferences about gene flow during speciation, using polymorphism data. However, it has been reported that the parameter estimates obtained by fitting the IM model are very sensitive to the model’s assumptions—including the assumption of constant gene flow until the present. This article is concerned with the isolation-with-initial-migration (IIM) model, which drops precisely this assumption. In the IIM model, one ancestral population divides into two descendant subpopulations, between which there is an initial period of gene flow and a subsequent period of isolation. We derive a very fast method of fitting an extended version of the IIM model, which also allows for asymmetric gene flow and unequal population sizes. This is a maximum-likelihood method, applicable to data on the number of segregating sites between pairs of DNA sequences from a large number of independent loci. In addition to obtaining parameter estimates, our method can also be used, by means of likelihood-ratio tests, to distinguish between alternative models representing the following divergence scenarios: (a) divergence with potentially asymmetric gene flow until the present, (b) divergence with potentially asymmetric gene flow until some point in the past and in isolation since then, and (c) divergence in complete isolation. We illustrate the procedure on pairs of Drosophila sequences from ∼30,000 loci. The computing time needed to fit the most complex version of the model to this data set is only a couple of minutes. The R code to fit the IIM model can be found in the supplementary files of this article. PMID:28193727

  19. Maximum Likelihood TOA Estimation Algorithm Based on Multi-carrier Time-frequency Iteration

    Institute of Scientific and Technical Information of China (English)

    程刘胜

    2015-01-01

    Underground multipath, non-line-of-sight propagation, and limited network time-synchronization accuracy lead to large deviations in time-of-arrival estimation in mine UWB high-accuracy positioning systems. Based on a rational layout of the underground wireless base stations, a maximum likelihood TOA (time of arrival) estimation algorithm based on multi-carrier time-frequency iteration is proposed: the fractional delay is iterated to narrow the estimation error and determine a suitable search step, achieving accurate TOA estimation of the signal. Simulation results show that the time-frequency-iterative maximum likelihood TOA estimation converges faster than the non-iterative algorithm, and at low signal-to-noise ratios it effectively improves the estimation accuracy compared with classical TOA estimation algorithms.

  20. Development and Performance of Detectors for the Cryogenic Dark Matter Search Experiment with an Increased Sensitivity Based on a Maximum Likelihood Analysis of Beta Contamination

    Energy Technology Data Exchange (ETDEWEB)

    Driscoll, Donald D [Case Western Reserve Univ., Cleveland, OH (United States)

    2004-05-01

    of a beta-eliminating cut based on a maximum-likelihood characterization described above.

  1. Development and Performance of Detectors for the Cryogenic Dark Matter Search Experiment with an Increased Sensitivity Based on a Maximum Likelihood Analysis of Beta Contamination

    Energy Technology Data Exchange (ETDEWEB)

    Driscoll, Donald D.; /Case Western Reserve U.

    2004-01-01

    first use of a beta-eliminating cut based on a maximum-likelihood characterization described above.

  2. Fast maximum likelihood direction-of-arrival estimator based on ant colony optimization

    Institute of Scientific and Technical Information of China (English)

    焦亚萌; 黄建国; 侯云山

    2011-01-01

    A new maximum likelihood direction-of-arrival (DOA) estimator based on ant colony optimization (ACOML) is proposed to reduce the heavy computational load of the multi-dimensional nonlinear search in the maximum likelihood (ML) DOA estimator. By extending the pheromone deposition process of the traditional ant colony algorithm into a pheromone Gaussian kernel probability density function in continuous space, ant colony optimization is combined with the maximum likelihood method to obtain the nonlinear global optimum of the ML DOA estimate. Simulations show that ACOML retains the excellent estimation performance of the original ML method, while its computational cost is only 1/15 of that of ML.

  3. Uniform Approximate Estimation for Nonlinear Nonhomogenous Stochastic System with Unknown Parameter

    OpenAIRE

    2012-01-01

    The error bound in probability between the approximate maximum likelihood estimator (AMLE) and the continuous maximum likelihood estimator (MLE) is investigated for nonlinear nonhomogenous stochastic system with unknown parameter. The rates of convergence of the approximations for Itô and ordinary integral are introduced under some regular assumptions. Based on these results, the in probability rate of convergence of the approximate log-likelihood function to the true continuous log-likelihoo...

  4. Estimating Effective Elastic Thickness on Venus from Gravity and Topography: Robust Results from Multi-taper and Maximum-Likelihood Analysis

    Science.gov (United States)

    Eggers, G. L.; Lewis, K. W.; Simons, F. J.

    2012-12-01

    Venus has undergone a markedly different evolution than Earth. Its tectonics do not resemble the plate-tectonic system observed on Earth, and many surface features—such as tesserae and coronae—lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere. Lithospheric parameters such as the effective elastic thickness have previously been estimated from the correlation between topography and gravity anomalies, either in the space domain or the spectral domain (where admittance or coherence functions are estimated). Correlation and spectral analyses that have been obtained on Venus have been limited by geometry (typically, only rectangular or circular data windows were used), and most have lacked robust error estimates. There are two levels of error: the first being how well the correlation, admittance or coherence can be estimated; the second and most important, how well the lithospheric elastic thickness can be estimated from those. The first type of error is well understood, via classical analyses of resolution, bias and variance in multivariate spectral analysis. Understanding this error leads to constructive approaches of performing the spectral analysis, via multi-taper methods (which reduce variance) with well-chosen optimal tapers (to reduce bias). The second type of error requires a complete analysis of the coupled system of differential equations that describes how certain inputs (the unobservable initial loading by topography at various interfaces) are being mapped to the output (final, measurable topography and gravity anomalies). The equations of flexure have one unknown: the flexural rigidity or effective elastic thickness—the parameter of interest. Fortunately, we have recently come to a full understanding of this second type of error, and derived a maximum-likelihood estimation (MLE) method that results in unbiased and minimum-variance estimates of the flexural rigidity under a variety of initial

  5. Comparisons of Maximum Likelihood Estimates and Bayesian Estimates for the Discretized Discovery Process Model

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same data of the North Sea on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach has really improved the overly pessimistic results and downward bias of the maximum likelihood procedure.

  6. A Comparison Between the Empirical Logistic Regression Method and the Maximum Likelihood Estimation Method

    Institute of Scientific and Technical Information of China (English)

    张婷婷; 高金玲

    2014-01-01

    To address the difficulty of solving the iterative algorithm for maximum likelihood estimation in logistic regression, a simpler estimation method, empirical logistic regression, is examined from both theoretical and applied perspectives. The analysis shows that when the sample size is very large the empirical logistic regression method is scientifically sound and more practical than the maximum likelihood estimation method, and the two methods give consistent results when analyzing the same data; the empirical logistic regression is simpler, which is very important for practitioners.
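
    The comparison lends itself to a short sketch: the empirical logistic transform fitted by weighted least squares against the ML fit by iteratively reweighted least squares, on grouped binomial data with large group sizes. Python with statsmodels is an assumption, as are the simulated data.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        x = np.linspace(-2, 2, 20)
        n = np.full(20, 50)                     # large group sizes
        p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
        y = rng.binomial(n, p)

        # Empirical logistic transform z, fitted by weighted least squares
        # (weights are the approximate inverse variances of z); no iteration
        z = np.log((y + 0.5) / (n - y + 0.5))
        X = sm.add_constant(x)
        w = (y + 0.5) * (n - y + 0.5) / (n + 1)
        beta_emp = sm.WLS(z, X, weights=w).fit().params

        # Maximum likelihood fit via iteratively reweighted least squares
        beta_ml = sm.GLM(np.column_stack([y, n - y]), X,
                         family=sm.families.Binomial()).fit().params

        print(beta_emp, beta_ml)   # close when group sizes are large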

  7. Approximate group context tree: applications to dynamic programming and dynamic choice models

    CERN Document Server

    Belloni, Alexandre

    2011-01-01

    The paper considers a variable length Markov chain model associated with a group of stationary processes that share the same context tree but potentially different conditional probabilities. We propose a new model selection and estimation method, develop oracle inequalities and model selection properties for the estimator. These results also provide conditions under which the use of the group structure can lead to improvements in the overall estimation. Our work is also motivated by two methodological applications: discrete stochastic dynamic programming and dynamic discrete choice models. We analyze the uniform estimation of the value function for dynamic programming and the uniform estimation of average dynamic marginal effects for dynamic discrete choice models accounting for possible imperfect model selection. We also derive the typical behavior of our estimator when applied to polynomially β-mixing stochastic processes. For parametric models, we derive uniform rate of convergence for the estimation...

  8. MODERATE DEVIATION OF MAXIMUM LIKELIHOOD ESTIMATORS FOR TRUNCATED AND CENSORED DATA

    Institute of Scientific and Technical Information of China (English)

    肖枝洪; 朱强

    2009-01-01

    In this paper, we study a kind of truncated and censored data model. Using a Taylor asymptotic expansion, it is shown that the maximum likelihood estimator of the unknown parameter θ obeys a moderate deviation principle under certain regularity conditions, a result finer than asymptotic normality, and the exact expression of the rate function is obtained.

  9. Classifier Design Given an Uncertainty Class of Feature Distributions via Regularized Maximum Likelihood and the Incorporation of Biological Pathway Knowledge in Steady-State Phenotype Classification.

    Science.gov (United States)

    Esfahani, Mohammad Shahrokh; Knight, Jason; Zollanvari, Amin; Yoon, Byung-Jun; Dougherty, Edward R

    2013-10-01

    Contemporary high-throughput technologies provide measurements of very large numbers of variables but often with very small sample sizes. This paper proposes an optimization-based paradigm for utilizing prior knowledge to design better performing classifiers when sample sizes are limited. We derive approximate expressions for the first and second moments of the true error rate of the proposed classifier under the assumption of two widely-used models for the uncertainty classes; ε-contamination and p-point classes. The applicability of the approximate expressions is discussed by defining the problem of finding optimal regularization parameters through minimizing the expected true error. Simulation results using the Zipf model show that the proposed paradigm yields improved classifiers that outperform traditional classifiers that use only training data. Our application of interest involves discrete gene regulatory networks possessing labeled steady-state distributions. Given prior operational knowledge of the process, our goal is to build a classifier that can accurately label future observations obtained in the steady state by utilizing both the available prior knowledge and the training data. We examine the proposed paradigm on networks containing NF-κB pathways, where it shows significant improvement in classifier performance over the classical data-only approach to classifier design. Companion website: http://gsp.tamu.edu/Publications/supplementary/shahrokh12a.

  10. Maximum likelihood identification method for a multivariable controlled autoregressive moving average system

    Institute of Scientific and Technical Information of China (English)

    高艳普; 王向东; 王冬青

    2015-01-01

    A maximum likelihood parameter estimation algorithm is presented for multivariable controlled autoregressive moving average (CARMA-like) systems. The algorithm transforms the CARMA-like system into m identification models (m being the number of outputs), each of which contains only one parameter vector to be estimated; the parameter vector of each identification model is then estimated by the maximum likelihood method, yielding the parameter estimates of the whole system. Simulation results verify the effectiveness of the proposed algorithm.

  11. Weak Consistency and Convergence Rate of Quasi-Maximum Likelihood Estimators in Generalized Linear Models

    Institute of Scientific and Technical Information of China (English)

    邓春亮; 胡南辉

    2012-01-01

    In this paper, we study the solution βn of the quasi-maximum likelihood equation for generalized linear models (GLMs). Under the assumption of a non-natural link function, the condition λn → ∞, and some other mild regularity conditions, we prove the weak consistency of the solution and show that its rate of convergence to the true value β0 is Op(λn^(-1/2)), where λn (λ̄n) denotes the smallest (largest) eigenvalue of the matrix Sn = ∑_{i=1}^{n} Xi Xi'.

  12. Chinese Word Segmentation Cognitive Model Based on Maximum Likelihood Optimization EM Algorithm

    Institute of Scientific and Technical Information of China (English)

    赵越; 李红

    2016-01-01

    To address the poor convergence and low segmentation accuracy of the standard EM algorithm when applied to Chinese word segmentation, this paper proposes a Chinese word segmentation cognitive model based on an EM algorithm optimized by the maximum likelihood estimation rule. First, the probability of the current word is used to compute the likelihood of each possible segmentation; the likelihoods are normalized, and word counts are taken for each segmentation. Since the estimates produced by the standard EM algorithm are only guaranteed to converge to a stationary point of the likelihood function, not to a global or even a local maximum, the maximum likelihood estimation rule is used to optimize the algorithm, so that effective methods from nonlinear optimization can be applied to accelerate convergence. Simulation experiments show that the proposed model has better convergence performance and higher accuracy in Chinese word segmentation.

  13. Trees

    Science.gov (United States)

    Al-Khaja, Nawal

    2007-01-01

    This is a thematic lesson plan for young learners about palm trees and the importance of taking care of them. The two part lesson teaches listening, reading and speaking skills. The lesson includes parts of a tree; the modal auxiliary, can; dialogues and a role play activity.

  14. Hash Dijkstra Algorithm for Approximate Minimal Spanning Tree

    Institute of Scientific and Technical Information of China (English)

    李玉鑑; 李厚君

    2011-01-01

    In order to overcome the low efficiency of the Dijkstra (DK) algorithm in constructing minimum spanning trees (MST) for large-scale datasets, this paper combines locality-sensitive hashing (LSH) to design a fast approximate algorithm, the LSHDK algorithm, for building an approximate MST of samples in Euclidean space. The algorithm achieves a faster speed with small error by reducing the computation needed to search for nearest points. Computational experiments show that on datasets of more than 50,000 points the LSHDK algorithm runs faster than the DK algorithm, while the resulting approximate MST has a very small error (0.00-0.05%) in low dimensions and generally 0.1%-3.0% in high dimensions.
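
    A hedged sketch of the overall idea follows, with one substitution made plain: a k-d tree kNN graph stands in for the paper's LSH-based candidate search, after which an exact MST is computed on the sparse candidate graph with SciPy. If the kNN graph is disconnected, the result is a spanning forest.

        import numpy as np
        from scipy.spatial import cKDTree
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import minimum_spanning_tree

        def approx_mst(points, k=10):
            """Approximate Euclidean MST from a k-nearest-neighbour graph.

            Restricting candidate edges to each point's k nearest neighbours
            plays the role of the LSH candidate search; the MST of the
            resulting sparse graph is then computed exactly.
            """
            n = len(points)
            dist, idx = cKDTree(points).query(points, k=k + 1)  # self + k
            rows = np.repeat(np.arange(n), k)
            graph = coo_matrix((dist[:, 1:].ravel(),
                                (rows, idx[:, 1:].ravel())), shape=(n, n))
            return minimum_spanning_tree(graph)

        pts = np.random.default_rng(8).random((5000, 3))
        T = approx_mst(pts)
        print(T.nnz, T.sum())   # edge count and total weight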

  15. Convergence rate for maximum likelihood estimation of parameters in exponential polynomial model

    Institute of Scientific and Technical Information of China (English)

    房祥忠; 陈家鼎

    2011-01-01

    Nonhomogeneous Poisson processes with time-varying intensity functions are widely applied in many fields. For the exponential polynomial model, a very broad class of nonhomogeneous Poisson processes, the optimal convergence rate of the maximum likelihood estimates (MLE) of the parameters is obtained as the observation time tends to infinity.

  16. Consistency and Asymptotic Normality of the Maximum Likelihood Estimator in Exponential Family Nonlinear Models

    Institute of Scientific and Technical Information of China (English)

    夏天; 孔繁超

    2008-01-01

    This paper proposes some regularity conditions which weaken those given by Zhu and Wei (1997). On the basis of the proposed regularity conditions, the existence, strong consistency, and asymptotic normality of the maximum likelihood estimator (MLE) are proved in exponential family nonlinear models (EFNMs). Our results may be regarded as a further improvement of the work of Zhu and Wei (1997).

  17. Systematic Identification of Two-compartment Model based on the Maximum Likelihood Method

    Institute of Scientific and Technical Information of China (English)

    张应云; 张榆锋; 王勇; 李敬敬; 施心陵

    2014-01-01

    An approach based on the maximum likelihood method is presented to identify the parameters of the two-compartment model. To verify the performance of this method, the parameter estimates of the two-compartment model and their absolute errors obtained with it are compared with those obtained with the recursive augmented least-squares algorithm. The accuracy and feasibility of the identified parameters of the nonlinear two-compartment model obtained by the maximum likelihood method are clearly better than those from the recursive augmented least-squares method. The resulting parameter estimates, with their smaller deviations, can be used in related clinical trials and help improve the practicality of the nonlinear two-compartment model.

  18. Maximum Likelihood Combining of Stochastic Maps

    Science.gov (United States)

    2011-09-01


  19. Maximum Likelihood Learning of Conditional MTE Distributions

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2009-01-01

    We describe a procedure for inducing conditional densities within the mixtures of truncated exponentials (MTE) framework. We analyse possible conditional MTE specifications and propose a model selection scheme, based on the BIC score, for partitioning the domain of the conditioning variables. Finally, experimental results demonstrate the applicability of the learning procedure as well as the expressive power of the conditional MTE distribution.

  20. Maximum Likelihood Program for Sequential Testing Documentation

    Science.gov (United States)

    1983-03-01

    The Army has used sensitivity testing for many years, especially in the areas of ... response distribution when the data do not meet the requirements for the DiDonato and Jarnagin procedure. Examples are provided for each of these ...

  1. Maximum likelihood estimation for social network dynamics

    NARCIS (Netherlands)

    Snijders, T.A.B.; Koskinen, J.; Schweinberger, M.

    2010-01-01

    A model for network panel data is discussed, based on the assumption that the observed data are discrete observations of a continuous-time Markov process on the space of all directed graphs on a given node set, in which changes in tie variables are independent conditional on the current graph. The m

  2. Approximate Entropy as a measure of complexity in sap flow temporal dynamics of two tropical tree species under water deficit

    Directory of Open Access Journals (Sweden)

    Gustavo M. Souza

    2004-09-01

    Full Text Available Approximate Entropy (ApEn), a model-independent statistic that quantifies serial irregularity, was used to evaluate changes in the temporal dynamics of sap flow in two tropical tree species subjected to water deficit. Water deficit induced a decrease in sap flow of G. ulmifolia, whereas C. legalis kept its sap flow levels stable. Slight increases in time-series complexity were observed in both species under drought conditions. This study showed that ApEn can be used as a helpful tool to assess slight changes in the temporal dynamics of physiological data, and to uncover some patterns of plant physiological responses to environmental stimuli.
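
    For reference, a minimal implementation of ApEn in the standard formulation of Pincus follows; the defaults m = 2 and r = 0.2·SD are conventional choices, not necessarily the settings used in this study.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn(m, r) of a 1-D series: phi(m) - phi(m + 1)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)                   # common tolerance choice
    def phi(m):
        # All overlapping length-m subsequences.
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of subsequences.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(d <= r, axis=1)           # fraction of close neighbours (self included)
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

# e.g. approximate_entropy(sap_flow_series) on an hourly sap flow record
```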

  3. An Adaptive UKF Algorithm Based on the Maximum Likelihood Principle and the Expectation Maximization Algorithm

    Institute of Scientific and Technical Information of China (English)

    王璐; 李光春; 乔相伟; 王兆龙; 马涛

    2012-01-01

    In order to solve the state estimation problem for nonlinear systems whose prior noise statistics are unknown, an adaptive unscented Kalman filter (UKF) based on the maximum likelihood principle and the expectation maximization algorithm is proposed in this paper. In our algorithm, the maximum likelihood principle is used to construct a log-likelihood function of the noise statistics. The noise estimation problem then becomes one of maximizing the expected log-likelihood, which is achieved with the expectation maximization algorithm, yielding an adaptive UKF with a suboptimal, recursive noise statistics estimator. Simulation analysis shows that the proposed adaptive UKF overcomes the loss of filtering accuracy that the traditional UKF suffers in nonlinear filtering when prior noise statistics are unknown, and that it can estimate the noise statistics online.

  4. Trees

    OpenAIRE

    Henri Epstein

    2016-01-01

    An algebraic formalism, developed with V. Glaser and R. Stora for the study of the generalized retarded functions of quantum field theory, is used to prove a factorization theorem which provides a complete description of the generalized retarded functions associated with any tree graph. Integrating over the variables associated to internal vertices to obtain the perturbative generalized retarded functions for interacting fields arising from such graphs is shown to be possible for a large category of space-times.

  7. Phase Compensation Algorithm Based on Maximum Likelihood Estimation and DSP Technology

    Institute of Scientific and Technical Information of China (English)

    铁维昊; 王文利; 路灿

    2012-01-01

    To address the effects of the nonlinear channel characteristics of low-voltage power lines and of synchronization errors on the phase of the carrier signal, a phase compensation algorithm based on maximum likelihood estimation is proposed. First, the influence of synchronization on the bit error rate is analyzed; then the principle of the phase compensation algorithm and its key technologies are introduced. Finally, the phase compensation algorithm is implemented on a DSP and tested on an actual low-voltage channel. The test results show that the proposed algorithm can effectively eliminate the signal distortion caused by channel nonlinearity in digital communication.

  8. Weak Consistency of Quasi-Maximum Likelihood Estimates in Generalized Linear Models

    Institute of Scientific and Technical Information of China (English)

    张戈; 吴黎军

    2013-01-01

    In this paper, we study the solution β̂_n of the quasi-maximum likelihood equation L_n(β) = Σ_{i=1}^n X_i H(X_i'β) Λ^{-1}(X_i'β)(y_i − h(X_i'β)) = 0 for generalized linear models. Under the assumption of a non-canonical link function and some other mild conditions, we prove the convergence rate β̂_n − β_0 ≠ O_p(λ_n^{-1/2}), and show that a necessary condition for the weak consistency of the quasi-maximum likelihood estimate is S_n^{-1} → 0 as n → ∞.

  9. A robust conditional approximation of marginal tail probabilities.

    OpenAIRE

    Brazzale, A. R.; Ventura, L.

    2001-01-01

    The aim of this contribution is to derive a robust approximate conditional procedure used to eliminate nuisance parameters in regression and scale models. Unlike the approximations to exact conditional solutions based on the likelihood function and on the maximum likelihood estimator, the robust conditional approximation of marginal tail probabilities does not suffer from lack of robustness to model misspecification. To assess the performance of the proposed robust conditional procedure the r...

  10. Measurement of absolute concentrations of individual compounds in metabolite mixtures by gradient-selective time-zero 1H-13C HSQC with two concentration references and fast maximum likelihood reconstruction analysis.

    Science.gov (United States)

    Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L

    2011-12-15

    Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture.
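
    The two-reference calibration described above reduces to a two-point line through the (peak volume, concentration) pairs of the DSS and MES standards. The sketch below shows that arithmetic only; the numeric values are invented, and the FMLR deconvolution that produces the peak volumes is not reproduced here.

```python
import numpy as np

def volumes_to_concentrations(volumes, v_dss, c_dss, v_mes, c_mes):
    """Map peak volumes to absolute concentrations using a two-point
    linear reference: DSS at the low end, MES at the high end."""
    slope = (c_mes - c_dss) / (v_mes - v_dss)
    return c_dss + slope * (np.asarray(volumes) - v_dss)

# Hypothetical example: references at 0.5 mM (DSS) and 10 mM (MES).
conc = volumes_to_concentrations([1.2e6, 3.4e6], v_dss=2.0e5, c_dss=0.5,
                                 v_mes=4.0e6, c_mes=10.0)
print(conc)   # estimated metabolite concentrations, mM
```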

  11. Generalized Secant Hyperbolic Distribution and a Method of Estimating its Parameters: Modified Maximum Likelihood

    Directory of Open Access Journals (Sweden)

    Luis Alejandro Másmela Caita

    2013-11-01

    Full Text Available Various generalized distributions have been developed in the statistical literature, among them the generalized secant hyperbolic (SHG) distribution. This paper presents an alternative method for estimating the population parameters of the SHG, called modified maximum likelihood (MVM), assuming some alternative expressions that differ from Vaughan's 2002 work and using the same data set as the original source. The transformed MVM method is implemented computationally and yields good approximations to the location and scale parameter values presented by Vaughan in his article, the aim being to give practitioners a different estimation methodology.

  12. Weak GPS signal C/N0 estimation algorithm based on the maximum likelihood method

    Institute of Scientific and Technical Information of China (English)

    文力; 谢跃雷; 纪元法; 孙希延

    2014-01-01

    In order to estimate the carrier-to-noise ratio of a satellite signal accurately in weak-signal environments, an estimation algorithm based on the maximum likelihood criterion, which can adapt its estimation update time, is proposed. On the basis of the GPS correlator output model, the principle and performance of the algorithm are analyzed theoretically, including the impact of the number of coherent accumulations on the estimate, and the analysis is verified on a simulation platform. The simulation results agree with the theoretical derivation: when the signal is very weak, the carrier-to-noise ratio can still be estimated accurately by increasing the number of accumulations. Compared with traditional carrier-to-noise ratio estimation algorithms, the method needs a shorter estimation time while remaining accurate. The minimum number of accumulations that meets a given accuracy requirement, obtained from the theoretical derivation, is used to adjust the estimation update time adaptively, which increases the flexibility of the algorithm.

  13. Failure Distribution Model of CNC Systems Based on Maximum Likelihood Estimation

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    The Weibull distribution is widely applied in reliability engineering and lifespan data analysis. For the two-parameter Weibull distribution, this paper establishes a parameter estimation model based on maximum likelihood and uses the second-order convergent Newton-Raphson iteration to solve for the scale and shape parameters. In the iteration, the region around the zero of the likelihood function curve, plotted with Matlab, is first selected as the range of the initial value, and the sufficient conditions for convergence of the Newton-Raphson method are used to narrow that range further. A three-dimensional plot of the iteration trend, drawn with Matlab, proves consistent with the results of the iterative calculation. Finally, by comparison, this parameter estimation model and the Newton-Raphson solution method are shown to be more accurate and efficient.
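
    A compact sketch of the underlying computation follows: Newton-Raphson on the profile likelihood equation for the Weibull shape parameter, with the scale recovered in closed form. It uses a numerical derivative rather than the paper's analytic second-order scheme, and the initial value k0 must be chosen sensibly, as the abstract emphasizes.

```python
import numpy as np

def weibull_mle(x, k0=1.0, tol=1e-10, max_iter=100):
    """Two-parameter Weibull MLE via Newton-Raphson on the profile
    likelihood equation for the shape parameter k."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)

    def g(k):
        # The ML shape estimate is the zero of this function.
        xk = x ** k
        return np.sum(xk * logx) / np.sum(xk) - 1.0 / k - logx.mean()

    k = k0
    for _ in range(max_iter):
        h = 1e-6 * k
        step = g(k) / ((g(k + h) - g(k - h)) / (2 * h))  # numerical derivative
        k -= step
        if abs(step) < tol:
            break
    scale = np.mean(x ** k) ** (1.0 / k)                  # closed-form scale MLE
    return k, scale

# e.g. weibull_mle(times_to_failure) on observed CNC failure times
```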

  14. Maximum likelihood registration for passive sensors of multiple airborne platforms in WGS-84

    Institute of Scientific and Technical Information of China (English)

    吴卫华; 江晶

    2015-01-01

    Compared with the registration of active sensors or of dissimilar sensors on multiple moving airborne platforms, the registration of passive sensors on different airborne platforms is more complex owing to the missing range measurement, and there is little literature on it. Thus, a biased measurement model for passive sensors in the world geodetic system-84 (WGS-84) coordinate system is first constructed, and the maximum likelihood registration (MLR) algorithm is then extended to passive-sensor registration for multiple moving airborne platforms in WGS-84. Using the chain rule for composite functions, the Jacobian matrix of the sensor measurements with respect to the target state, which is the key quantity when applying MLR, is derived. To compute this matrix, passive localization of a target in WGS-84 by two airborne platforms with angle-only measurements is investigated. Theoretical analysis and simulation results show that the method achieves passive sensor registration with registration errors approaching the Cramer-Rao lower bound, which indicates the validity of the algorithm.

  15. Maximum Likelihood Estimation Based Algorithm for Tracking Cooperative Target

    Institute of Scientific and Technical Information of China (English)

    魏子翔; 崔嵬; 李霖; 吴爽; 吴嗣亮

    2015-01-01

    A scheme based on the digital delay locked loop (DDLL), frequency locked loop (FLL), and phase locked loop (PLL) is implemented in the microwave radar for spatial rendezvous and docking, providing delay, frequency, and direction of arrival (DOA) estimates of the incident direct-sequence spread spectrum signal transmitted by the cooperative target. The DDLL, FLL, and PLL (DFP) based scheme, however, does not make full use of the received signal. For this reason, a novel maximum likelihood estimation (MLE) based tracking (MLBT) algorithm with a low computational burden is proposed. The property that the gradients of the cost function are proportional to the parameter errors is employed to design parameter-error discriminators, and three tracking loops are set up to provide the parameter estimates. The variance characteristics of the discriminators are then investigated, and lower bounds on the root mean square errors (RMSEs) of the parameter estimates are given for the MLBT algorithm. Finally, simulations and a computational efficiency analysis are provided. The lower bounds on the RMSEs of the parameter estimates are verified, and it is shown that the MLBT algorithm achieves better estimation accuracy than the DFP based scheme with only a limited increase in computational burden.

  16. QR-code Recognition Method in Super-resolution Image Synthesis Based on Maximum Likelihood

    Institute of Scientific and Technical Information of China (English)

    梁华刚; 程加乐; 孙小喃

    2015-01-01

    With the advantages of large storage capacity in a small space, strong fault tolerance, and high decoding reliability, the QR code is widely applied in the circulation and logistics sectors. In actual identification, however, recognition is limited by factors such as the low resolution of the barcode captured by the camera. A novel low-resolution QR-code recognition method based on super-resolution image processing is presented in this paper. Simple equipment such as a mobile phone is used to shoot barcode video at low resolution; each frame is fitted nonlinearly with a maximum likelihood algorithm, and a super-resolution barcode image is then synthesized by exploiting the binary nature of QR codes, improving the recognition accuracy for low-resolution barcode video. Experiments show that this method can recognize QR codes that traditional methods cannot: the recognition rate for low-resolution barcodes of 55 × 55 pixels exceeds 85%, and the average recognition accuracy is improved by 10%.

  17. Divergence date estimation and a comprehensive molecular tree of extant cetaceans.

    Science.gov (United States)

    McGowen, Michael R; Spaulding, Michelle; Gatesy, John

    2009-12-01

    Cetaceans are remarkable among mammals for their numerous adaptations to an entirely aquatic existence, yet many aspects of their phylogeny remain unresolved. Here we merged 37 new sequences from the nuclear genes RAG1 and PRM1 with most published molecular data for the group (45 nuclear loci, transposons, mitochondrial genomes), and generated a supermatrix consisting of 42,335 characters. The great majority of these data have never been combined. Model-based analyses of the supermatrix produced a solid, consistent phylogenetic hypothesis for 87 cetacean species. Bayesian analyses corroborated odontocete (toothed whale) monophyly, stabilized basal odontocete relationships, and completely resolved branching events within Mysticeti (baleen whales) as well as the problematic speciose clade Delphinidae (oceanic dolphins). Only limited conflicts relative to maximum likelihood results were recorded, and discrepancies found in parsimony trees were very weakly supported. We utilized the Bayesian supermatrix tree to estimate divergence dates among lineages using relaxed-clock methods. Divergence estimates revealed rapid branching of basal odontocete lineages near the Eocene-Oligocene boundary, the antiquity of river dolphin lineages, a Late Miocene radiation of balaenopteroid mysticetes, and a recent rapid radiation of Delphinidae beginning approximately 10 million years ago. Our comprehensive, time-calibrated tree provides a powerful evolutionary tool for broad-scale comparative studies of Cetacea.

  18. Trajectory averaging for stochastic approximation MCMC algorithms

    CERN Document Server

    Liang, Faming

    2010-01-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...

  19. Random Tree-Puzzle leads to the Yule-Harding distribution.

    Science.gov (United States)

    Vinh, Le Sy; Fuehrer, Andrea; von Haeseler, Arndt

    2011-02-01

    Approaches to reconstruct phylogenies abound and are widely used in the study of molecular evolution. Partially through extensive simulations, we are beginning to understand the potential pitfalls as well as the advantages of different methods. However, little work has been done on possible biases introduced by the methods if the input data are random and do not carry any phylogenetic signal. Although Tree-Puzzle (Strimmer K, von Haeseler A. 1996. Quartet puzzling: a quartet maximum-likelihood method for reconstructing tree topologies. Mol Biol Evol. 13:964-969; Schmidt HA, Strimmer K, Vingron M, von Haeseler A. 2002. Tree-Puzzle: maximum likelihood phylogenetic analysis using quartets and parallel computing. Bioinformatics 18:502-504) has become common in phylogenetics, the resulting distribution of labeled unrooted bifurcating trees when data do not carry any phylogenetic signal has not been investigated. Our note shows that the distribution converges to the well-known Yule-Harding distribution. However, the bias of the Yule-Harding distribution will be diminished by a tiny amount of phylogenetic information. Keywords: maximum likelihood, phylogenetic reconstruction, Tree-Puzzle, tree distribution, Yule-Harding distribution.
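
    To make the limiting distribution concrete, here is a small sketch that samples a random labelled topology under the Yule-Harding model by repeatedly splitting a uniformly chosen leaf; it produces rooted trees (unrooted ones follow by suppressing the root) and is an illustration of the model, not of Tree-Puzzle itself.

```python
import random

def yule_harding(labels):
    """Random labelled topology under the Yule-Harding model:
    shuffle the taxa, then repeatedly split a uniform random leaf."""
    labels = list(labels)
    random.shuffle(labels)
    children = {0: []}            # node id -> list of child ids
    leaf_label = {0: labels[0]}   # current leaves and their taxon labels
    next_id = 1
    for lab in labels[1:]:
        leaf = random.choice(list(leaf_label))
        old = leaf_label.pop(leaf)
        a, b = next_id, next_id + 1
        next_id += 2
        children[leaf] = [a, b]
        children[a], children[b] = [], []
        leaf_label[a], leaf_label[b] = old, lab

    def newick(v):
        if not children[v]:
            return leaf_label[v]
        return "(" + ",".join(newick(c) for c in children[v]) + ")"
    return newick(0) + ";"

print(yule_harding(["A", "B", "C", "D", "E"]))
```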

  20. Maximum Likelihood Estimation of Ratios of Means and Standard Deviations from Normal Populations with Different Sample Numbers under Semi-Order Restriction

    Institute of Scientific and Technical Information of China (English)

    史海芳; 李树有; 姬永刚

    2008-01-01

    For two normal populations with unknown means μ_i and variances σ_i² > 0, i = 1, 2, assume that there is a semi-order restriction between the ratios of means and standard deviations and that the sample sizes of the two normal populations differ. A procedure for obtaining the maximum likelihood estimators of the μ_i and σ_i under the semi-order restriction is proposed. For the i = 3 case, some related results and simulations are given.

  1. Effectiveness of phylogenomic data and coalescent species-tree methods for resolving difficult nodes in the phylogeny of advanced snakes (Serpentes: Caenophidia).

    Science.gov (United States)

    Pyron, R Alexander; Hendry, Catriona R; Chou, Vincent M; Lemmon, Emily M; Lemmon, Alan R; Burbrink, Frank T

    2014-12-01

    Next-generation genomic sequencing promises to quickly and cheaply resolve remaining contentious nodes in the Tree of Life, and facilitates species-tree estimation while taking into account stochastic genealogical discordance among loci. Recent methods for estimating species trees bypass full likelihood-based estimates of the multi-species coalescent, and approximate the true species-tree using simpler summary metrics. These methods converge on the true species-tree with sufficient genomic sampling, even in the anomaly zone. However, no studies have yet evaluated their efficacy on a large-scale phylogenomic dataset, and compared them to previous concatenation strategies. Here, we generate such a dataset for Caenophidian snakes, a group with >2500 species that contains several rapid radiations that were poorly resolved with fewer loci. We generate sequence data for 333 single-copy nuclear loci with ∼100% coverage (∼0% missing data) for 31 major lineages. We estimate phylogenies using neighbor joining, maximum parsimony, maximum likelihood, and three summary species-tree approaches (NJst, STAR, and MP-EST). All methods yield similar resolution and support for most nodes. However, not all methods support monophyly of Caenophidia, with Acrochordidae placed as the sister taxon to Pythonidae in some analyses. Thus, phylogenomic species-tree estimation may occasionally disagree with well-supported relationships from concatenated analyses of small numbers of nuclear or mitochondrial genes, a consideration for future studies. In contrast for at least two diverse, rapid radiations (Lamprophiidae and Colubridae), phylogenomic data and species-tree inference do little to improve resolution and support. Thus, certain nodes may lack strong signal, and larger datasets and more sophisticated analyses may still fail to resolve them.

  2. Robustness to divergence time underestimation when inferring species trees from estimated gene trees.

    Science.gov (United States)

    DeGiorgio, Michael; Degnan, James H

    2014-01-01

    To infer species trees from gene trees estimated from phylogenomic data sets, tractable methods are needed that can handle dozens to hundreds of loci. We examine several computationally efficient approaches-MP-EST, STAR, STEAC, STELLS, and STEM-for inferring species trees from gene trees estimated using maximum likelihood (ML) and Bayesian approaches. Among the methods examined, we found that topology-based methods often performed better using ML gene trees and methods employing coalescent times typically performed better using Bayesian gene trees, with MP-EST, STAR, STEAC, and STELLS outperforming STEM under most conditions. We examine why the STEM tree (also called GLASS or Maximum Tree) is less accurate on estimated gene trees by comparing estimated and true coalescence times, performing species tree inference using simulations, and analyzing a great ape data set keeping track of false positive and false negative rates for inferred clades. We find that although true coalescence times are more ancient than speciation times under the multispecies coalescent model, estimated coalescence times are often more recent than speciation times. This underestimation can lead to increased bias and lack of resolution with increased sampling (either alleles or loci) when gene trees are estimated with ML. The problem appears to be less severe using Bayesian gene-tree estimates.

  3. Decision tree approach for classification of remotely sensed satellite data using open source support

    Indian Academy of Sciences (India)

    Richa Sharma; Aniruddha Ghosh; P K Joshi

    2013-10-01

    In this study, an attempt has been made to develop a decision tree classification (DTC) algorithm for classification of remotely sensed satellite data (Landsat TM) using open source support. The decision tree is constructed by recursively partitioning the spectral distribution of the training dataset using WEKA, open source data mining software. The classified image is compared with the image classified using classical ISODATA clustering and Maximum Likelihood Classifier (MLC) algorithms. Classification result based on DTC method provided better visual depiction than results produced by ISODATA clustering or by MLC algorithms. The overall accuracy was found to be 90% (kappa = 0.88) using the DTC, 76.67% (kappa = 0.72) using the Maximum Likelihood and 57.5% (kappa = 0.49) using ISODATA clustering method. Based on the overall accuracy and kappa statistics, DTC was found to be more preferred classification approach than others.
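
    The study builds its decision tree with WEKA; as a stand-in illustration of the same workflow (train a tree on per-pixel band values, then score accuracy and kappa), here is a short scikit-learn sketch on synthetic data. The band values and labels are invented; only the structure of the experiment is shown.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical stand-in for per-pixel Landsat TM band values and class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                    # six spectral bands
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # toy land-cover classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred))
```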

  4. Approximate Bayes Estimators of the Logistic Distribution Parameters Based on Progressive Type-II Censoring Scheme

    Directory of Open Access Journals (Sweden)

    Mohamed Mahmoud Mohamed

    2016-09-01

    Full Text Available In this paper we develop approximate Bayes estimators of the parameters, reliability, and hazard rate functions of the Logistic distribution by using Lindley's approximation, based on progressively type-II censored samples. Non-informative prior distributions are used for the parameters. Quadratic, linex and general entropy loss functions are used. The statistical performances of the Bayes estimates relative to the quadratic, linex and general entropy loss functions are compared to those of the maximum likelihood estimates in a simulation study.

  5. Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling

    Science.gov (United States)

    Oort, Frans J.; Jak, Suzanne

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients that are reported by a number of independent studies. MASEM typically consist of two stages. The method that has been found to perform best in terms of statistical…

  6. Marginal maximum likelihood estimation in polytomous Rasch models using SAS

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Olsbjerg, Maja

    2013-01-01

    Danish radio history is in many ways an unwritten chapter. Although several publications exist from Statsradiofonien itself, along with a four-volume Danish media history, many central questions still lie in the dark of history. The present dissertation seeks to remedy this, with a focus on …

  7. A Rayleigh Doppler Frequency Estimator Derived from Maximum Likelihood Theory

    DEFF Research Database (Denmark)

    Hansen, Henrik; Affes, Sofiene; Mermelstein, Paul

    1999-01-01

    Reliable estimates of Rayleigh Doppler frequency are useful for the optimization of adaptive multiple access wireless receivers. The adaptation parameters of such receivers are sensitive to the amount of Doppler, and automatic reconfiguration to the speed of terminal movement can optimize cell...

  9. Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2004-01-01

    The aspect of correlation among the blood velocities in time and space has not received much attention in previous blood velocity estimators. The theory of fluid mechanics predicts this property of the blood flow. Additionally, most estimators based on a cross-correlation analysis are limited ... of simulated and in vivo data from the carotid artery. The estimator is meant for two-dimensional (2-D) color flow imaging. The resulting mathematical relation for the estimator consists of two terms. The first term performs a cross-correlation analysis on the signal segment in the radio frequency (RF) data under investigation. The flow physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator...
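
    The cross-correlation term in such estimators amounts to finding the lag that best aligns RF segments from consecutive emissions. A minimal sketch of that step follows; the velocity conversion in the comment is the classic time-shift relation, and the physics-based second term of the paper's estimator is not reproduced.

```python
import numpy as np

def time_shift(seg1, seg2, fs):
    """Lag (seconds) maximizing the cross-correlation of two RF segments."""
    a = seg1 - np.mean(seg1)
    b = seg2 - np.mean(seg2)
    xc = np.correlate(b, a, mode="full")
    return (np.argmax(xc) - (len(a) - 1)) / fs

# Axial velocity from the shift between emissions fired T_prf apart:
# v = c * shift / (2 * T_prf), with c the speed of sound in tissue.
```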

  10. Maximum likelihood estimation for life distributions with competing failure modes

    Science.gov (United States)

    Sidik, S. M.

    1979-01-01

    The general model for the competing failure modes assuming that location parameters for each mode are expressible as linear functions of the stress variables and the failure modes act independently is presented. The general form of the likelihood function and the likelihood equations are derived for the extreme value distributions, and solving these equations using nonlinear least squares techniques provides an estimate of the asymptotic covariance matrix of the estimators. Monte-Carlo results indicate that, under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  11. Penalized maximum likelihood estimation for generalized linear point processes

    OpenAIRE

    2010-01-01

    A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces we...

  12. Maximum Likelihood Estimation of Multivariate Autoregressive-Moving Average Models.

    Science.gov (United States)

    1977-02-01

    ... maximizing the same have been proposed (i) in the time domain by Box and Jenkins [4], Astrom [3], Wilson [23], and Phadke [16], and (ii) in the frequency domain by ... "moving average residuals and other covariance matrices with linear structure", Annals of Statistics, 3. Astrom, K. J. (1970), Introduction to ...

  13. Bayesian and maximum likelihood estimation of genetic maps

    DEFF Research Database (Denmark)

    York, Thomas L.; Durrett, Richard T.; Tanksley, Steven;

    2005-01-01

    There has recently been increased interest in the use of Markov Chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods that makes the Bayesian method applicable to large data sets. We present an extensive simulation study examining the statistical properties of the method and comparing it with the likelihood method implemented in Mapmaker. We show that the Maximum A Posteriori (MAP) estimator of the genetic distances...

  14. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    Science.gov (United States)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

    A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix and therefore the residue matrices are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such types of systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to either be undamped or proportional damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish modal model that satisfies such motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered as a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  15. A Unified Classical Maximum Likelihood Approach for Estimating P-S-N Curves of Three Commonly Used Fatigue Stress-Life Relations

    Institute of Scientific and Technical Information of China (English)

    赵永翔; 王金诺; 高庆

    2001-01-01

    拓展经典极大似然法到Langer模型,提出了估计三参数、Langer和Basquin三种常用疲劳应力-寿命模型P-S-N曲线及其置信限的统一方法。方法用于处理极大似然法疲劳试验得到的S-N数据。该试验在特别关注的参考载荷试验一组试样,其余试样在不同载荷下试验。以参考载荷试验数据的统计参量为基础,按照每个模型中材料常数协同处于相同概率水平原则,将曲线表示为对数疲劳寿命均值和均方差线的广义形式,至多4个材料常数。曲线中的材料常数按极大似然原理采用数学规划法求出。45#碳钢缺口试样(kt=20)对称循环加载试验数据的分析说明了方法的有效性。分析还揭示合理模型有必要通过比较拟合效果、预计误差和应用安全性来确定。三参数模型的拟合效果最好,Langer模型稍差,Basquin模型较差。从拟合效果、预计误差和应用安全性角度,Basquin模型不适于描述该套数据。此外,经典极大似然法估计结果可能因受局部统计参量影响而给出非安全估计,有必要发展改进的可以最大限度减小这种影响的方法。%A unified classical maximum likelihood approach for estimating P-S-N curves of the three commonly used fatigue stress-life relations, namely three parameter, Langer and Basquin, is presented by extrapolating the classical maximum likelihood method to the Langer relation. This approach is applied to deal with the S-N data obtained from a so-called maximum likelihood method-fatigue test. In the test, a group of specimens are tested at a so-called reference load, which is specially taken care of by practice, and residual specimens are individually fatigued at different loads. The approach takes a basis of the local statistical parameters of the logarithms of fatigue lives at the reference load. According to an assumption that the material constants in each relation are concurrently in

  16. Monotone Boolean approximation

    Energy Technology Data Exchange (ETDEWEB)

    Hulme, B.L.

    1982-12-01

    This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
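
    The best monotone bounds mentioned above have a simple pointwise construction: the greatest monotone function below f is g(x) = min over y ≥ x of f(y), and the least monotone function above f is u(x) = max over y ≤ x of f(y). The sketch below implements that generic construction by exhaustion over {0,1}^n; it illustrates the bounds themselves, not necessarily the report's algorithms, and the example function is invented.

```python
from itertools import product

def monotone_bounds(f, n):
    """Greatest monotone lower bound g and least monotone upper bound u
    of a Boolean function f on {0,1}^n (pointwise construction)."""
    pts = list(product((0, 1), repeat=n))
    ge = lambda y, x: all(yi >= xi for yi, xi in zip(y, x))
    g = {x: min(f(y) for y in pts if ge(y, x)) for x in pts}
    u = {x: max(f(y) for y in pts if ge(x, y)) for x in pts}
    return g, u

# Example: a noncoherent structure, "exactly 2 of 3 components fail".
f = lambda x: int(sum(x) == 2)
g, u = monotone_bounds(f, 3)
print(g[(1, 1, 0)], u[(1, 1, 0)])   # lower and upper bound at one point
```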

  17. Algorithms, data structures, and numerics for likelihood-based phylogenetic inference of huge trees

    Directory of Open Access Journals (Sweden)

    Izquierdo-Carrasco Fernando

    2011-12-01

    Full Text Available Abstract. Background: The rapid accumulation of molecular sequence data, driven by novel wet-lab sequencing technologies, poses new challenges for large-scale maximum likelihood-based phylogenetic analyses on trees with more than 30,000 taxa and several genes. The three main computational challenges are: numerical stability, the scalability of search algorithms, and the high memory requirements for computing the likelihood. Results: We introduce methods for solving these three key problems and provide respective proof-of-concept implementations in RAxML. The mechanisms presented here are not RAxML-specific and can thus be applied to any likelihood-based (Bayesian or maximum likelihood) tree inference program. We develop a new search strategy that can reduce the time required for tree inferences by more than 50% while yielding equally good trees (in the statistical sense) for well-chosen starting trees. We present an adaptation of the Subtree Equality Vector technique for phylogenomic datasets with missing data (already available in RAxML v7.2.8) that can reduce execution times and memory requirements by up to 50%. Finally, we discuss issues pertaining to the numerical stability of the Γ model of rate heterogeneity on very large trees and argue in favor of rate heterogeneity models that use a single rate or rate category for each site to resolve these problems. Conclusions: We address three major issues pertaining to large scale tree reconstruction under maximum likelihood and propose respective solutions. Respective proof-of-concept/production-level implementations of our ideas are made available as open-source code.

  18. Modeling disease vector occurrence when detection is imperfect: infestation of Amazonian palm trees by triatomine bugs at three spatial scales.

    Directory of Open Access Journals (Sweden)

    Fernando Abad-Franch

    Full Text Available BACKGROUND: Failure to detect a disease agent or vector where it actually occurs constitutes a serious drawback in epidemiology. In the pervasive situation where no sampling technique is perfect, the explicit analytical treatment of detection failure becomes a key step in the estimation of epidemiological parameters. We illustrate this approach with a study of Attalea palm tree infestation by Rhodnius spp. (Triatominae), the most important vectors of Chagas disease (CD) in northern South America. METHODOLOGY/PRINCIPAL FINDINGS: The probability of detecting triatomines in infested palms is estimated by repeatedly sampling each palm. This knowledge is used to derive an unbiased estimate of the biologically relevant probability of palm infestation. We combine maximum-likelihood analysis and information-theoretic model selection to test the relationships between environmental covariates and infestation of 298 Amazonian palm trees over three spatial scales: region within Amazonia, landscape, and individual palm. Palm infestation estimates are high (40-60% across regions), and well above the observed infestation rate (24%). Detection probability is higher (approximately 0.55 on average) in the richest-soil region than elsewhere (approximately 0.08). Infestation estimates are similar in forest and rural areas, but lower in urban landscapes. Finally, individual palm covariates (accumulated organic matter and stem height) explain most of the infestation rate variation. CONCLUSIONS/SIGNIFICANCE: Individual palm attributes appear as key drivers of infestation, suggesting that CD surveillance must incorporate local-scale knowledge and that peridomestic palm tree management might help lower transmission risk. Vector populations are probably denser in rich-soil sub-regions, where CD prevalence tends to be higher; this suggests a target for research on broad-scale risk mapping. Landscape-scale effects indicate that palm triatomine populations can endure deforestation...

  19. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
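
    Trajectory (Polyak-Ruppert) averaging is easy to picture on a plain Robbins-Monro recursion: run the noisy iteration with a slowly decaying gain and keep a running average of the path. The toy sketch below finds the root of a noisy mean field at 3; it illustrates the averaging idea only, not the SAMCMC setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0   # raw stochastic-approximation iterate
avg = 0.0     # Polyak-Ruppert trajectory average

for n in range(1, 100_001):
    noisy_field = (theta - 3.0) + rng.normal()  # mean field has its root at 3
    theta -= n ** -0.7 * noisy_field            # slowly decaying step size
    avg += (theta - avg) / n                    # running average of the path

print(theta, avg)   # the averaged iterate is typically much closer to 3
```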

  20. Genetic distances and phylogenetic trees of different Awassi sheep populations based on DNA sequencing.

    Science.gov (United States)

    Al-Atiyat, R M; Aljumaah, R S

    2014-01-01

    This study aimed to estimate evolutionary distances and to reconstruct phylogeny trees between different Awassi sheep populations. Thirty-two sheep individuals from three different geographical areas of Jordan and the Kingdom of Saudi Arabia (KSA) were randomly sampled. DNA was extracted from the tissue samples and sequenced using the T7 promoter universal primer. Different phylogenetic trees were reconstructed from 0.64-kb DNA sequences using the MEGA software with the best general time reverse distance model. Three methods of distance estimation were then used. The maximum composite likelihood test was considered for reconstructing maximum likelihood, neighbor-joining and UPGMA trees. The maximum likelihood tree indicated three major clusters separated by cytosine (C) and thymine (T). The greatest distance was shown between the South sheep and North sheep. On the other hand, the KSA sheep as an outgroup showed shorter evolutionary distance to the North sheep population than to the others. The neighbor-joining and UPGMA trees showed quite reliable clusters of evolutionary differentiation of Jordan sheep populations from the Saudi population. The overall results support geographical information and ecological types of the sheep populations studied. Summing up, the resulting phylogeny trees may contribute to the limited information about the genetic relatedness and phylogeny of Awassi sheep in nearby Arab countries.
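
    As a small illustration of the distance-based part of such an analysis, the sketch below builds a UPGMA tree from a pairwise distance matrix with scipy's average-linkage clustering. The populations and distance values are invented placeholders; MEGA's maximum likelihood and neighbor-joining reconstructions are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree
from scipy.spatial.distance import squareform

labels = ["North", "South", "Central", "KSA"]    # hypothetical populations
D = np.array([[0.00, 0.08, 0.05, 0.03],
              [0.08, 0.00, 0.06, 0.07],
              [0.05, 0.06, 0.00, 0.04],
              [0.03, 0.07, 0.04, 0.00]])         # made-up pairwise distances

Z = linkage(squareform(D), method="average")      # UPGMA clustering
tree = to_tree(Z)                                 # rooted tree structure
print(Z)
```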

  1. Carrier Tracking Loop in High Dynamic Environment Aided by Fast Maximum Likelihood Estimation of Doppler Frequency Rate-of-change

    Institute of Scientific and Technical Information of China (English)

    郇浩; 陶选如; 陶然; 程小康; 董朝; 李鹏飞

    2014-01-01

    In a high-dynamic environment the received signal contains a large Doppler frequency and Doppler frequency rate-of-change, and it is difficult for a carrier tracking loop to reach a good compromise between dynamic performance and tracking accuracy. To address this, a fast maximum likelihood estimation method for the Doppler frequency rate-of-change is proposed in this paper, and the estimate is used to aid the carrier tracking loop. First, it is pointed out that the maximum likelihood estimation of the Doppler frequency and its rate-of-change is equivalent to the Fractional Fourier Transform (FrFT). Second, to reduce the large computational load of the two-dimensional search over Doppler frequency and its rate-of-change, an estimation method combining instantaneous self-correlation with a segmental Discrete Fourier Transform (DFT) is proposed, and the resulting coarse estimate is used to narrow the search range. Finally, the estimate is applied in the carrier tracking loop to reduce the dynamic stress and improve the tracking accuracy. Theoretical analysis and computer simulation show that the search computation falls to 5.25 percent of the original amount at a Signal-to-Noise Ratio (SNR) of -30 dB, the Root Mean Square Error (RMSE) of the tracked frequency rate is only 8.46 Hz/s, and compared with traditional carrier tracking methods the tracking sensitivity is improved by more than 3 dB.
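
    The instantaneous self-correlation step has a neat interpretation: multiplying a chirp by the conjugate of a delayed copy of itself yields a pure tone whose frequency is proportional to the chirp rate. The noise-free toy below shows that reduction with invented parameters; the paper's FrFT machinery and search-range refinement are not reproduced.

```python
import numpy as np

fs, T = 1e5, 0.1
t = np.arange(0, T, 1 / fs)
a_true = 5e3                                  # Doppler rate-of-change (Hz/s)
x = np.exp(1j * np.pi * a_true * t ** 2)      # chirp: instantaneous freq = a*t

lag = 1000                                    # self-correlation lag (samples)
y = x[lag:] * np.conj(x[:-lag])               # tone at frequency a * lag / fs

nfft = 1 << 17                                # zero-pad for finer frequency grid
spec = np.abs(np.fft.fft(y, nfft))
f_peak = np.fft.fftfreq(nfft, 1 / fs)[np.argmax(spec)]
a_hat = f_peak * fs / lag                     # coarse rate estimate, ~5e3 Hz/s
print(a_hat)
```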

  2. Maximum likelihood self-calibration for direction-dependent gain-phase errors with carry-on instrumental sensors: case of deterministic signal model

    Institute of Scientific and Technical Information of China (English)

    王鼎; 潘苗; 吴瑛

    2011-01-01

    Aiming at the self-calibration of direction-dependent gain-phase errors in the case of a deterministic signal model, a maximum likelihood method (MLM) for calibrating direction-dependent gain-phase errors with carry-on instrumental sensors is presented. In order to maximize the high-dimensional nonlinear cost function appearing in the MLM, an improved alternating projection iteration algorithm is proposed, which jointly optimizes the azimuths and the direction-dependent gain-phase errors. Closed-form expressions of the Cramér-Rao bound (CRB) for the azimuths and the gain-phase errors are also derived. Simulation experiments show the effectiveness and advantages of the new method.

  3. Linear Time Approximation Schemes for the Gale-Berlekamp Game and Related Minimization Problems

    CERN Document Server

    Karpinski, Marek

    2008-01-01

    We design a linear time approximation scheme for the Gale-Berlekamp Switching Game and generalize it to a wider class of dense fragile minimization problems including the Nearest Codeword Problem (NCP) and Unique Games Problem. Further applications include, among other things, finding a constrained form of matrix rigidity and maximum likelihood decoding of an error correcting code. As another application of our method we give the first linear time approximation schemes for correlation clustering with a fixed number of clusters and its hierarchical generalization. Our results depend on a new technique for dealing with small objective function values of optimization problems and could be of independent interest.

  4. Making Tree Ensembles Interpretable

    OpenAIRE

    Hara, Satoshi; Hayashi, Kohei

    2016-01-01

    Tree ensembles, such as random forest and boosted trees, are renowned for their high prediction performance, whereas their interpretability is critically limited. In this paper, we propose a post processing method that improves the model interpretability of tree ensembles. After learning a complex tree ensembles in a standard way, we approximate it by a simpler model that is interpretable for human. To obtain the simpler model, we derive the EM algorithm minimizing the KL divergence from the ...

  5. Approximated mutual information training for speech recognition using myoelectric signals.

    Science.gov (United States)

    Guo, Hua J; Chan, A D C

    2006-01-01

    A new training algorithm called approximated maximum mutual information (AMMI) training is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained with the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those of ML training, increasing the accuracy by approximately 3% on average.

  6. Study on discrete wavelet packet modulation based on pilot signal and maximum likelihood estimation algorithm%基于导频信号和最大似然估计算法的离散小波包调制的研究

    Institute of Scientific and Technical Information of China (English)

    高建中; 孟立凡; 史万莉

    2009-01-01

    The scarcity of radio spectrum has made spectral efficiency an important issue for wireless communication systems. Multicarrier modulation can effectively address this problem; owing to its high spectral efficiency and its ability to mitigate the inter-symbol interference (ISI) caused by multipath fading channels, it has become one of the key transmission technologies of fourth-generation (4G) wireless communication. This paper proposes a discrete wavelet packet modulation (DWPM) system based on a pilot signal and a maximum likelihood estimation algorithm (MLEA). Channel state information (CSI) is obtained through the design of the pilot signal, the optimal CSI is estimated with the MLEA, and the inter-symbol interference caused by multipath fading is then suppressed with a zero-forcing algorithm. Theoretical analysis and simulation demonstrate that DWPM based on a pilot signal and the MLEA is a multicarrier modulation technique worth considering.
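
    The pilot-based channel estimation and zero-forcing steps reduce, per subband, to two divisions. The toy below shows just that arithmetic on a flat-fading model with invented gains and QPSK data; the wavelet packet transform and the ML refinement of the CSI are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=8) + 1j * rng.normal(size=8)           # unknown subband gains
tx_pilot = np.ones(8)                                       # known pilot symbols
tx_data = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=8)    # QPSK data symbols
noise = lambda: 0.01 * (rng.normal(size=8) + 1j * rng.normal(size=8))

rx_pilot = H * tx_pilot + noise()
rx_data = H * tx_data + noise()

H_hat = rx_pilot / tx_pilot      # pilot-based channel estimate (raw CSI)
x_hat = rx_data / H_hat          # zero-forcing equalisation inverts the channel
print(np.round(x_hat, 2))
```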

  7. Maximum Likelihood TDOA-FDOA Estimator Using Markov Chain Monte Carlo Sampling

    Institute of Scientific and Technical Information of China (English)

    赵拥军; 赵勇胜; 赵闯

    2016-01-01

    This paper investigates the joint estimation of Time Difference Of Arrival (TDOA) and Frequency Difference Of Arrival (FDOA) in passive location systems, where the true value of the reference signal is unknown. A novel Maximum Likelihood (ML) estimator of TDOA and FDOA is constructed, and the Markov Chain Monte Carlo (MCMC) method is applied to find the global maximum of the likelihood function by generating realizations of TDOA and FDOA whose sample mean gives the estimate. Unlike the Cross Ambiguity Function (CAF) algorithm or the Expectation Maximization (EM) algorithm, the proposed algorithm can also estimate TDOAs and FDOAs that are not integer multiples of the sampling interval, and it does not depend on an initial estimate or suffer from convergence problems. The Cramer-Rao Lower Bound (CRLB) is also derived. Simulation results show that the proposed algorithm outperforms the CAF and EM algorithms under different SNR conditions, with higher accuracy and lower computational complexity.
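
    The sampling-then-averaging idea can be sketched with a plain random-walk Metropolis chain over (TDOA, FDOA). The log-likelihood below is a toy unimodal stand-in, and all numbers are invented; the paper's actual signal-based likelihood is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_lik(tau, nu):
    # Toy surrogate for the TDOA/FDOA log-likelihood surface.
    return -((tau - 2.3e-6) / 1e-6) ** 2 - ((nu - 45.0) / 10.0) ** 2

x = np.array([0.0, 0.0])            # current (TDOA seconds, FDOA Hz)
samples = []
for _ in range(20_000):
    prop = x + rng.normal(scale=[0.5e-6, 5.0])      # random-walk proposal
    if np.log(rng.random()) < log_lik(*prop) - log_lik(*x):
        x = prop                                     # Metropolis accept
    samples.append(x.copy())

tau_hat, nu_hat = np.mean(samples[5_000:], axis=0)   # sample mean after burn-in
print(tau_hat, nu_hat)
```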

  8. Comparison between artificial neural networks and maximum likelihood classification in digital soil mapping

    Directory of Open Access Journals (Sweden)

    César da Silva Chagas

    2013-04-01

    Full Text Available Soil surveys are the main source of spatial information on soils and have a range of different applications, mainly in agriculture. The continuity of this activity has however been severely compromised, mainly due to a lack of governmental funding. The purpose of this study was to evaluate the feasibility of two different classifiers (artificial neural networks and a maximum likelihood algorithm) in the prediction of soil classes in the northwest of the state of Rio de Janeiro. Terrain attributes such as elevation, slope, aspect, plan curvature and compound topographic index (CTI), together with indices of clay minerals, iron oxide and the Normalized Difference Vegetation Index (NDVI) derived from Landsat 7 ETM+ sensor imagery, were used as discriminating variables. The two classifiers were trained and validated for each soil class using 300 and 150 samples respectively, representing the characteristics of these classes in terms of the discriminating variables. According to the statistical tests, the accuracy of the classifier based on artificial neural networks (ANNs) was greater than that of the classic Maximum Likelihood Classifier (MLC). Comparison with 126 reference points showed that the resulting ANN map (73.81%) was superior to the MLC map (57.94%). The main errors with the two classifiers were caused by: (a) the geological heterogeneity of the area coupled with problems related to the geological map; (b) the depth of lithic contact and/or rock exposure; and (c) problems with the environmental correlation model used, due to the polygenetic nature of the soils. This study confirms that the use of terrain attributes together with remote sensing data in an ANN approach can facilitate soil mapping in Brazil, primarily because of the availability of low-cost remote sensing data and the ease with which terrain attributes can be obtained.
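
    The MLC baseline in such studies is usually a per-class multivariate Gaussian classifier: fit a mean and covariance per soil class and assign each pixel to the class with the highest log-likelihood. A minimal sketch follows, assuming equal class priors; it is a generic stand-in, not the exact implementation used in the study.

```python
import numpy as np

class GaussianMLC:
    """Per-class multivariate Gaussian maximum likelihood classifier
    (equal priors assumed, so only the log-density matters)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False)
            self.params_[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, icov, logdet = self.params_[c]
            d = X - mu
            # Gaussian log-density up to a constant: -0.5*(log|C| + Mahalanobis^2)
            scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, icov, d)))
        return self.classes_[np.argmax(scores, axis=0)]

# usage sketch: pred = GaussianMLC().fit(X_train, y_train).predict(X_test)
```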

  9. A heuristic rate smoothing procedure for maximum likelihood estimation of species divergence times

    Institute of Scientific and Technical Information of China (English)

    Ziheng YANG

    2004-01-01

Estimation of species divergence times is well-known to be sensitive to violation of the molecular clock assumption (rate constancy over time). However, the molecular clock is almost always violated in comparisons of distantly related species, such as different orders of mammals. Thus it is important to take into account different rates among lineages when divergence times are estimated. The maximum likelihood method provides a framework for accommodating rate variation and can naturally accommodate heterogeneous datasets from multiple loci and fossil calibrations at multiple nodes. Previous implementations of the likelihood method require the researcher to assign branches to different rate classes. In this paper, I implement a heuristic rate-smoothing algorithm (the AHRS algorithm) to automate the assignment of branches to rate groups. The method combines features of previous likelihood, Bayesian and rate-smoothing methods. The likelihood algorithm is also improved to accommodate missing sequences at some loci in the combined analysis. The new method is applied to estimate the divergence times of the mouse lemurs of Madagascar, and the results are compared with previous likelihood and Bayesian analyses [Acta Zoologica Sinica 50(4): 645-656, 2004].

  10. morePhyML: improving the phylogenetic tree space exploration with PhyML 3.

    Science.gov (United States)

    Criscuolo, Alexis

    2011-12-01

PhyML is a widely used Maximum Likelihood (ML) phylogenetic tree inference software based on a standard hill-climbing method. Starting from an initial tree, version 3 of PhyML explores the tree space by using "Nearest Neighbor Interchange" (NNI) or "Subtree Pruning and Regrafting" (SPR) tree swapping techniques in order to find the ML phylogenetic tree. NNI-based local searches are fast but can often get trapped in local optima, whereas it is expected that the larger (but slower to cover) SPR-based neighborhoods will lead to trees with higher likelihood. Here, I verify that PhyML infers more likely trees with SPRs than with NNIs in almost all cases. However, I also show that the SPR-based local search of PhyML often fails to locate the ML tree. To improve the tree space exploration, I deliver a script, named morePhyML, which allows escaping from local optima by performing character reweighting. This ML tree search strategy, named ratchet, often leads to higher likelihood estimates. Based on the analysis of a large number of amino acid and nucleotide data sets, I show that morePhyML allows inferring more accurate phylogenetic trees than several other recently developed ML tree inference programs in many cases.

  11. Building phylogenetic trees from molecular data with MEGA.

    Science.gov (United States)

    Hall, Barry G

    2013-05-01

Phylogenetic analysis is sometimes regarded as being an intimidating, complex process that requires expertise and years of experience. In fact, it is a fairly straightforward process that can be learned quickly and applied effectively. This protocol describes, for novices, the steps required to produce a phylogenetic tree from molecular data. In the example illustrated here, the program MEGA is used to implement all those steps, thereby eliminating the need to learn several programs and to deal with multiple file formats from one step to another (Tamura K, Peterson D, Peterson N, Stecher G, Nei M, Kumar S. 2011. MEGA5: molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Mol Biol Evol. 28:2731-2739). The first step, identification of a set of homologous sequences and downloading those sequences, is implemented by MEGA's own browser built on top of the Google Chrome toolkit. For the second step, alignment of those sequences, MEGA offers two different algorithms: ClustalW and MUSCLE. For the third step, construction of a phylogenetic tree from the aligned sequences, MEGA offers many different methods. Here we illustrate the maximum likelihood method, beginning with MEGA's Models feature, which permits selecting the most suitable substitution model. Finally, MEGA provides a powerful and flexible interface for the final step, actually drawing the tree for publication. Here a step-by-step protocol is presented in sufficient detail to allow a novice to start with a sequence of interest and to build a publication-quality tree illustrating the evolution of an appropriate set of homologs of that sequence. MEGA is available for use on PCs and Macs from www.megasoftware.net.

  12. Approximate Representations and Approximate Homomorphisms

    CERN Document Server

    Moore, Cristopher

    2010-01-01

Approximate algebraic structures play a defining role in arithmetic combinatorics and have found remarkable applications to basic questions in number theory and pseudorandomness. Here we study approximate representations of finite groups: functions f:G -> U_d such that Pr[f(xy) = f(x) f(y)] is large, or, more generally, Exp_{x,y} ||f(xy) - f(x)f(y)||^2 is small, where x and y are uniformly random elements of the group G and U_d denotes the unitary group of degree d. We bound these quantities in terms of the ratio d / d_min where d_min is the dimension of the smallest nontrivial representation of G. As an application, we bound the extent to which a function f : G -> H can be an approximate homomorphism where H is another finite group. We show that if H's representations are significantly smaller than G's, no such f can be much more homomorphic than a random function. We interpret these results as showing that if G is quasirandom, that is, if d_min is large, then G cannot be embedded in a small number of dimensi...
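
A small numerical illustration of the defect quantity discussed above, under toy assumptions: for the cyclic group Z_n the character f(x) = exp(2*pi*i*x/n) is an exact one-dimensional representation, so its defect vanishes, while a phase-perturbed version is only an approximate representation.

```python
# Toy check of the homomorphism defect Exp_{x,y}|f(xy) - f(x)f(y)|^2 for Z_n.
# This is an illustration built for this note, not an example from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 64
xs = np.arange(n)

def defect(f):
    fx = f[xs[:, None]]
    fy = f[xs[None, :]]
    fxy = f[(xs[:, None] + xs[None, :]) % n]    # group operation: addition mod n
    return np.mean(np.abs(fxy - fx * fy) ** 2)

f_exact = np.exp(2j * np.pi * xs / n)                            # exact 1-dim rep
f_noisy = np.exp(2j * np.pi * (xs / n + 0.05 * rng.normal(size=n)))  # perturbed
print(defect(f_exact))   # ~0: exact representation
print(defect(f_noisy))   # > 0: only an approximate representation
```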

  13. Approximate Likelihood

    CERN Document Server

    CERN. Geneva

    2015-01-01

Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated to systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
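
The classifier-based likelihood-ratio idea sketched in this abstract can be illustrated in a few lines: for balanced samples from two densities, a calibrated classifier score s(x) yields s/(1-s) as an estimate of the density ratio. The Gaussian toy densities and scikit-learn usage below are assumptions for illustration only.

```python
# Likelihood-ratio trick: train a classifier between samples of p1 and p0,
# then use s/(1-s) as an estimate of p1(x)/p0(x). Toy 1-D Gaussians here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, size=(50000, 1))      # "background" sample, p0
x1 = rng.normal(1.0, 1.0, size=(50000, 1))      # "signal" sample, p1
X = np.vstack([x0, x1])
y = np.hstack([np.zeros(len(x0)), np.ones(len(x1))])

clf = LogisticRegression().fit(X, y)            # well-specified for this toy case

x = np.array([[0.5]])
s = clf.predict_proba(x)[0, 1]
lr_approx = s / (1.0 - s)
# Exact ratio of the two unit-variance Gaussians, for comparison.
lr_exact = np.exp(-0.5 * ((x - 1.0) ** 2 - x ** 2)).item()
print(lr_approx, lr_exact)                      # the two numbers should be close
```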

  14. Error-tolerant Tree Matching

    CERN Document Server

    Oflazer, K

    1996-01-01

    This paper presents an efficient algorithm for retrieving from a database of trees, all trees that match a given query tree approximately, that is, within a certain error tolerance. It has natural language processing applications in searching for matches in example-based translation systems, and retrieval from lexical databases containing entries of complex feature structures. The algorithm has been implemented on SparcStations, and for large randomly generated synthetic tree databases (some having tens of thousands of trees) it can associatively search for trees with a small error, in a matter of tenths of a second to few seconds.

  15. bcgTree: automatized phylogenetic tree building from bacterial core genomes.

    Science.gov (United States)

    Ankenbrand, Markus J; Keller, Alexander

    2016-10-01

The need for multi-gene analyses in scientific fields such as phylogenetics and DNA barcoding has increased in recent years. In particular, these approaches are increasingly important for differentiating bacterial species, where reliance on the standard 16S rDNA marker can result in poor resolution. Additionally, the assembly of bacterial genomes has become a standard task due to advances in next-generation sequencing technologies. We created a bioinformatic pipeline, bcgTree, which uses assembled bacterial genomes, either from databases or from the user's own sequencing results, to reconstruct their phylogenetic history. The pipeline automatically extracts 107 essential single-copy core genes, found in a majority of bacteria, using hidden Markov models and performs a partitioned maximum-likelihood analysis. Here, we describe the workflow of bcgTree and, as a proof-of-concept, its usefulness in resolving the phylogeny of 293 publicly available bacterial strains of the genus Lactobacillus. We also evaluate its performance in both low- and high-level taxonomy test sets. The tool is freely available at github ( https://github.com/iimog/bcgTree ) and our institutional homepage ( http://www.dna-analytics.biozentrum.uni-wuerzburg.de ).

  16. Uncertain-tree: discriminating among competing approaches to the phylogenetic analysis of phenotype data

    Science.gov (United States)

    Tanner, Alastair R.; Fleming, James F.; Tarver, James E.; Pisani, Davide

    2017-01-01

    Morphological data provide the only means of classifying the majority of life's history, but the choice between competing phylogenetic methods for the analysis of morphology is unclear. Traditionally, parsimony methods have been favoured but recent studies have shown that these approaches are less accurate than the Bayesian implementation of the Mk model. Here we expand on these findings in several ways: we assess the impact of tree shape and maximum-likelihood estimation using the Mk model, as well as analysing data composed of both binary and multistate characters. We find that all methods struggle to correctly resolve deep clades within asymmetric trees, and when analysing small character matrices. The Bayesian Mk model is the most accurate method for estimating topology, but with lower resolution than other methods. Equal weights parsimony is more accurate than implied weights parsimony, and maximum-likelihood estimation using the Mk model is the least accurate method. We conclude that the Bayesian implementation of the Mk model should be the default method for phylogenetic estimation from phenotype datasets, and we explore the implications of our simulations in reanalysing several empirical morphological character matrices. A consequence of our finding is that high levels of resolution or the ability to classify species or groups with much confidence should not be expected when using small datasets. It is now necessary to depart from the traditional parsimony paradigms of constructing character matrices, towards datasets constructed explicitly for Bayesian methods. PMID:28077778

  17. Reinforcement Learning via AIXI Approximation

    CERN Document Server

    Veness, Joel; Hutter, Marcus; Silver, David

    2010-01-01

    This paper introduces a principled approach for the design of a scalable general reinforcement learning agent. This approach is based on a direct approximation of AIXI, a Bayesian optimality notion for general reinforcement learning agents. Previously, it has been unclear whether the theory of AIXI could motivate the design of practical algorithms. We answer this hitherto open question in the affirmative, by providing the first computationally feasible approximation to the AIXI agent. To develop our approximation, we introduce a Monte Carlo Tree Search algorithm along with an agent-specific extension of the Context Tree Weighting algorithm. Empirically, we present a set of encouraging results on a number of stochastic, unknown, and partially observable domains.

  18. Topics in Metric Approximation

    Science.gov (United States)

    Leeb, William Edward

    This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.

  19. A best-first tree-searching approach for ML decoding in MIMO system

    KAUST Repository

    Shen, Chung-An

    2012-07-28

In MIMO communication systems maximum-likelihood (ML) decoding can be formulated as a tree-searching problem. This paper presents a tree-searching approach that combines the features of classical depth-first and breadth-first approaches to achieve close to ML performance while minimizing the number of visited nodes. A detailed outline of the algorithm is given, including the required storage. The effects of storage size on BER performance and complexity in terms of search space are also studied. Our results demonstrate that, with a proper choice of storage size, the proposed method visits 40% fewer nodes than a sphere decoding algorithm at a signal-to-noise ratio (SNR) of 20 dB, and an order of magnitude fewer nodes at an SNR of 0 dB.
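
A generic best-first (priority-queue) tree search for ML detection can be sketched as follows; this is a plain stack-decoder illustration of the idea, without the storage-limiting mechanism studied in the paper.

```python
# Best-first tree search for min ||y - Hx||^2 over BPSK symbols x in {-1,+1}^n.
# After QR decomposition the metric accumulates one layer at a time, so partial
# paths can be ranked in a heap; the first completed path popped is optimal,
# because path costs only grow with depth.
import heapq
import numpy as np

def ml_detect_best_first(H, y):
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    # Heap entries: (metric, number of fixed symbols, symbols fixed so far),
    # with symbols fixed from the last layer (index n-1) upward.
    heap = [(0.0, 0, ())]
    while heap:
        cost, lvl, sym = heapq.heappop(heap)
        if lvl == n:
            return np.array(sym[::-1]), cost    # reorder to (x_0, ..., x_{n-1})
        k = n - 1 - lvl
        for s in (-1.0, 1.0):
            x_tail = np.array((s,) + sym[::-1])             # symbols k..n-1
            resid = z[k] - R[k, k:] @ x_tail
            heapq.heappush(heap, (cost + resid ** 2, lvl + 1, sym + (s,)))

rng = np.random.default_rng(2)
n = 6
H = rng.normal(size=(n, n))
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true + 0.1 * rng.normal(size=n)
x_hat, metric = ml_detect_best_first(H, y)
print(np.array_equal(x_hat, x_true), metric)
```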

  20. Enumerating Trees

    CERN Document Server

    Kucharczyk, Robert A

    2012-01-01

    In this note we discuss trees similar to the Calkin-Wilf tree, a binary tree that enumerates all positive rational numbers in a simple way. The original construction of Calkin and Wilf is reformulated in a more algebraic language, and an elementary application of methods from analytic number theory gives restrictions on possible analogues.

  1. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.;

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  2. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast n...

  4. IDENTIFICATION AND MAPPING OF TREE SPECIES IN URBAN AREAS USING WORLDVIEW-2 IMAGERY

    Directory of Open Access Journals (Sweden)

    Y. T. Mustafa

    2015-10-01

Monitoring and mapping of urban trees are essential to provide urban forestry authorities with timely and consistent information. Modern techniques increasingly facilitate these tasks, but require the development of semi-automatic tree detection and classification methods. In this article, we propose an approach to delineate and map the crowns of 15 tree species in the city of Duhok, Kurdistan Region of Iraq, using WorldView-2 (WV-2) imagery. A tree crown object is identified first and is subsequently delineated as an image object (IO) using vegetation indices and texture measurements. Next, three classification methods, Maximum Likelihood, Neural Network, and Support Vector Machine, were used to classify IOs using selected IO features. The best results are obtained with Support Vector Machine classification, which gives the best map of urban tree species in Duhok. The overall accuracy was between 60.93% and 88.92%, and the κ-coefficient was between 0.57 and 0.75. We conclude that fifteen tree species were identified and mapped with satisfactory accuracy in urban areas of this study.

  5. Context trees

    OpenAIRE

    Ganzinger, Harald; Nieuwenhuis, Robert; Nivela, Pilar

    2001-01-01

Indexing data structures are well-known to be crucial for the efficiency of the current state-of-the-art theorem provers. Examples are discrimination trees, which are like tries where terms are seen as strings and common prefixes are shared, and substitution trees, where terms keep their tree structure and all common contexts can be shared. Here we describe a new indexing data structure, context trees, where, by means of a limited kind of conte...

  6. Two Trees

    OpenAIRE

    Cochrane, John. H.; Longstaff, Francis A.; Santa-Clara, Pedro

    2004-01-01

We solve a model with two “Lucas trees.” Each tree has i.i.d. dividend growth. The investor has log utility and consumes the sum of the two trees’ dividends. This model produces interesting asset-pricing dynamics, despite its simple ingredients. Investors want to rebalance their portfolios after any change in value. Since the size of the trees is fixed, however, prices must adjust to offset this desire. As a result, expected returns, excess returns, and return volatility all vary throug...

  7. Species tree estimation for the late blight pathogen, Phytophthora infestans, and close relatives.

    Directory of Open Access Journals (Sweden)

    Jaime E Blair

To better understand the evolutionary history of a group of organisms, an accurate estimate of the species phylogeny must be known. Traditionally, gene trees have served as a proxy for the species tree, although it was acknowledged early on that these trees represented different evolutionary processes. Discordances among gene trees and between the gene trees and the species tree are also expected in closely related species that have rapidly diverged, due to processes such as the incomplete sorting of ancestral polymorphisms. Recently, methods have been developed for the explicit estimation of species trees, using information from multilocus gene trees while accommodating heterogeneity among them. Here we have used three distinct approaches to estimate the species tree for five Phytophthora pathogens, including P. infestans, the causal agent of late blight disease in potato and tomato. Our concatenation-based "supergene" approach was unable to resolve relationships even with data from both the nuclear and mitochondrial genomes, and from multiple isolates per species. Our multispecies coalescent approach using both Bayesian and maximum likelihood methods was able to estimate a moderately supported species tree showing a close relationship among P. infestans, P. andina, and P. ipomoeae. The topology of the species tree was also identical to the dominant phylogenetic history estimated in our third approach, Bayesian concordance analysis. Our results support previous suggestions that P. andina is a hybrid species, with P. infestans representing one parental lineage. The other parental lineage is not known, but represents an independent evolutionary lineage more closely related to P. ipomoeae. While all five species likely originated in the New World, further study is needed to determine when and under what conditions this hybridization event may have occurred.

  8. Talking Trees

    Science.gov (United States)

    Tolman, Marvin

    2005-01-01

    Students love outdoor activities and will love them even more when they build confidence in their tree identification and measurement skills. Through these activities, students will learn to identify the major characteristics of trees and discover how the pace--a nonstandard measuring unit--can be used to estimate not only distances but also the…

  9. maxLik: A package for maximum likelihood estimation in R

    DEFF Research Database (Denmark)

    Henningsen, Arne; Toomet, Ott

    2011-01-01

    This paper describes the package maxLik for the statistical environment R. The package is essentially a unified wrapper interface to various optimization routines, offering easy access to likelihood-specific features like standard errors or information matrix equality (BHHH method). More advanced...

  10. Maximum likelihood cost functions for neural network models of air quality data

    Science.gov (United States)

    Dorling, Stephen R.; Foxall, Robert J.; Mandic, Danilo P.; Cawley, Gavin C.

The prediction of episodes of poor air quality using artificial neural networks is investigated, concentrating on selection of the most appropriate cost function used in training. Different cost functions correspond to different distributional assumptions regarding the data; the appropriate choice depends on whether a forecast of absolute pollutant concentration or prediction of exceedence events is of principal importance. The cost functions investigated correspond to logistic regression, homoscedastic Gaussian (i.e. conventional sum-of-squares) regression and heteroscedastic Gaussian regression. Both linear and nonlinear neural network architectures are evaluated. While the results presented relate to a dataset describing the daily time-series of the concentration of surface level ozone (O3) in urban Berlin, the methods applied are quite general and applicable to a wide range of pollutants and locations. The heteroscedastic Gaussian regression model outperforms the other nonlinear methods investigated; however, there is little improvement resulting from the use of nonlinear rather than linear models. Of greater significance is the flexibility afforded by the nonlinear heteroscedastic Gaussian regression model for a range of potential end-users, who may all have different answers to the question: "What is more important, correctly predicting exceedences or avoiding false alarms?".
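
The three cost functions compared above can be written as per-example negative log-likelihoods; the sketch below is illustrative numpy with hypothetical network outputs.

```python
# Per-example negative log-likelihood losses corresponding to the three
# distributional assumptions discussed above; numbers are invented.
import numpy as np

def logistic_nll(y, p):                 # y in {0,1}, p = predicted P(exceedence)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def homoscedastic_nll(y, mu):           # sum-of-squares regression, up to constants
    return 0.5 * (y - mu) ** 2

def heteroscedastic_nll(y, mu, sigma2): # network predicts an input-dependent variance
    return 0.5 * (np.log(2 * np.pi * sigma2) + (y - mu) ** 2 / sigma2)

y = np.array([60.0, 120.0])             # e.g. observed O3 concentrations
mu = np.array([55.0, 100.0])            # predicted means
sigma2 = np.array([25.0, 400.0])        # predicted variances (larger on episode days)
print(heteroscedastic_nll(y, mu, sigma2))
```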

  11. Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood Power Spectrum Estimation

    CERN Document Server

    Gjerløw, E; Eriksen, H K; Górski, K M; Gruppuso, A; Jewell, J B; Plaszczynski, S; Wehus, I K

    2015-01-01

We revisit the problem of exact CMB likelihood and power spectrum estimation with the goal of minimizing computational cost through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al. (1997), and here we develop it into a fully working computational framework for large-scale polarization analysis, adopting WMAP as a worked example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32, and a...
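
The signal-to-noise eigenvector compression can be sketched as a generalized eigenproblem; the covariances below are small synthetic stand-ins, not WMAP data.

```python
# Sketch of linear compression in the signal-to-noise eigenvector basis:
# solve S b = lambda N b and keep the top modes. Toy covariances only.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, k = 300, 60                                   # pixels, retained modes
A = rng.normal(size=(n, n))
S = A @ A.T / n                                  # toy signal covariance (PSD)
N = np.diag(rng.uniform(0.5, 2.0, size=n))       # toy diagonal noise covariance

evals, evecs = eigh(S, N)                        # generalized eigenproblem S b = l N b
B = evecs[:, -k:]                                # top-k signal-to-noise modes
d = rng.multivariate_normal(np.zeros(n), S + N)  # a simulated data vector
d_compressed = B.T @ d                           # k numbers instead of n
print(d_compressed.shape)
```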

  12. Pilot power optimization for AF relaying using maximum likelihood channel estimation

    KAUST Repository

    Wang, Kezhi

    2014-09-01

Bit error rates (BERs) for amplify-and-forward (AF) relaying systems with two different pilot-symbol-aided channel estimation methods, disintegrated channel estimation (DCE) and cascaded channel estimation (CCE), are derived in Rayleigh fading channels. Based on these BERs, the pilot powers at the source and at the relay are optimized when their total transmitting powers are fixed. Numerical results show that the optimized system has a better performance than other conventional nonoptimized allocation systems. They also show that the optimal pilot power in variable gain is nearly the same as that in fixed gain for similar system settings. © 2014 IEEE.

  13. Maximum likelihood estimators uniformly minimize distribution variance among distribution unbiased estimators in exponential families

    OpenAIRE

    De Vos, Paul; Wu, Qiang

    2015-01-01

    We employ a parameter-free distribution estimation framework where estimators are random distributions and utilize the Kullback–Leibler (KL) divergence as a loss function. Wu and Vos [ J. Statist. Plann. Inference 142 (2012) 1525–1536] show that when an estimator obtained from an i.i.d. sample is viewed as a random distribution, the KL risk of the estimator decomposes in a fashion parallel to the mean squared error decomposition when the estimator is a real-valued random variable. In th...

  14. A Fast Algorithm for Maximum Likelihood-based Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom;

    2015-01-01

Periodic signals are encountered in many applications. Such signals can be modelled by a weighted sum of sinusoidal components whose frequencies are integer multiples of a fundamental frequency. Given a data set, the fundamental frequency can be estimated in many ways...

  15. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography

    NARCIS (Netherlands)

    A.H. Curiale (Ariel H.); G. Vegas-Sanchez-Ferrero (Gonzalo); J.G. Bosch (Johan); S. Aja-Fernández (Santiago)

    2015-01-01

    textabstractThe strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle b

  16. Adaptive wave filtering for dynamic positioning of marine vessels using maximum likelihood identification: Theory and experiments

    Digital Repository Service at National Institute of Oceanography (India)

    Hassani, V.; Sorensen, A.J.; Pascoal, A.M.

An improvement in performance was achieved by exploiting more advanced control techniques based on optimal control and Kalman filter (KF) theory, see Balchen et al. (1976). These techniques were later modified... In Sørensen et al. (1996), Wave Filtering (WF) was done by exploiting the use of KF theory under the assumption that the kinematic equations of the ship's motion can be linearized about a set of predefined constant yaw angles (36 operating points in steps...

  17. Missing Data Imputation versus Full Information Maximum Likelihood with Second-Level Dependencies

    Science.gov (United States)

    Larsen, Ross

    2011-01-01

    Missing data in the presence of upper level dependencies in multilevel models have never been thoroughly examined. Whereas first-level subjects are independent over time, the second-level subjects might exhibit nonzero covariances over time. This study compares 2 missing data techniques in the presence of a second-level dependency: multiple…

  18. Strategies for Handling Missing Data with Maximum Likelihood Estimation in Career and Technical Education Research

    Science.gov (United States)

    Lee, In Heok

    2012-01-01

    Researchers in career and technical education often ignore more effective ways of reporting and treating missing data and instead implement traditional, but ineffective, missing data methods (Gemici, Rojewski, & Lee, 2012). The recent methodological, and even the non-methodological, literature has increasingly emphasized the importance of…

  19. Maximum-Likelihood Sequence Detector for Dynamic Mode High Density Probe Storage

    CERN Document Server

    Kumar, Naveen; Ramamoorthy, Aditya; Salapaka, Murti

    2009-01-01

There is an ever increasing need for storing data in smaller and smaller form factors driven by the ubiquitous use and increased demands of consumer electronics. A new approach of achieving a few Tb per in^2 areal densities utilizes a cantilever probe with a sharp tip that can be used to deform and assess the topography of the material. The information may be encoded by means of topographic profiles on a polymer medium. The prevalent mode of using the cantilever probe is the static mode that is known to be harsh on the probe and the media. In this paper, the high quality factor dynamic mode operation, which is known to be less harsh on the media and the probe, is analyzed for probe based high density data storage purposes. It is demonstrated that an appropriate level of abstraction is possible that obviates the need for an involved physical model. The read operation is modeled as a communication channel which incorporates the inherent system memory due to the intersymbol interference and the cantilever state ...
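
ML sequence detection over a channel with memory is classically done with the Viterbi recursion; the sketch below assumes a simple two-tap ISI channel and a known preamble, which is far simpler than the cantilever read channel analyzed in the paper.

```python
# Generic Viterbi maximum-likelihood sequence detector for a memory-1 ISI
# channel y[k] = h[0]*x[k] + h[1]*x[k-1] + noise, with a known +1 preamble.
import numpy as np

def viterbi_mlsd(y, h, symbols=(-1.0, 1.0)):
    cost = {s: (0.0 if s == 1.0 else np.inf) for s in symbols}  # state = previous symbol
    path = {s: [] for s in symbols}
    for yk in y:
        new_cost, new_path = {}, {}
        for s in symbols:                          # hypothesised current symbol
            c, p = min(((cost[p] + (yk - h[0] * s - h[1] * p) ** 2, p)
                        for p in symbols), key=lambda t: t[0])
            new_cost[s], new_path[s] = c, path[p] + [s]
        cost, path = new_cost, new_path
    return np.array(path[min(cost, key=cost.get)])

rng = np.random.default_rng(1)
h = [1.0, 0.5]                                     # assumed ISI channel taps
x = rng.choice([-1.0, 1.0], size=20)
y = h[0] * x + h[1] * np.concatenate(([1.0], x[:-1])) + 0.2 * rng.normal(size=20)
print(np.mean(viterbi_mlsd(y, h) == x))            # fraction of symbols recovered
```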

  20. Full-information Maximum Likelihood Estimation of Brand Positioning Maps Using Supermarkt Scanning Data

    NARCIS (Netherlands)

    E. Waarts (Eric); M.A. Carree (Martin); B. Wierenga (Berend)

    1991-01-01

    textabstractThe authors build on the idea put forward by Shugan to infer product maps from scanning data. They demonstrate that the actual estimation procedure used by Shugan has several methodological problems and may yield unstable estimates. They propose an alternative estimation procedure, full-

  1. Maximum likelihood Bayesian averaging of airflow models in unsaturated fractured tuff using Occam and variance windows

    NARCIS (Netherlands)

    Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.

    2010-01-01

    We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power, trunca

  2. Application of a maximum likelihood type estimator to the towed array shape estimation problem

    NARCIS (Netherlands)

    Been, R.

    1996-01-01

    At the TNO Physics and Electronics Laboratory (TNO-FEL), for a number of decades, the behaviour and performance of towed sonar systems has been studied extensively. Since the performance of towed sonars highly depends on the shape of the hydrophone array, the underwater acoustics group started perfo

  3. Estimating Water Demand in Urban Indonesia: A Maximum Likelihood Approach to block Rate Pricing Data

    NARCIS (Netherlands)

    Rietveld, Piet; Rouwendal, Jan; Zwart, Bert

    1997-01-01

In this paper the Burtless and Hausman model is used to estimate water demand in Salatiga, Indonesia. Other statistical models, such as OLS and IV, are found to be inappropriate. A topic, which does not seem to appear in previous studies, is the fact that the density function of the loglikelihood can be m

  4. A Maximum Likelihood Approach for Multisample Nonlinear Structural Equation Models with Missing Continuous and Dichotomous Data

    Science.gov (United States)

    Song, Xin-Yuan; Lee, Sik-Yum

    2006-01-01

    Structural equation models are widely appreciated in social-psychological research and other behavioral research to model relations between latent constructs and manifest variables and to control for measurement error. Most applications of SEMs are based on fully observed continuous normal data and models with a linear structural equation.…

  5. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are upd

  6. Stochastic identification using the maximum likelihood method and a statistical reduction: application to drilling dynamics

    OpenAIRE

    2010-01-01

A drill-string is a slender structure that drills rock to search for oil. The nonlinear interaction between the bit and the rock is of great importance for the drill-string dynamics. The interaction model has uncertainties, which are modeled using the nonparametric probabilistic approach. This paper deals with a procedure to perform the identification of the dispersion parameter of the probabilistic model of uncertainties of a bit-rock interaction model. The bit-rock i...

  7. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations

    Science.gov (United States)

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2015-01-01

Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation…

  9. 3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm

    CERN Document Server

    Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia

    2015-01-01

Positron emission tomographs (PET) do not measure an image directly. Instead, at the boundary of the field-of-view (FOV) of the PET tomograph, they measure a sinogram that consists of measurements of the sums of all the counts along the lines connecting two detectors. As a multitude of detectors is built into a typical PET tomograph structure, there are many possible detector pairs that pertain to the measurement. The problem is how to turn this measurement into an image (this is called imaging). Decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques. This stage was reached already twenty years ago (with the advent of new powerful computing processors). However, three-dimensional (3D) imaging remains still a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines-of-response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
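
The standard MLEM update mentioned above has a compact multiplicative form, x <- x * A^T(y / Ax) / (A^T 1), which increases the Poisson likelihood of the measured sinogram at every iteration; the sketch below runs it on a toy system matrix rather than a real 3D PET geometry.

```python
# Minimal MLEM sketch on an invented system matrix (not a full PET geometry).
import numpy as np

rng = np.random.default_rng(0)
n_lor, n_vox = 400, 100
A = rng.uniform(size=(n_lor, n_vox))          # toy detection probabilities per LOR/voxel
x_true = rng.uniform(size=n_vox)
y = rng.poisson(A @ x_true)                   # simulated sinogram counts

x = np.ones(n_vox)                            # flat initial image
sens = A.T @ np.ones(n_lor)                   # sensitivity image A^T 1
for _ in range(100):
    x *= (A.T @ (y / (A @ x))) / sens         # multiplicative MLEM update

print(np.corrcoef(x, x_true)[0, 1])           # rough agreement with the truth
```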

  10. Fast maximum likelihood estimate of the Kriging correlation range in the frequency domain

    NARCIS (Netherlands)

    De Baar, J.H.S.; Dwight, R.P.; Bijl, H.

    2011-01-01

    We apply Ordinary Kriging to predict 75,000 terrain survey data from a randomly sampled subset of < 2500 observations. Since such a Kriging prediction requires a considerable amount of CPU time, we aim to reduce its computational cost. In a conventional approach, the cost of the Kriging analysis wou

  11. Raw Data Maximum Likelihood Estimation for Common Principal Component Models: A State Space Approach.

    Science.gov (United States)

    Gu, Fei; Wu, Hao

    2016-09-01

    The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end.

  12. A new maximum likelihood blood velocity estimator incorporating spatial and temporal correlation

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2001-01-01

    The blood flow in the human cardiovascular system obeys the laws of fluid mechanics. Investigation of the flow properties reveals that a correlation exists between the velocity in time and space. The possible changes in velocity are limited, since the blood velocity has a continuous profile in time...... of the observations gives a probability measure of the correlation between the velocities. Both the MLE and the STC-MLE have been evaluated on simulated and in-vivo RF-data obtained from the carotid artery. Using the MLE 4.1% of the estimates deviate significantly from the true velocities, when the performance...

  13. Maximum likelihood amplitude scale estimation for quantization-based watermarking in the presence of Dither

    NARCIS (Netherlands)

    Shterev, I.D.; Lagendijk, R.L.

    2005-01-01

Quantization-based watermarking schemes comprise a class of watermarking schemes that achieves the channel capacity in terms of additive noise attacks [1]. The existence of good high dimensional lattices that can be efficiently implemented [2-4] and incorporated into watermarking structures, made quantiza

  14. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    YUE Li; CHEN Xiru

    2004-01-01

    Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification and some other smooth conditions,it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate of this solution tending to the true value is determined. In an important special case, this rate is the same as specified in the LIL for iid partial sums and thus cannot be improved anymore.

  15. Efficient strategies for genome scanning using maximum-likelihood affected-sib-pair analysis

    Energy Technology Data Exchange (ETDEWEB)

    Holmans, P.; Craddock, N. [Univ. of Wales College of Medicine, Cardiff (United Kingdom)

    1997-03-01

    Detection of linkage with a systematic genome scan in nuclear families including an affected sibling pair is an important initial step on the path to cloning susceptibility genes for complex genetic disorders, and it is desirable to optimize the efficiency of such studies. The aim is to maximize power while simultaneously minimizing the total number of genotypings and probability of type I error. One approach to increase efficiency, which has been investigated by other workers, is grid tightening: a sample is initially typed using a coarse grid of markers, and promising results are followed up by use of a finer grid. Another approach, not previously considered in detail in the context of an affected-sib-pair genome scan for linkage, is sample splitting: a portion of the sample is typed in the screening stage, and promising results are followed up in the whole sample. In the current study, we have used computer simulation to investigate the relative efficiency of two-stage strategies involving combinations of both grid tightening and sample splitting and found that the optimal strategy incorporates both approaches. In general, typing half the sample of affected pairs with a coarse grid of markers in the screening stage is an efficient strategy under a variety of conditions. If Hardy-Weinberg equilibrium holds, it is most efficient not to type parents in the screening stage. If Hardy-Weinberg equilibrium does not hold (e.g., because of stratification) failure to type parents in the first stage increases the amount of genotyping required, although the overall probability of type I error is not greatly increased, provided the parents are used in the final analysis. 23 refs., 4 figs., 5 tabs.

  16. Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds

    Science.gov (United States)

    Conroy, M.J.; Morgan, B.J.T.; North, P.M.

    1985-01-01

    It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate what this reporting rate is, by comparison of recoveries of rings offering a monetary reward, to ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design, and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.
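
The basic reward-study estimator can be illustrated with invented counts: if reward rings are assumed to be reported with certainty, the reporting rate of ordinary rings is the ratio of the two observed recovery rates. The paper's models add temporal and geographic strata and likelihood-ratio tests on top of this.

```python
# Back-of-the-envelope reward-band calculation; all counts are hypothetical.
n_reward, rec_reward = 1000, 80        # reward rings released / reported
n_std, rec_std = 5000, 150             # ordinary rings released / reported

rate_reward = rec_reward / n_reward    # approximates the true recovery rate
rate_std = rec_std / n_std             # recovery rate diluted by non-reporting
reporting_rate = rate_std / rate_reward
print(f"estimated reporting rate: {reporting_rate:.3f}")   # 0.375 here
```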

  17. Use of the Maximum Likelihood Method in the Analysis of Chamber Air Dives

    Science.gov (United States)

    1988-01-01

The maximum likelihood method was used to evaluate the risk of decompression sickness (DCS) in elective chamber air dives. The parameters of mathematical models for predicting DCS were optimized until the best criterion (as measured by maximum likelihood) was reached...

  18. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    Science.gov (United States)

    1990-11-01

The findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy, or... "Marquardt methods" to perform linear and nonlinear estimations. One idea in this area by Box and Jenkins (1976) was the "backcasting" procedure to evaluate...

  19. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    Science.gov (United States)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT which scans the object twice in different x-ray energy levels, and energy-discriminative detectors which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.

  20. Efficient and exact maximum likelihood quantisation of genomic features using dynamic programming.

    Science.gov (United States)

    Song, Mingzhou; Haralick, Robert M; Boissinot, Stéphane

    2010-01-01

    An efficient and exact dynamic programming algorithm is introduced to quantise a continuous random variable into a discrete random variable that maximises the likelihood of the quantised probability distribution for the original continuous random variable. Quantisation is often useful before statistical analysis and modelling of large discrete network models from observations of multiple continuous random variables. The quantisation algorithm is applied to genomic features including the recombination rate distribution across the chromosomes and the non-coding transposable element LINE-1 in the human genome. The association pattern is studied between the recombination rate, obtained by quantisation at genomic locations around LINE-1 elements, and the length groups of LINE-1 elements, also obtained by quantisation on LINE-1 length. The exact and density-preserving quantisation approach provides an alternative superior to the inexact and distance-based univariate iterative k-means clustering algorithm for discretisation.
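
The dynamic-programming idea can be sketched as follows, under simplifying assumptions (bin edges restricted to midpoints between sorted sample values); this illustrates exact DP quantisation maximizing the likelihood of a piecewise-constant density, not the authors' exact implementation.

```python
# DP quantisation sketch: each bin of n_j points and width w_j scores
# n_j * log(n_j / (N * w_j)), the log-likelihood of a piecewise-constant
# density; the DP finds the optimal partition into K bins exactly.
import numpy as np

def ml_quantise(x, K, pad=1e-3):
    x = np.sort(np.asarray(x, dtype=float))
    m = len(x)
    # Candidate bin edges: padded extremes plus midpoints between neighbours.
    B = np.concatenate(([x[0] - pad], (x[:-1] + x[1:]) / 2.0, [x[-1] + pad]))

    def score(i, j):                       # points i..j-1 form one bin
        n, w = j - i, B[j] - B[i]
        return -np.inf if w <= 0 else n * np.log(n / (m * w))

    best = np.full((K + 1, m + 1), -np.inf)
    arg = np.zeros((K + 1, m + 1), dtype=int)
    best[0, 0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, m + 1):
            for i in range(k - 1, j):
                v = best[k - 1, i] + score(i, j)
                if v > best[k, j]:
                    best[k, j], arg[k, j] = v, i
    cuts, j = [], m                        # recover the optimal bin edges
    for k in range(K, 0, -1):
        i = arg[k, j]
        cuts.append((B[i], B[j]))
        j = i
    return best[K, m], cuts[::-1]

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 0.5, 100)])
ll, bins = ml_quantise(x, K=4)
print(round(ll, 2))
print([(round(a, 2), round(b, 2)) for a, b in bins])
```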

  1. Detecting specific genotype by environment interaction using marginal maximum likelihood estimation in the classical twin design

    NARCIS (Netherlands)

    D. Molenaar; S. van der Sluis; D.I. Boomsma; C.V. Dolan

    2012-01-01

    Considerable effort has been devoted to the analysis of genotype by environment (G × E) interactions in various phenotypic domains, such as cognitive abilities and personality. In many studies, environmental variables were observed (measured) variables. In case of an unmeasured environment, van der

  2. Maximum likelihood estimation of ancestral codon usage bias parameters in Drosophila

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Bauer DuMont, Vanessa L; Hubisz, Melissa J

    2007-01-01

The D. melanogaster lineage has experienced a reduction in the selection for optimal codon usage. However, the D. melanogaster lineage has also experienced a change in the biological mutation rates relative to D. simulans, in particular, a relative reduction in the mutation rate from A to G and an increase in the mutation rate from C to T. However, neither a reduction in the strength of selection nor a change in the mutational pattern can alone explain all of the data observed in the D. melanogaster lineage. For example, we also confirm previous results showing that the Notch locus has experienced positive selection for previously classified unpreferred mutations.

  3. Maximum Likelihood Estimates of Parameters in Various Types of Distribution Fitted to Important Data Cases.

    OpenAIRE

    Hirose, Hideo

    1998-01-01

Types of distribution: Normal distribution (2-parameter); Uniform distribution (2-parameter); Exponential distribution (2-parameter); Weibull distribution (2-parameter); Gumbel distribution (2-parameter); Weibull/Frechet distribution (3-parameter); Generalized extreme-value distribution (3-parameter); Gamma distribution (3-parameter); Extended Gamma distribution (3-parameter); Log-normal distribution (3-parameter); Extended Log-normal distribution (3-parameter); Generalized ...

  4. APPLICATION OF PEARSON CORRELATION IN BUILDING A TREE-AUGMENTED NETWORK (TAN) MODEL (A Case Study on Handwritten Character Recognition)

    Directory of Open Access Journals (Sweden)

    Irwan Budi Santoso

    2013-10-01

The first step in building a Tree-Augmented Network (TAN) recognition model is to measure the strength of the relationship between pairs of object features. One method that can be used to measure the degree of linear association between feature pairs is the Pearson correlation. In this study, the application of the Pearson correlation to building a TAN model is tested on the task of handwritten character recognition. The handwritten character feature data are assumed to follow a Gaussian distribution, because the parameters of the recognition model are estimated with the Maximum Likelihood (ML) estimator. Experimental results using training data consisting of 5 types of handwritten characters show that, for a feature dimension of 10x30 (30 features), the accuracy of the Pearson-correlation-based TAN system in recognizing handwritten characters is 88%.
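
The first step described above, measuring pairwise feature association and turning it into a tree, can be sketched with a Chow-Liu-style maximum spanning tree over Pearson correlations; the class-conditioning of a full TAN model is omitted here, and the data below are simulated.

```python
# Sketch: absolute Pearson correlations as edge weights, then a maximum-weight
# spanning tree over the features (the augmenting tree of a TAN-style model).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))          # toy feature matrix (500 samples, 30 features)

corr = np.abs(np.corrcoef(X, rowvar=False))     # strength of linear association
np.fill_diagonal(corr, 0.0)
# Maximum-weight spanning tree = minimum spanning tree on negated weights.
mst = minimum_spanning_tree(-corr)
edges = np.transpose(mst.nonzero())
print(edges[:5])                        # feature pairs joined by the tree
```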

  5. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor

    2016-07-28

    We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters characterizing both time (in the worst- and average-case) and space complexity of decision trees, i.e., depth, total path length (average depth), and number of nodes. We have created tools based on extensions of dynamic programming to study totally optimal trees. These tools are applicable to both exact and approximate decision trees, and allow us to make multi-stage optimization of decision trees relative to different parameters and to count the number of optimal trees. Based on the experimental results we have formulated the following hypotheses (and subsequently proved): for almost all Boolean functions there exist totally optimal decision trees (i) relative to the depth and number of nodes, and (ii) relative to the depth and average depth.

  6. Mapping and characterizing selected canopy tree species at the Angkor World Heritage site in Cambodia using aerial data.

    Science.gov (United States)

    Singh, Minerva; Evans, Damian; Tan, Boun Suy; Nin, Chan Samean

    2015-01-01

    At present, there is very limited information on the ecology, distribution, and structure of Cambodia's tree species to warrant suitable conservation measures. The aim of this study was to assess various methods of analysis of aerial imagery for characterization of the forest mensuration variables (i.e., tree height and crown width) of selected tree species found in the forested region around the temples of Angkor Thom, Cambodia. Object-based image analysis (OBIA) was used (using multiresolution segmentation) to delineate individual tree crowns from very-high-resolution (VHR) aerial imagery and light detection and ranging (LiDAR) data. Crown width and tree height values that were extracted using multiresolution segmentation showed a high level of congruence with field-measured values of the trees (Spearman's rho 0.782 and 0.589, respectively). Individual tree crowns that were delineated from aerial imagery using multiresolution segmentation had a high level of segmentation accuracy (69.22%), whereas tree crowns delineated using watershed segmentation underestimated the field-measured tree crown widths. Both spectral angle mapper (SAM) and maximum likelihood (ML) classifications were applied to the aerial imagery for mapping of selected tree species. The latter was found to be more suitable for tree species classification. Individual tree species were identified with high accuracy. Inclusion of textural information further improved species identification, albeit marginally. Our findings suggest that VHR aerial imagery, in conjunction with OBIA-based segmentation methods (such as multiresolution segmentation) and supervised classification techniques are useful for tree species mapping and for studies of the forest mensuration variables.

  8. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan

    2011-10-10

Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n^3) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
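
The two-part construction can be sketched directly: a reduced-rank (Nystrom/predictive-process) term built from a set of knots, plus a tapered residual term that restores the local behaviour. The exponential covariance, spherical taper and 1-D locations below are assumptions chosen for concreteness.

```python
# Full-scale approximation sketch: low-rank part + tapered residual part.
import numpy as np

def exp_cov(s1, s2, range_=0.3):
    d = np.abs(s1[:, None] - s2[None, :])
    return np.exp(-d / range_)

def spherical_taper(s1, s2, gamma=0.15):
    d = np.abs(s1[:, None] - s2[None, :])
    t = (1 - d / gamma) ** 2 * (1 + d / (2 * gamma))   # compactly supported
    return np.where(d < gamma, t, 0.0)

s = np.linspace(0, 1, 500)                 # observation locations (1-D for brevity)
knots = np.linspace(0, 1, 30)              # knots for the reduced-rank part

C_sk = exp_cov(s, knots)
C_kk = exp_cov(knots, knots)
low_rank = C_sk @ np.linalg.solve(C_kk, C_sk.T)        # captures large-scale dependence
residual = (exp_cov(s, s) - low_rank) * spherical_taper(s, s)  # sparse local part
C_fsa = low_rank + residual                            # full-scale approximation

print("max abs error:", np.max(np.abs(C_fsa - exp_cov(s, s))))
```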

  9. Phylogenetic trees

    OpenAIRE

    Baños, Hector; Bushek, Nathaniel; Davidson, Ruth; Gross, Elizabeth; Harris, Pamela E.; Krone, Robert; Long, Colby; Stewart, Allen; Walker, Robert

    2016-01-01

    We introduce the package PhylogeneticTrees for Macaulay2 which allows users to compute phylogenetic invariants for group-based tree models. We provide some background information on phylogenetic algebraic geometry and show how the package PhylogeneticTrees can be used to calculate a generating set for a phylogenetic ideal as well as a lower bound for its dimension. Finally, we show how methods within the package can be used to compute a generating set for the join of any two ideals.

  10. Phylogenetic Trees From Sequences

    Science.gov (United States)

    Ryvkin, Paul; Wang, Li-San

    In this chapter, we review important concepts and approaches for phylogeny reconstruction from sequence data. We first cover some basic definitions and properties of phylogenetics, and briefly explain how scientists model sequence evolution and measure sequence divergence. We then discuss three major approaches for phylogenetic reconstruction: distance-based phylogenetic reconstruction, maximum parsimony, and maximum likelihood. In the third part of the chapter, we review how multiple phylogenies are compared by consensus methods and how to assess confidence using bootstrapping. At the end of the chapter are two sections that list popular software packages and additional reading.
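
    Of the three reconstruction approaches mentioned, the distance-based one is the simplest to demonstrate. The following sketch implements UPGMA clustering on a toy distance matrix; the taxa and distances are invented, and a real analysis would favor neighbor-joining with model-based distances.

```python
# Minimal sketch of distance-based phylogenetic reconstruction via UPGMA
# (average-linkage clustering). Toy data; not from the chapter.
import numpy as np

def upgma(D, names):
    """Repeatedly merge the two closest clusters; return a nested-tuple tree.
    D: symmetric distance matrix; names: leaf labels."""
    clusters = [(n, 1) for n in names]       # (subtree, number of leaves)
    D = D.astype(float)
    while len(clusters) > 1:
        M = D.copy()
        np.fill_diagonal(M, np.inf)          # ignore self-distances
        i, j = np.unravel_index(M.argmin(), M.shape)
        i, j = min(i, j), max(i, j)
        (ti, ni), (tj, nj) = clusters[i], clusters[j]
        # Average-linkage update of distances to the merged cluster
        new_row = (ni * D[i] + nj * D[j]) / (ni + nj)
        D[i, :] = new_row
        D[:, i] = new_row
        D[i, i] = 0.0
        D = np.delete(np.delete(D, j, axis=0), j, axis=1)
        clusters[i] = ((ti, tj), ni + nj)
        del clusters[j]
    return clusters[0][0]

D = np.array([[0, 2, 6, 6], [2, 0, 6, 6], [6, 6, 0, 4], [6, 6, 4, 0]])
print(upgma(D, ["A", "B", "C", "D"]))  # expected: (('A', 'B'), ('C', 'D'))
```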

  11. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    Let (X,d) be a metric space and (Omega,d) a compact subspace of X which supports a non-atomic finite measure m. We consider `natural' classes of badly approximable subsets of Omega. Loosely speaking, these consist of points in Omega which `stay clear' of some given set of points in X. The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework, as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension. Applications of our general framework include those from number theory (classical, complex, p-adic and formal power series) and dynamical systems (iterated function schemes, rational maps and Kleinian groups).
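
    For reference, the two sets named above have standard definitions in the literature (recalled here from general knowledge, not quoted from the paper), where \|\cdot\| denotes the distance to the nearest integer:

```latex
\[
  \mathbf{Bad} = \Bigl\{\, x \in \mathbb{R} : \exists\, c(x) > 0
      \text{ such that } \bigl| x - \tfrac{p}{q} \bigr| \ge \frac{c(x)}{q^{2}}
      \text{ for all } \tfrac{p}{q} \in \mathbb{Q} \,\Bigr\},
\]
\[
  \mathbf{Bad}(i,j) = \Bigl\{\, (x,y) \in \mathbb{R}^{2} :
      \liminf_{q \to \infty} \, q \,
      \max\bigl\{ \|qx\|^{1/i},\; \|qy\|^{1/j} \bigr\} > 0 \,\Bigr\},
  \qquad i, j > 0, \; i + j = 1.
\]
```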

  12. Game tree algorithms and solution trees

    NARCIS (Netherlands)

    W.H.L.M. Pijls (Wim); A. de Bruin (Arie)

    1998-01-01

    In this paper, a theory of game tree algorithms is presented, entirely based upon the concept of a solution tree. Two types of solution trees are distinguished: max and min trees. Every game tree algorithm tries to prune as many nodes as possible from the game tree. A cut-off criterion in …
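
    The pruning idea referred to above can be made concrete with a toy minimax search. The sketch below implements standard alpha-beta cut-offs on a game tree given as nested lists; it illustrates cut-offs only and does not reproduce the paper's max/min solution-tree formalism.

```python
# Toy game-tree search with alpha-beta cut-offs. A node is either a nested
# list (internal node) or a number (leaf evaluation).
def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    if not isinstance(node, list):           # leaf: terminal evaluation
        return node
    best = float('-inf') if maximizing else float('inf')
    for child in node:
        val = alphabeta(child, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if alpha >= beta:                    # cut-off: remaining children pruned
            break
    return best

tree = [[3, 5], [6, [9, 2]], [1, 8]]
print(alphabeta(tree))  # minimax value of the root: 6
```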

  13. Electron Tree

    DEFF Research Database (Denmark)

    Appelt, Ane L; Rønde, Heidi S

    2013-01-01

    The photo shows a close-up of a Lichtenberg figure – popularly called an “electron tree” – produced in a cylinder of polymethyl methacrylate (PMMA). Electron trees are created by irradiating a suitable insulating material, in this case PMMA, with an intense high energy electron beam. Upon discharge, during dielectric breakdown in the material, the electrons generate branching chains of fractures on leaving the PMMA, producing the tree pattern seen. To be able to create electron trees with a clinical linear accelerator, one needs to access the primary electron beam used for photon treatments. We appropriated a linac that was being decommissioned in our department and dismantled the head to circumvent the target and ion chambers. This is one of 24 electron trees produced before we had to stop the fun and allow the rest of the accelerator to be disassembled.

  14. Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm

    KAUST Repository

    Jin, Ick Hoon

    2013-10-01

    The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, while SAMCMC can; for nondegenerate ERGMs, SAMCMC can work as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
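
    A heavily simplified sketch of the varying truncation mechanism is given below, applied to a toy one-parameter exponential family (so the maximum likelihood answer can be checked in closed form) rather than a real ERGM. The gain sequence, truncation bounds, and sampler are all illustrative assumptions.

```python
# Stochastic approximation with a varying truncation rule, in the spirit of
# SAMCMC, on the toy model p(x) ∝ exp(theta * sum(x)) for x in {0,1}^d.
import numpy as np

rng = np.random.default_rng(1)
d, s_obs = 20, 14.0                  # dimension and observed sufficient statistic

def sample_model(theta):
    """Draw x ~ p(x) ∝ exp(theta * sum(x)). Coordinates are independent in
    this toy model, so exact sampling is possible; a real ERGM would need an
    MCMC network sampler here."""
    p1 = 1.0 / (1.0 + np.exp(-theta))
    return (rng.random(d) < p1).astype(float)

theta, K, resets = 0.0, 1.0, 0       # parameter, truncation bound, reset count
for t in range(1, 5001):
    x = sample_model(theta)
    theta += (s_obs - x.sum()) / t ** 0.7   # Robbins-Monro step, decaying gain
    if abs(theta) > K:               # varying truncation: restart, enlarge bound
        theta, K, resets = 0.0, 2.0 * K, resets + 1

# The MLE solves d * sigmoid(theta) = s_obs
print(f"estimate {theta:.3f}  exact {np.log(s_obs / (d - s_obs)):.3f}  resets {resets}")
```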

  15. Evaluation of Gaussian approximations for data assimilation in reservoir models

    KAUST Repository

    Iglesias, Marco A.

    2013-07-14

    The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geologic parameters of the reservoir, the characterization of the posterior is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used for characterizing the posterior distribution. Evaluating the performance of those Gaussian approximations is typically conducted by assessing their ability at reproducing the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of mixing up the approximation properties of the history matching algorithm employed with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the-art Markov chain Monte Carlo (MCMC) method that scales well with respect to the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement. Our …
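
    Of the three ad hoc techniques, randomized maximum likelihood (RML) is the easiest to sketch: each approximate posterior sample minimizes a least-squares objective with independently perturbed data and prior mean. The toy quadratic forward model and all noise scales below are invented for illustration and bear no relation to the reservoir models in the paper.

```python
# Hedged sketch of randomized maximum likelihood (RML) sampling.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
m_true = np.array([1.0, -0.5])
G = lambda m: np.array([m[0] + m[1], m[0] * m[1], m[0] ** 2])  # toy forward model
sigma_d, sigma_m = 0.05, 1.0
y = G(m_true) + sigma_d * rng.normal(size=3)                   # synthetic data
m_prior = np.zeros(2)

def rml_sample():
    """One RML draw: minimize the perturbed least-squares objective."""
    y_pert = y + sigma_d * rng.normal(size=3)      # perturbed data
    m_pert = m_prior + sigma_m * rng.normal(size=2)  # perturbed prior mean
    obj = lambda m: (np.sum((G(m) - y_pert) ** 2) / sigma_d ** 2
                     + np.sum((m - m_pert) ** 2) / sigma_m ** 2)
    return minimize(obj, m_pert, method="Nelder-Mead").x

samples = np.array([rml_sample() for _ in range(200)])
print("posterior mean ≈", samples.mean(axis=0))   # should sit near m_true
```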

  16. A time-calibrated, multi-locus phylogeny of piranhas and pacus (Characiformes: Serrasalmidae) and a comparison of species tree methods.

    Science.gov (United States)

    Thompson, Andrew W; Betancur-R, Ricardo; López-Fernández, Hernán; Ortí, Guillermo

    2014-12-01

    The phylogeny of piranhas, pacus, and relatives (family Serrasalmidae) was inferred on the basis of DNA sequences from eleven gene fragments that include the mitochondrial control region plus 10 nuclear genes (two exons and eight introns). The new data were obtained for a representative sampling of 53 specimens, collected from all major South American rivers, accounting for over 40% of the valid species and all genera excluding Utiaritichthys. Two fossil calibration points and relaxed-clock Bayesian analyses were used to estimate the timing of diversification. The new multilocus dataset also is used to compare several species-tree approaches against the results obtained using the concatenated alignment analyzed under maximum likelihood and Bayesian inference. Individual gene trees showed substantial topological discordance, but analyses based on concatenation and Bayesian and maximum likelihood-based species-tree approaches converged onto a single phylogeny. The resulting phylogenetic hypothesis is robust and supports a division of the family into three major clades, consistent with previous results based on mitochondrial DNA alone. The earliest branching event separated a "pacu" clade (Colossoma, Mylossoma and Piaractus) from the rest of the family in the Late Cretaceous (over 68 Ma). The other two clades, that contain most of the diversity, are formed by the "true piranhas" (Metynnis, Pygopristis, Pygocentrus, Pristobrycon, Catoprion, and Serrasalmus) and the Myleus-like pacus (the Myleus clade). The "true" piranha clade originated during the Eocene (∼53 Ma) but the most recent diversification of flesh-eating piranhas within the genera Serrasalmus and Pygocentrus did not start until the Miocene (∼17 Ma). A comparison of species-tree approaches indicates that most methods tested are consistent with results obtained by concatenation, suggesting that the gene-tree incongruence observed is mild and will not produce misleading results under simple concatenation.

  17. Interpreting Tree Ensembles with inTrees

    OpenAIRE

    Deng, Houtao

    2014-01-01

    Tree ensembles such as random forests and boosted trees are accurate but difficult to understand, debug and deploy. In this work, we provide the inTrees (interpretable trees) framework that extracts, measures, prunes and selects rules from a tree ensemble, and calculates frequent variable interactions. A rule-based learner, referred to as the simplified tree ensemble learner (STEL), can also be formed and used for future prediction. The inTrees framework can be applied to both classification and …
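
    The extraction step at the heart of such frameworks is straightforward to sketch. The Python code below (inTrees itself is an R package; this is an illustration, not its API) walks one tree of a scikit-learn random forest and prints a rule per leaf; the measuring, pruning, and selection stages are omitted.

```python
# Sketch of the basic rule-extraction step behind frameworks like inTrees:
# turn every root-to-leaf path of a fitted decision tree into a rule.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def extract_rules(tree, feature_names):
    """Return one 'IF cond AND ... THEN class' string per leaf."""
    t, rules = tree.tree_, []
    def walk(node, conds):
        if t.children_left[node] == -1:      # leaf node
            pred = t.value[node][0].argmax()
            rules.append("IF " + " AND ".join(conds or ["TRUE"]) +
                         f" THEN class={pred}")
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node], conds + [f"{name} <= {thr:.2f}"])
        walk(t.children_right[node], conds + [f"{name} > {thr:.2f}"])
    walk(0, [])
    return rules

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=3, max_depth=2, random_state=0).fit(X, y)
for rule in extract_rules(forest.estimators_[0],
                          ["sep_len", "sep_wid", "pet_len", "pet_wid"]):
    print(rule)
```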

  18. Optimal Belief Approximation

    CERN Document Server

    Leike, Reimar H

    2016-01-01

    In Bayesian statistics, probability distributions express beliefs. However, for many problems the beliefs cannot be computed analytically and approximations of beliefs are needed. We seek a ranking function that quantifies how "embarrassing" it is to communicate a given approximation. We show that there is only one ranking under the requirements that (1) the best ranked approximation is the non-approximated belief and (2) the ranking judges approximations only by their predictions for actual outcomes. We find that this ranking is equivalent to the Kullback-Leibler divergence that is frequently used in the literature. However, there seems to be confusion about the correct order in which its functional arguments, the approximated and non-approximated beliefs, should be used. We hope that our elementary derivation settles the apparent confusion. We show for example that when approximating beliefs with Gaussian distributions the optimal approximation is given by moment matching. This is in contrast to many su...
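
    The moment-matching claim is easy to verify numerically. The sketch below approximates a bimodal belief p with a Gaussian q by minimizing KL(p ‖ q), with expectations taken under the non-approximated belief, and checks that no nearby Gaussian beats the moment-matched one; the particular mixture is an arbitrary choice.

```python
# Numerical check: minimizing KL(p || q) over Gaussians q yields moment
# matching. The bimodal p is an illustrative choice.
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
# A bimodal "true" belief p(x): mixture of two Gaussian bumps, renormalized
p = 0.7 * np.exp(-0.5 * (x - 2.0) ** 2) \
    + 0.3 * np.exp(-0.5 * ((x + 3.0) / 0.5) ** 2) / 0.5
p /= p.sum() * dx

def kl(p, q):
    """KL(p || q) = integral of p log(p/q), on the grid."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

def gauss(mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Moment matching: use the mean and standard deviation of p itself
mu_p = np.sum(x * p) * dx
sd_p = np.sqrt(np.sum((x - mu_p) ** 2 * p) * dx)
best = kl(p, gauss(mu_p, sd_p))
# Scan nearby Gaussians: none should beat the moment-matched one
for dmu in (-0.5, 0.0, 0.5):
    for dsd in (-0.3, 0.0, 0.3):
        assert kl(p, gauss(mu_p + dmu, sd_p + dsd)) >= best - 1e-9
print(f"moment-matched Gaussian: mu={mu_p:.3f}, sd={sd_p:.3f}, KL={best:.4f}")
```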

  19. A likelihood perspective on tree-ring standardization: eliminating modern sample bias

    Science.gov (United States)

    Cecile, J.; Pagnutti, C.; Anand, M.

    2013-08-01

    It has recently been suggested that non-random sampling and differences in mortality between trees of different growth rates is responsible for a widespread, systematic bias in dendrochronological reconstructions of tree growth known as modern sample bias. This poses a serious challenge for climate reconstruction and the detection of long-term changes in growth. Explicit use of growth models based on regional curve standardization allow us to investigate the effects on growth due to age (the regional curve), year (the standardized chronology or forcing) and a new effect, the productivity of each tree. Including a term for the productivity of each tree accounts for the underlying cause of modern sample bias, allowing for more reliable reconstruction of low-frequency variability in tree growth. This class of models describes a new standardization technique, fixed effects standardization, that contains both classical regional curve standardization and flat detrending. Signal-free standardization accounts for unbalanced experimental design and fits the same growth model as classical least-squares or maximum likelihood regression techniques. As a result, we can use powerful and transparent tools such as R² and Akaike's Information Criterion to assess the quality of tree ring standardization, allowing for objective decisions between competing techniques. Analyzing 1200 randomly selected published chronologies, we find that regional curve standardization is improved by adding an effect for individual tree productivity in 99% of cases, reflecting widespread differing contemporaneous growth rate bias. Furthermore, modern sample bias produced a significant negative bias in estimated tree growth by time in 70.5% of chronologies and a significant positive bias in 29.5% of chronologies. This effect is largely concentrated in the last 300 yr of growth data, posing serious questions about the homogeneity of modern and ancient chronologies using traditional standardization.
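
    The fixed-effects growth model described above can be prototyped as a single least-squares fit with dummy variables for age, year, and individual tree. The simulation below is an illustration under invented effect shapes, not the paper's data; note the well-known identifiability caveat in the final comment.

```python
# Hedged sketch of fixed-effects standardization: model ring width as
# age effect + year effect + per-tree productivity and fit jointly.
import numpy as np

rng = np.random.default_rng(3)
n_trees, n_years = 30, 80
birth = rng.integers(0, 40, n_trees)          # germination year of each tree

design_rows, obs = [], []
for i in range(n_trees):
    prod_i = rng.normal(0, 0.3)               # true per-tree productivity
    for t in range(int(birth[i]), n_years):
        age = t - birth[i]
        w = (1.5 * np.exp(-age / 20)           # regional curve (age effect)
             + 0.2 * np.sin(t / 8)             # common year (climate) signal
             + prod_i + rng.normal(0, 0.1))    # tree effect + noise
        obs.append(w)
        design_rows.append((age, t, i))

n_par = n_years + n_years + n_trees            # age, year, and tree dummies
X = np.zeros((len(obs), n_par))
for r, (age, t, i) in enumerate(design_rows):
    X[r, age] = X[r, n_years + t] = X[r, 2 * n_years + i] = 1.0
beta, *_ = np.linalg.lstsq(X, np.array(obs), rcond=None)
chronology = beta[n_years:2 * n_years]
# Caveat: age, year, and tree effects are only identifiable up to a shared
# linear trend (the classic age-period-cohort problem), so the recovered
# chronology matches the true signal up to such a trend.
print("recovered year effects, first 5:", np.round(chronology[:5], 2))
```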

  20. On Element SDD Approximability

    CERN Document Server

    Avron, Haim; Toledo, Sivan

    2009-01-01

    This short communication shows that in some cases scalar elliptic finite element matrices cannot be approximated well by an SDD matrix. We also give a theoretical analysis of a simple heuristic method for approximating an element by an SDD matrix.
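
    For readers unfamiliar with the matrix class involved, the snippet below checks the SDD (symmetric and diagonally dominant) property on toy matrices; the examples are not finite element matrices from the paper.

```python
# Small check of the SDD property: symmetric, with each diagonal entry
# dominating the sum of absolute off-diagonal entries in its row.
import numpy as np

def is_sdd(A, tol=1e-12):
    """True if A is symmetric and diagonally dominant."""
    if not np.allclose(A, A.T, atol=tol):
        return False
    off = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return bool(np.all(np.diag(A) >= off - tol))

L = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])               # a 1-D Laplacian: SDD
print(is_sdd(L), is_sdd(np.array([[1., 3.], [3., 1.]])))  # True False
```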