On penalized maximum likelihood estimation of approximate factor models
Wang, Shaoxin; Yang, Hu; Yao, Chaoli
2016-01-01
In this paper, we focus on the estimation of high-dimensional approximate factor models. We rewrite the estimation of the error covariance matrix in a new form that shares similar properties with the penalized maximum likelihood covariance estimator of Bien and Tibshirani (2011). Based on Lagrangian duality, we propose an APG algorithm that gives a positive definite estimate of the error covariance matrix. The new algorithm for the estimation of approximate factor models has a desirable...
On the approximate maximum likelihood estimation for diffusion processes
Chang, Jinyuan; 10.1214/11-AOS922
2012-01-01
The transition density of a diffusion process does not admit an explicit expression in general, which prevents full maximum likelihood estimation (MLE) based on discretely observed sample paths. Aït-Sahalia [J. Finance 54 (1999) 1361--1395; Econometrica 70 (2002) 223--262] proposed asymptotic expansions to the transition densities of diffusion processes, which lead to an approximate maximum likelihood estimation (AMLE) for parameters. Built on Aït-Sahalia's [Econometrica 70 (2002) 223--262; Ann. Statist. 36 (2008) 906--937] proposal and analysis of the AMLE, we establish the consistency and convergence rate of the AMLE, which reveal the roles played by the number of terms used in the asymptotic density expansions and the sampling interval between successive observations. We find conditions under which the AMLE has the same asymptotic distribution as the full MLE. A first order approximation to the Fisher information matrix is proposed.
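The expansion machinery is involved, but the underlying idea (maximize an approximation to the intractable transition density) can be illustrated with a crude zeroth-order stand-in. The sketch below fits an Ornstein-Uhlenbeck model by the Euler pseudo-likelihood; this is an illustrative assumption, not the paper's expansion-based estimator:

```python
import numpy as np

# Simulate an exact Ornstein-Uhlenbeck path: dX = theta*(mu - X) dt + sigma dW
theta, mu, sigma, dt, n = 1.0, 0.0, 0.5, 0.1, 20000
rng = np.random.default_rng(3)
x = np.empty(n)
x[0] = 0.0
a = np.exp(-theta * dt)
s = sigma * np.sqrt((1.0 - a * a) / (2.0 * theta))  # exact transition std dev
for i in range(n - 1):
    x[i + 1] = mu + (x[i] - mu) * a + s * rng.standard_normal()

# Zeroth-order AMLE: approximate the unknown transition density by the
# Euler Gaussian N(x + theta*(mu - x)*dt, sigma^2*dt). Maximizing this
# approximate likelihood reduces to least squares of increments on state.
dx = np.diff(x)
A = np.column_stack((np.ones(n - 1), x[:-1]))
coef, *_ = np.linalg.lstsq(A, dx, rcond=None)
theta_hat = -coef[1] / dt                # biased toward (1 - exp(-theta*dt))/dt
mu_hat = coef[0] / (theta_hat * dt)
sigma_hat = np.sqrt(np.mean((dx - A @ coef) ** 2) / dt)
```

With this sampling interval the Euler fit carries a visible discretization bias in theta, which is exactly the kind of error the paper's higher-order density expansions are designed to remove.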
Approximate Maximum Likelihood Commercial Bank Loan Management Model
Directory of Open Access Journals (Sweden)
Godwin N.O. Asemota
2009-01-01
Problem statement: Loan management is a very complex and yet vitally important aspect of any commercial bank's operations. The balance sheet position shows the main sources of funds as deposits and shareholders' contributions. Approach: In order to operate profitably, remain solvent and consequently grow, a commercial bank needs to properly manage its excess cash to yield returns in the form of loans. Results: The above are achieved if the bank can honor depositors' withdrawals at all times and also grant loans to credible borrowers, because loans are the main portfolios of a commercial bank that yield the highest rate of returns. Commercial banks and the environment in which they operate are dynamic, so any attempt to model their behavior without including some element of uncertainty would be less than desirable. The inclusion of an uncertainty factor is now possible with the advent of stochastic optimal control theories. Thus, an approximate maximum likelihood algorithm with a variable forgetting factor was used to model the loan management behavior of a commercial bank in this study. Conclusion: The results showed that the uncertainty factor employed in the stochastic modeling enables us to adaptively control loan demand as well as fluctuating cash balances in the bank. This loan model can also visually aid commercial bank managers' planning decisions by allowing them to competently determine excess cash and invest it as loans to earn more assets without jeopardizing public confidence.
Yang, Z
1994-09-01
Two approximate methods are proposed for maximum likelihood phylogenetic estimation, which allow variable rates of substitution across nucleotide sites. Three data sets with quite different characteristics were analyzed to examine empirically the performance of these methods. The first, called the "discrete gamma model," uses several categories of rates to approximate the gamma distribution, with equal probability for each category. The mean of each category is used to represent all the rates falling in the category. The performance of this method is found to be quite good, and four such categories appear to be sufficient to produce both an optimum or near-optimum fit of the model to the data, and an acceptable approximation to the continuous distribution. The second method, called the "fixed-rates model," classifies sites into several classes according to their rates predicted assuming the star tree. Sites in different classes are then assumed to be evolving at these fixed rates when other tree topologies are evaluated. Analyses of the data sets suggest that this method can produce reasonable results, but it seems to share some properties of a least-squares pairwise comparison; for example, interior branch lengths in nonbest trees are often found to be zero. The computational requirements of the two methods are comparable to those of Felsenstein's (1981, J Mol Evol 17:368-376) model, which assumes a single rate for all the sites. PMID:7932792
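The discrete-gamma construction is concrete enough to sketch. Assuming a Gamma(alpha, alpha) rate distribution (mean 1), the category boundaries are equal-probability quantiles, and each category is represented by its mean, computed from the incomplete gamma function. This is a generic sketch of that construction using scipy for the gamma distribution:

```python
import numpy as np
from scipy.stats import gamma

def discrete_gamma_rates(alpha, k=4):
    """Equal-probability discrete approximation to a Gamma(alpha, alpha)
    rate distribution (mean 1). Returns the mean rate of each category."""
    scale = 1.0 / alpha                        # rate parameter = alpha, so mean = 1
    # Category boundaries: quantiles at 1/k, 2/k, ..., plus 0 and infinity
    cuts = gamma.ppf(np.arange(1, k) / k, alpha, scale=scale)
    bounds = np.concatenate(([0.0], cuts, [np.inf]))
    # Partial expectation of Gamma(a, s) over (l, u) is a*s*(F_{a+1}(u) - F_{a+1}(l)),
    # and here a*s = 1, so each category mean is k times the F_{a+1} increment.
    F = gamma.cdf(bounds, alpha + 1.0, scale=scale)
    return k * np.diff(F)

rates = discrete_gamma_rates(alpha=0.5, k=4)
# The k category means average to the overall mean rate of 1.
```

In a likelihood computation the site likelihood is then averaged over the k rates with weight 1/k each.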
W-IQ-TREE: a fast online phylogenetic tool for maximum likelihood analysis.
Trifinopoulos, Jana; Nguyen, Lam-Tung; von Haeseler, Arndt; Minh, Bui Quang
2016-07-01
This article presents W-IQ-TREE, an intuitive and user-friendly web interface and server for IQ-TREE, an efficient phylogenetic software for maximum likelihood analysis. W-IQ-TREE supports multiple sequence types (DNA, protein, codon, binary and morphology) in common alignment formats and a wide range of evolutionary models including mixture and partition models. W-IQ-TREE performs fast model selection, partition scheme finding, efficient tree reconstruction, ultrafast bootstrapping, branch tests, and tree topology tests. All computations are conducted on a dedicated computer cluster and the users receive the results via URL or email. W-IQ-TREE is available at http://iqtree.cibiv.univie.ac.at. It is free and open to all users and there is no login requirement. PMID:27084950
On approximate pseudo-maximum likelihood estimation for LARCH-processes
Beran, Jan; 10.3150/09-BEJ189
2010-01-01
Linear ARCH (LARCH) processes were introduced by Robinson [J. Econometrics 47 (1991) 67--84] to model long-range dependence in volatility and leverage. Basic theoretical properties of LARCH processes have been investigated in the recent literature. However, there is a lack of estimation methods and corresponding asymptotic theory. In this paper, we consider estimation of the dependence parameters for LARCH processes with non-summable hyperbolically decaying coefficients. Asymptotic limit theorems are derived. A central limit theorem with $\sqrt{n}$-rate of convergence holds for an approximate conditional pseudo-maximum likelihood estimator. To obtain a computable version that includes observed values only, a further approximation is required. The computable estimator is again asymptotically normal, however with a rate of convergence that is slower than $\sqrt{n}$.
Directory of Open Access Journals (Sweden)
Kodner Robin B
2010-10-01
Background: Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results: This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It can also inform the user of the positional uncertainty for query sequences by calculating the expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent the number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions: Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetic methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.
Vestige: Maximum likelihood phylogenetic footprinting
Directory of Open Access Journals (Sweden)
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Shrinkage Effect in Ancestral Maximum Likelihood
Mossel, Elchanan; Steel, Mike
2008-01-01
Ancestral maximum likelihood (AML) is a method that simultaneously reconstructs a phylogenetic tree and ancestral sequences from extant data (sequences at the leaves). The tree and ancestral sequences maximize the probability of observing the given data under a Markov model of sequence evolution, in which branch lengths are also optimized but constrained to take the same value on any edge across all sequence sites. AML differs from the more usual form of maximum likelihood (ML) in phylogenetics because ML averages over all possible ancestral sequences. ML has long been known to be statistically consistent -- that is, it converges on the correct tree with probability approaching 1 as the sequence length grows. However, the statistical consistency of AML has not been formally determined, despite informal remarks in a literature that dates back 20 years. In this short note we prove a general result that implies that AML is statistically inconsistent. In particular we show that AML can `shrink' short edges in a t...
Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum-likelihood absorption tomography
International Nuclear Information System (INIS)
Maximum-likelihood methods are applied to the problem of absorption tomography. The reconstruction is done with the help of an iterative algorithm. We show how the statistics of the illuminating beam can be incorporated into the reconstruction. The proposed reconstruction method can be considered as a useful alternative in the extreme cases where the standard ill-posed direct-inversion methods fail. (authors)
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
The Maximum Likelihood Threshold of a Graph
Gross, Elizabeth; Sullivant, Seth
2014-01-01
The maximum likelihood threshold of a graph is the smallest number of data points that guarantees that maximum likelihood estimates exist almost surely in the Gaussian graphical model associated to the graph. We show that this graph parameter is connected to the theory of combinatorial rigidity. In particular, if the edge set of a graph $G$ is an independent set in the $(n-1)$-dimensional generic rigidity matroid, then the maximum likelihood threshold of $G$ is less than or equal to $n$. This c...
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
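A common concrete realization of likelihood maximization for state reconstruction is the iterative R-rho-R algorithm. The single-qubit sketch below is a generic illustration under an assumed six-outcome Pauli POVM and exact outcome frequencies, not necessarily the exact scheme of the review:

```python
import numpy as np

# Pauli eigenstates; each projector divided by 3 gives a six-outcome POVM
# (the six elements sum to the identity).
kets = [np.array([1, 1]) / np.sqrt(2),  np.array([1, -1]) / np.sqrt(2),   # +/- x
        np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2),  # +/- y
        np.array([1, 0]),               np.array([0, 1])]                  # +/- z
povm = [np.outer(k, k.conj()) / 3 for k in kets]

# "True" state and exact outcome frequencies standing in for measured data
rho_true = 0.8 * np.outer(kets[0], kets[0].conj()) + 0.2 * np.eye(2) / 2
freqs = [np.trace(rho_true @ E).real for E in povm]

# R*rho*R iteration: a fixed point maximizes the likelihood. (In practice a
# "diluted" variant is often used to guarantee convergence; the plain
# iteration behaves well for this benign example.)
rho = np.eye(2) / 2
for _ in range(2000):
    R = sum(f / np.trace(rho @ E).real * E for f, E in zip(povm and freqs, povm))
    rho = R @ rho @ R
    rho /= np.trace(rho).real
```

With informationally complete, noise-free frequencies the iteration recovers the true state; with finite counts it converges to the ML state instead.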
"No-background" maximum likelihood analysis in HBT interferometry
International Nuclear Information System (INIS)
We present a new 'no-background' procedure, based on the maximum likelihood method, for fitting the space-time size parameters of the particle production region in ultra-relativistic heavy ion collisions. This procedure uses an approximation to avoid the necessity of constructing a mixed-event background before fitting the data. (orig.)
Maximum likelihood estimation of search costs
Moraga González, José; Wildenbeest, Matthijs R.
2008-01-01
In a recent paper Hong and Shum [2006. Using price distributions to estimate search costs. Rand Journal of Economics 37, 257-275] present a structural method to estimate search cost distributions. We extend their approach to the case of oligopoly and present a new maximum likelihood method to estima
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
A maximum likelihood method for particle momentum determination
International Nuclear Information System (INIS)
We discuss a maximum likelihood method for determining a charged particle's momentum as it moves in a magnetic field. The formalism is presented in both rigorous and approximate forms. The rigorous form is valid when random processes include multiple scattering, energy loss and detector spatial resolution. When the measurement error is dominated by multiple scattering, it takes a particularly simple approximate form. The validity of both formalisms extends to include non-Gaussian multiple scattering distribution
Model Fit after Pairwise Maximum Likelihood.
Barendse, M T; Ligtvoet, R; Timmerman, M E; Oort, F J
2016-01-01
Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136
Multi-Channel Maximum Likelihood Pitch Estimation
DEFF Research Database (Denmark)
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model...
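Under a white-noise assumption with equal noise levels across channels, a maximum likelihood pitch estimate of this kind reduces approximately to harmonic summation, with the per-channel harmonic energies added across channels. The sketch below is a simplified illustration of that idea, not the paper's exact estimator:

```python
import numpy as np

def ml_pitch(channels, fs, f0_grid, n_harm=5):
    """Approximate multi-channel ML pitch estimate under a white-noise,
    equal-noise-level assumption: for each candidate f0, sum the energy
    captured by the first n_harm harmonics over all channels."""
    N = channels.shape[1]
    n = np.arange(N)
    scores = []
    for f0 in f0_grid:
        e = 0.0
        for l in range(1, n_harm + 1):
            c = np.exp(-2j * np.pi * f0 * l / fs * n)   # complex sinusoid at l*f0
            e += np.sum(np.abs(channels @ c) ** 2)      # energy, summed over channels
        scores.append(e)
    return f0_grid[int(np.argmax(scores))]

# Two channels sharing a 100 Hz fundamental, with different amplitudes/phases
fs = 8000
t = np.arange(1024) / fs
x1 = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
x2 = 0.3 * np.cos(2 * np.pi * 100 * t + 0.7)
f0_hat = ml_pitch(np.vstack([x1, x2]), fs, np.arange(60.0, 160.0, 1.0))
```

A full estimator would also estimate per-channel noise variances and weight the channels accordingly; the equal-weight sum above is the simplest special case.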
Accurate structural correlations from maximum likelihood superpositions.
Directory of Open Access Journals (Sweden)
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Reconstruction of diagonal elements of density matrix using maximum likelihood estimation
International Nuclear Information System (INIS)
The data of the experiment of Schiller et al., Physics Letters 77 (1996), are alternatively evaluated using maximum likelihood estimation. The given data are fitted better than by the standard deterministic approach. Nevertheless, the data are fitted equally well by a whole family of states. Standard deterministic predictions correspond approximately to the envelope of these maximum likelihood solutions. (author)
Maximum likelihood estimation of fractionally cointegrated systems
DEFF Research Database (Denmark)
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment to the...... equilibrium parameters and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system we obtain consistent estimates and provide their asymptotic distributions. The cointegration matrix is asymptotically mixed...... any influence on the long-run relationship. The rate of convergence of the estimators of the long-run relationships depends on the cointegration degree but it is optimal for the strong cointegration case considered. We also prove that misspecification of the degree of fractional cointegration does...
Improved maximum likelihood reconstruction of complex multi-generational pedigrees.
Sheehan, Nuala A; Bartlett, Mark; Cussens, James
2014-11-01
The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as
Molecular clock fork phylogenies: closed form analytic maximum likelihood solutions.
Chor, Benny; Snir, Sagi
2004-12-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Quoting Ziheng Yang, who initiated the analytic approach, "this seems to be the simplest case, but has many of the conceptual and statistical complexities involved in phylogenetic estimation." In this work, we give general analytic solutions for a family of trees with four taxa, two-state characters, under a molecular clock. The change from three to four taxa incurs a major increase in the complexity of the underlying algebraic system, and requires novel techniques and approaches. We start by presenting the general maximum likelihood problem on phylogenetic trees as a constrained optimization problem, and the resulting system of polynomial equations. In full generality, it is infeasible to solve this system; therefore specialized tools for the molecular clock case are developed. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). We combine the ultrametric properties of molecular clock fork trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations for the fork. We finally employ symbolic algebra software to obtain closed form analytic solutions (expressed parametrically in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that each fork topology has a unique (local and global) ML point.
Maximum likelihood polynomial regression for robust speech recognition
Institute of Scientific and Technical Information of China (English)
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polyno
Maximum likelihood window for time delay estimation
International Nuclear Information System (INIS)
Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. This method has been verified in experiments, where it provided much clearer and more precise peaks in cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
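The baseline that such a window improves on is the plain cross-correlation time-delay estimator, which can be sketched as follows (the frequency-domain ML weighting itself is omitted here):

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Time-delay estimate from the peak of the cross-correlation of two
    sensor signals. A plain (unweighted) sketch: the paper's contribution
    is a maximum likelihood frequency-domain window applied before this step."""
    cc = np.correlate(y, x, mode="full")       # peak lag = delay of y relative to x
    lag = int(np.argmax(cc)) - (len(x) - 1)
    return lag / fs                            # delay in seconds

# Synthetic leak-noise example: sensor y receives the signal 37 samples late
rng = np.random.default_rng(1)
s = rng.standard_normal(4000)
x = s + 0.1 * rng.standard_normal(4000)
y = np.concatenate((np.zeros(37), s))[:4000] + 0.1 * rng.standard_normal(4000)
fs = 1000.0
tau = estimate_delay(x, y, fs)
```

Given the estimated delay tau and the elastic wave speed c, the leak position follows from the usual time-arrival-difference geometry between the two sensors.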
A maximum likelihood framework for protein design
Directory of Open Access Journals (Sweden)
Philippe Hervé
2006-06-01
Background: The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results: We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion: Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
Institute of Scientific and Technical Information of China (English)
(author not listed)
2002-01-01
By taking the subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation is of higher approximating precision to the true parameters than the least square methods.
Study on the Hungarian algorithm for the maximum likelihood data association problem
Institute of Scientific and Technical Information of China (English)
Wang Jianguo; He Peikun; Cao Wei
2007-01-01
A specialized Hungarian algorithm was developed here for the maximum likelihood data association problem with two implementation versions due to presence of false alarms and missed detections. The maximum likelihood data association problem is formulated as a bipartite weighted matching problem. Its duality and the optimality conditions are given. The Hungarian algorithm with its computational steps, data structure and computational complexity is presented. The two implementation versions, Hungarian forest (HF) algorithm and Hungarian tree (HT) algorithm, and their combination with the naïve auction initialization are discussed. The computational results show that HT algorithm is slightly faster than HF algorithm and they are both superior to the classic Munkres algorithm.
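The bipartite-matching formulation above can be made concrete with a minimal sketch. The cost values are illustrative assumptions, and a brute-force search over permutations stands in for the specialized Hungarian algorithm of the article (which solves the same minimization in polynomial time):

```python
from itertools import permutations

# Hypothetical 3-track-by-3-measurement cost matrix; entry [i][j] is
# the negative log-likelihood of assigning measurement j to track i.
cost = [
    [0.2, 5.0, 9.0],
    [4.0, 0.5, 6.0],
    [8.0, 7.0, 0.1],
]

def ml_assignment(cost):
    """Brute-force bipartite weighted matching: minimize the total
    negative log-likelihood over all one-to-one assignments.
    (The Hungarian algorithm solves this in O(n^3).)"""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

assignment, total = ml_assignment(cost)
```

Minimizing summed negative log-likelihoods is equivalent to maximizing the joint association likelihood, which is why the problem reduces to weighted matching.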
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
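As a rough illustration of the kind of fit described above, the following sketch runs EM, the standard algorithm for maximizing a two-component normal mixture likelihood, on synthetic data. The component means, weights and data are assumptions for illustration; the article's stock and rubber price data are not reproduced here:

```python
import math
import random

random.seed(0)
# Synthetic sample from two normal components (illustrative only).
data = ([random.gauss(-2.0, 1.0) for _ in range(200)] +
        [random.gauss(3.0, 1.0) for _ in range(200)])

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_normals(x, iters=100):
    """EM iterations for the MLE of a two-component normal mixture."""
    w, mu1, mu2, s1, s2 = 0.5, min(x), max(x), 1.0, 1.0
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 per point.
        r = [w * norm_pdf(xi, mu1, s1) /
             (w * norm_pdf(xi, mu1, s1) + (1 - w) * norm_pdf(xi, mu2, s2))
             for xi in x]
        # M-step: responsibility-weighted MLE updates.
        n1 = sum(r)
        n2 = len(x) - n1
        w = n1 / len(x)
        mu1 = sum(ri * xi for ri, xi in zip(r, x)) / n1
        mu2 = sum((1 - ri) * xi for ri, xi in zip(r, x)) / n2
        s1 = math.sqrt(sum(ri * (xi - mu1) ** 2 for ri, xi in zip(r, x)) / n1)
        s2 = math.sqrt(sum((1 - ri) * (xi - mu2) ** 2 for ri, xi in zip(r, x)) / n2)
    return w, mu1, mu2, s1, s2

w, mu1, mu2, s1, s2 = em_two_normals(data)
```

Each EM iteration cannot decrease the likelihood, which is what makes it the workhorse for mixture MLE.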
Blind Detection of Ultra-faint Streaks with a Maximum Likelihood Method
Dawson, William A; Kamath, Chandrika
2016-01-01
We have developed a maximum likelihood source detection method capable of detecting ultra-faint streaks with surface brightnesses approximately an order of magnitude fainter than the pixel level noise. Our maximum likelihood detection method is a model based approach that requires no a priori knowledge about the streak location, orientation, length, or surface brightness. This method enables discovery of typically undiscovered objects, and enables the utilization of low-cost sensors (i.e., higher-noise data). The method also easily facilitates multi-epoch co-addition. We will present the results from the application of this method to simulations, as well as real low earth orbit observations.
Generalized Maximum Likelihood Methods and the Self-Informative Limit
Johannes, Jan
2002-01-01
Let X be a random variable with unknown distribution P. One of the main tasks of mathematical statistics is the construction of estimators for a derived parameter theta(P) from an observation X=x. In the case of a dominated family of distributions, the maximum likelihood principle (MLP) can be applied. The Bayesian approach offers an alternative. In particular, under regularity conditions it turns out that the maximum likelihood estimate (MLE)...
Modified maximum likelihood registration based on information fusion
Institute of Scientific and Technical Information of China (English)
Yongqing Qi; Zhongliang Jing; Shiqiang Hu
2007-01-01
The bias estimation of passive sensors is considered based on information fusion in a multi-platform multisensor tracking system. The unobservable problem of bearing-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservable problem in the blind spot and can estimate the biases more rapidly and accurately than the maximum likelihood method. It is statistically efficient since the standard deviation of the bias estimation errors meets the theoretical lower bounds.
Penalized maximum likelihood estimation for generalized linear point processes
DEFF Research Database (Denmark)
Hansen, Niels Richard
2010-01-01
…likelihood. Of particular interest is the case where the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that Sobolev spaces are reproducing kernel Hilbert spaces, we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient...
Bias Correction for Alternating Iterative Maximum Likelihood Estimators
Institute of Scientific and Technical Information of China (English)
Gang YU; Wei GAO; Ningzhong SHI
2013-01-01
In this paper, we give a definition of the alternating iterative maximum likelihood estimator (AIMLE), which is a biased estimator. Furthermore, we adjust the AIMLE to obtain asymptotically unbiased and consistent estimators by using a bootstrap iterative bias-correction method as in Kuk (1995). Two examples and simulation results are reported to illustrate the performance of the bias correction for the AIMLE.
A Rayleigh Doppler frequency estimator derived from maximum likelihood theory
DEFF Research Database (Denmark)
Hansen, Henrik; Affes, Sofiéne; Mermelstein, Paul
1999-01-01
capacities in low and high speed situations. We derive a Doppler frequency estimator using the maximum likelihood method and Jakes model (1974) of a Rayleigh fading channel. This estimator requires an FFT and simple post-processing only. Its performance is verified through simulations and found to yield good...
Maximum likelihood estimation of phase-type distributions
DEFF Research Database (Denmark)
Esparza, Luz Judith R
This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions ...
Maximum likelihood estimation of the attenuated ultrasound pulse
DEFF Research Database (Denmark)
Rasmussen, Klaus Bolding
1994-01-01
The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated ultra...
Tree wavelet approximations with applications
Institute of Scientific and Technical Information of China (English)
无
2005-01-01
$\\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs
van de Geer, Sara
2012-01-01
We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than sample size $n$ but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.
A scalable maximum likelihood method for quantum state tomography
International Nuclear Information System (INIS)
The principle of maximum likelihood reconstruction has proven to yield satisfactory results in the context of quantum state tomography for many-body systems of moderate system sizes. Until recently, however, quantum state tomography has been considered to be infeasible for systems consisting of a large number of subsystems due to the exponential growth of the Hilbert space dimension with the number of constituents. Several reconstruction schemes have been proposed since then to overcome the two main obstacles in quantum many-body tomography: experiment time and post-processing resources. Here we discuss one strategy to address these limitations for the maximum likelihood principle by adopting a particular state representation to merge a well established reconstruction algorithm maximizing the likelihood with techniques known from quantum many-body theory. (paper)
GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS
Directory of Open Access Journals (Sweden)
S. Sridevi
2013-02-01
Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and renders pixel intensities unreliable. In fetal ultrasound images, edges and local fine details are particularly important for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter therefore has to be contrived to proficiently suppress speckle noise while simultaneously preserving these features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical tools as tuning parameters and using differently shaped quadrilateral kernels to estimate the noise-free pixel from its neighborhood. The performance of various filters, namely the Median, Kuwahara, Frost, Homogeneous mask and Rayleigh maximum likelihood filters, is compared with the proposed filter in terms of PSNR and image profile. The proposed filter surpasses the conventional filters.
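The per-pixel computation behind a Rayleigh maximum likelihood filter can be sketched as follows: under a Rayleigh model for the window samples, the scale MLE is sqrt(sum(x^2)/(2N)). The 3x3 square window and clipped edge handling are simplifying assumptions here; the article's quadrilateral kernels would change only which neighbors are gathered:

```python
import math

def rayleigh_mle_filter(img, k=1):
    """Replace each pixel by the Rayleigh scale MLE
    sigma_hat = sqrt(sum(x^2) / (2N)) over its (2k+1)x(2k+1)
    neighborhood, clipping the window at image borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - k), min(h, i + k + 1))
                    for b in range(max(0, j - k), min(w, j + k + 1))]
            out[i][j] = math.sqrt(sum(v * v for v in vals) / (2 * len(vals)))
    return out

# A constant 3x3 patch of value 5 maps to 5/sqrt(2) everywhere.
out = rayleigh_mle_filter([[5.0] * 3 for _ in range(3)])
```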
Smoothed log-concave maximum likelihood estimation with applications
Chen, Yining
2011-01-01
We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.
Maximum Likelihood Estimation in Panels with Incidental Trends
Moon, Hyungsik; Phillips, Peter C. B.
1999-01-01
It is shown that a local to unity parameter can be consistently estimated by maximum likelihood with panel data when the cross-section observations are independent. Consistency applies when there are no deterministic trends or when there is a homogeneous deterministic trend in the panel model. When there are heterogeneous deterministic trends, the panel MLE of the local to unity parameter is inconsistent. This outcome provides a new instance of inconsistent ML estimation in dynam...
Monte Carlo maximum likelihood estimation for discretely observed diffusion processes
Beskos, Alexandros; Papaspiliopoulos, Omiros; Roberts, Gareth
2009-01-01
This paper introduces a Monte Carlo method for maximum likelihood inference in the context of discretely observed diffusion processes. The method gives unbiased and a.s. continuous estimators of the likelihood function for a family of diffusion models and its performance in numerical examples is computationally efficient. It uses a recently developed technique for the exact simulation of diffusions, and involves no discretization error. We show that, under regularity conditions, the Monte C...
Maximum-likelihood estimation prevents unphysical Mueller matrices
Aiello, A; Voigt, D; Woerdman, J P
2005-01-01
We show that the method of maximum-likelihood estimation, recently introduced in the context of quantum process tomography, can be applied to the determination of Mueller matrices characterizing the polarization properties of classical optical systems. Contrary to linear reconstruction algorithms, the proposed method yields physically acceptable Mueller matrices even in the presence of uncontrolled experimental errors. We illustrate the method on the case of an unphysical measured Mueller matrix taken from the literature.
Maximum-likelihood estimation of recent shared ancestry (ERSA)
Huff, Chad D.; Witherspoon, David J.; Simonson, Tatum S.; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W.; Burt, Randall W.; Guthery, Stephen L; Woodward, Scott R.; Jorde, Lynn B
2011-01-01
Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number a...
Maximum Likelihood and the Bootstrap for Nonlinear Dynamic Models
Goncalves, Silvia; White, Halbert
2002-01-01
The bootstrap is an increasingly popular method for performing statistical inference. This paper provides the theoretical foundation for using the bootstrap as a valid tool of inference for quasi-maximum likelihood estimators (QMLE). We provide a unified framework for analyzing bootstrapped extremum estimators of nonlinear dynamic models for heterogeneous dependent stochastic processes. We apply our results to two block bootstrap methods, the moving blocks bootstrap of Künsch (1989) and Liu a...
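For intuition, the moving blocks bootstrap of Künsch (1989) mentioned above can be sketched as follows; the series and block length are illustrative assumptions:

```python
import random

random.seed(1)

def moving_blocks_bootstrap(series, block_len):
    """One moving-blocks bootstrap resample: draw blocks of length
    block_len with replacement from all overlapping blocks of the
    series and concatenate them, preserving short-range dependence."""
    n = len(series)
    blocks = [series[i:i + block_len] for i in range(n - block_len + 1)]
    out = []
    while len(out) < n:
        out.extend(random.choice(blocks))
    return out[:n]

sample = moving_blocks_bootstrap(list(range(10)), block_len=3)
```

Resampling whole blocks rather than individual observations is what makes the bootstrap valid for dependent data such as the QMLE setting above.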
The Multivariate Watson Distribution: Maximum-Likelihood Estimation and other Aspects
Sra, Suvrit
2011-01-01
This paper studies fundamental aspects of modelling data using multivariate Watson distributions. Although these distributions are natural for modelling axially symmetric data (i.e., unit vectors where $\pm x$ are equivalent), using them in high dimensions can be difficult. Why so? Largely because for Watson distributions even basic tasks such as maximum-likelihood estimation are numerically challenging. To tackle the numerical difficulties some approximations have been derived, but these are either grossly inaccurate in high dimensions (\emph{Directional Statistics}, Mardia & Jupp, 2000) or, when reasonably accurate (\emph{J. Machine Learning Research, W. & C.P., v2}, Bijral \emph{et al.}, 2007, pp. 35--42), they lack theoretical justification. We derive new approximations to the maximum-likelihood estimates; our approximations are theoretically well-defined, numerically accurate, and easy to compute. We build on our parameter estimation and discuss mixture-modelling with Watson distributions; here we uncover...
Maximum-likelihood soft-decision decoding of block codes using the A* algorithm
Ekroot, L.; Dolinar, S.
1994-01-01
The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
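The likelihood function being optimized can be made concrete with a toy exhaustive decoder. The [5,2] code and received values below are assumptions for illustration; the article's A* search finds the same maximum-likelihood codeword while expanding far fewer candidates:

```python
# Codewords of a toy [5,2] linear binary block code (illustrative).
codewords = [
    (0, 0, 0, 0, 0),
    (0, 1, 1, 0, 1),
    (1, 0, 1, 1, 0),
    (1, 1, 0, 1, 1),
]

def ml_decode(received):
    """Exhaustive maximum-likelihood soft-decision decoding: under
    BPSK (bit b -> 1 - 2b) and additive Gaussian noise, the ML
    codeword is the one whose modulated form has the largest
    correlation with the received real-valued sequence."""
    def correlation(c):
        return sum(r * (1 - 2 * b) for r, b in zip(received, c))
    return max(codewords, key=correlation)

# Noisy observation of the codeword (0, 1, 1, 0, 1); negative
# received values lean toward bit 1.
decoded = ml_decode([0.9, -1.1, -0.2, 0.4, -0.7])
```

Exhaustive search like this scales as 2^k in the code dimension, which is exactly the cost the A* tree search is designed to avoid.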
Maximum Likelihood Estimation for an Innovation Diffusion Model of New Product Acceptance
David C Schmittlein; Vijay Mahajan
1982-01-01
A maximum likelihood approach is proposed for estimating an innovation diffusion model of new product acceptance originally considered by Bass (Bass, F. M. 1969. A new product growth model for consumer durables. (January) 215–227.). The suggested approach allows: (1) computation of approximate standard errors for the diffusion model parameters, and (2) determination of the required sample size for forecasting the adoption level to any desired degree of accuracy. Using histograms from eight di...
Operational risk models and maximum likelihood estimation error for small sample-sizes
Paul Larsen
2015-01-01
Operational risk models commonly employ maximum likelihood estimation (MLE) to fit loss data to heavy-tailed distributions. Yet several desirable properties of MLE (e.g. asymptotic normality) are generally valid only for large sample-sizes, a situation rarely encountered in operational risk. We study MLE in operational risk models for small sample-sizes across a range of loss severity distributions. We apply these results to assess (1) the approximation of parameter confidence intervals by as...
A Maximum-Likelihood Approach to Force-Field Calibration.
Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam
2015-09-28
A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2
Maximum Likelihood Localization of Radiation Sources with unknown Source Intensity
Baidoo-Williams, Henry E
2016-01-01
In this paper, we consider a novel and robust maximum likelihood approach to localizing radiation sources with unknown statistics of the source signal strength. The result utilizes the smallest number of sensors theoretically required to localize the source: it is shown that, should the source lie in the open convex hull of the sensors, precisely $N+1$ sensors are required in $\mathbb{R}^N, ~N \in \{1,\cdots,3\}$. It is further shown that the region of interest, the open convex hull of the sensors, is entirely devoid of false stationary points. An augmented gradient ascent algorithm, with random projections applied should an estimate escape the convex hull, is presented.
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T
2015-01-01
Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is drastically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations.
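For contrast with the discretely observed setting the article addresses, the completely observed case has a simple closed-form MLE, sketched here on an illustrative two-state trajectory:

```python
# Fully observed trajectory as (state, holding time) pairs
# (illustrative data, not from the article).
path = [(0, 1.0), (1, 0.5), (0, 2.0), (1, 1.5), (0, 1.0), (1, 1.0)]

def ctmc_rate_mle(path, n_states=2):
    """For a completely observed continuous-time Markov chain, the
    MLE of the off-diagonal rate q_ij is n_ij / T_i: the number of
    i -> j jumps divided by the total time spent in state i.
    Diagonal entries are set so each row sums to zero."""
    time_in = [0.0] * n_states
    jumps = [[0] * n_states for _ in range(n_states)]
    for (s, dt), (s_next, _) in zip(path, path[1:]):
        time_in[s] += dt
        jumps[s][s_next] += 1
    time_in[path[-1][0]] += path[-1][1]
    Q = [[jumps[i][j] / time_in[i] if i != j else 0.0
          for j in range(n_states)] for i in range(n_states)]
    for i in range(n_states):
        Q[i][i] = -sum(Q[i])
    return Q

Q = ctmc_rate_mle(path)
```

When only snapshots at a fixed interval are observed, as in the article, the jump counts and holding times are latent, which is what makes that estimation problem substantially harder.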
MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL
Directory of Open Access Journals (Sweden)
Vinod Kumar
2010-01-01
In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations, as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.
Maximum Likelihood Joint Tracking and Association in Strong Clutter
Directory of Open Access Journals (Sweden)
Leonid I. Perlovsky
2013-01-01
We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed in case of strong clutter for radar data. By using an iterative procedure of the dynamic logic process "from vague-to-crisp" explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.
New Algorithms and Methods to Estimate Maximum-Likelihood Phylogenies: Assessing the Performance of PhyML 3.0
Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier
2010-05-01
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
Analytical maximum likelihood estimation of stellar magnetic fields
González, M J Martínez; Ramos, A Asensio; Belluzzi, L
2011-01-01
The polarised spectrum of stellar radiation encodes valuable information on the conditions of stellar atmospheres and the magnetic fields that permeate them. In this paper, we give explicit expressions to estimate the magnetic field vector and its associated error from the observed Stokes parameters. We study the solar case, where specific intensities are observed, and then the stellar case, where we receive the polarised flux. In this second case, we concentrate on the explicit expression for the case of a slow rotator with a dipolar magnetic field geometry. Moreover, we also give explicit formulae to retrieve the magnetic field vector from the LSD profiles without assuming mean values for the LSD artificial spectral line. The formulae have been obtained assuming that the spectral lines can be described in the weak field regime and using a maximum likelihood approach. The errors are recovered by means of the hermitian matrix. The biases of the estimators are analysed in depth.
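In the weak-field regime mentioned above, Stokes V is proportional to the line-of-sight field times the intensity derivative, so the Gaussian-noise maximum likelihood estimate reduces to a ratio of correlations. The constant C and the synthetic, noise-free profile below are assumptions for illustration, not values from the article:

```python
# Weak-field regime: V(lambda) ~= -C * B_los * dI/dlambda.
# C = 4.67e-13 * lambda0^2 * g_eff (illustrative line parameters).
C = 4.67e-13 * 6302.5**2 * 2.5

dI = [0.1, 0.3, -0.3, -0.1]        # sampled intensity derivative
B_true = 500.0                      # gauss, assumed true field
V = [-C * B_true * d for d in dI]   # noise-free synthetic Stokes V

# ML (least-squares) estimate of the line-of-sight field:
# B_hat = -sum(V * dI) / (C * sum(dI^2)).
num = -sum(v * d for v, d in zip(V, dI))
den = C * sum(d * d for d in dI)
B_hat = num / den
```

With real, noisy profiles the same ratio is the ML estimate under Gaussian noise, and its variance follows from the curvature of the likelihood.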
Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data
Agnese, R; Balakishiyeva, D; Thakur, R Basu; Bauer, D A; Billard, J; Borgland, A; Bowles, M A; Brandt, D; Brink, P L; Bunker, R; Cabrera, B; Caldwell, D O; Cerdeno, D G; Chagani, H; Chen, Y; Cooley, J; Cornell, B; Crewdson, C H; Cushman, P; Daal, M; Di Stefano, P C F; Doughty, T; Esteban, L; Fallows, S; Figueroa-Feliciano, E; Fritts, M; Godfrey, G L; Golwala, S R; Graham, M; Hall, J; Harris, H R; Hertel, S A; Hofer, T; Holmgren, D; Hsu, L; Huber, M E; Jastram, A; Kamaev, O; Kara, B; Kelsey, M H; Kennedy, A; Kiveni, M; Koch, K; Leder, A; Loer, B; Asamar, E Lopez; Mahapatra, R; Mandic, V; Martinez, C; McCarthy, K A; Mirabolfathi, N; Moffatt, R A; Moore, D C; Nelson, R H; Oser, S M; Page, K; Page, W A; Partridge, R; Pepin, M; Phipps, A; Prasad, K; Pyle, M; Qiu, H; Rau, W; Redl, P; Reisetter, A; Ricci, Y; Rogers, H E; Saab, T; Sadoulet, B; Sander, J; Schneck, K; Schnee, R W; Scorza, S; Serfass, B; Shank, B; Speller, D; Upadhyayula, S; Villano, A N; Welliver, B; Wright, D H; Yellin, S; Yen, J J; Young, B A; Zhang, J
2014-01-01
We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search (CDMS~II) experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from $^{210}$Pb decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.
Preliminary attempt on maximum likelihood tomosynthesis reconstruction of DEI data
International Nuclear Information System (INIS)
Tomosynthesis is a three-dimensional reconstruction method that can remove the effect of superimposition with limited-angle projections. It is especially promising in mammography, where radiation dose is a concern. In this paper, we propose a maximum likelihood tomosynthesis reconstruction algorithm (ML-TS) for the apparent absorption data of diffraction enhanced imaging (DEI). The motivation of this contribution is to develop a tomosynthesis algorithm for low-dose or noisy circumstances and bring DEI closer to clinical application. The theoretical statistical models of DEI data in physics are analyzed and the proposed algorithm is validated with experimental data from the Beijing Synchrotron Radiation Facility (BSRF). The results of ML-TS have better contrast than those of the well-known 'shift-and-add' algorithm and the FBP algorithm. (authors)
Narrow band interference cancelation in OFDM: A structured maximum likelihood approach
Sohail, Muhammad Sadiq
2012-06-01
This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.
Predicting unprotected reactor upset response using the maximum likelihood method
International Nuclear Information System (INIS)
A number of advanced reactor concepts incorporate intrinsic design features that act to safely limit reactor response during upsets. In the integral fast reactor (IFR) concept, for example, metallic fuel is used to provide sufficient negative reactivity feedback to achieve a safe response for a number of unprotected upsets. In reactors such as the IFR that rely on passive features for part of their safety, the licensing of these systems will probably require that they be periodically tested to verify proper operation. Commercial light water plants have similar requirements for active safety systems. The approach to testing considered in this paper involves determining during normal operation the values of key reactor parameters that govern the unprotected reactor response and then using these values to predict upset response. The values are determined using the maximum likelihood method. If the predicted reactor response is within safe limits, then one concludes that the intrinsic safety features are operating correctly.
Marginal Maximum Likelihood Estimation of Item Response Models in R
Directory of Open Access Journals (Sweden)
Matthew S. Johnson
2007-02-01
Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
Stochastic Maximum Likelihood (SML) parametric estimation of overlapped Doppler echoes
Directory of Open Access Journals (Sweden)
E. Boyer
2004-11-01
Full Text Available This paper investigates the area of overlapped echo data processing. In such cases, classical methods, such as Fourier-like techniques or pulse pair methods, fail to estimate the first three spectral moments of the echoes because of their lack of resolution. A promising method, based on a modelization of the covariance matrix of the time series and on a Stochastic Maximum Likelihood (SML) estimation of the parameters of interest, has recently been introduced in the literature. This method has been tested on simulations and on a few spectra from actual data, but no exhaustive investigation of the SML algorithm has been conducted on actual data: this paper fills this gap. The radar data came from the thunderstorm campaign that took place at the National Astronomy and Ionospheric Center (NAIC) in Arecibo, Puerto Rico, in 1998.
The Multi-Mission Maximum Likelihood framework (3ML)
Vianello, Giacomo; Younk, Patrick; Tibaldo, Luigi; Burgess, James M; Ayala, Hugo; Harding, Patrick; Hui, Michelle; Omodei, Nicola; Zhou, Hao
2015-01-01
Astrophysical sources are now observed by many different instruments at different wavelengths, from radio to high-energy gamma-rays, with unprecedented quality. Putting all these data together to form a coherent view, however, is a very difficult task. Each instrument has its own data format, software and analysis procedure, which are difficult to combine. It is, for example, very challenging to perform a broadband fit of the energy spectrum of the source. The Multi-Mission Maximum Likelihood framework (3ML) aims to solve this issue, providing a common framework which allows for a coherent modeling of sources using all the available data, independent of their origin. At the same time, thanks to its architecture based on plug-ins, 3ML uses the existing official software of each instrument for the corresponding data in a way which is transparent to the user. 3ML is based on the likelihood formalism, in which a model summarizing our knowledge about a particular region of the sky is convolved with the instrument...
Maximum likelihood based classification of electron tomographic data.
Stölken, Michael; Beck, Florian; Haller, Thomas; Hegerl, Reiner; Gutsche, Irina; Carazo, Jose-Maria; Baumeister, Wolfgang; Scheres, Sjors H W; Nickell, Stephan
2011-01-01
Classification and averaging of sub-tomograms can improve the fidelity and resolution of structures obtained by electron tomography. Here we present a three-dimensional (3D) maximum likelihood algorithm--MLTOMO--which is characterized by integrating 3D alignment and classification into a single, unified processing step. The novelty of our approach lies in the way we calculate the probability of observing an individual sub-tomogram for a given reference structure. We assume that the reference structure is affected by a 'compound wedge', resulting from the summation of many individual missing wedges in distinct orientations. The distance metric underlying our probability calculations effectively down-weights Fourier components that are observed less frequently. Simulations demonstrate that MLTOMO clearly outperforms the 'constrained correlation' approach and has advantages over existing approaches in cases where the sub-tomograms adopt preferred orientations. Application of our approach to cryo-electron tomographic data of ice-embedded thermosomes revealed distinct conformations that are in good agreement with results obtained by previous single particle studies.
Efficient scatter modelling for incorporation in maximum likelihood reconstruction
International Nuclear Information System (INIS)
Definition of a simplified model of scatter which can be incorporated in maximum likelihood reconstruction for single-photon emission tomography (SPET) continues to be appealing; however, implementation must be efficient for it to be clinically applicable. In this paper an efficient algorithm for scatter estimation is described in which the spatial scatter distribution is implemented as a spatially invariant convolution for points of constant depth in tissue. The scatter estimate is weighted by a space-dependent build-up factor based on the measured attenuation in tissue. Monte Carlo simulation of a realistic thorax phantom was used to validate this approach. Further efficiency was introduced by estimating scatter once after a small number of iterations using the ordered subsets expectation maximisation (OSEM) reconstruction algorithm. The scatter estimate was incorporated as a constant term in subsequent iterations rather than modifying the scatter estimate each iteration. Monte Carlo simulation was used to demonstrate that the scatter estimate does not change significantly provided at least two iterations of OSEM reconstruction, with subset size 8, are used. Complete scatter-corrected reconstruction of 64 projections of 40 x 128 pixels was achieved in 38 min using a Sun Sparc20 computer. (orig.)
tmle: An R Package for Targeted Maximum Likelihood Estimation
Directory of Open Access Journals (Sweden)
Susan Gruber
2012-11-01
Full Text Available Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user-supplied covariates, including an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.
A maximum likelihood approach to the destriping technique
Keihanen, E; Poutanen, T; Maino, D; Burigana, C
2003-01-01
The destriping technique is a viable tool for removing different kinds of systematic effects in CMB related experiments. It has already been proven to work for gain instabilities that produce the so-called 1/f noise and periodic fluctuations due to e.g. thermal instability. Both effects when coupled with the observing strategy result in stripes on the observed sky region. Here we present a maximum-likelihood approach to this type of technique and provide also a useful generalization. As a working case we consider a data set similar to what the Planck satellite will produce in its Low Frequency Instrument (LFI). We compare our method to those presented in the literature and find some improvement in performance. Our approach is also more general and allows for different base functions to be used when fitting the systematic effect under consideration. We study the effect of increasing the number of these base functions on the quality of signal cleaning and reconstruction. This study is related to Planck LFI acti...
A Maximum Likelihood Approach to Least Absolute Deviation Regression
Directory of Open Access Journals (Sweden)
Yinbo Li
2004-09-01
Full Text Available Least absolute deviation (LAD) regression is an important tool used in numerous applications throughout science and engineering, mainly due to the intrinsic robust characteristics of LAD. In this paper, we show that the optimization needed to solve the LAD regression problem can be viewed as a sequence of maximum likelihood estimates (MLE) of location. The derived algorithm reduces to an iterative procedure where a simple coordinate transformation is applied during each iteration to direct the optimization procedure along edge lines of the cost surface, followed by an MLE of location which is executed by a weighted median operation. Requiring weighted medians only, the new algorithm can be easily modularized for hardware implementation, as opposed to most of the other existing LAD methods which require complicated operations such as matrix entry manipulations. One exception is Wesolowsky's direct descent algorithm, which among the top algorithms is also based on weighted median operations. Simulation shows that the new algorithm is superior in speed to Wesolowsky's algorithm, which is simple in structure as well. The new algorithm provides a better tradeoff solution between convergence speed and implementation complexity.
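The weighted-median core of the algorithm described above can be shown in miniature. The following is a hedged sketch, not the authors' full coordinate-transformation algorithm: for LAD regression through the origin, the exact solution is the weighted median of the pointwise slopes y_i/x_i with weights |x_i|.

```python
def weighted_median(values, weights):
    # sort by value; return the first value where the cumulative
    # weight reaches half of the total weight
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum >= total / 2:
            return v

def lad_slope(x, y):
    # LAD regression through the origin, minimizing sum |y_i - b*x_i|:
    # the exact minimizer is the weighted median of the slopes y_i/x_i
    # with weights |x_i|
    slopes = [yi / xi for xi, yi in zip(x, y)]
    return weighted_median(slopes, [abs(xi) for xi in x])
```

The full algorithm in the paper iterates such weighted-median location estimates along edge lines of the cost surface; this sketch shows only the single-coordinate building block.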
Maximum likelihood estimation for cytogenetic dose-response curves
International Nuclear Information System (INIS)
In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
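The Poisson maximum likelihood fit of a linear-quadratic yield model can be sketched as follows. This is an illustrative grid search with made-up dose-response numbers, not the article's Poisson-regression procedure, and the coefficient values are hypothetical.

```python
import math

def pois_loglik(a, b, doses, counts, cells):
    # Poisson log-likelihood (dropping the constant log y! term) for a
    # linear-quadratic yield model: expected aberrations per cell = a*d + b*d^2
    ll = 0.0
    for d, y, n in zip(doses, counts, cells):
        lam = n * (a * d + b * d * d)   # expected total count for n cells
        ll += y * math.log(lam) - lam
    return ll

def fit_grid(doses, counts, cells):
    # crude grid search over (a, b); a real fit would use Poisson
    # regression (iteratively reweighted least squares or Newton steps)
    best = None
    for i in range(1, 101):
        for j in range(1, 101):
            a, b = i * 0.001, j * 0.001
            ll = pois_loglik(a, b, doses, counts, cells)
            if best is None or ll > best[0]:
                best = (ll, a, b)
    return best[1], best[2]
```

Because the log-likelihood is concave in (a, b) for this model, the grid maximum lands on the generating coefficients when the counts equal their expectations exactly.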
Energy Technology Data Exchange (ETDEWEB)
Croce, R P [Wavesgroup, University of Sannio at Benevento (Italy); Demma, Th [Wavesgroup, University of Sannio at Benevento (Italy); Longo, M [D.I.3 E., University of Salerno (Italy); Marano, S [D.I.3 E., University of Salerno (Italy); Matta, V [D.I.3 E., University of Salerno (Italy); Pierro, V [Wavesgroup, University of Sannio at Benevento (Italy); Pinto, I M [Wavesgroup, University of Sannio at Benevento (Italy)
2003-09-07
The cumulative distribution of the supremum of a set (bank) of correlators is investigated in the context of maximum likelihood detection of gravitational wave chirps from coalescing binaries with unknown parameters. Accurate (lower-bound) approximants are introduced based on a suitable generalization of previous results by Mohanty. Asymptotic properties (in the limit where the number of correlators goes to infinity) are highlighted. The validity of numerical simulations made on small-size banks is extended to banks of any size, via a Gaussian correlation inequality.
Pascazio, Vito; Schirinzi, Gilda
2002-01-01
In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities. PMID:18249716
Two-Stage Maximum Likelihood Estimation (TSMLE) for MT-CDMA Signals in the Indoor Environment
Directory of Open Access Journals (Sweden)
Sesay Abu B
2004-01-01
Full Text Available This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for multitone code division multiple access (MT-CDMA) systems. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER) of the system, over Rayleigh and Ricean fading channels. The analytical model is derived for quadrature phase shift keying (QPSK) modulation by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis, and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Daigle Bernie J
2012-05-01
Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.
Maximum likelihood resampling of noisy, spatially correlated data
Goff, J.; Jenkins, C.
2005-12-01
In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application, which runs the risk of erasing high variability components of the field in addition to the noise components. We present here an alternative to filtering: a newly developed methodology for correcting noise in data by finding the "best" value given the data value, its uncertainty, and the data values and uncertainties at proximal locations. The motivating rationale is that data points that are close to each other in space cannot differ by "too much", where how much is "too much" is governed by the field correlation properties. Data with large uncertainties will frequently violate this condition, and in such cases need to be corrected, or "resampled." The best solution for resampling is determined by the maximum of the likelihood function defined by the intersection of two probability density functions (pdf): (1) the data pdf, with mean and variance determined by the data value and square uncertainty, respectively, and (2) the geostatistical pdf, whose mean and variance are determined by the kriging algorithm applied to proximal data values. A Monte Carlo sampling of the data probability space eliminates non-uniqueness, and weights the solution toward data values with lower uncertainties. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum likelihood resampling algorithm. The method is also applied to three marine geology/geophysics data examples: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is combination of both analytic (low uncertainty
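The core resampling step described above, maximizing the likelihood defined by the intersection (product) of the data pdf and the geostatistical pdf, has a closed form when both pdfs are Gaussian: the precision-weighted mean. A minimal sketch of that step alone (the paper additionally uses kriging to obtain the geostatistical mean and variance, and Monte Carlo sampling of the data space, which are omitted here):

```python
def ml_resample(data_val, data_var, krig_val, krig_var):
    # The product of two Gaussian pdfs N(data_val, data_var) and
    # N(krig_val, krig_var) is maximized at the precision-weighted mean.
    w_d = 1.0 / data_var   # precision of the data pdf
    w_k = 1.0 / krig_var   # precision of the geostatistical (kriged) pdf
    return (w_d * data_val + w_k * krig_val) / (w_d + w_k)
```

A data point with large uncertainty is pulled strongly toward the kriged estimate from its neighbors, while a precise data point is left nearly unchanged, which is exactly the behavior the abstract describes.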
DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.
Directory of Open Access Journals (Sweden)
Steven Kelly
Full Text Available The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly-used distance based methods though not as accurate as maximum likelihood methods from good quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analysis of the same dataset using conventional methods. Taken together these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.
Jumper, John M; Sosnick, Tobin R
2016-01-01
To address the large gap between time scales that can be easily reached by molecular simulations and those required to understand protein dynamics, we propose a new methodology that computes a self-consistent approximation of the side chain free energy at every integration step. In analogy with the adiabatic Born-Oppenheimer approximation in which the nuclear dynamics are governed by the energy of the instantaneously-equilibrated electronic degrees of freedom, the protein backbone dynamics are simulated as proceeding according to the dictates of the free energy of an instantaneously-equilibrated side chain potential. The side chain free energy is computed on the fly; hence, the protein backbone dynamics traverse a greatly smoothed energetic landscape, resulting in extremely rapid equilibration and sampling of the Boltzmann distribution. Because our method employs a reduced model involving single-bead side chains, we also provide a novel, maximum-likelihood type method to parameterize the side chain model using...
On Maximum Likelihood Estimation for Left Censored Burr Type III Distribution
Directory of Open Access Journals (Sweden)
Navid Feroze
2015-12-01
Full Text Available Burr type III is an important distribution used to model the failure time data. The paper addresses the problem of estimation of parameters of the Burr type III distribution based on maximum likelihood estimation (MLE) when the samples are left censored. As the closed form expression for the MLEs of the parameters cannot be derived, the approximate solutions have been obtained through iterative procedures. An extensive simulation study has been carried out to investigate the performance of the estimators with respect to sample size, censoring rate and true parametric values. A real life example has also been presented. The study revealed that the proposed estimators are consistent and capable of providing efficient results under small to moderate samples.
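The left-censored likelihood being maximized can be written down explicitly even though its maximizer cannot: censored observations contribute the cdf at the censoring point, uncensored ones the density. A hedged sketch using the standard two-parameter Burr III form F(x) = (1 + x^(-c))^(-k); the parameter names c and k follow common convention and the numerical maximization itself (the paper's iterative procedure) is not reproduced here.

```python
import math

def burr3_logpdf(x, c, k):
    # Burr III density: f(x) = c*k * x^(-c-1) * (1 + x^(-c))^(-k-1)
    return (math.log(c) + math.log(k) - (c + 1) * math.log(x)
            - (k + 1) * math.log(1.0 + x ** (-c)))

def burr3_logcdf(x, c, k):
    # Burr III cdf: F(x) = (1 + x^(-c))^(-k)
    return -k * math.log(1.0 + x ** (-c))

def censored_loglik(c, k, observed, n_censored, T):
    # left censoring at T: each censored unit contributes log F(T),
    # each fully observed failure time contributes log f(x)
    ll = n_censored * burr3_logcdf(T, c, k)
    ll += sum(burr3_logpdf(x, c, k) for x in observed)
    return ll
```

This log-likelihood would then be handed to a numerical optimizer over (c, k), which is where the iterative approximate solutions mentioned in the abstract come in.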
Directory of Open Access Journals (Sweden)
Harlow Timothy J
2005-01-01
Full Text Available Abstract Background Bayesian phylogenetic inference holds promise as an alternative to maximum likelihood, particularly for large molecular-sequence data sets. We have investigated the performance of Bayesian inference with empirical and simulated protein-sequence data under conditions of relative branch-length differences and model violation. Results With empirical protein-sequence data, Bayesian posterior probabilities provide more-generous estimates of subtree reliability than does the nonparametric bootstrap combined with maximum likelihood inference, reaching 100% posterior probability at bootstrap proportions around 80%. With simulated 7-taxon protein-sequence datasets, Bayesian posterior probabilities are somewhat more generous than bootstrap proportions, but do not saturate. Compared with likelihood, Bayesian phylogenetic inference can be as or more robust to relative branch-length differences for datasets of this size, particularly when among-sites rate variation is modeled using a gamma distribution. When the (known) correct model was used to infer trees, Bayesian inference recovered the (known) correct tree in 100% of instances in which one or two branches were up to 20-fold longer than the others. At ratios more extreme than 20-fold, topological accuracy of reconstruction degraded only slowly when only one branch was of relatively greater length, but more rapidly when there were two such branches. Under an incorrect model of sequence change, inaccurate trees were sometimes observed at less extreme branch-length ratios, and (particularly for trees with single long branches) such trees tended to be more inaccurate. The effect of model violation on accuracy of reconstruction for trees with two long branches was more variable, but gamma-corrected Bayesian inference nonetheless yielded more-accurate trees than did either maximum likelihood or uncorrected Bayesian inference across the range of conditions we examined. Assuming an exponential
MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
Digital Repository Service at National Institute of Oceanography (India)
Hassani, V.; Sorensen, A.J.; Pascoal, A.M.
parameter. The proposed adaptive wave filter borrows from maximum likelihood identification techniques. The general form of the logarithmic likelihood function is derived and the dominant wave frequency (the uncertain parameter) is identified by maximizing...
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
DEFF Research Database (Denmark)
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi;
2013-01-01
Abstract Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio ... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error...
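A much-simplified sketch of ML cardinality estimation under detection errors. The toy model below, independent reader sessions with a known per-session detection probability, is an assumption made for illustration and is not the paper's exact scheme: the number of distinct tags seen in m sessions is Binomial(N, p_seen), so the likelihood in N is maximized near D / p_seen.

```python
def ml_tag_count(distinct_observed, sessions, p_detect):
    # Toy model (illustrative assumption): each tag is detected in each of
    # `sessions` independent reader sessions with probability p_detect.
    # Probability a tag is seen at least once:
    p_seen = 1.0 - (1.0 - p_detect) ** sessions
    # Distinct observed count D ~ Binomial(N, p_seen); the binomial
    # likelihood in N peaks near D / p_seen.
    return round(distinct_observed / p_seen)
```

More sessions shrink the missed-tag correction: with p_detect = 0.5, two sessions inflate the raw count by 4/3, while five sessions inflate it by only about 3%.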
Which quantile is the most informative? Maximum likelihood, maximum entropy and quantile regression
Bera, A. K.; Galvao Jr, A. F.; Montes-Rojas, G.; Park, S. Y.
2010-01-01
This paper studies the connections among quantile regression, the asymmetric Laplace distribution, maximum likelihood and maximum entropy. We show that the maximum likelihood problem is equivalent to the solution of a maximum entropy problem where we impose moment constraints given by the joint consideration of the mean and median. Using the resulting score functions we propose an estimator based on the joint estimating equations. This approach delivers estimates for the slope parameters toge...
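The equivalence between quantile regression and maximum likelihood under an asymmetric Laplace error rests on the "check" loss, whose sum is (up to constants) the negative asymmetric Laplace log-likelihood. A minimal sketch for the location-only case, using a finite candidate grid purely for illustration:

```python
def check_loss(u, tau):
    # quantile-regression "check" loss rho_tau(u) = u * (tau - 1[u < 0]);
    # summing it over residuals is, up to constants, the negative
    # log-likelihood of an asymmetric Laplace density
    return u * (tau - (1.0 if u < 0 else 0.0))

def quantile_via_loss(data, tau, grid):
    # minimizing total check loss over a constant recovers the
    # tau-quantile of the sample
    return min(grid, key=lambda q: sum(check_loss(x - q, tau) for x in data))
```

With tau = 0.5 the check loss reduces to half the absolute deviation and the minimizer is the median; tilting tau toward 1 moves the minimizer to upper quantiles, which is the mean-plus-median moment structure the paper exploits.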
A viable method for goodness-of-fit test in maximum likelihood fit
Institute of Scientific and Technical Information of China (English)
ZHANG Feng; GAO Yuan-Ning; HUO Lei
2011-01-01
A test statistic is proposed to perform the goodness-of-fit test in the unbinned maximum likelihood fit. Without using a detailed expression of the efficiency function, the test statistic is found to be strongly correlated with the maximum likelihood function if the efficiency function varies smoothly. We point out that the correlation coefficient can be estimated by the Monte Carlo technique. With the established method, two examples are given to illustrate the performance of the test statistic.
Maximum likelihood versus likelihood-free quantum system identification in the atom maser
Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin
2014-10-01
We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle.
Directory of Open Access Journals (Sweden)
K. Yao
2007-12-01
Full Text Available We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first algorithm is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge and both the SC-ML and the approximately-concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.
Fusion of hyperspectral and lidar data based on dimension reduction and maximum likelihood
Abbasi, B.; Arefi, H.; Bigdeli, B.; Motagh, M.; Roessner, S.
2015-04-01
Limitations and deficiencies of individual remote sensing sensors in extracting different objects have made fusion of data from multiple sensors an increasingly widespread means of improving classification results. Using a variety of data provided by different sensors increases both spatial and spectral accuracy. Lidar (Light Detection and Ranging) data fused with hyperspectral images (HSI) provide rich data for classification of surface objects. Lidar data, representing high-quality geometric information, play a key role in the segmentation and classification of elevated features such as buildings and trees. On the other hand, hyperspectral data, with their high spectral resolution, support a sharp distinction between objects having different spectral signatures, such as soil, water, and grass. This paper presents a fusion methodology for Lidar and hyperspectral data to improve classification accuracy in urban areas. In the first step, we apply feature extraction strategies to each dataset separately: texture features based on the GLCM (Grey-Level Co-occurrence Matrix) are generated from the Lidar data, and PCA (Principal Component Analysis) and MNF (Minimum Noise Fraction) dimension-reduction methods are applied to the HSI. In the second step, a Maximum Likelihood (ML) classifier is applied to each feature space. Finally, a fusion method combines the classification results. Co-registered hyperspectral and Lidar data from the University of Houston were used to evaluate the proposed method. The data contain nine classes: Building, Tree, Grass, Soil, Water, Road, Parking, Tennis Court, and Running Track. Experiments show an improvement of classification accuracy to 88%.
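The per-pixel ML classification step in the second stage can be sketched with Gaussian class models. The three classes, their means, and the synthetic 2-D feature vectors below are illustrative stand-ins for the paper's GLCM/PCA/MNF feature spaces:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D feature vectors for three classes (stand-ins for the
# GLCM texture and MNF spectral features used in the paper).
means = {"building": np.array([5.0, 0.0]),
         "grass":    np.array([0.0, 4.0]),
         "water":    np.array([-4.0, -3.0])}
cov = np.eye(2)
train = {c: rng.multivariate_normal(m, cov, size=200) for c, m in means.items()}

# Fit a per-class mean and covariance from the training samples.
fitted = {c: (x.mean(axis=0), np.cov(x.T)) for c, x in train.items()}

def log_likelihood(x, mu, sigma):
    d = x - mu
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (logdet + d @ np.linalg.inv(sigma) @ d)

def classify(x):
    # ML decision rule: assign the pixel to the class whose Gaussian
    # model gives it the highest log-likelihood.
    return max(fitted, key=lambda c: log_likelihood(x, *fitted[c]))
```

With equal class priors this ML rule coincides with the Bayes-optimal classifier, which is why it remains a standard baseline for per-pixel remote sensing classification.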
Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET
Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee
2016-04-01
Positron emission tomography (PET) is a widely used imaging modality; however, the PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half of the detector size and determines the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of the existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms. The WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can be easily extended to other iterative reconstruction algorithms, such as maximum a priori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design.
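The WL-MLEM algorithm extends the standard MLEM update with wobbled sampling and LSF deconvolution built into the system matrix; the core MLEM iteration it builds on can be sketched as follows. The toy system matrix and activity values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy system matrix A (n_detectors x n_voxels): entry (i, j) is the
# probability that an emission from voxel j is recorded in detector i.
n_det, n_vox = 40, 10
A = rng.random((n_det, n_vox))
A /= A.sum(axis=0)                       # normalize detection probabilities

x_true = rng.uniform(10.0, 100.0, n_vox)  # true voxel activities
y = rng.poisson(A @ x_true)               # Poisson-noisy counts

def mlem(y, A, n_iter=200):
    x = np.ones(A.shape[1])              # flat initial estimate
    sens = A.sum(axis=0)                 # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        x *= (A.T @ (y / proj)) / sens   # multiplicative MLEM update
    return x

x_hat = mlem(y, A)
```

The multiplicative update keeps the estimate nonnegative and conserves total counts, two properties that carry over to extensions such as the WL-MLEM, MAP, and OSEM variants mentioned in the abstract.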
Implementation of linear filters for iterative penalized maximum likelihood SPECT reconstruction
International Nuclear Information System (INIS)
This paper reports on six low-pass linear filters, applied in frequency space, implemented for iterative penalized maximum-likelihood (ML) SPECT image reconstruction. The filters implemented were the Shepp-Logan, Butterworth, Gaussian, Hann, Parzen, and Lagrange filters. Low-pass filtering was applied in frequency space to the projection data for the initial estimate, and to the difference between the projection data and reprojected data for higher-order approximations. The projection data were acquired experimentally from a chest phantom consisting of nonuniform attenuating media. All the filters could effectively remove the noise and edge artifacts associated with the ML approach if the frequency cutoff was properly chosen. Improved performance of the Parzen and Lagrange filters relative to the others was observed. The best image, judged by its profiles in terms of noise smoothing, edge sharpening, and contrast, was the one obtained with the Parzen filter. However, the Lagrange filter has the potential to account for the characteristics of the detector response function
Directory of Open Access Journals (Sweden)
Arogyaswami Paulraj
2007-12-01
Full Text Available Maximum-likelihood (ML) detection is guaranteed to yield the minimum probability of erroneous detection and is thus of great importance for both multiuser detection and space-time decoding. For multiple-input multiple-output (MIMO) antenna systems where the number of receive antennas is at least the number of signals multiplexed in the spatial domain, ML detection can be done efficiently using sphere decoding. Suboptimal detectors are also well known to have reasonable performance at low complexity. It is, nevertheless, much less understood how to obtain good detection at affordable complexity when there are fewer receive antennas than transmitted signals (i.e., underdetermined MIMO systems). In this paper, our aim is to develop an efficient detection strategy that can achieve near-ML performance for underdetermined MIMO systems. Our method is based on the geometrical understanding that the ML point happens to be a point that is "close" to the decoding hyperplane in all directions. The fact that there are far fewer such close points is used to devise a decoding method that promises to greatly reduce the decoding complexity while achieving near-ML performance. An average-case complexity analysis based on a Gaussian approximation is also given.
Murphy, P. C.; Klein, V.
1984-01-01
Improved techniques for estimating airplane stability and control derivatives and their standard errors are presented. A maximum likelihood estimation algorithm is developed which relies on an optimization scheme referred to as a modified Newton-Raphson scheme with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared to integrating the analytically-determined sensitivity equations or using a finite difference scheme. An aircraft estimation problem is solved using real flight data to compare MNRES with the commonly used modified Newton-Raphson technique; MNRES is found to be faster and more generally applicable. Parameter standard errors are determined using a random search technique. The confidence intervals obtained are compared with Cramer-Rao lower bounds at the same confidence level. It is observed that the nonlinearity of the cost function is an important factor in the relationship between Cramer-Rao bounds and the error bounds determined by the search technique.
Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems
Directory of Open Access Journals (Sweden)
Hakan A. Çırpan
2002-05-01
Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement
Energy Technology Data Exchange (ETDEWEB)
Lerche, Ch.W. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain)], E-mail: lerche@ific.uv.es; Ros, A. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain); Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain); Sanchez, F.; Benlloch, J.M. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain)
2009-06-01
The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42 mm × 42 mm × 10 mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
International Nuclear Information System (INIS)
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, which requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Energy Technology Data Exchange (ETDEWEB)
Laurence, T; Chromy, B
2009-11-10
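The idea in this record — driving standard Levenberg-Marquardt least-squares machinery with residuals whose sum of squares equals the Poisson deviance, so the minimizer ends up maximizing the Poisson likelihood — can be sketched as below. The exponential-decay model, the parameter values, and the use of SciPy's generic `least_squares` routine are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

# Simulated fluorescence-decay histogram: Poisson counts around an
# exponential model  m(t) = A * exp(-t / tau) + bg.
t = np.linspace(0.0, 10.0, 100)
A_true, tau_true, bg_true = 200.0, 2.0, 5.0
model = lambda p: p[0] * np.exp(-t / p[1]) + p[2]
y = rng.poisson(model((A_true, tau_true, bg_true)))

def poisson_residuals(p):
    """Residuals whose sum of squares is the Poisson deviance
    2*sum(m - y + y*ln(y/m)); minimizing it with an L-M style
    least-squares solver maximizes the Poisson likelihood."""
    m = model(p)
    dev = 2.0 * (m - y + np.where(y > 0, y * np.log(y / m), 0.0))
    return np.sqrt(np.maximum(dev, 0.0))

fit = least_squares(poisson_residuals, x0=(100.0, 1.0, 1.0), method="lm")
A_hat, tau_hat, bg_hat = fit.x
```

For low-count histograms this removes the bias that plain least squares exhibits on Poisson data, at essentially the cost of a plain L-M fit.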
Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB
Millar, Russell B
2011-01-01
This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis
Nonparametric maximum likelihood estimation of probability densities by penalty function methods
Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.
1974-01-01
When it is not known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.
Maximum-likelihood scintillation detection for EM-CCD based gamma cameras
International Nuclear Information System (INIS)
Gamma cameras based on charge-coupled devices (CCDs) coupled to continuous scintillation crystals can combine a good detection efficiency with high spatial resolutions with the aid of advanced scintillation detection algorithms. A previously developed analytical multi-scale algorithm (MSA) models the depth-dependent light distribution but does not take statistics into account. Here we present and validate a novel statistical maximum-likelihood algorithm (MLA) that combines a realistic light distribution model with an experimentally validated statistical model. The MLA was tested for an electron multiplying CCD optically coupled to CsI(Tl) scintillators of different thicknesses. For 99mTc imaging, the spatial resolution (for perpendicular and oblique incidence), energy resolution and signal-to-background counts ratio (SBR) obtained with the MLA were compared with those of the MSA. Compared to the MSA, the MLA improves the energy resolution by more than a factor of 1.6 and the SBR is enhanced by more than a factor of 1.3. For oblique incidence (approximately 45°), the depth-of-interaction corrected spatial resolution is improved by a factor of at least 1.1, while for perpendicular incidence the MLA resolution does not differ significantly from the MSA result for the tested scintillator thicknesses. For the thickest scintillator (3 mm, interaction probability 66% at 141 keV) a spatial resolution (perpendicular incidence) of 147 μm full width at half maximum (FWHM) was obtained with an energy resolution of 35.2% FWHM. These results of the MLA were achieved without prior calibration of scintillations as is needed for many statistical scintillation detection algorithms. We conclude that the MLA significantly improves the gamma camera performance compared to the MSA.
Inter-bit prediction based on maximum likelihood estimate for distributed video coding
Klepko, Robert; Wang, Demin; Huchet, Grégory
2010-01-01
Distributed Video Coding (DVC) is an emerging video coding paradigm for the systems that require low complexity encoders supported by high complexity decoders. A typical real world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base-stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on the maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bitplane at a time, starting from the most significant bit-plane. Results provided from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.
De Bernardi, Elisabetta; Faggiano, Elena; Zito, Felicia; Gerundini, Paolo; Baselli, Giuseppe
2009-07-01
A maximum likelihood (ML) partial volume effect correction (PVEC) strategy for the quantification of uptake and volume of oncological lesions in 18F-FDG positron emission tomography is proposed. The algorithm is based on the application of ML reconstruction on volumetric regional basis functions initially defined on a smooth standard clinical image and iteratively updated in terms of their activity and volume. The volume of interest (VOI) containing a previously detected region is segmented by a k-means algorithm in three regions: A central region surrounded by a partial volume region and a spill-out region. All volume outside the VOI (background with all other structures) is handled as a unique basis function and therefore "frozen" in the reconstruction process except for a gain coefficient. The coefficients of the regional basis functions are iteratively estimated with an attenuation-weighted ordered subset expectation maximization (AWOSEM) algorithm in which a 3D, anisotropic, space variant model of point spread function (PSF) is included for resolution recovery. The reconstruction-segmentation process is iterated until convergence; at each iteration, segmentation is performed on the reconstructed image blurred by the system PSF in order to update the partial volume and spill-out regions. The developed PVEC strategy was tested on sphere phantom studies with activity contrasts of 7.5 and 4 and compared to a conventional recovery coefficient method. Improved volume and activity estimates were obtained with low computational costs, thanks to blur recovery and to a better local approximation to ML convergence. PMID:19673203
A Fast Algorithm for Maximum Likelihood-based Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom;
2015-01-01
including a maximum likelihood (ML) approach. Unfortunately, the ML estimator has a very high computational complexity, and the more inaccurate, but faster correlation-based estimators are therefore often used instead. In this paper, we propose a fast algorithm for the evaluation of the ML cost function...
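The ML cost function this record refers to can be illustrated with the classic harmonic-summation approximation: for a periodic signal in white Gaussian noise, the ML cost over candidate fundamental frequencies is approximately the periodogram summed over the harmonics. The signal parameters, grid, and FFT size below are illustrative assumptions, and this brute-force grid search is precisely the expensive evaluation the paper's fast algorithm is designed to avoid:

```python
import numpy as np

rng = np.random.default_rng(4)

fs, n = 8000.0, 2048
t = np.arange(n) / fs
f0_true = 220.0
# Harmonic signal (three partials) in white Gaussian noise.
x = sum(np.cos(2 * np.pi * f0_true * k * t) for k in (1, 2, 3)) \
    + 0.5 * rng.standard_normal(n)

def f0_estimate(x, fs, f_grid, n_harm=3):
    """Approximate ML / harmonic-summation estimate: sum the
    periodogram over the first n_harm harmonics of each candidate."""
    nfft = 1 << 16
    spec = np.abs(np.fft.rfft(x, nfft)) ** 2
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    def cost(f0):
        bins = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harm + 1)]
        return spec[bins].sum()
    return max(f_grid, key=cost)

grid = np.arange(100.0, 400.0, 0.5)
f0_hat = f0_estimate(x, fs, grid)
```

Evaluating the cost over a dense grid costs one FFT plus work per candidate, which is what makes exact ML expensive and motivates the fast evaluation scheme proposed in the paper.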
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
Modified Maximum Likelihood Estimation from Censored Samples in Burr Type X Distribution
Directory of Open Access Journals (Sweden)
R.R.L. Kantam
2015-12-01
Full Text Available The two-parameter Burr type X distribution is considered and its scale parameter is estimated from a censored sample using the classical maximum likelihood method. The estimating equations are modified to obtain simpler and more efficient estimators. Two methods of modification are suggested. The small-sample efficiencies are presented.
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
Modeling Human Multimodal Perception and Control Using Genetic Maximum Likelihood Estimation
Zaal, P.M.T.; Pool, D.M.; Chu, Q.P.; Van Paassen, M.M.; Mulder, M.; Mulder, J.A.
2009-01-01
This paper presents a new method for estimating the parameters of multi-channel pilot models that is based on maximum likelihood estimation. To cope with the inherent nonlinearity of this optimization problem, the gradient-based Gauss-Newton algorithm commonly used to optimize the likelihood functio
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
DEFF Research Database (Denmark)
Scheike, Thomas Harder; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...
DEFF Research Database (Denmark)
Borkowski, Robert; Johannisson, Pontus; Wymeersch, Henk;
2014-01-01
We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR...
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
Finding Quantitative Trait Loci Genes with Collaborative Targeted Maximum Likelihood Learning.
Wang, Hui; Rose, Sherri; van der Laan, Mark J
2011-07-01
Quantitative trait loci mapping is focused on identifying the positions and effects of genes underlying an observed trait. We present a collaborative targeted maximum likelihood estimator in a semi-parametric model using a newly proposed 2-part super learning algorithm to find quantitative trait loci genes in Listeria data. Results are compared to the parametric composite interval mapping approach.
MLEP: an R package for exploring the maximum likelihood estimates of penetrance parameters
Directory of Open Access Journals (Sweden)
Sugaya Yuki
2012-08-01
Full Text Available Abstract Background Linkage analysis is a useful tool for detecting genetic variants that regulate a trait of interest, especially genes associated with a given disease. Although penetrance parameters play an important role in determining gene location, they are assigned arbitrary values according to the researcher’s intuition or as estimated by the maximum likelihood principle. Several methods exist by which to evaluate the maximum likelihood estimates of penetrance, although not all of these are supported by software packages and some are biased by marker genotype information, even when disease development is due solely to the genotype of a single allele. Findings Programs for exploring the maximum likelihood estimates of penetrance parameters were developed using the R statistical programming language supplemented by external C functions. The software returns a vector of polynomial coefficients of penetrance parameters, representing the likelihood of pedigree data. From the likelihood polynomial supplied by the proposed method, the likelihood value and its gradient can be precisely computed. To reduce the effect of the supplied dataset on the likelihood function, feasible parameter constraints can be introduced into maximum likelihood estimates, thus enabling flexible exploration of the penetrance estimates. An auxiliary program generates a perspective plot allowing visual validation of the model’s convergence. The functions are collectively available as the MLEP R package. Conclusions Linkage analysis using penetrance parameters estimated by the MLEP package enables feasible localization of a disease locus. This is shown through a simulation study and by demonstrating how the package is used to explore maximum likelihood estimates. Although the input dataset tends to bias the likelihood estimates, the method yields accurate results superior to the analysis using intuitive penetrance values for disease with low allele frequencies. MLEP is
Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation
Directory of Open Access Journals (Sweden)
Alejandro C. Frery
2004-12-01
Full Text Available This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the 𝒢⁰ law. This paper deals with amplitude data, so the 𝒢⁰A distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments and on order statistics) of the parameters of the 𝒢⁰A distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternate optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.
International Nuclear Information System (INIS)
In this paper we apply to gravitational waves (GW) from the inspiral phase of binary systems a recently derived frequentist methodology to calculate analytically the error for a maximum likelihood estimate of physical parameters. We use expansions of the covariance and the bias of a maximum likelihood estimate in terms of inverse powers of the signal-to-noise ratio (SNR), where the square root of the first order in the covariance expansion is the Cramér-Rao lower bound (CRLB). We evaluate the expansions, for the first time, for GW signals in the noises of GW interferometers. The examples are limited to a single, optimally oriented interferometer. We also compare the error estimates using the first two orders of the expansions with existing numerical Monte Carlo simulations. The first two orders of the covariance allow us to get error predictions closer to what is observed in numerical simulations than the CRLB. The methodology also predicts the SNR necessary to approximate the error with the CRLB and provides new insight into the relationship between waveform properties, SNR, dimension of the parameter space, and estimation errors. For example, matched-filter timing can achieve the CRLB only if the SNR is larger than the kurtosis of the gravitational wave spectrum, and the necessary SNR is much larger if other physical parameters are also unknown.
Machine learning approximation techniques using dual trees
Ergashbaev, Denis
2015-01-01
This master thesis explores a dual-tree framework as applied to a particular class of machine learning problems that are collectively referred to as generalized n-body problems. It builds a new algorithm on top of this framework and improves the existing Boosted OGE classifier.
Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks
Lin, Lin; Yang, Chengfeng; Ma, Maode
2015-01-01
Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks. PMID:26690173
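The two-way exchange and inverse-Gaussian delay model described above can be sketched numerically. This is a toy illustration, not the paper's closed-form estimator: the delay parameters, the offset value, and the grid-search maximizer are all assumptions made for the example.

```python
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): true clock offset and
# inverse-Gaussian propagation-delay parameters (scipy: mean = mu * scale).
theta_true = 0.3
mu, scale = 1.0, 0.2
n_rounds = 500

# Two-way message exchange: u = theta + forward delay, v = -theta + reverse delay.
d_fwd = invgauss.rvs(mu, scale=scale, size=n_rounds, random_state=rng)
d_rev = invgauss.rvs(mu, scale=scale, size=n_rounds, random_state=rng)
u = theta_true + d_fwd
v = -theta_true + d_rev

# Maximum-likelihood estimate of the offset by grid search over theta.
grid = np.linspace(-1.0, 1.0, 2001)

def loglik(theta):
    return (invgauss.logpdf(u - theta, mu, scale=scale).sum()
            + invgauss.logpdf(v + theta, mu, scale=scale).sum())

ll = np.array([loglik(t) for t in grid])
theta_hat = grid[np.argmax(ll)]
print(f"true offset {theta_true:.3f}, ML estimate {theta_hat:.3f}")
```

Because the inverse Gaussian has positive support, the log-likelihood is finite only on a narrow window of offsets around the truth, which is why even a plain grid search localizes the offset sharply here.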
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
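The Newton-Raphson iteration mentioned above can be illustrated on a simpler likelihood. The sketch below fits a toy logistic regression by repeated Newton-Raphson steps; it is an illustrative stand-in for the NDMMF's system of simultaneous equations, with all data simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated design matrix and binary responses (illustrative, not NDMMF data).
n, p = 400, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([-0.5, 1.0, 1.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

# Newton-Raphson: beta <- beta + (X'WX)^{-1} X'(y - mu), W = diag(mu(1 - mu)).
beta = np.zeros(p)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))           # fitted probabilities
    score = X.T @ (y - mu)                         # gradient of log-likelihood
    hess = X.T @ (X * (mu * (1.0 - mu))[:, None])  # observed information
    step = np.linalg.solve(hess, score)
    beta = beta + step                             # Newton-Raphson update
    if np.abs(step).max() < 1e-10:
        break

score_norm = np.abs(X.T @ (y - 1.0 / (1.0 + np.exp(-X @ beta)))).max()
print("max |score| at solution:", score_norm)
```

At convergence the score (gradient) vanishes, which is exactly the maximum-likelihood condition the report solves for the NDMMF parameters.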
International Nuclear Information System (INIS)
We present a new spectrum unfolding code, the Maximum Entropy and Maximum Likelihood Unfolding Code (MEALU), based on the maximum likelihood method combined with the maximum entropy method, which can determine a neutron spectrum without requiring an initial guess spectrum. The Normal or Poisson distributions can be used for the statistical distribution. MEALU can treat full covariance data for a measured detector response and response function. The algorithm was verified through an analysis of mock-up data and its performance was checked by applying it to measured data. The results for measured data from the Joyo experimental fast reactor were also compared with those obtained by the conventional J-log method for neutron spectrum adjustment. It was found that MEALU has potential advantages over conventional methods with regard to preparation of a priori information and uncertainty estimation. (author)
Combined simplified maximum likelihood and sphere decoding algorithm for MIMO system
Institute of Scientific and Technical Information of China (English)
ZHANG Lei; YUAN Ting-ting; ZHANG Xin; YANG Da-cheng
2008-01-01
In this article, a new system model for the sphere decoding (SD) algorithm is introduced. For the multiple-input multiple-output (MIMO) system, a simplified maximum likelihood (SML) decoding algorithm is proposed based on the new model. The SML algorithm achieves optimal maximum likelihood (ML) performance, and drastically reduces the complexity compared to the conventional SD algorithm. An improved algorithm is then presented by combining the sphere decoding algorithm based on the Schnorr-Euchner strategy (SE-SD) with the SML algorithm when the number of transmit antennas exceeds 2. Compared to conventional SD, the proposed algorithm has low complexity, especially at low signal-to-noise ratio (SNR). It is shown by simulation that the proposed algorithm has performance very close to that of conventional SD.
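For context, the baseline that SD and SML accelerate is the exhaustive ML detector, which scores every candidate symbol vector against the received signal. A minimal sketch for a 2x2 QPSK system follows; the channel, noise level, and transmitted symbols are illustrative values, not taken from the article.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Illustrative 2x2 MIMO link with QPSK symbols (unit average energy).
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
s_tx = np.array([qpsk[0], qpsk[3]])
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))
y = H @ s_tx + noise

# Exhaustive ML detection: minimize ||y - H s||^2 over all 4^2 candidates.
best, best_metric = None, np.inf
for cand in product(qpsk, repeat=2):
    s = np.array(cand)
    metric = np.linalg.norm(y - H @ s) ** 2
    if metric < best_metric:
        best, best_metric = s, metric

print("detected == transmitted:", np.allclose(best, s_tx))
```

The candidate count grows as (constellation size)^(number of antennas), which is the exponential cost that sphere decoding and the SML simplification avoid.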
Directory of Open Access Journals (Sweden)
Daniel L. Rabosky
2006-01-01
Full Text Available Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time to alternative models where rates have remained constant over time. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.
Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood
Directory of Open Access Journals (Sweden)
Jesser J. Marulanda-Durango
2012-12-01
Full Text Available In this paper, we present a methodology for estimating the parameters of a model of an electric arc furnace, using maximum likelihood estimation. Maximum likelihood estimation is one of the most widely employed methods for parameter estimation in practical settings. The model for the electric arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open source MATLAB® toolbox, for solving a set of non-linear algebraic equations that relate all the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken during the furnace's most critical operating point. We show how the model for the electric arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. Results obtained show a maximum error of 5% in the current's root-mean-square value.
Adapted Maximum-Likelihood Gaussian Models for Numerical Optimization with Continuous EDAs
Bosman, Peter; Grahl, J; Thierens, D.
2007-01-01
This article focuses on numerical optimization with continuous Estimation-of-Distribution Algorithms (EDAs). Specifically, the focus is on the use of one of the most common and best understood probability distributions: the normal distribution. We first give an overview of the existing research on this topic. We then point out a source of inefficiency in EDAs that make use of the normal distribution with maximum-likelihood (ML) estimates. Scaling the covariance matrix beyond its ML estimate d...
Determination of linear displacement by envelope detection with maximum likelihood estimation
International Nuclear Information System (INIS)
We demonstrate in this report an envelope detection technique with maximum likelihood estimation, in a least-squares sense, for determining displacement. The technique samples the amplitudes of quadrature signals resulting from a heterodyne interferometer, and a resolution of displacement measurement of the order of λ/10⁴ is experimentally verified. A phase unwrapping procedure is also described and experimentally demonstrated, showing that the unambiguous range of displacement can be extended beyond a single wavelength.
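The quadrature-sampling and phase-unwrapping steps can be sketched as follows. The phase-displacement relation φ = 4πx/λ assumed here (double-pass heterodyne geometry) and all signal parameters are illustrative choices, not taken from the report.

```python
import numpy as np

# Assumed (illustrative) phase-displacement relation: phi = 4*pi*x/lambda.
lam = 632.8e-9                       # HeNe wavelength, metres
t = np.linspace(0.0, 1.0, 4000)
x_true = 2.5e-6 * t + 0.3e-6 * np.sin(2 * np.pi * 5 * t)  # several wavelengths of travel

phi = 4 * np.pi * x_true / lam
I, Q = np.cos(phi), np.sin(phi)      # sampled quadrature amplitudes

# Wrapped phase from the quadrature pair, unwrapping to remove the
# single-wavelength ambiguity, then conversion back to displacement.
phi_wrapped = np.arctan2(Q, I)
phi_unwrapped = np.unwrap(phi_wrapped)
x_rec = phi_unwrapped * lam / (4 * np.pi)

print("max reconstruction error (m):", np.abs(x_rec - x_true).max())
```

Unwrapping succeeds as long as the sampled phase changes by less than π between samples, i.e. the displacement moves by less than λ/4 per sample in this geometry.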
Investigation of spectral statistics of nuclear systems by maximum likelihood estimation method
International Nuclear Information System (INIS)
In this paper, the maximum likelihood estimation technique is employed to study the spectral statistics of nuclear systems within the nearest-neighbor spacing distribution framework. Using the available empirical data, the spectral statistics of different sequences are analyzed. The ML-based estimates suggest more regular dynamics and also minimum uncertainties (variances very close to the CRLB) in comparison to other estimation methods. The efficiencies of the considered distribution functions are also examined, and the Brody distribution is found to yield the smallest CRLB.
Maximum likelihood drift estimation for the mixing of two fractional Brownian motions
Mishura, Yuliya
2015-01-01
We construct the maximum likelihood estimator (MLE) of the unknown drift parameter $\\theta\\in \\mathbb{R}$ in the linear model $X_t=\\theta t+\\sigma B^{H_1}(t)+B^{H_2}(t),\\;t\\in[0,T],$ where $B^{H_1}$ and $B^{H_2}$ are two independent fractional Brownian motions with Hurst indices $\\frac12
Bombrun, Lionel; Pascal, Frédéric; Tourneret, Jean-Yves; Berthoumieu, Yannick
2012-01-01
This paper studies the performance of the maximum likelihood estimators (MLE) for the parameters of multivariate generalized Gaussian distributions. When the shape parameter belongs to ]0,1[, we have proved that the scatter matrix MLE exists and is unique up to a scalar factor. After providing some elements about this proof, an estimation algorithm based on a Newton-Raphson recursion is investigated. Some experiments illustrate the convergence speed of this algorithm. The bias and consistency...
Preliminary application of maximum likelihood method in HL-2A Thomson scattering system
International Nuclear Information System (INIS)
A maximum likelihood method for processing the data of the HL-2A Thomson scattering system is presented. Using mathematical statistics, this method maximizes the likelihood that the theoretical data match the observed data, so that a more accurate result can be obtained. It has been proved applicable in comparison with the ratios method, and some of the drawbacks of the ratios method are absent in this new one. (authors)
Asymptotic Properties of Maximum Likelihood Estimates in the Mixed Poisson Model
Lambert, Diane; Tierney, Luke
1984-01-01
This paper considers the asymptotic behavior of the maximum likelihood estimators (mle's) of the probabilities of a mixed Poisson distribution with a nonparametric mixing distribution. The vector of estimated probabilities is shown to converge in probability to the vector of mixed probabilities at rate $n^{1/2-\\varepsilon}$ for any $\\varepsilon > 0$ under a generalized $\\chi^2$ distance function. It is then shown that any finite set of the mle's has the same joint limiting distribution as doe...
Koirala, Krishna H.; Mishra, Ashok K.; Mohanty, Samarendu
2014-01-01
This paper investigates the factors affecting rice production and the technical efficiency of rice farmers in the Philippines. Particular attention is given to the role of land ownership. We use the 2007-2012 Loop Survey from the International Rice Research Institute (IRRI) and a simulated maximum likelihood (SML) approach. Results show that land ownership plays an important role in rice production. In particular, compared to owner operators, farmers who lease land are less productive. Additionally, res...
A New Maximum Likelihood Approach for Free Energy Profile Construction from Molecular Simulations
Lee, Tai-Sung; Radak, Brian K.; Pabis, Anna; York, Darrin M.
2012-01-01
A novel variational method for construction of free energy profiles from molecular simulation data is presented. The variational free energy profile (VFEP) method uses the maximum likelihood principle applied to the global free energy profile based on the entire set of simulation data (e.g from multiple biased simulations) that spans the free energy surface. The new method addresses common obstacles in two major problems usually observed in traditional methods for estimating free energy surfa...
Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics
Prix, R.; Krishnan, B.
2009-01-01
We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as {\\cal F} -statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic ('{\\cal B} -statistic') u...
Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code
Adel Ahmadi; Siamak Talebi
2015-01-01
Motivated by the decompositions of sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block code (QOSTBC). The proposed algorithm with a relatively simple design exploits structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for ML metric which divides the metric in...
Further Simulation Evidence on the Performance of the Poisson Pseudo-Maximum Likelihood Estimator
Santos Silva, Joao; Tenreyro, Silvana
2009-01-01
We extend the simulation results given in Santos-Silva and Tenreyro (2006, ‘The Log of Gravity’, The Review of Economics and Statistics, 88, pp.641-658) by considering data generated as a finite mixture of gamma variates. Data generated in this way can naturally have a large proportion of zeros and is fully compatible with constant elasticity models such as the gravity equation. Our results confirm that the Poisson pseudo maximum likelihood estimator is generally well behaved.
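The estimator under discussion is easy to reproduce in miniature: the Poisson pseudo-ML estimate solves the Poisson score equations even when the data are continuous gamma mixtures, because only the conditional mean needs to be correctly specified. A toy sketch with simulated data (not the authors' experimental design):

```python
import numpy as np

rng = np.random.default_rng(3)

# Multiplicative model with a mean-one gamma disturbance: y is continuous,
# not Poisson, but E[y | x] = exp(x'beta) still holds (illustrative numbers).
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.8])
eta = rng.gamma(shape=2.0, scale=0.5, size=n)   # mean-one multiplicative error
y = np.exp(X @ beta_true) * eta

# PPML: solve the Poisson score equations X'(y - exp(X b)) = 0 by Newton.
beta = np.zeros(2)
for _ in range(50):
    mu = np.exp(X @ beta)
    step = np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    beta = beta + step
    if np.abs(step).max() < 1e-10:
        break

print("PPML estimate:", beta)
```

The recovered coefficients are close to the truth despite the non-Poisson data, which is the consistency property the simulation evidence in the paper examines.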
ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS
Institute of Scientific and Technical Information of China (English)
YUE LI; CHEN XIRU
2005-01-01
For the Generalized Linear Model (GLM), under some conditions including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) when the specification of the covariance matrix is correct.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Robust maximum likelihood estimation for stochastic state space model with observation outliers
AlMutawa, J.
2016-08-01
The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and to patches of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem and is hard to implement, but it is effective against both types of outliers presented here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.
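The trimmed-likelihood idea can be sketched in a much simpler setting than the state-space model: a Gaussian location estimate that iteratively refits on the best-fitting subset of observations. This concentration-step sketch illustrates the TMLE principle only; it is not the paper's algorithm, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)

# Clean Gaussian data with a block of additive outliers (illustrative).
n, n_out = 200, 20
x = rng.normal(loc=5.0, scale=1.0, size=n)
x[:n_out] = 50.0 + rng.normal(size=n_out)       # AO-style contamination

h = int(0.8 * n)                                # trimming level: keep 80%
mu = np.median(x)                               # robust starting value
for _ in range(20):
    resid2 = (x - mu) ** 2
    keep = np.argsort(resid2)[:h]               # the h best-fitting points
    mu_new = x[keep].mean()                     # Gaussian MLE on the subset
    if abs(mu_new - mu) < 1e-12:
        break
    mu = mu_new

print(f"plain mean {x.mean():.2f}, trimmed MLE {mu:.2f} (true 5.0)")
```

The plain mean is dragged far from the truth by the outlier block, while the trimmed fit stays close, mirroring the AO-robustness the paper seeks for the state-space MLE.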
Directory of Open Access Journals (Sweden)
Guido W. Grimm
2006-01-01
Full Text Available The multi-copy internal transcribed spacer (ITS) region of nuclear ribosomal DNA is widely used to infer phylogenetic relationships among closely related taxa. Here we use maximum likelihood (ML) and splits graph analyses to extract phylogenetic information from ~ 600 mostly cloned ITS sequences, representing 81 species and subspecies of Acer, and both species of its sister Dipteronia. Additional analyses compared sequence motifs in Acer and several hundred Anacardiaceae, Burseraceae, Meliaceae, Rutaceae, and Sapindaceae ITS sequences in GenBank. We also assessed the effects of using smaller data sets of consensus sequences with ambiguity coding (accounting for within-species variation) instead of the full (partly redundant) original sequences. Neighbor-nets and bipartition networks were used to visualize conflict among character state patterns. Species clusters observed in the trees and networks largely agree with morphology-based classifications; of de Jong's (1994) 16 sections, nine are supported in neighbor-net and bipartition networks, and ten by sequence motifs and the ML tree; of his 19 series, 14 are supported in networks, motifs, and the ML tree. Most nodes had higher bootstrap support with matrices of 105 or 40 consensus sequences than with the original matrix. Within-taxon ITS divergence did not differ between diploid and polyploid Acer, and there was little evidence of differentiated parental ITS haplotypes, suggesting that concerted evolution in Acer acts rapidly.
Tree-space statistics and approximations for large-scale analysis of anatomical trees
DEFF Research Database (Denmark)
Feragen, Aasa; Owen, Megan; Petersen, Jens;
2013-01-01
parametrize the relevant parts of tree-space well. Using the developed approximate statistics, we illustrate how the structure and geometry of airway trees vary across a population and show that airway trees with Chronic Obstructive Pulmonary Disease come from a different distribution in tree-space than...
Institute of Scientific and Technical Information of China (English)
Kazi Takpaya; Wei Gang
2003-01-01
Blind identification-blind equalization for Finite Impulse Response (FIR) Multiple Input-Multiple Output (MIMO) channels can be reformulated as a problem of blind source separation. It has been shown that blind identification via the decorrelating sub-channels method can recover the input sources. The Blind Identification via Decorrelating Sub-channels (BIDS) algorithm first constructs a set of decorrelators, which decorrelate the output signals of the sub-channels, then estimates the channel matrix using the transfer functions of the decorrelators, and finally recovers the input signal using the estimated channel matrix. In this paper, a new approximation of the input source for FIR-MIMO channels based on the maximum likelihood source separation method is proposed. The proposed method outperforms BIDS in the presence of additive white Gaussian noise.
Tree-fold loop approximation of AMD
Energy Technology Data Exchange (ETDEWEB)
Ono, Akira [Tohoku Univ., Sendai (Japan). Faculty of Science
1997-05-01
AMD (antisymmetrized molecular dynamics) is a framework that describes the wave function of a nucleon many-body system by a Slater determinant of Gaussian wave packets, and a theory for describing in a unified way a wide range of nuclear reactions, such as intermediate-energy heavy-ion reactions and nucleon-induced reactions. The aim of this study is to derive an approximate expression for the expectation value ν of the correlation that can be computed in a time proportional to A³ (or lower), so as to make AMD applicable to heavier systems such as Au+Au. Since the characteristic features of AMD must not be broken, only the ν-value is approximated. For this approximation to be meaningful, however, its error has to be sufficiently small in comparison with the binding energy of the atomic nucleus, i.e., smaller than 1 MeV/nucleon. As the absolute expectation value of the correlation may be larger than 50 MeV/nucleon, the approximation is required to be accurate to within 2 percent. (G.K.)
International Nuclear Information System (INIS)
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree or may leave it out completely
Maximum likelihood based multi-channel isotropic reverberation reduction for hearing aids
DEFF Research Database (Denmark)
Kuklasiński, Adam; Doclo, Simon; Jensen, Søren Holdt;
2014-01-01
We propose a multi-channel Wiener filter for speech dereverberation in hearing aids. The proposed algorithm uses joint maximum likelihood estimation of the speech and late reverberation spectral variances, under the assumption that the late reverberant sound field is cylindrically isotropic. The dereverberation performance of the algorithm is evaluated using computer simulations with realistic hearing aid microphone signals including head-related effects. The algorithm is shown to work well with signals reverberated both by synthetic and by measured room impulse responses, achieving improvements...
Maximum likelihood difference scaling of image quality in compression-degraded images.
Charrier, Christophe; Maloney, Laurence T; Cherifi, Hocine; Knoblauch, Kenneth
2007-11-01
Lossy image compression techniques allow arbitrarily high compression rates but at the price of poor image quality. We applied maximum likelihood difference scaling to evaluate image quality of nine images, each compressed via vector quantization to ten different levels, within two different color spaces, RGB and CIE 1976 L*a*b*. In L*a*b* space, images could be compressed on average by 32% more than in RGB space, with little additional loss in quality. Further compression led to marked perceptual changes. Our approach permits a rapid, direct measurement of the consequences of image compression for human observers.
Maximum-Likelihood Approach to Topological Charge Fluctuations in Lattice Gauge Theory
Brower, R C; Fleming, G T; Lin, M F; Neil, E T; Osborn, J C; Rebbi, C; Rinaldi, E; Schaich, D; Schroeder, C; Voronov, G; Vranas, P; Weinberg, E; Witzel, O
2014-01-01
We present a novel technique for the determination of the topological susceptibility (related to the variance of the distribution of global topological charge) from lattice gauge theory simulations, based on maximum-likelihood analysis of the Markov-chain Monte Carlo time series. This technique is expected to be particularly useful in situations where relatively few tunneling events are observed. Restriction to a lattice subvolume on which topological charge is not quantized is explored, and may lead to further improvement when the global topology is poorly sampled. We test our proposed method on a set of lattice data, and compare it to traditional methods.
Community detection in networks: Modularity optimization and maximum likelihood are equivalent
Newman, M E J
2016-01-01
We demonstrate an exact equivalence between two widely used methods of community detection in networks, the method of modularity maximization in its generalized form which incorporates a resolution parameter controlling the size of the communities discovered, and the method of maximum likelihood applied to the special case of the stochastic block model known as the planted partition model, in which all communities in a network are assumed to have statistically similar properties. Among other things, this equivalence provides a mathematically principled derivation of the modularity function, clarifies the conditions and assumptions of its use, and gives an explicit formula for the optimal value of the resolution parameter.
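The generalized modularity with resolution parameter γ that the paper connects to planted-partition maximum likelihood can be computed directly from an adjacency matrix. A small sketch on a toy graph (two 4-node cliques joined by one bridging edge, γ = 1; all choices illustrative):

```python
import numpy as np

# Generalized modularity Q(gamma) = (1/2m) sum_ij [A_ij - gamma k_i k_j / 2m] delta(c_i, c_j).
def modularity(A, labels, gamma=1.0):
    k = A.sum(axis=1)
    two_m = A.sum()
    same = labels[:, None] == labels[None, :]
    return ((A - gamma * np.outer(k, k) / two_m) * same).sum() / two_m

# Adjacency matrix: nodes 0-3 and 4-7 form cliques; edge (3, 4) bridges them.
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0

good = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # the planted communities
bad = np.array([0, 1, 0, 1, 0, 1, 0, 1])    # an arbitrary split

print(modularity(A, good), modularity(A, bad))
```

The planted split scores much higher than the arbitrary one; per the paper's equivalence, maximizing this quantity at the appropriate γ is the same computation as maximizing the planted-partition model's profile likelihood.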
International Nuclear Information System (INIS)
A study of maximum likelihood reconstruction by the EM algorithm (ML), with a reconstruction kernel equal to the intrinsic detector resolution and sieve regularization, has demonstrated that any image improvements over filtered backprojection (FBP) are a function of image resolution. Comparing different reconstruction algorithms potentially requires measuring and matching the image resolution. Since there are no standard methods for describing the resolution of images from a nonlinear algorithm such as ML, the authors have defined measures of effective local Gaussian resolution (ELGR) and effective global Gaussian resolution (EGGR) and examined their behaviour in FBP images and in ML images using two different measurement techniques. (Author)
Directory of Open Access Journals (Sweden)
Lester L. Yuan
2007-06-01
Full Text Available This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML) methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from only biological data, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than more commonly used weighted averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.
Directory of Open Access Journals (Sweden)
Maja Olsbjerg
2015-10-01
Full Text Available Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.
Singh, Harpreet; Arvind; Dorai, Kavita
2016-09-01
Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation.
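A common symptom of naive reconstruction is a density matrix with negative eigenvalues. The sketch below shows a simple positivity repair, clipping negative eigenvalues and renormalizing the trace; this is a generic heuristic for illustration and is not necessarily the maximum-likelihood protocol implemented in the paper.

```python
import numpy as np

# Repair an unphysical density-matrix estimate: enforce Hermiticity,
# clip negative eigenvalues to zero, and renormalize to unit trace.
def make_physical(rho):
    rho = 0.5 * (rho + rho.conj().T)            # enforce Hermiticity
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0.0, None)             # remove negative eigenvalues
    rho = (vecs * vals) @ vecs.conj().T
    return rho / np.trace(rho).real             # renormalize to trace one

# An unphysical "measured" single-qubit matrix (illustrative numbers):
# it has unit trace but a negative eigenvalue (its determinant is negative).
rho_raw = np.array([[0.65, 0.55], [0.55, 0.35]], dtype=complex)
rho = make_physical(rho_raw)

print("eigenvalues:", np.linalg.eigvalsh(rho), "trace:", np.trace(rho).real)
```

The repaired matrix is Hermitian, positive semidefinite, and unit trace, i.e. a valid quantum state, which is the physical-acceptability condition the abstract emphasizes.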
Recent developments in maximum likelihood estimation of MTMM models for categorical data
Directory of Open Access Journals (Sweden)
Minjeong eJeon
2014-04-01
Full Text Available Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are suitable for estimating MTMM models with categorical responses: Variational maximization-maximization, Alternating imputation posterior, and Monte Carlo local likelihood. Each method is briefly described, and its applicability to MTMM models with categorical data is discussed. An illustration is provided using an empirical example.
Maximum-Likelihood Calibration of an X-ray Computed Tomography System
Moore, Jared W.; Van Holen, Roel; Barrett, Harrison H.; Furenlid, Lars R.
2010-01-01
We present a maximum-likelihood (ML) method for calibrating the geometrical parameters of an x-ray computed tomography (CT) system. This method makes use of the full image data and not a reduced set of data. This algorithm is particularly useful for CT systems that change their geometry during the CT acquisition, such as an adaptive CT scan. Our ML search method uses a contracting-grid algorithm that does not require initial starting values to perform its estimate, thus avoiding problems asso...
A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.
Energy Technology Data Exchange (ETDEWEB)
Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V.
2014-09-01
In this paper, we derive a new optimal change metric to be used in synthetic aperture radar (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.
Genetic algorithm-based wide-band deterministic maximum likelihood direction finding algorithm
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
Wide-band direction finding is one of the hot and difficult problems in array signal processing. This paper generalizes the narrow-band deterministic maximum likelihood direction finding algorithm to the wide-band case and constructs a corresponding objective function, which is then optimized globally with a genetic algorithm. The direction of arrival is estimated without preprocessing of the array data, so the algorithm eliminates the effect of pre-estimation on the final result. The algorithm is applied to a uniform linear array, and extensive simulation results demonstrate its efficacy. From the simulations we also obtain the relation between the estimation error and the parameters of the genetic algorithm.
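The recipe in the abstract above (build an ML objective function, then search it globally with a genetic algorithm) can be sketched with a toy real-coded GA. The multimodal objective below is a stand-in, not the paper's wide-band DML criterion, and the operator choices and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Stand-in multimodal function with a global maximum near x = 1
    # and a lower local maximum near x = -2 (NOT the paper's DML cost).
    return np.sinc(x - 1.0) + 0.5 * np.sinc(2.0 * (x + 2.0))

def genetic_maximize(f, lo, hi, pop=60, gens=120, mut=0.1):
    """Minimal real-coded GA: tournament selection, arithmetic
    crossover, Gaussian mutation clipped to the search interval."""
    x = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        fit = f(x)
        # Tournament selection: keep the fitter of random pairs.
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where(fit[i] > fit[j], x[i], x[j])
        # Arithmetic crossover between shuffled parent pairs.
        mates = rng.permutation(parents)
        w = rng.uniform(0.0, 1.0, pop)
        x = w * parents + (1.0 - w) * mates
        # Gaussian mutation keeps the population exploring.
        x = np.clip(x + mut * rng.normal(size=pop), lo, hi)
    return x[np.argmax(f(x))]

best = genetic_maximize(objective, -5.0, 5.0)
```

The same loop applies unchanged to a vector of arrival angles once `objective` is replaced by the (expensive, multimodal) deterministic ML cost.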
Maximum likelihood estimation based on type-i hybrid progressive censored competing risks data
Directory of Open Access Journals (Sweden)
Samir Ashour
2016-03-01
Full Text Available This paper is concerned with estimation problems for the generalized Weibull distribution based on a Type-I progressive hybrid censoring scheme (Type-I PHCS) in the presence of competing risks, when the cause of failure of each item is known. Maximum likelihood estimates and the corresponding Fisher information matrix are obtained. We generalize the results of Kundu and Joarder [7] for the exponential distribution, while the corresponding results for the generalized exponential and Weibull distributions follow as special cases. A real data set is used to illustrate the theoretical results.
IM3SHAPE: A maximum-likelihood galaxy shear measurement code for cosmic gravitational lensing
Zuntz, Joe; Voigt, Lisa; Hirsch, Michael; Rowe, Barnaby; Bridle, Sarah
2013-01-01
We present and describe im3shape, a new publicly available galaxy shape measurement code for weak gravitational lensing shear. im3shape performs a maximum likelihood fit of a bulge-plus-disc galaxy model to noisy images, incorporating an applied point spread function. We detail challenges faced and choices made in its design and implementation, and then discuss various limitations that affect this and other maximum likelihood methods. We assess the bias arising from fitting an incorrect galaxy model using simple noise-free images and find that it should not be a concern for current cosmic shear surveys. We test im3shape on the GREAT08 Challenge image simulations and, using a simple correction for image noise bias, meet the requirements for upcoming cosmic shear surveys in the case that the simulations are encompassed by the fitted model. For the fiducial branch of GREAT08 we obtain a negligible additive shear bias and a multiplicative bias below the two percent level, which is suitable for analysis of current surveys...
Maximum-Likelihood Semiblind Equalization of Doubly Selective Channels Using the EM Algorithm
Directory of Open Access Journals (Sweden)
Gideon Kutz
2010-01-01
Full Text Available Maximum-likelihood semi-blind joint channel estimation and equalization for doubly selective channels and single-carrier systems is proposed. We model the doubly selective channel as an FIR filter in which each filter tap is a linear combination of basis functions. This channel description is then integrated in an iterative scheme based on the expectation-maximization (EM) principle that converges to an estimate of the channel description vector. We discuss the selection of the basis functions and compare various function sets. To alleviate the problem of convergence to a local maximum, we propose an initialization scheme for the EM iterations based on a small number of pilot symbols. We further derive a pilot positioning scheme targeted at reducing the probability of convergence to a local maximum. Our pilot positioning analysis reveals that for high Doppler rates it is better to spread the pilots evenly throughout the data block (and not to group them), even for frequency-selective channels. The resulting equalization algorithm is shown to be superior to previously proposed equalization schemes and to perform in many cases close to the maximum-likelihood equalizer with perfect channel knowledge. Our proposed method is also suitable for coded systems and as a building block for Turbo equalization algorithms.
Directory of Open Access Journals (Sweden)
Lorentz JÄNTSCHI
2009-12-01
Full Text Available Aim: The paper aims to investigate the use of maximum likelihood estimation to infer measurement types together with their distribution shapes. Material and Methods: A series of twenty-eight sets of observed data (different properties and activities) were studied. The following analyses were applied in order to meet the aim of the research: precision, normality (Chi-square, Kolmogorov-Smirnov, and Anderson-Darling tests), the presence of outliers (Grubbs’ test), estimation of the population parameters (maximum likelihood estimation under Laplace, Gauss, and Gauss-Laplace distribution assumptions), and analysis of kurtosis (departure of the sample kurtosis from the Laplace, Gauss, and Gauss-Laplace population kurtosis). Results: The mean of most investigated sets was likely to be Gauss-Laplace, while the standard deviation of most investigated sets was likely to be Gauss. The MLE analysis allowed making assumptions regarding the type of errors in the investigated sets. Conclusions: The proposed procedure proved useful in analyzing the shape of the distribution according to measurement type and generated several assumptions regarding their association.
Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties
International Nuclear Information System (INIS)
The fitting of data by χ2 minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. We have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. We compare this method with a χ2-minimization routine applied to both simulated and real Thomson scattering data. Differences in the returned fits are greater at low signal level (less than ∼10 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers. copyright 1997 American Institute of Physics
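The Poisson maximum-likelihood fit described above can be sketched as follows: instead of minimizing χ², we minimize the Poisson negative log-likelihood of the observed counts under a parameterized fit function. The Gaussian-peak-plus-flat-background model, the simulated counts, and all names below are illustrative assumptions, not the diagnostic's actual fit function:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)

# Simulated low-count spectrum: Gaussian line on a flat background.
x = np.linspace(-5, 5, 50)

def model(theta, x):
    amp, bkg = theta
    return bkg + amp * np.exp(-0.5 * x**2)

true = (6.0, 2.0)                       # illustrative "true" parameters
counts = rng.poisson(model(true, x))

def neg_log_likelihood(theta):
    mu = model(theta, x)
    if np.any(mu <= 0):
        return np.inf                   # reject non-physical rates
    # Sum of Poisson log-probabilities over all channels.
    return -np.sum(counts * np.log(mu) - mu - gammaln(counts + 1))

fit = minimize(neg_log_likelihood, x0=(1.0, 1.0), method="Nelder-Mead")
amp_hat, bkg_hat = fit.x
```

The `gammaln` term is constant in `theta` and could be dropped; it is kept so the returned value is the full negative log-likelihood.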
Maximum likelihood-based analysis of photon arrival trajectories in single-molecule FRET
International Nuclear Information System (INIS)
Highlights: ► We study model selection and parameter recovery from single-molecule FRET experiments. ► We examine the maximum likelihood-based analysis of two-color photon trajectories. ► The number of observed photons determines the performance of the method. ► For long trajectories, one can extract mean dwell times that are comparable to inter-photon times. -- Abstract: When two fluorophores (donor and acceptor) are attached to an immobilized biomolecule, anti-correlated fluctuations of the donor and acceptor fluorescence caused by Förster resonance energy transfer (FRET) report on the conformational kinetics of the molecule. Here we assess the maximum likelihood-based analysis of donor and acceptor photon arrival trajectories as a method for extracting the conformational kinetics. Using computer generated data we quantify the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We find that the number of observed photons is the key parameter determining parameter estimation and model selection. For long trajectories, one can extract mean dwell times that are comparable to inter-photon times.
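The AIC/BIC model-selection step assessed in the abstract above can be illustrated on a toy dwell-time problem: both criteria penalize the maximized log-likelihood by the number of free parameters (AIC by 2k, BIC by k ln n). The exponential-versus-gamma comparison below is an illustrative stand-in for the paper's kinetic models:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated inter-event dwell times from a single-exponential process.
data = rng.exponential(scale=2.0, size=500)
n = data.size

def aic_bic(logL, k, n):
    """Akaike and Bayesian information criteria (lower is better)."""
    return 2 * k - 2 * logL, k * np.log(n) - 2 * logL

# Model 1: exponential, one free parameter (scale), location fixed at 0.
scale = data.mean()                       # MLE of the exponential scale
logL1 = stats.expon(scale=scale).logpdf(data).sum()

# Model 2: gamma, two free parameters (shape, scale), location fixed at 0.
a, _, sc = stats.gamma.fit(data, floc=0)
logL2 = stats.gamma(a, scale=sc).logpdf(data).sum()

aic1, bic1 = aic_bic(logL1, 1, n)
aic2, bic2 = aic_bic(logL2, 2, n)
```

Because the gamma model nests the exponential one, `logL2` can only improve on `logL1`; the criteria ask whether that improvement justifies the extra parameter, with BIC penalizing it more heavily than AIC for n > e².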
International Nuclear Information System (INIS)
Algorithms that calculate maximum likelihood (ML) and maximum a posteriori solutions using expectation-maximization have been successfully applied to SPECT and PET. These algorithms are appealing because of their solid theoretical basis and their guaranteed convergence. A major drawback is the slow convergence, which results in long processing times. This paper presents two new heuristic acceleration methods for maximum likelihood reconstruction of ECT images. The first method incorporates a frequency-dependent amplification in the calculations, to compensate for the low pass filtering of the back projection operation. In the second method, an amplification factor is incorporated that suppresses the effect of attenuation on the updating factors. Both methods are compared to the one-dimensional line search method proposed by Lewitt. All three methods accelerate the ML algorithm. On the test images, Lewitt's method produced the strongest acceleration of the three individual methods. However, the combination of the frequency amplification with the line search method results in a new algorithm with still better performance. Under certain conditions, an effective frequency amplification can be already achieved by skipping some of the calculations required for ML
Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood
Directory of Open Access Journals (Sweden)
Jesser J. Marulanda-Durango
2012-12-01
Full Text Available In this paper, we present a methodology for estimating the parameters of a model of an electric arc furnace by using maximum likelihood estimation, one of the most widely employed methods for parameter estimation in practical settings. The model of the electric arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open source MATLAB® toolbox, for solving the set of non-linear algebraic equations that relate the parameters to be estimated. Results obtained through simulation of the model in PSCADTM are contrasted against real measurements taken during the furnace's most critical operating point. We show how the model of the electric arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. Results obtained show a maximum error of 5% in the root mean square value of the current.
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
Institute of Scientific and Technical Information of China (English)
Anonymous
2004-01-01
[1] McCullagh, P., Nelder, J. A., Generalized Linear Models, New York: Chapman and Hall, 1989. [2] Wedderburn, R. W. M., Quasi-likelihood functions, generalized linear models and the Gauss-Newton method, Biometrika, 1974, 61: 439-447. [3] Fahrmeir, L., Maximum likelihood estimation in misspecified generalized linear models, Statistics, 1990, 21: 487-502. [4] Fahrmeir, L., Kaufmann, H., Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models, Ann. Statist., 1985, 13: 342-368. [5] Nelder, J. A., Pregibon, D., An extended quasi-likelihood function, Biometrika, 1987, 74: 221-232. [6] Bennett, G., Probability inequalities for the sum of independent random variables, JASA, 1962, 57: 33-45. [7] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974. [8] Petrov, V. V., Sums of Independent Random Variables, Berlin, New York: Springer-Verlag, 1975.
Generalized Maximum likelihood Algorithm for Direction-of-Arrival Estimation of Coherent Sources
Institute of Scientific and Technical Information of China (English)
WANG Bu-hong; WANG Yong-liang; CHEN Hui; GUO Ying
2006-01-01
The generalized maximum likelihood (GML) algorithm for direction-of-arrival estimation is proposed. Firstly, a new data model is established based on generalized steering vectors and a generalized array manifold matrix. The GML algorithm is then formulated in detail. It is flexible in the sense that the arriving sources may be a mixture of multiple clusters of coherent sources, the array geometry is unrestricted, and the number of sources resolved can be larger than the number of sensors. Secondly, a comparison between the GML algorithm and the conventional deterministic maximum likelihood (DML) algorithm is presented based on their respective geometrical interpretations. Subsequently, the estimation consistency of GML is proved and its estimation variance is derived. It is concluded that the performance of the GML algorithm coincides with that of the DML algorithm in the incoherent-source case, while it improves greatly in the coherent-source case. GML is realized using a genetic algorithm, and the simulation results illustrate its improved performance compared with DML, especially in the case of multiple clusters of coherent sources.
A maximum likelihood approach to estimating articulator positions from speech acoustics
Energy Technology Data Exchange (ETDEWEB)
Hogden, J.
1996-09-23
This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read. 34 refs., 7 figs.
Directory of Open Access Journals (Sweden)
Wang Huai-Chun
2009-09-01
Full Text Available Abstract Background The covarion hypothesis of molecular evolution holds that selective pressures on a given amino acid or nucleotide site are dependent on the identity of other sites in the molecule that change throughout time, resulting in changes of evolutionary rates of sites along the branches of a phylogenetic tree. At the sequence level, covarion-like evolution at a site manifests as conservation of nucleotide or amino acid states among some homologs where the states are not conserved in other homologs (or groups of homologs. Covarion-like evolution has been shown to relate to changes in functions at sites in different clades, and, if ignored, can adversely affect the accuracy of phylogenetic inference. Results PROCOV (protein covarion analysis is a software tool that implements a number of previously proposed covarion models of protein evolution for phylogenetic inference in a maximum likelihood framework. Several algorithmic and implementation improvements in this tool over previous versions make computationally expensive tree searches with covarion models more efficient and analyses of large phylogenomic data sets tractable. PROCOV can be used to identify covarion sites by comparing the site likelihoods under the covarion process to the corresponding site likelihoods under a rates-across-sites (RAS process. Those sites with the greatest log-likelihood difference between a 'covarion' and an RAS process were found to be of functional or structural significance in a dataset of bacterial and eukaryotic elongation factors. Conclusion Covarion models implemented in PROCOV may be especially useful for phylogenetic estimation when ancient divergences between sequences have occurred and rates of evolution at sites are likely to have changed over the tree. It can also be used to study lineage-specific functional shifts in protein families that result in changes in the patterns of site variability among subtrees.
Loveday, J; Baldry, I K; Bland-Hawthorn, J; Brough, S; Brown, M J I; Driver, S P; Kelvin, L S; Phillipps, S
2015-01-01
We describe modifications to the joint stepwise maximum likelihood method of Cole (2011) in order to simultaneously fit the GAMA-II galaxy luminosity function (LF), corrected for radial density variations, and its evolution with redshift. The whole sample is reasonably well-fit with luminosity (Qe) and density (Pe) evolution parameters Qe, Pe = 1.0, 1.0 but with significant degeneracies characterized by Qe = 1.4 - 0.4Pe. Blue galaxies exhibit larger luminosity density evolution than red galaxies, as expected. We present the evolution-corrected r-band LF for the whole sample and for blue and red sub-samples, using both Petrosian and Sersic magnitudes. Petrosian magnitudes miss a substantial fraction of the flux of de Vaucouleurs profile galaxies: the Sersic LF is substantially higher than the Petrosian LF at the bright end.
A New Maximum-Likelihood Technique for Reconstructing Cosmic-Ray Anisotropy at All Angular Scales
Ahlers, Markus; Desiati, Paolo; Díaz-Vélez, Juan Carlos; Fiorino, Daniel W; Westerhoff, Stefan
2016-01-01
The arrival directions of TeV-PeV cosmic rays show weak but significant anisotropies with relative intensities at the level of one per mille. Due to the smallness of the anisotropies, quantitative studies require careful disentanglement of detector effects from the observation. We discuss an iterative maximum-likelihood reconstruction that simultaneously fits cosmic ray anisotropies and detector acceptance. The method does not rely on detector simulations and provides an optimal anisotropy reconstruction for ground-based cosmic ray observatories located in the middle latitudes. It is particularly well suited to the recovery of the dipole anisotropy, which is a crucial observable for the study of cosmic ray diffusion in our Galaxy. We also provide general analysis methods for recovering large- and small-scale anisotropies that take into account systematic effects of the observation by ground-based detectors.
A CODEBOOK COMPENSATIVE VOICE MORPHING ALGORITHM BASED ON MAXIMUM LIKELIHOOD ESTIMATION
Institute of Scientific and Technical Information of China (English)
Xu Ning; Yang Zhen; Zhang Linhua
2009-01-01
Full Text Available This paper presents an improved voice morphing algorithm based on a Gaussian Mixture Model (GMM), which overcomes the traditional approach's over-smoothing of the converted spectra and discontinuities between frames. Firstly, maximum likelihood estimation of the model is introduced to alleviate the inversion of high-dimension matrices required by the traditional conversion function. Then, in order to resolve the two problems associated with the baseline, a codebook compensation technique and a time-domain median filter are applied. Listening evaluations show that the quality of the speech converted by the proposed method is significantly better than that of the traditional GMM method: the Mean Opinion Score (MOS) of the converted speech improves from 2.5 to 3.1, and the ABX score from 38% to 75%.
A maximum likelihood analysis of the CoGeNT public dataset
Kelso, Chris
2016-06-01
The CoGeNT detector, located in the Soudan Underground Laboratory in northern Minnesota, consists of a 475 gram (330 gram fiducial) p-type point-contact germanium target that measures the ionization charge created by nuclear recoils. This detector has searched for recoils created by dark matter since December of 2009. We analyze the public dataset from the CoGeNT experiment to search for evidence of dark matter interactions with the detector. We perform an unbinned maximum likelihood fit to the data and compare the significance of different WIMP hypotheses relative to each other and to the null hypothesis of no WIMP interactions. This work presents the current status of the analysis.
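An unbinned maximum-likelihood fit with a likelihood-ratio comparison against the null hypothesis, as described above, can be sketched on toy data. The flat-background-plus-falling-exponential spectrum and every parameter below are illustrative assumptions, not CoGeNT's actual signal and background models:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

# Toy event "energies" on [0, 5]: flat background plus an
# exponentially falling signal component (illustrative only).
E_max = 5.0
n_bkg, n_sig = 900, 100
energies = np.concatenate([
    rng.uniform(0, E_max, n_bkg),
    rng.exponential(1.0, 2 * n_sig),   # oversample, then cut to range
])
energies = energies[energies < E_max][: n_bkg + n_sig]

def neg_logL(f_sig):
    """Unbinned negative log-likelihood for the signal fraction f_sig."""
    sig_pdf = np.exp(-energies) / (1 - np.exp(-E_max))  # truncated expon.
    bkg_pdf = 1.0 / E_max                               # flat background
    return -np.sum(np.log(f_sig * sig_pdf + (1 - f_sig) * bkg_pdf))

fit = minimize_scalar(neg_logL, bounds=(0.0, 1.0), method="bounded")
f_hat = fit.x
# Likelihood-ratio test statistic against the no-signal null hypothesis.
q0 = 2 * (neg_logL(0.0) - neg_logL(f_hat))
```

Note the fit runs over individual event energies with no binning, which is what "unbinned" means here; `q0` is the quantity whose distribution under the null sets the significance.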
Using maximum likelihood method to detect adaptive evolution of HCV envelope protein-coding genes
Institute of Scientific and Technical Information of China (English)
ZHANG Wenjuan; ZHANG Yuan; ZHONG Yang
2006-01-01
The nonsynonymous-to-synonymous substitution rate ratio (dN/dS) is an important measure for evaluating selective pressure based on protein-coding sequences. The maximum likelihood (ML) method with codon-substitution models is a powerful statistical tool for detecting amino acid sites under positive selection and adaptive evolution. We analyzed hepatitis C virus (HCV) envelope protein-coding sequences from 18 general geno/subtypes worldwide and found 4 amino acid sites under positive selection. Since these sites are located in different immune epitopes, it is reasonable to anticipate that our study has potential value in biomedicine. It also suggests that the ML method is an effective way to detect adaptive evolution in virus proteins with relatively high genetic diversity.
Comparison of the least squares and the maximum likelihood estimators for gamma-spectrometry
International Nuclear Information System (INIS)
A comparison of the characteristics of the maximum likelihood (ML) and least squares (LS) estimators of nuclide activities for low-intensity scintillation γ-spectra has been carried out by computer simulation. It has been found that some of the LS estimators give biased activity estimates, and the bias grows with the multichannel analyzer resolution (the number of spectrum channels). Such bias leads to a significant deterioration of the estimation accuracy for low-intensity spectra; consequently, the threshold of nuclide detection rises by a factor of 2-10 in comparison with the ML estimator. It has been shown that the ML estimator and a special LS estimator provide unbiased estimates of nuclide activities. Thus, these estimators are optimal for practical application to low-intensity spectrometry. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)
International Nuclear Information System (INIS)
Interferometers accurately measure the difference between two wavefronts, one from a reference surface and the other from an unknown surface. If the reference surface is near perfect or is accurately known from some other test, then the shape of the unknown surface can be determined. We investigate the case where neither the reference surface nor the surface under test is well known. By making multiple shear measurements where both surfaces are translated and/or rotated, we obtain sufficient information to reconstruct the figure of both surfaces with a maximum likelihood reconstruction method. The method is demonstrated for the measurement of a 1.6 m flat mirror to 2 nm rms, using a smaller reference mirror that had significant figure error.
Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics
International Nuclear Information System (INIS)
We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as F-statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic ('B-statistic') using the Bayes factor with a more natural amplitude prior, namely an isotropic probability distribution for the orientation of GW sources. Monte Carlo simulations of targeted searches show that the resulting Bayesian B-statistic is more powerful in the Neyman-Pearson sense (i.e., has a higher expected detection probability at equal false-alarm probability) than the frequentist F-statistic.
Noise propagation in SPECT images reconstructed using an iterative maximum-likelihood algorithm
International Nuclear Information System (INIS)
The effects of photon noise in the emission projection data and uncertainty in the attenuation map on the image noise in attenuation-corrected SPECT images reconstructed using a maximum-likelihood expectation-maximization algorithm were investigated. Emission projection data of a physical Hoffman brain phantom and a thorax-like phantom were acquired from a prototype emission-transmission computed tomography (ETCT) scanner being developed at UCSF (University of California at San Francisco). Computer-simulated emission projection data from a head-like phantom and a thorax-like phantom were also obtained using a fan-beam geometry consistent with the ETCT system. The results are expected to be generally applicable to other emission-transmission systems, including those using external radionuclide sources for the acquisition of attenuation maps. (author)
Using Maximum Likelihood analysis in HBT interferometry: bin-free treatment of correlated errors
International Nuclear Information System (INIS)
We present a new procedure, based on the Maximum Likelihood Method, for fitting the space-time size parameters of the particle production region in ultra-relativistic heavy ion collisions. This procedure offers two significant advantages: 1) it does not require sorting of the correlation data into arbitrary bins in the multidimensional momentum space and 2) it applies all available information on the experimental resolution error matrix separately to each correlated particle multiplet analyzed. These features permit extraction of maximum information from the data. The technique may be particularly important in ultra-relativistic heavy ion collisions, because in this energy domain large source radii and long source lifetimes are expected, and high-multiplicity HBT interferometry with a single collision event is a possibility. ((orig.))
Maximum Likelihood Timing and Carrier Synchronization in Burst-Mode Satellite Transmissions
Directory of Open Access Journals (Sweden)
Morelli Michele
2007-01-01
Full Text Available This paper investigates the joint maximum likelihood (ML) estimation of the carrier frequency offset, timing error, and carrier phase in burst-mode satellite transmissions over an AWGN channel. The synchronization process is assisted by a training sequence appended in front of each burst and composed of alternating binary symbols. The use of this particular pilot pattern results in an estimation algorithm of affordable complexity that operates in a decoupled fashion. In particular, the frequency offset is measured first and independently of the other parameters. Timing and phase estimates are subsequently computed through simple closed-form expressions. The performance of the proposed scheme is investigated by computer simulation and compared with Cramer-Rao bounds. It turns out that the estimation accuracy is very close to the theoretical limits down to relatively low signal-to-noise ratios. This makes the algorithm well suited for turbo-coded transmissions operating near the Shannon limit.
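The decoupled estimation described above (frequency first, then phase, from a known alternating preamble) can be sketched with a classical autocorrelation-based frequency estimator. This is a simplified stand-in, not the paper's exact ML algorithm, and the burst parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

L = 64                       # preamble length in symbols
f_off = 0.01                 # normalized frequency offset (cycles/symbol)
phase = 0.3                  # carrier phase (rad)
pilots = np.tile([1.0, -1.0], L // 2)          # alternating binary symbols
noise = 0.02 * (rng.normal(size=L) + 1j * rng.normal(size=L))
r = pilots * np.exp(1j * (2 * np.pi * f_off * np.arange(L) + phase)) + noise

# Wipe off the known pilot modulation, leaving a noisy complex tone.
z = r * pilots
# Frequency from the phase of the lag-1 sample autocorrelation,
# then phase from the derotated sum (both simple closed forms).
f_hat = np.angle(np.sum(z[1:] * np.conj(z[:-1]))) / (2 * np.pi)
phase_hat = np.angle(np.sum(z * np.exp(-2j * np.pi * f_hat * np.arange(L))))
```

The decoupling mirrors the abstract: once the modulation is removed, frequency is estimated independently, and the phase estimate then follows in closed form.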
Bian, Liheng; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai
2016-01-01
Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for error removal. Results on both simulated data and real data captured using our laser FPM setup show that the proposed...
Blind deconvolution of quantum-limited incoherent imagery: maximum-likelihood approach.
Holmes, T J
1992-07-01
Previous research presented by the author and others into maximum-likelihood image restoration for incoherent imagery is extended to consider problems of blind deconvolution in which the impulse response of the system is assumed to be unknown. Potential applications that motivate this study are wide-field and confocal fluorescence microscopy, although applications in astronomy and infrared imaging are foreseen as well. The methodology incorporates the iterative expectation-maximization algorithm. Although the precise impulse response is assumed to be unknown, some prior knowledge about characteristics of the impulse response is used. In preliminary simulation studies that are presented, the circular symmetry and the band-limited nature of the impulse response are used as such. These simulations demonstrate the potential utility and present limitations of these methods. PMID:1634965
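The EM-based blind deconvolution described above alternates multiplicative (Richardson-Lucy) updates of the object and of the impulse response. Below is a minimal 1-D sketch using circular convolutions; the spike object, Gaussian PSF, and iteration count are illustrative assumptions, and real microscopy use would be 2-D with the symmetry and band-limit constraints the paper mentions:

```python
import numpy as np

def blind_richardson_lucy(image, psf_init, n_iter=200):
    """Alternating Richardson-Lucy (EM) updates for object and PSF,
    with circular (FFT) convolutions for a compact 1-D sketch."""
    eps = 1e-12
    F = np.fft.rfft
    Fi = lambda X: np.fft.irfft(X, n=image.size)
    obj = np.full_like(image, image.mean())
    psf = psf_init / psf_init.sum()
    for _ in range(n_iter):
        # Object update with the current PSF estimate.
        ratio = image / (Fi(F(obj) * F(psf)) + eps)
        obj = obj * Fi(F(ratio) * np.conj(F(psf)))
        # PSF update with the current object estimate.
        ratio = image / (Fi(F(obj) * F(psf)) + eps)
        psf = psf * Fi(F(ratio) * np.conj(F(obj))) / (obj.sum() + eps)
        psf = np.clip(psf, 0.0, None)
        psf = psf / (psf.sum() + eps)   # keep the PSF normalized
    return obj, psf

# Toy data: two spikes on a baseline, blurred by a circular Gaussian PSF.
N = 64
grid = np.arange(N)
d = np.minimum(grid, N - grid)          # circular distance from index 0
true_obj = np.full(N, 0.1)
true_obj[20], true_obj[40] = 5.0, 3.0
true_psf = np.exp(-0.5 * (d / 1.5) ** 2)
true_psf /= true_psf.sum()
image = np.fft.irfft(np.fft.rfft(true_obj) * np.fft.rfft(true_psf), n=N)

psf_init = np.exp(-0.5 * (d / 2.5) ** 2)   # deliberately too wide
obj_est, psf_est = blind_richardson_lucy(image, psf_init)
```

The multiplicative form keeps both estimates non-negative, which is the positivity constraint EM enforces automatically for Poisson-model imagery.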
A New Maximum-likelihood Technique for Reconstructing Cosmic-Ray Anisotropy at All Angular Scales
Ahlers, M.; BenZvi, S. Y.; Desiati, P.; Díaz–Vélez, J. C.; Fiorino, D. W.; Westerhoff, S.
2016-05-01
The arrival directions of TeV–PeV cosmic rays show weak but significant anisotropies with relative intensities at the level of one per mille. Due to the smallness of the anisotropies, quantitative studies require careful disentanglement of detector effects from the observation. We discuss an iterative maximum-likelihood reconstruction that simultaneously fits cosmic-ray anisotropies and detector acceptance. The method does not rely on detector simulations and provides an optimal anisotropy reconstruction for ground-based cosmic-ray observatories located in the middle latitudes. It is particularly well suited to the recovery of the dipole anisotropy, which is a crucial observable for the study of cosmic-ray diffusion in our Galaxy. We also provide general analysis methods for recovering large- and small-scale anisotropies that take into account systematic effects of the observation by ground-based detectors.
Consistency of the Maximum Likelihood Estimator for general hidden Markov models
Douc, Randal; Olsson, Jimmy; Van Handel, Ramon
2009-01-01
Consider a parametrized family of general hidden Markov models, where both the observed and unobserved components take values in a complete separable metric space. We prove that the maximum likelihood estimator (MLE) of the parameter is strongly consistent under a rather minimal set of assumptions. As special cases of our main result, we obtain consistency in a large class of nonlinear state space models, as well as general results on linear Gaussian state space models and finite state models. A novel aspect of our approach is an information-theoretic technique for proving identifiability, which does not require an explicit representation for the relative entropy rate. Our method of proof could therefore form a foundation for the investigation of MLE consistency in more general dependent and non-Markovian time series. Also of independent interest is a general concentration inequality for $V$-uniformly ergodic Markov chains.
Selva, J
2011-01-01
This paper presents an efficient method to compute the maximum likelihood (ML) estimation of the parameters of a complex 2-D sinusoidal, with the complexity order of the FFT. The method is based on an accurate barycentric formula for interpolating band-limited signals, and on the fact that the ML cost function can be viewed as a signal of this type, if the time and frequency variables are switched. The method consists in first computing the DFT of the data samples, and then locating the maximum of the cost function by means of Newton's algorithm. The fact is that the complexity of the latter step is small and independent of the data size, since it makes use of the barycentric formula for obtaining the values of the cost function and its derivatives. Thus, the total complexity order is that of the FFT. The method is validated in a numerical example.
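The FFT-plus-Newton structure of the method can be sketched in 1-D. For a single complex sinusoid in white noise, the periodogram is the ML cost, so a coarse maximum from a zero-padded FFT is refined by Newton iterations on the periodogram. Note this sketch evaluates the cost and its derivatives directly rather than through the paper's barycentric interpolation formula, and all names are illustrative.

```python
import numpy as np

def ml_freq(x, n_newton=5):
    """ML frequency estimate of a single complex sinusoid in noise.

    Coarse search: argmax of a zero-padded FFT. Refinement: Newton's
    method on the periodogram |S(f)|^2, where S(f) is the DFT of x at f.
    """
    N = len(x)
    M = 8 * N                                    # oversampled FFT grid
    k = np.argmax(np.abs(np.fft.fft(x, M)))
    f = k / M                                    # coarse frequency, cycles/sample
    n = np.arange(N)
    for _ in range(n_newton):
        e = np.exp(-2j * np.pi * f * n)
        S = np.sum(x * e)                        # S(f)
        S1 = np.sum(x * e * (-2j * np.pi * n))   # dS/df
        S2 = np.sum(x * e * (-2j * np.pi * n) ** 2)
        J1 = 2 * np.real(S1 * np.conj(S))        # d|S|^2/df
        J2 = 2 * np.real(S2 * np.conj(S)) + 2 * np.abs(S1) ** 2
        f -= J1 / J2                             # Newton step (J2 < 0 near peak)
    return f % 1.0
```

As in the paper, the Newton stage has a cost independent of the data size, so the total complexity is dominated by the FFT.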
Maximum likelihood estimation of ancestral codon usage bias parameters in Drosophila
DEFF Research Database (Denmark)
Nielsen, Rasmus; Bauer DuMont, Vanessa L; Hubisz, Melissa J;
2007-01-01
selection coefficient for optimal codon usage (S), allowing joint maximum likelihood estimation of S and the dN/dS ratio. We apply the method to previously published data from Drosophila melanogaster, Drosophila simulans, and Drosophila yakuba and show, in accordance with previous results, that the D. melanogaster lineage has experienced a reduction in the selection for optimal codon usage. However, the D. melanogaster lineage has also experienced a change in the biological mutation rates relative to D. simulans, in particular, a relative reduction in the mutation rate from A to G and an increase in the mutation rate from C to T. However, neither a reduction in the strength of selection nor a change in the mutational pattern can alone explain all of the data observed in the D. melanogaster lineage. For example, we also confirm previous results showing that the Notch locus has experienced positive...

Implementation of non-linear filters for iterative penalized maximum likelihood image reconstruction
International Nuclear Information System (INIS)
In this paper, the authors report on the implementation of six edge-preserving, noise-smoothing, non-linear filters applied in image space for iterative penalized maximum-likelihood (ML) SPECT image reconstruction. The non-linear smoothing filters implemented were the median filter, the E6 filter, the sigma filter, the edge-line filter, the gradient-inverse filter, and the 3-point edge filter with gradient-inverse weight. A 3 x 3 window was used for all these filters. The best image obtained, judged by viewing profiles through the image in terms of noise-smoothing, edge-sharpening, and contrast, was the one smoothed with the 3-point edge filter. The computation time for the smoothing was less than 1% of one iteration, and the memory space for the smoothing was negligible. These images were compared with the results obtained using Bayesian analysis.
The early maximum likelihood estimation model of audiovisual integration in speech perception
DEFF Research Database (Denmark)
Andersen, Tobias
2015-01-01
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual......-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures...
Institute of Scientific and Technical Information of China (English)
LIAO Yuanfu; ZHUANG Zhixian; YANG Jyhher
2008-01-01
Unseen handset mismatch is the major source of performance degradation in speaker identification in telecommunication environments. To alleviate the problem, a maximum likelihood a priori knowledge interpolation (ML-AKI)-based handset mismatch compensation approach is proposed. It first collects a set of handset characteristics of seen handsets to use as the a priori knowledge for representing the space of handsets. During evaluation, the characteristics of an unknown test handset are optimally estimated by interpolation from the set of a priori knowledge. Experimental results on the HTIMIT database show that the ML-AKI method can improve the average speaker identification rate from 60.0% to 74.6% as compared with conventional maximum a posteriori-adapted Gaussian mixture models. The proposed ML-AKI method is a promising method for robust speaker identification.
Meyer, Karin
2007-11-01
WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from (http://agbu.une.edu.au/~kmeyer/wombat.html). PMID:17973343
Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H
2005-01-01
In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining of classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore can not undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884
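The conditional-independence combination rule described above can be made concrete. If each binary classifier is summarized at its operating point by a sensitivity and specificity, its positive and negative calls carry likelihood ratios, and under conditional independence the likelihood ratio of a joint outcome is the product of the individual ones; sweeping a threshold on that product traces the combined ROC. This is a sketch under those stated assumptions, with illustrative names, not the authors' exact implementation.

```python
import numpy as np

def combined_roc(sens, spec):
    """Combine two conditionally independent binary classifiers via ML.

    sens = (s1, s2), spec = (p1, p2) at each classifier's operating point.
    Returns (fpr, tpr) arrays of the combined ROC, obtained by calling
    joint outcomes positive in order of decreasing likelihood ratio.
    """
    s1, s2 = sens
    p1, p2 = spec
    # joint outcome probabilities for (+,+), (+,-), (-,+), (-,-)
    pd = np.array([s1*s2, s1*(1-s2), (1-s1)*s2, (1-s1)*(1-s2)])   # diseased
    pn = np.array([(1-p1)*(1-p2), (1-p1)*p2, p1*(1-p2), p1*p2])   # healthy
    order = np.argsort(-(pd / pn))     # Neyman-Pearson ordering by LR
    tpr = np.cumsum(pd[order])
    fpr = np.cumsum(pn[order])
    return fpr, tpr
```

For example, `combined_roc((0.8, 0.7), (0.9, 0.85))` yields four combined operating points that dominate either classifier alone over part of the ROC plane.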
da Silva, A J; Santos, D O C; Lima, R F
2013-01-01
Recently, we demonstrated the existence of nonextensivity in neuromuscular transmission [Phys. Rev. E 84, 041925 (2011)]. In the present letter, we propose a general criterion based on the q-calculus foundations and nonextensive statistics to estimate the values for both scale factor and q-index using the maximum likelihood q-estimation method (MLqE). We next applied our theoretical findings to electrophysiological recordings from neuromuscular junction (NMJ) where spontaneous miniature end plate potentials (MEPP) were analyzed. These calculations were performed in both normal and high extracellular potassium concentration, [K+]o. This protocol was assumed to test the validity of the q-index in electrophysiological conditions closely resembling physiological stimuli. Surprisingly, the analysis showed a significant difference between the q-index in high and normal [K+]o, where the magnitude of nonextensivity was increased. Our letter provides a general way to obtain the best q-index from the q-Gaussian distrib...
Yan, Tsun-Yee
1992-01-01
This paper describes an extended-source spatial acquisition process based on the maximum likelihood criterion for interplanetary optical communications. The objective is to use the sun-lit Earth image as a receiver beacon and point the transmitter laser to the Earth-based receiver to establish a communication path. The process assumes the existence of a reference image. The uncertainties between the reference image and the received image are modeled as additive white Gaussian disturbances. It has been shown that the optimal spatial acquisition requires solving two nonlinear equations to estimate the coordinates of the transceiver from the received camera image in the transformed domain. The optimal solution can be obtained iteratively by solving two linear equations. Numerical results using a sample sun-lit Earth as a reference image demonstrate that sub-pixel resolutions can be achieved in a high disturbance environment. Spatial resolution is quantified by Cramer-Rao lower bounds.
PSMIX: an R package for population structure inference via maximum likelihood method
Directory of Open Access Journals (Sweden)
Zhao Hongyu
2006-06-01
Full Text Available Abstract Background Inference of population stratification and individual admixture from genetic markers is an integrative part of a study in diverse situations, such as association mapping and evolutionary studies. Bayesian methods have been proposed for population stratification and admixture inference using multilocus genotypes and widely used in practice. However, these Bayesian methods demand intensive computation resources and may run into convergence problem in Markov Chain Monte Carlo based posterior samplings. Results We have developed PSMIX, an R package based on maximum likelihood method using expectation-maximization algorithm, for inference of population stratification and individual admixture. Conclusion Compared with software based on Bayesian methods (e.g., STRUCTURE, PSMIX has similar accuracy, but more efficient computations. PSMIX and its supplemental documents are freely available at http://bioinformatics.med.yale.edu/PSMIX.
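The EM approach underlying PSMIX can be illustrated with a stripped-down stratification model. Here each individual is assumed to come from exactly one of K populations with unknown allele frequencies, and EM alternates between posterior membership probabilities (E-step) and frequency/weight updates (M-step). This is only a sketch of the general idea with illustrative names; PSMIX itself additionally models individual admixture proportions and diploid genotypes.

```python
import numpy as np

def em_structure(G, K=2, n_iter=200, seed=0):
    """EM for a simple population-stratification mixture model.

    G: (N individuals x L loci) 0/1 haploid genotype matrix. Each
    individual is drawn from one of K populations with allele
    frequencies F[k, l] and mixing weights w[k], all estimated by ML.
    """
    rng = np.random.default_rng(seed)
    N, L = G.shape
    F = rng.uniform(0.2, 0.8, size=(K, L))
    w = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: membership responsibilities from per-population log-likelihoods
        logp = G @ np.log(F).T + (1 - G) @ np.log(1 - F).T + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)            # (N, K)
        # M-step: reweight and re-estimate allele frequencies
        w = R.mean(axis=0)
        F = (R.T @ G) / R.sum(axis=0)[:, None]
        F = np.clip(F, 1e-6, 1 - 1e-6)
    return w, F, R
```

Unlike MCMC-based posterior sampling, each EM pass is deterministic and cheap, which is the efficiency argument the abstract makes for the ML approach.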
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
This paper addresses the problem of parameter estimation of multivariable stationary stochastic systems on the basis of observed output data. The main contribution is to employ the expectation-maximisation (EM) method as a means for computing the maximum-likelihood (ML) parameter estimate of the system. A closed form of the expectation of the studied system subjected to Gaussian distribution noise is derived, and a parameter choice that maximizes the expectation is also proposed. This results in an iterative algorithm for parameter estimation, and a robust algorithm implementation based on QR factorization and Cholesky factorization is also discussed. Moreover, algorithmic properties such as the non-decreasing likelihood value, necessary and sufficient conditions for the algorithm to arrive at a local stationary parameter, the convergence rate and the factors affecting the convergence rate are analyzed. A simulation study shows that the proposed algorithm has attractive properties such as numerical stability and avoidance of difficult initial conditions.
MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker
Energy Technology Data Exchange (ETDEWEB)
Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw
2009-06-09
MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
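The preconditioned conjugate gradient solver at the heart of such map-makers is generic and easy to sketch. The operator A is applied matrix-free (in the map-making setting it combines noise filtering and pointing operations; here it is just a callable), and the preconditioner approximates its inverse. Names and defaults are illustrative.

```python
import numpy as np

def pcg(apply_A, b, precond, n_iter=100, tol=1e-10):
    """Preconditioned conjugate gradient for A x = b, A symmetric
    positive definite and supplied only as a function v -> A v."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # conjugate search direction
        rz = rz_new
    return x
```

A diagonal (Jacobi) preconditioner, `lambda r: r / np.diag(A)`, is often the first thing to try; map-makers typically use a block or multigrid preconditioner tuned to the scan strategy.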
Supervised maximum-likelihood weighting of composite protein networks for complex prediction
Directory of Open Access Journals (Sweden)
Yong Chern Han
2012-12-01
Full Text Available Abstract Background Protein complexes participate in many important cellular functions, so finding the set of existent complexes is essential for understanding the organization and regulation of processes in the cell. With the availability of large amounts of high-throughput protein-protein interaction (PPI data, many algorithms have been proposed to discover protein complexes from PPI networks. However, such approaches are hindered by the high rate of noise in high-throughput PPI data, including spurious and missing interactions. Furthermore, many transient interactions are detected between proteins that are not from the same complex, while not all proteins from the same complex may actually interact. As a result, predicted complexes often do not match true complexes well, and many true complexes go undetected. Results We address these challenges by integrating PPI data with other heterogeneous data sources to construct a composite protein network, and using a supervised maximum-likelihood approach to weight each edge based on its posterior probability of belonging to a complex. We then use six different clustering algorithms, and an aggregative clustering strategy, to discover complexes in the weighted network. We test our method on Saccharomyces cerevisiae and Homo sapiens, and show that complex discovery is improved: compared to previously proposed supervised and unsupervised weighting approaches, our method recalls more known complexes, achieves higher precision at all recall levels, and generates novel complexes of greater functional similarity. Furthermore, our maximum-likelihood approach allows learned parameters to be used to visualize and evaluate the evidence of novel predictions, aiding human judgment of their credibility. Conclusions Our approach integrates multiple data sources with supervised learning to create a weighted composite protein network, and uses six clustering algorithms with an aggregative clustering strategy to
Directory of Open Access Journals (Sweden)
César da Silva Chagas
2013-04-01
Full Text Available Soil surveys are the main source of spatial information on soils and have a range of different applications, mainly in agriculture. The continuity of this activity has, however, been severely compromised, mainly due to a lack of governmental funding. The purpose of this study was to evaluate the feasibility of two different classifiers (artificial neural networks and a maximum likelihood algorithm) in the prediction of soil classes in the northwest of the state of Rio de Janeiro. Terrain attributes such as elevation, slope, aspect, plan curvature and compound topographic index (CTI), and indices of clay minerals, iron oxide and the Normalized Difference Vegetation Index (NDVI), derived from Landsat 7 ETM+ sensor imagery, were used as discriminating variables. The two classifiers were trained and validated for each soil class using 300 and 150 samples respectively, representing the characteristics of these classes in terms of the discriminating variables. According to the statistical tests, the accuracy of the classifier based on artificial neural networks (ANNs) was greater than that of the classic Maximum Likelihood Classifier (MLC). Comparing the results with 126 points of reference showed that the resulting ANN map (73.81%) was superior to the MLC map (57.94%). The main errors when using the two classifiers were caused by: (a) the geological heterogeneity of the area coupled with problems related to the geological map; (b) the depth of lithic contact and/or rock exposure; and (c) problems with the environmental correlation model used due to the polygenetic nature of the soils. This study confirms that the use of terrain attributes together with remote sensing data by an ANN approach can be a tool to facilitate soil mapping in Brazil, primarily due to the availability of low-cost remote sensing data and the ease by which terrain attributes can be obtained.
Makeev, Andrey; Ikejimba, Lynda; Lo, Joseph Y.; Glick, Stephen J.
2016-03-01
Although digital mammography has reduced breast cancer mortality by approximately 30%, sensitivity and specificity are still far from perfect. In particular, the performance of mammography is especially limited for women with dense breast tissue. Two out of every three biopsies performed in the U.S. are unnecessary, thereby resulting in increased patient anxiety, pain, and possible complications. One promising tomographic breast imaging method that has recently been approved by the FDA is dedicated breast computed tomography (BCT). However, visualizing lesions with BCT can still be challenging for women with dense breast tissue due to the minimal contrast for lesions surrounded by fibroglandular tissue. In recent years there has been renewed interest in improving lesion conspicuity in x-ray breast imaging by administration of an iodinated contrast agent. Due to the fully 3-D imaging nature of BCT, as well as sub-optimal contrast enhancement while the breast is under compression with mammography and breast tomosynthesis, dedicated BCT of the uncompressed breast is likely to offer the best solution for injected contrast-enhanced x-ray breast imaging. It is well known that use of statistically-based iterative reconstruction in CT results in improved image quality at lower radiation dose. Here we investigate possible improvements in image reconstruction for BCT by optimizing the free regularization parameter of a maximum-likelihood method and comparing its performance with the clinical cone-beam filtered backprojection (FBP) algorithm.
Energy Technology Data Exchange (ETDEWEB)
Llacer, J.; Veklerov, E.; Nolan, D. (Lawrence Berkeley Lab., CA (USA)); Grafton, S.T.; Mazziotta, J.C.; Hawkins, R.A.; Hoh, C.K.; Hoffman, E.J. (California Univ., Los Angeles, CA (USA))
1990-10-01
This paper will report on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of 18F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and we propose a variation based on reading three consecutive slices at a time, rating only the center slice. 9 refs., 2 figs., 1 tab.
THE GENERALIZED MAXIMUM LIKELIHOOD METHOD APPLIED TO HIGH PRESSURE PHASE EQUILIBRIUM
Directory of Open Access Journals (Sweden)
Lúcio CARDOZO-FILHO
1997-12-01
Full Text Available The generalized maximum likelihood method was used to determine binary interaction parameters between carbon dioxide and components of orange essential oil. Vapor-liquid equilibrium was modeled with the Peng-Robinson and Soave-Redlich-Kwong equations, using a methodology proposed in 1979 by Asselineau, Bogdanic and Vidal. Experimental vapor-liquid equilibrium data on binary mixtures formed with carbon dioxide and compounds usually found in orange essential oil were used to test the model. These systems were chosen to demonstrate that the maximum likelihood method produces binary interaction parameters for cubic equations of state capable of satisfactorily describing phase equilibrium, even for a binary such as ethanol/CO2. The results corroborate that the Peng-Robinson, as well as the Soave-Redlich-Kwong, equation can be used to describe phase equilibrium for the following systems: components of essential oil of orange/CO2.
Emanuele Rizzo, Roberto; Healy, David; De Siena, Luca
2016-04-01
The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in fractured rock, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (lengths, apertures, orientations and densities) is fundamental to the estimation of permeability and fluid flow, which are of primary importance in a number of contexts including: hydrocarbon production from fractured reservoirs; geothermal energy extraction; and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. Our work links outcrop fracture data to modelled fracture networks in order to numerically predict bulk permeability. We collected outcrop data from a highly fractured upper Miocene biosiliceous mudstone formation, cropping out along the coastline north of Santa Cruz (California, USA). Using outcrop fracture networks as analogues for subsurface fracture systems has several advantages, because key fracture attributes such as spatial arrangements and lengths can be effectively measured only on outcrops [1]. However, a limitation when dealing with outcrop data is the relative sparseness of natural data due to the intrinsic finite size of the outcrops. We make use of a statistical approach for the overall workflow, starting from data collection with the Circular Windows Method [2]. Then we analyse the data statistically using Maximum Likelihood Estimators, which provide greater accuracy compared to the more commonly used Least Squares linear regression when investigating distribution of fracture attributes. Finally, we estimate the bulk permeability of the fractured rock mass using Oda's tensorial approach [3]. The higher quality of this statistical analysis is fundamental: better statistics of the fracture attributes means more accurate permeability estimation, since the fracture attributes feed
Saatci, Esra; Akan, Aydin
2010-12-01
We propose a procedure to estimate the model parameters of a nonlinear Resistance-Capacitance (RC) model and the widely used linear Resistance-Inductance-Capacitance (RIC) model of the respiratory system by a Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and the kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated by the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model compared to the nonlinear RC model. On the other hand, the patient group respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, a better-converged measurement noise shape factor, and model parameter tracks. It is also observed that for the patient group the shape factor of the measurement noise converges to values between 1 and 2, whereas for the control group shape factor values are estimated in the super-Gaussian area.
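The GGD noise model described above admits a compact estimation recipe: the shape factor can be picked by matching the sample kurtosis to the GGD's theoretical kurtosis, and the scale then has a closed-form ML solution. The following sketch assumes a zero-mean GGD with density proportional to exp(-|x/a|^b) and uses a simple grid search for the shape; it is an illustration of the noise-model fit, not the authors' full MLE for the respiratory models.

```python
import numpy as np
from math import gamma

def ggd_fit(x, betas=np.linspace(0.3, 4.0, 500)):
    """Fit a zero-mean generalized Gaussian exp(-|x/a|^b).

    Shape b: matched so that the theoretical kurtosis
    G(1/b) G(5/b) / G(3/b)^2 equals the sample kurtosis.
    Scale a: closed-form MLE a = (b/N * sum |x_i|^b)^(1/b).
    """
    k_sample = np.mean(x ** 4) / np.mean(x ** 2) ** 2
    k_theory = np.array([gamma(1 / b) * gamma(5 / b) / gamma(3 / b) ** 2
                         for b in betas])
    b = betas[np.argmin(np.abs(k_theory - k_sample))]   # kurtosis matching
    a = (b / len(x) * np.sum(np.abs(x) ** b)) ** (1 / b)  # ML scale
    return b, a
```

A Gaussian sample should recover b near 2 (kurtosis 3) and a near sigma * sqrt(2), while Laplacian-like residuals (b near 1) fall in the super-Gaussian range the abstract reports for the patient group.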
Decision Feedback Partial Response Maximum Likelihood for Super-Resolution Media
Kasahara, Ryosuke; Ogata, Tetsuya; Kawasaki, Toshiyuki; Miura, Hiroshi; Yokoi, Kenya
2007-06-01
A decision feedback partial response maximum likelihood (PRML) method for super-resolution media was developed. Decision feedback is used to compensate for nonlinear distortion in the readout signals of super-resolution media, making it possible to compensate for long-bit nonlinear distortion in small circuits. A field-programmable gate array (FPGA) was fabricated with the decision feedback PRML, and a real-time bit error rate (bER) measuring system was developed. As a result, a bER of 4 x 10^-5 was achieved with an actual readout signal at double the density of a Blu-ray disc, converted to the optical properties of the experimental setup using a red-laser system. Also, a bER of 1.5 x 10^-5 was achieved at double the density of a high-definition digital versatile disc read-only memory (HD DVD-ROM), and the radial and tangential tilt margins were measured in a blue-laser system.
Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code
Directory of Open Access Journals (Sweden)
Adel Ahmadi
2015-01-01
Full Text Available Motivated by the decompositions of sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for the quasi-orthogonal space-time block code (QOSTBC). The proposed algorithm, with a relatively simple design, exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. Search spaces of symbols are substantially reduced by employing the independent parts and statistics of noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder's performance is superior to many of the recently published state-of-the-art solutions in terms of complexity level. More specifically, it was verified that applying the new algorithm with 1024-QAM decreases the computational complexity compared to a state-of-the-art solution with 16-QAM.
Directory of Open Access Journals (Sweden)
Andrey eStepanyuk
2014-10-01
Full Text Available Dendritic integration and neuronal firing patterns strongly depend on the biophysical properties of synaptic ligand-gated channels. However, precise estimation of the biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method, based on a maximum likelihood approach, that estimates not only the unitary current of synaptic receptor channels but also their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter, and the peak open probability from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of unitary current evaluation compared to peak-scaled non-stationary fluctuation analysis, making it possible to estimate this important parameter precisely from a few postsynaptic currents recorded under steady-state conditions. Estimation of the unitary current with this method is robust even if the postsynaptic currents are generated by receptors with different kinetic parameters, a case in which peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents can be used to study the properties of synaptic receptors in their native biochemical environment.
Extended Maximum Likelihood Halo-independent Analysis of Dark Matter Direct Detection Data
Gelmini, Graciela B; Gondolo, Paolo; Huh, Ji-Haeng
2015-01-01
We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio $f_n/f_p=-0.7$, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with $f_n/f_p=-0.8$.
Zhao, Xiang; Lin, Jiming
2016-04-01
Image sensor-based visible light positioning can be applied not only to indoor environments but also to outdoor environments. To determine the performance bounds of the positioning accuracy from the view of statistical optimization for an outdoor image sensor-based visible light positioning system, we analyze and derive the maximum likelihood estimation and corresponding Cramér-Rao lower bounds of vehicle position, under the condition that the observation values of the light-emitting diode (LED) imaging points are affected by white Gaussian noise. For typical parameters of an LED traffic light and in-vehicle camera image sensor, simulation results show that accurate estimates are available, with positioning error generally less than 0.1 m at a communication distance of 30 m between the LED array transmitter and the camera receiver. With the communication distance being constant, the positioning accuracy depends on the number of LEDs used, the focal length of the lens, the pixel size, and the frame rate of the camera receiver.
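The abstract benchmarks an ML position estimate against a Cramér-Rao lower bound. That relationship can be illustrated with a deliberately simplified one-dimensional model (a position observed repeatedly in white Gaussian noise, not the paper's camera geometry; all names and values below are illustrative assumptions):

```python
import random
import statistics

def crlb_gaussian_mean(sigma, n):
    """Cramer-Rao lower bound on the variance of any unbiased estimator
    of a location parameter observed n times in white Gaussian noise:
    Fisher information I = n / sigma^2, so CRLB = sigma^2 / n."""
    return sigma ** 2 / n

# In this toy model the ML estimator is the sample mean, which attains
# the bound; a Monte Carlo experiment makes that visible.
random.seed(1)
sigma, n, theta = 0.5, 25, 3.0
estimates = [statistics.fmean(random.gauss(theta, sigma) for _ in range(n))
             for _ in range(4000)]
empirical_var = statistics.pvariance(estimates)
bound = crlb_gaussian_mean(sigma, n)  # 0.25 / 25 = 0.01
```

In the paper's nonlinear imaging model the bound is instead computed from the full Fisher information matrix of the LED image-point observations, but the logic (compare estimator variance to the inverse information) is the same.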
Performance of MIMO-OFDM system using Linear Maximum Likelihood Alamouti Decoder
Directory of Open Access Journals (Sweden)
Monika Aggarwal
2012-06-01
Full Text Available A MIMO-OFDM wireless communication system is a combination of MIMO and OFDM technology. The combination of MIMO and OFDM produces a powerful technique for providing high data rates over frequency-selective fading channels, and MIMO-OFDM is currently recognized as one of the most competitive technologies for 4G mobile wireless systems, since it can compensate for the shortcomings of MIMO while exploiting the advantages of OFDM. In this paper, the bit error rate (BER) performance of a linear maximum likelihood Alamouti combiner (LMLAC) decoding technique for space-time-frequency block code (STFBC) MIMO-OFDM systems with frequency offset (FO) is evaluated, with the goal of providing the system with low complexity and maximum diversity. The simulation results show that the scheme has the ability to reduce ICI effectively with low decoding complexity and maximum diversity, both in terms of bandwidth efficiency and in BER performance, especially at high signal-to-noise ratio.
Statistical analysis of maximum likelihood estimator images of human brain FDG PET studies
International Nuclear Information System (INIS)
The work presented in this paper evaluates the statistical characteristics of regional bias and expected error in reconstructions of real PET data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task that the authors have investigated is that of quantifying radioisotope uptake in regions of interest (ROIs). They first describe a robust methodology for the use of the MLE method with clinical data which contains only one adjustable parameter: the kernel size for a Gaussian filtering operation that determines final resolution and expected regional error. Simulation results are used to establish the fundamental characteristics of the reconstructions obtained by this methodology, corresponding to the case in which the transition matrix is perfectly known. Then, data from 72 independent human brain FDG scans from four patients are used to show that the results obtained from real data are consistent with the simulations, although the quality of the data and of the transition matrix have an effect on the final outcome.
International Nuclear Information System (INIS)
In this paper, we propose a method to denoise magnitude magnetic resonance (MR) images, which are Rician distributed. Conventionally, maximum likelihood methods incorporate the Rice distribution to estimate the true, underlying signal from a local neighborhood within which the signal is assumed to be constant. However, if this assumption is not met, such filtering will lead to blurred edges and loss of fine structures. As a solution to this problem, we put forward the concept of restricted local neighborhoods, where the true intensity for each noisy pixel is estimated from a set of preselected neighboring pixels. To this end, a reference image is created from the noisy image using a recently proposed nonlocal means algorithm. This reference image is used as a prior for further noise reduction. A scheme is developed to locally select an appropriate subset of pixels from which the underlying signal is estimated. Experimental results based on the peak signal-to-noise ratio, structural similarity index measure, Bhattacharyya coefficient and mean absolute difference, from synthetic and real MR images, demonstrate the superior performance of the proposed method over other state-of-the-art methods.
International Nuclear Information System (INIS)
131I is widely used for diagnostic and therapeutic purposes. Since the thyroid is the main deposition site for 131I, it can be detected by direct thyroid monitoring. This work presents results of follow-up measurements of an individual who was internally contaminated with 131I, with the injected activity determined by the maximum likelihood method. The importance of dose per unit content is also shown in this study. The whole body monitoring system available in the Radiation Safety Systems Division of Bhabha Atomic Research Centre is calibrated for estimation of 131I in the thyroid of radiation workers using a BOttle MAnnikin ABsorber (BOMAB) phantom with the neck part replaced by an American National Standards Institute (ANSI)/International Atomic Energy Agency (IAEA) neck phantom. The estimated intake was found to be 89.24 kBq and the committed effective dose was calculated as 1.96 mSv. The data are analyzed with autocorrelation and Chi-square tests to establish goodness of fit to a log-normal distribution. The overestimation of thyroid activity caused by the mid-axial hole in the BOMAB phantom is removed by using the ANSI/IAEA neck phantom. Thyroid retention data measured on different days following the intake closely fit the ICRP-predicted retained activity. (author)
International Nuclear Information System (INIS)
The aim of this work is to calculate, directly from projection data, concise images characterizing the spatial and temporal distribution of labelled compounds from dynamic PET data. Conventionally, image reconstruction and the calculation of parametric images are performed sequentially. By combining the two processes, low-noise parametric images are obtained using a computationally feasible parametric iterative reconstruction (PIR) algorithm. PIR is performed by restricting the pixel time-activity curves to a positive linear sum of predefined time characteristics. The weights in this sum are then calculated directly from the PET projection data, using an iterative algorithm based on a maximum-likelihood iterative algorithm commonly used for tomographic reconstruction. The ability of the algorithm to extract known kinetic components from the raw data is assessed using data from both a phantom experiment and clinical studies. The calculated parametric images indicate differential kinetic behaviour and have been used to aid in the identification of tissues which exhibit differences in the handling of labelled compounds. These parametric images should be helpful in defining regions of interest with similar functional behaviour, and with Patlak analysis. (author)
Directory of Open Access Journals (Sweden)
Sonali Sachin Sankpal
2016-01-01
Full Text Available Scattering and absorption of light are the main reasons for limited visibility in water; suspended particles and dissolved chemical compounds in the water are responsible for both. The limited visibility results in degradation of underwater images. Visibility can be increased by using an artificial light source in the underwater imaging system, but artificial light illuminates the scene nonuniformly, producing a bright spot at the center with dark regions at the surroundings. In some cases the imaging system itself creates dark regions in the image by casting shadows on the objects. The problem of nonuniform illumination is neglected in most image enhancement techniques for underwater images, and very few methods demonstrate results on color images. This paper suggests a method for nonuniform illumination correction of underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses maximum likelihood estimation of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and a comprehensive assessment function.
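The scale-parameter fit mentioned above has a closed form: for Rayleigh-distributed data the ML estimate of the scale sigma is the root mean square of the samples divided by sqrt(2). A minimal sketch of just that estimator (the mapping from an estimated sigma to an illumination-corrected image is not shown, and the function name is illustrative):

```python
import math
import random

def rayleigh_scale_mle(samples):
    """Closed-form maximum likelihood estimate of the Rayleigh scale
    parameter: sigma_hat = sqrt(sum(x_i^2) / (2 * n))."""
    n = len(samples)
    return math.sqrt(sum(x * x for x in samples) / (2.0 * n))

# Draw Rayleigh samples by inverse-transform sampling:
# X = sigma * sqrt(-2 ln U) for U ~ Uniform(0, 1).
random.seed(42)
sigma_true = 0.35
data = [sigma_true * math.sqrt(-2.0 * math.log(random.random()))
        for _ in range(50_000)]

sigma_hat = rayleigh_scale_mle(data)  # close to 0.35
```

In the paper's setting the samples would be local pixel intensities, and the fitted scale drives the remapping of the image histogram toward a reference Rayleigh distribution.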
Directory of Open Access Journals (Sweden)
Mohammad H. Radfar
2006-11-01
Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA. For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.
Directory of Open Access Journals (Sweden)
Dansereau Richard M
2007-01-01
Directory of Open Access Journals (Sweden)
Zhang Zhang
2009-06-01
Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
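The model-selection and model-averaging machinery described (AIC, corrected AIC, BIC, and weighted model likelihoods) is generic and can be sketched in a few lines. The log-likelihood values below are made-up placeholders for fits of one-, two- and three-cluster models, not results from the paper:

```python
import math

def aic(loglik, k):
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    # Small-sample corrected AIC.
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(loglik, k, n):
    return k * math.log(n) - 2 * loglik

def akaike_weights(aic_values):
    """Relative model likelihoods exp(-delta/2), normalised to sum to one;
    these are the weights used for model averaging."""
    best = min(aic_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical fits of 1-, 2- and 3-cluster models to n = 200 sites.
fits = [(-530.2, 2), (-512.9, 4), (-511.8, 6)]  # (log-likelihood, #params)
n = 200
aics = [aic(ll, k) for ll, k in fits]
bics = [bic(ll, k, n) for ll, k in fits]
weights = akaike_weights(aics)
best_model = min(range(len(aics)), key=aics.__getitem__)
```

With these placeholder numbers both AIC and BIC favour the two-cluster model, and the Akaike weights quantify how much support the three-cluster alternative retains.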
Maximum likelihood-based analysis of single-molecule photon arrival trajectories
Hajdziona, Marta; Molski, Andrzej
2011-02-01
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low-excitation regime, where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
McNicholl, Patrick J.; Crabtree, Peter N.
2014-09-01
Applications of stellar occultation by solar system objects have a long history for determining universal time, detecting binary stars, and providing estimates of sizes of asteroids and minor planets. More recently, extension of this last application has been proposed as a technique to provide information (if not complete shadow images) of geosynchronous satellites. Diffraction has long been recognized as a source of distortion for such occultation measurements, and models have subsequently been developed to compensate for this degradation. Typically these models employ a knife-edge assumption for the obscuring body. In this preliminary study, we report on the fundamental limitations of knife-edge position estimates due to shot noise in an otherwise idealized measurement. In particular, we address the statistical bounds, both Cramér-Rao and Hammersley-Chapman-Robbins, on the uncertainty in the knife-edge position measurement, as well as the performance of the maximum-likelihood estimator. Results are presented as a function of both stellar magnitude and sensor passband; the limiting case of infinite resolving power is also explored.
Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.
2015-03-01
In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.
Rayleigh-maximum-likelihood filtering for speckle reduction of ultrasound images.
Aysal, Tuncer C; Barner, Kenneth E
2007-05-01
Speckle is a multiplicative noise that degrades ultrasound images. Recent advancements in ultrasound instrumentation and portable ultrasound devices necessitate more robust despeckling techniques, for both routine clinical practice and teleconsultation. Methods previously proposed for speckle reduction suffer from two major limitations: 1) noise attenuation is not sufficient, especially in smooth and background areas; 2) existing methods do not sufficiently preserve or enhance edges, but only inhibit smoothing near them. In this paper, we propose a novel technique that reduces speckle more effectively than previous methods while jointly enhancing the edge information, rather than just inhibiting smoothing. The proposed method utilizes the Rayleigh distribution to model the speckle and adopts a robust maximum-likelihood estimation approach. The resulting estimator is statistically analyzed through first and second moment derivations. A tuning parameter that naturally evolves in the estimation equation is analyzed, and an adaptive method utilizing the instantaneous coefficient of variation is proposed to adjust this parameter. To further tailor performance, a weighted version of the proposed estimator is introduced to exploit the varying statistics of the input samples. Finally, the proposed method is evaluated and compared to well-accepted methods through simulations utilizing synthetic and real ultrasound data. PMID:17518065
Gianfrancesco, M A; Balzer, L; Taylor, K E; Trupin, L; Nititham, J; Seldin, M F; Singer, A W; Criswell, L A; Barcellos, L F
2016-09-01
Systemic lupus erythematosus (SLE) is a chronic autoimmune disease associated with genetic and environmental risk factors. However, the extent to which genetic risk is causally associated with disease activity is unknown. We utilized longitudinal targeted maximum likelihood estimation to estimate the causal association between a genetic risk score (GRS) comprising 41 established SLE variants and clinically important disease activity as measured by the validated Systemic Lupus Activity Questionnaire (SLAQ) in a multiethnic cohort of 942 individuals with SLE. We did not find evidence of a clinically important SLAQ score difference (>4.0) for individuals with a high GRS compared with those with a low GRS across nine time points, after controlling for sex, ancestry, renal status, dialysis, disease duration, treatment, depression, smoking and education, as well as time-dependent confounding from missing visits. Individual single-nucleotide polymorphism (SNP) analyses revealed that 12 of the 41 variants were significantly associated with clinically relevant changes in SLAQ scores across time points eight and nine after controlling for multiple testing. Results based on sophisticated causal modeling of longitudinal data in a large patient cohort suggest that individual SLE risk variants may influence disease activity over time. Our findings also emphasize a role for other biological or environmental factors. PMID:27467283
Application of Artificial Bee Colony Algorithm to Maximum Likelihood DOA Estimation
Institute of Scientific and Technical Information of China (English)
Zhicheng Zhang; Jun Lin; Yaowu Shi
2013-01-01
The Maximum Likelihood (ML) method has excellent performance for Direction-Of-Arrival (DOA) estimation, but a multidimensional nonlinear solution search is required, which complicates the computation and prevents the method from practical use. To reduce the high computational burden of the ML method and make it more suitable for engineering applications, we apply the Artificial Bee Colony (ABC) algorithm to maximize the likelihood function for DOA estimation. As a recently proposed bio-inspired computing algorithm, the ABC algorithm was originally used to optimize multivariable functions by imitating the behavior of a bee colony finding excellent nectar sources in the natural environment. It offers an excellent alternative to conventional methods in ML-DOA estimation. The performance of ABC-based ML and other popular metaheuristic-based ML methods for DOA estimation is compared for various scenarios of convergence, Signal-to-Noise Ratio (SNR), and number of iterations. The computational loads of ABC-based ML and the conventional ML methods for DOA estimation are also investigated. Simulation results demonstrate that the proposed ABC-based method is more efficient in computation and statistical performance than other ML-based DOA estimation methods.
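A greatly reduced sketch of the idea, an artificial bee colony maximizing a likelihood, restricted here to one dimension with a toy Gaussian log-likelihood (up to constants) rather than the multidimensional DOA criterion; all parameter names and settings are illustrative assumptions:

```python
import math
import random

def abc_maximize(loglik, bounds, n_sources=20, limit=30, cycles=200, seed=5):
    """Minimal one-dimensional Artificial Bee Colony search for the
    maximum of a (log-)likelihood function."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_sources)]
    fs = [loglik(x) for x in xs]
    trials = [0] * n_sources

    def neighbour(i):
        # Standard ABC move: v = x_i + phi * (x_i - x_k), random partner k.
        k = rng.randrange(n_sources - 1)
        k += k >= i
        phi = rng.uniform(-1.0, 1.0)
        v = min(hi, max(lo, xs[i] + phi * (xs[i] - xs[k])))
        fv = loglik(v)
        if fv > fs[i]:                      # greedy selection
            xs[i], fs[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_sources):          # employed bees
            neighbour(i)
        weights = [math.exp(f - max(fs)) for f in fs]
        for _ in range(n_sources):          # onlookers favour good sources
            neighbour(rng.choices(range(n_sources), weights)[0])
        for i in range(n_sources):          # scouts replace stale sources
            if trials[i] > limit:
                xs[i], trials[i] = rng.uniform(lo, hi), 0
                fs[i] = loglik(xs[i])
    return xs[fs.index(max(fs))]

# Toy log-likelihood in an unknown mean; the maximizer is the sample
# mean of the data, here 2.05.
data = [1.8, 2.1, 2.4, 1.9]
best = abc_maximize(lambda m: -sum((x - m) ** 2 for x in data), (-10.0, 10.0))
```

In the paper the same search would run over the multidimensional DOA parameter vector, with the array likelihood function in place of the toy objective.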
Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model
International Nuclear Information System (INIS)
We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.
Meyer, Karin
2016-08-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild default penalty, derived by assuming a Beta distribution of scale-free functions of the covariance components to be estimated, rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes that optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.
Maximum Likelihood Estimation of Monocular Optical Flow Field for Mobile Robot Ego-motion
Directory of Open Access Journals (Sweden)
Huajun Liu
2016-01-01
Full Text Available This paper presents an optimized scheme for monocular ego-motion estimation to provide location and pose information for mobile robots with one fixed camera. First, a multi-scale hyper-complex wavelet phase-derived optical flow is applied to estimate the micro-motion of image blocks. Optical flow computation overcomes the difficulties of unreliable feature selection and feature matching in outdoor scenes; at the same time, the multi-scale strategy overcomes the problems of road surface self-similarity and local occlusions. Secondly, a support probability for each flow vector is defined to evaluate the validity of the candidate image motions, and a Maximum Likelihood Estimation (MLE) optical flow model is constructed, based not only on the image motion residuals but also on the distribution of inliers and outliers together with their support probabilities, to evaluate a given transform. This yields an optimized estimation of the inlier part of the optical flow. Thirdly, a sampling and consensus strategy is designed to estimate the ego-motion parameters. Our model and algorithms are tested on real datasets collected from an intelligent vehicle. The experimental results demonstrate that the estimated ego-motion parameters closely follow the GPS/INS ground truth in complex outdoor road scenarios.
Directory of Open Access Journals (Sweden)
Karbauskaitė Rasa
2015-12-01
Full Text Available One of the problems in the analysis of a set of images of a moving object is to evaluate the degree of freedom of motion and the angle of rotation. Here the intrinsic dimensionality of the multidimensional data characterizing the set of images can be used. Usually, an image may be represented by a high-dimensional point whose dimensionality depends on the number of pixels in the image. Knowledge of the intrinsic dimensionality of a data set is very useful information in exploratory data analysis, because it makes it possible to reduce the dimensionality of the data without losing much information. In this paper, the maximum likelihood estimator (MLE) of the intrinsic dimensionality is explored experimentally. In contrast to previous works, the radius of a hypersphere covering the neighbours of the analysed points is fixed, instead of the number of nearest neighbours used in the MLE. A way of choosing the radius in this method is proposed. We explore which metric, Euclidean or geodesic, must be evaluated in the MLE algorithm in order to get the true estimate of the intrinsic dimensionality. The MLE method is examined using a number of artificial and real (image) data sets.
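A minimal sketch of the estimator discussed, the fixed-radius variant of the maximum likelihood (Levina-Bickel style) intrinsic-dimensionality estimator, applied to a flat 2-D manifold embedded in 3-D; the pooling choice (averaging inverse per-point estimates) and all parameter values are illustrative assumptions:

```python
import math
import random

def mle_intrinsic_dim(points, radius):
    """Fixed-radius maximum likelihood intrinsic-dimension estimate:
    for each point, use all neighbours closer than `radius` and form the
    per-point inverse estimate mean(log(radius / d)); pool by averaging
    the inverses over points and inverting."""
    inv_dims = []
    for i, p in enumerate(points):
        logs = []
        for j, q in enumerate(points):
            if i == j:
                continue
            d = math.dist(p, q)
            if 0.0 < d < radius:
                logs.append(math.log(radius / d))
        if logs:
            inv_dims.append(sum(logs) / len(logs))
    return 1.0 / (sum(inv_dims) / len(inv_dims))

# A 2-D flat manifold embedded in 3-D: the estimate should be close to 2.
random.seed(7)
pts = [(random.random(), random.random(), 0.0) for _ in range(1000)]
dim_hat = mle_intrinsic_dim(pts, radius=0.12)
```

For real image sets the points would be the (very high-dimensional) vectorized images, and the choice between Euclidean and geodesic distances, which the abstract investigates, enters through the `math.dist` call.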
Sheen, D. H.; Seong, Y. J.; Park, J. H.; Lim, I. S.
2015-12-01
Early this year, the Korea Meteorological Administration (KMA) began to operate the first stage of an earthquake early warning system (EEWS) and provide early warning information to the general public. The KMA EEWS is based on the Earthquake Alarm Systems version 2 (ElarmS-2), developed at the University of California, Berkeley. This method estimates the earthquake location using a simple grid search algorithm that finds the location with the minimum variance of the origin time on successively finer grids. A robust maximum likelihood earthquake location (MAXEL) method for early warning, based on the equal differential times of P arrivals, was recently developed. The MAXEL method has been demonstrated to be successful in determining the event location even when an outlier is included in the small number of P arrivals. This presentation details the application of MAXEL to the EEWS of the KMA, its performance evaluation over seismic networks in South Korea with synthetic data, and a comparison of the statistics of earthquake locations based on ElarmS-2 and MAXEL.
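The ElarmS-2-style location step described above, scanning a grid and keeping the point whose back-projected origin times have minimum variance, can be sketched on a single flat grid with a constant P velocity (the real system refines successively finer grids, and MAXEL replaces this criterion with equal differential times; all values below are synthetic):

```python
import math
import statistics

def grid_search_location(stations, arrivals, v, grid):
    """For each candidate grid point, back-project each P arrival to an
    origin time (arrival minus travel time) and keep the point where
    those origin times agree best, i.e. have minimum variance."""
    best = None
    for gx, gy in grid:
        origins = [t - math.dist((gx, gy), s) / v
                   for s, t in zip(stations, arrivals)]
        var = statistics.pvariance(origins)
        if best is None or var < best[0]:
            best = (var, (gx, gy))
    return best[1]

# Synthetic test: event at (30, 40) km, origin time 0, P velocity 6 km/s.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (80.0, 90.0)]
true_src = (30.0, 40.0)
v = 6.0
arrivals = [math.dist(true_src, s) / v for s in stations]
grid = [(x, y) for x in range(0, 101, 2) for y in range(0, 101, 2)]
loc = grid_search_location(stations, arrivals, v, grid)
```

With noise-free synthetic arrivals the variance is exactly zero at the true source, so the coarse grid recovers it; with noisy or outlier-contaminated picks the variance surface flattens, which is the situation MAXEL is designed to handle robustly.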
FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.
Directory of Open Access Journals (Sweden)
Maxim Nikolaievich Shokhirev
Full Text Available The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high throughput analysis of dye dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.
Rizzo, R. E.; Healy, D.; De Siena, L.
2015-12-01
The success of any model prediction is largely dependent on the accuracy with which its parameters are known. In characterising fracture networks in naturally fractured rocks, the main issues relate to the difficulties in accurately up- and down-scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (fracture lengths, apertures, orientations and densities) represents a fundamental step which can aid the estimation of permeability and fluid flow, which are of primary importance in a number of contexts ranging from hydrocarbon production in fractured reservoirs and reservoir stimulation by hydrofracturing, to geothermal energy extraction and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. This work focuses on linking fracture data collected directly from outcrops to permeability estimation and fracture network modelling. Outcrop studies can supplement the limited data inherent to natural fractured systems in the subsurface. The study area is a highly fractured upper Miocene biosiliceous mudstone formation cropping out along the coastline north of Santa Cruz (California, USA). These unique outcrops expose a recently active bitumen-bearing formation representing a geological analogue of a fractured top seal. In order to validate field observations as useful analogues of subsurface reservoirs, we describe a methodology of statistical analysis for more accurate probability distributions of fracture attributes, using Maximum Likelihood Estimators. These procedures aim to understand whether the average permeability of a fracture network can be predicted while reducing its uncertainties, and whether outcrop measurements of fracture attributes can be used directly to generate statistically identical fracture network models.
Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.
Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène
2016-07-01
Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. More generally, it constitutes a step toward a better integration of interspecific
Parameter-free bearing fault detection based on maximum likelihood estimation and differentiation
International Nuclear Information System (INIS)
Bearing faults can lead to malfunction and ultimately complete stall of many machines. The conventional high-frequency resonance (HFR) method has been commonly used for bearing fault detection. However, it is often very difficult to obtain and calibrate bandpass filter parameters, i.e. the center frequency and bandwidth, the key to the success of the HFR method. This inevitably undermines the usefulness of the conventional HFR technique. To avoid such difficulties, we propose parameter-free, versatile yet straightforward techniques to detect bearing faults. We focus on two types of measured signals frequently encountered in practice: (1) a mixture of impulsive faulty bearing vibrations and intrinsic background noise and (2) impulsive faulty bearing vibrations blended with intrinsic background noise and vibration interferences. To design a proper signal processing technique for each case, we analyze the effects of intrinsic background noise and vibration interferences on amplitude demodulation. For the first case, a maximum likelihood-based fault detection method is proposed to accommodate the Rician distribution of the amplitude-demodulated signal mixture. For the second case, we first illustrate that the high-amplitude low-frequency vibration interferences can make the amplitude demodulation ineffective. Then we propose a differentiation method to enhance the fault detectability. It is shown that the iterative application of a differentiation step can boost the relative strength of the impulsive faulty bearing signal component with respect to the vibration interferences. This preserves the effectiveness of amplitude demodulation and hence leads to more accurate fault detection. The proposed approaches are evaluated on simulated signals and experimental data acquired from faulty bearings
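The differentiation idea above can be illustrated on a toy signal. The sketch below (an illustration with invented signal parameters, not the authors' implementation) mixes sparse fault impulses with a high-amplitude low-frequency interference, and shows that one differencing step boosts the relative strength of the impulsive component, here measured by excess kurtosis:

```python
import numpy as np
from scipy.stats import kurtosis

# Toy vibration signal: sparse fault impulses masked by a high-amplitude,
# low-frequency interference (all parameter values are invented).
rng = np.random.default_rng(2)
fs = 20_000                                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
interference = 5.0 * np.sin(2 * np.pi * 30 * t)  # 30 Hz vibration interference
impulses = np.zeros_like(t)
impulses[:: fs // 100] = 1.0                     # impulsive fault component, 100 Hz rate
x = interference + impulses + 0.05 * rng.standard_normal(t.size)

# Differentiation scales each spectral component roughly by its frequency, so
# differencing boosts the sharp impulses relative to the slow interference.
k0 = kurtosis(x)            # raw mixture: sinusoid-dominated, near-zero kurtosis
k1 = kurtosis(np.diff(x))   # after one differencing step: impulses dominate
```

The raw signal has the low kurtosis of a sinusoid, while the differenced signal becomes strongly impulsive, which is exactly the property amplitude demodulation needs.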
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
International Nuclear Information System (INIS)
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated
Stsepankou, D.; Arns, A.; Ng, S. K.; Zygmanski, P.; Hesser, J.
2012-10-01
The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm to data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and generalized and applied for nonlinear signal reconstruction. For quality check, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction such as massive undersampling of the number of projections. Projection matrix errors of up to 1° in projection angle remain within tolerance. Single defect pixels exhibit ring artifacts for each method; however, defect pixel compensation allows up to 40% defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose imaging, especially for daily patient localization in radiation therapy, is possible without change of the current hardware of the imaging system.
Lu, Dan; Ye, Ming; Curtis, Gary P.
2015-10-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
International Nuclear Information System (INIS)
Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric-mean-based planar quantification, using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a
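The maximum likelihood-expectation maximization (ML-EM) update at the core of such quantification methods can be sketched on a tiny noiseless system (the matrix and activity values are invented; this is the generic ML-EM iteration, not the QPlanar implementation):

```python
import numpy as np

# Tiny noiseless emission system: 2 organ activities observed through 3
# projection bins via a known system matrix A.
A = np.array([[0.8, 0.2],
              [0.3, 0.7],
              [0.5, 0.5]])       # system (projection) matrix, assumed known
x_true = np.array([4.0, 9.0])   # true organ activities
y = A @ x_true                  # noiseless projection data

# ML-EM multiplicative update: x <- x * A^T(y / (A x)) / (A^T 1)
x = np.ones(2)                  # positive initial estimate
sensitivity = A.sum(axis=0)     # A^T 1
for _ in range(5000):
    x *= (A.T @ (y / (A @ x))) / sensitivity
```

With consistent, noiseless data the iteration converges to the true activities; with Poisson data it converges to the maximum-likelihood estimate instead.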
Tanaka, Katsuto
2011-01-01
We discuss some inference problems associated with the fractional Ornstein-Uhlenbeck (fO-U) process driven by the fractional Brownian motion (fBm). In particular, we are concerned with the estimation of the drift parameter, assuming that the Hurst parameter H is known and is in [1/2, 1). Under this setting we compute the distributions of the maximum likelihood estimator (MLE) and the minimum contrast estimator (MCE) for the drift parameter, and explore their distributional properties by payin...
Nezhel'skaya, L. A.
2016-09-01
A flow of physical events (photons, electrons, and other elementary particles) is studied. One of the mathematical models of such flows is the modulated MAP flow of events operating under conditions of an unextendable dead time period. It is assumed that the dead time period is an unknown fixed value. The problem of estimating the dead time period from observations of the arrival times of events is solved by the method of maximum likelihood.
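A much-simplified illustration of the idea (an ordinary Poisson flow rather than the modulated MAP flow studied above; rate and dead-time values are invented): with a fixed unextendable dead time τ, every observed inter-arrival gap exceeds τ, and the likelihood of shifted-exponential gaps is maximized by taking the smallest observed gap as the estimate of τ.

```python
import numpy as np

# Poisson flow with rate lam and fixed unextendable dead time tau (values invented).
rng = np.random.default_rng(4)
lam, tau = 50.0, 0.004                       # event rate (1/s), dead time (s)
gaps = tau + rng.exponential(1 / lam, 2000)  # shifted-exponential inter-arrival gaps
arrivals = np.cumsum(gaps)                   # observed arrival times

# The MLE of tau for shifted-exponential gaps is the minimum observed gap.
tau_hat = np.diff(arrivals).min()
```

The estimate is biased slightly upward (it always exceeds τ), with the bias shrinking as 1/(λn).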
A.S. Kalwij
2000-01-01
This paper proposes an alternative estimation procedure for a panel data Tobit model with individual specific effects based on taking first differences of the equation of interest. This helps to alleviate the sensitivity of the estimates to a specific parameterization of the individual specific effects and some Monte Carlo evidence is provided in support of this. To allow for arbitrary serial correlation estimation takes place in two steps: Maximum Likelihood is applied to each pair of consec...
Approximation Algorithms for Optimal Decision Trees and Adaptive TSP Problems
Gupta, Anupam; Nagarajan, Viswanath; Ravi, R
2010-01-01
We consider the problem of constructing optimal decision trees: given a collection of tests which can disambiguate between a set of $m$ possible diseases, each test having a cost, and the a-priori likelihood of the patient having any particular disease, what is a good adaptive strategy to perform these tests to minimize the expected cost to identify the disease? We settle the approximability of this problem by giving a tight $O(\\log m)$-approximation algorithm. We also consider a more substantial generalization, the Adaptive TSP problem. Given an underlying metric space, a random subset $S$ of cities is drawn from a known distribution, but $S$ is initially unknown to us--we get information about whether any city is in $S$ only when we visit the city in question. What is a good adaptive way of visiting all the cities in the random subset $S$ while minimizing the expected distance traveled? For this problem, we give the first poly-logarithmic approximation, and show that this algorithm is best possible unless w...
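A common greedy heuristic for the decision-tree problem (related in spirit to, but not the same as, the paper's tight O(log m)-approximation) repeatedly picks the test whose outcome split of the remaining candidate diseases is most balanced per unit cost. A minimal sketch with invented disease/test data:

```python
def greedy_tree_cost(diseases, tests, probs):
    """Expected test cost of a greedily built identification tree.

    diseases: iterable of disease names.
    tests: dict name -> (cost, set of diseases giving a positive outcome).
    probs: dict disease -> prior probability.
    Greedy rule (a heuristic, not the paper's algorithm): choose the test
    with the best cost-to-balance ratio among the remaining candidates.
    """
    def solve(cands):
        if len(cands) <= 1:
            return 0.0
        best = None
        for name, (cost, pos) in tests.items():
            p, n = cands & pos, cands - pos
            if not p or not n:
                continue                 # test distinguishes nothing here
            score = cost / min(len(p), len(n))
            if best is None or score < best[0]:
                best = (score, cost, p, n)
        if best is None:
            return 0.0                   # remaining diseases indistinguishable
        _, cost, p, n = best
        mass = sum(probs[d] for d in cands)
        wp = sum(probs[d] for d in p) / mass
        return cost + wp * solve(p) + (1.0 - wp) * solve(n)
    return solve(set(diseases))

# Four equally likely diseases; two unit-cost binary tests identify any of them.
tests = {"t1": (1.0, {"a", "b"}), "t2": (1.0, {"a", "c"})}
probs = {d: 0.25 for d in "abcd"}
expected_cost = greedy_tree_cost("abcd", tests, probs)
```

Here both tests are always informative, so every disease is identified after exactly two unit-cost tests and the expected cost is 2.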
Concept for estimating mitochondrial DNA haplogroups using a maximum likelihood approach (EMMA)
Röck, Alexander W.; Dür, Arne; van Oven, Mannis; Parson, Walther
2013-01-01
The assignment of haplogroups to mitochondrial DNA haplotypes contributes substantial value for quality control, not only in forensic genetics but also in population and medical genetics. The availability of Phylotree, a widely accepted phylogenetic tree of human mitochondrial DNA lineages, led to the development of several (semi-)automated software solutions for haplogrouping. However, currently existing haplogrouping tools only make use of haplogroup-defining mutations, whereas private muta...
Slob W; Hendriksen CFM
1989-01-01
This report discusses the analysis of dilution series with maximum likelihood, with application to the in vitro serological testing of the efficacy of bacterial vaccines for humans. Computer simulations show that the maximum likelihood method is adequate for the
Complexes of block copolymers in solution: tree approximation
Geurts, Bernard J.; Damme, van Ruud
1989-01-01
We determine the statistical properties of block copolymer complexes in solution. These complexes are assumed to have the topological structure of (i) a tree or of (ii) a line-dressed tree. In case the structure is that of a tree, the system is shown to undergo a gelation transition at sufficiently
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
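The core of such a procedure, maximum-likelihood fitting of threshold, slope, and lapse rate, can be sketched as follows (a standalone logistic psychometric-function fit with invented stimulus levels, not the UML Toolbox API):

```python
import numpy as np
from scipy.optimize import minimize

def psychometric(x, alpha, beta, lam):
    # Logistic psychometric function for a 2AFC task: guess rate 0.5,
    # threshold alpha, slope beta, lapse rate lam.
    return 0.5 + (0.5 - lam) / (1.0 + np.exp(-beta * (x - alpha)))

def neg_loglik(params, x, n_correct, n_trials):
    alpha, beta, lam = params
    p = np.clip(psychometric(x, alpha, beta, lam), 1e-9, 1 - 1e-9)
    # Binomial log-likelihood of correct/incorrect counts at each level.
    return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1 - p))

# Simulated experiment at 9 fixed levels (values invented).
rng = np.random.default_rng(1)
levels = np.linspace(-3, 3, 9)
true_params = (0.0, 2.0, 0.02)
n_trials = np.full(levels.size, 40)
n_correct = rng.binomial(n_trials, psychometric(levels, *true_params))

res = minimize(neg_loglik, x0=(0.5, 1.0, 0.01),
               args=(levels, n_correct, n_trials),
               bounds=[(-3, 3), (0.1, 10), (0.0, 0.1)])
alpha_hat, beta_hat, lam_hat = res.x
```

The UML procedure additionally chooses each stimulus level adaptively to maximize the information gained per trial; the likelihood it maximizes has the same form.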
Directory of Open Access Journals (Sweden)
L. STRAGEVITCH
1997-03-01
Full Text Available The equations of the method based on the maximum likelihood principle have been rewritten in a suitable generalized form to allow the use of any number of implicit constraints in the determination of model parameters from experimental data and from the associated experimental uncertainties. In addition to the use of any number of constraints, this method also allows data, with different numbers of constraints, to be reduced simultaneously. Application of the method is illustrated in the reduction of liquid-liquid equilibrium data of binary, ternary and quaternary systems simultaneously
DEFF Research Database (Denmark)
Silver, Jeremy D; Ritchie, Matthew E; Smyth, Gordon K
2009-01-01
Background correction is an important preprocessing step for microarray data that attempts to adjust the data for the ambient intensity surrounding each feature. The "normexp" method models the observed pixel intensities as the sum of 2 random variables, one normally distributed and the other exponentially distributed, representing background noise and signal, respectively. Using a saddle-point approximation, Ritchie and others (2007) found normexp to be the best background correction method for 2-color microarray data. This article develops the normexp method further by improving the estimation…
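The normal-plus-exponential convolution has a closed-form density, which makes direct maximum likelihood straightforward to sketch (simulated intensities with invented parameters; this is not the limma implementation):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def normexp_negloglik(params, x):
    # Density of S = B + T with B ~ N(mu, sigma^2), T ~ Exp(mean alpha):
    # f(x) = (1/alpha) exp(-(x-mu)/alpha + sigma^2/(2 alpha^2))
    #        * Phi((x - mu - sigma^2/alpha) / sigma)
    mu, log_sigma, log_alpha = params        # log-parametrize the positive scales
    sigma, alpha = np.exp(log_sigma), np.exp(log_alpha)
    z = (x - mu - sigma**2 / alpha) / sigma
    ll = (-np.log(alpha) - (x - mu) / alpha + sigma**2 / (2 * alpha**2)
          + norm.logcdf(z))
    return -ll.sum()

# Simulated pixel intensities (all parameter values invented).
rng = np.random.default_rng(5)
mu, sigma, alpha = 100.0, 10.0, 50.0
x = rng.normal(mu, sigma, 5000) + rng.exponential(alpha, 5000)

res = minimize(normexp_negloglik,
               x0=(x.min(), np.log(x.std() / 2), np.log(x.std())),
               args=(x,), method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-9})
mu_hat = res.x[0]
```

Using `norm.logcdf` keeps the likelihood numerically stable in the tail, one of the practical issues the saddle-point and improved estimators address.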
Directory of Open Access Journals (Sweden)
S. Koteswara Rao
2010-03-01
Full Text Available The maximum likelihood estimator is a suitable algorithm for passive target tracking applications. Nardone, Lindgren and Gong introduced this approach using batch processing. In this paper, the batch processing is converted into sequential processing for real-time applications like passive target tracking using bearings-only measurements. Adaptively, the variance of each measurement is computed and is used along with the measurement in such a way that the effect of false bearings can be reduced. The transmissions made by radar on a target ship are assumed to be intercepted by an electronic warfare (EW) system of own ship. The generated bearings in intercept mode are processed through a maximum likelihood estimator (MLE) to find the target motion parameters. Instead of assuming some arbitrary values, pseudo linear estimator outputs are used for the initialisation of the MLE. The algorithm is tested in Monte Carlo simulation and its results are presented for two typical scenarios. Defence Science Journal, 2010, 60(2), pp. 197-203. DOI: http://dx.doi.org/10.14429/dsj.60.340
Directory of Open Access Journals (Sweden)
James O Lloyd-Smith
Full Text Available BACKGROUND: The negative binomial distribution is used commonly throughout biology as a model for overdispersed count data, with attention focused on the negative binomial dispersion parameter, k. A substantial literature exists on the estimation of k, but most attention has focused on datasets that are not highly overdispersed (i.e., those with k ≥ 1), and the accuracy of confidence intervals estimated for k is typically not explored. METHODOLOGY: This article presents a simulation study exploring the bias, precision, and confidence interval coverage of maximum-likelihood estimates of k from highly overdispersed distributions. In addition to exploring small-sample bias on negative binomial estimates, the study addresses estimation from datasets influenced by two types of event under-counting, and from disease transmission data subject to selection bias for successful outbreaks. CONCLUSIONS: Results show that maximum likelihood estimates of k can be biased upward by small sample size or under-reporting of zero-class events, but are not biased downward by any of the factors considered. Confidence intervals estimated from the asymptotic sampling variance tend to exhibit coverage below the nominal level, with overestimates of k comprising the great majority of coverage errors. Estimation from outbreak datasets does not increase the bias of k estimates, but can add significant upward bias to estimates of the mean. Because k varies inversely with the degree of overdispersion, these findings show that overestimation of the degree of overdispersion is very rare for these datasets.
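A minimal version of such a simulation, maximum-likelihood estimation of k with the mean profiled out, can be sketched as follows (sample size and parameters are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def nb_negloglik_k(k, data):
    # Negative binomial log-likelihood with the mean profiled out:
    # for fixed k, the MLE of the mean is the sample mean.
    mu = data.mean()
    p = k / (k + mu)
    return -np.sum(gammaln(data + k) - gammaln(k) - gammaln(data + 1)
                   + k * np.log(p) + data * np.log1p(-p))

# Highly overdispersed simulated counts: k < 1 (values invented).
rng = np.random.default_rng(0)
k_true, mu = 0.5, 4.0
data = rng.negative_binomial(k_true, k_true / (k_true + mu), size=200)

res = minimize_scalar(nb_negloglik_k, bounds=(1e-3, 50.0),
                      args=(data,), method="bounded")
k_hat = res.x
```

Repeating this over many simulated datasets, and over under-counted variants of them, yields the bias and coverage results the article reports.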
Müller, M. F.; Thompson, S. E.
2015-06-01
We introduce topological restricted maximum likelihood (TopREML) as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross-validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. The ability of TopREML to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable via remote-sensing technology.
Energy Technology Data Exchange (ETDEWEB)
He, Yi; Scheraga, Harold A., E-mail: has5@cornell.edu [Department of Chemistry and Chemical Biology, Cornell University, Ithaca, New York 14853 (United States); Liwo, Adam [Faculty of Chemistry, University of Gdańsk, Wita Stwosza 63, 80-308 Gdańsk (Poland)
2015-12-28
Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much as possible information of the original biomolecular system in all-atom representation but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.
Pestotnik, R.; Križan, P.; Korpar, S.; Iijima, T.
2008-09-01
The use of a sequence of aerogel radiators with different refractive indices in a proximity focusing Cherenkov ring imaging detector has been shown to improve the resolution of the Cherenkov angle. In order to obtain further information on the capabilities of such a detector, a maximum-likelihood analysis has been performed on simulated data, with the simulation being appropriate for the upgraded Belle detector. The results show that by using a sequence of two aerogel layers with different refractive indices, the K/π separation efficiency is improved in the kinematic region above 3 GeV/c. In the low momentum region, the focusing configuration (with n1 and n2 chosen such that the Cherenkov rings from different aerogel layers at 4 GeV/c overlap) shows a better performance than the defocusing one (where the two Cherenkov rings are well separated).
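The likelihood machinery behind such a separation study can be sketched for a single radiator with Gaussian angular resolution (refractive index, momentum, and resolution values below are invented, not the Belle aerogel parameters):

```python
import numpy as np

def cherenkov_angle(p, m, n):
    # Cherenkov angle from cos(theta) = 1/(n*beta), beta = p / sqrt(p^2 + m^2),
    # with p in GeV/c and m in GeV/c^2.
    beta = p / np.hypot(p, m)
    return np.arccos(1.0 / (n * beta))

n_ref, p = 1.05, 4.0                 # assumed refractive index and momentum
m_pi, m_K = 0.1396, 0.4937           # pion and kaon masses (GeV/c^2)
sigma = 0.004                        # assumed per-track angle resolution (rad)
th_pi = cherenkov_angle(p, m_pi, n_ref)
th_K = cherenkov_angle(p, m_K, n_ref)

# Smeared angle measurements for true kaons, classified by the Gaussian
# log likelihood ratio of the kaon vs pion hypotheses.
rng = np.random.default_rng(6)
meas = rng.normal(th_K, sigma, 10_000)
llr = ((meas - th_pi) ** 2 - (meas - th_K) ** 2) / (2 * sigma ** 2)
kaon_eff = np.mean(llr > 0.0)        # fraction correctly identified as kaons
```

A multi-layer analysis extends this by summing per-photon log-likelihood terms over both aerogel rings rather than using a single Gaussian per track.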
Castrillón-Candás, Julio E.
2015-11-10
We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that is dependent on the smoothness of the covariance function. Due to this fast decay of the multi-level covariance matrix coefficients, only a small set is computed, using a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.
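A single-level version of the underlying computation, Gaussian maximum likelihood for the range parameter of an exponential (Matérn with smoothness 1/2) covariance via Cholesky factorization, can be sketched as follows (problem size and parameter values invented):

```python
import numpy as np
from scipy.linalg import cholesky, cho_solve
from scipy.optimize import minimize_scalar

# Irregular 1D observation locations and their pairwise distances.
rng = np.random.default_rng(3)
n = 200
s = np.sort(rng.uniform(0.0, 10.0, n))
D = np.abs(s[:, None] - s[None, :])

def cov(rho):
    # Unit-variance exponential covariance with range rho, plus jitter.
    return np.exp(-D / rho) + 1e-6 * np.eye(n)

# Simulate one Gaussian realization with a known range.
rho_true = 1.5
y = cholesky(cov(rho_true), lower=True) @ rng.standard_normal(n)

def negloglik(rho):
    # 0.5 * y' C^{-1} y + 0.5 * log det C, constants dropped.
    L = cholesky(cov(rho), lower=True)
    alpha = cho_solve((L, True), y)
    return 0.5 * (y @ alpha) + np.log(np.diag(L)).sum()

res = minimize_scalar(negloglik, bounds=(0.1, 10.0), method="bounded")
rho_hat = res.x
```

Each likelihood evaluation costs O(n³) here; the multi-level contrast construction above is precisely what makes much larger n tractable.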
DEFF Research Database (Denmark)
De Carvalho, Elisabeth; Omar, Samir; Slock, Dirk
2013-01-01
We analyze two algorithms that have been introduced previously for Deterministic Maximum Likelihood (DML) blind estimation of multiple FIR channels. The first one is a modification of the Iterative Quadratic ML (IQML) algorithm. IQML gives biased estimates of the channel and performs poorly at low SNR due to noise-induced bias. The IQML cost function can be "denoised" by eliminating the noise contribution: the resulting algorithm, Denoised IQML (DIQML), gives consistent estimates and outperforms IQML. Furthermore, DIQML is asymptotically globally convergent and hence insensitive…, but requires a consistent initialization. We furthermore compare DIQML and PQML to the strategy of alternating minimization w.r.t. symbols and channel for solving DML (AQML). An asymptotic performance analysis, a complexity evaluation and simulation results are also presented. The proposed DIQML and PQML
Kakade, Rohan; Walker, John G.; Phillips, Andrew J.
2016-08-01
Confocal fluorescence microscopy (CFM) is widely used in biological sciences because of its enhanced 3D resolution that allows image sectioning and removal of out-of-focus blur. This is achieved by rejection of the light outside a detection pinhole in a plane confocal with the illuminated object. In this paper, an alternative detection arrangement is examined in which the entire detection/image plane is recorded using an array detector rather than a pinhole detector. Using this recorded data an attempt is then made to recover the object from the whole set of recorded photon array data; in this paper maximum-likelihood estimation has been applied. The recovered object estimates are shown (through computer simulation) to have good resolution, image sectioning and signal-to-noise ratio compared with conventional pinhole CFM images.
Institute of Scientific and Technical Information of China (English)
(author not listed)
2007-01-01
WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html
Institute of Scientific and Technical Information of China (English)
AaziTakpaya; WeiGang
2003-01-01
Blind identification-blind equalization for Finite Impulse Response (FIR) Multiple Input-Multiple Output (MIMO) channels can be reformulated as the problem of blind source separation. It has been shown that blind identification via the decorrelating sub-channels method can recover the input sources. The Blind Identification via Decorrelating Sub-channels (BIDS) algorithm first constructs a set of decorrelators, which decorrelate the output signals of the sub-channels, then estimates the channel matrix using the transfer functions of the decorrelators, and finally recovers the input signal using the estimated channel matrix. In this paper, a new approximation of the input source for FIR-MIMO channels based on the maximum likelihood source separation method is proposed. The proposed method outperforms BIDS in the presence of additive white Gaussian noise.
Woody, Michael S; Lewis, John H; Greenberg, Michael J; Goldman, Yale E; Ostap, E Michael
2016-07-26
We present MEMLET (MATLAB-enabled maximum-likelihood estimation tool), a simple-to-use and powerful program for utilizing maximum-likelihood estimation (MLE) for parameter estimation from data produced by single-molecule and other biophysical experiments. The program is written in MATLAB and includes a graphical user interface, making it simple to integrate into the existing workflows of many users without requiring programming knowledge. We give a comparison of MLE and other fitting techniques (e.g., histograms and cumulative frequency distributions), showing how MLE often outperforms other fitting methods. The program includes a variety of features. 1) MEMLET fits probability density functions (PDFs) for many common distributions (exponential, multiexponential, Gaussian, etc.), as well as user-specified PDFs without the need for binning. 2) It can take into account experimental limits on the size of the shortest or longest detectable event (i.e., instrument "dead time") when fitting to PDFs. The proper modification of the PDFs occurs automatically in the program and greatly increases the accuracy of fitting the rates and relative amplitudes in multicomponent exponential fits. 3) MEMLET offers model testing (i.e., single-exponential versus double-exponential) using the log-likelihood ratio technique, which shows whether additional fitting parameters are statistically justifiable. 4) Global fitting can be used to fit data sets from multiple experiments to a common model. 5) Confidence intervals can be determined via bootstrapping utilizing parallel computation to increase performance. Easy-to-follow tutorials show how these features can be used. This program packages all of these techniques into a simple-to-use and well-documented interface to increase the accessibility of MLE fitting. PMID:27463130
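As an illustration of the dead-time correction in point 2, consider fitting a single-exponential dwell-time distribution when events shorter than the instrument dead time cannot be detected; renormalizing the PDF over the observable range restores an unbiased rate estimate. A hedged sketch (not MEMLET code; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
rate_true, t_min = 2.0, 0.1          # true rate; instrument "dead time"

# Simulate dwell times, then discard events shorter than the dead time
t = rng.exponential(1 / rate_true, 20000)
t = t[t >= t_min]

# Naive MLE ignores the truncation and is biased low
rate_naive = 1 / t.mean()

# Truncation-corrected MLE for the renormalized pdf
# f(t) = rate * exp(-rate * (t - t_min)),  t >= t_min
rate_mle = 1 / (t.mean() - t_min)
```

The closed form follows from the memorylessness of the exponential; for multi-exponential PDFs the corrected likelihood must be maximized numerically, which is what a tool like MEMLET automates.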
Slob W; Hendriksen CFM
1989-01-01
This report discusses the maximum likelihood analysis of dilution series, with application to the in vitro serological testing of the potency of bacterial vaccines for human use. Computer simulations show that the maximum likelihood method is adequate for the sample sizes customary in potency testing. The relationship between antitoxin response and vaccine dilution is well described by a straight line on a double log scale within the usual expe...
Deterministic approximation for the cover time of trees
Feige, Uriel
2009-01-01
We present a deterministic algorithm that given a tree T with n vertices, a starting vertex v and a slackness parameter epsilon > 0, estimates within an additive error of epsilon the cover and return time, namely, the expected time it takes a simple random walk that starts at v to visit all vertices of T and return to v. The running time of our algorithm is polynomial in n/epsilon, and hence remains polynomial in n also for epsilon = 1/n^{O(1)}. We also show how the algorithm can be extended to estimate the expected cover (without return) time on trees.
Directory of Open Access Journals (Sweden)
Salces Judit
2011-08-01
Background: Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software packages have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software package, uses a model-based method. The pairwise comparison procedure implemented in geNorm is simpler but is one of the most extensively used. In the present work a statistical approach based on maximum likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm software. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results: A model including gene and treatment as fixed effects; sample (animal), gene by treatment, gene by sample and treatment by sample interactions as random effects; and heteroskedastic residual variance across gene by treatment levels was selected using goodness of fit and predictive ability criteria among a variety of models. The mean square error obtained under the selected model was used as an indicator of gene expression stability. The genes ranked at the top and bottom by the three approaches were similar; however, notable differences appeared for the best pair of genes selected by each method and for the remaining genes in the rankings. Differences among the expression values of normalized targets for each statistical approach were also found. Conclusions: The optimal statistical properties of maximum likelihood estimation, joined to the flexibility of mixed models, allow for more accurate estimation of the expression stability of genes under many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental conditions.
Tree expectation propagation for ML decoding of LDPC codes over the BEC
Salamanca Mino, Luis; Olmos, P. M.; Murillo-Fuentes, J. J.; Perez-Cruz, F
2013-01-01
We propose a decoding algorithm for LDPC codes that achieves the maximum likelihood (ML) solution over the binary erasure channel (BEC). In this channel, the tree-structured expectation propagation (TEP) decoder improves on the peeling decoder (PD) by processing check nodes of degree one and two. However, it does not achieve the ML solution, as the tree structure of the TEP allows only for approximate inference. In this paper, we provide the procedure to construct the structure needed for exact inference.
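The peeling decoder that the TEP improves on resolves erasures one degree-one check node at a time: whenever a parity check involves exactly one erased bit, that bit is fixed by the check equation. A compact sketch on a (7,4) Hamming code (illustrative only, not the authors' TEP implementation):

```python
import numpy as np

def peel(H, y):
    """BEC peeling decoder: y entries are 0/1 or None (erased)."""
    y = list(y)
    changed = True
    while changed:
        changed = False
        for row in H:                          # scan every parity check
            erased = [j for j in np.flatnonzero(row) if y[j] is None]
            if len(erased) == 1:               # degree-1 check: solve the erasure
                j = erased[0]
                known = [y[k] for k in np.flatnonzero(row) if k != j]
                y[j] = int(sum(known)) % 2
                changed = True
    return y

# Parity-check matrix of the (7,4) Hamming code
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
rx = [1, 0, None, 0, 1, None, 1]   # transmitted [1,0,1,0,1,0,1], two erasures
decoded = peel(H, rx)
```

When the decoder stalls with every remaining check of degree two or more, the PD fails; those stopping sets are exactly what the TEP (and, here, the exact-inference structure) is designed to break.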
Wang, Kezhi
2014-10-01
Bit error rate (BER) and outage probability for amplify-and-forward (AF) relaying systems with two different channel estimation methods, disintegrated channel estimation and cascaded channel estimation, using a pilot-aided maximum likelihood method in slowly fading Rayleigh channels are derived. Based on the BERs, the optimal values of pilot power under the total transmitting power constraints at the source and at the relay are obtained separately. Moreover, the optimal power allocation among the pilot power at the source, the pilot power at the relay, the data power at the source and the data power at the relay is obtained when their total transmitting power is fixed. Numerical results show that the derived BER expressions match the simulation results. They also show that the proposed systems with optimal power allocation outperform conventional systems without power allocation under otherwise identical conditions. In some cases, the gain can be as large as several dB in effective signal-to-noise ratio.
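For a single-link building block of the disintegrated case, the pilot-aided ML channel estimate over a block of known pilots reduces to a least-squares projection onto the pilot sequence. A sketch under assumed parameters (pilot length, SNR, and names are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)   # Rayleigh-fading gain
p = np.exp(1j * np.pi / 4) * np.ones(64)              # known pilot block
snr_lin = 10 ** (20 / 10)                             # 20 dB pilot SNR
n = (rng.normal(size=64) + 1j * rng.normal(size=64)) / np.sqrt(2 * snr_lin)
y = h * p + n                                         # received pilots

# ML estimate of h given known pilots (least squares on the pilot block)
h_hat = np.vdot(p, y) / np.vdot(p, p)
```

Raising the pilot power shrinks the estimator variance, which is the trade-off the paper optimizes against the data power under a total power constraint.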
Loyka, Sergey; Gagnon, Francois
2009-01-01
Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-01-01
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
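Under white Gaussian noise and a known reference waveform, ML delay estimation reduces to maximizing the cross-correlation between the trial and the reference. A toy sketch of that baseline (the joint ML criterion of the paper, which avoids a fixed reference, is more elaborate; all values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, delay_true = 512, 37
template = rng.normal(size=n)               # reference ERP-like waveform

trial = np.zeros(n + 100)
trial[delay_true:delay_true + n] = template
trial += 0.5 * rng.normal(size=trial.size)  # additive measurement noise

# ML delay under white Gaussian noise = arg max of the cross-correlation
xc = np.correlate(trial, template, mode="valid")
delay_hat = int(np.argmax(xc))
```

Aligning each trial by its estimated delay before averaging is what protects the ERP average from the variable-latency smearing described above.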
International Nuclear Information System (INIS)
A theoretical model to describe the photopeak shape function has been developed by introducing an instrument function, which is a convolution of the statistical fluctuation of the charge carriers and the stochastic process of escape of the charge-carrier collection by capture at trapping centers. The photopeak shape function is a convolution of the instrument function and a Poisson probability-density functional representation of a reduced random summing event. The functions have been tested by using three coaxial, high-purity Ge detectors of a conventional type. The parameters were estimated by the maximum-likelihood estimation method. The position indicating the incident photon energy appeared at the centroid of the intrinsic normal distribution. The most probable peak-height position is no more than a "conventional" one, though it is commonly used in spectroscopy. The theory predicts the photopeak shape of many photons by folding an input function of the subject. The theory provides standards for the detector and the detection system. (orig.)
International Nuclear Information System (INIS)
An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide isoproturon were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.
International Nuclear Information System (INIS)
It is argued that the choice between one-sided and two-sided confidence intervals must be made according to a rule fixed prior to, and independently of, the data, and it is shown that such a rule was in principle found by a statistician about half a century ago. The novel problem of unphysical estimates of a parameter in the presence of background is solved within classical statistics by applying this rule together with the principle of maximum likelihood. Optimal confidence intervals are given for the measurement of a bounded magnitude with normal errors, most effective in discriminating a signal next to the bound, and it is shown how to obtain them in any single case for a bounded discrete variable with background, in general and specifically for Poisson and binomial variables, with two examples of application. The upper limit provided by this method, when the data are consistent with no signal, does not decrease as unphysical estimates move far outside the physical region, thereby removing the last claimed support for Bayesian inference in physics. Procedures are given extending the method to several parameters.
Ludwig, Kip A.; Miriani, Rachel M.; Langhals, Nicholas B.; Marzullo, Timothy C.; Kipke, Daryl R.
2011-08-01
Brain-machine interface decoding algorithms need to be predicated on assumptions that are easily met outside of an experimental setting to enable a practical clinical device. Given present technological limitations, there is a need for decoding algorithms which (a) are not dependent upon a large number of neurons for control, (b) are adaptable to alternative sources of neuronal input such as local field potentials (LFPs), and (c) require only marginal training data for daily calibrations. Moreover, practical algorithms must recognize when the user is not intending to generate a control output and eliminate poor training data. In this paper, we introduce and evaluate a Bayesian maximum-likelihood estimation strategy to address the issues of isolating quality training data and self-paced control. Six animal subjects demonstrate that a multiple state classification task, loosely based on the standard center-out task, can be accomplished with fewer than five engaged neurons while requiring less than ten trials for algorithm training. In addition, untrained animals quickly obtained accurate device control, utilizing LFPs as well as neurons in cingulate cortex, two non-traditional neural inputs.
Directory of Open Access Journals (Sweden)
Kaiyu Wang
2014-01-01
This paper presents an efficient all-digital carrier recovery loop (ADCRL) for quadrature phase shift keying (QPSK). The ADCRL combines a classic closed-loop carrier recovery circuit, the all-digital Costas loop (ADCOL), with a feedforward frequency loop, the maximum likelihood frequency estimator (MLFE), so as to make the best use of the advantages of the two types of carrier recovery loops and obtain more robust carrier recovery. Besides, considering that accurate estimation of the frequency offset by the MLFE depends on the linear characteristic of its frequency discriminator (FD), the Coordinate Rotation Digital Computer (CORDIC) algorithm is introduced into the MLFE-based FD to unwrap the phase difference linearly. The frequency offset contained in the unwrapped phase difference is estimated by the MLFE, implemented using only shift and multiply-accumulate units, to help the ADCOL lock quickly and precisely. Joint ModelSim and MATLAB simulations show that the proposed ADCRL is superior to the ADCOL in lock-in time and range. A systematic FPGA-based design procedure for the proposed ADCRL is also presented.
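For an unmodulated (or modulation-stripped) carrier, an ML-type feedforward frequency discriminator reduces to averaging the phase increment between consecutive samples, which is the quantity the CORDIC-based FD unwraps in hardware. A sketch with illustrative values (not the paper's fixed-point implementation):

```python
import numpy as np

rng = np.random.default_rng(4)
N, f_off = 1024, 0.013            # normalized frequency offset (cycles/sample)
k = np.arange(N)
x = np.exp(2j * np.pi * f_off * k)
x += 0.02 * (rng.normal(size=N) + 1j * rng.normal(size=N))  # channel noise

# Average phase increment between consecutive samples: an ML-inspired
# feedforward frequency estimate, unambiguous for |f_off| < 0.5
f_hat = np.angle(np.sum(x[1:] * np.conj(x[:-1]))) / (2 * np.pi)
```

Feeding such an open-loop estimate to a Costas loop shrinks the residual offset the closed loop must pull in, which is what widens the lock-in range.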
Directory of Open Access Journals (Sweden)
M. F. Müller
2015-01-01
We introduce TopREML as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross-validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. TopREML's ability to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable thanks to remote sensing technology.
Directory of Open Access Journals (Sweden)
K. Seshadri Sastry
2013-06-01
This paper presents Adaptive Population Sizing Genetic Algorithm (AGA) assisted Maximum Likelihood (ML) estimation of Orthogonal Frequency Division Multiplexing (OFDM) symbols in the presence of nonlinear distortions. The proposed algorithm is simulated in MATLAB and compared with existing estimation algorithms such as iterative DAR, decision feedback clipping removal, iteration decoder, Genetic Algorithm (GA) assisted ML estimation and theoretical ML estimation. Simulation results show that the proposed AGA assisted ML estimation algorithm outperforms the existing estimation algorithms. Further, the computational complexity of GA assisted ML estimation increases with the number of generations and/or the size of the population; in the proposed AGA assisted ML estimation algorithm the population size is adaptive and depends on the best fitness. Whereas GA assisted ML estimation uses a fixed and sufficiently large population to ensure good performance, the proposed algorithm changes the population size as required, thus reducing its complexity.
Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick
2015-01-01
Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation leading to unreduced gametes as its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism both at the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulated data demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at the individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At the population level, SDR was the predominant mechanism for the 19 parental mandarins. PMID:25894579
International Nuclear Information System (INIS)
In order to study properties of the Maximum Likelihood Estimator (MLE) algorithm for image reconstruction in Positron Emission Tomography (PET), the algorithm is applied to data obtained by the ECAT-III tomograph from a brain phantom. The procedure for subtracting accidental coincidences from the data stream generated by this physical phantom is such that the resultant data are not Poisson distributed. This makes the present investigation different from other investigations based on computer-simulated phantoms. It is shown that the MLE algorithm is robust enough to yield comparatively good images, especially when the phantom is in the periphery of the field of view, even though the underlying assumption of the algorithm is violated. Two transition matrices are utilized. The first uses geometric considerations only. The second is derived by a Monte Carlo simulation which takes into account Compton scattering in the detectors, positron range, and other effects. It is demonstrated that the images obtained from the Monte Carlo matrix are superior in some specific ways. A stopping rule derived earlier, which allows the user to stop the iterative process before the images begin to deteriorate, is tested. Since the rule is based on the Poisson assumption, it does not work well with the presently available data, although it is successful with computer-simulated Poisson data.
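The MLE algorithm referred to here is typically realized as the multiplicative MLEM update, which under the Poisson assumption increases the likelihood at every iteration. A toy sketch with a random transition matrix (the paper's matrices come from detector geometry and Monte Carlo simulation; all values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.uniform(size=(40, 8))      # toy transition matrix: P(detect i | emit j)
A /= A.sum(axis=0)                 # normalize columns (full detection)
x_true = 100 * np.array([5., 1., 0.5, 8., 3., 0.2, 2., 4.])
y = rng.poisson(A @ x_true)        # Poisson-distributed measured counts

x = np.full(8, y.sum() / 8)        # flat initial image
for _ in range(200):
    # MLEM multiplicative update (denominator guarded against zeros)
    x *= A.T @ (y / np.maximum(A @ x, 1e-12))
```

A convenient sanity check is that, for a column-normalized matrix, the update conserves total counts after the first iteration; the stopping-rule question in the abstract concerns when to halt this loop before noise amplification sets in.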
International Nuclear Information System (INIS)
Maximum likelihood estimation (MLE) is presented as a statistical tool to evaluate the contribution of measurement error to any measurement series where the same quantity is measured using different independent methods. The technique was tested against artificial data sets; generated for values of underlying variation in the quantity and measurement error between 0.5 mm and 3 mm. In each case the simulation parameters were determined within 0.1 mm. The technique was applied to analyzing external random positioning errors from positional audit data for 112 pelvic radiotherapy patients. Patient position offsets were measured using portal imaging analysis and external body surface measures. Using MLE to analyze all methods in parallel it was possible to ascertain the measurement error for each method and the underlying positional variation. In the (AP / Lat / SI) directions the standard deviations of the measured patient position errors from portal imaging were (3.3 mm / 2.3 mm / 1.9 mm), arising from underlying variations of (2.7 mm / 1.5 mm / 1.4 mm) and measurement uncertainties of (1.8 mm / 1.8 mm / 1.3 mm), respectively. The measurement errors agree well with published studies. MLE used in this manner could be applied to any study in which the same quantity is measured using independent methods. (paper)
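When exactly two independent methods measure the same quantity, the Gaussian MLE has a simple closed form: the cross-covariance between methods estimates the shared underlying variance, and each method's excess variance estimates its measurement error. A simulated sketch using magnitudes like the paper's AP-direction figures (values and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
theta = rng.normal(0.0, 2.7, n)            # underlying positional variation
x1 = theta + rng.normal(0.0, 1.8, n)       # method 1 (e.g. portal imaging)
x2 = theta + rng.normal(0.0, 1.3, n)       # method 2 (e.g. surface measures)

# Gaussian MLE: the cross-covariance isolates the shared underlying variance
C = np.cov(x1, x2, bias=True)
sigma2_underlying = C[0, 1]
tau2_method1 = C[0, 0] - C[0, 1]
tau2_method2 = C[1, 1] - C[0, 1]
```

With more than two methods the system is over-determined and the decomposition is obtained by maximizing the joint likelihood numerically, which is the setting of the paper.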
A Factor 3/2 Approximation for Generalized Steiner Tree Problem with Distances One and Two
Berman, Piotr; Zelikovsky, Alex
2008-01-01
We design a 3/2 approximation algorithm for the Generalized Steiner Tree problem (GST) in metrics with distances 1 and 2. This is the first polynomial time approximation algorithm for a wide class of non-geometric metric GST instances with approximation factor below 2.
Using trees to compute approximate solutions to ordinary differential equations exactly
Grossman, Robert
1991-01-01
Some recent work is reviewed which relates families of trees to symbolic algorithms for the exact computation of series which approximate solutions of ordinary differential equations. It turns out that the vector space whose basis is the set of finite, rooted trees carries a natural multiplication related to the composition of differential operators, making the space of trees an algebra. This algebraic structure can be exploited to yield a variety of algorithms for manipulating vector fields and the series and algebras they generate.
International Nuclear Information System (INIS)
Maximum-likelihood equations are presented for estimates of the Doppler frequency (speed) and other unknown parameters of signals of laser Doppler anemometers and lidars operating in the one-particle-scattering mode. Shot noise was assumed to be the main interfering factor of the problem. The error correlation matrix was calculated and the Rao-Cramér bounds were determined. The results are confirmed by computer simulation of the Doppler signal and numerical solution of the maximum-likelihood equations for the Doppler frequency. The obtained estimate is unbiased, and its dispersion coincides with the Rao-Cramér bound. (laser applications and other topics in quantum electronics)
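For a single tone in white noise, the ML frequency estimate coincides with the periodogram maximum, and a zero-padded FFT gives a dense grid search for it. A sketch with illustrative sampling parameters (not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
fs, n = 100e6, 4096                        # sample rate (Hz), record length
f_dopp = 12.34e6                           # Doppler shift to recover (Hz)
t = np.arange(n) / fs
sig = np.exp(2j * np.pi * f_dopp * t)
sig += 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))  # shot-noise-like

# ML frequency for a tone in white noise = periodogram maximum;
# 16x zero-padding refines the frequency grid
spec = np.abs(np.fft.fft(sig, 16 * n))
f_hat = np.fft.fftfreq(16 * n, 1 / fs)[np.argmax(spec)]
```

Near the bound, the estimator's variance scales as 1/(SNR·n³) in normalized units, which is the behaviour the Rao-Cramér comparison in the abstract verifies.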
DEFF Research Database (Denmark)
Guerrero Gonzalez, Neil; Zibar, Darko; Yu, Xianbin;
2008-01-01
A maximum likelihood based feedforward RF carrier synchronization scheme is proposed for a coherently detected phase-modulated radio-over-fiber link. Error-free demodulation of a 100 Mbit/s QPSK modulated signal is experimentally demonstrated after 25 km of fiber transmission.
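A standard feedforward (non-decision-aided) ML-type phase estimate for QPSK removes the modulation by raising the samples to the fourth power. A sketch of that building block (the experimental scheme in the abstract is more involved; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
n, phi = 4096, 0.3                      # carrier phase offset (rad)
bits = rng.integers(0, 4, n)
sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # QPSK constellation
rx = sym * np.exp(1j * phi)                          # rotated by the offset
rx += 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Fourth power strips the pi/2-spaced modulation, leaving 4*phi; the minus
# sign compensates the pi/4 constellation rotation. The estimate carries an
# inherent pi/2 ambiguity, resolved in practice by differential coding.
phi_hat = np.angle(-np.sum(rx ** 4)) / 4
```

Because the estimate is computed block-wise from the received samples, there is no feedback loop to acquire, which suits burst-mode radio-over-fiber links.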
Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin
2013-01-01
Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.
Energy Technology Data Exchange (ETDEWEB)
Hogden, J.
1996-11-05
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Evaluating the maximum likelihood method for detecting short-term variability of AGILE γ-ray sources
Bulgarelli, A.; Chen, A. W.; Tavani, M.; Gianotti, F.; Trifoglio, M.; Contessi, T.
2012-04-01
Context. The AGILE space mission (whose instrument is sensitive to the energy ranges 18-60 keV, and 30 MeV-50 GeV) has been operating since 2007. Assessing the statistical significance of the time variability of γ-ray sources above 100 MeV is a primary task of the AGILE data analysis. In particular, it is important to verify the instrument sensitivity in terms of Poisson modeling of the data background, and to determine the post-trial confidence of detections. Aims: The goals of this work are: (i) to evaluate the distributions of the likelihood ratio test for both "empty" fields and regions of the Galactic plane, and (ii) to calculate the probability of false detections over multiple time intervals. Methods: We describe in detail the techniques used to search for short-term variability in the AGILE γ-ray source database. We describe the binned maximum likelihood method used for the analysis of AGILE data, and the numerical simulations that support the characterization of the statistical analysis. We apply our method to both Galactic and extragalactic transients, and provide a few examples. Results: After checking the reliability of the statistical description tested with the real AGILE data, we obtain the distribution of p-values for blind and specific source searches. We apply our results to the determination of the post-trial statistical significance of detections of transient γ-ray sources in terms of pre-trial values. Conclusions: The results of our analysis allow a precise determination of the post-trial significance of γ-ray sources detected by AGILE.
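The binned likelihood ratio test at the core of this analysis can be sketched in a few lines. Everything below is hypothetical: the counts, background expectations and source template are invented, and the free source normalization is fitted by a crude grid search rather than a real optimizer. Under Wilks' theorem the test statistic TS is approximately chi-squared distributed under the null, which is what makes the pre-trial p-values discussed in the abstract computable.

```python
import math

def poisson_loglike(counts, expected):
    # log L = sum_i [n_i ln(mu_i) - mu_i - ln(n_i!)]
    return sum(n * math.log(mu) - mu - math.lgamma(n + 1)
               for n, mu in zip(counts, expected))

def ts_source_vs_background(counts, bkg, src):
    # Null hypothesis: background only.  Alternative: background plus a
    # source template with free normalization a >= 0 (crude grid search).
    ln_l0 = poisson_loglike(counts, bkg)
    ln_l1 = max(poisson_loglike(counts, [b + a * s for b, s in zip(bkg, src)])
                for a in (x / 100.0 for x in range(500)))
    return 2.0 * (ln_l1 - ln_l0)

counts = [12, 30, 9]        # observed counts per bin (hypothetical)
bkg    = [10.0, 20.0, 8.0]  # background model expectation per bin
src    = [1.0, 5.0, 0.5]    # source template per bin
ts = ts_source_vs_background(counts, bkg, src)
# sqrt(ts) is roughly the pre-trial significance in sigma (Wilks).
```

A post-trial significance would then discount TS for the number of independent time intervals and sky positions searched, as the abstract describes.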
Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M.
2016-04-01
Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the imaged object, revealing structural information about the tissue under investigation. In the original CSCT proposals, the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. It also accounts for data acquisition statistics and physics, modeling effects such as the polychromatic energy spectrum and the detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of the MC-GPU code, a Graphical Processing Unit version of the PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with breast computed tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with the ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study of x-ray energy range selection for breast CSCT is also presented.
Papaconstadopoulos, P.; Levesque, I. R.; Maglieri, R.; Seuntjens, J.
2016-02-01
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5 × 0.5 cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
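The MLEM update used by this reconstruction technique has a compact general form. The sketch below applies it to a hypothetical 1-D deblurring problem; the response matrix and source are invented, not the paper's set-up:

```python
def mlem(measured, kernel, n_iter=200):
    # MLEM multiplicative update: s_j <- (s_j / sens_j) * sum_i k_ij * m_i / (K s)_i
    m_len, n = len(measured), len(kernel[0])
    s = [1.0] * n  # flat, strictly positive start
    sens = [sum(kernel[i][j] for i in range(m_len)) for j in range(n)]
    for _ in range(n_iter):
        proj = [sum(kernel[i][j] * s[j] for j in range(n)) for i in range(m_len)]
        ratio = [m / p if p > 0 else 0.0 for m, p in zip(measured, proj)]
        s = [s[j] / sens[j] * sum(kernel[i][j] * ratio[i] for i in range(m_len))
             for j in range(n)]
    return s

# Hypothetical forward model: a 3-pixel source blurred by a response matrix K.
K = [[0.8, 0.2, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.2, 0.8]]
true = [0.0, 10.0, 0.0]
measured = [sum(K[i][j] * true[j] for j in range(3)) for i in range(3)]  # noiseless
est = mlem(measured, K)
# The iteration stays positive and conserves total counts; with noiseless
# data it sharpens the flat start toward the true source.
```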
Directory of Open Access Journals (Sweden)
Kaarina Matilainen
Full Text Available Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.
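The abstract's key point, that a Newton-type iteration converges in few steps and yields standard errors from the inverse information matrix as a by-product, can be illustrated on the simplest possible variance problem. This is a toy stand-in for MC REML, not the authors' algorithm: one variance parameter, exact (non-Monte-Carlo) score and observed information.

```python
import math, random

def newton_variance_mle(y, s0=1.0, tol=1e-10):
    # Newton-Raphson for the MLE of s = sigma^2 in y_i ~ N(0, s).
    # The observed information evaluated at the optimum gives the
    # standard error as a by-product, as in Newton-type REML methods.
    n, ss = len(y), sum(v * v for v in y)
    s = s0
    for _ in range(100):
        score = -n / (2 * s) + ss / (2 * s * s)
        info = ss / s ** 3 - n / (2 * s * s)  # observed information (positive here)
        step = score / info
        s += step
        if abs(step) < tol:
            break
    return s, math.sqrt(1 / info)

random.seed(1)
y = [random.gauss(0.0, 2.0) for _ in range(500)]
est, se = newton_variance_mle(y)
# est matches the closed-form MLE sum(y_i^2)/n after a handful of
# Newton steps; se is approximately est * sqrt(2/n).
```

An EM-style fixed-point iteration on the same problem would crawl toward the optimum linearly, which is exactly the contrast the abstract draws.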
International Nuclear Information System (INIS)
131I is a short-lived radionuclide (T1/2 = 8.04 d) which decays by beta emission, producing significant yields of photons of energies 0.364 MeV (82%) and 0.637 MeV (7%). In the present investigation, follow-up measurements were made by an in-vivo monitoring technique for an individual who was internally contaminated with 131I through injection. The measurements were carried out on seven different occasions to find the retention profile of thyroid activity. The intake was estimated using the maximum likelihood method (MLM), assuming the measurement data to be log-normally distributed (LND). The CF of the WBM estimated from ANIP was 453.48 cpm/kBq at a distance of 20 cm. The intake computed from the MLM was 89.3 kBq for the injection route, and the committed effective dose was 1.96 mSv. The autocorrelation and chi-square tests were performed with a scattering factor value of 1.2, as suggested by the ICRP. The z value of the autocorrelation test was 0.1304, which corresponds to a P value of 0.449. The chi-square value obtained was 1.93, which corresponds to a P value of 0.85; this is more than the chosen level of significance (0.05), implying adequacy of the fit and confirming that the measurement data follow a LND. The scattering factor of the log-normal distribution can be estimated from follow-up measurements of such accidental scenarios. The paper presents the follow-up monitoring data of the thyroidal burden of an individual and gives a methodology for estimating the normalized intake/CED using the MLM, tested by statistical parameters. The measured retained thyroidal data on different days following the intake fitted closely to the ICRP predicted retained activity.
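With a common scattering factor, the maximum likelihood intake under the log-normal assumption reduces to the geometric mean of the per-measurement intake estimates M_i/m(t_i). The sketch below uses invented follow-up numbers, not the paper's data:

```python
import math

def ml_intake_lognormal(measurements, retention, sf=1.2):
    # measurements: measured activities M_i (kBq) on the follow-up days.
    # retention:    predicted retention fractions m(t_i) per unit intake.
    # With one common scattering factor sf, the ML solution is the geometric
    # mean of the per-measurement intakes M_i / m(t_i); the chi-square below
    # is the goodness-of-fit statistic against the fitted intake.
    logs = [math.log(m / r) for m, r in zip(measurements, retention)]
    intake = math.exp(sum(logs) / len(logs))
    chi2 = sum(((lg - math.log(intake)) / math.log(sf)) ** 2 for lg in logs)
    return intake, chi2

M = [27.0, 24.5, 20.1, 16.9]   # hypothetical thyroid burdens (kBq)
m = [0.30, 0.27, 0.22, 0.19]   # hypothetical ICRP-style retention fractions
intake, chi2 = ml_intake_lognormal(M, m)
# A small chi-square relative to its degrees of freedom indicates the
# log-normal model fits, as in the abstract's adequacy-of-fit test.
```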
DEFF Research Database (Denmark)
Schmidt, Jesper Hvass; Brandt, Christian; Pedersen, Ellen Raben;
2014-01-01
Objective: To create a user-operated pure-tone audiometry method based on the method of maximum likelihood (MML) and the two-alternative forced-choice (2AFC) paradigm with high test-retest reliability without the need of an external operator and with minimal influence of subjects' fluctuating...
Directory of Open Access Journals (Sweden)
Thanh-Son Tran
2012-05-01
Full Text Available Abstract Background The serovars Enteritidis and Typhimurium of the Gram-negative bacterium Salmonella enterica are significant causes of human food poisoning. Fowl carrying these bacteria often show no clinical disease, with detection only established post-mortem. Increased resistance to the carrier state in commercial poultry could be a way to improve food safety by reducing the spread of these bacteria in poultry flocks. Previous studies identified QTLs for both resistance to carrier state and resistance to Salmonella colonization in the same White Leghorn inbred lines. Until now, none of the QTLs identified was common to the two types of resistance. All these analyses were performed using the F2 inbred or backcross option of the QTLExpress software based on linear regression. In the present study, QTL analysis was achieved using Maximum Likelihood with QTLMap software, in order to test the effect of the QTL analysis method on QTL detection. We analyzed the same phenotypic and genotypic data as those used in previous studies, which were collected on 378 animals genotyped with 480 genome-wide SNP markers. To enrich these data, we added eleven SNP markers located within QTLs controlling resistance to colonization and we looked for potential candidate genes co-localizing with QTLs. Results In our case the QTL analysis method had an important impact on QTL detection. We were able to identify new genomic regions controlling resistance to carrier-state, in particular by testing the existence of two segregating QTLs. But some of the previously identified QTLs were not confirmed. Interestingly, two QTLs were detected on chromosomes 2 and 3, close to the locations of the major QTLs controlling resistance to colonization and to candidate genes involved in the immune response identified in other, independent studies. Conclusions Due to the lack of stability of the QTLs detected, we suggest that interesting regions for further studies are those that were
Approximation Algorithms for Optimization Problems in Graphs with Superlogarithmic Treewidth
DEFF Research Database (Denmark)
Czumaj, Artur; Halldorsson, Marcús Mar; Lingas, Andrzej;
2005-01-01
We present a generic scheme for approximating NP-hard problems on graphs of treewidth k = ω(log n). When a tree-decomposition of width ℓ is given, the scheme typically yields an ℓ/log n-approximation factor; otherwise, an extra log k factor is incurred. Our method applies to several basic subgraph a...
Approximation Algorithm for Bottleneck Steiner Tree Problem in the Euclidean Plane
Institute of Scientific and Technical Information of China (English)
Zi-Mao Li; Da-Ming Zhu; Shao-Han Ma
2004-01-01
A special case of the bottleneck Steiner tree problem in the Euclidean plane was considered in this paper. The problem has applications in the design of wireless communication networks, multifacility location, VLSI routing and network routing. For the special case, which requires that there be no edge connecting any two Steiner points in the optimal solution, a 3-restricted Steiner tree can be found, indicating the existence of the performance ratio √2. In this paper, the special case of the problem is proved to be NP-hard and cannot be approximated within ratio √2. First, a simple polynomial time approximation algorithm with performance ratio √3 is presented. Then, based on this algorithm and the existence of the 3-restricted Steiner tree, a polynomial time approximation algorithm with performance ratio √2 + ε is proposed, for any ε > 0.
Sue, M. K.
1981-01-01
Models to characterize the behavior of the Deep Space Network (DSN) Receiving System in the presence of a radio frequency interference (RFI) are considered. A simple method to evaluate the telemetry degradation due to the presence of a CW RFI near the carrier frequency for the DSN Block 4 Receiving System using the maximum likelihood convolutional decoding assembly is presented. Analytical and experimental results are given.
Farrar, G R; Abu-Zayyad, T; Amann, J F; Archbold, G; Atkins, R; Bellido, J A; Belov, K; Belz, J W; Ben-Zvi, S Y; Bergman, D R; Boyer, J H; Burt, G W; Cao, Z; Clay, R W; Connolly, B M; Dawson, B R; Deng, W; Fedorova, Y; Findlay, J; Finley, C B; Hanlon, W F; Hoffman, C M; Holzscheiter, M H; Hughes, G A; Jui, C C H; Kim, K; Kirn, M A; Knapp, B C; Loh, E C; Maestas, M M; Manago, N; Mannel, E J; Marek, L J; Martens, K; Matthews, J A J; Matthews, J N; O'Neill, A; Painter, C A; Perera, L P; Reil, K; Riehle, R; Roberts, M D; Sasaki, M; Schnetzer, S R; Seman, M; Simpson, K M; Sinnis, G; Smith, J D; Snow, R; Sokolsky, P; Song, C; Springer, R W; Stokes, B T; Thomas, J R; Thomas, S B; Thomson, G B; Tupa, D; Westerhoff, S; Wiencke, L R; Zech, A
2004-01-01
We present the results of a search for cosmic ray point sources at energies above 40 EeV in the combined data sets recorded by the AGASA and HiRes stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
Approximate K-Nearest Neighbour Based Spatial Clustering Using K-D Tree
Directory of Open Access Journals (Sweden)
Mohammed Otair
2013-03-01
Full Text Available Different spatial objects that vary in their characteristics, such as in molecular biology and geography, are presented in spatial areas. Methods to organize, manage, and maintain those objects in a structured manner are required. Data mining provides different techniques to meet these requirements. There are many major tasks of data mining, but the most widely used is clustering. Data within the same cluster share common features that give each cluster its characteristics. In this paper, an implementation of an approximate kNN-based spatial clustering algorithm using the k-d tree is proposed. The major contribution of this research is the use of the k-d tree data structure for spatial clustering, and a comparison of its performance to the brute-force approach. The results of the work performed in this paper revealed better performance using the k-d tree, compared to the traditional brute-force approach.
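A minimal k-d tree nearest-neighbour query (here the exact variant; the paper's algorithm is an approximate kNN built on the same structure) can be compared against brute force in pure Python:

```python
import random

def build_kdtree(points, depth=0):
    # Recursively split 2-d points on alternating coordinates.
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid],
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def nearest(node, target, depth=0, best=None):
    if node is None:
        return best
    if best is None or dist2(node["point"], target) < dist2(best, target):
        best = node["point"]
    axis = depth % 2
    diff = target[axis] - node["point"][axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, target, depth + 1, best)
    # Descend the far side only if the splitting plane is closer than best.
    if diff * diff < dist2(best, target):
        best = nearest(far, target, depth + 1, best)
    return best

random.seed(7)
pts = [(random.random(), random.random()) for _ in range(500)]
tree = build_kdtree(pts)
q = (0.5, 0.5)
kd = nearest(tree, q)
brute = min(pts, key=lambda p: dist2(p, q))
# kd and brute agree here: the tree prunes subtrees without losing exactness.
```

The speed-up the paper reports comes from the pruning step: most subtrees are never visited, whereas brute force scans every point.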
Beyond the locally tree-like approximation for percolation on real networks
Radicchi, Filippo
2016-01-01
Theoretical attempts proposed so far to describe ordinary percolation processes on real-world networks rely on the locally tree-like ansatz. Such an approximation, however, holds only to a limited extent, as real graphs are often characterized by high frequencies of short loops. We present here a theoretical framework able to overcome such a limitation for the case of site percolation. Our method is based on a message passing algorithm that discounts redundant paths along triangles in the graph. We systematically test the approach on 98 real-world graphs and on synthetic networks. We find excellent accuracy in the prediction of the whole percolation diagram, with significant improvement with respect to the prediction obtained under the locally tree-like approximation. Residual discrepancies between theory and simulations do not depend on clustering and can be attributed to the presence of loops longer than three edges. We present also a method to account for clustering in bond percolation, but the improvement...
MIMO Detection for High-Order QAM Based on a Gaussian Tree Approximation
Goldberger, Jacob; Leshem, Amir
2010-01-01
This paper proposes a new detection algorithm for MIMO communication systems employing high order QAM constellations. The factor graph that corresponds to this problem is very loopy; in fact, it is a complete graph. Hence, a straightforward application of the Belief Propagation (BP) algorithm yields very poor results. Our algorithm is based on an optimal tree approximation of the Gaussian density of the unconstrained linear system. The finite-set constraint is then applied to obtain a loop-fr...
Ipsen, Andreas; Ebbels, Timothy M D
2014-10-01
In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time, tasks that may be best addressed through engineering efforts.
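The essence of a TDC saturation correction can be illustrated with a deliberately simplified model (not the paper's full distribution): if ion arrivals per extraction are Poisson and the TDC registers at most one hit per pulse, inverting the per-pulse detection probability recovers the true rate.

```python
import math, random

def tdc_saturation_correct(k, n_pulses):
    # Invert p = 1 - exp(-lam): the ML rate estimate from k hits in n pulses.
    p = k / n_pulses
    if p >= 1.0:
        raise ValueError("fully saturated: rate not recoverable")
    return -math.log(1.0 - p)

random.seed(3)
lam_true = 1.5                 # mean ions per pulse: heavily saturated regime
n = 100000
p_detect = 1.0 - math.exp(-lam_true)
k = sum(1 for _ in range(n) if random.random() < p_detect)
lam_hat = tdc_saturation_correct(k, n)
# The naive estimate k/n sits near 0.78; the corrected one near 1.5.
```

As p approaches 1 the correction blows up, which is the dynamic-range limit the article's more detailed statistical treatment is designed to push back.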
APPROXIMATION OF VOLUME AND BRANCH SIZE DISTRIBUTION OF TREES FROM LASER SCANNER DATA
Directory of Open Access Journals (Sweden)
P. Raumonen
2012-09-01
Full Text Available This paper presents an approach for automatically approximating the above-ground volume and branch size distribution of trees from dense terrestrial-laser-scanner-produced point clouds. The approach is based on the assumption that the point cloud is a sample of a surface in 3D space and the surface is locally like a cylinder. The point cloud is covered with small neighborhoods which conform to the surface. Then the neighborhoods are characterized geometrically and these characterizations are used to classify the points into trunk, branch, and other points. Finally, proper subsets are determined for cylinder fitting using geometric characterizations of the subsets.
DEFF Research Database (Denmark)
Borg, Søren; Persson, U.; Jess, T.;
2010-01-01
... Hospital, Copenhagen, Denmark, during 1991 to 1993. The data were aggregated over calendar years; for each year, the number of relapses and the number of surgical operations were recorded. Our aim was to estimate Markov models for disease activity in CD and UC, in terms of relapse and remission, with a cycle length of 1 month. The purpose of these models was to enable evaluation of interventions that would shorten relapses or postpone future relapses. An exact maximum likelihood estimator was developed that disaggregates the yearly observations into monthly transition probabilities between remission... observed data and has good face validity. The disease activity model is less suitable for UC due to its transient nature through the presence of curative surgery...
International Nuclear Information System (INIS)
An electromagnetic sampling calorimeter is under construction at IPNE Bucharest for the determination of the energy of cosmic ray muons in the TeV range, consisting of lead absorber layers (1 cm thick) alternating with scintillator layers (3 cm thick). The possibility of estimating the energy of high-energy cosmic muons is scrutinized using GEANT simulations of the response of the detector (30 layers) to incident energies in the range 1-30 TeV. A Maximum Likelihood Method analysis is presented as a procedure to determine the muon energy, applied both to the detector response to muons of discrete energy and to muons distributed according to the cosmic ray spectrum. (author) 17 Figs., 2 Tabs., 15 Refs
A conceptual approach to approximate tree root architecture in infinite slope models
Schmaltz, Elmar; Glade, Thomas
2016-04-01
Vegetation-related properties - particularly tree root distribution and coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear to be difficult to reproduce reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties, such as species and age, and extrinsic factors, like topography, availability of nutrients, climate and soil type. These parameters control four main issues of the tree root architecture: 1) type of rooting; 2) maximum growing distance to the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems and the connected hydrological and mechanical attributes sufficiently in a 3-dimensional slope stability model. Hereby, a spatio-dynamic vegetation module should cope with the demands of performance, computation time and significance. However, in this presentation, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that is dependent on age and species. We classified three main types of tree root systems and reproduced the species-age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope ambience. Thus, two solids in a Euclidean space were distinguished to represent the three root systems: i) cylinders with radius r and height h, whilst the dimensions of the latter define the shape of a taproot system or a shallow-root system respectively; ii) elliptic
Directory of Open Access Journals (Sweden)
Sabitha Gauni
2014-03-01
Full Text Available In the field of wireless communication, there is always a demand for reliability, improved range and speed. Many wireless networks, such as OFDM, CDMA2000 and WCDMA, provide a solution to this problem when incorporated with multiple-input multiple-output (MIMO) technology. Due to the complexity of its signal processing, MIMO is highly expensive in terms of area consumption. In this paper, a method of MIMO receiver design is proposed to reduce the area consumed by the processing elements involved in complex signal processing, and a solution for area reduction in the MIMO maximum likelihood estimation (MLE) receiver using sorted QR decomposition (SQRD) and the unitary transformation method is analyzed. It provides a unified approach, reduces ISI and gives better performance at low cost. The receiver pre-processor architecture based on minimum mean square error (MMSE) is compared while using iterative SQRD and the unitary transformation method for vectoring. Unitary transformations are transformations of matrices which maintain the Hermitian nature of the matrix, and the multiplication and addition relationships between the operators. This helps to reduce the computational complexity significantly. The dynamic range of all variables is tightly bound and the algorithm is well suited for fixed point arithmetic.
Jarmołowski, Wojciech; Łukasiak, Jacek
2015-12-01
The work investigates the spatial correlation of the data collected along orbital tracks of the Mars Orbiter Laser Altimeter (MOLA), with a special focus on the noise variance problem in the covariance matrix. The problem of different correlation parameters in along-track and cross-track directions of orbital or profile data is still under discussion in relation to Least Squares Collocation (LSC). Different spacing in along-track and transverse directions and the anisotropy problem are frequently considered in the context of this kind of data. Therefore the problem is analyzed in this work using MOLA data samples. The analysis in this paper is focused on a priori errors that correspond to the white noise present in the data and is performed by maximum likelihood (ML) estimation in two perpendicular directions. Additionally, correlation lengths of the assumed planar covariance model are determined by ML and by fitting it to the empirical covariance function (ECF). All estimates considered together confirm the substantial influence of different data resolution in along-track and transverse directions on the covariance parameters.
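The core computation, maximizing a Gaussian log-likelihood over the noise variance and correlation length of a planar (here exponential) covariance model, can be sketched as follows. The grid search, track geometry and parameter values are illustrative stand-ins, not the MOLA analysis itself:

```python
import math, random

def chol(a):
    # Cholesky factor L (lower triangular) of a symmetric PD matrix.
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            l[i][j] = math.sqrt(a[i][i] - s) if i == j else (a[i][j] - s) / l[j][j]
    return l

def gauss_loglike(y, cov):
    # log N(y | 0, cov), evaluated via the Cholesky factor.
    n = len(y)
    l = chol(cov)
    z = []
    for i in range(n):  # forward substitution: L z = y
        z.append((y[i] - sum(l[i][k] * z[k] for k in range(i))) / l[i][i])
    logdet = 2 * sum(math.log(l[i][i]) for i in range(n))
    return -0.5 * (n * math.log(2 * math.pi) + logdet + sum(v * v for v in z))

def exp_cov(x, sv, nv, corr_len):
    # Exponential covariance plus a white-noise "nugget" on the diagonal.
    return [[sv * math.exp(-abs(xi - xj) / corr_len) + (nv if i == j else 0.0)
             for j, xj in enumerate(x)] for i, xi in enumerate(x)]

def ml_grid(x, y):
    # ML over a small grid of noise variance and correlation length.
    best = None
    for nv in (0.01, 0.05, 0.1, 0.2, 0.4):
        for cl in (0.5, 1.0, 2.0, 4.0):
            ll = gauss_loglike(y, exp_cov(x, 1.0, nv, cl))
            if best is None or ll > best[0]:
                best = (ll, nv, cl)
    return best

# Simulate one short "track" with known parameters, then recover them by ML.
random.seed(5)
x = [0.25 * i for i in range(30)]
lt = chol(exp_cov(x, 1.0, 0.1, 2.0))
z = [random.gauss(0, 1) for _ in range(30)]
y = [sum(lt[i][k] * z[k] for k in range(i + 1)) for i in range(30)]
ll, nv_hat, cl_hat = ml_grid(x, y)
```

Running the same estimator separately on along-track and cross-track samples, as the paper does, then exposes any difference in the fitted correlation lengths.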
Huang, Jinxin; Hindman, Holly B; Rolland, Jannick P
2016-05-01
Dry eye disease (DED) is a common ophthalmic condition that is characterized by tear film instability and leads to ocular surface discomfort and visual disturbance. Advancements in the understanding and management of this condition have been limited by the difficulty of studying the tear film, owing to its thin structure and dynamic nature. Here, we report a technique to simultaneously estimate the thickness of both the lipid and aqueous layers of the tear film in vivo using optical coherence tomography and maximum-likelihood estimation. After a blink, the lipid layer rapidly thickened at an average rate of 10 nm/s over the first 2.5 s before stabilizing, whereas the aqueous layer continued thinning at an average rate of 0.29 μm/s throughout the 10 s blink cycle. Further development of this tear film imaging technique may allow for the elucidation of events that trigger tear film instability in DED. PMID:27128054
Lai, Jonathan Y.
1994-01-01
This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are in the presence of strong clutter returns. In light of the frequent inadequacy of spectral-processing oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise, and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low SNR threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the Least Mean Square (LMS) -based adaptive filter's performance for the proposed signal model is investigated, and promising simulation results have testified to its potential for clutter rejection leading to more accurate estimation of windspeed thus obtaining a better assessment of the windshear hazard.
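An LMS adaptive canceller of the kind investigated here fits in a few lines. The clutter, weak return and reference below are synthetic sinusoid-plus-noise stand-ins, not radar data:

```python
import math, random

def lms_cancel(signal, reference, mu=0.05, taps=4):
    # LMS adaptive noise canceller: `signal` carries desired + interference,
    # `reference` is correlated with the interference only.  The filter
    # learns to predict the interference; the error output is the cleaned
    # signal.
    w = [0.0] * taps
    buf = [0.0] * taps
    out = []
    for d, r in zip(signal, reference):
        buf = [r] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = d - y                                  # clutter-suppressed output
        w = [wi + 2 * mu * e * bi for wi, bi in zip(w, buf)]
        out.append(e)
    return out

random.seed(2)
n = 2000
clutter = [math.sin(0.3 * i) for i in range(n)]          # strong sinusoidal clutter
weak = [0.05 * random.gauss(0, 1) for _ in range(n)]     # weak "weather" return
signal = [c + s for c, s in zip(clutter, weak)]
ref = [math.sin(0.3 * i + 0.7) for i in range(n)]        # phase-shifted reference
clean = lms_cancel(signal, ref)
# After adaptation, the residual power approaches that of the weak component.
```

Because the filter tracks the clutter in the time domain, it can suppress interference that a fixed spectral notch would miss, which is the dissertation's motivation for moving beyond spectral-processing clutter suppression.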
Gatherer, D.
2007-01-01
TreeAdder is a computer application that adds a leaf in all possible positions on a phylogenetic tree. The resulting set of trees represents a dataset appropriate for maximum likelihood calculation of the optimal tree. TreeAdder therefore provides a utility for what was previously a tedious and error-prone process.
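The operation TreeAdder automates, attaching a new leaf on every branch of a tree, is easy to state recursively. The sketch below uses nested tuples for rooted binary trees and is a toy re-implementation of the idea, not TreeAdder's code:

```python
def add_leaf(tree, leaf):
    # Yield every tree formed by attaching `leaf` on each branch,
    # including the branch above the current (sub)tree.
    yield (tree, leaf)
    if isinstance(tree, tuple):
        left, right = tree
        for t in add_leaf(left, leaf):
            yield (t, right)
        for t in add_leaf(right, leaf):
            yield (left, t)

base = (("A", "B"), "C")          # rooted tree with three leaves
trees = list(add_leaf(base, "D"))
# Five branches in `base`, so five candidate topologies for leaf "D".
```

Each candidate topology would then be scored by a maximum likelihood program, with the best-scoring placement retained.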
Olivares, G.; Teferle, F. N.
2013-12-01
Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Markov chain Monte Carlo (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters and, thereby, using Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference data base of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE; for example, without further computations it provides the spectral index uncertainty, is computationally stable and detects multimodality.
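A minimal Metropolis sampler shows the mechanics the abstract describes: posterior samples deliver parameter estimates and uncertainties in one pass. The model here is the simplest "offset plus white noise" case, far simpler than power-law GPS noise, and all numbers are synthetic:

```python
import math, random

def log_post(mu, log_sigma, y):
    # Log-posterior with flat priors: Gaussian likelihood only.
    s = math.exp(log_sigma)
    return -len(y) * log_sigma - sum((v - mu) ** 2 for v in y) / (2 * s * s)

def metropolis(y, n_samples=12000, step=0.05):
    # Random-walk Metropolis over (mu, log sigma).
    random.seed(11)
    mu, ls = 0.0, 0.0
    cur = log_post(mu, ls, y)
    out = []
    for _ in range(n_samples):
        mu_p = mu + random.gauss(0, step)
        ls_p = ls + random.gauss(0, step)
        prop = log_post(mu_p, ls_p, y)
        if random.random() < math.exp(min(0.0, prop - cur)):
            mu, ls, cur = mu_p, ls_p, prop
        out.append((mu, math.exp(ls)))
    return out[n_samples // 2:]  # discard the first half as burn-in

random.seed(4)
y = [1.0 + random.gauss(0, 0.5) for _ in range(100)]
samples = metropolis(y)
mu_hat = sum(s[0] for s in samples) / len(samples)
sigma_hat = sum(s[1] for s in samples) / len(samples)
# Posterior means approximate the MLE, and the spread of `samples` gives
# the uncertainties directly, without extra computation.
```

The abstract's step-size auto-tuning and spectral convergence diagnostics would replace the fixed `step` and fixed burn-in used here.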
Chen, Chi-Farn; Son, Nguyen-Thanh; Chen, Cheng-Ru; Chang, Ly-Yu
2011-01-01
Rice is the most important economic crop in Vietnam's Mekong Delta (MD). It is the main source of employment and income for rural people in this region. Yearly estimates of rice growing areas and delineation of spatial distribution of rice crops are needed to devise agricultural economic plans and to ensure security of the food supply. The main objective of this study is to map rice cropping systems with respect to monitoring agricultural practices in the MD using time-series moderate resolution imaging spectroradiometer (MODIS) normalized difference vegetation index (NDVI) 250-m data. These time-series NDVI data were derived from the 8-day MODIS 250-m data acquired in 2008. Various spatial and nonspatial data were also used for accuracy verification. The method used in this study consists of the following three main steps: 1. filtering noise from the time-series NDVI data using wavelet transformation (Coiflet 4); 2. classification of rice cropping systems using parametric and nonparametric classification algorithms: the maximum likelihood classifier (MLC) and support vector machines (SVMs); and 3. verification of classification results using ground truth data and government rice statistics. Good results can be found using wavelet transformation for cleaning rice signatures. The results of classification accuracy assessment showed that the SVMs outperformed the MLC. The overall accuracy and Kappa coefficient achieved by the SVMs were 89.7% and 0.86, respectively, while those achieved by the MLC were 76.2% and 0.68, respectively. Comparison of the MODIS-derived areas obtained by the SVMs with the government rice statistics at the provincial level also demonstrated that the results achieved by the SVMs (R2 = 0.95) were better than the MLC (R2 = 0.91). This study demonstrates the effectiveness of using a nonparametric classification algorithm (SVMs) and time-series MODIS NVDI data for rice crop mapping in the Vietnamese MD.
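The baseline maximum likelihood classifier (MLC) in this comparison assigns each pixel to the class whose Gaussian density gives the highest likelihood. A sketch with invented NDVI-profile classes follows (diagonal covariance for brevity; the SVM side would need an optimization library and is omitted):

```python
import math, random

def train_mlc(samples):
    # Fit per-class Gaussian densities with diagonal covariance.
    model = {}
    for cls, vecs in samples.items():
        dims = list(zip(*vecs))
        means = [sum(d) / len(d) for d in dims]
        varis = [max(sum((v - m) ** 2 for v in d) / len(d), 1e-6)
                 for d, m in zip(dims, means)]
        model[cls] = (means, varis)
    return model

def classify(model, x):
    # Assign x to the class with the highest Gaussian log-likelihood.
    def loglike(means, varis):
        return sum(-0.5 * (math.log(2 * math.pi * s) + (xi - m) ** 2 / s)
                   for xi, m, s in zip(x, means, varis))
    return max(model, key=lambda c: loglike(*model[c]))

random.seed(9)
def ndvi_profile(base):
    # Hypothetical 4-date NDVI profile with small observation noise.
    return [b + random.gauss(0, 0.03) for b in base]

rice  = [0.3, 0.6, 0.8, 0.5]      # invented greening curve, not MODIS data
water = [0.05, 0.05, 0.06, 0.05]
train = {"rice":  [ndvi_profile(rice) for _ in range(50)],
         "water": [ndvi_profile(water) for _ in range(50)]}
model = train_mlc(train)
label = classify(model, ndvi_profile(rice))
```

A non-parametric classifier such as an SVM drops the Gaussian assumption entirely, which is why the paper finds it more robust for classes whose signatures overlap.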
Sasaki, Tomohiko; Kondo, Osamu
2016-09-01
Recent theoretical progress potentially refutes past claims that paleodemographic estimations are flawed by statistical problems, including age mimicry and sample bias due to differential preservation. The life expectancy at age 15 of the Jomon period prehistoric populace in Japan was initially estimated to have been ∼16 years, while a more recent analysis suggested 31.5 years. In this study, we provide alternative results based on a new methodology. The material comprises 234 mandibular canines from Jomon period skeletal remains and a reference sample of 363 mandibular canines of recent-modern Japanese. Dental pulp reduction is used as the age indicator, which because of tooth durability is presumed to minimize the effect of differential preservation. Maximum likelihood estimation, which theoretically avoids age mimicry, was applied. Our methods also adjusted for the known pulp volume reduction rate among recent-modern Japanese to provide a better fit for observations in the Jomon period sample. Without adjustment for the known rate in pulp volume reduction, estimates of Jomon life expectancy at age 15 were dubiously long. However, when the rate was adjusted, the estimate results in a value that falls within the range of modern hunter-gatherers, with a significantly better fit to the observations. The rate-adjusted result of 32.2 years more likely represents the true life expectancy of the Jomon people at age 15 than the result without adjustment. Considering the ∼7% rate of antemortem loss of the mandibular canine observed in our Jomon period sample, actual life expectancy at age 15 may have been as high as ∼35.3 years.
Karan, Shivesh Kishore; Samadder, Sukha Ranjan
2016-08-01
One objective of the present study was to evaluate the performance of the support vector machine (SVM)-based image classification technique against the maximum likelihood classification (MLC) technique for the rapidly changing landscape of an open-cast mine. The other objective was to assess the change in land use pattern due to coal mining from 2006 to 2016. Assessing the change in land use pattern accurately is important for the development and monitoring of coalfields in conjunction with sustainable development. For the present study, Landsat 5 Thematic Mapper (TM) data of 2006 and Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) data of 2016 of a part of Jharia Coalfield, Dhanbad, India, were used. The SVM classification technique provided greater overall classification accuracy than the MLC technique in classifying a heterogeneous landscape with a limited training dataset. SVM exceeded MLC in the difficult task of classifying features with near-similar reflectance on the mean signature plot: an improvement of over 11% was observed in classification of built-up area and an improvement of 24% in classification of surface water; similarly, the SVM technique improved the overall land use classification accuracy by almost 6% and 3% for the Landsat 5 and Landsat 8 images, respectively. Results indicated that land degradation increased significantly from 2006 to 2016 in the study area. This study will help in quantifying the changes and can also serve as a basis for further decision support system studies aiding a variety of purposes such as planning and management of mines and environmental impact assessment.
PMID:27461425
Rößler, Andreas
2013-01-01
A general class of stochastic Runge-Kutta methods for the weak approximation of It\\^o and Stratonovich stochastic differential equations with a multi-dimensional Wiener process is introduced. Colored rooted trees are used to derive an expansion of the solution process and of the approximation process calculated with the stochastic Runge-Kutta method. A theorem on general order conditions for the coefficients and the random variables of the stochastic Runge-Kutta method is proved by rooted tre...
Energy Technology Data Exchange (ETDEWEB)
Driscoll, Donald D. (Case Western Reserve U.)
2004-01-01
First use of a beta-eliminating cut based on a maximum-likelihood characterization.
International Nuclear Information System (INIS)
, with high vertical resolution, could be generated for many wells. This procedure makes it possible to populate any well location with core-scale estimates of P and P and rock types, facilitating the application of geostatistical characterization methods. The first step was to discriminate rock types of similar depositional environment and/or reservoir quality (RQ) using a specific clustering technique. The approach implemented utilized a model-based, probabilistic clustering analysis procedure called GAMLS (Geologic Analysis via Maximum Likelihood System), which is based on maximum likelihood principles. During clustering, samples (data at each digitized depth from each well) are probabilistically assigned to a previously specified number of clusters with a fractional probability that varies between zero and one
Institute of Scientific and Technical Information of China (English)
张婷婷; 高金玲
2014-01-01
To address the difficulty of solving the iterative algorithm for maximum likelihood estimation in logistic regression, a simpler estimator, empirical logistic regression, is examined from both a theoretical and an applied perspective. The analysis shows that when the sample size is large, empirical logistic regression is sound and practical; the two methods give consistent results for the same data, while the empirical method is much simpler to compute, which matters greatly for practitioners.
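The empirical logistic regression discussed above replaces the iterative maximum likelihood fit with a single weighted least-squares fit to empirical logits of grouped binary data. A rough sketch using the standard +0.5 continuity correction (the paper's exact weighting is not given in the abstract, so this follows the textbook form):

```python
import numpy as np

def empirical_logistic(x, successes, trials):
    """Empirical logistic regression for grouped binary data:
    regress the empirical logit on x by weighted least squares,
    avoiding the iterative maximum likelihood fit entirely."""
    s = np.asarray(successes, float)
    n = np.asarray(trials, float)
    z = np.log((s + 0.5) / (n - s + 0.5))        # empirical logit
    w = (s + 0.5) * (n - s + 0.5) / (n + 1.0)    # approximate inverse variance
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)
    return beta  # [intercept, slope]
```

With many trials per group, the one-shot estimate agrees closely with the iterative MLE, which is the consistency the abstract reports.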
Compressive Imaging using Approximate Message Passing and a Markov-Tree Prior
Som, Subhojit; Schniter, Philip
2011-01-01
We propose a novel algorithm for compressive imaging that exploits both the sparsity and persistence across scales found in the 2D wavelet transform coefficients of natural images. Like other recent works, we model wavelet structure using a hidden Markov tree (HMT) but, unlike other works, ours is based on loopy belief propagation (LBP). For LBP, we adopt a recently proposed "turbo" message passing schedule that alternates between exploitation of HMT structure and exploitation of compressive-...
Approximate group context tree: applications to dynamic programming and dynamic choice models
Belloni, Alexandre
2011-01-01
The paper considers a variable length Markov chain model associated with a group of stationary processes that share the same context tree but potentially different conditional probabilities. We propose a new model selection and estimation method, develop oracle inequalities and model selection properties for the estimator. These results also provide conditions under which the use of the group structure can lead to improvements in the overall estimation. Our work is also motivated by two methodological applications: discrete stochastic dynamic programming and dynamic discrete choice models. We analyze the uniform estimation of the value function for dynamic programming and the uniform estimation of average dynamic marginal effects for dynamic discrete choice models accounting for possible imperfect model selection. We also derive the typical behavior of our estimator when applied to polynomially $\\beta$-mixing stochastic processes. For parametric models, we derive uniform rate of convergence for the estimation...
Institute of Scientific and Technical Information of China (English)
高艳普; 王向东; 王冬青
2015-01-01
A maximum likelihood algorithm is presented for parameter estimation of multivariable controlled autoregressive moving average (CARMA-like) systems. The algorithm decomposes the CARMA-like system into m identification models (m being the number of outputs), each containing only one parameter vector to be estimated; the parameter vector of each identification model is then estimated by maximum likelihood, which yields estimates of all parameters of the system. Simulation results verify the effectiveness of the proposed algorithm.
Institute of Scientific and Technical Information of China (English)
房祥忠; 陈家鼎
2011-01-01
Nonhomogeneous Poisson processes with time-varying intensity functions are applied in many fields. For the exponential polynomial model, a widely used class of nonhomogeneous Poisson processes, the best convergence rate of the maximum likelihood estimate (MLE) of the parameters is obtained as the observation time tends to infinity.
Maximum-likelihood algorithm for quantum tomography
International Nuclear Information System (INIS)
Optical homodyne tomography is discussed in the context of classical image processing. Analogies between these two fields are traced and used to formulate an iterative numerical algorithm for reconstructing the Wigner function from homodyne statistics. (Author)
Maximum likelihood estimation for the bombing model
Lieshout, M.N.M. van; Zwet, E.W. van
2000-01-01
Perhaps the best known example of a random set is the Boolean model. It is the union of `grains' such as discs, squares or triangles which are placed at the points of a Poisson point process. The Poisson points are called the `germs'. We are interested in estimating the intensity, say lambda, of the
Al-Khaja, Nawal
2007-01-01
This is a thematic lesson plan for young learners about palm trees and the importance of taking care of them. The two part lesson teaches listening, reading and speaking skills. The lesson includes parts of a tree; the modal auxiliary, can; dialogues and a role play activity.
Institute of Scientific and Technical Information of China (English)
武岩波; 朱敏
2016-01-01
Considering the characteristics of underwater acoustic communication, random linear fountain codes with maximum likelihood decoding are studied to correct erasure errors in short-packet transmission. In existing maximum likelihood decoding methods, processing begins only when all the necessary blocks are available, resulting in unacceptable decoding delay. An incremental Gaussian elimination method is proposed that decreases the decoding delay by utilizing the arrival time slot of every block. The computational complexity is analyzed from the probability distribution of a sum of binary random variables. The real-time capability of the proposed method is verified on the low-cost DSP chip of an underwater acoustic modem; the method is applicable to underwater transmission of images and sensor data.
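The incremental idea above, reducing each coded block as it arrives instead of waiting for the whole packet, can be sketched over GF(2) with coefficient vectors stored as integer bitmasks (payload handling and the DSP specifics are omitted; this is not the authors' implementation):

```python
def gf2_insert(basis, row):
    """Incrementally insert one GF(2) equation (an int bitmask of coded
    coefficients) into a row-reduced basis kept as {pivot_bit: row}.
    Returns True if the equation was innovative (increased the rank).
    Full rank means the packet is decodable."""
    # forward elimination against existing pivots
    for pivot, r in basis.items():
        if row >> pivot & 1:
            row ^= r
    if row == 0:
        return False  # linearly dependent, nothing gained
    pivot = row.bit_length() - 1
    # back-substitution keeps every stored row free of the new pivot
    for p in list(basis):
        if basis[p] >> pivot & 1:
            basis[p] ^= row
    basis[pivot] = row
    return True
```

Because each arriving block costs one elimination pass, the work is spread over the reception time slots; once the basis reaches full rank it is already the identity, so no final solve step is needed.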
Hash Dijkstra Algorithm for Approximate Minimal Spanning Tree
Institute of Scientific and Technical Information of China (English)
李玉鑑; 李厚君
2011-01-01
To overcome the low efficiency of the Dijkstra (DK) algorithm in constructing minimal spanning trees (MSTs) for large-scale datasets, this paper uses locality-sensitive hashing (LSH) to design a fast approximate algorithm, the LSHDK algorithm, for building an MST in Euclidean space. The LSHDK algorithm achieves a faster speed with small error by reducing the computation required to search for nearest points. Computational experiments show that it runs faster than the DK algorithm on datasets of more than 50,000 points, while the resulting approximate MST has a very small error (0.00-0.05%) in low dimensions and generally 0.1%-3.0% in high dimensions.
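The LSHDK idea, pruning the candidate edge set to hash-near pairs before building the tree, can be illustrated with a coarse grid hash standing in for LSH and Kruskal standing in for the Dijkstra-style construction (both substitutions are mine; the cell size and data are illustrative):

```python
import itertools
import numpy as np

def approx_mst(points, cell=1.0):
    """Approximate Euclidean MST: bucket points on a coarse grid (a simple
    stand-in for locality-sensitive hashing), form candidate edges only
    between points in the same or adjacent buckets, then run Kruskal on
    the reduced edge set instead of all n*(n-1)/2 pairs."""
    buckets = {}
    for i, p in enumerate(points):
        buckets.setdefault(tuple((p // cell).astype(int)), []).append(i)
    edges = set()
    offsets = list(itertools.product((-1, 0, 1), repeat=points.shape[1]))
    for key, idx in buckets.items():
        for off in offsets:
            neighbor = tuple(k + o for k, o in zip(key, off))
            for j in buckets.get(neighbor, []):
                edges.update((i, j) for i in idx if i < j)
    # Kruskal with union-find (path halving)
    parent = list(range(len(points)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    mst, total = [], 0.0
    for i, j in sorted(edges, key=lambda e: np.linalg.norm(points[e[0]] - points[e[1]])):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((i, j))
            total += float(np.linalg.norm(points[i] - points[j]))
    return mst, total
```

The tree is exact whenever every true MST edge joins points in the same or adjacent cells; longer edges can be missed, which is the source of the small error the abstract reports.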
Directory of Open Access Journals (Sweden)
Gustavo M. Souza
2004-09-01
Approximate entropy (ApEn), a model-independent statistic that quantifies serial irregularity, was used to evaluate changes in the temporal dynamics of sap flow in two tropical tree species subjected to water deficit. Water deficit induced a decrease in the sap flow of G. ulmifolia, whereas C. legalis held its sap flow stable. Slight increases in time-series complexity were observed in both species under drought. This study showed that ApEn can be a helpful tool for detecting subtle changes in the temporal dynamics of physiological data and for uncovering patterns in plant physiological responses to environmental stimuli.
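Pincus's approximate entropy, the statistic used above, compares template-match frequencies at embedding lengths m and m+1. A direct sketch with the conventional defaults m = 2 and r = 0.2·SD (the paper's exact settings are not stated in the abstract):

```python
import numpy as np

def approx_entropy(series, m=2, r=None):
    """Approximate Entropy (Pincus): ApEn(m, r) = Phi(m) - Phi(m+1),
    where Phi(m) is the mean log fraction of length-m templates matching
    each template within tolerance r in Chebyshev distance."""
    x = np.asarray(series, float)
    if r is None:
        r = 0.2 * x.std()  # conventional tolerance
    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # self-matches are included, as in the original definition
        counts = np.array([
            np.sum(np.max(np.abs(templates - t), axis=1) <= r) for t in templates
        ])
        return np.mean(np.log(counts / n))
    return phi(m) - phi(m + 1)
```

Regular signals (e.g. a sampled sine) score low and irregular signals score high, which is how the study reads complexity changes out of the sap-flow records.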
Weak Consistency of Quasi-Maximum Likelihood Estimates in Generalized Linear Models
Institute of Scientific and Technical Information of China (English)
张戈; 吴黎军
2013-01-01
We study the solution β̂_n of the quasi-maximum likelihood equation L_n(β) = Σ_{i=1}^n X_i H(X'_i β) Λ^{-1}(X'_i β)(y_i − h(X'_i β)) = 0 for generalized linear models. Under the assumption of an unnatural link function and other mild conditions, we prove the convergence rate β̂_n − β_0 ≠ o_p(λ_n^{-1/2}) and show that a necessary condition for weak consistency of the quasi-maximum likelihood estimate is that S_n^{-1} → 0 as n → ∞.
Epstein, Henri
2016-01-01
An algebraic formalism, developed with V. Glaser and R. Stora for the study of the generalized retarded functions of quantum field theory, is used to prove a factorization theorem which provides a complete description of the generalized retarded functions associated with any tree graph. Integrating over the variables associated with internal vertices to obtain the perturbative generalized retarded functions for interacting fields arising from such graphs is shown to be possible for a large category of space-times.
Institute of Scientific and Technical Information of China (English)
黄维辉; 熊翱
2013-01-01
For many application areas, the efficiency of multidimensional data processing is a key factor affecting development. In particular, similarity search is used in many fields, such as data mining, big-data analysis, and digital multimedia. However, many index structures cannot avoid the "curse of dimensionality" when the number of dimensions is very large. The RAKDB-Tree uses a partitioning method to divide the data space into subspaces and create approximation files, and then indexes the approximations with an improved KDB-Tree method. The RAKDB-Tree is an automatically adjusting and self-optimizing tree index structure; its query, insertion, and deletion algorithms keep the index in a near-optimal state. Experimental results show that the RAKDB-Tree offers a promising improvement in performance.
Institute of Scientific and Technical Information of China (English)
吴鑫育; 周海林; 汪寿阳; 马超群
2013-01-01
The stochastic volatility model with a leverage effect (SV-L) has received a great deal of attention in the financial econometrics literature; however, estimation of the SV-L model poses difficulties. In this paper, we develop a method for maximum likelihood (ML) estimation of the SV-L model based on the efficient importance sampling (EIS) technique. Monte Carlo (MC) simulations are presented to examine the accuracy and small-sample properties of the proposed method; the experimental results show that the EIS-ML method performs very well. Finally, the EIS-ML method is illustrated with real data by applying the SV-L model to the daily log returns of the SSE and SZSE Component Indexes. Empirical results show a high persistence of volatility and a significant leverage effect in the Chinese stock market.
Institute of Scientific and Technical Information of China (English)
魏子翔; 崔嵬; 李霖; 吴爽; 吴嗣亮
2015-01-01
A scheme based on the digital delay-locked loop (DDLL), frequency-locked loop (FLL), and phase-locked loop (PLL) is implemented in the microwave radar for spatial rendezvous and docking, yielding delay, frequency, and direction-of-arrival (DOA) estimates of the incident direct-sequence spread-spectrum signal transmitted by the cooperative target. The DDLL-, FLL-, and PLL-based (DFP) scheme, however, does not make full use of the received signal. For this reason, a novel maximum likelihood estimation (MLE) based tracking (MLBT) algorithm with a low computational burden is proposed. The property that the gradients of the cost function are proportional to the parameter errors is employed to design parameter-error discriminators, and three tracking loops are set up to provide the parameter estimates. The variance characteristics of the discriminators are then investigated, and lower bounds on the root mean square errors (RMSEs) of the parameter estimates are given for the MLBT algorithm. Finally, simulations and a computational-efficiency analysis are provided: the RMSE lower bounds are verified, and the MLBT algorithm is shown to achieve better estimation accuracy than the DFP-based scheme with only a limited increase in computational burden.
Divergence date estimation and a comprehensive molecular tree of extant cetaceans.
McGowen, Michael R; Spaulding, Michelle; Gatesy, John
2009-12-01
Cetaceans are remarkable among mammals for their numerous adaptations to an entirely aquatic existence, yet many aspects of their phylogeny remain unresolved. Here we merged 37 new sequences from the nuclear genes RAG1 and PRM1 with most published molecular data for the group (45 nuclear loci, transposons, mitochondrial genomes), and generated a supermatrix consisting of 42,335 characters. The great majority of these data have never been combined. Model-based analyses of the supermatrix produced a solid, consistent phylogenetic hypothesis for 87 cetacean species. Bayesian analyses corroborated odontocete (toothed whale) monophyly, stabilized basal odontocete relationships, and completely resolved branching events within Mysticeti (baleen whales) as well as the problematic speciose clade Delphinidae (oceanic dolphins). Only limited conflicts relative to maximum likelihood results were recorded, and discrepancies found in parsimony trees were very weakly supported. We utilized the Bayesian supermatrix tree to estimate divergence dates among lineages using relaxed-clock methods. Divergence estimates revealed rapid branching of basal odontocete lineages near the Eocene-Oligocene boundary, the antiquity of river dolphin lineages, a Late Miocene radiation of balaenopteroid mysticetes, and a recent rapid radiation of Delphinidae beginning approximately 10 million years ago. Our comprehensive, time-calibrated tree provides a powerful evolutionary tool for broad-scale comparative studies of Cetacea. PMID:19699809
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...
Institute of Scientific and Technical Information of China (English)
史海芳; 李树有; 姬永刚
2008-01-01
For two normal populations with unknown means μi and variances σi² > 0, i = 1, 2, assume that there is a semi-order restriction between the ratios of means to standard deviations and that the sample sizes of the two populations differ. A procedure for obtaining the maximum likelihood estimators of the μi's and σi's under the semi-order restriction is proposed. For the i = 3 case, some connected results and simulations are given.
Random Tree-Puzzle leads to the Yule-Harding distribution.
Vinh, Le Sy; Fuehrer, Andrea; von Haeseler, Arndt
2011-02-01
Approaches to reconstruct phylogenies abound and are widely used in the study of molecular evolution. Partially through extensive simulations, we are beginning to understand the potential pitfalls as well as the advantages of different methods. However, little work has been done on possible biases introduced by the methods if the input data are random and do not carry any phylogenetic signal. Although Tree-Puzzle (Strimmer K, von Haeseler A. 1996. Quartet puzzling: a quartet maximum-likelihood method for reconstructing tree topologies. Mol Biol Evol. 13:964-969; Schmidt HA, Strimmer K, Vingron M, von Haeseler A. 2002. Tree-Puzzle: maximum likelihood phylogenetic analysis using quartets and parallel computing. Bioinformatics 18:502-504) has become common in phylogenetics, the resulting distribution of labeled unrooted bifurcating trees when data do not carry any phylogenetic signal has not been investigated. Our note shows that the distribution converges to the well-known Yule-Harding distribution. However, the bias of the Yule-Harding distribution will be diminished by a tiny amount of phylogenetic information. Keywords: maximum likelihood; phylogenetic reconstruction; Tree-Puzzle; tree distribution; Yule-Harding distribution.
Pyron, R Alexander; Hendry, Catriona R; Chou, Vincent M; Lemmon, Emily M; Lemmon, Alan R; Burbrink, Frank T
2014-12-01
Next-generation genomic sequencing promises to quickly and cheaply resolve remaining contentious nodes in the Tree of Life, and facilitates species-tree estimation while taking into account stochastic genealogical discordance among loci. Recent methods for estimating species trees bypass full likelihood-based estimates of the multi-species coalescent, and approximate the true species-tree using simpler summary metrics. These methods converge on the true species-tree with sufficient genomic sampling, even in the anomaly zone. However, no studies have yet evaluated their efficacy on a large-scale phylogenomic dataset, and compared them to previous concatenation strategies. Here, we generate such a dataset for Caenophidian snakes, a group with >2500 species that contains several rapid radiations that were poorly resolved with fewer loci. We generate sequence data for 333 single-copy nuclear loci with ∼100% coverage (∼0% missing data) for 31 major lineages. We estimate phylogenies using neighbor joining, maximum parsimony, maximum likelihood, and three summary species-tree approaches (NJst, STAR, and MP-EST). All methods yield similar resolution and support for most nodes. However, not all methods support monophyly of Caenophidia, with Acrochordidae placed as the sister taxon to Pythonidae in some analyses. Thus, phylogenomic species-tree estimation may occasionally disagree with well-supported relationships from concatenated analyses of small numbers of nuclear or mitochondrial genes, a consideration for future studies. In contrast for at least two diverse, rapid radiations (Lamprophiidae and Colubridae), phylogenomic data and species-tree inference do little to improve resolution and support. Thus, certain nodes may lack strong signal, and larger datasets and more sophisticated analyses may still fail to resolve them.
Closed form maximum likelihood estimator of conditional random fields
Zhu, Zhemin; Hiemstra, Djoerd; Apers, Peter; Wombacher, Andreas
2013-01-01
Training Conditional Random Fields (CRFs) can be very slow for big data. In this paper, we present a new training method for CRFs called Empirical Training, which is motivated by the concept of co-occurrence rate. We show that the standard (unregularized) training can have many maximum likelihood...
Constrained maximum likelihood modal parameter identification applied to structural dynamics
El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim
2016-05-01
A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix and therefore the residue matrices are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such types of systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to either be undamped or proportionally damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish a modal model satisfying such constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered as a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.
Maximum likelihood, least squares and penalized least squares for PET
International Nuclear Information System (INIS)
The EM algorithm is the basic approach used to maximize the log likelihood objective function for the reconstruction problem in PET. The EM algorithm is a scaled steepest ascent algorithm that elegantly handles the nonnegativity constraints of the problem. The authors show that the same scaled steepest descent algorithm can be applied to the least squares merit function, and that it can be accelerated using the conjugate gradient approach. The experiments suggest that one can cut the computation by about a factor of 3 by using this technique. The results also apply to various penalized least squares functions which might be used to produce a smoother image
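The scaled steepest ascent form of EM described above has a compact multiplicative update that handles the nonnegativity constraint automatically. A toy-scale sketch of the classical ML-EM iteration (the system matrix, sizes, and iteration count are illustrative, not a real PET geometry):

```python
import numpy as np

def mlem(A, y, n_iter=5000):
    """Maximum-likelihood EM reconstruction for emission tomography:
    multiplicative update lam <- lam * A^T(y / (A lam)) / (A^T 1).
    Starting from a positive image, every iterate stays nonnegative,
    which is the elegance of the scaling noted in the abstract."""
    lam = np.ones(A.shape[1])
    sens = A.sum(axis=0)            # A^T 1: sensitivity of each voxel
    for _ in range(n_iter):
        proj = A @ lam              # forward projection
        proj[proj == 0] = 1e-12     # guard against division by zero
        lam *= (A.T @ (y / proj)) / sens
    return lam
```

On consistent (noise-free) data the iterates drive the forward projection toward the measured data, the fixed point of the Poisson likelihood.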
Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics
DEFF Research Database (Denmark)
Schlaikjer, Malene; Jensen, Jørgen Arendt
2004-01-01
of simulated and in vivo data from the carotid artery. The estimator is meant for two-dimensional (2-D) color flow imaging. The resulting mathematical relation for the estimator consists of two terms. The first term performs a cross-correlation analysis on the signal segment in the radio frequency (RF) data under investigation. The flow physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator...
Maximum likelihood positioning and energy correction for scintillation detectors
Lerche, Christoph W.; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten
2016-02-01
An algorithm for determining the crystal pixel and the gamma ray energy with scintillation detectors for PET is presented. The algorithm uses Likelihood Maximisation (ML) and therefore is inherently robust to missing data caused by defect or paralysed photo detector pixels. We tested the algorithm on a highly integrated MRI compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to readout the scintillation light from the 30× 30 scintillator pixel array with an 8× 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner’s spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner’s overall single and prompt detection efficiency, image noise, and energy resolution to the CoG based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving highest energy resolution, count rate performance and spatial resolution at the same time.
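The robustness to defect or paralysed photodetector pixels claimed above follows directly from the ML formulation: dead pixels are simply dropped from the Poisson likelihood rather than corrupting a centroid. A hypothetical sketch of that idea (the light model, array sizes, and calibration are illustrative, not the detector's actual ones):

```python
import numpy as np

def ml_position(counts, expected, dead=()):
    """Maximum-likelihood crystal identification: choose the crystal whose
    expected light distribution over the photodetector pixels best explains
    the measured counts under a Poisson model. Dead pixels are excluded
    from the likelihood sum, which is what makes the ML approach robust to
    missing data (a center-of-gravity estimate would be biased instead)."""
    counts = np.asarray(counts, float)
    mask = np.ones(counts.shape[0], dtype=bool)
    if len(dead):
        mask[list(dead)] = False
    # Poisson log-likelihood per crystal, dropping terms constant in the crystal index
    ll = counts[mask] @ np.log(expected[:, mask]).T - expected[:, mask].sum(axis=1)
    return int(np.argmax(ll))
```

Here `expected` is a hypothetical calibration table of shape (crystals, pixels); in practice it would be measured per detector block, and the same likelihood can be extended with an energy scale factor.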
GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS
S. Sridevi; Nirmala, S.
2013-01-01
Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and obscures the underlying pixel intensities. In fetal ultrasound images, edges and local fine details are what matter most to obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter therefore has to be devised that efficiently suppresses speckle noise while simultaneously preserving these features. The proposed fil...
DeGiorgio, Michael; Degnan, James H
2014-01-01
To infer species trees from gene trees estimated from phylogenomic data sets, tractable methods are needed that can handle dozens to hundreds of loci. We examine several computationally efficient approaches (MP-EST, STAR, STEAC, STELLS, and STEM) for inferring species trees from gene trees estimated using maximum likelihood (ML) and Bayesian approaches. Among the methods examined, we found that topology-based methods often performed better using ML gene trees and methods employing coalescent times typically performed better using Bayesian gene trees, with MP-EST, STAR, STEAC, and STELLS outperforming STEM under most conditions. We examine why the STEM tree (also called GLASS or Maximum Tree) is less accurate on estimated gene trees by comparing estimated and true coalescence times, performing species tree inference using simulations, and analyzing a great ape data set keeping track of false positive and false negative rates for inferred clades. We find that although true coalescence times are more ancient than speciation times under the multispecies coalescent model, estimated coalescence times are often more recent than speciation times. This underestimation can lead to increased bias and lack of resolution with increased sampling (either alleles or loci) when gene trees are estimated with ML. The problem appears to be less severe using Bayesian gene-tree estimates.
Indian Academy of Sciences (India)
Richa Sharma; Aniruddha Ghosh; P K Joshi
2013-10-01
In this study, an attempt has been made to develop a decision tree classification (DTC) algorithm for the classification of remotely sensed satellite data (Landsat TM) using open-source software. The decision tree is constructed by recursively partitioning the spectral distribution of the training dataset using WEKA, an open-source data mining package. The classified image is compared with images classified using the classical ISODATA clustering and Maximum Likelihood Classifier (MLC) algorithms. The classification result based on the DTC method provided a better visual depiction than the results produced by ISODATA clustering or by the MLC algorithm. The overall accuracy was found to be 90% (kappa = 0.88) using the DTC, 76.67% (kappa = 0.72) using the Maximum Likelihood and 57.5% (kappa = 0.49) using the ISODATA clustering method. Based on the overall accuracy and kappa statistics, DTC was found to be the preferred classification approach.
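The overall accuracy and kappa statistics quoted above are both computed from a confusion matrix; a minimal sketch in plain Python, using a made-up 3-class confusion matrix rather than the study's data:

```python
def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows: reference class, columns: predicted class)."""
    n = sum(sum(row) for row in confusion)
    diag = sum(confusion[i][i] for i in range(len(confusion)))
    p_o = diag / n                                  # observed agreement
    p_e = sum(                                      # chance agreement
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    return p_o, (p_o - p_e) / (1 - p_e)

# Hypothetical 3-class confusion matrix (not the paper's data).
cm = [[45, 3, 2],
      [4, 40, 6],
      [1, 5, 44]]
acc, kappa = accuracy_and_kappa(cm)
print(round(acc, 3), round(kappa, 3))  # 0.86 0.79
```

Kappa discounts the agreement expected by chance, which is why it is always at most the overall accuracy.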
Institute of Scientific and Technical Information of China (English)
赵永翔; 王金诺; 高庆
2001-01-01
A unified classical maximum likelihood approach for estimating the P-S-N curves of the three commonly used fatigue stress-life relations, namely the three-parameter, Langer and Basquin relations, is presented by extending the classical maximum likelihood method to the Langer relation. The approach is applied to S-N data obtained from a maximum-likelihood-method fatigue test, in which a group of specimens is tested at a reference load of special practical interest and the remaining specimens are individually fatigued at different loads. Taking the local statistical parameters of the logarithms of fatigue lives at the reference load as a basis, and assuming that the material constants in each relation lie concurrently at the same probability level, the curves are expressed in a generalized form through the mean and standard-deviation lines of the logarithmic fatigue life, with at most four material constants. These constants are obtained by mathematical programming according to the maximum likelihood principle. An analysis of test data from notched specimens of 45# carbon steel (kt = 20) under symmetric cyclic loading illustrates the effectiveness of the method. The analysis also reveals that an appropriate relation should be determined by comparing goodness of fit, prediction error and safety in application. The three-parameter relation fits the data best, the Langer relation slightly worse and the Basquin relation worst; from the viewpoints of goodness of fit, prediction error and safety, the Basquin relation is unsuitable for describing this data set. In addition, estimates from the classical maximum likelihood method may be non-conservative owing to the influence of local statistical parameters, so improved methods that minimize this influence deserve development.
Directory of Open Access Journals (Sweden)
Fernando Abad-Franch
BACKGROUND: Failure to detect a disease agent or vector where it actually occurs constitutes a serious drawback in epidemiology. In the pervasive situation where no sampling technique is perfect, the explicit analytical treatment of detection failure becomes a key step in the estimation of epidemiological parameters. We illustrate this approach with a study of Attalea palm tree infestation by Rhodnius spp. (Triatominae), the most important vectors of Chagas disease (CD) in northern South America. METHODOLOGY/PRINCIPAL FINDINGS: The probability of detecting triatomines in infested palms is estimated by repeatedly sampling each palm. This knowledge is used to derive an unbiased estimate of the biologically relevant probability of palm infestation. We combine maximum-likelihood analysis and information-theoretic model selection to test the relationships between environmental covariates and infestation of 298 Amazonian palm trees over three spatial scales: region within Amazonia, landscape, and individual palm. Palm infestation estimates are high (40-60%) across regions, and well above the observed infestation rate (24%). Detection probability is higher (approximately 0.55 on average) in the richest-soil region than elsewhere (approximately 0.08). Infestation estimates are similar in forest and rural areas, but lower in urban landscapes. Finally, individual palm covariates (accumulated organic matter and stem height) explain most of the infestation rate variation. CONCLUSIONS/SIGNIFICANCE: Individual palm attributes appear as key drivers of infestation, suggesting that CD surveillance must incorporate local-scale knowledge and that peridomestic palm tree management might help lower transmission risk. Vector populations are probably denser in rich-soil sub-regions, where CD prevalence tends to be higher; this suggests a target for research on broad-scale risk mapping. Landscape-scale effects indicate that palm triatomine populations can endure deforestation
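The core estimation idea (correcting the observed infestation rate for imperfect detection) can be sketched as a small zero-inflated binomial likelihood maximised by grid search. The survey numbers below are hypothetical, not the paper's data, and the model is a bare-bones occupancy formulation rather than the authors' full covariate analysis.

```python
import math
from itertools import product

def nll(psi, p, counts, k):
    """Negative log-likelihood of a simple occupancy-style model: a palm is
    infested with probability psi, and an infested palm yields a detection on
    each of k independent visits with probability p. counts[d] is the number
    of palms with d detections."""
    ll = 0.0
    for d, n in counts.items():
        if d == 0:
            lik = (1 - psi) + psi * (1 - p) ** k      # never detected
        else:
            lik = psi * math.comb(k, d) * p ** d * (1 - p) ** (k - d)
        ll += n * math.log(lik)
    return -ll

# Hypothetical survey: 100 palms, k = 3 visits each (invented numbers).
counts = {0: 60, 1: 15, 2: 15, 3: 10}
k = 3
grid = [i / 100 for i in range(1, 100)]
psi_hat, p_hat = min(product(grid, grid),
                     key=lambda t: nll(t[0], t[1], counts, k))
naive = sum(n for d, n in counts.items() if d > 0) / sum(counts.values())
print(naive, psi_hat, p_hat)
```

Because some infested palms go undetected on all visits, the ML infestation estimate exceeds the naive detected fraction, which is exactly the direction of the correction reported in the abstract (24% observed vs. 40-60% estimated).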
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
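A minimal sketch of the idea behind trajectory averaging: a Robbins-Monro iteration whose raw iterate is noisy, alongside a running (Polyak-Ruppert) trajectory average that is the asymptotically efficient estimator. The target here is just the mean of a Gaussian; the step-size exponent and seed are arbitrary choices for illustration, not the SAMCMC setting of the paper.

```python
import random

def robbins_monro(n_steps, seed=0):
    """Robbins-Monro iteration for the root of h(t) = E[X] - t (i.e. the
    mean of X), with a running Polyak-Ruppert trajectory average."""
    rng = random.Random(seed)
    theta, avg = 0.0, 0.0
    for n in range(1, n_steps + 1):
        x = rng.gauss(2.0, 1.0)          # noisy observation, true mean 2.0
        gain = n ** -0.7                 # slowly decaying step size
        theta += gain * (x - theta)      # raw stochastic-approximation iterate
        avg += (theta - avg) / n         # running average of the trajectory
    return theta, avg

theta, avg = robbins_monro(20000)
print(theta, avg)  # both near 2.0; the average smooths the iterate's noise
```

The raw iterate keeps fluctuating at the scale of the current gain, while the trajectory average damps that fluctuation, which is the intuition behind its asymptotic efficiency.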
Al-Atiyat, R M; Aljumaah, R S
2014-01-01
This study aimed to estimate evolutionary distances and to reconstruct phylogeny trees between different Awassi sheep populations. Thirty-two sheep individuals from three different geographical areas of Jordan and the Kingdom of Saudi Arabia (KSA) were randomly sampled. DNA was extracted from the tissue samples and sequenced using the T7 promoter universal primer. Different phylogenetic trees were reconstructed from 0.64-kb DNA sequences using the MEGA software with the best-fitting general time reversible distance model. Three methods of distance estimation were then used. The maximum composite likelihood test was considered for reconstructing maximum likelihood, neighbor-joining and UPGMA trees. The maximum likelihood tree indicated three major clusters separated by cytosine (C) and thymine (T). The greatest distance was shown between the South sheep and North sheep. On the other hand, the KSA sheep as an outgroup showed a shorter evolutionary distance to the North sheep population than to the others. The neighbor-joining and UPGMA trees showed quite reliable clusters of evolutionary differentiation of Jordan sheep populations from the Saudi population. The overall results support the geographical information and ecological types of the sheep populations studied. Summing up, the resulting phylogeny trees may contribute to the limited information about the genetic relatedness and phylogeny of Awassi sheep in nearby Arab countries.
Linear Time Approximation Schemes for the Gale-Berlekamp Game and Related Minimization Problems
Karpinski, Marek
2008-01-01
We design a linear time approximation scheme for the Gale-Berlekamp Switching Game and generalize it to a wider class of dense fragile minimization problems including the Nearest Codeword Problem (NCP) and Unique Games Problem. Further applications include, among other things, finding a constrained form of matrix rigidity and maximum likelihood decoding of an error correcting code. As another application of our method we give the first linear time approximation schemes for correlation clustering with a fixed number of clusters and its hierarchical generalization. Our results depend on a new technique for dealing with small objective function values of optimization problems and could be of independent interest.
Directory of Open Access Journals (Sweden)
César da Silva Chagas
2013-04-01
Soil surveys are the main source of spatial information on soils and have a range of different applications, mainly in agriculture. The continuity of this activity has, however, been severely compromised, mainly due to a lack of governmental funding. The purpose of this study was to evaluate the feasibility of two different classifiers (artificial neural networks and a maximum likelihood algorithm) in the prediction of soil classes in the northwest of the state of Rio de Janeiro. Terrain attributes such as elevation, slope, aspect, plan curvature and compound topographic index (CTI), together with indices of clay minerals, iron oxide and the Normalized Difference Vegetation Index (NDVI) derived from Landsat 7 ETM+ sensor imagery, were used as discriminating variables. The two classifiers were trained and validated for each soil class using 300 and 150 samples respectively, representing the characteristics of these classes in terms of the discriminating variables. According to the statistical tests, the accuracy of the classifier based on artificial neural networks (ANNs) was greater than that of the classic Maximum Likelihood Classifier (MLC). Comparison with 126 reference points showed that the resulting ANN map (73.81%) was superior to the MLC map (57.94%). The main errors when using the two classifiers were caused by: (a) the geological heterogeneity of the area coupled with problems related to the geological map; (b) the depth of lithic contact and/or rock exposure; and (c) problems with the environmental correlation model used, due to the polygenetic nature of the soils. This study confirms that the use of terrain attributes together with remote sensing data by an ANN approach can be a tool to facilitate soil mapping in Brazil, primarily due to the availability of low-cost remote sensing data and the ease by which terrain attributes can be obtained.
Reversible polymorphism-aware phylogenetic models and their application to tree inference.
Schrempf, Dominik; Minh, Bui Quang; De Maio, Nicola; von Haeseler, Arndt; Kosiol, Carolin
2016-10-21
We present a reversible Polymorphism-Aware Phylogenetic Model (revPoMo) for species tree estimation from genome-wide data. revPoMo enables the reconstruction of large scale species trees for many within-species samples. It expands the alphabet of DNA substitution models to include polymorphic states, thereby, naturally accounting for incomplete lineage sorting. We implemented revPoMo in the maximum likelihood software IQ-TREE. A simulation study and an application to great apes data show that the runtimes of our approach and standard substitution models are comparable but that revPoMo has much better accuracy in estimating trees, divergence times and mutation rates. The advantage of revPoMo is that an increase of sample size per species improves estimations but does not increase runtime. Therefore, revPoMo is a valuable tool with several applications, from speciation dating to species tree reconstruction. PMID:27480613
Institute of Scientific and Technical Information of China (English)
Ziheng YANG
2004-01-01
Estimation of species divergence times is well known to be sensitive to violation of the molecular clock assumption (rate constancy over time). On the other hand, the molecular clock is almost always violated in comparisons of distantly related species, such as different orders of mammals. Thus it is important to take into account different rates among lineages when divergence times are estimated. The maximum likelihood method provides a framework for accommodating rate variation and can naturally accommodate heterogeneous datasets from multiple loci as well as fossil calibrations at multiple nodes. Previous implementations of the likelihood method require the researcher to assign branches to different rate classes. In this paper, I implement a heuristic rate-smoothing algorithm (the AHRS algorithm) to automate the assignment of branches to rate groups. The method combines features of previous likelihood, Bayesian and rate-smoothing methods. The likelihood algorithm is also improved to accommodate missing sequences at some loci in the combined analysis. The new method is applied to estimate the divergence times of Malagasy mouse lemurs, and the results are compared with those of previous likelihood and Bayesian analyses [Acta Zoologica Sinica 50 (4): 645-656, 2004].
Diestel, Reinhard
2015-01-01
We study an abstract notion of tree structure which generalizes tree-decompositions of graphs and matroids. Unlike tree-decompositions, which are too closely linked to graph-theoretical trees, these `tree sets' can provide a suitable formalization of tree structure also for infinite graphs, matroids, or set partitions, as well as for other discrete structures, such as order trees. In this first of two papers we introduce tree sets, establish their relation to graph and order trees, and show h...
Niven, Ivan
2008-01-01
This self-contained treatment originated as a series of lectures delivered to the Mathematical Association of America. It covers basic results on homogeneous approximation of real numbers; the analogue for complex numbers; basic results for nonhomogeneous approximation in the real case; the analogue for complex numbers; and fundamental properties of the multiples of an irrational number, for both the fractional and integral parts. The author refrains from the use of continued fractions and includes basic results in the complex case, a feature often neglected in favor of the real number discuss
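For context, the "basic results on homogeneous approximation" referred to above start from Dirichlet's approximation theorem; the following is a standard statement of the result, not a quotation from the book:

```latex
% Dirichlet's approximation theorem: for every real number \alpha and every
% integer N \ge 1 there exist integers p, q with 1 \le q \le N such that
\left|\alpha - \frac{p}{q}\right| < \frac{1}{qN} \le \frac{1}{q^{2}},
% so every irrational \alpha admits infinitely many rational approximations
% p/q satisfying |\alpha - p/q| < 1/q^2.
```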
2006-01-01
This interactive tutorial presents the following concepts of approximation techniques: Methods of Weighted Residuals (MWR), weak formulation, piecewise continuous functions, and the Galerkin finite element formulation. Explanations, especially for mathematical statements, are provided by mousing over the highlighted equations. ME4613 Finite Element Methods
Approximate bayesian parameter inference for dynamical systems in systems biology
International Nuclear Information System (INIS)
This paper proposes to use approximate instead of exact stochastic simulation algorithms for approximate Bayesian parameter inference of dynamical systems in systems biology. It first presents the mathematical framework for the description of systems biology models, especially from the aspect of a stochastic formulation as opposed to deterministic model formulations based on the law of mass action. In contrast to maximum likelihood methods for parameter inference, approximate inference methods are presented which are based on sampling parameters from a known prior probability distribution, which gradually evolves toward a posterior distribution, through the comparison of simulated data from the model to a given data set of measurements. The paper then discusses the simulation process, where an overview is given of the different exact and approximate methods for stochastic simulation and the improvements that we propose. The exact and approximate simulators are implemented and used within approximate Bayesian parameter inference methods. Our evaluation of these methods on two tasks of parameter estimation in two different models shows that equally good results are obtained much faster when using approximate simulation as compared to using exact simulation. (Author)
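A bare-bones version of the sampling loop described here is rejection ABC: draw a parameter from the prior, simulate a dataset, and keep the parameter when a summary of the simulated data falls close to the observed one. The sketch below substitutes a trivial Gaussian-mean model for a systems biology simulator; the tolerance, prior range, and data are all invented for illustration.

```python
import random

def abc_rejection(data, n_draws=20000, eps=0.1, seed=1):
    """Minimal ABC rejection sampler for the mean of a Gaussian with known
    sd = 1: sample theta from the prior, simulate a dataset of the same size,
    and accept theta when the simulated sample mean is within eps of the
    observed sample mean."""
    rng = random.Random(seed)
    obs = sum(data) / len(data)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5, 5)                       # flat prior
        sim = [rng.gauss(theta, 1.0) for _ in range(len(data))]
        if abs(sum(sim) / len(sim) - obs) < eps:          # distance on summary
            accepted.append(theta)
    return accepted

data = [1.8, 2.2, 2.1, 1.9, 2.0, 2.3, 1.7, 2.0]          # observed data
post = abc_rejection(data)
print(len(post), sum(post) / len(post))                   # posterior sample
```

The accepted draws approximate the posterior; a faster (approximate) simulator inside this loop speeds up inference roughly in proportion, which is the paper's point.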
Approximate Representations and Approximate Homomorphisms
Moore, Cristopher
2010-01-01
Approximate algebraic structures play a defining role in arithmetic combinatorics and have found remarkable applications to basic questions in number theory and pseudorandomness. Here we study approximate representations of finite groups: functions f:G -> U_d such that Pr[f(xy) = f(x) f(y)] is large, or more generally Exp_{x,y} ||f(xy) - f(x)f(y)||^2 is small, where x and y are uniformly random elements of the group G and U_d denotes the unitary group of degree d. We bound these quantities in terms of the ratio d / d_min where d_min is the dimension of the smallest nontrivial representation of G. As an application, we bound the extent to which a function f : G -> H can be an approximate homomorphism where H is another finite group. We show that if H's representations are significantly smaller than G's, no such f can be much more homomorphic than a random function. We interpret these results as showing that if G is quasirandom, that is, if d_min is large, then G cannot be embedded in a small number of dimensi...
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings, as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
Schmidt, Wolfgang M
1980-01-01
"In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approximations to algebraic numbers. The present volume is an expanded and up-dated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field." (Bull. LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)
Approximate Matching of Hierarchial Data
DEFF Research Database (Denmark)
Augsten, Nikolaus
formally prove that the pq-gram index can be incrementally updated based on the log of edit operations without reconstructing intermediate tree versions. The incremental update is independent of the data size and scales to a large number of changes in the data. We introduce windowed pq-grams for the ... -gram-based distance between streets, introduces a global greedy matching that guarantees stable pairs, and links addresses that are stored with different granularity. The connector has been successfully tested with public administration databases. Our extensive experiments on both synthetic and real world ... The goal of this thesis is to design, develop, and evaluate new methods for the approximate matching of hierarchical data represented as labeled trees. In approximate matching scenarios two items should be matched if they are similar. Computing the similarity between labeled trees is hard as in...
Ultrafast Approximation for Phylogenetic Bootstrap
Bui Quang Minh, [No Value; Nguyen, Thi; von Haeseler, Arndt
2013-01-01
Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and
NML Computation Algorithms for Tree-Structured Multinomial Bayesian Networks
Directory of Open Access Journals (Sweden)
Kontkanen Petri
2007-01-01
Typical problems in bioinformatics involve large discrete datasets. Therefore, in order to apply statistical methods in such domains, it is important to develop efficient algorithms suitable for discrete data. The minimum description length (MDL) principle is a theoretically well-founded, general framework for performing statistical inference. The mathematical formalization of MDL is based on the normalized maximum likelihood (NML) distribution, which has several desirable theoretical properties. In the case of discrete data, straightforward computation of the NML distribution requires exponential time with respect to the sample size, since the definition involves a sum over all the possible data samples of a fixed size. In this paper, we first review some existing algorithms for efficient NML computation in the case of multinomial and naive Bayes model families. Then we proceed by extending these algorithms to more complex, tree-structured Bayesian networks.
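The exponential-time definition mentioned above, and the sufficient-statistic grouping that makes the multinomial case tractable, can both be shown in a few lines for the simplest (Bernoulli) model. This is an illustrative sketch of the normalising sum only, not the authors' algorithm for Bayesian networks:

```python
from itertools import product
from math import comb, log

def log_nml_norm_naive(n):
    """log of the NML normalising constant for the Bernoulli model, computed
    straight from the definition: sum the maximised likelihood over all 2^n
    binary sequences (exponential time in n)."""
    total = 0.0
    for seq in product((0, 1), repeat=n):
        k = sum(seq)
        p = k / n
        total += p ** k * (1 - p) ** (n - k)   # 0**0 == 1 handles k in {0, n}
    return log(total)

def log_nml_norm_fast(n):
    """Same quantity in O(n) time: the maximised likelihood depends on the
    data only through the count k, and comb(n, k) sequences share each k."""
    return log(sum(comb(n, k) * (k / n) ** k * ((n - k) / n) ** (n - k)
                   for k in range(n + 1)))

print(log_nml_norm_naive(8), log_nml_norm_fast(8))  # identical values
```

Grouping by the sufficient statistic is exactly the kind of reduction the reviewed multinomial algorithms exploit; for richer model families the bookkeeping becomes the hard part.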
Relf, Diane
2009-01-01
The key aspects in planning a tree planting are determining the function of the tree, assessing the site conditions, checking that the tree is suited to the site conditions and space, and deciding whether you are better served by a container-grown tree. After the tree is planted according to the prescribed steps, you must irrigate as needed and mulch the root-zone area.
Reinforcement Learning via AIXI Approximation
Veness, Joel; Hutter, Marcus; Silver, David
2010-01-01
This paper introduces a principled approach for the design of a scalable general reinforcement learning agent. This approach is based on a direct approximation of AIXI, a Bayesian optimality notion for general reinforcement learning agents. Previously, it has been unclear whether the theory of AIXI could motivate the design of practical algorithms. We answer this hitherto open question in the affirmative, by providing the first computationally feasible approximation to the AIXI agent. To develop our approximation, we introduce a Monte Carlo Tree Search algorithm along with an agent-specific extension of the Context Tree Weighting algorithm. Empirically, we present a set of encouraging results on a number of stochastic, unknown, and partially observable domains.
İrsoy, Ozan; Alpaydın, Ethem
2014-01-01
We discuss an autoencoder model in which the encoding and decoding functions are implemented by decision trees. We use the soft decision tree where internal nodes realize soft multivariate splits given by a gating function and the overall output is the average of all leaves weighted by the gating values on their path. The encoder tree takes the input and generates a lower dimensional representation in the leaves and the decoder tree takes this and reconstructs the original input. Exploiting t...
A best-first tree-searching approach for ML decoding in MIMO system
Shen, Chung-An
2012-07-28
In MIMO communication systems, maximum-likelihood (ML) decoding can be formulated as a tree-searching problem. This paper presents a tree-searching approach that combines the features of classical depth-first and breadth-first approaches to achieve close-to-ML performance while minimizing the number of visited nodes. A detailed outline of the algorithm is given, including the required storage. The effects of storage size on BER performance and complexity in terms of search space are also studied. Our results demonstrate that, with a proper choice of storage size, the proposed method visits 40% fewer nodes than a sphere decoding algorithm at a signal-to-noise ratio (SNR) of 20 dB, and an order of magnitude fewer at 0 dB SNR.
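The best-first idea can be sketched on a toy real-valued detection problem: after a QR-style triangularisation, the partial Euclidean distance over the first k layers is a lower bound on the full metric, so popping nodes from a priority queue ordered by that metric returns the exact ML solution at the first completed leaf. The channel matrix, symbols, and noise below are invented, and the paper's storage-constrained variant is not modelled.

```python
import heapq

def best_first_ml(L, y, alphabet=(-1.0, 1.0)):
    """Best-first (priority-queue) search for the ML solution of y = L s + n,
    where L is lower-triangular (as after a QR decomposition) and each symbol
    comes from a small alphabet. Returns the ML vector, its metric, and the
    number of tree nodes visited."""
    n = len(y)
    heap = [(0.0, ())]                 # (partial metric, partial symbol vector)
    visited = 0
    while heap:
        metric, s = heapq.heappop(heap)
        visited += 1
        k = len(s)
        if k == n:                     # first completed leaf is exactly ML
            return list(s), metric, visited
        for sym in alphabet:           # branch on the next symbol
            cand = s + (sym,)
            r = y[k] - sum(L[k][i] * cand[i] for i in range(k + 1))
            heapq.heappush(heap, (metric + r * r, cand))

# Toy 3x3 lower-triangular channel, BPSK symbols, small noise (all invented).
L = [[2.0, 0.0, 0.0],
     [0.7, 1.5, 0.0],
     [-0.4, 0.3, 1.1]]
s_true = [1.0, -1.0, 1.0]
noise = [0.05, -0.1, 0.08]
y = [sum(L[r][i] * s_true[i] for i in range(3)) + noise[r] for r in range(3)]
s_hat, metric, visited = best_first_ml(L, y)
print(s_hat, visited)  # recovers s_true, visiting far fewer than all 15 nodes
```

At high SNR the correct path dominates the queue early, which is the mechanism behind the node-count savings reported in the abstract.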
Tree compression with top trees
DEFF Research Database (Denmark)
Bille, Philip; Gørtz, Inge Li; Landau, Gad M.;
2013-01-01
We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...
Tree compression with top trees
DEFF Research Database (Denmark)
Bille, Philip; Gørtz, Inge Li; Landau, Gad M.;
2015-01-01
We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...
Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications
DEFF Research Database (Denmark)
Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua;
2015-01-01
Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed" about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise... SSL algorithms which use binaural microphones for localization; MLSSL performs better using the signals of one or more microphones placed on just one ear, thereby reducing the wireless transmission overhead of binaural hearing aids. More specifically, when the target location is confined to the front...
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation…
Can, Seda; van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influ
maxLik: A package for maximum likelihood estimation in R
DEFF Research Database (Denmark)
Henningsen, Arne; Toomet, Ott
2011-01-01
This paper describes the package maxLik for the statistical environment R. The package is essentially a unified wrapper interface to various optimization routines, offering easy access to likelihood-specific features like standard errors or information matrix equality (BHHH method). More advanced...
E. Waarts (Eric); M.A. Carree (Martin); B. Wierenga (Berend)
1991-01-01
textabstractThe authors build on the idea put forward by Shugan to infer product maps from scanning data. They demonstrate that the actual estimation procedure used by Shugan has several methodological problems and may yield unstable estimates. They propose an alternative estimation procedure, full-
Aicha Baya Goumeidane; Mohammed Khamadja; Nafaa Nacereddine
2011-01-01
This paper presents an adaptive probabilistic region-based deformable model using an explicit representation that aims to extract defects automatically from a radiographic film. To deal with the high computational cost of such a model, an adaptive polygonal representation is used and the search space for the greedy-based model evolution is reduced. Furthermore, we adapt this explicit model to handle topological changes in the presence of multiple defects.
Pilot power optimization for AF relaying using maximum likelihood channel estimation
Wang, Kezhi
2014-09-01
Bit error rates (BERs) for amplify-and-forward (AF) relaying systems with two different pilot-symbol-aided channel estimation methods, disintegrated channel estimation (DCE) and cascaded channel estimation (CCE), are derived in Rayleigh fading channels. Based on these BERs, the pilot powers at the source and at the relay are optimized when their total transmitting powers are fixed. Numerical results show that the optimized system has a better performance than other conventional nonoptimized allocation systems. They also show that the optimal pilot power in variable gain is nearly the same as that in fixed gain for similar system settings. © 2014 IEEE.
Maximum Likelihood Estimation of the Bivariate Logistic Regression with Threshold Parameters
Jungpin Wu; Chiung Wen Chang
2005-01-01
In discussing the relationship between the success probability of a binary random variable and a certain set of explanatory variables, logistic regression is popular in various fields, especially in medical research. For example: (1) using treatment methods to explain the probability of recovery from some disease; (2) building a prediction model for e-commerce customer buying behavior; (3) building a model to decide the cut-point for life insurance sales hiring; (4) building a c...
Etienne, Rampal S.
2009-01-01
In a recent paper, I presented a sampling formula for species abundances from multiple samples according to the prevailing neutral model of biodiversity, but practical implementation for parameter estimation was only possible when these samples were from local communities that were assumed to be equ
Klein, Daniel; Zezula, Ivan
2015-01-01
The extended growth curve model is discussed in this paper. There are two versions of the model studied in the literature, which differ in the way how the column spaces of the design matrices are nested. The nesting is applied either to the between-individual or to the within-individual design matri
Veberic, Darko
2011-01-01
We present a novel method for combining the analog and photon-counting measurements of lidar transient recorders into reconstructed photon returns. The method takes into account the statistical properties of the two measurement modes and estimates the most likely number of arriving photons and the most likely values of the acquisition parameters describing the two measurement modes. It extends and improves the standard combining ("gluing") methods and does not rely on any ad hoc definition of the overlap region nor on any background subtraction method.
GooFit: A library for massively parallelising maximum-likelihood fits
International Nuclear Information System (INIS)
Fitting complicated models to large datasets is a bottleneck of many analyses. We present GooFit, a library and tool for constructing arbitrarily complex probability density functions (PDFs) to be evaluated on nVidia GPUs or on multicore CPUs using OpenMP. The massive parallelisation of dividing up event calculations between hundreds of processors can achieve speedups of factors of 200-300 in real-world problems.
Maximum-likelihood density modification using pattern recognition of structural motifs
International Nuclear Information System (INIS)
A likelihood-based density-modification method is extended to include pattern recognition of structural motifs. The likelihood-based approach to density modification [Terwilliger (2000), Acta Cryst. D56, 965–972] is extended to include the recognition of patterns of electron density. Once a region of electron density in a map is recognized as corresponding to a known structural element, the likelihood of the map is reformulated to include a term that reflects how closely the map agrees with the expected density for that structural element. This likelihood is combined with other aspects of the likelihood of the map, including the presence of a flat solvent region and the electron-density distribution in the protein region. This likelihood-based pattern-recognition approach was tested using the recognition of helical segments in a largely helical protein. The pattern-recognition method yields a substantial phase improvement over both conventional and likelihood-based solvent-flattening and histogram-matching methods. The method can potentially be used to recognize any common structural motif and incorporate prior knowledge about that motif into density modification.
Maximum likelihood positioning in the scintillation camera using depth of interaction
International Nuclear Information System (INIS)
The spatial (X and Y) dependence of the photomultiplier (PM) response in the Anger gamma camera has been thoroughly described in the past. The light distribution to individual PMs in gamma cameras--the solid angle seen by each photocathode--being a truly three-dimensional problem, the depth of interaction (DOI) has to be included in the analysis of the PM output. Furthermore, DOI being a stochastic process, it has to be considered explicitly, on an event-by-event basis, while evaluating both position and energy. Specific effects of the DOI on the PM response have been quantified. The method was implemented and tested on a Monte Carlo simulator with special care taken in the noise modeling. Two models were developed: a first one considering only the geometric aspects of the camera and used for comparison, and a second one describing a more realistic camera environment. In a typical camera configuration with 140 keV photons, the DOI alone can account for a 6.4 mm discrepancy in position and 12% in energy between two scintillations. Variation of the DOI can bring additional distortions when photons do not enter the crystal perpendicularly, such as in slant hole, cone beam and other focusing collimators. With a 0.95 cm crystal and a 30 degree slant angle, the obliquity factor can be responsible for a 5.5 mm variation in the event position. Results indicate that both geometrical and stochastic effects of the DOI degrade camera performance and should be included in the image formation process.
A.H. Curiale (Ariel H.); G. Vegas-Sanchez-Ferrero (Gonzalo); J.G. Bosch (Johan); S. Aja-Fernández (Santiago)
2015-01-01
The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle b
Cortesi, A.; Merrifield, M. R.; Arnaboldi, M.; Gerhard, O.; Martinez-Valpuesta, I.; Saha, K.; Coccato, L.; Bamford, S.; Napolitano, N. R.; Das, P.; Douglas, N. G.; Romanowsky, A. J.; Kuijken, K.; Capaccioli, M.; Freeman, K. C.
2011-01-01
To investigate the origins of S0 galaxies, we present a new method of analysing their stellar kinematics from discrete tracers such as planetary nebulae. This method involves binning the data in the radial direction so as to extract the most general possible non-parametric kinematic profiles, and us
Estimation of Spatial Sample Selection Models : A Partial Maximum Likelihood Approach
Rabovic, Renata; Cizek, Pavel
2016-01-01
To analyze data obtained by non-random sampling in the presence of cross-sectional dependence, estimation of a sample selection model with a spatial lag of a latent dependent variable or a spatial error in both the selection and outcome equations is considered. Since there is no estimation framework
Maximum Likelihood Estimation in Latent Class Models For Contingency Table Data
Fienberg, S.E.; Hersh, P.; Rinaldo, A.; Zhou, Y.
2007-01-01
Statistical models with latent structure have a history going back to the 1950s and have seen widespread use in the social sciences and, more recently, in computational biology and in machine learning. Here we study the basic latent class model proposed originally by the sociologist Paul F. Lazarsfeld for categorical variables, and we explain its geometric structure. We draw parallels between the statistical and geometric properties of latent class models and we illustrate geometrically the ca...
Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds
Conroy, M.J.; Morgan, B.J.T.; North, P.M.
1985-01-01
It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate what this reporting rate is, by comparison of recoveries of rings offering a monetary reward, to ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design, and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.
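The estimation and likelihood-ratio testing described in this abstract can be sketched in a few lines. In a simplified two-stratum version (assuming, as in reward studies, that reward rings are always reported), the unconstrained MLEs have closed form, and a constant-reporting-rate model can be compared by likelihood ratio. All counts below are invented for illustration and are not from the Black Duck study; the grid search stands in for a proper numerical optimizer.

```python
import math

def loglik(lam, f, reward, ordinary):
    """Binomial log-likelihood for one stratum.

    reward   = (recoveries, ringed) for reward rings, reported with probability f
    ordinary = (recoveries, ringed) for ordinary rings, reported with probability lam*f
    """
    rr, nr = reward
    ro, no = ordinary
    return (rr * math.log(f) + (nr - rr) * math.log(1 - f)
            + ro * math.log(lam * f) + (no - ro) * math.log(1 - lam * f))

# hypothetical data: ((recovered, ringed) reward, (recovered, ringed) ordinary) per stratum
data = [((40, 100), (120, 1000)),   # stratum 1 (e.g. year 1)
        ((50, 100), (200, 1000))]   # stratum 2 (e.g. year 2)

# unconstrained MLEs are closed form: f_hat = rr/nr, lam_hat = (ro/no) / f_hat
fits = []
for (rr, nr), (ro, no) in data:
    f_hat = rr / nr
    fits.append(((ro / no) / f_hat, f_hat))
ll_full = sum(loglik(l, f, rew, od) for (l, f), (rew, od) in zip(fits, data))

# constrained model: one common reporting rate lam, stratum-specific recovery rates f
def profile(lam):
    grid = [i / 400 for i in range(1, 400)]
    return sum(max(loglik(lam, f, rew, od) for f in grid)
               for rew, od in data)

ll_constr = max(profile(i / 400) for i in range(1, 400))
lrt = 2 * (ll_full - ll_constr)   # ~ chi-square with 1 df if reporting rate is constant
```

A large `lrt` value would reject the constant-reporting-rate hypothesis, mirroring the unconstrained-versus-constrained model comparison the abstract describes.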
Efficient strategies for genome scanning using maximum-likelihood affected-sib-pair analysis
Energy Technology Data Exchange (ETDEWEB)
Holmans, P.; Craddock, N. [Univ. of Wales College of Medicine, Cardiff (United Kingdom)
1997-03-01
Detection of linkage with a systematic genome scan in nuclear families including an affected sibling pair is an important initial step on the path to cloning susceptibility genes for complex genetic disorders, and it is desirable to optimize the efficiency of such studies. The aim is to maximize power while simultaneously minimizing the total number of genotypings and probability of type I error. One approach to increase efficiency, which has been investigated by other workers, is grid tightening: a sample is initially typed using a coarse grid of markers, and promising results are followed up by use of a finer grid. Another approach, not previously considered in detail in the context of an affected-sib-pair genome scan for linkage, is sample splitting: a portion of the sample is typed in the screening stage, and promising results are followed up in the whole sample. In the current study, we have used computer simulation to investigate the relative efficiency of two-stage strategies involving combinations of both grid tightening and sample splitting and found that the optimal strategy incorporates both approaches. In general, typing half the sample of affected pairs with a coarse grid of markers in the screening stage is an efficient strategy under a variety of conditions. If Hardy-Weinberg equilibrium holds, it is most efficient not to type parents in the screening stage. If Hardy-Weinberg equilibrium does not hold (e.g., because of stratification) failure to type parents in the first stage increases the amount of genotyping required, although the overall probability of type I error is not greatly increased, provided the parents are used in the final analysis. 23 refs., 4 figs., 5 tabs.
A new maximum likelihood blood velocity estimator incorporating spatial and temporal correlation
DEFF Research Database (Denmark)
Schlaikjer, Malene; Jensen, Jørgen Arendt
2001-01-01
performance evaluation on in-vivo data further reveals that the number of highly deviating velocity estimates in the tissue parts of the RF-signals are reduced with the STC-MLE. In general the resulting profiles are continuous and more consistent with the true velocity profile, and the introduction of the...
Shterev, I.D.; Lagendijk, R.L.
2005-01-01
Quantization-based watermarking schemes comprise a class of watermarking schemes that achieves the channel capacity in terms of additive noise attacks.1 The existence of good high dimensional lattices that can be efficiently implemented2–4 and incorporated into watermarking structures, made quantiza
Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.
2010-01-01
We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power, trunca
Block Network Error Control Codes and Syndrome-based Complete Maximum Likelihood Decoding
Bahramgiri, Hossein
2008-01-01
In this paper, network error control coding is studied for robust and efficient multicast in a directed acyclic network with imperfect links. The block network error control coding framework, BNEC, is presented and the capability of the scheme to correct a mixture of symbol errors and packet erasures and to detect symbol errors is studied. The idea of syndrome-based decoding and error detection is introduced for BNEC, which removes the effect of input data and hence decreases the complexity. Next, an efficient three-stage syndrome-based BNEC decoding scheme for network error correction is proposed, in which prior to finding the error values, the position of the edge errors are identified based on the error spaces at the receivers. In addition to bounded-distance decoding schemes for error correction up to the refined Singleton bound, a complete decoding scheme for BNEC is also introduced. Specifically, it is shown that using the proposed syndrome-based complete decoding, a network error correcting code with r...
Maximum-Likelihood Sequence Detector for Dynamic Mode High Density Probe Storage
Kumar, Naveen; Ramamoorthy, Aditya; Salapaka, Murti
2009-01-01
There is an ever increasing need for storing data in smaller and smaller form factors driven by the ubiquitous use and increased demands of consumer electronics. A new approach of achieving a few Tb per in2 areal densities, utilizes a cantilever probe with a sharp tip that can be used to deform and assess the topography of the material. The information may be encoded by means of topographic profiles on a polymer medium. The prevalent mode of using the cantilever probe is the static mode that is known to be harsh on the probe and the media. In this paper, the high quality factor dynamic mode operation, which is known to be less harsh on the media and the probe, is analyzed for probe based high density data storage purposes. It is demonstrated that an appropriate level of abstraction is possible that obviates the need for an involved physical model. The read operation is modeled as a communication channel which incorporates the inherent system memory due to the intersymbol interference and the cantilever state ...
D. Molenaar; S. van der Sluis; D.I. Boomsma; C.V. Dolan
2012-01-01
Considerable effort has been devoted to the analysis of genotype by environment (G × E) interactions in various phenotypic domains, such as cognitive abilities and personality. In many studies, environmental variables were observed (measured) variables. In case of an unmeasured environment, van der
Improved Maximum Likelihood S-FSK Receiver for PLC Modem in AMR
Directory of Open Access Journals (Sweden)
Mohamed Chaker Bali
2012-01-01
Full Text Available This paper deals with an optimized software implementation of a narrowband power line modem. The modem is a node in an automatic meter reading (AMR) system compliant with the IEC 61334-5-1 profile and operates in the CENELEC-A band. Because of the hostile communication environment of the power line channel, a new design approach is carried out for an S-FSK demodulator capable of providing a lower bit error rate (BER) than the standard specifies. The best compromise between efficiency and architecture complexity is investigated in this paper. Some implementation results are presented to show that a communication throughput of 9.6 kbps is reachable with the designed S-FSK modem.
3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm
Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia
2015-01-01
Positron emission tomographs (PET) do not measure an image directly. Instead, they measure, at the boundary of the field-of-view (FOV) of the PET tomograph, a sinogram that consists of measurements of the sums of all the counts along the lines connecting two detectors. As there is a multitude of detectors built into a typical PET tomograph, there are many possible detector pairs that pertain to the measurement. The problem is how to turn this measurement into an image (this is called imaging). A decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques. This stage was reached already twenty years ago (with the advent of new powerful computing processors). However, three-dimensional (3D) imaging still remains a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines-of-response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
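The iterative reconstruction referred to above can be illustrated with the classic MLEM multiplicative update on a toy problem. The 30×10 system matrix and the simulated counts below are random stand-ins for a real scanner geometry, so this is a sketch of the update rule only, not a usable reconstructor.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "system matrix": A[i, j] ~ sensitivity of line-of-response i to voxel j
# (a real scanner geometry would supply this)
A = rng.random((30, 10))
x_true = 5.0 * rng.random(10)          # unknown activity per voxel
y = rng.poisson(A @ x_true)            # measured counts per LOR (sinogram bins)

def poisson_ll(x):
    ax = A @ x
    return float(np.sum(y * np.log(ax) - ax))

x = np.ones(10)                        # uniform initial image
sens = A.sum(axis=0)                   # voxel sensitivities, sum_i A[i, j]
ll_start = poisson_ll(x)
for _ in range(200):
    proj = A @ x                       # forward-project the current image
    x = x / sens * (A.T @ (y / proj))  # multiplicative MLEM update
ll_end = poisson_ll(x)
```

The update preserves non-negativity and, by EM theory, increases the Poisson log-likelihood at every iteration, which is why MLEM-type methods displaced analytic back-projection for the low-count data the abstract describes.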
Estimating Water Demand in Urban Indonesia: A Maximum Likelihood Approach to block Rate Pricing Data
Rietveld, Piet; Rouwendal, Jan; Zwart, Bert
1997-01-01
In this paper the Burtless and Hausman model is used to estimate water demand in Salatiga, Indonesia. Other statistical models, such as OLS and IV, are found to be inappropriate. A topic which does not seem to appear in previous studies is the fact that the density function of the log-likelihood can be m
DEFF Research Database (Denmark)
Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard;
2008-01-01
that the specific growth rate is the same for all bacteria strains. This study highlights the importance of carrying out an explorative examination of residuals in order to make a correct parametrization of a model including the covariance structure. The ML method is shown to be a strong tool as it enables...
Identification and Mapping of Tree Species in Urban Areas Using WORLDVIEW-2 Imagery
Mustafa, Y. T.; Habeeb, H. N.; Stein, A.; Sulaiman, F. Y.
2015-10-01
Monitoring and mapping of urban trees are essential to provide urban forestry authorities with timely and consistent information. Modern techniques increasingly facilitate these tasks, but require the development of semi-automatic tree detection and classification methods. In this article, we propose an approach to delineate and map the crowns of 15 tree species in the city of Duhok, Kurdistan Region of Iraq, using WorldView-2 (WV-2) imagery. A tree crown object is identified first and is subsequently delineated as an image object (IO) using vegetation indices and texture measurements. Next, three classification methods (Maximum Likelihood, Neural Network, and Support Vector Machine) were used to classify IOs using selected IO features. The best results were obtained with Support Vector Machine classification, which gives the most accurate map of urban tree species in Duhok. The overall accuracy was between 60.93% and 88.92%, and the κ-coefficient was between 0.57 and 0.75. We conclude that the fifteen tree species were identified and mapped with satisfactory accuracy in the urban areas of this study.
Decoding with approximate channel statistics for band-limited nonlinear satellite channels
Biederman, L.; Omura, J. K.; Jain, P. C.
1981-11-01
Expressions for the cutoff rate of memoryless channels and certain channels with memory are derived assuming decoding with approximate channel statistics. For channels with memory, two different decoding techniques are examined: conventional decoders in conjunction with ideal interleaving/deinterleaving, and maximum likelihood decoders that take advantage of the channel memory. As a practical case of interest, the cutoff rate for the band-limited nonlinear satellite channel is evaluated where the modulation is assumed to be M-ary phase shift keying (MPSK). The channel nonlinearity is introduced by a limiter in cascade with a traveling wave tube amplifier (TWTA) at the satellite repeater while the channel memory is created by channel filters in the transmission path.
Species tree estimation for the late blight pathogen, Phytophthora infestans, and close relatives.
Directory of Open Access Journals (Sweden)
Jaime E Blair
Full Text Available To better understand the evolutionary history of a group of organisms, an accurate estimate of the species phylogeny must be known. Traditionally, gene trees have served as a proxy for the species tree, although it was acknowledged early on that these trees represented different evolutionary processes. Discordances among gene trees and between the gene trees and the species tree are also expected in closely related species that have rapidly diverged, due to processes such as the incomplete sorting of ancestral polymorphisms. Recently, methods have been developed for the explicit estimation of species trees, using information from multilocus gene trees while accommodating heterogeneity among them. Here we have used three distinct approaches to estimate the species tree for five Phytophthora pathogens, including P. infestans, the causal agent of late blight disease in potato and tomato. Our concatenation-based "supergene" approach was unable to resolve relationships even with data from both the nuclear and mitochondrial genomes, and from multiple isolates per species. Our multispecies coalescent approach using both Bayesian and maximum likelihood methods was able to estimate a moderately supported species tree showing a close relationship among P. infestans, P. andina, and P. ipomoeae. The topology of the species tree was also identical to the dominant phylogenetic history estimated in our third approach, Bayesian concordance analysis. Our results support previous suggestions that P. andina is a hybrid species, with P. infestans representing one parental lineage. The other parental lineage is not known, but represents an independent evolutionary lineage more closely related to P. ipomoeae. While all five species likely originated in the New World, further study is needed to determine when and under what conditions this hybridization event may have occurred.
Tolman, Marvin
2005-01-01
Students love outdoor activities and will love them even more when they build confidence in their tree identification and measurement skills. Through these activities, students will learn to identify the major characteristics of trees and discover how the pace--a nonstandard measuring unit--can be used to estimate not only distances but also the…
Diophantine approximation and badly approximable sets
DEFF Research Database (Denmark)
Kristensen, S.; Thorn, R.; Velani, S.
2006-01-01
. The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension....... Applications of our general framework include those from number theory (classical, complex, p-adic and formal power series) and dynamical systems (iterated function schemes, rational maps and Kleinian groups)....
Elosua, Miguel
2013-01-01
Puxi's streets are lined with plane trees, especially in the former French Concession (and particularly in the Luwan and Xuhui districts). There are a few different varieties of plane tree, but the one found in Shanghai is the hybrid Platanus × hispanica. In China they are called French plane trees (faguo wutong - 法国梧桐), for they were first planted along the Avenue Joffre (now Huai Hai lu - 淮海路) in 1902 by the French. Their life span is long, over a thousand years, and they may grow as high as ...
DEFF Research Database (Denmark)
Appelt, Ane L; Rønde, Heidi S
2013-01-01
The photo shows a close-up of a Lichtenberg figure – popularly called an “electron tree” – produced in a cylinder of polymethyl methacrylate (PMMA). Electron trees are created by irradiating a suitable insulating material, in this case PMMA, with an intense high energy electron beam. Upon discharge......, during dielectric breakdown in the material, the electrons generate branching chains of fractures on leaving the PMMA, producing the tree pattern seen. To be able to create electron trees with a clinical linear accelerator, one needs to access the primary electron beam used for photon treatments. We...... appropriated a linac that was being decommissioned in our department and dismantled the head to circumvent the target and ion chambers. This is one of 24 electron trees produced before we had to stop the fun and allow the rest of the accelerator to be disassembled....
Game tree algorithms and solution trees
W.H.L.M. Pijls (Wim); A. de Bruin (Arie)
1998-01-01
In this paper, a theory of game tree algorithms is presented, entirely based upon the concept of solution tree. Two types of solution trees are distinguished: max and min trees. Every game tree algorithm tries to prune as many nodes as possible from the game tree. A cut-off criterion in
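The pruning idea this abstract alludes to can be sketched by comparing plain minimax with alpha-beta pruning on a small hand-made game tree (nested lists; leaves are payoffs for the maximizing player). The tree below is hypothetical, and this is the textbook algorithm, not the paper's solution-tree formalism.

```python
def minimax(node, maximizing):
    if not isinstance(node, list):          # leaf: payoff for the max player
        return node
    vals = [minimax(c, not maximizing) for c in node]
    return max(vals) if maximizing else min(vals)

def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):
        return node
    if maximizing:
        v = float('-inf')
        for c in node:
            v = max(v, alphabeta(c, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:               # the min player already has a better option
                break                       # -> prune the remaining children
        return v
    v = float('inf')
    for c in node:
        v = min(v, alphabeta(c, alpha, beta, True))
        beta = min(beta, v)
        if alpha >= beta:                   # the max player already has a better option
            break
    return v

tree = [[3, 5], [2, [8, 9]], [0, 1]]        # root is a max node
value = minimax(tree, True)
```

A cut-off discards subtrees that cannot appear in any optimal strategy, so both functions return the same root value while alpha-beta examines fewer nodes.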
A full scale approximation of covariance functions for large spatial data sets
Sang, Huiyan
2011-10-10
Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n 3) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
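The two-part construction described above can be sketched in a few lines for a 1-D exponential covariance. The grid, knot locations, and taper range below are arbitrary illustrations, and the reduced-rank part is a simple predictive-process (Nyström-type) reduction rather than the authors' exact construction.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 200)                 # observation locations
knots = np.linspace(0.0, 10.0, 15)              # knots for the reduced-rank part

def expcov(a, b, rng=2.0):
    """Exponential covariance between two sets of 1-D locations."""
    return np.exp(-np.abs(a[:, None] - b[None, :]) / rng)

def spherical_taper(a, gamma=1.5):
    """Compactly supported spherical correlation (positive definite in 1-3 dims)."""
    d = np.abs(a[:, None] - a[None, :])
    t = 1.0 - 1.5 * d / gamma + 0.5 * (d / gamma) ** 3
    return np.where(d < gamma, t, 0.0)

C = expcov(x, x)                                # exact covariance
Cxk = expcov(x, knots)
Ckk = expcov(knots, knots)
C_lr = Cxk @ np.linalg.solve(Ckk, Cxk.T)        # reduced-rank part (large scale)
C_fs = C_lr + spherical_taper(x) * (C - C_lr)   # plus tapered residual (small scale)

err_lr = np.linalg.norm(C - C_lr)               # reduced rank alone
err_fs = np.linalg.norm(C - C_fs)               # full-scale approximation
```

Because the taper is applied to the residual of the reduced-rank fit, the combined approximation recovers the small-scale, near-diagonal dependence that the low-rank part misses, while both summands remain positive semidefinite and computationally cheap (low rank plus sparse).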
Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm
Jin, Ick Hoon
2013-10-01
The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, while SAMCMC can; for nondegenerate ERGMs, SAMCMC can work as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
Interpreting Tree Ensembles with inTrees
Deng, Houtao
2014-01-01
Tree ensembles such as random forests and boosted trees are accurate but difficult to understand, debug and deploy. In this work, we provide the inTrees (interpretable trees) framework that extracts, measures, prunes and selects rules from a tree ensemble, and calculates frequent variable interactions. A rule-based learner, referred to as the simplified tree ensemble learner (STEL), can also be formed and used for future prediction. The inTrees framework can be applied to both classification an...
Evaluation of Gaussian approximations for data assimilation in reservoir models
Iglesias, Marco A.
2013-07-14
The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geologic parameters of the reservoir, the characterization of the posterior is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used for characterizing the posterior distribution. Evaluating the performance of those Gaussian approximations is typically conducted by assessing their ability at reproducing the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of mixing up the approximation properties of the history matching algorithm employed with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the art Markov chain Monte Carlo (MCMC) method that scales well with respect to the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement. Our
Leike, Reimar H
2016-01-01
In Bayesian statistics probability distributions express beliefs. However, for many problems the beliefs cannot be computed analytically and approximations of beliefs are needed. We seek a ranking function that quantifies how "embarrassing" it is to communicate a given approximation. We show that there is only one ranking under the requirements that (1) the best ranked approximation is the non-approximated belief and (2) that the ranking judges approximations only by their predictions for actual outcomes. We find that this ranking is equivalent to the Kullback-Leibler divergence that is frequently used in the literature. However, there seems to be confusion about the correct order in which its functional arguments, the approximated and non-approximated beliefs, should be used. We hope that our elementary derivation settles the apparent confusion. We show for example that when approximating beliefs with Gaussian distributions the optimal approximation is given by moment matching. This is in contrast to many su...
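The moment-matching claim is easy to verify numerically: for a Gaussian q = N(μ, σ²), KL(p‖q) differs from the cross-entropy −E_p[log q] only by a term independent of q, and that cross-entropy depends on p only through its first two moments. The two-component mixture below is an arbitrary example chosen for illustration.

```python
import math

# p: an arbitrary Gaussian mixture, 0.5*N(-2, 1) + 0.5*N(2, 1)
w, mu, s2 = [0.5, 0.5], [-2.0, 2.0], [1.0, 1.0]
m1 = sum(wi * mi for wi, mi in zip(w, mu))                       # E_p[x]   = 0
m2 = sum(wi * (si + mi * mi) for wi, mi, si in zip(w, mu, s2))   # E_p[x^2] = 5

def cross_entropy(q_mu, q_s2):
    """-E_p[log q] for q = N(q_mu, q_s2); only p's first two moments enter."""
    return (0.5 * math.log(2 * math.pi * q_s2)
            + (m2 - 2 * q_mu * m1 + q_mu ** 2) / (2 * q_s2))

var = m2 - m1 ** 2
best = cross_entropy(m1, var)       # the moment-matched Gaussian
```

Since minimizing the cross-entropy is equivalent to minimizing KL(p‖q) in this argument order, the moment-matched Gaussian beats any perturbed mean or variance; the reverse order KL(q‖p) would instead favor locking onto a single mixture mode, which is exactly the argument-order confusion the abstract addresses.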
Canfield, Elaine
2002-01-01
Describes a fifth-grade art activity that offers a new approach to creating pictures of Aspen trees. Explains that the students learned about art concepts, such as line and balance, in this lesson. Discusses the process in detail for creating the pictures. (CMK)
Bin Qin
2014-01-01
Relationships between fuzzy relations and fuzzy topologies are investigated in depth. The concept of fuzzy approximating spaces is introduced, and conditions under which a fuzzy topological space is a fuzzy approximating space are obtained.
Rasin, A
1994-01-01
We discuss the idea of approximate flavor symmetries. The relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a
Unimodular trees versus Einstein trees
Álvarez, Enrique; González-Martín, Sergio; Martín, Carmelo P.
2016-10-01
The maximally helicity violating tree-level scattering amplitudes involving three, four or five gravitons are worked out in Unimodular Gravity. They are found to coincide with the corresponding amplitudes in General Relativity. This is a remarkable result, insofar as both the propagators and the vertices are quite different in the two theories.
Directory of Open Access Journals (Sweden)
Drew W Purves
Full Text Available BACKGROUND: Canopy structure, which can be defined as the sum of the sizes, shapes and relative placements of the tree crowns in a forest stand, is central to all aspects of forest ecology. But there is no accepted method for deriving canopy structure from the sizes, species and biomechanical properties of the individual trees in a stand. Any such method must capture the fact that trees are highly plastic in their growth, forming tessellating crown shapes that fill all or most of the canopy space. METHODOLOGY/PRINCIPAL FINDINGS: We introduce a new, simple and rapidly-implemented model--the Ideal Tree Distribution, ITD--with tree form (height allometry and crown shape, growth plasticity, and space-filling, at its core. The ITD predicts the canopy status (in or out of canopy, crown depth, and total and exposed crown area of the trees in a stand, given their species, sizes and potential crown shapes. We use maximum likelihood methods, in conjunction with data from over 100,000 trees taken from forests across the coterminous US, to estimate ITD model parameters for 250 North American tree species. With only two free parameters per species--one aggregate parameter to describe crown shape, and one parameter to set the so-called depth bias--the model captures between-species patterns in average canopy status, crown radius, and crown depth, and within-species means of these metrics vs stem diameter. The model also predicts much of the variation in these metrics for a tree of a given species and size, resulting solely from deterministic responses to variation in stand structure. CONCLUSIONS/SIGNIFICANCE: This new model, with parameters for US tree species, opens up new possibilities for understanding and modeling forest dynamics at local and regional scales, and may provide a new way to interpret remote sensing data of forest canopies, including LIDAR and aerial photography.
Approximation of distributed delays
Lu, Hao; Eberard, Damien; Simon, Jean-Pierre
2010-01-01
We address in this paper the approximation problem of distributed delays. Such elements are convolution operators with kernel having bounded support, and appear in the control of time-delay systems. From the rich literature on this topic, we propose a general methodology to achieve such an approximation. For this, we enclose the approximation problem in the graph topology, and work with the norm defined over the convolution Banach algebra. The class of rational approximates is described, and a constructive approximation is proposed. Analysis in time and frequency domains is provided. This methodology is illustrated on the stabilization control problem, for which simulation results show the effectiveness of the proposed methodology.
Sparse approximation with bases
2015-01-01
This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications. The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...
Visualizing Contour Trees within Histograms
DEFF Research Database (Denmark)
Kraus, Martin
2010-01-01
Many of the topological features of the isosurfaces of a scalar volume field can be compactly represented by its contour tree. Unfortunately, the contour trees of most real-world volume data sets are too complex to be visualized by dot-and-line diagrams. Therefore, we propose a new visualization that is suitable for large contour trees and efficiently conveys the topological structure of the most important isosurface components. This visualization is integrated into a histogram of the volume data; thus, it offers strictly more information than a traditional histogram. We present algorithms to automatically compute the graph layout and to calculate appropriate approximations of the contour tree and the surface area of the relevant isosurface components. The benefits of this new visualization are demonstrated with the help of several publicly available volume data sets.
The Impact of Missing Data on Species Tree Estimation.
Xi, Zhenxiang; Liu, Liang; Davis, Charles C
2016-03-01
Phylogeneticists are increasingly assembling genome-scale data sets that include hundreds of genes to resolve their focal clades. Although these data sets commonly include a moderate to high amount of missing data, there remains no consensus on their impact on species tree estimation. Here, using several simulated and empirical data sets, we assess the effects of missing data on species tree estimation under varying degrees of incomplete lineage sorting (ILS) and gene rate heterogeneity. We demonstrate that concatenation (RAxML), gene-tree-based coalescent (ASTRAL, MP-EST, and STAR), and supertree (matrix representation with parsimony [MRP]) methods perform reliably, so long as missing data are randomly distributed (by gene and/or by species) and a sufficiently large number of genes is sampled. When data sets are indecisive sensu Sanderson et al. (2010. Phylogenomics with incomplete taxon coverage: the limits to inference. BMC Evol Biol. 10:155) and/or ILS is high, however, high amounts of missing data that are randomly distributed require exhaustive levels of gene sampling, likely exceeding most empirical studies to date. Moreover, missing data become especially problematic when they are nonrandomly distributed. We demonstrate that STAR produces inconsistent results when the amount of nonrandom missing data is high, regardless of the degree of ILS and gene rate heterogeneity. Similarly, concatenation methods using maximum likelihood can be misled by nonrandom missing data in the presence of gene rate heterogeneity, which becomes further exacerbated when combined with high ILS. In contrast, ASTRAL, MP-EST, and MRP are more robust under all of these scenarios. These results underscore the importance of understanding the influence of missing data in the phylogenomics era. PMID:26589995
Object based technique for delineating and mapping 15 tree species using VHR WorldView-2 imagery
Mustafa, Yaseen T.; Habeeb, Hindav N.
2014-10-01
Monitoring and analyzing forests and trees are required tasks for managing and establishing a good plan for forest sustainability. Achieving such a task requires collecting information and data on the trees. The fastest and relatively low-cost technique is satellite remote sensing. In this study, we propose an approach to identify and map 15 tree species in the Mangish sub-district, Kurdistan Region-Iraq. Image-objects (IOs) were used as the tree species mapping unit. This is achieved using the shadow index, normalized difference vegetation index and texture measurements. Four classification methods (Maximum Likelihood, Mahalanobis Distance, Neural Network, and Spectral Angle Mapper) were used to classify IOs using selected IO features derived from WorldView-2 imagery. Results showed that overall accuracy increased by 5-8% using the Neural Network method compared with the other methods, with a Kappa coefficient of 69%. This technique gives reasonable results for various tree species classifications by applying the Neural Network method with IO techniques on WorldView-2 imagery.
Negative Tree Reweighted Belief Propagation
Liu, Qiang
2012-01-01
We introduce a new class of lower bounds on the log partition function of a Markov random field which makes use of a reversed Jensen's inequality. In particular, our method approximates the intractable distribution using a linear combination of spanning trees with negative weights. This technique is a lower-bound counterpart to the tree-reweighted belief propagation algorithm, which uses a convex combination of spanning trees with positive weights to provide corresponding upper bounds. We develop algorithms to optimize and tighten the lower bounds over the non-convex set of valid parameter values. Our algorithm generalizes mean field approaches (including naive and structured mean field approximations), which it includes as a limiting case.
TreeFam: a curated database of phylogenetic trees of animal gene families
DEFF Research Database (Denmark)
Li, Heng; Coghlan, Avril; Ruan, Jue;
2006-01-01
TreeFam is a database of phylogenetic trees of gene families found in animals. It aims to develop a curated resource that presents the accurate evolutionary history of all animal gene families, as well as reliable ortholog and paralog assignments. Curated families are being added progressively, based on seed alignments and trees in a similar fashion to Pfam. Release 1.1 of TreeFam contains curated trees for 690 families and automatically generated trees for another 11 646 families. These represent over 128 000 genes from nine fully sequenced animal genomes and over 45 000 other animal proteins from UniProt; approximately 40-85% of proteins encoded in the fully sequenced animal genomes are included in TreeFam. TreeFam is freely available at http://www.treefam.org and http://treefam.genomics.org.cn.
Institute of Scientific and Technical Information of China (English)
YueShihong; ZhangKecun
2002-01-01
In a dot-product space with a reproducing kernel (r.k.S.), a fuzzy system with estimates of its approximation errors is proposed, which overcomes the defect that existing fuzzy control systems make it difficult to estimate the approximation error for a desired function, while keeping the characteristics of a fuzzy system as an inference approach. The structure of the new fuzzy approximator can also benefit from results obtained by other means.
Approximation techniques for engineers
Komzsik, Louis
2006-01-01
Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data or continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.
Finite Sholander Trees, Trees, and their Betweenness
Chvátal, Vašek; Schäfer, Philipp Matthias
2011-01-01
We provide a proof of Sholander's claim (Trees, lattices, order, and betweenness, Proc. Amer. Math. Soc. 3, 369-381 (1952)) concerning the representability of collections of so-called segments by trees, which yields a characterization of the interval function of a tree. Furthermore, we streamline Burigana's characterization (Tree representations of betweenness relations defined by intersection and inclusion, Mathematics and Social Sciences 185, 5-36 (2009)) of tree betweenness and provide a relatively short proof.
Expectation Consistent Approximate Inference
DEFF Research Database (Denmark)
Opper, Manfred; Winther, Ole
2005-01-01
We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood as replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability distributions.
Ordered cones and approximation
Keimel, Klaus
1992-01-01
This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.
Approximate Modified Policy Iteration
Scherrer, Bruno; Ghavamzadeh, Mohammad; Geist, Matthieu
2012-01-01
Modified policy iteration (MPI) is a dynamic programming (DP) algorithm that contains the two celebrated policy and value iteration methods. Despite its generality, MPI has not been thoroughly studied, especially its approximation form which is used when the state and/or action spaces are large or infinite. In this paper, we propose three approximate MPI (AMPI) algorithms that are extensions of the well-known approximate DP algorithms: fitted-value iteration, fitted-Q iteration, and classification-based policy iteration. We provide an error propagation analysis for AMPI that unifies those for approximate policy and value iteration. We also provide a finite-sample analysis for the classification-based implementation of AMPI (CBMPI), which is more general than (and in some sense contains) the analyses of the other presented AMPI algorithms. An interesting observation is that the MPI's parameter allows us to control the balance of errors (in value function approximation and in estimating the greedy policy) in the fina...
DEFF Research Database (Denmark)
Bahr, Patrick
2012-01-01
Tree automata are traditionally used to study properties of tree languages and tree transformations. In this paper, we consider tree automata as the basis for modular and extensible recursion schemes. We show, using well-known techniques, how to derive from standard tree automata highly modular recursion schemes...
The Karlqvist approximation revisited
Tannous, C.
2015-01-01
The Karlqvist approximation, signaling the historical beginning of magnetic recording head theory, is reviewed and compared to various approaches progressing from Green's function, Fourier, and conformal-mapping methods; the latter obeys the Sommerfeld edge condition at angular points and leads to exact results.
Approximations in Inspection Planning
DEFF Research Database (Denmark)
Engelund, S.; Sørensen, John Dalsgaard; Faber, M. H.;
2000-01-01
Planning of inspections of civil engineering structures may be performed within the framework of Bayesian decision analysis. The effort involved in a full Bayesian decision analysis is relatively large. Therefore, the actual inspection planning is usually performed using a number of approximations. One of the more important of these approximations is the assumption that all inspections will reveal no defects. Using this approximation the optimal inspection plan may be determined on the basis of conditional probabilities, i.e. the probability of failure given no defects have been found by the inspection. In this paper the quality of this approximation is investigated. The inspection planning is formulated both as a full Bayesian decision problem and on the basis of the assumption that the inspection will reveal no defects.
Directory of Open Access Journals (Sweden)
Malvina Baica
1985-01-01
Full Text Available The author uses a new modification of the Jacobi-Perron Algorithm which holds for complex fields of any degree (abbr. ACF), and defines it as the Generalized Euclidean Algorithm (abbr. GEA) to approximate irrationals.
Approximation Behooves Calibration
DEFF Research Database (Denmark)
da Silva Ribeiro, André Manuel; Poulsen, Rolf
2013-01-01
Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.
Gautschi, Walter; Rassias, Themistocles M
2011-01-01
Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg
Directory of Open Access Journals (Sweden)
Jose Javier Gorgoso-Varela
2016-04-01
Full Text Available Aim of study: In this study we compare the accuracy of three bivariate distributions: Johnson's SBB, Weibull-2P and LL-2P functions for characterizing the joint distribution of tree diameters and heights. Area of study: North-West of Spain. Material and methods: Diameter and height measurements of 128 plots of pure and even-aged Tasmanian blue gum (Eucalyptus globulus Labill.) stands located in the North-west of Spain were considered in the present study. The SBB bivariate distribution was obtained from SB marginal distributions using a Normal Copula based on a four-parameter logistic transformation. The Plackett Copula was used to obtain the bivariate models from the Weibull and Logit-logistic univariate marginal distributions. The negative logarithm of the maximum likelihood function was used to compare the results and the Wilcoxon signed-rank test was used to compare the related samples of these logarithms calculated for each sample plot and each distribution. Main results: The best results were obtained by using the Plackett copula and the best marginal distribution was the Logit-logistic. Research highlights: The copulas used in this study have shown a good performance for modeling the joint distribution of tree diameters and heights. They could be easily extended for modelling multivariate distributions involving other tree variables, such as tree volume or biomass.
Calderon, Christopher P.; Janosi, Lorant; Kosztin, Ioan
2009-04-01
We demonstrate how the surrogate process approximation (SPA) method can be used to compute both the potential of mean force along a reaction coordinate and the associated diffusion coefficient using a relatively small number (10-20) of bidirectional nonequilibrium trajectories coming from a complex system. Our method provides confidence bands which take the variability of the initial configuration of the high-dimensional system, continuous nature of the work paths, and thermal fluctuations into account. Maximum-likelihood-type methods are used to estimate a stochastic differential equation (SDE) approximating the dynamics. For each observed time series, we estimate a new SDE resulting in a collection of SPA models. The physical significance of the collection of SPA models is discussed and methods for exploiting information in the population of estimated SPA models are demonstrated and suggested. Molecular dynamics simulations of potassium ion dynamics inside a gramicidin A channel are used to demonstrate the methodology, although SPA-type modeling has also proven useful in analyzing single-molecule experimental time series [J. Phys. Chem. B 113, 118 (2009)].
Analysis of Logic Programs Using Regular Tree Languages
DEFF Research Database (Denmark)
Gallagher, John Patrick
2012-01-01
The field of finite tree automata provides fundamental notations and tools for reasoning about sets of terms called regular or recognizable tree languages. We consider two kinds of analysis using regular tree languages, applied to logic programs. The first approach is to try to discover automatically a tree automaton from a logic program, approximating its minimal Herbrand model. In this case the input for the analysis is a program, and the output is a tree automaton. The second approach is to expose or check properties of the program that can be expressed by a given tree automaton. The input...
On algorithm for building of optimal α-decision trees
Alkhalid, Abdulaziz
2010-01-01
The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relatively to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic programming and extends methods described in [4] to constructing approximate decision trees. Adjustable approximation rate allows controlling algorithm complexity. The algorithm is applied to build optimal α-decision trees for two data sets from UCI Machine Learning Repository [1]. © 2010 Springer-Verlag Berlin Heidelberg.
Bronchi, Bronchial Tree, & Lungs
In the mediastinum, at the level of the ... trachea. As the branching continues through the bronchial tree, the amount of hyaline cartilage in the walls ...
A Practical Algorithm for the Minimum Rectilinear Steiner Tree
Institute of Scientific and Technical Information of China (English)
MA Jun; YANG Bo; MA Shaohan
2000-01-01
An O(n^2) time approximation algorithm for the minimum rectilinear Steiner tree problem is proposed. The approximation ratio of the algorithm is strictly less than 1.5. Computational experiments show that the costs of the trees produced by the algorithm are only 0.8% away from the optimal ones.
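The 1.5 ratio above matches the classical bound relating a rectilinear minimum spanning tree to the optimal rectilinear Steiner tree, and an L1-metric MST is the usual starting point for such heuristics. As a minimal sketch (not the paper's algorithm), Prim's method under the Manhattan distance runs in O(n^2), matching the stated complexity:

```python
def l1(p, q):
    """Manhattan (rectilinear) distance between two points."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def rectilinear_mst(points):
    """Prim's algorithm under the L1 metric: O(n^2) time.
    Returns (total_cost, list_of_edges). The MST cost is a classical
    upper bound (within factor 1.5) on the optimal rectilinear
    Steiner tree cost."""
    n = len(points)
    in_tree = [False] * n
    dist = [float("inf")] * n
    parent = [-1] * n
    dist[0] = 0
    cost, edges = 0, []
    for _ in range(n):
        # pick the cheapest vertex not yet in the tree
        u = min((i for i in range(n) if not in_tree[i]),
                key=dist.__getitem__)
        in_tree[u] = True
        cost += dist[u]
        if parent[u] >= 0:
            edges.append((parent[u], u))
        # relax distances to the remaining vertices
        for v in range(n):
            if not in_tree[v]:
                d = l1(points[u], points[v])
                if d < dist[v]:
                    dist[v], parent[v] = d, u
    return cost, edges
```

A Steiner-tree heuristic would then improve this MST by inserting corner (Steiner) points along rectilinear edges.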
Greedy algorithm with weights for decision tree construction
Moshkov, Mikhail
2010-12-01
An approximate algorithm for minimization of the weighted depth of decision trees is considered. A bound on the accuracy of this algorithm is obtained which is unimprovable in the general case. Under some natural assumptions about the class NP, the considered algorithm is close (from the point of view of accuracy) to the best polynomial-time approximate algorithms for minimization of the weighted depth of decision trees.
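To illustrate the greedy style of decision-tree construction analyzed in such work (a hypothetical toy heuristic, not Moshkov's exact weighted-depth algorithm), the sketch below builds a tree by repeatedly choosing the attribute that separates the most row pairs with different decisions:

```python
def pairs_separated(rows, attr):
    """Count pairs of rows with different decisions that attr splits.
    rows is a list of (attribute_tuple, decision) pairs."""
    count = 0
    for i, (xi, yi) in enumerate(rows):
        for xj, yj in rows[i + 1:]:
            if yi != yj and xi[attr] != xj[attr]:
                count += 1
    return count

def build_tree(rows, attrs):
    """Greedy decision-tree construction: at each node pick the
    attribute separating the largest number of unequal-decision pairs
    (an illustrative criterion, not the paper's weighted one)."""
    decisions = {y for _, y in rows}
    if len(decisions) == 1:          # pure node: emit a leaf
        return decisions.pop()
    best = max(attrs, key=lambda a: pairs_separated(rows, a))
    node = {"attr": best, "children": {}}
    for value in {x[best] for x, _ in rows}:
        subset = [(x, y) for x, y in rows if x[best] == value]
        node["children"][value] = build_tree(
            subset, [a for a in attrs if a != best])
    return node

def classify(tree, x):
    """Follow internal nodes until a leaf decision is reached."""
    while isinstance(tree, dict):
        tree = tree["children"][x[tree["attr"]]]
    return tree
```

On a decision table for the Boolean AND function the resulting tree classifies every training row correctly with depth 2.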
Double unresolved approximations to multiparton scattering amplitudes
International Nuclear Information System (INIS)
We present approximations to tree-level multiparton scattering amplitudes which are appropriate when two partons are unresolved. These approximations are required for the analytic isolation of infrared singularities of n+2 parton scattering processes contributing to the next-to-next-to-leading order corrections to n jet cross sections. In each case the colour ordered matrix elements factorise and yield a function containing the singular factors multiplying the n-parton amplitudes. When the unresolved particles are colour unconnected, the approximations are simple products of the familiar eikonal and Altarelli-Parisi splitting functions used to describe single unresolved emission. However, when the unresolved particles are colour connected the factorisation is more complicated and we introduce new and general functions to describe the triple collinear and soft/collinear limits in addition to the known double soft gluon limits of Berends and Giele. As expected the triple collinear splitting functions obey an N=1 SUSY identity. To illustrate the use of these double unresolved approximations, we have examined the singular limits of the tree-level matrix elements for e+e- →5 partons when only three partons are resolved. When integrated over the unresolved regions of phase space, these expressions will be of use in evaluating the O(αs3) corrections to the three-jet rate in electron-positron annihilation. (orig.)
Diophantine approximations on fractals
Einsiedler, Manfred; Shapira, Uri
2009-01-01
We exploit dynamical properties of diagonal actions to derive results in Diophantine approximations. In particular, we prove that the continued fraction expansion of almost any point on the middle third Cantor set (with respect to the natural measure) contains all finite patterns (hence is well approximable). Similarly, we show that for a variety of fractals in [0,1]^2, possessing some symmetry, almost any point is not Dirichlet improvable (hence is well approximable) and has property C (after Cassels). We then settle by similar methods a conjecture of M. Boshernitzan saying that there are no irrational numbers x in the unit interval such that the continued fraction expansions of {nx mod 1 : n is a natural number} are uniformly eventually bounded.
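The continued fraction expansion central to these results is computed by the standard Euclidean-style algorithm; a minimal sketch in exact rational arithmetic (illustrative only, unrelated to the paper's fractal-measure machinery):

```python
from fractions import Fraction

def continued_fraction(x, n_terms=10):
    """Partial quotients [a0; a1, a2, ...] of a rational x,
    via repeated floor-and-invert in exact arithmetic."""
    terms = []
    x = Fraction(x)
    for _ in range(n_terms):
        a = x.numerator // x.denominator   # floor of x
        terms.append(a)
        frac = x - a
        if frac == 0:                      # expansion terminated
            break
        x = 1 / frac
    return terms

def convergent(terms):
    """Reassemble the rational value from its partial quotients."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value
```

For example, 355/113 (the classical approximation to pi) has expansion [3; 7, 16], and `convergent` inverts the process exactly.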
HapTree: a novel Bayesian framework for single individual polyplotyping using NGS data.
Directory of Open Access Journals (Sweden)
Emily Berger
2014-03-01
Full Text Available As the more recent next-generation sequencing (NGS) technologies provide longer read sequences, the use of sequencing datasets for complete haplotype phasing is fast becoming a reality, allowing haplotype reconstruction of a single sequenced genome. Nearly all previous haplotype reconstruction studies have focused on diploid genomes and are rarely scalable to genomes with higher ploidy. Yet computational investigations into polyploid genomes carry great importance, impacting plant, yeast and fish genomics, as well as the studies of the evolution of modern-day eukaryotes and (epi)genetic interactions between copies of genes. In this paper, we describe a novel maximum-likelihood estimation framework, HapTree, for polyploid haplotype assembly of an individual genome using NGS read datasets. We evaluate the performance of HapTree on simulated polyploid sequencing read data modeled after Illumina sequencing technologies. For triploid and higher ploidy genomes, we demonstrate that HapTree substantially improves haplotype assembly accuracy and efficiency over the state-of-the-art; moreover, HapTree is the first scalable polyplotyping method for higher ploidy. As a proof of concept, we also test our method on real sequencing data from NA12878 (1000 Genomes Project) and evaluate the quality of assembled haplotypes with respect to trio-based diplotype annotation as the ground truth. The results indicate that HapTree significantly improves the switch accuracy within phased haplotype blocks as compared to existing haplotype assembly methods, while producing comparable minimum error correction (MEC) values. A summary of this paper appears in the proceedings of the RECOMB 2014 conference, April 2-5.
Institute of Scientific and Technical Information of China (English)
Hui-Yong Jiang; Zhong-Xi Huang; Xue-Feng Zhang; Richard Desper; Tong Zhao
2007-01-01
AIM: To construct tree models for classification of diffuse large B-cell lymphomas (DLBCL) by chromosome copy numbers, to compare them with cDNA microarray classification, and to explore models of multi-gene, multi-step and multi-pathway processes of DLBCL tumorigenesis. METHODS: Maximum-weight branching and distance-based models were constructed based on the comparative genomic hybridization (CGH) data of 123 DLBCL samples using the established methods and software of Desper et al. A maximum likelihood tree model was also used to analyze the data. By comparing with the results reported in the literature, the value of tree models in the classification of DLBCL was elucidated. RESULTS: Both the branching and the distance-based trees classified DLBCL into three groups. We combined the classification methods of the two models and classified DLBCL into three categories according to their characteristics. The first group was marked by +Xq, +Xp, -17p and +13q; the second group by +3q, +18q and +18p; and the third group was marked by -6q and +6p. This chromosomal classification was consistent with the cDNA classification. It indicated that -6q and +3q were two main events in the tumorigenesis of lymphoma. CONCLUSION: Tree models of lymphoma established from CGH data can be used in the classification of DLBCL. These models can suggest multi-gene, multi-step and multi-pathway processes of tumorigenesis. Two pathways, -6q preceding +6q and +3q preceding +18q, may be important in understanding the tumorigenesis of DLBCL. The pathway, -6q preceding +6q, may have a close relationship with the tumorigenesis of non-GCB DLBCL.
Approximate and Incomplete Factorizations
Chan, T.F.; Vorst, H.A. van der
2001-01-01
In this chapter, we give a brief overview of a particular class of preconditioners known as incomplete factorizations. They can be thought of as approximating the exact LU factorization of a given matrix A (e.g. computed via Gaussian elimination) by disallowing certain fill-ins. As opposed to other PD
Prestack wavefield approximations
Alkhalifah, Tariq
2013-09-01
The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations free of such singularities that remain highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
White, Martin
2014-01-01
This year marks the 100th anniversary of the birth of Yakov Zel'dovich. Amongst his many legacies is the Zel'dovich approximation for the growth of large-scale structure, which remains one of the most successful and insightful analytic models of structure formation. We use the Zel'dovich approximation to compute the two-point function of the matter and biased tracers, and compare to the results of N-body simulations and other Lagrangian perturbation theories. We show that Lagrangian perturbation theories converge well and that the Zel'dovich approximation provides a good fit to the N-body results except for the quadrupole moment of the halo correlation function. We extend the calculation of halo bias to 3rd order and also consider non-local biasing schemes, none of which remove the discrepancy. We argue that a part of the discrepancy owes to an incorrect prediction of inter-halo velocity correlations. We use the Zel'dovich approximation to compute the ingredients of the Gaussian streaming model and show that ...
DEFF Research Database (Denmark)
Madsen, Rasmus Elsborg
2005-01-01
The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling for word burstiness in documents, is here investigated. A number of conceptual explanations that account for these recent results, are provided. An exponential family approximation of the DCM that...
Vogt, Peter R.
2004-09-01
Nature often replicates her processes at different scales of space and time in differing media. Here a tree-trunk cross section I am preparing for a dendrochronological display at the Battle Creek Cypress Swamp Nature Sanctuary (Calvert County, Maryland) dried and cracked in a way that replicates practically all the planform features found along the Mid-Oceanic Ridge (see Figure 1). The left-lateral offset of saw marks, contrasting with the right-lateral ``rift'' offset, even illustrates the distinction between transcurrent (strike-slip) and transform faults, the latter only recognized as a geologic feature, by J. Tuzo Wilson, in 1965. However, wood cracking is but one of many examples of natural processes that replicate one or several elements of lithospheric plate tectonics. Many of these examples occur in everyday venues and thus make great teaching aids, ``teachable'' from primary school to university levels. Plate tectonics, the dominant process of Earth geology, also occurs in miniature on the surface of some lava lakes, and as ``ice plate tectonics'' on our frozen seas and lakes. Ice tectonics also happens at larger spatial and temporal scales on the Jovian moons Europa and perhaps Ganymede. Tabletop plate tectonics, in which a molten-paraffin ``asthenosphere'' is surfaced by a skin of congealing wax ``plates,'' first replicated Mid-Oceanic Ridge type seafloor spreading more than three decades ago. A seismologist (J. Brune, personal communication, 2004) discovered wax plate tectonics by casually and serendipitously pulling a stick across a container of molten wax his wife and daughters had used in making candles. Brune and his student D. Oldenburg followed up and mirabile dictu published the results in Science (178, 301-304).
Directory of Open Access Journals (Sweden)
Petras Rupšys
2015-01-01
Full Text Available A stochastic modeling approach based on the Bertalanffy law gained interest due to its ability to produce more accurate results than the deterministic approaches. We examine tree crown width dynamics with the Bertalanffy-type stochastic differential equation (SDE) and mixed-effects parameters. In this study, we demonstrate how this simple model can be used to calculate predictions of crown width. We propose a parameter estimation method and computational guidelines. The primary goal of the study was to estimate the parameters by considering discrete sampling of the diameter at breast height and crown width and by using the maximum likelihood procedure. Performance statistics for the crown width equation include statistical indexes and analysis of residuals. We use data provided by the Lithuanian National Forest Inventory from Scots pine trees to illustrate issues of our modeling technique. Comparison of the predicted crown width values of the mixed-effects parameters model with those obtained using the fixed-effects parameters model demonstrates the predictive power of the stochastic differential equation model with mixed-effects parameters. All results were implemented in the symbolic algebra system MAPLE.
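As an illustration of the kind of model the abstract describes, the sketch below simulates a Bertalanffy-type mean-reverting growth SDE with the Euler-Maruyama scheme. The drift/diffusion form, the parameter values, and the function name are hypothetical illustrations, not taken from the paper.

```python
import math
import random

def bertalanffy_sde_path(x0, a, b, sigma, dt, steps, seed=0):
    """Euler-Maruyama simulation of a Bertalanffy-type growth SDE,
        dX_t = b*(a - X_t) dt + sigma dW_t,
    a hypothetical stand-in for the model class in the abstract."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        # deterministic growth toward the asymptote a, plus Gaussian noise
        x += b * (a - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# crown width starting at 0.5 m, approaching an asymptote of 6 m
path = bertalanffy_sde_path(x0=0.5, a=6.0, b=0.3, sigma=0.2, dt=0.1, steps=500)
print(round(path[-1], 2))
```

With mixed effects, the parameters a and b would additionally vary by tree; here they are fixed for brevity.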
Numerics of implied binomial trees
Härdle, Wolfgang Karl; Myšičková, Alena
2008-01-01
Market option prices in the last 20 years confirmed deviations from the Black and Scholes (BS) model's assumptions, especially on the BS implied volatility. Implied binomial tree (IBT) models capture the variations of the implied volatility known as the "volatility smile". They provide a discrete approximation to the continuous risk-neutral process for the underlying assets. In this paper, we describe the numerical construction of IBTs by Derman and Kani (DK) and an alternative method by Barle and Ca...
Microwave sensing of tree trunks
Jezova, Jana; Mertens, Laurence; Lambot, Sebastien
2015-04-01
was divided into three sections to separate parts with different moisture (heartwood and sapwood) or empty space (decays). For easier manipulation of the antenna, we developed a special ruler for measuring the distance along the scans. Instead of the surveying wheel, we read the distance with a camera, which was fixed on the antenna and focused on the ruler with a binary pattern. Hence, during the whole measurement and the data processing we were able to identify an accurate position on the tree for each scan. Some preliminary measurements on the trees were also conducted. They were performed using a GSSI 900 MHz antenna. Several tree species (beech, horse-chestnut, birch, ...) in Louvain-la-Neuve and Brussels, Belgium, have been investigated to see the internal structure of the tree decays. The measurements were carried out mainly by circumferential measurement around the trunk and also by vertical measurement along the trunk for approximate detection of the cavity. The comparison between the numerical simulations, the simplified tree trunk model and real data from trees is presented. This research is funded by the Fonds de la Recherche Scientifique (FNRS, Belgium) and benefits from networking activities carried out within the EU COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar".
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
Healthy trees are important to us all. Trees provide shade, beauty, and homes for wildlife. Trees give us products like paper and wood. Trees can give us all this only if they are healthy. They must be well cared for to remain healthy.
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2011-01-01
Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal based on polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.
Keim, Daniel A.; Bustos Cárdenas, Benjamin Eugenio; Berchtold, Stefan; Kriegel, Hans-Peter
2008-01-01
The X-tree (eXtended node tree) [1] is a spatial access method [2] that supports efficient query processing for high-dimensional data. It supports not only point data but also extended spatial data. The X-tree provides overlap-free split whenever it is possible without allowing the tree to degenerate; otherwise, the X-tree uses extended variable size directory nodes, so-called supernodes. The X-tree may be seen as a hybrid of a linear array-like and a hierarchical R-tree-like directory.
Uniform Stability of a Particle Approximation of the Optimal Filter Derivative
Del Moral, Pierre; Singh, Sumeetpal
2011-01-01
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance charact...
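To make the setting concrete, here is a minimal bootstrap particle filter for a toy linear-Gaussian state-space model. The model, its parameters, and the particle count are illustrative assumptions; the paper's subject, the derivative of the filter with respect to static parameters, is not computed in this sketch.

```python
import math
import random

def particle_filter(ys, n=500, phi=0.9, q=1.0, r=0.5, seed=1):
    """Bootstrap particle filter for the toy state-space model
        x_t = phi*x_{t-1} + N(0, q^2),   y_t = x_t + N(0, r^2).
    Returns the filtered posterior-mean estimates of x_t."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in ys:
        # propagate particles through the transition density
        parts = [phi * x + rng.gauss(0.0, q) for x in parts]
        # weight by the Gaussian observation likelihood
        ws = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in parts]
        total = sum(ws)
        means.append(sum(w * x for w, x in zip(ws, parts)) / total)
        # multinomial resampling to avoid weight degeneracy
        parts = rng.choices(parts, weights=ws, k=n)
    return means

# simulate a trajectory from the same model, then filter it
rng = random.Random(0)
x, xs, ys = 0.0, [], []
for _ in range(50):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    xs.append(x)
    ys.append(x + rng.gauss(0.0, 0.5))
est = particle_filter(ys)
rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(est, xs)) / len(xs))
print(round(rmse, 3))
```

The uniform-in-time stability studied in the paper is the property that such estimates do not deteriorate as the number of time steps grows.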
Energy Technology Data Exchange (ETDEWEB)
Chalasani, P.; Saias, I. [Los Alamos National Lab., NM (United States); Jha, S. [Carnegie Mellon Univ., Pittsburgh, PA (United States)
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
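For contrast with the path-dependent case discussed above, a path-independent option can be valued on the binomial tree in O(n²) time by backward induction, because nodes with the same final price collapse. A minimal sketch under standard Cox-Ross-Rubinstein assumptions (illustrative parameters, not taken from the paper):

```python
import math

def crr_european_put(S0, K, r, sigma, T, n):
    """Price a European put on an n-period CRR binomial tree by backward
    induction; O(n^2) because the payoff is path-independent."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))    # up factor
    d = 1 / u                              # down factor
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # payoffs at maturity, indexed by the number of up moves j
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # discounted expectation, one step back at a time
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]

print(round(crr_european_put(100, 100, 0.05, 0.2, 1.0, 500), 4))
```

For an Asian option the payoff depends on the running average, so the nodes no longer collapse; that is the combinatorial explosion the paper's approximation algorithms address.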
Approximations to Euler's constant
International Nuclear Information System (INIS)
We study a problem of finding good approximations to Euler's constant γ = lim_{n→∞} S_n, where S_n = Σ_{k=1}^{n} 1/k − log(n+1), by linear forms in logarithms and harmonic numbers. In 1995, C. Elsner showed that slow convergence of the sequence S_n can be significantly improved if S_n is replaced by linear combinations of S_n with integer coefficients. In this paper, considering more general linear transformations of the sequence S_n we establish new accelerating convergence formulae for γ. Our estimates sharpen and generalize recent results of Elsner, Rivoal and the author. (author)
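The acceleration idea can be illustrated numerically. The sketch below is not Elsner's actual construction; it uses the simplest integer-coefficient combination, 2·S_{2n} − S_n, which cancels the leading −1/(2n) term of the error S_n − γ and so converges at rate O(1/n²) instead of O(1/n):

```python
import math

def S(n):
    """Partial sum S_n = sum_{k=1}^{n} 1/k - log(n+1)."""
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n + 1)

GAMMA = 0.5772156649015329  # Euler's constant, for reference

for n in (10, 100, 1000):
    plain = S(n)                 # error ~ -1/(2n)
    accel = 2 * S(2 * n) - S(n)  # leading error term cancels
    print(n, abs(plain - GAMMA), abs(accel - GAMMA))
```

Elsner's and the paper's combinations achieve much faster acceleration; this two-term example only shows why integer linear combinations of the S_n help at all.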
Finite elements and approximation
Zienkiewicz, O C
2006-01-01
A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises. Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o
Arendonk, van J.A.M.; Tier, B.; Bink, M.C.A.M.; Bovenhuis, H.
1998-01-01
REML for the estimation of location and variance of a single quantitative trait locus, together with polygenic and residual variance, is described for the analysis of a granddaughter design. The method is based on a mixed linear model that includes the allelic effects of the quantitative trait locus
Calhoun, C. A.
1989-01-01
Despite the large number of models devoted to the statistical analysis of censored data, relatively little attention has been given to the case of censored discrete outcomes. In this paper, the author presents a technical description and user's guide to a computer program for estimating bivariate ordered-probit models for censored and uncensored data. The model and program are currently being applied in an analysis of World Fertility Survey data for Europe and the United States, and the resul...
Simons, Frederik J
2012-01-01
Topography and gravity are geophysical fields whose joint statistical structure derives from interface-loading processes modulated by the underlying mechanics of isostatic and flexural compensation in the shallow lithosphere. Under this dual statistical-mechanistic viewpoint an estimation problem can be formulated where the knowns are topography and gravity and the principal unknown the elastic flexural rigidity of the lithosphere. In the guise of an equivalent "effective elastic thickness", this important, geographically varying, structural parameter has been the subject of many interpretative studies, but precisely how well it is known or how best it can be found from the data, abundant nonetheless, has remained contentious and unresolved throughout the last few decades of dedicated study. The popular methods whereby admittance or coherence, both spectral measures of the relation between gravity and topography, are inverted for the flexural rigidity, have revealed themselves to have insufficient power to in...
West, Anthony C. F.; Novakowski, Kent S.; Gazor, Saeed
2006-06-01
We propose a new method to estimate the transmissivities of bedrock fractures from transmissivities measured in intervals of fixed length along a borehole. We define the scale of a fracture set by the inverse of the density of the Poisson point process assumed to represent their locations along the borehole wall, and we assume a lognormal distribution for their transmissivities. The parameters of the latter distribution are estimated by maximizing the likelihood of a left-censored subset of the data where the degree of censorship depends on the scale of the considered fracture set. We applied the method to sets of interval transmissivities simulated by summing random fracture transmissivities drawn from a specified population. We found the estimated distributions compared well to the transmissivity distributions of similarly scaled subsets of the most transmissive fractures from among the specified population. Estimation accuracy was most sensitive to the variance in the transmissivities of the fracture population. Using the proposed method, we estimated the transmissivities of fractures at increasing scale from hydraulic test data collected at a fixed scale in Smithville, Ontario, Canada. This is an important advancement since the resultant curves of transmissivity parameters versus fracture set scale would only previously have been obtainable from hydraulic tests conducted with increasing test interval length and with degrading equipment precision. Finally, on the basis of the properties of the proposed method, we propose guidelines for the design of fixed interval length hydraulic testing programs that require minimal prior knowledge of the rock.
Directory of Open Access Journals (Sweden)
Chien-Chung Chen
Full Text Available Perceived depth is conveyed by multiple cues, including binocular disparity and luminance shading. Depth perception from luminance shading information depends on the perceptual assumption for the incident light, which has been shown to default to a diffuse illumination assumption. We focus on the case of sinusoidally corrugated surfaces to ask how shading and disparity cues combine when defined by the joint luminance gradients and intrinsic disparity modulation that would occur in viewing the physical corrugation of a uniform surface under diffuse illumination. Such surfaces were simulated with a sinusoidal luminance modulation (0.26 or 1.8 cy/deg, contrast 20%-80%) modulated either in phase or in opposite phase with a sinusoidal disparity of the same corrugation frequency, with disparity amplitudes ranging from 0' to 20'. The observers' task was to adjust the binocular disparity of a comparison random-dot stereogram surface to match the perceived depth of the joint luminance/disparity-modulated corrugation target. Regardless of target spatial frequency, the perceived target depth increased with the luminance contrast and depended on luminance phase but was largely unaffected by the luminance disparity modulation. These results validate the idea that human observers can use the diffuse illumination assumption to perceive depth from luminance gradients alone without making an assumption of light direction. For depth judgments with combined cues, the observers gave much greater weighting to the luminance shading than to the disparity modulation of the targets. The results were not well fit by a Bayesian cue-combination model weighted in proportion to the variance of the measurements for each cue in isolation. Instead, they suggest that the visual system uses disjunctive mechanisms to process these two types of information rather than combining them according to their likelihood ratios.
Kyungsoo Kim; Sung-Ho Lim; Jaeseok Lee; Won-Seok Kang; Cheil Moon; Ji-Woong Choi
2016-01-01
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of...
Directory of Open Access Journals (Sweden)
Louis de Grange
2010-09-01
Full Text Available Maximum entropy models are often used to describe supply and demand behavior in urban transportation and land use systems. However, they have been criticized for not representing behavioral rules of system agents and because their parameters seem to adjust only to modeler-imposed constraints. In response, it is demonstrated that the solution to the entropy maximization problem with linear constraints is a multinomial logit model whose parameters solve the likelihood maximization problem of this probabilistic model. But this result neither provides a microeconomic interpretation of the entropy maximization problem nor explains the equivalence of these two optimization problems. This work demonstrates that an analysis of the dual of the entropy maximization problem yields two useful alternative explanations of its solution. The first shows that the maximum entropy estimators of the multinomial logit model parameters reproduce rational user behavior, while the second shows that the likelihood maximization problem for multinomial logit models is the dual of the entropy maximization problem.
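The duality described in the abstract can be checked numerically: maximizing entropy subject to a linear (mean) constraint yields a distribution of multinomial-logit/Gibbs form, p_i ∝ exp(λx_i), and the single dual variable λ can be found by bisection. A hedged sketch with made-up support points:

```python
import math

def gibbs(xs, lam):
    """Maximum-entropy distribution under the constraint E[x] = m has the
    multinomial-logit (Gibbs) form p_i proportional to exp(lam * x_i)."""
    ws = [math.exp(lam * x) for x in xs]
    Z = sum(ws)
    return [w / Z for w in ws]

def solve_lambda(xs, m, lo=-50.0, hi=50.0, iters=100):
    """Dual problem: choose lam so the Gibbs mean equals the target m.
    The mean is monotone increasing in lam, so bisection works."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        mean = sum(p * x for p, x in zip(gibbs(xs, mid), xs))
        if mean < m:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

xs = [0, 1, 2, 3]           # hypothetical support points
lam = solve_lambda(xs, m=2.0)
p = gibbs(xs, lam)
print([round(q, 4) for q in p])
```

The recovered p is exactly the logit model the abstract refers to; solving for λ is the dual of the entropy maximization.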
Green, Cynthia L; Brownie, Cavell; Boos, Dennis D; Lu, Jye-Chyi; Krucoff, Mitchell W
2016-04-01
We propose a novel likelihood method for analyzing time-to-event data when multiple events and multiple missing data intervals are possible prior to the first observed event for a given subject. This research is motivated by data obtained from a heart monitor used to track the recovery process of subjects experiencing an acute myocardial infarction. The time to first recovery, T1, is defined as the time when the ST-segment deviation first falls below 50% of the previous peak level. Estimation of T1 is complicated by data gaps during monitoring and the possibility that subjects can experience more than one recovery. If gaps occur prior to the first observed event, T, the first observed recovery may not be the subject's first recovery. We propose a parametric gap likelihood function conditional on the gap locations to estimate T1. Standard failure time methods that do not fully utilize the data are compared to the gap likelihood method by analyzing data from an actual study and by simulation. The proposed gap likelihood method is shown to be more efficient and less biased than interval censoring and more efficient than right censoring if data gaps occur early in the monitoring process or are short in duration.
Kreif, N.; Gruber, S.; Radice, Rosalba; Grieve, R; J S Sekhon
2014-01-01
Statistical approaches for estimating treatment effectiveness commonly model the endpoint, or the propensity score, using parametric regressions such as generalised linear models. Misspecification of these models can lead to biased parameter estimates. We compare two approaches that combine the propensity score and the endpoint regression, and can make weaker modelling assumptions, by using machine learning approaches to estimate the regression function and the propensity score. Targeted maxi...
Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed
2016-07-01
Condition monitoring of electric drives is of paramount importance since it contributes to enhance the system reliability and availability. Moreover, the knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machines failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing faults detection based on bearing faults characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and fault indicator has been derived for fault severity measurement. The proposed bearing faults detection approach is assessed using simulated stator currents data, issued from a coupled electromagnetic circuits approach for air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. PMID:27038887
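As a greatly simplified stand-in for the MD MUSIC estimator described in the abstract, the following sketch locates a spectral line with a plain DFT periodogram on a synthetic signal. MUSIC itself relies on a noise-subspace decomposition that is not shown here; the signal and sampling rate are made up for illustration.

```python
import cmath
import math

def periodogram_peak(signal, fs):
    """Return the frequency (Hz) of the strongest DFT bin below Nyquist."""
    n = len(signal)
    best_k, best_p = 1, 0.0
    for k in range(1, n // 2):
        # DFT coefficient at bin k, computed directly
        s = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(signal))
        power = abs(s) ** 2
        if power > best_p:
            best_k, best_p = k, power
    return best_k * fs / n

# synthetic "stator current": a pure 50 Hz fundamental sampled at 400 Hz
fs = 400.0
sig = [math.sin(2 * math.pi * 50 * t / fs) for t in range(400)]
print(periodogram_peak(sig, fs))
```

In the paper's setting, the quantities of interest are the fault characteristic frequencies near the fundamental, which is why a high-resolution subspace method such as MUSIC is preferred over this coarse periodogram.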
James O Lloyd-Smith
2007-01-01
Background. The negative binomial distribution is used commonly throughout biology as a model for overdispersed count data, with attention focused on the negative binomial dispersion parameter, κ. A substantial literature exists on the estimation of κ, but most attention has focused on datasets that are not highly overdispersed (i.e., those with κ≥1), and the accuracy of confidence intervals estimated for κ is typically not explored. Methodology. This article presents a simulation study explo...
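For orientation, the simplest alternative to maximum likelihood estimation of κ is the method-of-moments estimator κ̂ = x̄²/(s² − x̄), which simulation studies of this kind typically compare MLE against. A minimal sketch; the count data below are hypothetical:

```python
import statistics

def kappa_moments(counts):
    """Method-of-moments estimate of the negative binomial dispersion
    parameter kappa: mean^2 / (variance - mean). Defined only for
    overdispersed data (sample variance greater than sample mean)."""
    m = statistics.mean(counts)
    v = statistics.variance(counts)  # unbiased sample variance
    if v <= m:
        raise ValueError("no overdispersion: variance <= mean")
    return m * m / (v - m)

# illustrative overdispersed counts (hypothetical data)
counts = [0, 0, 1, 1, 2, 3, 4, 7, 9, 15]
print(round(kappa_moments(counts), 3))
```

Small κ̂ (well below 1, as here) signals strong overdispersion, the regime the abstract notes is under-explored; this estimator also becomes unstable as s² approaches x̄, one reason MLE and careful confidence intervals matter.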
Aartsen, M G; Ackermann, M; Adams, J; Aguilar, J A; Ahlers, M; Ahrens, M; Altmann, D; Anderson, T; Archinger, M; Arguelles, C; Arlen, T C; Auffenberg, J; Bai, X; Barwick, S W; Baum, V; Bay, R; Beatty, J J; Tjus, J Becker; Becker, K -H; Beiser, E; BenZvi, S; Berghaus, P; Berley, D; Bernardini, E; Bernhard, A; Besson, D Z; Binder, G; Bindig, D; Bissok, M; Blaufuss, E; Blumenthal, J; Boersma, D J; Bohm, C; Börner, M; Bos, F; Bose, D; Böser, S; Botner, O; Braun, J; Brayeur, L; Bretz, H -P; Brown, A M; Buzinsky, N; Casey, J; Casier, M; Cheung, E; Chirkin, D; Christov, A; Christy, B; Clark, K; Classen, L; Coenders, S; Cowen, D F; Silva, A H Cruz; Daughhetee, J; Davis, J C; Day, M; de André, J P A M; De Clercq, C; Dembinski, H; De Ridder, S; Desiati, P; de Vries, K D; de Wasseige, G; de With, M; DeYoung, T; Díaz-Vélez, J C; Dumm, J P; Dunkman, M; Eagan, R; Eberhardt, B; Ehrhardt, T; Eichmann, B; Euler, S; Evenson, P A; Fadiran, O; Fahey, S; Fazely, A R; Fedynitch, A; Feintzeig, J; Felde, J; Filimonov, K; Finley, C; Fischer-Wasels, T; Flis, S; Fuchs, T; Gaisser, T K; Gaior, R; Gallagher, J; Gerhardt, L; Ghorbani, K; Gier, D; Gladstone, L; Glagla, M; Glüsenkamp, T; Goldschmidt, A; Golup, G; Gonzalez, J G; Goodman, J A; Góra, D; Grant, D; Gretskov, P; Groh, J C; Groß, A; Ha, C; Haack, C; Ismail, A Haj; Hallgren, A; Halzen, F; Hansmann, B; Hanson, K; Hebecker, D; Heereman, D; Helbing, K; Hellauer, R; Hellwig, D; Hickford, S; Hignight, J; Hill, G C; Hoffman, K D; Hoffmann, R; Holzapfel, K; Homeier, A; Hoshina, K; Huang, F; Huber, M; Huelsnitz, W; Hulth, P O; Hultqvist, K; In, S; Ishihara, A; Jacobi, E; Japaridze, G S; Jero, K; Jurkovic, M; Kaminsky, B; Kappes, A; Karg, T; Karle, A; Kauer, M; Keivani, A; Kelley, J L; Kemp, J; Kheirandish, A; Kiryluk, J; Kläs, J; Klein, S R; Kohnen, G; Kolanoski, H; Konietz, R; Koob, A; Köpke, L; Kopper, C; Kopper, S; Koskinen, D J; Kowalski, M; Krings, K; Kroll, G; Kroll, M; Kunnen, J; Kurahashi, N; Kuwabara, T; Labare, M; Lanfranchi, J L; 
Larson, M J; Lesiak-Bzdak, M; Leuermann, M; Leuner, J; Lünemann, J; Madsen, J; Maggi, G; Mahn, K B M; Maruyama, R; Mase, K; Matis, H S; Maunu, R; McNally, F; Meagher, K; Medici, M; Meli, A; Menne, T; Merino, G; Meures, T; Miarecki, S; Middell, E; Middlemas, E; Miller, J; Mohrmann, L; Montaruli, T; Morse, R; Nahnhauer, R; Naumann, U; Niederhausen, H; Nowicki, S C; Nygren, D R; Obertacke, A; Olivas, A; Omairat, A; O'Murchadha, A; Palczewski, T; Paul, L; Pepper, J A; Heros, C Pérez de los; Pfendner, C; Pieloth, D; Pinat, E; Posselt, J; Price, P B; Przybylski, G T; Pütz, J; Quinnan, M; Rädel, L; Rameez, M; Rawlins, K; Redl, P; Reimann, R; Relich, M; Resconi, E; Rhode, W; Richman, M; Richter, S; Riedel, B; Robertson, S; Rongen, M; Rott, C; Ruhe, T; Ruzybayev, B; Ryckbosch, D; Saba, S M; Sabbatini, L; Sander, H -G; Sandrock, A; Sandroos, J; Sarkar, S; Schatto, K; Scheriau, F; Schimp, M; Schmidt, T; Schmitz, M; Schoenen, S; Schöneberg, S; Schönwald, A; Schukraft, A; Schulte, L; Seckel, D; Seunarine, S; Shanidze, R; Smith, M W E; Soldin, D; Spiczak, G M; Spiering, C; Stahlberg, M; Stamatikos, M; Stanev, T; Stanisha, N A; Stasik, A; Stezelberger, T; Stokstad, R G; Stößl, A; Strahler, E A; Ström, R; Strotjohann, N L; Sullivan, G W; Sutherland, M; Taavola, H; Taboada, I; Ter-Antonyan, S; Terliuk, A; Tešić, G; Tilav, S; Toale, P A; Tobin, M N; Tosi, D; Tselengidou, M; Unger, E; Usner, M; Vallecorsa, S; Vandenbroucke, J; van Eijndhoven, N; Vanheule, S; van Santen, J; Veenkamp, J; Vehring, M; Voge, M; Vraeghe, M; Walck, C; Wallace, A; Wallraff, M; Wandkowsky, N; Weaver, C; Wendt, C; Westerhoff, S; Whelan, B J; Whitehorn, N; Wichary, C; Wiebe, K; Wiebusch, C H; Wille, L; Williams, D R; Wissing, H; Wolf, M; Wood, T R; Woschnagg, K; Xu, D L; Xu, X W; Xu, Y; Yanez, J P; Yodh, G; Yoshida, S; Zarzhitsky, P; Zoll, M
2015-01-01
Evidence for an extraterrestrial flux of high-energy neutrinos has now been found in multiple searches with the IceCube detector. The first solid evidence was provided by a search for neutrino events with deposited energies ≳30 TeV and interaction vertices inside the instrumented volume. Recent analyses suggest that the extraterrestrial flux extends to lower energies and is also visible with throughgoing...
Buot, Max-Louis G.; Hosten, Serkan; Richards, Donald St. P.
2007-01-01
Let $\\mu$ be a $p$-dimensional vector, and let $\\Sigma_1$ and $\\Sigma_2$ be $p \\times p$ positive definite covariance matrices. On being given random samples of sizes $N_1$ and $N_2$ from independent multivariate normal populations $N_p(\\mu,\\Sigma_1)$ and $N_p(\\mu,\\Sigma_2)$, respectively, the Behrens-Fisher problem is to solve the likelihood equations for estimating the unknown parameters $\\mu$, $\\Sigma_1$, and $\\Sigma_2$. We shall prove that for $N_1, N_2 > p$ there are, almost surely, exac...
Lee, Sik-Yum; Xia, Ye-Mao
2006-01-01
By means of more than a dozen user friendly packages, structural equation models (SEMs) are widely used in behavioral, education, social, and psychological research. As the underlying theory and methods in these packages are vulnerable to outliers and distributions with longer-than-normal tails, a fundamental problem in the field is the…
A well-resolved phylogeny of the trees of Puerto Rico based on DNA barcode sequence data.
Directory of Open Access Journals (Sweden)
Robert Muscarella
Full Text Available The use of phylogenetic information in community ecology and conservation has grown in recent years. Two key issues for community phylogenetics studies, however, are (i) low terminal phylogenetic resolution and (ii) arbitrarily defined species pools. We used three DNA barcodes (plastid DNA regions rbcL, matK, and trnH-psbA) to infer a phylogeny for 527 native and naturalized trees of Puerto Rico, representing the vast majority of the entire tree flora of the island (89%). We used a maximum likelihood (ML) approach with and without a constraint tree that enforced monophyly of recognized plant orders. Based on 50% consensus trees, the ML analyses improved phylogenetic resolution relative to a comparable phylogeny generated with Phylomatic (proportion of internal nodes resolved: constrained ML = 74%, unconstrained ML = 68%, Phylomatic = 52%). We quantified the phylogenetic composition of 15 protected forests in Puerto Rico using the constrained ML and Phylomatic phylogenies. We found some evidence that tree communities in areas of high water stress were relatively phylogenetically clustered. Reducing the scale at which the species pool was defined (from island to soil types) changed some of our results depending on which phylogeny (ML vs. Phylomatic) was used. Overall, the increased terminal resolution provided by the ML phylogeny revealed additional patterns that were not observed with a less-resolved phylogeny. With the DNA barcode phylogeny presented here (based on an island-wide species pool), we show that a more fully resolved phylogeny increases power to detect nonrandom patterns of community composition in several Puerto Rican tree communities. Especially if combined with additional information on species functional traits and geographic distributions, this phylogeny will (i) facilitate stronger inferences about the role of historical processes in governing the assembly and composition of Puerto Rican forests, (ii) provide insight into
A Well-Resolved Phylogeny of the Trees of Puerto Rico Based on DNA Barcode Sequence Data
Muscarella, Robert; Uriarte, María; Erickson, David L.; Swenson, Nathan G.; Zimmerman, Jess K.; Kress, W. John
2014-01-01
Background The use of phylogenetic information in community ecology and conservation has grown in recent years. Two key issues for community phylogenetics studies, however, are (i) low terminal phylogenetic resolution and (ii) arbitrarily defined species pools. Methodology/principal findings We used three DNA barcodes (plastid DNA regions rbcL, matK, and trnH-psbA) to infer a phylogeny for 527 native and naturalized trees of Puerto Rico, representing the vast majority of the entire tree flora of the island (89%). We used a maximum likelihood (ML) approach with and without a constraint tree that enforced monophyly of recognized plant orders. Based on 50% consensus trees, the ML analyses improved phylogenetic resolution relative to a comparable phylogeny generated with Phylomatic (proportion of internal nodes resolved: constrained ML = 74%, unconstrained ML = 68%, Phylomatic = 52%). We quantified the phylogenetic composition of 15 protected forests in Puerto Rico using the constrained ML and Phylomatic phylogenies. We found some evidence that tree communities in areas of high water stress were relatively phylogenetically clustered. Reducing the scale at which the species pool was defined (from island to soil types) changed some of our results depending on which phylogeny (ML vs. Phylomatic) was used. Overall, the increased terminal resolution provided by the ML phylogeny revealed additional patterns that were not observed with a less-resolved phylogeny. Conclusions/significance With the DNA barcode phylogeny presented here (based on an island-wide species pool), we show that a more fully resolved phylogeny increases power to detect nonrandom patterns of community composition in several Puerto Rican tree communities. Especially if combined with additional information on species functional traits and geographic distributions, this phylogeny will (i) facilitate stronger inferences about the role of historical processes in governing the assembly and
International Nuclear Information System (INIS)
This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
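A quick sanity check of the stated value: the minimum average depth 620160/8! can be compared with the information-theoretic lower bound log₂(8!) on the average number of comparisons needed to sort 8 distinct elements.

```python
import math
from fractions import Fraction

n_perms = math.factorial(8)            # 8! = 40320 possible input orderings
avg_depth = Fraction(620160, n_perms)  # minimum average depth from the paper
lower_bound = math.log2(n_perms)       # entropy lower bound on average depth

print(avg_depth, float(avg_depth), round(lower_bound, 4))
```

The optimal average depth (about 15.38 comparisons) exceeds the entropy bound (about 15.30) by well under 1%, which is why the exhaustive dynamic-programming search described in the abstract is needed to pin down the exact optimum.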
The Compact Approximation Property does not imply the Approximation Property
Willis, George A.
1992-01-01
It is shown how to construct, given a Banach space which does not have the approximation property, another Banach space which does not have the approximation property but which does have the compact approximation property.
Efficient exploration of the space of reconciled gene trees.
Szöllõsi, Gergely J; Rosikiewicz, Wojciech; Boussau, Bastien; Tannier, Eric; Daubin, Vincent
2013-11-01
Gene trees record the combination of gene-level events, such as duplication, transfer and loss (DTL), and species-level events, such as speciation and extinction. Gene tree-species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene and species-level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree-species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of gene trees. We implement the ALE approach in the context of a reconciliation model (Szöllősi et al. 2013), which allows for the DTL of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood among all such trees. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic gene tree topologies, branch lengths, and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes, we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with 24%, 59%, and 46% reductions in the mean numbers of duplications, transfers, and losses per gene family, respectively. The open source
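Amalgamation operates on clades observed in a sample of gene trees (for example, from a Bayesian posterior sample), and a first step is tallying how often each clade appears across the sample. A minimal sketch with trees encoded as nested 2-tuples; the representation is illustrative, not ALE's:

```python
from collections import Counter

def clades(tree):
    """Return (leaf set, list of all clades) of a binary tree given as
    nested 2-tuples with string leaf names."""
    if isinstance(tree, str):
        leaf = frozenset([tree])
        return leaf, [leaf]
    left_set, left_clades = clades(tree[0])
    right_set, right_clades = clades(tree[1])
    full = left_set | right_set
    return full, left_clades + right_clades + [full]

def clade_frequencies(trees):
    """Fraction of sampled trees in which each clade (leaf set) appears.

    Trivial clades (single leaves, full leaf set) appear in every tree."""
    counts = Counter()
    for t in trees:
        counts.update(clades(t)[1])
    return {cl: k / len(trees) for cl, k in counts.items()}
```

These conditional clade frequencies are the raw material from which amalgamated likelihoods over tree space are assembled.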
DEFF Research Database (Denmark)
Baumbach, Jan; Guo, Jiong; Ibragimov, Rashid
2015-01-01
We study the tree edit distance problem with edge deletions and edge insertions as edit operations. We reformulate a special case of this problem as Covering Tree with Stars (CTS): given a tree T and a set of stars, can we connect the stars by adding edges between them such that the resulting...... tree is isomorphic to T? We prove that in the general setting, CTS is NP-complete, which implies that the tree edit distance considered here is also NP-hard, even when both input trees have diameters bounded by 10. We also show that, when the number of distinct stars is bounded by a constant k, CTS...
DEFF Research Database (Denmark)
Baumbach, Jan; Guo, Jian-Ying; Ibragimov, Rashid
2013-01-01
We study the tree edit distance problem with edge deletions and edge insertions as edit operations. We reformulate a special case of this problem as Covering Tree with Stars (CTS): given a tree T and a set of stars, can we connect the stars by adding edges between them such that the resulting...... tree is isomorphic to T? We prove that in the general setting, CTS is NP-complete, which implies that the tree edit distance considered here is also NP-hard, even when both input trees have diameters bounded by 10. We also show that, when the number of distinct stars is bounded by a constant k, CTS...
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
The Fault-Tree Compiler (FTC) program is a software tool used to calculate the probability of the top event in a fault tree. Gates of five different types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault-tree definition feature, which simplifies the tree-description process and reduces execution time. A set of programs was created forming the basis for a reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and the FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
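Of the five gate types, M OF N is the only one without a one-line formula; for independent inputs with unequal probabilities it can be evaluated with a small dynamic program over the number of occurring events. A sketch of that computation (not FTC's actual implementation):

```python
def m_of_n(m, probabilities):
    """P(at least m of the independent input events occur).

    dp[k] holds P(exactly k of the events processed so far occur);
    each new event updates the distribution in place, high k first."""
    dp = [1.0] + [0.0] * len(probabilities)
    for p in probabilities:
        for k in range(len(dp) - 1, 0, -1):
            dp[k] = dp[k] * (1.0 - p) + dp[k - 1] * p
        dp[0] *= 1.0 - p
    return sum(dp[m:])
```

With equal input probabilities this reduces to a binomial tail, e.g. `m_of_n(2, [0.5, 0.5, 0.5])` is the probability of at least two heads in three fair flips.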
Bethe-Peierls approximation and the inverse Ising model
Nguyen, H. Chau; Berg, Johannes
2011-01-01
We apply the Bethe-Peierls approximation to the problem of the inverse Ising model and show how the linear response relation leads to a simple method to reconstruct couplings and fields of the Ising model. This reconstruction is exact on tree graphs, yet its computational expense is comparable to other mean-field methods. We compare the performance of this method to the independent-pair, naive mean-field, Thouless-Anderson-Palmer approximations, the Sessak-Monasson expansion, and susceptibil...
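Among the methods compared, the independent-pair approximation is the simplest baseline: each coupling is estimated from the corresponding two-spin marginal alone. The sketch below (illustrative code, not the paper's) uses exact enumeration of a small zero-field chain to show the estimate recovers the true coupling on an adjacent pair but reports a spurious coupling on a non-adjacent one:

```python
import math
from itertools import product

def pair_marginal(couplings, n, i, j):
    """Exact two-spin marginal of a zero-field Ising model, by enumeration.

    couplings maps edges (a, b) to J_ab; spins take values +/-1."""
    weights = {(si, sj): 0.0 for si in (-1, 1) for sj in (-1, 1)}
    for s in product((-1, 1), repeat=n):
        w = math.exp(sum(J * s[a] * s[b] for (a, b), J in couplings.items()))
        weights[(s[i], s[j])] += w
    z = sum(weights.values())
    return {k: v / z for k, v in weights.items()}

def independent_pair_coupling(p):
    """Independent-pair estimate J_ij = (1/4) log[p(+,+)p(-,-)/(p(+,-)p(-,+))]."""
    return 0.25 * math.log(p[(1, 1)] * p[(-1, -1)] / (p[(1, -1)] * p[(-1, 1)]))
```

On the non-adjacent pair of a 3-spin chain with coupling J the estimate comes out as atanh(tanh(J)^2) > 0 where the true coupling is zero; correcting exactly this kind of overcounting is what the more sophisticated approximations in the paper are for.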
Interacting boson approximation
International Nuclear Information System (INIS)
Lecture notes on the Interacting Boson Approximation are given. Topics include: angular momentum tensors; properties of T_i^(n) matrices; T_i^(n) matrices as Clebsch-Gordan coefficients; construction of higher rank tensors; normalization: trace of products of two s-rank tensors; completeness relation; algebra of U(N); eigenvalue of the quadratic Casimir operator for U(3); general result for U(N); angular momentum content of U(3) representation; p-Boson model; Hamiltonian; quadrupole transitions; S,P Boson model; expectation value of dipole operator; S-D model: U(6); quadratic Casimir operator; an O(5) subgroup; an O(6) subgroup; properties of O(5) representations; quadratic Casimir operator; quadratic Casimir operator for U(6); decomposition via SU(5) chain; a special O(3) decomposition of SU(3); useful identities; a useful property of D_αβγ (α,β,γ = 4-8) as coupling coefficients; explicit construction of T_x^(2) and d_αβγ; D-coefficients; eigenstates of T_3; and summary of T = 2 states
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2012-05-01
Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approaches. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle, and thus derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms include limitations for horizontally traveling waves; however, we can readily enforce stability constraints to avoid such singularities. In fact, another expansion over reflection angle can help us avoid these singularities by requiring the source and receiver velocities to be different. On the other hand, expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.
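The double square-root form referenced above is easiest to see in a homogeneous medium, where the traveltime to a point diffractor is simply the sum of two single square-root legs, one per ray. A sketch of that constant-velocity special case (not the paper's inhomogeneous expansion):

```python
import math

def dsr_traveltime(x_source, x_receiver, x_diffractor, depth, velocity):
    """Two-way traveltime to a point diffractor at (x_diffractor, depth)
    in a constant-velocity medium: the DSR form, one square root per leg."""
    t_source = math.hypot(x_source - x_diffractor, depth) / velocity
    t_receiver = math.hypot(x_receiver - x_diffractor, depth) / velocity
    return t_source + t_receiver
```

The zero-offset case directly above the diffractor gives t = 2·depth/velocity, and the expression is symmetric under exchanging source and receiver, as the DSR equation requires.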
Pemantle, Robin
2004-01-01
There are several good reasons you might want to read about uniform spanning trees, one being that spanning trees are useful combinatorial objects. Not only are they fundamental in algebraic graph theory and combinatorial geometry, but they predate both of these subjects, having been used by Kirchhoff in the study of resistor networks. This article addresses the question about spanning trees most natural to anyone in probability theory, namely what does a typical spanning tree look like?
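The Kirchhoff connection is concrete: by the matrix-tree theorem, the number of spanning trees (the size of the support of the uniform distribution) equals any cofactor of the graph Laplacian. A sketch using exact rational arithmetic (illustrative, not from the article):

```python
from fractions import Fraction

def count_spanning_trees(n, edges):
    """Matrix-tree theorem: count spanning trees of an undirected graph
    on vertices 0..n-1 as a cofactor of its Laplacian, computed exactly."""
    lap = [[Fraction(0)] * n for _ in range(n)]
    for a, b in edges:
        lap[a][a] += 1
        lap[b][b] += 1
        lap[a][b] -= 1
        lap[b][a] -= 1
    # determinant of the minor with row and column 0 deleted
    mat = [row[1:] for row in lap[1:]]
    m = n - 1
    det = Fraction(1)
    for col in range(m):
        pivot = next((r for r in range(col, m) if mat[r][col] != 0), None)
        if pivot is None:
            return 0  # disconnected graph: no spanning tree
        if pivot != col:
            mat[col], mat[pivot] = mat[pivot], mat[col]
            det = -det
        det *= mat[col][col]
        for r in range(col + 1, m):
            factor = mat[r][col] / mat[col][col]
            for c in range(col, m):
                mat[r][c] -= factor * mat[col][c]
    return int(det)
```

For the complete graph K4 this gives 16 = 4^2, matching Cayley's formula; sampling one of those trees uniformly is the harder problem the article is about.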
Coded Splitting Tree Protocols
DEFF Research Database (Denmark)
Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar
2013-01-01
This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each...... as possible. Evaluations show that the proposed protocol provides considerable gains over the standard tree splitting protocol applying SIC. The improvement comes at the expense of increased feedback and receiver complexity....
Sweeney, Debra; Rounds, Judy
2011-01-01
Trees are great inspiration for artists. Many art teachers find themselves inspired and maybe somewhat obsessed with the natural beauty and elegance of the lofty tree, and how it changes through the seasons. One such tree that grows in several regions and always looks magnificent, regardless of the time of year, is the birch. In this article, the…
DEFF Research Database (Denmark)
Finbow, Arthur; Frendrup, Allan; Vestergaard, Preben D.
cardinality then G is a total well dominated graph. In this paper we study composition and decomposition of total well dominated trees. By a reversible process we prove that any total well dominated tree can both be reduced to and constructed from a family of three small trees....
Brooks, Sarah DeWitt
2010-01-01
This article describes the author's experience in implementing a Wish Tree project in her school in an effort to bring the school community together with a positive art-making experience during a potentially stressful time. The concept of a wish tree is simple: plant a tree; provide tags and pencils for writing wishes; and encourage everyone to…
Constant factor approximation to the bounded genus instances of ATSP
Gharan, Shayan Oveis
2009-01-01
We give a constant factor approximation algorithm for the asymmetric traveling salesman problem when the underlying undirected graph of the Held-Karp linear programming relaxation of the problem has orientable bounded genus. Our result also implies the weak version of Goddyn's conjecture on the existence of thin trees on graphs with orientable bounded genus.
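For orientation, the Held-Karp name attaches both to the LP relaxation used above and to the classical exact dynamic program for TSP, which is feasible only for small instances. A sketch of the exact DP, as background rather than the paper's algorithm:

```python
def held_karp_atsp(dist):
    """Exact Bellman-Held-Karp dynamic program for the asymmetric TSP.

    dist[i][j] is the cost of travelling i -> j; returns the optimal tour
    cost starting and ending at city 0. O(2^n * n^2) time and space."""
    n = len(dist)
    dp = {(1, 0): 0}  # (visited bitmask, last city) -> cheapest path cost
    for mask in range(1, 1 << n):
        if not mask & 1:
            continue  # every path starts at city 0
        for last in range(n):
            if not mask >> last & 1 or (mask, last) not in dp:
                continue
            for nxt in range(n):
                if mask >> nxt & 1:
                    continue  # already visited
                new_mask = mask | 1 << nxt
                cand = dp[(mask, last)] + dist[last][nxt]
                if cand < dp.get((new_mask, nxt), float("inf")):
                    dp[(new_mask, nxt)] = cand
    full = (1 << n) - 1
    return min(dp[(full, last)] + dist[last][0] for last in range(1, n))
```

The LP relaxation drops the integrality of this formulation; the paper's contribution is rounding its solution within a constant factor on bounded-genus instances.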