WorldWideScience

Sample records for approximately maximum-likelihood trees

  1. Approximate maximum likelihood estimation using data-cloning ABC

    OpenAIRE

    Picchini, Umberto; Anderson, Rachele

    2015-01-01

    A maximum likelihood methodology for a general class of models is presented, using an approximate Bayesian computation (ABC) approach. The typical targets of ABC methods are models with intractable likelihoods, and we combine an ABC-MCMC sampler with so-called "data cloning" for maximum likelihood estimation. Accuracy of ABC methods relies on the use of a small threshold value for comparing simulations from the model and observed data. The proposed methodology shows how to use large threshold ...
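
    For readers unfamiliar with the "data cloning" device mentioned here, its core identity is short. This is a sketch of the standard result of Lele et al. (2007), not of this paper's specific ABC-MCMC implementation: running the sampler on K copies of the data targets the pseudo-posterior

    \[
    \pi_K(\theta \mid y) \;\propto\; \left[ p(y \mid \theta) \right]^{K} \pi(\theta),
    \]

    which, under regularity conditions, concentrates on the maximum likelihood estimate as K grows, with a spread shrinking roughly like 1/K times the inverse Fisher information.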

  2. On penalized maximum likelihood estimation of approximate factor models

    OpenAIRE

    Wang, Shaoxin; Yang, Hu; Yao, Chaoli

    2016-01-01

    In this paper, we mainly focus on the estimation of high-dimensional approximate factor models. We rewrite the estimation of the error covariance matrix in a new form which shares similar properties with the penalized maximum likelihood covariance estimator given by Bien and Tibshirani (2011). Based on Lagrangian duality, we propose an APG algorithm to give a positive definite estimate of the error covariance matrix. The new algorithm for the estimation of the approximate factor model has a desirable...

  3. On the approximate maximum likelihood estimation for diffusion processes

    CERN Document Server

    Chang, Jinyuan; 10.1214/11-AOS922

    2012-01-01

    The transition density of a diffusion process does not admit an explicit expression in general, which prevents the full maximum likelihood estimation (MLE) based on discretely observed sample paths. Aït-Sahalia [J. Finance 54 (1999) 1361-1395; Econometrica 70 (2002) 223-262] proposed asymptotic expansions to the transition densities of diffusion processes, which lead to an approximate maximum likelihood estimation (AMLE) for parameters. Built on Aït-Sahalia's [Econometrica 70 (2002) 223-262; Ann. Statist. 36 (2008) 906-937] proposal and analysis on the AMLE, we establish the consistency and convergence rate of the AMLE, which reveal the roles played by the number of terms used in the asymptotic density expansions and the sampling interval between successive observations. We find conditions under which the AMLE has the same asymptotic distribution as that of the full MLE. A first order approximation to the Fisher information matrix is proposed.
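
    For orientation, the lowest-order member of such transition-density approximations is the Euler (Gaussian) scheme; Aït-Sahalia's expansions add higher-order correction terms to it. A minimal sketch, assuming the diffusion dX_t = mu(X_t; theta) dt + sigma(X_t; theta) dW_t is observed at spacing Delta:

    \[
    p_\theta(x_{t+\Delta} \mid x_t) \;\approx\; \mathcal{N}\!\left( x_{t+\Delta};\; x_t + \mu(x_t;\theta)\,\Delta,\; \sigma^2(x_t;\theta)\,\Delta \right),
    \qquad
    \hat{\theta}_{\mathrm{AMLE}} \;=\; \arg\max_\theta \sum_i \log p_\theta\!\left( x_{t_{i+1}} \mid x_{t_i} \right).
    \]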

  4. Approximate Maximum Likelihood Commercial Bank Loan Management Model

    Directory of Open Access Journals (Sweden)

    Godwin N.O.   Asemota

    2009-01-01

    Full Text Available Problem statement: Loan management is a very complex and yet vitally important aspect of any commercial bank's operations. The balance sheet position shows the main sources of funds as deposits and shareholders' contributions. Approach: In order to operate profitably, remain solvent and consequently grow, a commercial bank needs to properly manage its excess cash to yield returns in the form of loans. Results: The above are achieved if the bank can honor depositors' withdrawals at all times and also grant loans to credible borrowers. This is so because loans are the main portfolios of a commercial bank that yield the highest rate of returns. Commercial banks and the environment in which they operate are dynamic, so any attempt to model their behavior without including some element of uncertainty would be less than desirable. The inclusion of an uncertainty factor is now possible with the advent of stochastic optimal control theories. Thus, an approximate maximum likelihood algorithm with variable forgetting factor was used to model the loan management behavior of a commercial bank in this study. Conclusion: The results showed that the uncertainty factor employed in the stochastic modeling enables us to adaptively control loan demand as well as fluctuating cash balances in the bank. This loan model can also visually aid commercial bank managers' planning decisions by allowing them to competently determine excess cash and invest it as loans to earn more assets without jeopardizing public confidence.

  5. Maximum likelihood phylogenetic estimation from DNA sequences with variable rates over sites: approximate methods.

    Science.gov (United States)

    Yang, Z

    1994-09-01

    Two approximate methods are proposed for maximum likelihood phylogenetic estimation, which allow variable rates of substitution across nucleotide sites. Three data sets with quite different characteristics were analyzed to examine empirically the performance of these methods. The first, called the "discrete gamma model," uses several categories of rates to approximate the gamma distribution, with equal probability for each category. The mean of each category is used to represent all the rates falling in the category. The performance of this method is found to be quite good, and four such categories appear to be sufficient to produce both an optimum, or near-optimum fit by the model to the data, and also an acceptable approximation to the continuous distribution. The second method, called "fixed-rates model", classifies sites into several classes according to their rates predicted assuming the star tree. Sites in different classes are then assumed to be evolving at these fixed rates when other tree topologies are evaluated. Analyses of the data sets suggest that this method can produce reasonable results, but it seems to share some properties of a least-squares pairwise comparison; for example, interior branch lengths in nonbest trees are often found to be zero. The computational requirements of the two methods are comparable to that of Felsenstein's (1981, J Mol Evol 17:368-376) model, which assumes a single rate for all the sites. PMID:7932792
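
    The discrete-gamma construction described above is easy to reproduce. Below is a minimal sketch, assuming a mean-one gamma with shape parameter alpha and K equal-probability categories (scipy is assumed to be available); the category means follow from the standard incomplete-gamma identity used by Yang (1994).

```python
import numpy as np
from scipy.stats import gamma

def discrete_gamma_rates(alpha, K=4):
    """Mean rate of each of K equal-probability categories of a
    mean-one Gamma(alpha, alpha) distribution (discrete-gamma model)."""
    # Category boundaries at the i/K quantiles of Gamma(shape=alpha, scale=1/alpha).
    bounds = gamma.ppf(np.arange(K + 1) / K, a=alpha, scale=1.0 / alpha)
    # Mean of the distribution restricted to each category, using the identity
    # integral_a^b x f(x) dx = F_{alpha+1}(b) - F_{alpha+1}(a) for a mean-one gamma.
    cdf_hi = gamma.cdf(bounds[1:], a=alpha + 1, scale=1.0 / alpha)
    cdf_lo = gamma.cdf(bounds[:-1], a=alpha + 1, scale=1.0 / alpha)
    return K * (cdf_hi - cdf_lo)

print(discrete_gamma_rates(alpha=0.5, K=4))  # four rates whose average is ~1.0
```

    Each site's likelihood is then the equal-weight (1/K) average of the usual single-rate likelihood evaluated at these K rates.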

  6. W-IQ-TREE: a fast online phylogenetic tool for maximum likelihood analysis.

    Science.gov (United States)

    Trifinopoulos, Jana; Nguyen, Lam-Tung; von Haeseler, Arndt; Minh, Bui Quang

    2016-07-01

    This article presents W-IQ-TREE, an intuitive and user-friendly web interface and server for IQ-TREE, an efficient phylogenetic software for maximum likelihood analysis. W-IQ-TREE supports multiple sequence types (DNA, protein, codon, binary and morphology) in common alignment formats and a wide range of evolutionary models including mixture and partition models. W-IQ-TREE performs fast model selection, partition scheme finding, efficient tree reconstruction, ultrafast bootstrapping, branch tests, and tree topology tests. All computations are conducted on a dedicated computer cluster and the users receive the results via URL or email. W-IQ-TREE is available at http://iqtree.cibiv.univie.ac.at. It is free and open to all users and there is no login requirement. PMID:27084950

  7. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    Directory of Open Access Journals (Sweden)

    Kodner Robin B

    2010-10-01

    Full Text Available Abstract Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.

  8. Maximum Likelihood Mosaics

    CERN Document Server

    Pires, Bernardo Esteves

    2010-01-01

    The majority of the approaches to the automatic recovery of a panoramic image from a set of partial views are suboptimal in the sense that the input images are aligned, or registered, pair by pair, e.g., consecutive frames of a video clip. These approaches lead to propagation errors that may be very severe, particularly when dealing with videos that show the same region at disjoint time intervals. Although some authors have proposed a post-processing step to reduce the registration errors in these situations, there have not been attempts to compute the optimal solution, i.e., the registrations leading to the panorama that best matches the entire set of partial views. This is our goal. In this paper, we use a generative model for the partial views of the panorama and develop an algorithm to compute in an efficient way the Maximum Likelihood estimate of all the unknowns involved: the parameters describing the alignment of all the images and the panorama itself.

  9. Shrinkage Effect in Ancestral Maximum Likelihood

    CERN Document Server

    Mossel, Elchanan; Steel, Mike

    2008-01-01

    Ancestral maximum likelihood (AML) is a method that simultaneously reconstructs a phylogenetic tree and ancestral sequences from extant data (sequences at the leaves). The tree and ancestral sequences maximize the probability of observing the given data under a Markov model of sequence evolution, in which branch lengths are also optimized but constrained to take the same value on any edge across all sequence sites. AML differs from the more usual form of maximum likelihood (ML) in phylogenetics because ML averages over all possible ancestral sequences. ML has long been known to be statistically consistent -- that is, it converges on the correct tree with probability approaching 1 as the sequence length grows. However, the statistical consistency of AML has not been formally determined, despite informal remarks in a literature that dates back 20 years. In this short note we prove a general result that implies that AML is statistically inconsistent. In particular we show that AML can `shrink' short edges in a t...

  10. Maximum-likelihood absorption tomography

    International Nuclear Information System (INIS)

    Maximum-likelihood methods are applied to the problem of absorption tomography. The reconstruction is done with the help of an iterative algorithm. We show how the statistics of the illuminating beam can be incorporated into the reconstruction. The proposed reconstruction method can be considered as a useful alternative in the extreme cases where the standard ill-posed direct-inversion methods fail. (authors)

  11. Regularized Maximum Likelihood for Intrinsic Dimension Estimation

    CERN Document Server

    Gupta, Mithun Das

    2012-01-01

    We propose a new method for estimating the intrinsic dimension of a dataset by applying the principle of regularized maximum likelihood to the distances between close neighbors. We propose a regularization scheme which is motivated by divergence minimization principles. We derive the estimator by a Poisson process approximation, argue about its convergence properties and apply it to a number of simulated and real datasets. We also show it has the best overall performance compared with two other intrinsic dimension estimators.
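
    For context, the unregularized estimator on which this kind of method builds is the Levina-Bickel maximum likelihood estimator of intrinsic dimension, itself derived from a Poisson-process approximation to the distances to close neighbours. A minimal sketch, assuming numpy and scikit-learn are available (the regularization proposed in the paper is not included):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def intrinsic_dim_mle(X, k=10):
    """Levina-Bickel MLE of intrinsic dimension from k-nearest-neighbour distances."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)
    dist = dist[:, 1:]                       # drop the zero self-distance
    # Per-point estimate: inverse of the mean of log(T_k / T_j), j = 1..k-1.
    log_ratios = np.log(dist[:, -1:] / dist[:, :-1])
    m_hat = (k - 1) / log_ratios.sum(axis=1)
    return m_hat.mean()                      # average the local estimates

# Example: points on a 2-D linear subspace embedded in 5 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 5))
print(intrinsic_dim_mle(X))                  # close to 2
```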

  12. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  13. The Maximum Likelihood Threshold of a Graph

    OpenAIRE

    Gross, Elizabeth; Sullivant, Seth

    2014-01-01

    The maximum likelihood threshold of a graph is the smallest number of data points that guarantees that maximum likelihood estimates exist almost surely in the Gaussian graphical model associated to the graph. We show that this graph parameter is connected to the theory of combinatorial rigidity. In particular, if the edge set of a graph $G$ is an independent set in the $n-1$-dimensional generic rigidity matroid, then the maximum likelihood threshold of $G$ is less than or equal to $n$. This c...

  14. Simulated Maximum Likelihood using Tilted Importance Sampling

    OpenAIRE

    Christian N. Brinch

    2008-01-01

    Abstract: This paper develops the important distinction between tilted and simple importance sampling as methods for simulating likelihood functions for use in simulated maximum likelihood. It is shown that tilted importance sampling removes a lower bound to simulation error for given importance sample size that is inherent in simulated maximum likelihood using simple importance sampling, the main method for simulating likelihood functions in the statistics literature. In addit...

  15. "No-background" maximum likelihood analysis in HBT interferometry

    International Nuclear Information System (INIS)

    We present a new 'no-background' procedure, based on the maximum likelihood method, for fitting the space-time size parameters of the particle production region in ultra-relativistic heavy ion collisions. This procedure uses an approximation to avoid the necessity of constructing a mixed-event background before fitting the data. (orig.)

  16. A maximum likelihood method for particle momentum determination

    International Nuclear Information System (INIS)

    We discuss a maximum likelihood method for determining a charged particle's momentum as it moves in a magnetic field. The formalism is presented in both rigorous and approximate forms. The rigorous form is valid when random processes include multiple scattering, energy loss and detector spatial resolution. When the measurement error is dominated by multiple scattering, it takes a particularly simple approximate form. The validity of both formalisms extends to include non-Gaussian multiple scattering distribution

  17. Applications of Maximum Likelihood Algorithm in Asynchronous CDMA Systems

    OpenAIRE

    Xiao, P; Strom, E

    2002-01-01

    We treat the problems of propagation delay and channel estimation as well as data detection of orthogonally modulated signals in an asynchronous DS-CDMA system over fading channels using the maximum likelihood (ML) approach. The overwhelming computational complexity of the ML algorithm makes it unfeasible for implementation. The emphasis of this paper is to reduce its complexity by some approximation methods. The derived approximative ML schemes are compared with conventional algorithms as we...

  18. Model Fit after Pairwise Maximum Likelihood.

    Science.gov (United States)

    Barendse, M T; Ligtvoet, R; Timmerman, M E; Oort, F J

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136
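
    The pairwise objective behind PML can be written compactly. In generic notation (a sketch, not copied from the paper): with n_{jk,cc'} the observed count of response pattern (c, c') on item pair (j, k) and pi_{jk,cc'}(theta) the model-implied probability of that cell,

    \[
    \ell_{\mathrm{PML}}(\theta) \;=\; \sum_{j<k} \sum_{c,\,c'} n_{jk,cc'} \, \log \pi_{jk,cc'}(\theta),
    \]

    and it is this sum over two-way tables, rather than the full multivariate log-likelihood, that is maximized.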

  19. Penalized maximum likelihood estimation and variable selection in geostatistics

    CERN Document Server

    Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919

    2012-01-01

    We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...

  20. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and...

  1. Maximum-likelihood fits to histograms for improved parameter estimation

    CERN Document Server

    Fowler, Joseph W

    2013-01-01

    Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
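
    A hedged sketch of this kind of Poisson maximum-likelihood histogram fit, written as minimization of the Poisson deviance (equivalent to maximizing the likelihood up to a model-independent constant) with a general-purpose optimizer rather than the modified Levenberg-Marquardt step described in the paper; the Gaussian-peak model and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def poisson_deviance(params, counts, centers, model):
    """Poisson deviance (-2 log L up to a constant) for binned event counts."""
    mu = np.clip(model(centers, *params), 1e-12, None)    # keep logs finite
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(counts > 0,
                        counts * np.log(counts / mu) - (counts - mu),
                        mu)                                # bins with zero counts
    return 2.0 * term.sum()

def model(x, amp, mean, sigma, bkg):                       # Gaussian peak + flat background
    return amp * np.exp(-0.5 * ((x - mean) / sigma) ** 2) + bkg

rng = np.random.default_rng(1)
x = np.linspace(-5.0, 5.0, 101)
counts = rng.poisson(model(x, 50.0, 0.0, 1.0, 2.0))        # simulated histogram

fit = minimize(poisson_deviance, x0=(40.0, 0.5, 1.5, 1.0),
               args=(counts, x, model), method="Nelder-Mead")
print(fit.x)   # should be close to (50, 0, 1, 2)
```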

  2. Reconstruction of diagonal elements of density matrix using maximum likelihood estimation

    International Nuclear Information System (INIS)

    The data of the experiment of Schiller et al., Physics Letters 77 (1996), are alternatively evaluated using maximum likelihood estimation. The given data are fitted better than by the standard deterministic approach. Nevertheless, the data are fitted equally well by a whole family of states. Standard deterministic predictions correspond approximately to the envelope of these maximum likelihood solutions. (author)

  3. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment to the...... equilibrium parameters and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system we obtain consistent estimates and provide their asymptotic distributions. The cointegration matrix is asymptotically mixed...... any influence on the long-run relationship. The rate of convergence of the estimators of the long-run relationships depends on the cointegration degree but it is optimal for the strong cointegration case considered. We also prove that misspecification of the degree of fractional cointegration does...

  4. Improved maximum likelihood reconstruction of complex multi-generational pedigrees.

    Science.gov (United States)

    Sheehan, Nuala A; Bartlett, Mark; Cussens, James

    2014-11-01

    The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as

  5. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

    The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polyno

  6. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the research on the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. This method has been proved in experiments, which can provide much clearer and more precise peaks in cross-correlation functions of leak signals. The leak location error has been less than 1 % of the distance between sensors, for example the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing has been described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of the significant frequencies.
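
    The time-arrival-difference idea at the core of this record can be sketched compactly: estimate the delay as the lag of the peak of a frequency-weighted cross-correlation. The sketch below leaves the weighting as a generic placeholder rather than the specific maximum-likelihood window derived in the paper; all names are illustrative.

```python
import numpy as np

def estimate_delay(x, y, fs, weight=None):
    """Delay of x relative to y (positive if x lags y), from a weighted cross-correlation."""
    n = len(x) + len(y) - 1
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    cross = X * np.conj(Y)
    if weight is not None:                      # e.g. a maximum-likelihood frequency window
        cross *= weight(np.fft.rfftfreq(n, d=1.0 / fs))
    cc = np.fft.irfft(cross, n)
    # Map FFT bins to lags: 0..len(x)-1, then -(len(y)-1)..-1.
    lags = np.concatenate((np.arange(n - len(y) + 1), np.arange(-(len(y) - 1), 0)))
    return lags[np.argmax(cc)] / fs

# Toy check: a noisy copy of a reference signal delayed by 25 samples.
rng = np.random.default_rng(2)
fs = 1000.0
ref = rng.normal(size=4096)
delayed = np.concatenate((np.zeros(25), ref[:-25])) + 0.1 * rng.normal(size=4096)
print(estimate_delay(delayed, ref, fs))          # approximately 0.025 s
```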

  7. A Maximum Likelihood Approach to Correlational Outlier Identification.

    Science.gov (United States)

    Bacon, Donald R.

    1995-01-01

    A maximum likelihood approach to correlational outlier identification is introduced and compared to the Mahalanobis D squared and Comrey D statistics through Monte Carlo simulation. Identification performance depends on the nature of correlational outliers and the measure used, but the maximum likelihood approach gives the most robust performance…

  8. Maximum Likelihood Estimation of the Identification Parameters and Its Correction

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    By taking the subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation is of higher approximating precision to the true parameters than the least square methods.

  9. Study on the Hungarian algorithm for the maximum likelihood data association problem

    Institute of Scientific and Technical Information of China (English)

    Wang Jianguo; He Peikun; Cao Wei

    2007-01-01

    A specialized Hungarian algorithm was developed here for the maximum likelihood data association problem with two implementation versions due to the presence of false alarms and missed detections. The maximum likelihood data association problem is formulated as a bipartite weighted matching problem. Its duality and the optimality conditions are given. The Hungarian algorithm with its computational steps, data structure and computational complexity is presented. The two implementation versions, the Hungarian forest (HF) algorithm and the Hungarian tree (HT) algorithm, and their combination with the naïve auction initialization are discussed. The computational results show that the HT algorithm is slightly faster than the HF algorithm and that both are superior to the classic Munkres algorithm.

  10. An iterative maximum-likelihood polychromatic algorithm for CT

    OpenAIRE

    De Man, Bruno; Nuyts, Johan; DUPONT, Patrick; Marchal, Guy; Suetens, Paul

    2001-01-01

    De Man B., Nuyts J., Dupont P., Marchal G., Suetens P., ''An iterative maximum-likelihood polychromatic algorithm for CT'', IEEE transactions on medical imaging, vol. 20, no. 10, pp. 999-1008, October 2001.

  11. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative effect between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
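
    A minimal sketch of fitting the two-component normal mixture mentioned above by maximum likelihood, using the EM algorithm on simulated data (the paper's own price series are not reproduced, and the example is univariate):

```python
import numpy as np

def em_two_normal(x, n_iter=200):
    """EM algorithm for a two-component normal mixture; returns (weights, means, sds)."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation.
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates of the parameters.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

rng = np.random.default_rng(3)
x = np.concatenate((rng.normal(-2.0, 0.5, 400), rng.normal(1.0, 1.0, 600)))
print(em_two_normal(x))   # weights ~ (0.4, 0.6), means ~ (-2, 1)
```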

  12. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...... ultrasound pulse, which includes a maximum likelihood attenuation estimator, is derived. The results of this correspondence are of great importance for deconvolution and attenuation imaging in medical ultrasound...

  13. Maximum likelihood factor analysis of the reactor coolant pump system

    International Nuclear Information System (INIS)

    In today's operating environment of nuclear power plants, setpoints are established for key plant parameters, such as temperature, pressure, and flow rate. Reducing excursions beyond these setpoints would save millions of dollars as a result of improved plant availability and improve plant safety as well. The statistical method of maximum likelihood factor analysis is presented, and the results of two computer runs are given. The results of the statistical analysis indicate that it is possible to consistently rank order the eleven tracked variables of the reactor coolant system. Implementation of the maximum likelihood factor method would permit the decision maker to predict unanticipated transients and reduce plant unavailability

  14. Verallgemeinerte Maximum-Likelihood-Methoden und der selbstinformative Grenzwert

    OpenAIRE

    Johannes, Jan

    2002-01-01

    Let X be a random variable with unknown distribution P. One of the central tasks of mathematical statistics is the construction of estimators for a derived parameter theta(P) from an observation X=x. In the case of a dominated family of distributions it is possible to apply the maximum likelihood principle (MLP). The Bayesian approach provides an alternative. In particular, under regularity conditions it turns out that the maximum likelihood estimate (MLS) ...

  15. Modified maximum likelihood registration based on information fusion

    Institute of Scientific and Technical Information of China (English)

    Yongqing Qi; Zhongliang Jing; Shiqiang Hu

    2007-01-01

    The bias estimation of passive sensors is considered based on information fusion in multi-platform multisensor tracking system. The unobservable problem of bearing-only tracking in blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of unobservable problem in the blind spot and can estimate the biases more rapidly and accurately than maximum likelihood method. It is statistically efficient since the standard deviation of bias estimation errors meets the theoretical lower bounds.

  16. A Rayleigh Doppler frequency estimator derived from maximum likelihood theory

    DEFF Research Database (Denmark)

    Hansen, Henrik; Affes, Sofiéne; Mermelstein, Paul

    1999-01-01

    capacities in low and high speed situations. We derive a Doppler frequency estimator using the maximum likelihood method and Jakes model (1974) of a Rayleigh fading channel. This estimator requires an FFT and simple post-processing only. Its performance is verified through simulations and found to yield good...

  17. A Rayleigh Doppler Frequency Estimator Derived from Maximum Likelihood Theory

    DEFF Research Database (Denmark)

    Hansen, Henrik; Affes, Sofiene; Mermelstein, Paul

    1999-01-01

    capacities in low and high speed situations. We derive a Doppler frequency estimator using the maximum likelihood method and Jakes model [Jakes] of a Rayleigh fading channel. This estimator requires an FFT and simple post-processing only. Its performance is verified through simulations and found to yield...

  18. Smoothed log-concave maximum likelihood estimation with applications

    CERN Document Server

    Chen, Yining

    2011-01-01

    We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.

  19. A scalable maximum likelihood method for quantum state tomography

    International Nuclear Information System (INIS)

    The principle of maximum likelihood reconstruction has proven to yield satisfactory results in the context of quantum state tomography for many-body systems of moderate system sizes. Until recently, however, quantum state tomography has been considered to be infeasible for systems consisting of a large number of subsystems due to the exponential growth of the Hilbert space dimension with the number of constituents. Several reconstruction schemes have been proposed since then to overcome the two main obstacles in quantum many-body tomography: experiment time and post-processing resources. Here we discuss one strategy to address these limitations for the maximum likelihood principle by adopting a particular state representation to merge a well established reconstruction algorithm maximizing the likelihood with techniques known from quantum many-body theory. (paper)

  20. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    Directory of Open Access Journals (Sweden)

    S. Sridevi

    2013-02-01

    Full Text Available Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and obscures pixel intensities. In fetal ultrasound images, edges and local fine details are especially important for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter therefore has to be contrived to proficiently suppress speckle noise and simultaneously preserve these features. The proposed filter is a generalization of the Rayleigh maximum likelihood filter that exploits statistical tools as tuning parameters and uses different shapes of quadrilateral kernels to estimate the noise-free pixel from its neighborhood. The performance of various filters, namely the Median, Kuwahara, Frost, homogeneous mask and Rayleigh maximum likelihood filters, is compared with the proposed filter in terms of PSNR and image profile. Comparatively, the proposed filter surpasses the conventional filters.

  1. Maximum Likelihood Estimation in Panels with Incidental Trends

    OpenAIRE

    Moon, Hyungsik; Phillips, Peter C. B.

    1999-01-01

    It is shown that a local to unity parameter can be consistently estimated by maximum likelihood with panel data when the cross section observations are independent. Consistency applies when there are no deterministic trends or when there is a homogeneous deterministic trend in the panel model. When there are heterogeneous deterministic trends the panel MLE of the local to unity parameter is inconsistent. This outcome provides a new instance of inconsistent ML estimation in dynam...

  2. Monte Carlo maximum likelihood estimation for discretely observed diffusion processes

    OpenAIRE

    Beskos, Alexandros; Papaspiliopoulos, Omiros; Roberts, Gareth

    2009-01-01

    This paper introduces a Monte Carlo method for maximum likelihood inference in the context of discretely observed diffusion processes. The method gives unbiased and a.s. continuous estimators of the likelihood function for a family of diffusion models and its performance in numerical examples is computationally efficient. It uses a recently developed technique for the exact simulation of diffusions, and involves no discretization error. We show that, under regularity conditions, the Monte C...

  3. Maximum-likelihood estimation of recent shared ancestry (ERSA)

    OpenAIRE

    Huff, Chad D.; Witherspoon, David J.; Simonson, Tatum S.; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W.; Burt, Randall W.; Guthery, Stephen L; Woodward, Scott R.; Jorde, Lynn B

    2011-01-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number a...

  4. Maximum Likelihood and the Bootstrap for Nonlinear Dynamic Models

    OpenAIRE

    Goncalves, Silvia; White, Halbert

    2002-01-01

    The bootstrap is an increasingly popular method for performing statistical inference. This paper provides the theoretical foundation for using the bootstrap as a valid tool of inference for quasi-maximum likelihood estimators (QMLE). We provide a unified framework for analyzing bootstrapped extremum estimators of nonlinear dynamic models for heterogeneous dependent stochastic processes. We apply our results to two block bootstrap methods, the moving blocks bootstrap of Künsch (1989) and Liu a...

  5. Testing, monitoring, and dating structural changes in maximum likelihood models

    OpenAIRE

    Zeileis, Achim; Shah, Ajay; Patnaik, Ila

    2008-01-01

    A unified toolbox for testing, monitoring, and dating structural changes is provided for likelihood-based regression models. In particular, least-squares methods for dating breakpoints are extended to maximum likelihood estimation. The usefulness of all techniques is illustrated by assessing the stability of de facto exchange rate regimes. The toolbox is used for investigating the Chinese exchange rate regime after China gave up on a fixed exchange rate to the US dollar in 2005 and tracking t...

  6. A Rayleigh Doppler frequency estimator derived from maximum likelihood theory

    OpenAIRE

    Hansen, Henrik; Affes, Sofiéne; Mermelstein, Paul

    1999-01-01

    Reliable estimates of Rayleigh Doppler frequency are useful for the optimization of adaptive multiple access wireless receivers. The adaptation parameters of such receivers are sensitive to the amount of Doppler and automatic reconfiguration to the speed of terminal movement can optimize cell capacities in low and high speed situations. We derive a Doppler frequency estimator using the maximum likelihood method and Jakes model (1974) of a Rayleigh fading channel. This estimator requires an FF...

  7. Maximum Likelihood Estimation for an Innovation Diffusion Model of New Product Acceptance

    OpenAIRE

    David C Schmittlein; Vijay Mahajan

    1982-01-01

    A maximum likelihood approach is proposed for estimating an innovation diffusion model of new product acceptance originally considered by Bass (Bass, F. M. 1969. A new product growth model for consumer durables. (January) 215–227.). The suggested approach allows: (1) computation of approximate standard errors for the diffusion model parameters, and (2) determination of the required sample size for forecasting the adoption level to any desired degree of accuracy. Using histograms from eight di...
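
    For reference, the Bass (1969) model to which the maximum likelihood approach is applied links the adoption hazard to an innovation coefficient p and an imitation coefficient q. With F(t) the cumulative fraction of eventual adopters who have adopted by time t,

    \[
    \frac{f(t)}{1 - F(t)} \;=\; p + q\,F(t),
    \qquad
    F(t) \;=\; \frac{1 - e^{-(p+q)t}}{1 + (q/p)\,e^{-(p+q)t}},
    \]

    and the likelihood is built from the numbers of adopters observed in successive time intervals (the histograms referred to in the abstract).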

  8. Operational risk models and maximum likelihood estimation error for small sample-sizes

    OpenAIRE

    Paul Larsen

    2015-01-01

    Operational risk models commonly employ maximum likelihood estimation (MLE) to fit loss data to heavy-tailed distributions. Yet several desirable properties of MLE (e.g. asymptotic normality) are generally valid only for large sample-sizes, a situation rarely encountered in operational risk. We study MLE in operational risk models for small sample-sizes across a range of loss severity distributions. We apply these results to assess (1) the approximation of parameter confidence intervals by as...

  9. A Maximum-Likelihood Approach to Force-Field Calibration.

    Science.gov (United States)

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2

  10. Maximum Likelihood Localization of Radiation Sources with unknown Source Intensity

    CERN Document Server

    Baidoo-Williams, Henry E

    2016-01-01

    In this paper, we consider a novel and robust maximum likelihood approach to localizing radiation sources with unknown statistics of the source signal strength. The result utilizes the smallest number of sensors required theoretically to localize the source. It is shown that, should the source lie in the open convex hull of the sensors, precisely $N+1$ are required in $\mathbb{R}^N$, $N \in \{1,\dots,3\}$. It is further shown that the region of interest, the open convex hull of the sensors, is entirely devoid of false stationary points. An augmented gradient ascent algorithm with random projections, applied should an estimate escape the convex hull, is presented.
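
    A hedged sketch of the kind of likelihood involved here (not the augmented gradient-ascent algorithm of the paper): each sensor records Poisson counts whose mean is the unknown intensity divided by the squared distance to the source, and the location is found by profiling out the closed-form maximum likelihood intensity and maximizing over candidate positions, here by a simple grid search. All names and the toy geometry are illustrative.

```python
import numpy as np

def poisson_loglik(counts, sensors, src, intensity):
    """Poisson log-likelihood (up to constants) under an inverse-square count model."""
    mu = intensity / np.sum((sensors - src) ** 2, axis=1)
    return np.sum(counts * np.log(mu) - mu)

def locate_source(counts, sensors, grid):
    """Profile maximum likelihood over candidate source locations."""
    best, best_ll = None, -np.inf
    for src in grid:
        d2 = np.sum((sensors - src) ** 2, axis=1)
        intensity = counts.sum() / np.sum(1.0 / d2)   # ML intensity given this location
        ll = poisson_loglik(counts, sensors, src, intensity)
        if ll > best_ll:
            best, best_ll = src, ll
    return best

# Toy setup: N+1 = 3 sensors in the plane, source inside their convex hull.
rng = np.random.default_rng(6)
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
true_src, true_intensity = np.array([0.4, 0.5]), 500.0
counts = rng.poisson(true_intensity / np.sum((sensors - true_src) ** 2, axis=1))
grid = np.array([[a, b] for a in np.linspace(0.05, 0.95, 19)
                 for b in np.linspace(0.05, 0.95, 19)])
print(locate_source(counts, sensors, grid))           # a grid point near (0.4, 0.5)
```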

  11. Maximum likelihood decay curve fits by the simplex method

    International Nuclear Information System (INIS)

    A multicomponent decay curve analysis technique has been developed and incorporated into the decay curve fitting computer code, MLDS (maximum likelihood decay by the simplex method). The fitting criteria are based on the maximum likelihood technique for decay curves made up of time binned events. The probabilities used in the likelihood functions are based on the Poisson distribution, so decay curves constructed from a small number of events are treated correctly. A simple utility is included which allows the use of discrete event times, rather than time-binned data, to make maximum use of the decay information. The search for the maximum in the multidimensional likelihood surface for multi-component fits is performed by the simplex method, which makes the success of the iterative fits extremely insensitive to the initial values of the fit parameters and eliminates the problems of divergence. The simplex method also avoids the problem of programming the partial derivatives of the decay curves with respect to all the variable parameters, which makes the implementation of new types of decay curves straightforward. Any of the decay curve parameters can be fixed or allowed to vary. Asymmetric error limits for each of the free parameters, which do not consider the covariance of the other free parameters, are determined. A procedure is presented for determining the error limits which contain the associated covariances. The curve fitting procedure in MLDS can easily be adapted for fits to other curves with any functional form. (orig.)
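
    A hedged sketch of the core of such a fit: a Poisson log-likelihood for time-binned decay events, maximized by the Nelder-Mead simplex algorithm (here via scipy rather than the MLDS code itself). The single-component exponential model and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, t_edges, counts):
    """Negative Poisson log-likelihood for time-binned decay events."""
    a, lam = params                                    # initial activity and decay constant
    if a <= 0 or lam <= 0:
        return np.inf
    # Expected counts per bin: integral of a * exp(-lam * t) over each bin.
    mu = (a / lam) * (np.exp(-lam * t_edges[:-1]) - np.exp(-lam * t_edges[1:]))
    return np.sum(mu - counts * np.log(mu))

# Toy data: one decaying component observed in 50 time bins.
rng = np.random.default_rng(4)
t_edges = np.linspace(0.0, 10.0, 51)
true_a, true_lam = 200.0, 0.7
mu_true = (true_a / true_lam) * (np.exp(-true_lam * t_edges[:-1]) - np.exp(-true_lam * t_edges[1:]))
counts = rng.poisson(mu_true)

fit = minimize(neg_log_likelihood, x0=(100.0, 0.3), args=(t_edges, counts),
               method="Nelder-Mead")
print(fit.x)   # close to (200, 0.7)
```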

  12. Massively Parallel Spatially-Variant Maximum Likelihood Image Restoration

    Science.gov (United States)

    Boden, A. F.; Redding, D. C.; Hanisch, R. J.; Mo, J.

    Motivated by attributes of images from the Hubble Space Telescope (HST) Wide Field/Planetary Cameras (WF/PC-1 and WFPC-2), in this paper we report on massively parallel implementations of maximum likelihood image restoration with spatially-variant point-spread (SV-PSF) models. We use an interpolative procedure to realize a SV-PSF model from sparse reference data, and realize the large amount of concurrency inherent in the restoration computation by employing a Trussel & Hunt-style segmentation of the restoration task, distributing the work load on a network of UNIX workstations using the public domain PVM system. We give examples of application of the restoration code to recent WFPC2 observations of HH 47.

  13. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    CERN Document Server

    Agnese, R; Balakishiyeva, D; Thakur, R Basu; Bauer, D A; Billard, J; Borgland, A; Bowles, M A; Brandt, D; Brink, P L; Bunker, R; Cabrera, B; Caldwell, D O; Cerdeno, D G; Chagani, H; Chen, Y; Cooley, J; Cornell, B; Crewdson, C H; Cushman, P; Daal, M; Di Stefano, P C F; Doughty, T; Esteban, L; Fallows, S; Figueroa-Feliciano, E; Fritts, M; Godfrey, G L; Golwala, S R; Graham, M; Hall, J; Harris, H R; Hertel, S A; Hofer, T; Holmgren, D; Hsu, L; Huber, M E; Jastram, A; Kamaev, O; Kara, B; Kelsey, M H; Kennedy, A; Kiveni, M; Koch, K; Leder, A; Loer, B; Asamar, E Lopez; Mahapatra, R; Mandic, V; Martinez, C; McCarthy, K A; Mirabolfathi, N; Moffatt, R A; Moore, D C; Nelson, R H; Oser, S M; Page, K; Page, W A; Partridge, R; Pepin, M; Phipps, A; Prasad, K; Pyle, M; Qiu, H; Rau, W; Redl, P; Reisetter, A; Ricci, Y; Rogers, H E; Saab, T; Sadoulet, B; Sander, J; Schneck, K; Schnee, R W; Scorza, S; Serfass, B; Shank, B; Speller, D; Upadhyayula, S; Villano, A N; Welliver, B; Wright, D H; Yellin, S; Yen, J J; Young, B A; Zhang, J

    2014-01-01

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search (CDMS II) experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from $^{210}$Pb decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  14. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq

    2012-06-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.

  15. Preliminary attempt on maximum likelihood tomosynthesis reconstruction of DEI data

    International Nuclear Information System (INIS)

    Tomosynthesis is a three-dimensional reconstruction method that can remove the effect of superimposition with limited angle projections. It is especially promising in mammography, where radiation dose is a concern. In this paper, we propose a maximum likelihood tomosynthesis reconstruction algorithm (ML-TS) for the apparent absorption data of diffraction enhanced imaging (DEI). The motivation of this contribution is to develop a tomosynthesis algorithm for low-dose or noisy circumstances and bring DEI closer to clinical application. The theoretical statistical models of DEI data in physics are analyzed and the proposed algorithm is validated with experimental data from the Beijing Synchrotron Radiation Facility (BSRF). The results of ML-TS have better contrast compared with the well known 'shift-and-add' algorithm and the FBP algorithm. (authors)

  16. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.

  17. Analytical maximum likelihood estimation of stellar magnetic fields

    CERN Document Server

    González, M J Martínez; Ramos, A Asensio; Belluzzi, L

    2011-01-01

    The polarised spectrum of stellar radiation encodes valuable information on the conditions of stellar atmospheres and the magnetic fields that permeate them. In this paper, we give explicit expressions to estimate the magnetic field vector and its associated error from the observed Stokes parameters. We study the solar case, where specific intensities are observed, and then the stellar case, where we receive the polarised flux. In this second case, we concentrate on the explicit expression for the case of a slow rotator with a dipolar magnetic field geometry. Moreover, we also give explicit formulae to retrieve the magnetic field vector from the LSD profiles without assuming mean values for the LSD artificial spectral line. The formulae have been obtained assuming that the spectral lines can be described in the weak field regime and using a maximum likelihood approach. The errors are recovered by means of the Hermitian matrix. The biases of the estimators are analysed in depth.
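
    A sketch of the weak-field relations that make such closed-form estimators possible (standard results, not copied from the paper, whose full treatment also covers the transverse component and LSD profiles): Stokes V is proportional to the wavelength derivative of the intensity, and the longitudinal field maximizing the likelihood under Gaussian noise reduces to a weighted ratio,

    \[
    V(\lambda) \;\simeq\; -\,C\,B_\parallel\,\frac{\partial I}{\partial \lambda},
    \qquad
    \hat{B}_\parallel \;=\; -\,\frac{\sum_i V_i \,(\partial I/\partial\lambda)_i}{C \sum_i (\partial I/\partial\lambda)_i^{2}},
    \qquad
    C = 4.67\times 10^{-13}\,\lambda_0^{2}\,g_{\mathrm{eff}},
    \]

    with the wavelength in angstroms and the field in gauss.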

  18. Predicting unprotected reactor upset response using the maximum likelihood method

    International Nuclear Information System (INIS)

    A number of advanced reactor concepts incorporate intrinsic design features that act to safely limit reactor response during upsets. In the integral fast reactor (IFR) concept, for example, metallic fuel is used to provide sufficient negative reactivity feedback to achieve a safe response for a number of unprotected upsets. In reactors such as the IFR that rely on passive features for part of their safety, the licensing of these systems will probably require that they be periodically tested to verify proper operation. Commercial light water plants have similar requirements for active safety systems. The approach to testing considered in this paper involves determining during normal operation the values of key reactor parameters that govern the unprotected reactor response and then using these values to predict upset response. The values are determined using the maximum likelihood method. If the predicted reactor response is within safe limits, then one concludes that the intrinsic safety features are operating correctly

  19. Stochastic Maximum Likelihood (SML) parametric estimation of overlapped Doppler echoes

    Directory of Open Access Journals (Sweden)

    E. Boyer

    2004-11-01

    Full Text Available This paper investigates the area of overlapped echo data processing. In such cases, classical methods, such as Fourier-like techniques or pulse pair methods, fail to estimate the first three spectral moments of the echoes because of their lack of resolution. A promising method, based on a modelization of the covariance matrix of the time series and on a Stochastic Maximum Likelihood (SML) estimation of the parameters of interest, has been recently introduced in literature. This method has been tested on simulations and on few spectra from actual data but no exhaustive investigation of the SML algorithm has been conducted on actual data: this paper fills this gap. The radar data came from the thunderstorm campaign that took place at the National Astronomy and Ionospheric Center (NAIC) in Arecibo, Puerto Rico, in 1998.

  20. Stochastic Maximum Likelihood (SML) parametric estimation of overlapped Doppler echoes

    Science.gov (United States)

    Boyer, E.; Petitdidier, M.; Larzabal, P.

    2004-11-01

    This paper investigates the area of overlapped echo data processing. In such cases, classical methods, such as Fourier-like techniques or pulse pair methods, fail to estimate the first three spectral moments of the echoes because of their lack of resolution. A promising method, based on a modelization of the covariance matrix of the time series and on a Stochastic Maximum Likelihood (SML) estimation of the parameters of interest, has been recently introduced in literature. This method has been tested on simulations and on few spectra from actual data but no exhaustive investigation of the SML algorithm has been conducted on actual data: this paper fills this gap. The radar data came from the thunderstorm campaign that took place at the National Astronomy and Ionospheric Center (NAIC) in Arecibo, Puerto Rico, in 1998.

  1. Two-Stage Maximum Likelihood Estimation (TSMLE) for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    Full Text Available This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for the multitone code division multiple access (MT-CDMA) system. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER) of the system over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.

  2. A maximum likelihood approach to the destriping technique

    CERN Document Server

    Keihanen, E; Poutanen, T; Maino, D; Burigana, C

    2003-01-01

    The destriping technique is a viable tool for removing different kinds of systematic effects in CMB related experiments. It has already been proven to work for gain instabilities that produce the so-called 1/f noise and periodic fluctuations due to e.g. thermal instability. Both effects when coupled with the observing strategy result in stripes on the observed sky region. Here we present a maximum-likelihood approach to this type of technique and provide also a useful generalization. As a working case we consider a data set similar to what the Planck satellite will produce in its Low Frequency Instrument (LFI). We compare our method to those presented in the literature and find some improvement in performance. Our approach is also more general and allows for different base functions to be used when fitting the systematic effect under consideration. We study the effect of increasing the number of these base functions on the quality of signal cleaning and reconstruction. This study is related to Planck LFI acti...

  3. Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2004-01-01

    has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce incorrect velocity estimates due to the multiple peaks when the velocity search range is increased above the maximum detectable velocity. The root-mean-square (RMS) error on the velocity estimates for the simulated data is on the order of 7 cm/s (14%) for the CMLE, and it is comparable to the RMS for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the percentages of incorrect velocity estimates are 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and to estimate low velocity levels was confirmed on in...

  4. A Maximum Likelihood Approach to Least Absolute Deviation Regression

    Directory of Open Access Journals (Sweden)

    Yinbo Li

    2004-09-01

    Full Text Available Least absolute deviation (LAD) regression is an important tool used in numerous applications throughout science and engineering, mainly due to the intrinsic robust characteristics of LAD. In this paper, we show that the optimization needed to solve the LAD regression problem can be viewed as a sequence of maximum likelihood estimates (MLE) of location. The derived algorithm reduces to an iterative procedure where a simple coordinate transformation is applied during each iteration to direct the optimization procedure along edge lines of the cost surface, followed by an MLE of location which is executed by a weighted median operation. Requiring weighted medians only, the new algorithm can be easily modularized for hardware implementation, as opposed to most of the other existing LAD methods, which require complicated operations such as matrix entry manipulations. One exception is Wesolowsky's direct descent algorithm, which among the top algorithms is also based on weighted median operations. Simulation shows that the new algorithm is superior in speed to Wesolowsky's algorithm, which is simple in structure as well. The new algorithm provides a better tradeoff solution between convergence speed and implementation complexity.
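
    The building block the abstract describes is that the ML estimate of location under Laplace-distributed errors is a weighted median; for a no-intercept line, the whole LAD fit collapses to a single weighted median of the slopes y_i/x_i with weights |x_i|. The sketch below shows only this building block, not the paper's full coordinate-transformation algorithm.

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: the ML estimate of location for Laplace-distributed samples."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cdf, 0.5)]

# LAD fit of a no-intercept line y = b*x reduces to a single weighted median:
rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200)
y = 2.5 * x + rng.laplace(scale=1.0, size=200)
b_hat = weighted_median(y / x, np.abs(x))
```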

  5. Efficient scatter modelling for incorporation in maximum likelihood reconstruction

    International Nuclear Information System (INIS)

    Definition of a simplified model of scatter which can be incorporated in maximum likelihood reconstruction for single-photon emission tomography (SPET) continues to be appealing; however, implementation must be efficient for it to be clinically applicable. In this paper an efficient algorithm for scatter estimation is described in which the spatial scatter distribution is implemented as a spatially invariant convolution for points of constant depth in tissue. The scatter estimate is weighted by a space-dependent build-up factor based on the measured attenuation in tissue. Monte Carlo simulation of a realistic thorax phantom was used to validate this approach. Further efficiency was introduced by estimating scatter once after a small number of iterations using the ordered subsets expectation maximisation (OSEM) reconstruction algorithm. The scatter estimate was incorporated as a constant term in subsequent iterations rather than being modified at each iteration. Monte Carlo simulation was used to demonstrate that the scatter estimate does not change significantly provided at least two iterations of OSEM reconstruction, with subset size 8, are used. Complete scatter-corrected reconstruction of 64 projections of 40 x 128 pixels was achieved in 38 min using a Sun Sparc20 computer. (orig.)

  6. tmle : An R Package for Targeted Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Susan Gruber

    2012-11-01

    Full Text Available Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user-supplied covariates; the estimated parameters include the additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.
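
    For orientation, the sketch below is a minimal one-step TMLE of the additive treatment effect for a binary outcome, written in Python with simple logistic working models; it is not the tmle package itself, which additionally handles missingness, repeated measures and data-adaptive fitting.

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import expit, logit
from sklearn.linear_model import LogisticRegression

def tmle_ate(Y, A, W):
    """One-step TMLE of E[Y(1)] - E[Y(0)] for binary Y and A, covariate matrix W."""
    # Initial estimates: outcome regression Q(A, W) and propensity score g(W) = P(A=1 | W).
    q_fit = LogisticRegression(max_iter=1000).fit(np.column_stack([A, W]), Y)
    g = np.clip(LogisticRegression(max_iter=1000).fit(W, A).predict_proba(W)[:, 1], 0.025, 0.975)
    q1 = np.clip(q_fit.predict_proba(np.column_stack([np.ones_like(A), W]))[:, 1], 1e-6, 1 - 1e-6)
    q0 = np.clip(q_fit.predict_proba(np.column_stack([np.zeros_like(A), W]))[:, 1], 1e-6, 1 - 1e-6)
    qa = np.where(A == 1, q1, q0)
    # Targeting step: fluctuate the initial fit along the "clever covariate" H(A, W).
    H = (A / g - (1 - A) / (1 - g)).reshape(-1, 1)
    eps = sm.GLM(Y, H, family=sm.families.Binomial(), offset=logit(qa)).fit().params[0]
    q1_star = expit(logit(q1) + eps / g)         # updated counterfactual predictions
    q0_star = expit(logit(q0) - eps / (1 - g))
    return float(np.mean(q1_star - q0_star))     # targeted additive treatment effect
```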

  7. Maximum likelihood estimation of shear wave speed in transient elastography.

    Science.gov (United States)

    Audière, Stéphane; Angelini, Elsa D; Sandrin, Laurent; Charbit, Maurice

    2014-06-01

    Ultrasonic transient elastography (TE) enables assessment, under active mechanical constraints, of the elasticity of the liver, which correlates with hepatic fibrosis stages. This technique is routinely used in clinical practice to noninvasively assess liver stiffness. The Fibroscan system used in this work generates a shear wave via an impulse stress applied on the surface of the skin and records a temporal series of radio-frequency (RF) lines using a single-element ultrasound probe. A shear wave propagation map (SWPM) is generated as a 2-D map of the displacements along depth and time, derived from the correlations of the sequential 1-D RF lines, assuming that the direction of propagation (DOP) of the shear wave coincides with the ultrasound beam axis (UBA). Under the assumption of pure elastic tissue, elasticity is proportional to the shear wave speed. This paper introduces a novel approach to the processing of the SWPM, deriving the maximum likelihood estimate of the shear wave speed by comparing the observed displacements with the estimates provided by the Green's functions. A simple parametric model is used to relate Green's theoretical values to the noisy measures provided by the SWPM, taking into account depth-varying attenuation and time delay. The proposed method was evaluated on numerical simulations using a finite element method simulator and on physical phantoms. Evaluation on this test database reported very high agreement of shear wave speed measures when DOP and UBA coincide. PMID:24835213

  8. Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs

    Energy Technology Data Exchange (ETDEWEB)

    Nix, D.A.; Hogden, J.E.

    1998-12-01

    The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set used to train neither MALCOM nor the predictor. On average, this unsupervised model achieves 92% of the performance obtained using the corresponding supervised method.

  9. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
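
    A minimal sketch of such a fit: for an acute exposure, the expected number of dicentrics in n scored cells is n(αd + βd²), and the Poisson log-likelihood can be maximised numerically. The dose levels and counts below are hypothetical, and the parameter names α and β stand in for the paper's κγ and κg(t, τ).

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, dose, cells, dicentrics):
    """Poisson negative log-likelihood (up to a constant) for a linear-quadratic
    yield lambda(d) = alpha*d + beta*d**2 per cell."""
    alpha, beta = params
    mu = cells * (alpha * dose + beta * dose ** 2)
    return np.sum(mu - dicentrics * np.log(mu))

dose = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])        # Gy (hypothetical)
cells = np.array([5000, 5000, 2000, 1000, 500, 500])      # cells scored (hypothetical)
dicentrics = np.array([15, 40, 60, 110, 120, 170])        # dicentrics observed (hypothetical)

fit = minimize(neg_loglik, x0=[0.02, 0.05], args=(dose, cells, dicentrics),
               method="L-BFGS-B", bounds=[(1e-6, None), (1e-6, None)])
alpha_hat, beta_hat = fit.x
```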

  10. The Multi-Mission Maximum Likelihood framework (3ML)

    CERN Document Server

    Vianello, Giacomo; Younk, Patrick; Tibaldo, Luigi; Burgess, James M; Ayala, Hugo; Harding, Patrick; Hui, Michelle; Omodei, Nicola; Zhou, Hao

    2015-01-01

    Astrophysical sources are now observed by many different instruments at different wavelengths, from radio to high-energy gamma-rays, with an unprecedented quality. Putting all these data together to form a coherent view, however, is a very difficult task. Each instrument has its own data format, software and analysis procedure, which are difficult to combine. It is for example very challenging to perform a broadband fit of the energy spectrum of the source. The Multi-Mission Maximum Likelihood framework (3ML) aims to solve this issue, providing a common framework which allows for a coherent modeling of sources using all the available data, independent of their origin. At the same time, thanks to its architecture based on plug-ins, 3ML uses the existing official software of each instrument for the corresponding data in a way which is transparent to the user. 3ML is based on the likelihood formalism, in which a model summarizing our knowledge about a particular region of the sky is convolved with the instrument...

  11. Maximum Likelihood Classification of High-Resolution SAR Images in Urban Area

    Science.gov (United States)

    Soheili Majd, M.; Simonetto, E.; Polidori, L.

    2011-09-01

    In this work, we present a state-of-the-art statistical analysis of polarimetric synthetic aperture radar (SAR) data through the modeling of several indices. We concentrate on eight ground classes which have been derived from amplitudes, the co-polarisation ratio, depolarization ratios, and other polarimetric descriptors. To study their different statistical behaviours, we consider Gauss, log-normal, Beta I, Weibull, Gamma, and Fisher statistical models and estimate their parameters using three methods: the method of moments (MoM), the maximum-likelihood (ML) methodology, and the method of log-cumulants (MoML). Then, we study the opportunity of introducing this information in an adapted supervised classification scheme based on the maximum-likelihood rule and the Fisher pdf. Our work relies on an image of a suburban area, acquired by the airborne RAMSES SAR sensor of ONERA. The results prove the potential of such data to discriminate urban surfaces and show the usefulness of adapting any classical classification algorithm; however, classification maps present a persistent class confusion between flat gravelled or concrete roofs and trees.

  12. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Background: A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results: We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.

  13. DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    Full Text Available The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly-used distance based methods though not as accurate as maximum likelihood methods from good quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analysis of the same dataset using conventional methods. Taken together these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.

  14. On maximum likelihood estimation of the concentration parameter of von Mises-Fisher distributions.

    Science.gov (United States)

    Hornik, Kurt; Grün, Bettina

    2014-01-01

    Maximum likelihood estimation of the concentration parameter of von Mises-Fisher distributions involves inverting the ratio [Formula: see text] of modified Bessel functions and computational methods are required to invert these functions using approximative or iterative algorithms. In this paper we use Amos-type bounds for [Formula: see text] to deduce sharper bounds for the inverse function, determine the approximation error of these bounds, and use these to propose a new approximation for which the error tends to zero when the inverse of [Formula: see text] is evaluated at values tending to [Formula: see text] (from the left). We show that previously introduced rational bounds for [Formula: see text] which are invertible using quadratic equations cannot be used to improve these bounds. PMID:25309045
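
    For context, a common practical recipe (not the Amos-type bound scheme of the paper) is to start from the closed-form approximation of Banerjee et al. (2005) and refine it with a few Newton steps on A_p(κ) = I_{p/2}(κ)/I_{p/2−1}(κ) = r̄, where r̄ is the mean resultant length and p the dimension. A minimal sketch:

```python
import numpy as np
from scipy.special import iv

def kappa_mle(rbar, p, n_newton=5):
    """Approximate ML estimate of the von Mises-Fisher concentration kappa from the
    mean resultant length rbar in dimension p (Banerjee et al. start + Newton refinement)."""
    kappa = rbar * (p - rbar ** 2) / (1.0 - rbar ** 2)     # closed-form approximation
    for _ in range(n_newton):
        a = iv(p / 2, kappa) / iv(p / 2 - 1, kappa)        # A_p(kappa)
        # dA/dkappa = 1 - A^2 - (p - 1) * A / kappa
        kappa -= (a - rbar) / (1.0 - a ** 2 - (p - 1) / kappa * a)
    return kappa
```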

  15. On Maximum Likelihood Estimation for Left Censored Burr Type III Distribution

    Directory of Open Access Journals (Sweden)

    Navid Feroze

    2015-12-01

    Full Text Available Burr type III is an important distribution used to model failure time data. The paper addresses the problem of estimation of the parameters of the Burr type III distribution based on maximum likelihood estimation (MLE) when the samples are left censored. As the closed-form expression for the MLEs of the parameters cannot be derived, approximate solutions have been obtained through iterative procedures. An extensive simulation study has been carried out to investigate the performance of the estimators with respect to sample size, censoring rate, and true parametric values. A real life example has also been presented. The study revealed that the proposed estimators are consistent and capable of providing efficient results under small to moderate samples.

  16. Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model.

  17. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi;

    2013-01-01

    Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio ... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error...

  18. A viable method for goodness-of-fit test in maximum likelihood fit

    Institute of Scientific and Technical Information of China (English)

    ZHANG Feng; GAO Yuan-Ning; HUO Lei

    2011-01-01

    A test statistic is proposed to perform the goodness-of-fit test in the unbinned maximum likelihood fit. Without using a detailed expression of the efficiency function, the test statistic is found to be strongly correlated with the maximum likelihood function if the efficiency function varies smoothly. We point out that the correlation coefficient can be estimated by the Monte Carlo technique. With the established method, two examples are given to illustrate the performance of the test statistic.

  19. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    OpenAIRE

    Wang Chenguang; Li Hongying; Wang Zhong; Wang Yaqun; Wang Ningtao; Wang Zuoheng; Wu Rongling

    2012-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood...

  20. Which quantile is the most informative? Maximum likelihood, maximum entropy and quantile regression

    OpenAIRE

    Bera, A. K.; Galvao Jr, A. F.; Montes-Rojas, G.; Park, S. Y.

    2010-01-01

    This paper studies the connections among quantile regression, the asymmetric Laplace distribution, maximum likelihood and maximum entropy. We show that the maximum likelihood problem is equivalent to the solution of a maximum entropy problem where we impose moment constraints given by the joint consideration of the mean and median. Using the resulting score functions we propose an estimator based on the joint estimating equations. This approach delivers estimates for the slope parameters toge...
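
    The link can be made concrete in a few lines: minimising the quantile-regression check loss over the residuals is, up to constants, the same as maximising an asymmetric Laplace likelihood. A minimal sketch (names ours):

```python
import numpy as np

def check_loss(u, tau):
    """Quantile-regression check function rho_tau summed over residuals u."""
    u = np.asarray(u)
    return np.sum(u * (tau - (u < 0)))

def ald_negloglik(u, tau, sigma=1.0):
    """Negative log-likelihood of asymmetric Laplace errors with skewness tau and scale
    sigma; apart from the constant first term it is the check loss divided by sigma,
    which is why quantile regression is ML estimation under this error law."""
    n = len(u)
    return n * np.log(sigma / (tau * (1 - tau))) + check_loss(u, tau) / sigma
```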

  1. Maximum likelihood training of connectionist models: comparison with least squares back-propagation and logistic regression.

    OpenAIRE

    Spackman, K. A.

    1991-01-01

    This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates...

  2. Maximum Likelihood DOA Estimation of Multiple Wideband Sources in the Presence of Nonuniform Sensor Noise

    Science.gov (United States)

    Chen, C. E.; Lorenzelli, F.; Hudson, R. E.; Yao, K.

    2007-12-01

    We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first algorithm is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show that the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge and that both the SC-ML and the approximately-concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.

  3. Maximum Likelihood DOA Estimation of Multiple Wideband Sources in the Presence of Nonuniform Sensor Noise

    Directory of Open Access Journals (Sweden)

    K. Yao

    2007-12-01

    Full Text Available We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first algorithm is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show that the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge and that both the SC-ML and the approximately-concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.

  4. Fusion of hyperspectral and lidar data based on dimension reduction and maximum likelihood

    Science.gov (United States)

    Abbasi, B.; Arefi, H.; Bigdeli, B.; Motagh, M.; Roessner, S.

    2015-04-01

    Limitations and deficiencies of different remote sensing sensors in the extraction of different objects have caused fusion of data from different sensors to become more widespread for improving classification results. Using a variety of data provided by different sensors increases the spatial and the spectral accuracy. Lidar (Light Detection and Ranging) data fused together with hyperspectral images (HSI) provide rich data for classification of surface objects. Lidar data, representing high quality geometric information, play a key role in the segmentation and classification of elevated features such as buildings and trees. On the other hand, hyperspectral data with high spectral resolution support a strong distinction between objects having different spectral signatures, such as soil, water, and grass. This paper presents a fusion methodology for Lidar and hyperspectral data for improving classification accuracy in urban areas. In the first step, we applied feature extraction strategies on each data set separately. In this step, texture features based on the GLCM (Grey Level Co-occurrence Matrix) from the Lidar data, and PCA (Principal Component Analysis) and MNF (Minimum Noise Fraction) based dimension reduction methods for the HSI, are generated. In the second step, a Maximum Likelihood (ML) based classification method is applied to each feature space. Finally, a fusion method is applied to fuse the results of classification. A co-registered hyperspectral and Lidar data set from the University of Houston was utilized to examine the result of the proposed method. These data contain nine classes: Building, Tree, Grass, Soil, Water, Road, Parking, Tennis Court and Running Track. Experimental investigation proves the improvement of classification accuracy to 88%.
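
    The classical maximum likelihood classification step referred to here fits one Gaussian per class from training pixels and assigns each pixel to the class with the highest log-likelihood (equal priors assumed). A minimal sketch, independent of the specific Houston data set:

```python
import numpy as np

def train_ml_classifier(X, y):
    """Fit a per-class Gaussian (mean, covariance) model by maximum likelihood."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def ml_classify(X, params):
    """Assign each row of X to the class with the highest Gaussian log-likelihood."""
    labels, scores = [], []
    for c, (mu, cov) in params.items():
        diff = X - mu
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', diff, inv, diff) + logdet))
        labels.append(c)
    return np.array(labels)[np.argmax(np.stack(scores), axis=0)]
```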

  5. Efficient Near Maximum-Likelihood Detection for Underdetermined MIMO Antenna Systems Using a Geometrical Approach

    Directory of Open Access Journals (Sweden)

    Arogyaswami Paulraj

    2007-12-01

    Full Text Available Maximum-likelihood (ML) detection is guaranteed to yield the minimum probability of erroneous detection and is thus of great importance for both multiuser detection and space-time decoding. For multiple-input multiple-output (MIMO) antenna systems where the number of receive antennas is at least the number of signals multiplexed in the spatial domain, ML detection can be done efficiently using sphere decoding. Suboptimal detectors are also well known to have reasonable performance at low complexity. It is, nevertheless, much less understood how to obtain good detection at affordable complexity if there are fewer receive antennas than transmitted signals (i.e., underdetermined MIMO systems). In this paper, our aim is to develop an efficient detection strategy that can achieve near-ML performance for underdetermined MIMO systems. Our method is based on the geometrical understanding that the ML point happens to be a point that is “close” to the decoding hyperplane in all directions. The fact that there are far fewer such proximity-close points is used to devise a decoding method that promises to greatly reduce the decoding complexity while achieving near-ML performance. An average-case complexity analysis based on Gaussian approximation is also given.
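
    For reference, the ML criterion itself is simply the search for the symbol vector minimising ||y − Hx||²; the sketch below does this by brute force over a QPSK alphabet, which is exactly the exponential-complexity search that sphere decoding and the paper's geometric method try to avoid.

```python
import numpy as np
from itertools import product

def ml_detect(y, H, constellation):
    """Brute-force ML detection: return the symbol vector x minimising ||y - H x||^2."""
    best, best_metric = None, np.inf
    for cand in product(constellation, repeat=H.shape[1]):
        x = np.array(cand)
        metric = np.linalg.norm(y - H @ x) ** 2
        if metric < best_metric:
            best, best_metric = x, metric
    return best

# Example alphabet: unit-energy QPSK symbols.
# qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
# x_hat = ml_detect(y, H, qpsk)
```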

  6. Implementation of linear filters for iterative penalized maximum likelihood SPECT reconstruction

    International Nuclear Information System (INIS)

    This paper reports on six low-pass linear filters applied in frequency space implemented for iterative penalized maximum-likelihood (ML) SPECT image reconstruction. The filters implemented were the Shepp-Logan filter, the Butterworth filter, the Gaussian filter, the Hann filter, the Parzen filter, and the Lagrange filter. The low-pass filtering was applied in frequency space to projection data for the initial estimate and to the difference of projection data and reprojected data for higher order approximations. The projection data were acquired experimentally from a chest phantom consisting of non-uniform attenuating media. All the filters could effectively remove the noise and edge artifacts associated with the ML approach if the frequency cutoff was properly chosen. The improved performance of the Parzen and Lagrange filters relative to the others was observed. The best image, by viewing its profiles in terms of noise-smoothing, edge-sharpening, and contrast, was the one obtained with the Parzen filter. However, the Lagrange filter has the potential to consider the characteristics of the detector response function.

  7. Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET

    Science.gov (United States)

    Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee

    2016-04-01

    Positron emission tomography (PET) is a widely used imaging modality; however, the PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half of the detector size and determines the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of the existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms. The WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can be easily extended to other iterative reconstruction algorithms, such as maximum a posteriori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design.
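
    The underlying MLEM update that WL-MLEM builds on can be written in a few lines; the sketch below is the plain (stationary, LSF-free) version for a generic system matrix, not the wobbling/LSF variant of the paper.

```python
import numpy as np

def mlem(sysmat, projections, n_iter=20):
    """Plain MLEM: lambda <- lambda / (A^T 1) * A^T (y / (A lambda)).
    sysmat: (n_bins, n_voxels) system matrix A; projections: measured counts y."""
    sens = sysmat.sum(axis=0)                       # A^T 1, per-voxel sensitivity
    img = np.ones(sysmat.shape[1])
    for _ in range(n_iter):
        expected = sysmat @ img                     # forward projection
        ratio = np.divide(projections, expected,
                          out=np.zeros_like(expected), where=expected > 0)
        img *= (sysmat.T @ ratio) / np.maximum(sens, 1e-12)
    return img
```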

  8. ROC study of maximum likelihood estimator human brain image reconstructions in PET clinical practice

    International Nuclear Information System (INIS)

    This paper reports on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of 18F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. The authors report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and the authors propose a variation based on reading three consecutive slices at a time, rating only the center slice.

  9. Maximum likelihood training of connectionist models: comparison with least squares back-propagation and logistic regression.

    Science.gov (United States)

    Spackman, K A

    1991-01-01

    This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates of the weights in a multi-layer model, and compares LS-BP to ML-BP using several examples. It shows that in many neural networks, least squares estimation gives inferior results and should be abandoned in favor of maximum likelihood estimation. Questions remain about the potential uses of multi-level connectionist models in such areas as diagnostic systems and risk-stratification in outcomes research. PMID:1807606
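
    The difference between the two criteria can be made explicit for a single sigmoid output unit: LS-BP minimises the sum of squared errors, while ML-BP minimises the Bernoulli negative log-likelihood (cross-entropy), whose gradient with respect to the pre-activation does not carry the vanishing p(1 − p) factor. A minimal sketch, not the paper's full multi-layer derivation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sse_vs_crossentropy(y, z):
    """Compare the LS-BP and ML-BP criteria for a sigmoid output p = sigmoid(z)."""
    p = sigmoid(z)
    sse = 0.5 * np.sum((y - p) ** 2)                              # least squares (LS-BP)
    nll = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))        # Bernoulli NLL (ML-BP)
    # Gradients w.r.t. the pre-activation z: ML gives the simple error signal (p - y),
    # whereas least squares multiplies it by the vanishing factor p * (1 - p).
    grad_sse = (p - y) * p * (1 - p)
    grad_nll = p - y
    return sse, nll, grad_sse, grad_nll
```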

  10. Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, Ch.W. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain)], E-mail: lerche@ific.uv.es; Ros, A. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain); Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain); Sanchez, F.; Benlloch, J.M. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain)

    2009-06-01

    The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42 mm x 42 mm x 10 mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.

  11. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

    Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding is proposed to provide significant capacity gains over the traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  12. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    International Nuclear Information System (INIS)

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
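
    The MLE objective in question can also be minimised with a generic optimiser rather than the modified Levenberg-Marquardt scheme described here; the sketch below fits a single-exponential decay histogram by minimising the Poisson negative log-likelihood (up to a data-only constant), with hypothetical bin centres and counts.

```python
import numpy as np
from scipy.optimize import minimize

def poisson_neg_loglik(params, t, counts):
    """Poisson negative log-likelihood (up to a constant) for the decay model
    mu(t) = a * exp(-t / tau) + b, with a, b > 0."""
    a, tau, b = params
    mu = a * np.exp(-t / tau) + b
    return np.sum(mu - counts * np.log(mu))

# Hypothetical fluorescence-lifetime histogram: bin centres in ns, counts per bin.
t = np.linspace(0.0, 12.0, 64)
rng = np.random.default_rng(2)
counts = rng.poisson(200.0 * np.exp(-t / 2.5) + 5.0)

fit = minimize(poisson_neg_loglik, x0=[150.0, 2.0, 1.0], args=(t, counts),
               method="L-BFGS-B",
               bounds=[(1e-6, None), (1e-2, None), (1e-6, None)])
a_hat, tau_hat, b_hat = fit.x
```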

  13. Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB

    CERN Document Server

    Millar, Russell B

    2011-01-01

    This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis

  14. Inter-bit prediction based on maximum likelihood estimate for distributed video coding

    Science.gov (United States)

    Klepko, Robert; Wang, Demin; Huchet, Grégory

    2010-01-01

    Distributed Video Coding (DVC) is an emerging video coding paradigm for the systems that require low complexity encoders supported by high complexity decoders. A typical real world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base-stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on the maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bitplane at a time, starting from the most significant bit-plane. Results provided from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.

  15. Maximum-likelihood scintillation detection for EM-CCD based gamma cameras

    International Nuclear Information System (INIS)

    Gamma cameras based on charge-coupled devices (CCDs) coupled to continuous scintillation crystals can combine a good detection efficiency with high spatial resolutions with the aid of advanced scintillation detection algorithms. A previously developed analytical multi-scale algorithm (MSA) models the depth-dependent light distribution but does not take statistics into account. Here we present and validate a novel statistical maximum-likelihood algorithm (MLA) that combines a realistic light distribution model with an experimentally validated statistical model. The MLA was tested for an electron multiplying CCD optically coupled to CsI(Tl) scintillators of different thicknesses. For 99mTc imaging, the spatial resolution (for perpendicular and oblique incidence), energy resolution and signal-to-background counts ratio (SBR) obtained with the MLA were compared with those of the MSA. Compared to the MSA, the MLA improves the energy resolution by more than a factor of 1.6 and the SBR is enhanced by more than a factor of 1.3. For oblique incidence (approximately 45°), the depth-of-interaction corrected spatial resolution is improved by a factor of at least 1.1, while for perpendicular incidence the MLA resolution does not consistently differ significantly from the MSA result for all tested scintillator thicknesses. For the thickest scintillator (3 mm, interaction probability 66% at 141 keV) a spatial resolution (perpendicular incidence) of 147 μm full width at half maximum (FWHM) was obtained with an energy resolution of 35.2% FWHM. These results of the MLA were achieved without prior calibration of scintillations as is needed for many statistical scintillation detection algorithms. We conclude that the MLA significantly improves the gamma camera performance compared to the MSA.

  16. Lesion quantification in oncological positron emission tomography: a maximum likelihood partial volume correction strategy.

    Science.gov (United States)

    De Bernardi, Elisabetta; Faggiano, Elena; Zito, Felicia; Gerundini, Paolo; Baselli, Giuseppe

    2009-07-01

    A maximum likelihood (ML) partial volume effect correction (PVEC) strategy for the quantification of uptake and volume of oncological lesions in 18F-FDG positron emission tomography is proposed. The algorithm is based on the application of ML reconstruction on volumetric regional basis functions initially defined on a smooth standard clinical image and iteratively updated in terms of their activity and volume. The volume of interest (VOI) containing a previously detected region is segmented by a k-means algorithm in three regions: A central region surrounded by a partial volume region and a spill-out region. All volume outside the VOI (background with all other structures) is handled as a unique basis function and therefore "frozen" in the reconstruction process except for a gain coefficient. The coefficients of the regional basis functions are iteratively estimated with an attenuation-weighted ordered subset expectation maximization (AWOSEM) algorithm in which a 3D, anisotropic, space variant model of point spread function (PSF) is included for resolution recovery. The reconstruction-segmentation process is iterated until convergence; at each iteration, segmentation is performed on the reconstructed image blurred by the system PSF in order to update the partial volume and spill-out regions. The developed PVEC strategy was tested on sphere phantom studies with activity contrasts of 7.5 and 4 and compared to a conventional recovery coefficient method. Improved volume and activity estimates were obtained with low computational costs, thanks to blur recovery and to a better local approximation to ML convergence. PMID:19673203

  17. Modified Maximum Likelihood Estimation from Censored Samples in Burr Type X Distribution

    Directory of Open Access Journals (Sweden)

    R.R.L. Kantam

    2015-12-01

    Full Text Available The two-parameter Burr type X distribution is considered and its scale parameter is estimated from a censored sample using the classical maximum likelihood method. The estimating equations are modified to obtain simpler, more efficient estimators. Two methods of modification are suggested. The small sample efficiencies are presented.

  18. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to...

  19. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    Science.gov (United States)

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  20. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    Science.gov (United States)

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  1. Finding Quantitative Trait Loci Genes with Collaborative Targeted Maximum Likelihood Learning.

    Science.gov (United States)

    Wang, Hui; Rose, Sherri; van der Laan, Mark J

    2011-07-01

    Quantitative trait loci mapping is focused on identifying the positions and effect of genes underlying an observed trait. We present a collaborative targeted maximum likelihood estimator in a semi-parametric model using a newly proposed 2-part super learning algorithm to find quantitative trait loci genes in listeria data. Results are compared to the parametric composite interval mapping approach. PMID:21572586

  2. Finding Quantitative Trait Loci Genes with Collaborative Targeted Maximum Likelihood Learning

    OpenAIRE

    Wang, Hui; Rose, Sherri; van der Laan, Mark J.

    2011-01-01

    Quantitative trait loci mapping is focused on identifying the positions and effect of genes underlying an observed trait. We present a collaborative targeted maximum likelihood estimator in a semi-parametric model using a newly proposed 2-part super learning algorithm to find quantitative trait loci genes in listeria data. Results are compared to the parametric composite interval mapping approach.

  3. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    Science.gov (United States)

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  4. Maximum likelihood based multi-channel isotropic reverberation reduction for hearing aids

    DEFF Research Database (Denmark)

    Kuklasiński, Adam; Doclo, Simon; Jensen, Søren Holdt; Jensen, Jesper

    We propose a multi-channel Wiener filter for speech dereverberation in hearing aids. The proposed algorithm uses joint maximum likelihood estimation of the speech and late reverberation spectral variances, under the assumption that the late reverberant sound field is cylindrically isotropic. The...

  5. MLEP: an R package for exploring the maximum likelihood estimates of penetrance parameters

    Directory of Open Access Journals (Sweden)

    Sugaya Yuki

    2012-08-01

    Full Text Available Background: Linkage analysis is a useful tool for detecting genetic variants that regulate a trait of interest, especially genes associated with a given disease. Although penetrance parameters play an important role in determining gene location, they are assigned arbitrary values according to the researcher’s intuition or as estimated by the maximum likelihood principle. Several methods exist by which to evaluate the maximum likelihood estimates of penetrance, although not all of these are supported by software packages and some are biased by marker genotype information, even when disease development is due solely to the genotype of a single allele. Findings: Programs for exploring the maximum likelihood estimates of penetrance parameters were developed using the R statistical programming language supplemented by external C functions. The software returns a vector of polynomial coefficients of penetrance parameters, representing the likelihood of pedigree data. From the likelihood polynomial supplied by the proposed method, the likelihood value and its gradient can be precisely computed. To reduce the effect of the supplied dataset on the likelihood function, feasible parameter constraints can be introduced into maximum likelihood estimates, thus enabling flexible exploration of the penetrance estimates. An auxiliary program generates a perspective plot allowing visual validation of the model’s convergence. The functions are collectively available as the MLEP R package. Conclusions: Linkage analysis using penetrance parameters estimated by the MLEP package enables feasible localization of a disease locus. This is shown through a simulation study and by demonstrating how the package is used to explore maximum likelihood estimates. Although the input dataset tends to bias the likelihood estimates, the method yields accurate results superior to the analysis using intuitive penetrance values for disease with low allele frequencies. MLEP is...

  6. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    Full Text Available This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the 𝒢0 law. This paper deals with amplitude data, so the 𝒢A0 distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments and on order statistics) of the parameters of the 𝒢A0 distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternate optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.

  7. Application of asymptotic expansions for maximum likelihood estimators errors to gravitational waves from binary mergers: The single interferometer case

    International Nuclear Information System (INIS)

    In this paper we apply to gravitational waves (GW) from the inspiral phase of binary systems a recently derived frequentist methodology to calculate analytically the error for a maximum likelihood estimate of physical parameters. We use expansions of the covariance and the bias of a maximum likelihood estimate in terms of inverse powers of the signal-to-noise ratio (SNR), where the square root of the first order in the covariance expansion is the Cramér-Rao lower bound (CRLB). We evaluate the expansions, for the first time, for GW signals in the noise of GW interferometers. The examples are limited to a single, optimally oriented, interferometer. We also compare the error estimates using the first two orders of the expansions with existing numerical Monte Carlo simulations. The first two orders of the covariance allow us to get error predictions closer to what is observed in numerical simulations than the CRLB. The methodology also predicts a necessary SNR to approximate the error with the CRLB and provides new insight into the relationship between waveform properties, SNR, dimension of the parameter space and estimation errors. For example, timing by matched filtering can achieve the CRLB only if the SNR is larger than the kurtosis of the gravitational wave spectrum, and the necessary SNR is much larger if other physical parameters are also unknown.

  8. Machine learning approximation techniques using dual trees

    OpenAIRE

    Ergashbaev, Denis

    2015-01-01

    This master's thesis explores a dual-tree framework as applied to a particular class of machine learning problems collectively referred to as generalized n-body problems. It builds a new algorithm on top of this framework and improves the existing Boosted OGE classifier.

  9. Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks

    Directory of Open Access Journals (Sweden)

    Lin Lin

    2015-12-01

    Full Text Available Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks.

  10. LASER: A Maximum Likelihood Toolkit for Detecting Temporal Shifts in Diversification Rates From Molecular Phylogenies

    Directory of Open Access Journals (Sweden)

    Daniel L. Rabosky

    2006-01-01

    Full Text Available Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time to alternative models where rates have remained constant over time. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.

  11. Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks

    Science.gov (United States)

    Lin, Lin; Yang, Chengfeng; Ma, Maode

    2015-01-01

    Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks. PMID:26690173

  12. String-averaging expectation-maximization for maximum likelihood estimation in emission tomography

    International Nuclear Information System (INIS)

    We study the maximum likelihood model in emission tomography and propose a new family of algorithms for its solution, called string-averaging expectation-maximization (SAEM). In the string-averaging algorithmic regime, the index set of all underlying equations is split into subsets, called ‘strings’, and the algorithm separately proceeds along each string, possibly in parallel. Then, the end-points of all strings are averaged to form the next iterate. SAEM algorithms with several strings present better practical merits than the classical row-action maximum-likelihood algorithm. We present numerical experiments showing the effectiveness of the algorithmic scheme, using data of image reconstruction problems. Performance is evaluated from the computational cost and reconstruction quality viewpoints. A complete convergence theory is also provided. (paper)
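    A schematic Python sketch of one string-averaging EM sweep for Poisson data y ≈ Poisson(Ax) is given below. The choice of contiguous row blocks as "strings" and the two sequential sub-steps per string are simplifying assumptions made for illustration only, not the paper's general formulation.

```python
# Schematic string-averaging EM iteration for y ~ Poisson(A x), assuming the
# strings are simply contiguous blocks of rows of A.  Within a string the
# algorithm proceeds sequentially with EM-type updates restricted to that
# block; the end-points of all strings are then averaged.
import numpy as np

def em_block_update(x, A_b, y_b, eps=1e-12):
    # Classical MLEM update using only the rows in one block (a "string" step).
    ratio = y_b / np.maximum(A_b @ x, eps)
    return x * (A_b.T @ ratio) / np.maximum(A_b.sum(axis=0), eps)

def saem_iteration(x, A, y, n_strings=4):
    blocks = np.array_split(np.arange(A.shape[0]), n_strings)
    endpoints = []
    for string in blocks:                        # each string could run in parallel
        x_s = x.copy()
        for idx in np.array_split(string, 2):    # sequential steps along the string
            x_s = em_block_update(x_s, A[idx], y[idx])
        endpoints.append(x_s)
    return np.mean(endpoints, axis=0)            # average the string end-points

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(64, 16))
x_true = rng.uniform(0.5, 2.0, size=16)
y = rng.poisson(A @ x_true).astype(float)
x = np.ones(16)
for _ in range(50):
    x = saem_iteration(x, A, y)
print(np.round(x, 2))
```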

  13. THE GENERALIZED MAXIMUM LIKELIHOOD METHOD APPLIED TO HIGH PRESSURE PHASE EQUILIBRIUM

    Directory of Open Access Journals (Sweden)

    CARDOZO-FILHO Lúcio

    1997-01-01

    Full Text Available The generalized maximum likelihood method was used to determine binary interaction parameters between carbon dioxide and components of orange essential oil. Vapor-liquid equilibrium was modeled with Peng-Robinson and Soave-Redlich-Kwong equations, using a methodology proposed in 1979 by Asselineau, Bogdanic and Vidal. Experimental vapor-liquid equilibrium data on binary mixtures formed with carbon dioxide and compounds usually found in orange essential oil were used to test the model. These systems were chosen to demonstrate that the maximum likelihood method produces binary interaction parameters for cubic equations of state capable of satisfactorily describing phase equilibrium, even for a binary such as ethanol/CO2. Results corroborate that the Peng-Robinson, as well as the Soave-Redlich-Kwong, equation can be used to describe phase equilibrium for the following systems: components of essential oil of orange/CO2.

  14. A new unfolding code combining maximum entropy and maximum likelihood for neutron spectrum measurement

    International Nuclear Information System (INIS)

    We present a new spectrum unfolding code, the Maximum Entropy and Maximum Likelihood Unfolding Code (MEALU), based on the maximum likelihood method combined with the maximum entropy method, which can determine a neutron spectrum without requiring an initial guess spectrum. The Normal or Poisson distributions can be used for the statistical distribution. MEALU can treat full covariance data for a measured detector response and response function. The algorithm was verified through an analysis of mock-up data and its performance was checked by applying it to measured data. The results for measured data from the Joyo experimental fast reactor were also compared with those obtained by the conventional J-log method for neutron spectrum adjustment. It was found that MEALU has potential advantages over conventional methods with regard to preparation of a priori information and uncertainty estimation. (author)

  15. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    Science.gov (United States)

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
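    The report's computational core, solving the maximum-likelihood score equations by repeated Newton-Raphson iterations, can be sketched generically as follows. The logistic-regression log-likelihood used here is only a stand-in, since the NDMMF equations themselves are not reproduced in this summary.

```python
# Generic Newton-Raphson maximum-likelihood iteration, illustrated on a
# logistic-regression log-likelihood rather than on the NDMMF itself.  The
# structure is the same: iterate beta <- beta - H^{-1} * score until the
# update is tiny.
import numpy as np

def newton_raphson_mle(X, y, tol=1e-10, max_iter=50):
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # model probabilities
        score = X.T @ (y - p)                        # gradient of the log-likelihood
        hessian = -(X * (p * (1 - p))[:, None]).T @ X
        step = np.linalg.solve(hessian, -score)      # Newton step: -H^{-1} * score
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
beta_true = np.array([-0.5, 1.0, 2.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))
print(np.round(newton_raphson_mle(X, y), 2))
```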

  16. A multinomial maximum likelihood program /MUNOML/. [in modeling sensory and decision phenomena

    Science.gov (United States)

    Curry, R. E.

    1975-01-01

    A multinomial maximum likelihood program (MUNOML) for signal detection and for behavior models is discussed. It is found to be useful in day to day operation since it provides maximum flexibility with minimum duplicated effort. It has excellent convergence qualities and rarely goes beyond 10 iterations. A library of subroutines is being collected for use with MUNOML, including subroutines for a successive categories model and for signal detectability models.

  17. Maximum likelihood reconstruction in fully 3D PET via the SAGE algorithm

    International Nuclear Information System (INIS)

    The SAGE and ordered subsets algorithms have been proposed as fast methods to compute penalized maximum likelihood estimates in PET. We have implemented both for use in fully 3D PET and completed a preliminary evaluation. The technique used to compute the transition matrix is fully described. The evaluation suggests that the ordered subsets algorithm converges much faster than SAGE, but that it stops short of the optimal solution

  18. Maximum Likelihood DOA Estimation of Multiple Wideband Sources in the Presence of Nonuniform Sensor Noise

    OpenAIRE

    Yao, K.; R. E. Hudson; F. Lorenzelli; C. E. Chen

    2007-01-01

    We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonunifo...

  19. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    OpenAIRE

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2007-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. ...

  20. Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code

    OpenAIRE

    Adel Ahmadi; Siamak Talebi

    2015-01-01

    Motivated by the decompositions of sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block code (QOSTBC). The proposed algorithm with a relatively simple design exploits structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for ML metric which divides the metric in...

  1. Determination of linear displacement by envelope detection with maximum likelihood estimation

    International Nuclear Information System (INIS)

    We demonstrate in this report an envelope detection technique with maximum likelihood estimation in a least-squares sense for determining displacement. This technique is achieved by sampling the amplitudes of quadrature signals resulting from a heterodyne interferometer, so that a displacement measurement resolution of the order of λ/10^4 is experimentally verified. A phase unwrapping procedure is also described and experimentally demonstrated, indicating that displacement can be measured unambiguously beyond a single wavelength.
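    A toy Python sketch of the basic signal chain (quadrature sampling, envelope, arctangent phase, unwrapping, scaling to displacement) is shown below. The double-pass scaling factor λ/(4π), the noise levels, and the omission of the least-squares amplitude fit are assumptions made for illustration only.

```python
# Toy sketch of displacement recovery from sampled quadrature signals
# I = A cos(phi), Q = A sin(phi) of a heterodyne interferometer.  The phase is
# taken from the arctangent, unwrapped, and scaled by lambda/(4*pi) assuming a
# double-pass (Michelson-type) geometry; the least-squares / maximum-likelihood
# fit of the quadrature amplitudes described in the report is not reproduced.
import numpy as np

lam = 632.8e-9                          # HeNe wavelength, metres (assumed)
t = np.linspace(0.0, 1.0, 5000)
d_true = 2e-6 * t                       # 2 micrometres of linear travel
phi_true = 4 * np.pi * d_true / lam

rng = np.random.default_rng(3)
I = np.cos(phi_true) + 0.01 * rng.normal(size=t.size)
Q = np.sin(phi_true) + 0.01 * rng.normal(size=t.size)

envelope = np.hypot(I, Q)               # amplitude of the quadrature pair
phi = np.unwrap(np.arctan2(Q, I))       # phase unwrapping removes the 2*pi jumps
d_est = phi * lam / (4 * np.pi)

print("residual RMS [nm]:", 1e9 * np.sqrt(np.mean((d_est - d_true) ** 2)))
```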

  2. Investigation of spectral statistics of nuclear systems by maximum likelihood estimation method

    International Nuclear Information System (INIS)

    In this paper, the maximum likelihood estimation technique is employed to study the spectral statistics of nuclear systems within the nearest-neighbor spacing distribution framework. Using the available empirical data, the spectral statistics of different sequences are analyzed. The ML-based estimates suggest more regular dynamics and also minimal uncertainties (variations very close to the CRLB) compared with other estimation methods. The efficiencies of the considered distribution functions are also examined; the results indicate the smallest CRLB for the Brody distribution.

  3. Maximum likelihood drift estimation for the mixing of two fractional Brownian motions

    OpenAIRE

    Mishura, Yuliya

    2015-01-01

    We construct the maximum likelihood estimator (MLE) of the unknown drift parameter $\\theta\\in \\mathbb{R}$ in the linear model $X_t=\\theta t+\\sigma B^{H_1}(t)+B^{H_2}(t),\\;t\\in[0,T],$ where $B^{H_1}$ and $B^{H_2}$ are two independent fractional Brownian motions with Hurst indices $\\frac12

  4. Performance of the maximum likelihood estimators for the parameters of multivariate generalized Gaussian distributions

    OpenAIRE

    Bombrun, Lionel; Pascal, Frédéric; Tourneret, Jean-Yves; Berthoumieu, Yannick

    2012-01-01

    This paper studies the performance of the maximum likelihood estimators (MLE) for the parameters of multivariate generalized Gaussian distributions. When the shape parameter belongs to ]0,1[, we have proved that the scatter matrix MLE exists and is unique up to a scalar factor. After providing some elements about this proof, an estimation algorithm based on a Newton-Raphson recursion is investigated. Some experiments illustrate the convergence speed of this algorithm. The bias and consistency...

  5. Preliminary application of maximum likelihood method in HL-2A Thomson scattering system

    International Nuclear Information System (INIS)

    A maximum likelihood method for processing the data of the HL-2A Thomson scattering system is presented. Based on mathematical statistics, the method maximizes the likelihood of the observed data given the theoretical model, so that a more accurate result is obtained. Its applicability has been demonstrated by comparison with the ratios method, and some of the drawbacks of the ratios method do not exist in this new approach. (authors)

  6. Asymptotic Properties of Maximum Likelihood Estimates in the Mixed Poisson Model

    OpenAIRE

    Lambert, Diane; Tierney, Luke

    1984-01-01

    This paper considers the asymptotic behavior of the maximum likelihood estimators (mle's) of the probabilities of a mixed Poisson distribution with a nonparametric mixing distribution. The vector of estimated probabilities is shown to converge in probability to the vector of mixed probabilities at rate $n^{1/2-\\varepsilon}$ for any $\\varepsilon > 0$ under a generalized $\\chi^2$ distance function. It is then shown that any finite set of the mle's has the same joint limiting distribution as doe...

  7. Maximum Likelihood Estimation in Gaussian Chain Graph Models under the Alternative Markov Property

    OpenAIRE

    Drton, Mathias; Eichler, Michael

    2005-01-01

    The AMP Markov property is a recently proposed alternative Markov property for chain graphs. In the case of continuous variables with a joint multivariate Gaussian distribution, it is the AMP rather than the earlier introduced LWF Markov property that is coherent with data-generation by natural block-recursive regressions. In this paper, we show that maximum likelihood estimates in Gaussian AMP chain graph models can be obtained by combining generalized least squares and iterative proportiona...

  8. A New Maximum Likelihood Approach for Free Energy Profile Construction from Molecular Simulations

    OpenAIRE

    Lee, Tai-Sung; Radak, Brian K.; Pabis, Anna; York, Darrin M.

    2012-01-01

    A novel variational method for construction of free energy profiles from molecular simulation data is presented. The variational free energy profile (VFEP) method uses the maximum likelihood principle applied to the global free energy profile based on the entire set of simulation data (e.g from multiple biased simulations) that spans the free energy surface. The new method addresses common obstacles in two major problems usually observed in traditional methods for estimating free energy surfa...

  9. THE GENERALIZED MAXIMUM LIKELIHOOD METHOD APPLIED TO HIGH PRESSURE PHASE EQUILIBRIUM

    OpenAIRE

    Cardozo-Filho, Lúcio; STRAGEVITCH Luiz; Fred WOLFF; M. Angela A Meireles

    1997-01-01

    The generalized maximum likelihood method was used to determine binary interaction parameters between carbon dioxide and components of orange essential oil. Vapor-liquid equilibrium was modeled with Peng-Robinson and Soave-Redlich-Kwong equations, using a methodology proposed in 1979 by Asselineau, Bogdanic and Vidal. Experimental vapor-liquid equilibrium data on binary mixtures formed with carbon dioxide and compounds usually found in orange essential oil were used to test the model. These s...

  10. "Separating Information Maximum Likelihood Estimation of Realized Volatility and Covariance with Micro-Market Noise"

    OpenAIRE

    Naoto Kunitomo; Seisho Sato

    2008-01-01

    For estimating the realized volatility and covariance by using high frequency data, we introduce the Separating Information Maximum Likelihood (SIML) method when there are possibly micro-market noises. The resulting estimator is simple and it has the representation as a specific quadratic form of returns. The SIML estimator has reasonable asymptotic properties; it is consistent and it has the asymptotic normality (or the stable convergence in the general case) when the sample size is large un...

  11. Further Simulation Evidence on the Performance of the Poisson Pseudo-Maximum Likelihood Estimator

    OpenAIRE

    Santos Silva, Joao; Tenreyro, Silvana

    2009-01-01

    We extend the simulation results given in Santos-Silva and Tenreyro (2006, ‘The Log of Gravity’, The Review of Economics and Statistics, 88, pp.641-658) by considering data generated as a finite mixture of gamma variates. Data generated in this way can naturally have a large proportion of zeros and is fully compatible with constant elasticity models such as the gravity equation. Our results confirm that the Poisson pseudo maximum likelihood estimator is generally well behaved.

  12. Adapted Maximum-Likelihood Gaussian Models for Numerical Optimization with Continuous EDAs

    OpenAIRE

    Bosman, Peter; Grahl, J; Thierens, D.

    2007-01-01

    This article focuses on numerical optimization with continuous Estimation-of-Distribution Algorithms (EDAs). Specifically, the focus is on the use of one of the most common and best understood probability distributions: the normal distribution. We first give an overview of the existing research on this topic. We then point out a source of inefficiency in EDAs that make use of the normal distribution with maximum-likelihood (ML) estimates. Scaling the covariance matrix beyond its ML estimate d...

  13. Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics

    OpenAIRE

    Prix, R.; Krishnan, B.

    2009-01-01

    We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as {\\cal F} -statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic ('{\\cal B} -statistic') u...

  14. Super Learner and Targeted Maximum Likelihood Estimation for Longitudinal Data Structures with Applications to Atrial Fibrillation

    OpenAIRE

    Brooks, Jordan

    2012-01-01

    This thesis discusses the Super Learner and Targeted Maximum Likelihood Estimation (TMLE) for longitudinal data structures in nonparametric statistical models. It focuses specifically on time-dependent data structures where the outcome of interest may be described as a counting process. A Super Learner for the conditional intensity of the counting process is proposed based on the minimization of squared error and negative Bernoulli loglikelihood risks. An analytic comparison of the oracle ine...

  15. Impact of Land Ownership on Productivity and Efficiency of Rice Farmers: A Simulated Maximum Likelihood Approach

    OpenAIRE

    Koirala, Krishna H.; Mishra, Ashok K.; Mohanty, Samarendu

    2014-01-01

    This paper investigates the factors affecting rice production and technical efficiency of rice farmers in Philippines. Particular attention is given to the role of land ownership. We use the 2007-2012 Loop Survey from the Institute of Rice Research Institute (IRRI) and simulated maximum likelihood (SML) approach. Results show that land ownership plays an important role in rice production. In particular, compared to owner operators, farmers who lease land are less productive. Additionally, res...

  16. Maximum Likelihood Estimator For Doppler Parameter And Cramer Rao Bound In ZP-OFDM UWA Channel

    OpenAIRE

    Lyonnet, Bastien; Siclet, Cyrille; Brossier, Jean-Marc

    2010-01-01

    A Doppler estimation system using a maximum likelihood criterion is presented in the context of underwater acoustic communications between moving transmitter/receiver. We simulate the method for the estimation of the Doppler effect induced by an underwater acoustic channel (UWA) using Zero Padded-Orthogonal Frequency Division Multiplexing (ZP-OFDM). Among the wide range of physical processes that impact OFDM communications through the underwater environment, Doppler effect is an important cau...

  17. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    Science.gov (United States)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but also is affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
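    The weighting idea sketched in the abstract, in which each point's weight combines the readout noise with the radiance noise propagated through the local slope of the fitted model, can be illustrated with a small iteratively reweighted quadratic fit. All numbers and the "effective variance" formulation below are illustrative assumptions, not the actual VIIRS calibration procedure.

```python
# Sketch of a maximum-likelihood (iteratively reweighted) quadratic fit where
# both variables are noisy: each point's weight combines the DN noise with the
# radiance noise propagated through the current model slope.  Coefficient
# values and noise levels are made up for illustration.
import numpy as np

def quad(c, L):
    return c[0] + c[1] * L + c[2] * L**2

def ml_quadratic_fit(L, dn, sig_L, sig_dn, n_iter=10):
    c = np.polyfit(L, dn, 2)[::-1]                  # ordinary fit as a starting point
    for _ in range(n_iter):
        slope = c[1] + 2 * c[2] * L                 # d(DN)/d(radiance) of current model
        w = 1.0 / (sig_dn**2 + (slope * sig_L)**2)  # effective variance per point
        V = np.vander(L, 3, increasing=True)        # design matrix [1, L, L^2]
        c = np.linalg.solve(V.T @ (w[:, None] * V), V.T @ (w * dn))
    return c

rng = np.random.default_rng(4)
L_true = np.linspace(1.0, 100.0, 40)                # "aperture radiance" grid
c_true = np.array([5.0, 12.0, 0.01])
dn = quad(c_true, L_true) + rng.normal(0.0, 2.0, L_true.size)
L_meas = L_true + rng.normal(0.0, 0.5, L_true.size)
print(np.round(ml_quadratic_fit(L_meas, dn, sig_L=0.5, sig_dn=2.0), 3))
```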

  18. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    CERN Document Server

    Hall, Alex

    2016-01-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...

  19. Fast Approximation Algorithm for Restricted Euclidean Bottleneck Steiner Tree Problem

    Directory of Open Access Journals (Sweden)

    Zimao Li

    2014-04-01

    Full Text Available Bottleneck Steiner tree problem asks to find a Steiner tree for n terminals with at most k Steiner points such that the length of the longest edge in the tree is minimized. The problem has applications in the design of wireless communication networks. In this paper we study a restricted version of the bottleneck Steiner tree problem in the Euclidean plane which requires that only degree-2 Steiner points are possibly adjacent in the optimal solution. We first show that the problem is NP-hard and cannot be approximated within unless P=NP, and provide a fast polynomial time deterministic approximation algorithm with performance ratio .

  20. A Nuclear Ribosomal DNA Phylogeny of Acer Inferred with Maximum Likelihood, Splits Graphs, and Motif Analysis of 606 Sequences

    Directory of Open Access Journals (Sweden)

    Guido W. Grimm

    2006-01-01

    Full Text Available The multi-copy internal transcribed spacer (ITS) region of nuclear ribosomal DNA is widely used to infer phylogenetic relationships among closely related taxa. Here we use maximum likelihood (ML) and splits graph analyses to extract phylogenetic information from ~ 600 mostly cloned ITS sequences, representing 81 species and subspecies of Acer, and both species of its sister Dipteronia. Additional analyses compared sequence motifs in Acer and several hundred Anacardiaceae, Burseraceae, Meliaceae, Rutaceae, and Sapindaceae ITS sequences in GenBank. We also assessed the effects of using smaller data sets of consensus sequences with ambiguity coding (accounting for within-species variation) instead of the full (partly redundant) original sequences. Neighbor-nets and bipartition networks were used to visualize conflict among character state patterns. Species clusters observed in the trees and networks largely agree with morphology-based classifications; of de Jong’s (1994) 16 sections, nine are supported in neighbor-net and bipartition networks, and ten by sequence motifs and the ML tree; of his 19 series, 14 are supported in networks, motifs, and the ML tree. Most nodes had higher bootstrap support with matrices of 105 or 40 consensus sequences than with the original matrix. Within-taxon ITS divergence did not differ between diploid and polyploid Acer, and there was little evidence of differentiated parental ITS haplotypes, suggesting that concerted evolution in Acer acts rapidly.

  1. MAXIMUM LIKELIHOOD SOURCE SEPARATION FOR FINITE IMPULSE RESPONSE MULTIPLE INPUT-MULTIPLE OUTPUT CHANNELS IN THE PRESENCE OF ADDITIVE NOISE

    Institute of Scientific and Technical Information of China (English)

    Kazi Takpaya; Wei Gang

    2003-01-01

    Blind identification-blind equalization for Finite Impulse Response (FIR) Multiple Input-Multiple Output (MIMO) channels can be reformulated as a problem of blind source separation. It has been shown that blind identification via the decorrelating sub-channels method can recover the input sources. The Blind Identification via Decorrelating Sub-channels (BIDS) algorithm first constructs a set of decorrelators, which decorrelate the output signals of the sub-channels, then estimates the channel matrix using the transfer functions of the decorrelators, and finally recovers the input signal using the estimated channel matrix. In this paper, a new approximation of the input source for FIR-MIMO channels based on the maximum likelihood source separation method is proposed. The proposed method outperforms BIDS in the presence of additive white Gaussian noise.

  2. Tree-fold loop approximation of AMD

    Energy Technology Data Exchange (ETDEWEB)

    Ono, Akira [Tohoku Univ., Sendai (Japan). Faculty of Science

    1997-05-01

    AMD (antisymmetrized molecular dynamics) is a framework for describing the wave function of a nucleon many-body system by a Slater determinant of Gaussian wave packets, and a theory for describing in a unified way a wide range of nuclear reactions, such as intermediate-energy heavy-ion reactions and nucleon-induced reactions. The aim of this study is to derive an approximate expression for the expectation value ν of the correlation that can be computed in a time proportional to A^3 (or lower), so as to make AMD applicable to heavier systems such as Au+Au. Since the characteristics of AMD must not be destroyed, it is not enough simply to approximate the ν value in any convenient way. For the approximation to be meaningful, its error has to be sufficiently small in comparison with the binding energy of the atomic nucleus, that is, smaller than 1 MeV/nucleon. As the absolute expectation value of the correlation may be larger than 50 MeV/nucleon, the approximation is required to have a high accuracy, within 2 percent. (G.K.)

  3. Tree-fold loop approximation of AMD

    International Nuclear Information System (INIS)

    AMD (antisymmetrized molecular dynamics) is a framework for describing the wave function of a nucleon many-body system by a Slater determinant of Gaussian wave packets, and a theory for describing in a unified way a wide range of nuclear reactions, such as intermediate-energy heavy-ion reactions and nucleon-induced reactions. The aim of this study is to derive an approximate expression for the expectation value ν of the correlation that can be computed in a time proportional to A^3 (or lower), so as to make AMD applicable to heavier systems such as Au+Au. Since the characteristics of AMD must not be destroyed, it is not enough simply to approximate the ν value in any convenient way. For the approximation to be meaningful, its error has to be sufficiently small in comparison with the binding energy of the atomic nucleus, that is, smaller than 1 MeV/nucleon. As the absolute expectation value of the correlation may be larger than 50 MeV/nucleon, the approximation is required to have a high accuracy, within 2 percent. (G.K.)

  4. Tree-space statistics and approximations for large-scale analysis of anatomical trees

    DEFF Research Database (Denmark)

    Feragen, Aasa; Owen, Megan; Petersen, Jens;

    2013-01-01

    parametrize the relevant parts of tree-space well. Using the developed approximate statistics, we illustrate how the structure and geometry of airway trees vary across a population and show that airway trees with Chronic Obstructive Pulmonary Disease come from a different distribution in tree-space than...... space of leaf-labeled trees. This tree-space is a geodesic metric space where any two trees are connected by a unique shortest path, which corresponds to a tree deformation. However, tree-space is not a manifold, and the usual strategy of performing statistical analysis in a tangent space and projecting...... onto tree-space is not available. Using tree-space and its shortest paths, a variety of statistical properties, such as mean, principal component, hypothesis testing and linear discriminant analysis can be defined. For some of these properties it is still an open problem how to compute them; others...

  5. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    International Nuclear Information System (INIS)

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree or may leave it out completely

  6. Limit distribution theory for maximum likelihood estimation of a log-concave density

    OpenAIRE

    Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon

    2009-01-01

    We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f0 = exp ϕ0 where ϕ0 is a concave function on ℝ. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log–concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a con...

  7. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    Science.gov (United States)

    Singh, Harpreet; Arvind; Dorai, Kavita

    2016-09-01

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation.
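    A common way to guarantee positivity in maximum-likelihood state estimation is to write ρ = T†T / Tr(T†T) with T triangular and optimize over the entries of T. The single-qubit sketch below illustrates that idea; it is not the NMR-specific protocol implemented in the paper, and the Gaussian cost, measurement set, and noise level are assumptions.

```python
# Minimal sketch of positivity-preserving maximum-likelihood state estimation:
# rho = T^dagger T / Tr(T^dagger T) with T lower-triangular is positive by
# construction, and the parameters of T are fitted to noisy single-qubit Pauli
# expectation values.  Generic illustration only.
import numpy as np
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def rho_from_params(p):
    T = np.array([[p[0], 0.0], [p[2] + 1j * p[3], p[1]]], dtype=complex)
    rho = T.conj().T @ T
    return rho / np.trace(rho).real

def cost(p, data, sigma=0.05):
    rho = rho_from_params(p)
    model = np.array([np.trace(rho @ P).real for P in paulis])
    return np.sum((model - data) ** 2 / sigma**2)   # Gaussian -log L up to a constant

rng = np.random.default_rng(5)
rho_true = rho_from_params([1.0, 0.4, 0.3, 0.1])
data = np.array([np.trace(rho_true @ P).real for P in paulis]) + rng.normal(0, 0.05, 3)

res = minimize(cost, x0=[1.0, 1.0, 0.0, 0.0], args=(data,), method="Nelder-Mead")
print(np.round(rho_from_params(res.x), 3))          # estimate is positive by construction
```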

  8. The convergence of object dependent resolution in maximum likelihood based tomographic image reconstruction

    International Nuclear Information System (INIS)

    Study of the maximum likelihood by EM algorithm (ML) with a reconstruction kernel equal to the intrinsic detector resolution and sieve regularization has demonstrated that any image improvements over filtered backprojection (FBP) are a function of image resolution. Comparing different reconstruction algorithms potentially requires measuring and matching the image resolution. Since there are no standard methods for describing the resolution of images from a nonlinear algorithm such as ML, the authors have defined measures of effective local Gaussian resolution (ELGR) and effective global Gaussian resolution (EGGR) and examined their behaviour in FBP images and in ML images using two different measurement techniques. (Author)

  9. Maximum-Likelihood Approach to Topological Charge Fluctuations in Lattice Gauge Theory

    CERN Document Server

    Brower, R C; Fleming, G T; Lin, M F; Neil, E T; Osborn, J C; Rebbi, C; Rinaldi, E; Schaich, D; Schroeder, C; Voronov, G; Vranas, P; Weinberg, E; Witzel, O

    2014-01-01

    We present a novel technique for the determination of the topological susceptibility (related to the variance of the distribution of global topological charge) from lattice gauge theory simulations, based on maximum-likelihood analysis of the Markov-chain Monte Carlo time series. This technique is expected to be particularly useful in situations where relatively few tunneling events are observed. Restriction to a lattice subvolume on which topological charge is not quantized is explored, and may lead to further improvement when the global topology is poorly sampled. We test our proposed method on a set of lattice data, and compare it to traditional methods.

  10. Maximum Likelihood Method for Predicting Environmental Conditions from Assemblage Composition: The R Package bio.infer

    Directory of Open Access Journals (Sweden)

    Lester L. Yuan

    2007-06-01

    Full Text Available This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML) methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from only biological data, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than more commonly used weighted averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.

  11. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua;

    2015-01-01

    -free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed......-horizontal plane, MLSSL shows an average absolute DoA estimation error of 5 degrees at SNR of -5dB in a large-crowd noise and non-reverberant situation. Moreover, MLSSL suffers less from front-back confusions compared with the recent approaches....

  12. A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.

    Energy Technology Data Exchange (ETDEWEB)

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V,

    2014-09-01

    In this paper, we derive a new optimal change metric to be used in synthetic aperture RADAR (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and is optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.

  13. Maximum likelihood estimation based on type-i hybrid progressive censored competing risks data

    Directory of Open Access Journals (Sweden)

    Samir Ashour

    2016-03-01

    Full Text Available This paper is concerned with the estimation problems of the generalized Weibull distribution based on the Type-I hybrid progressive censoring scheme (Type-I PHCS) in the presence of competing risks, when the cause of failure of each item is known. Maximum likelihood estimates and the corresponding Fisher information matrix are obtained. We generalize the results of Kundu and Joarder [7] for the exponential distribution, while the corresponding results for the generalized exponential and Weibull distributions may be obtained as special cases. A real data set is used to illustrate the theoretical results.

  14. Recent developments in maximum likelihood estimation of MTMM models for categorical data

    Directory of Open Access Journals (Sweden)

    Minjeong Jeon

    2014-04-01

    Full Text Available Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are suitable for estimating MTMM models with categorical responses: variational maximization-maximization, alternating imputation posterior, and Monte Carlo local likelihood. Each method is briefly described, and its applicability to MTMM models with categorical data is discussed. An illustration is provided using an empirical example.

  15. Maximum-Likelihood Calibration of an X-ray Computed Tomography System

    OpenAIRE

    Moore, Jared W.; Van Holen, Roel; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    We present a maximum-likelihood (ML) method for calibrating the geometrical parameters of an x-ray computed tomography (CT) system. This method makes use of the full image data and not a reduced set of data. This algorithm is particularly useful for CT systems that change their geometry during the CT acquisition, such as an adaptive CT scan. Our ML search method uses a contracting-grid algorithm that does not require initial starting values to perform its estimate, thus avoiding problems asso...

  16. Community detection in networks: Modularity optimization and maximum likelihood are equivalent

    CERN Document Server

    Newman, M E J

    2016-01-01

    We demonstrate an exact equivalence between two widely used methods of community detection in networks, the method of modularity maximization in its generalized form which incorporates a resolution parameter controlling the size of the communities discovered, and the method of maximum likelihood applied to the special case of the stochastic block model known as the planted partition model, in which all communities in a network are assumed to have statistically similar properties. Among other things, this equivalence provides a mathematically principled derivation of the modularity function, clarifies the conditions and assumptions of its use, and gives an explicit formula for the optimal value of the resolution parameter.

  17. Genetic algorithm-based wide-band deterministic maximum likelihood direction finding algorithm

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Wide-band direction finding is one of the most challenging tasks in array signal processing. This paper generalizes the narrow-band deterministic maximum likelihood direction finding algorithm to the wide-band case, constructs the corresponding objective function, and then uses a genetic algorithm for nonlinear global optimization. The direction of arrival is estimated without preprocessing of the array data, so the algorithm eliminates the effect of pre-estimation on the final result. The algorithm is applied to a uniform linear array, and extensive simulation results demonstrate its efficacy. The simulations also yield the relation between the estimation error and the parameters of the genetic algorithm.

  18. %lrasch_mml: A SAS Macro for Marginal Maximum Likelihood Estimation in Longitudinal Polytomous Rasch Models

    Directory of Open Access Journals (Sweden)

    Maja Olsbjerg

    2015-10-01

    Full Text Available Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when the focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.

  19. A maximum likelihood approach to estimating articulator positions from speech acoustics

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-09-23

    This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read. 34 refs., 7 figs.

  20. Observation vs. Observable: Maximum Likelihood Estimations according to the Assumption of Generalized Gauss and Laplace Distributions

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2009-12-01

    Full Text Available Aim: The paper aims to investigate the use of maximum likelihood estimation to infer measurement types with their distribution shape. Material and Methods: A series of twenty-eight sets of observed data (different properties and activities) were studied. The following analyses were applied in order to meet the aim of the research: precision, normality (Chi-square, Kolmogorov-Smirnov, and Anderson-Darling tests), the presence of outliers (Grubbs’ test), estimation of the population parameters (maximum likelihood estimation under Laplace, Gauss, and Gauss-Laplace distribution assumptions), and analysis of kurtosis (departure of sample kurtosis from the Laplace, Gauss, and Gauss-Laplace population kurtosis). Results: The mean of most investigated sets was likely to be Gauss-Laplace while the standard deviation of most investigated sets of compound was likely to be Gauss. The MLE analysis allowed making assumptions regarding the type of errors in the investigated sets. Conclusions: The proposed procedure proved to be useful in analyzing the shape of the distribution according to measurement type and generated several assumptions regarding their association.

  1. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    International Nuclear Information System (INIS)

    The fitting of data by χ2 minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. We have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. We compare this method with a χ2-minimization routine applied to both simulated and real Thomson scattering data. Differences in the returned fits are greater at low signal level (less than ∼10 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers. copyright 1997 American Institute of Physics
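    The difference between χ²-minimization and Poisson maximum-likelihood fitting can be made concrete with a short sketch that minimizes the Poisson negative log-likelihood Σ(μ_i − y_i log μ_i) over the parameters of a model. The Gaussian-peak model and count levels below are illustrative, not the Thomson-scattering fit function of the paper.

```python
# Generic illustration of Poisson maximum-likelihood fitting of low-count data:
# instead of minimizing chi-square (which assumes Gaussian errors), minimize the
# Poisson negative log-likelihood sum(mu - y*log(mu)) over the fit parameters.
# The Gaussian-peak model is only an example fit function.
import numpy as np
from scipy.optimize import minimize

def model(p, x):
    amp, cen, wid, bkg = p
    return bkg + amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)

def neg_poisson_loglik(p, x, y):
    mu = np.maximum(model(p, x), 1e-12)       # expected counts must stay positive
    return np.sum(mu - y * np.log(mu))        # -log L up to a parameter-independent constant

x = np.linspace(-5, 5, 60)
p_true = [6.0, 0.5, 1.0, 0.5]
rng = np.random.default_rng(6)
y = rng.poisson(model(p_true, x)).astype(float)

fit = minimize(neg_poisson_loglik, x0=[5.0, 0.0, 1.5, 1.0], args=(x, y), method="Nelder-Mead")
print("ML estimates:", np.round(fit.x, 2))
```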

  2. Maximum likelihood-based analysis of photon arrival trajectories in single-molecule FRET

    International Nuclear Information System (INIS)

    Highlights: ► We study model selection and parameter recovery from single-molecule FRET experiments. ► We examine the maximum likelihood-based analysis of two-color photon trajectories. ► The number of observed photons determines the performance of the method. ► For long trajectories, one can extract mean dwell times that are comparable to inter-photon times. -- Abstract: When two fluorophores (donor and acceptor) are attached to an immobilized biomolecule, anti-correlated fluctuations of the donor and acceptor fluorescence caused by Förster resonance energy transfer (FRET) report on the conformational kinetics of the molecule. Here we assess the maximum likelihood-based analysis of donor and acceptor photon arrival trajectories as a method for extracting the conformational kinetics. Using computer generated data we quantify the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We find that the number of observed photons is the key parameter determining parameter estimation and model selection. For long trajectories, one can extract mean dwell times that are comparable to inter-photon times.

  3. Acceleration of maximum likelihood reconstruction, using frequency amplification and attenuation compensation

    International Nuclear Information System (INIS)

    Algorithms that calculate maximum likelihood (ML) and maximum a posteriori solutions using expectation-maximization have been successfully applied to SPECT and PET. These algorithms are appealing because of their solid theoretical basis and their guaranteed convergence. A major drawback is the slow convergence, which results in long processing times. This paper presents two new heuristic acceleration methods for maximum likelihood reconstruction of ECT images. The first method incorporates a frequency-dependent amplification in the calculations, to compensate for the low pass filtering of the back projection operation. In the second method, an amplification factor is incorporated that suppresses the effect of attenuation on the updating factors. Both methods are compared to the one-dimensional line search method proposed by Lewitt. All three methods accelerate the ML algorithm. On the test images, Lewitt's method produced the strongest acceleration of the three individual methods. However, the combination of the frequency amplification with the line search method results in a new algorithm with still better performance. Under certain conditions, an effective frequency amplification can be already achieved by skipping some of the calculations required for ML

  4. Varied applications of a new maximum-likelihood code with complete covariance capability

    International Nuclear Information System (INIS)

    Applications of a new data-adjustment code are given. The method is based on a maximum-likelihood extension of generalized least-squares methods that allow complete covariance descriptions for the input data and the final adjusted data evaluations. The maximum-likelihood approach is used with a generalized log-normal distribution that provides a way to treat problems with large uncertainties and that circumvents the problem of negative values that can occur for physically positive quantities. The computer code, FERRET, is written to enable the user to apply it to a large variety of problems by modifying only the input subroutine. The following applications are discussed: A 75-group a priori damage function is adjusted by as much as a factor of two by use of 14 integral measurements in different reactor spectra. Reactor spectra and dosimeter cross sections are simultaneously adjusted on the basis of both integral measurements and experimental proton-recoil spectra. The simultaneous use of measured reaction rates, measured worths, microscopic measurements, and theoretical models are used to evaluate dosimeter and fission-product cross sections. Applications in the data reduction of neutron cross section measurements and in the evaluation of reactor after-heat are also considered. 6 figures

  5. Generalized Maximum likelihood Algorithm for Direction-of-Arrival Estimation of Coherent Sources

    Institute of Scientific and Technical Information of China (English)

    WANG Bu-hong; WANG Yong-liang; CHEN Hui; GUO Ying

    2006-01-01

    The generalized maximum likelihood (GML) algorithm for direction-of-arrival estimation is proposed. Firstly, a new data model is established based on generalized steering vectors and a generalized array manifold matrix. The GML algorithm is then formulated in detail. It is flexible in the sense that the arriving sources may be a mixture of multiple clusters of coherent sources, the array geometry is unrestricted, and the number of sources resolved can be larger than the number of sensors. Secondly, the comparison between the GML algorithm and the conventional deterministic maximum likelihood (DML) algorithm is presented based on their respective geometrical interpretation. Subsequently, the estimation consistency of GML is proved, and the estimation variance of GML is derived. It is concluded that the performance of the GML algorithm coincides with that of the DML algorithm in the incoherent-source case, while it improves greatly in the coherent-source case. By using a genetic algorithm, GML is realized, and the simulation results illustrate its improved performance compared with DML, especially in the case of multiple clusters of coherent sources.

  6. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    International Nuclear Information System (INIS)

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed
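
    The central idea, that non-arrival (zero-count) gates must be included to avoid the bias caused by multi-photon arrivals being recorded as single pulses, can be illustrated with a deliberately simplified model: assume each laser shot produces a Poisson-distributed number of photoelectrons and the photomultiplier only reports pulse/no-pulse per shot. Under that assumption the ML estimate of the mean rate has a closed form; the numbers and the model itself are illustrative, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true = 0.8          # mean detected photons per laser shot (illustrative)
n_shots = 20000

photons = rng.poisson(lam_true, n_shots)
detected = photons > 0                     # PMT reports only "pulse / no pulse"

# Naive estimate: count pulses, ignoring multi-photon pile-up.
lam_naive = detected.mean()

# ML estimate using non-arrival events: P(no pulse) = exp(-lam),
# so lam_hat = -ln(fraction of empty shots).
lam_ml = -np.log(1.0 - detected.mean())

print(f"naive: {lam_naive:.3f}, ML with non-arrivals: {lam_ml:.3f}, true: {lam_true}")
```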

  7. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    [1] McCullagh, P., Nelder, J. A., Generalized Linear Models, New York: Chapman and Hall, 1989.
    [2] Wedderburn, R. W. M., Quasi-likelihood functions, generalized linear models and the Gauss-Newton method, Biometrika, 1974, 61: 439-447.
    [3] Fahrmeir, L., Maximum likelihood estimation in misspecified generalized linear models, Statistics, 1990, 21: 487-502.
    [4] Fahrmeir, L., Kaufmann, H., Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models, Ann. Statist., 1985, 13: 342-368.
    [5] Nelder, J. A., Pregibon, D., An extended quasi-likelihood function, Biometrika, 1987, 74: 221-232.
    [6] Bennett, G., Probability inequalities for the sum of independent random variables, JASA, 1962, 57: 33-45.
    [7] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974.
    [8] Petrov, V. V., Sums of Independent Random Variables, Berlin, New York: Springer-Verlag, 1975.

  8. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  9. Tree-space statistics and approximations for large-scale analysis of anatomical trees.

    Science.gov (United States)

    Feragen, Aasa; Owen, Megan; Petersen, Jens; Wille, Mathilde M W; Thomsen, Laura H; Dirksen, Asger; de Bruijne, Marleen

    2013-01-01

    Statistical analysis of anatomical trees is hard to perform due to differences in the topological structure of the trees. In this paper we define statistical properties of leaf-labeled anatomical trees with geometric edge attributes by considering the anatomical trees as points in the geometric space of leaf-labeled trees. This tree-space is a geodesic metric space where any two trees are connected by a unique shortest path, which corresponds to a tree deformation. However, tree-space is not a manifold, and the usual strategy of performing statistical analysis in a tangent space and projecting onto tree-space is not available. Using tree-space and its shortest paths, a variety of statistical properties, such as mean, principal component, hypothesis testing and linear discriminant analysis can be defined. For some of these properties it is still an open problem how to compute them; others (like the mean) can be computed, but efficient alternatives are helpful in speeding up algorithms that use means iteratively, like hypothesis testing. In this paper, we take advantage of a very large dataset (N = 8016) to obtain computable approximations, under the assumption that the data trees parametrize the relevant parts of tree-space well. Using the developed approximate statistics, we illustrate how the structure and geometry of airway trees vary across a population and show that airway trees with Chronic Obstructive Pulmonary Disease come from a different distribution in tree-space than healthy ones. Software is available from http://image.diku.dk/aasa/software.php. PMID:24683959

  10. Galaxy and Mass Assembly (GAMA): maximum likelihood determination of the luminosity function and its evolution

    CERN Document Server

    Loveday, J; Baldry, I K; Bland-Hawthorn, J; Brough, S; Brown, M J I; Driver, S P; Kelvin, L S; Phillipps, S

    2015-01-01

    We describe modifications to the joint stepwise maximum likelihood method of Cole (2011) in order to simultaneously fit the GAMA-II galaxy luminosity function (LF), corrected for radial density variations, and its evolution with redshift. The whole sample is reasonably well-fit with luminosity (Qe) and density (Pe) evolution parameters Qe, Pe = 1.0, 1.0 but with significant degeneracies characterized by Qe = 1.4 - 0.4Pe. Blue galaxies exhibit larger luminosity density evolution than red galaxies, as expected. We present the evolution-corrected r-band LF for the whole sample and for blue and red sub-samples, using both Petrosian and Sersic magnitudes. Petrosian magnitudes miss a substantial fraction of the flux of de Vaucouleurs profile galaxies: the Sersic LF is substantially higher than the Petrosian LF at the bright end.

  11. Consistency of the Maximum Likelihood Estimator for general hidden Markov models

    CERN Document Server

    Douc, Randal; Olsson, Jimmy; Van Handel, Ramon

    2009-01-01

    Consider a parametrized family of general hidden Markov models, where both the observed and unobserved components take values in a complete separable metric space. We prove that the maximum likelihood estimator (MLE) of the parameter is strongly consistent under a rather minimal set of assumptions. As special cases of our main result, we obtain consistency in a large class of nonlinear state space models, as well as general results on linear Gaussian state space models and finite state models. A novel aspect of our approach is an information-theoretic technique for proving identifiability, which does not require an explicit representation for the relative entropy rate. Our method of proof could therefore form a foundation for the investigation of MLE consistency in more general dependent and non-Markovian time series. Also of independent interest is a general concentration inequality for $V$-uniformly ergodic Markov chains.

  12. A New Maximum-Likelihood Technique for Reconstructing Cosmic-Ray Anisotropy at All Angular Scales

    CERN Document Server

    Ahlers, Markus; Desiati, Paolo; Díaz-Vélez, Juan Carlos; Fiorino, Daniel W; Westerhoff, Stefan

    2016-01-01

    The arrival directions of TeV-PeV cosmic rays show weak but significant anisotropies with relative intensities at the level of one per mille. Due to the smallness of the anisotropies, quantitative studies require careful disentanglement of detector effects from the observation. We discuss an iterative maximum-likelihood reconstruction that simultaneously fits cosmic ray anisotropies and detector acceptance. The method does not rely on detector simulations and provides an optimal anisotropy reconstruction for ground-based cosmic ray observatories located in the middle latitudes. It is particularly well suited to the recovery of the dipole anisotropy, which is a crucial observable for the study of cosmic ray diffusion in our Galaxy. We also provide general analysis methods for recovering large- and small-scale anisotropies that take into account systematic effects of the observation by ground-based detectors.

  13. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions.

    Science.gov (United States)

    Barrett, Harrison H; Dainty, Christopher; Lara, David

    2007-02-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack-Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack-Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  14. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    Science.gov (United States)

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2007-02-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack-Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack-Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods.

  15. Maximum likelihood estimation of ancestral codon usage bias parameters in Drosophila

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Bauer DuMont, Vanessa L; Hubisz, Melissa J;

    2007-01-01

    selection coefficient for optimal codon usage (S), allowing joint maximum likelihood estimation of S and the dN/dS ratio. We apply the method to previously published data from Drosophila melanogaster, Drosophila simulans, and Drosophila yakuba and show, in accordance with previous results, that the D. melanogaster lineage has experienced a reduction in the selection for optimal codon usage. However, the D. melanogaster lineage has also experienced a change in the biological mutation rates relative to D. simulans, in particular, a relative reduction in the mutation rate from A to G and an increase in the mutation rate from C to T. However, neither a reduction in the strength of selection nor a change in the mutational pattern can alone explain all of the data observed in the D. melanogaster lineage. For example, we also confirm previous results showing that the Notch locus has experienced positive

  16. Comparison of the least squares and the maximum likelihood estimators for gamma-spectrometry

    International Nuclear Information System (INIS)

    A comparison of the characteristics of the maximum likelihood (ML) and the least squares (LS) estimators of nuclide activities for low-intensity scintillation γ-spectra has been carried out by computer simulation. It has been found that some of the LS estimators give biased activity estimates, and the bias grows as the multichannel analyzer resolution (the number of spectrum channels) increases. Such bias leads to a significant deterioration of the estimation accuracy for low-intensity spectra; consequently, the detection threshold for nuclides rises by a factor of 2-10 in comparison with the ML estimator. It has been shown that the ML estimator and the special LS estimator provide unbiased estimates of nuclide activities. Thus, these estimators are optimal for practical application to low-intensity spectrometry. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)
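
    As a toy illustration of the comparison being made, and not the authors' simulation, the sketch below fits the activities of two nuclides with known spectral shapes to a single low-count spectrum, once by ordinary least squares and once by maximizing the Poisson likelihood; repeating it over many noise realizations would allow the bias and detection-threshold comparison described above.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_ch = 64
ch = np.arange(n_ch)

# Known per-unit-activity spectral shapes of two nuclides (illustrative Gaussian peaks).
s1 = np.exp(-0.5 * ((ch - 20) / 3.0) ** 2)
s2 = np.exp(-0.5 * ((ch - 40) / 5.0) ** 2)
S = np.column_stack([s1, s2])
a_true = np.array([3.0, 1.5])              # low-intensity activities

y = rng.poisson(S @ a_true)                # one measured spectrum

# Least-squares estimate (unweighted).
a_ls, *_ = np.linalg.lstsq(S, y, rcond=None)

# Poisson maximum-likelihood estimate.
def nll(a):
    mu = S @ np.clip(a, 1e-9, None)
    return np.sum(mu - y * np.log(mu))

a_ml = minimize(nll, x0=np.ones(2), method="Nelder-Mead").x
print("LS:", a_ls, " ML:", a_ml, " true:", a_true)
```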

  17. Application of maximum likelihood reconstruction of subaperture data for measurement of large flat mirrors

    International Nuclear Information System (INIS)

    Interferometers accurately measure the difference between two wavefronts, one from a reference surface and the other from an unknown surface. If the reference surface is near perfect or is accurately known from some other test, then the shape of the unknown surface can be determined. We investigate the case where neither the reference surface nor the surface under test is well known. By making multiple shear measurements where both surfaces are translated and/or rotated, we obtain sufficient information to reconstruct the figure of both surfaces with a maximum likelihood reconstruction method. The method is demonstrated for the measurement of a 1.6 m flat mirror to 2 nm rms, using a smaller reference mirror that had significant figure error.

  18. Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics

    International Nuclear Information System (INIS)

    We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as F-statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic ('B-statistic') using the Bayes factor with a more natural amplitude prior, namely an isotropic probability distribution for the orientation of GW sources. Monte Carlo simulations of targeted searches show that the resulting Bayesian B-statistic is more powerful in the Neyman-Pearson sense (i.e., has a higher expected detection probability at equal false-alarm probability) than the frequentist F-statistic.

  19. Noise propagation in SPECT images reconstructed using an iterative maximum-likelihood algorithm

    International Nuclear Information System (INIS)

    The effects of photon noise in the emission projection data and uncertainty in the attenuation map on the image noise in attenuation-corrected SPECT images reconstructed using a maximum-likelihood expectation-maximization algorithm were investigated. Emission projection data of a physical Hoffman brain phantom and a thorax-like phantom were acquired from a prototype emission-transmission computed tomography (ETCT) scanner being developed at UCSF (University of California at San Francisco). Computer-simulated emission projection data from a head-like phantom and a thorax-like phantom were also obtained using a fan-beam geometry consistent with the ETCT system. The results are expected to be generally applicable to other emission-transmission systems, including those using external radionuclide sources for the acquisition of attenuation maps. (author)

  20. Using Maximum Likelihood analysis in HBT interferometry: bin-free treatment of correlated errors

    International Nuclear Information System (INIS)

    We present a new procedure, based on the Maximum Likelihood Method, for fitting the space-time size parameters of the particle production region in ultra-relativistic heavy ion collisions. This procedure offers two significant advantages: 1) it does not require sorting of the correlation data into arbitrary bins in the multidimensional momentum space and 2) it applies all available information on the experimental resolution error matrix separately to each correlated particle multiplet analyzed. These features permit extraction of maximum information from the data. The technique may be particularly important in ultra-relativistic heavy ion collisions, because in this energy domain large source radii and long source lifetimes are expected, and high-multiplicity HBT interferometry with a single collision event is a possibility. ((orig.))

  1. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient

    CERN Document Server

    Bian, Liheng; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-01-01

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for error removal. Results on both simulated data and real data captured using our laser FPM setup show that the proposed...

  2. A CODEBOOK COMPENSATIVE VOICE MORPHING ALGORITHM BASED ON MAXIMUM LIKELIHOOD ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Xu Ning; Yang Zhen; Zhang Linhua

    2009-01-01

    This paper presents an improved voice morphing algorithm based on a Gaussian Mixture Model (GMM) which overcomes the problems of the traditional approach, namely overly smoothed converted spectra and discontinuities between frames. Firstly, a maximum likelihood estimation for the model is introduced to alleviate the inversion of high-dimension matrices caused by the traditional conversion function. Then, in order to resolve the two problems associated with the baseline, a codebook compensation technique and a time-domain median filter are applied. The results of listening evaluations show that the quality of the speech converted by the proposed method is significantly better than that by the traditional GMM method, and the Mean Opinion Score (MOS) of the converted speech is improved from 2.5 to 3.1 and the ABX score from 38% to 75%.

  3. A maximum likelihood analysis of the CoGeNT public dataset

    Science.gov (United States)

    Kelso, Chris

    2016-06-01

    The CoGeNT detector, located in the Soudan Underground Laboratory in Northern Minnesota, consists of a 475 grams (fiducial mass of 330 grams) target mass of p-type point contact germanium detector that measures the ionization charge created by nuclear recoils. This detector has searched for recoils created by dark matter since December of 2009. We analyze the public dataset from the CoGeNT experiment to search for evidence of dark matter interactions with the detector. We perform an unbinned maximum likelihood fit to the data and compare the significance of different WIMP hypotheses relative to each other and the null hypothesis of no WIMP interactions. This work presents the current status of the analysis.

  4. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross

  5. Using maximum likelihood method to detect adaptive evolution of HCV envelope protein-coding genes

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wenjuan; ZHANG Yuan; ZHONG Yang

    2006-01-01

    The nonsynonymous-synonymous substitution rate ratio (dN/dS) is an important measure for evaluating selective pressure based on protein-coding sequences. The maximum likelihood (ML) method with codon-substitution models is a powerful statistical tool for detecting amino acid sites under positive selection and adaptive evolution. We analyzed the hepatitis C virus (HCV) envelope protein-coding sequences from 18 general geno/subtypes worldwide, and found 4 amino acid sites under positive selection. Since these sites are located in different immune epitopes, it is reasonable to anticipate that our study would have potential value in biomedicine. It also suggests that the ML method is an effective way to detect adaptive evolution in virus proteins with relatively high genetic diversity.

  6. Blind deconvolution of quantum-limited incoherent imagery: maximum-likelihood approach.

    Science.gov (United States)

    Holmes, T J

    1992-07-01

    Previous research presented by the author and others into maximum-likelihood image restoration for incoherent imagery is extended to consider problems of blind deconvolution in which the impulse response of the system is assumed to be unknown. Potential applications that motivate this study are wide-field and confocal fluorescence microscopy, although applications in astronomy and infrared imaging are foreseen as well. The methodology incorporates the iterative expectation-maximization algorithm. Although the precise impulse response is assumed to be unknown, some prior knowledge about characteristics of the impulse response is used. In preliminary simulation studies that are presented, the circular symmetry and the band-limited nature of the impulse response are used as such. These simulations demonstrate the potential utility and present limitations of these methods. PMID:1634965
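
    The alternating expectation-maximization structure described here, updating the image with the current point-spread-function estimate and then the PSF with the current image estimate, is commonly written as a blind Richardson-Lucy iteration. The 1-D circular-convolution sketch below shows only that alternating structure; the symmetry and band-limit priors mentioned in the abstract, and the actual 2-D microscopy setting, are omitted, so this should be read as a generic illustration rather than the author's algorithm.

```python
import numpy as np

def blind_rl(g, n_outer=30, eps=1e-12):
    """Blind Richardson-Lucy deconvolution (1-D, circular convolution sketch).

    Alternates multiplicative EM updates of the object f and the unknown
    PSF h; both stay non-negative and h is renormalized to sum to 1.
    """
    n = len(g)
    conv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
    corr = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))
    f = np.full(n, g.mean())            # flat object guess
    h = np.full(n, 1.0 / n)             # flat PSF guess
    for _ in range(n_outer):
        ratio = g / (conv(f, h) + eps)  # object update, PSF fixed
        f = f * corr(ratio, h)
        ratio = g / (conv(f, h) + eps)  # PSF update, object fixed
        h = h * corr(ratio, f)
        h = np.clip(h, 0, None)
        h /= h.sum() + eps
    return f, h

# Synthetic test: blurred, Poisson-noisy spike train on a constant background.
rng = np.random.default_rng(3)
x = np.zeros(128); x[[30, 60, 61, 95]] = [50.0, 80.0, 40.0, 60.0]
h_true = np.roll(np.exp(-0.5 * (np.arange(128) - 64.0) ** 2 / 1.5 ** 2), 64)
h_true /= h_true.sum()
g = rng.poisson(np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_true))) + 1.0)
f_hat, h_hat = blind_rl(g.astype(float))
```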

  7. A New Maximum-likelihood Technique for Reconstructing Cosmic-Ray Anisotropy at All Angular Scales

    Science.gov (United States)

    Ahlers, M.; BenZvi, S. Y.; Desiati, P.; Díaz–Vélez, J. C.; Fiorino, D. W.; Westerhoff, S.

    2016-05-01

    The arrival directions of TeV–PeV cosmic rays show weak but significant anisotropies with relative intensities at the level of one per mille. Due to the smallness of the anisotropies, quantitative studies require careful disentanglement of detector effects from the observation. We discuss an iterative maximum-likelihood reconstruction that simultaneously fits cosmic-ray anisotropies and detector acceptance. The method does not rely on detector simulations and provides an optimal anisotropy reconstruction for ground-based cosmic-ray observatories located in the middle latitudes. It is particularly well suited to the recovery of the dipole anisotropy, which is a crucial observable for the study of cosmic-ray diffusion in our Galaxy. We also provide general analysis methods for recovering large- and small-scale anisotropies that take into account systematic effects of the observation by ground-based detectors.

  8. Maximum likelihood q-estimator reveals nonextensivity regulated by extracellular potassium in the mammalian neuromuscular junction

    CERN Document Server

    da Silva, A J; Santos, D O C; Lima, R F

    2013-01-01

    Recently, we demonstrated the existence of nonextensivity in neuromuscular transmission [Phys. Rev. E 84, 041925 (2011)]. In the present letter, we propose a general criterion based on the q-calculus foundations and nonextensive statistics to estimate the values for both scale factor and q-index using the maximum likelihood q-estimation method (MLqE). We next applied our theoretical findings to electrophysiological recordings from neuromuscular junction (NMJ) where spontaneous miniature end plate potentials (MEPP) were analyzed. These calculations were performed in both normal and high extracellular potassium concentration, [K+]o. This protocol was assumed to test the validity of the q-index in electrophysiological conditions closely resembling physiological stimuli. Surprisingly, the analysis showed a significant difference between the q-index in high and normal [K+]o, where the magnitude of nonextensivity was increased. Our letter provides a general way to obtain the best q-index from the q-Gaussian distrib...

  9. An extended-source spatial acquisition process based on maximum likelihood criterion for planetary optical communications

    Science.gov (United States)

    Yan, Tsun-Yee

    1992-01-01

    This paper describes an extended-source spatial acquisition process based on the maximum likelihood criterion for interplanetary optical communications. The objective is to use the sun-lit Earth image as a receiver beacon and point the transmitter laser to the Earth-based receiver to establish a communication path. The process assumes the existence of a reference image. The uncertainties between the reference image and the received image are modeled as additive white Gaussian disturbances. It has been shown that the optimal spatial acquisition requires solving two nonlinear equations to estimate the coordinates of the transceiver from the received camera image in the transformed domain. The optimal solution can be obtained iteratively by solving two linear equations. Numerical results using a sample sun-lit Earth as a reference image demonstrate that sub-pixel resolutions can be achieved in a high disturbance environment. Spatial resolution is quantified by Cramer-Rao lower bounds.

  10. Implementation of non-linear filters for iterative penalized maximum likelihood image reconstruction

    International Nuclear Information System (INIS)

    In this paper, the authors report on the implementation of six edge-preserving, noise-smoothing, non-linear filters applied in image space for iterative penalized maximum-likelihood (ML) SPECT image reconstruction. The non-linear smoothing filters implemented were the median filter, the E6 filter, the sigma filter, the edge-line filter, the gradient-inverse filter, and the 3-point edge filter with gradient-inverse weight. A 3 x 3 window was used for all these filters. The best image obtained, as judged by viewing the profiles through the image in terms of noise-smoothing, edge-sharpening, and contrast, was the one smoothed with the 3-point edge filter. The computation time for the smoothing was less than 1% of one iteration, and the memory space for the smoothing was negligible. These images were compared with the results obtained using Bayesian analysis.

  11. Parallelization of maximum likelihood fits with OpenMP and CUDA

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; Pantaleo, F

    2011-01-01

    Data analyses based on maximum likelihood fits are commonly used in the high energy physics community for fitting statistical models to data samples. This technique requires the numerical minimization of the negative log-likelihood function. MINUIT is the most common package used for this purpose in the high energy physics community. The main algorithm in this package, MIGRAD, searches for the minimum by using gradient information. The procedure requires several evaluations of the function, depending on the number of free parameters and their initial values. The whole procedure can be very CPU-time consuming in the case of complex functions, with several free parameters, many independent variables and large data samples. Therefore, it becomes particularly important to speed up the evaluation of the negative log-likelihood function. In this paper we present an algorithm and its implementation which benefits from data vectorization and parallelization (based on OpenMP) and which was also ported to Graphics Processi...
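
    The paper's implementation is C++ with OpenMP and CUDA; as a language-neutral illustration of the computational pattern, namely that every minimizer step re-evaluates the negative log-likelihood as a reduction over the full data sample, here is a small NumPy/SciPy sketch with a placeholder Gaussian model. The model, sample size and optimizer are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
data = rng.normal(loc=1.3, scale=0.7, size=1_000_000)   # large sample

def nll(params):
    """Negative log-likelihood of a Gaussian model, evaluated as one
    vectorized reduction over the whole sample -- the hot loop that
    OpenMP/CUDA implementations parallelize across threads/GPU cores."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    z = (data - mu) / sigma
    return 0.5 * np.sum(z * z) + data.size * np.log(sigma)

# The minimizer calls nll() many times, so the cost of the fit is
# dominated by this reduction -- hence the benefit of parallelism.
result = minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead")
print(result.x)
```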

  12. Maximum Likelihood Foreground Cleaning for Cosmic Microwave Background Polarimeters in the Presence of Systematic Effects

    CERN Document Server

    Bao, Chaoyun; Gold, Ben; Hanany, Shaul; Jaffe, Andrew; Stompor, Radek

    2015-01-01

    We extend a general maximum likelihood foreground estimation for cosmic microwave background polarization data to include estimation of instrumental systematic effects. We focus on two particular effects: frequency band measurement uncertainty, and instrumentally induced frequency dependent polarization rotation. We assess the bias induced on the estimation of the $B$-mode polarization signal by these two systematic effects in the presence of instrumental noise and uncertainties in the polarization and spectral index of Galactic dust. Degeneracies between uncertainties in the band and polarization angle calibration measurements and in the dust spectral index and polarization increase the uncertainty in the extracted CMB $B$-mode power, and may give rise to a biased estimate. We provide a quantitative assessment of the potential bias and increased uncertainty in an example experimental configuration. For example, we find that with 10\\% polarized dust, tensor to scalar ratio of $r=0.05$, and the instrumental co...

  13. Maximum Likelihood Bayesian Averaging of Spatial Variability Models in Unsaturated Fractured Tuff

    International Nuclear Information System (INIS)

    Hydrologic analyses typically rely on a single conceptual-mathematical model. Yet hydrologic environments are open and complex, rendering them prone to multiple interpretations and mathematical descriptions. Adopting only one of these may lead to statistical bias and underestimation of uncertainty. Bayesian Model Averaging (BMA) provides an optimal way to combine the predictions of several competing models and to assess their joint predictive uncertainty. However, it tends to be computationally demanding and relies heavily on prior information about model parameters. We apply a maximum likelihood (ML) version of BMA (MLBMA) to seven alternative variogram models of log air permeability data from single-hole pneumatic injection tests in six boreholes at the Apache Leap Research Site (ALRS) in central Arizona. Unbiased ML estimates of variogram and drift parameters are obtained using Adjoint State Maximum Likelihood Cross Validation in conjunction with Universal Kriging and Generalized Least Squares. Standard information criteria provide an ambiguous ranking of the models, which does not justify selecting one of them and discarding all others as is commonly done in practice. Instead, we eliminate some of the models based on their negligibly small posterior probabilities and use the rest to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. We then average these four projections, and associated kriging variances, using the posterior probability of each model as weight. Finally, we cross-validate the results by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of MLBMA with that of each individual model. We find that MLBMA is superior to any individual geostatistical model of log permeability among those we consider at the ALRS.

  14. Supervised maximum-likelihood weighting of composite protein networks for complex prediction

    Directory of Open Access Journals (Sweden)

    Yong Chern Han

    2012-12-01

    Full Text Available Abstract Background: Protein complexes participate in many important cellular functions, so finding the set of existent complexes is essential for understanding the organization and regulation of processes in the cell. With the availability of large amounts of high-throughput protein-protein interaction (PPI) data, many algorithms have been proposed to discover protein complexes from PPI networks. However, such approaches are hindered by the high rate of noise in high-throughput PPI data, including spurious and missing interactions. Furthermore, many transient interactions are detected between proteins that are not from the same complex, while not all proteins from the same complex may actually interact. As a result, predicted complexes often do not match true complexes well, and many true complexes go undetected. Results: We address these challenges by integrating PPI data with other heterogeneous data sources to construct a composite protein network, and using a supervised maximum-likelihood approach to weight each edge based on its posterior probability of belonging to a complex. We then use six different clustering algorithms, and an aggregative clustering strategy, to discover complexes in the weighted network. We test our method on Saccharomyces cerevisiae and Homo sapiens, and show that complex discovery is improved: compared to previously proposed supervised and unsupervised weighting approaches, our method recalls more known complexes, achieves higher precision at all recall levels, and generates novel complexes of greater functional similarity. Furthermore, our maximum-likelihood approach allows learned parameters to be used to visualize and evaluate the evidence of novel predictions, aiding human judgment of their credibility. Conclusions: Our approach integrates multiple data sources with supervised learning to create a weighted composite protein network, and uses six clustering algorithms with an aggregative clustering strategy to

  15. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    Energy Technology Data Exchange (ETDEWEB)

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
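
    MADmap's core step is the solution of the map-making normal equations (P^T N^{-1} P) m = P^T N^{-1} d with a preconditioned conjugate-gradient solver. The miniature sketch below sets up that linear system for white noise with a sparse pointing matrix and a Jacobi (hit-count) preconditioner, only to show the structure of the computation; the real code handles correlated (Toeplitz) noise with FFTs and runs distributed, none of which is reproduced here.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(5)
n_samp, n_pix = 5000, 50

# Pointing matrix P: each time sample hits exactly one pixel.
hits = rng.integers(0, n_pix, n_samp)
P = csr_matrix((np.ones(n_samp), (np.arange(n_samp), hits)), shape=(n_samp, n_pix))

m_true = rng.normal(size=n_pix)                        # underlying sky map
noise_var = 0.3 ** 2
d = P @ m_true + rng.normal(scale=0.3, size=n_samp)    # time-ordered data

# Normal equations A m = b with N = noise_var * I (white noise for the demo).
def apply_A(m):
    return P.T @ (P @ m) / noise_var

A = LinearOperator((n_pix, n_pix), matvec=apply_A)
b = P.T @ d / noise_var

# Jacobi (diagonal) preconditioner: inverse hit counts scaled by the variance.
diag = np.asarray(P.multiply(P).sum(axis=0)).ravel() / noise_var
M = diags(1.0 / diag)

m_hat, info = cg(A, b, M=M)
print("CG converged" if info == 0 else f"CG info={info}")
```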

  16. Comparison between artificial neural networks and maximum likelihood classification in digital soil mapping

    Directory of Open Access Journals (Sweden)

    César da Silva Chagas

    2013-04-01

    Full Text Available Soil surveys are the main source of spatial information on soils and have a range of different applications, mainly in agriculture. The continuity of this activity has however been severely compromised, mainly due to a lack of governmental funding. The purpose of this study was to evaluate the feasibility of two different classifiers (artificial neural networks and a maximum likelihood algorithm) in the prediction of soil classes in the northwest of the state of Rio de Janeiro. Terrain attributes such as elevation, slope, aspect, plan curvature and compound topographic index (CTI) and indices of clay minerals, iron oxide and Normalized Difference Vegetation Index (NDVI), derived from Landsat 7 ETM+ sensor imagery, were used as discriminating variables. The two classifiers were trained and validated for each soil class using 300 and 150 samples respectively, representing the characteristics of these classes in terms of the discriminating variables. According to the statistical tests, the accuracy of the classifier based on artificial neural networks (ANNs) was greater than that of the classic Maximum Likelihood Classifier (MLC). Comparing the results with 126 points of reference showed that the resulting ANN map (73.81 %) was superior to the MLC map (57.94 %). The main errors when using the two classifiers were caused by: (a) the geological heterogeneity of the area coupled with problems related to the geological map; (b) the depth of lithic contact and/or rock exposure; and (c) problems with the environmental correlation model used due to the polygenetic nature of the soils. This study confirms that the use of terrain attributes together with remote sensing data by an ANN approach can be a tool to facilitate soil mapping in Brazil, primarily due to the availability of low-cost remote sensing data and the ease by which terrain attributes can be obtained.

  17. Investigation of optimal parameters for penalized maximum-likelihood reconstruction applied to iodinated contrast-enhanced breast CT

    Science.gov (United States)

    Makeev, Andrey; Ikejimba, Lynda; Lo, Joseph Y.; Glick, Stephen J.

    2016-03-01

    Although digital mammography has reduced breast cancer mortality by approximately 30%, sensitivity and specificity are still far from perfect. In particular, the performance of mammography is especially limited for women with dense breast tissue. Two out of every three biopsies performed in the U.S. are unnecessary, thereby resulting in increased patient anxiety, pain, and possible complications. One promising tomographic breast imaging method that has recently been approved by the FDA is dedicated breast computed tomography (BCT). However, visualizing lesions with BCT can still be challenging for women with dense breast tissue due to the minimal contrast for lesions surrounded by fibroglandular tissue. In recent years there has been renewed interest in improving lesion conspicuity in x-ray breast imaging by administration of an iodinated contrast agent. Due to the fully 3-D imaging nature of BCT, as well as sub-optimal contrast enhancement while the breast is under compression with mammography and breast tomosynthesis, dedicated BCT of the uncompressed breast is likely to offer the best solution for injected contrast-enhanced x-ray breast imaging. It is well known that use of statistically-based iterative reconstruction in CT results in improved image quality at lower radiation dose. Here we investigate possible improvements in image reconstruction for BCT by optimizing the free regularization parameter in a penalized maximum-likelihood method and comparing its performance with the clinical cone-beam filtered backprojection (FBP) algorithm.

  18. ROC [Receiver Operating Characteristics] study of maximum likelihood estimator human brain image reconstructions in PET [Positron Emission Tomography] clinical practice

    International Nuclear Information System (INIS)

    This paper will report on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of 18F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and we propose a variation based on reading three consecutive slices at a time, rating only the center slice. 9 refs., 2 figs., 1 tab

  19. ROC (Receiver Operating Characteristics) study of maximum likelihood estimator human brain image reconstructions in PET (Positron Emission Tomography) clinical practice

    Energy Technology Data Exchange (ETDEWEB)

    Llacer, J.; Veklerov, E.; Nolan, D. (Lawrence Berkeley Lab., CA (USA)); Grafton, S.T.; Mazziotta, J.C.; Hawkins, R.A.; Hoh, C.K.; Hoffman, E.J. (California Univ., Los Angeles, CA (USA))

    1990-10-01

    This paper will report on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of 18F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and we propose a variation based on reading three consecutive slices at a time, rating only the center slice. 9 refs., 2 figs., 1 tab.

  20. The Benefits of Maximum Likelihood Estimators in Predicting Bulk Permeability and Upscaling Fracture Networks

    Science.gov (United States)

    Emanuele Rizzo, Roberto; Healy, David; De Siena, Luca

    2016-04-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in fractured rock, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (lengths, apertures, orientations and densities) is fundamental to the estimation of permeability and fluid flow, which are of primary importance in a number of contexts including: hydrocarbon production from fractured reservoirs; geothermal energy extraction; and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. Our work links outcrop fracture data to modelled fracture networks in order to numerically predict bulk permeability. We collected outcrop data from a highly fractured upper Miocene biosiliceous mudstone formation, cropping out along the coastline north of Santa Cruz (California, USA). Using outcrop fracture networks as analogues for subsurface fracture systems has several advantages, because key fracture attributes such as spatial arrangements and lengths can be effectively measured only on outcrops [1]. However, a limitation when dealing with outcrop data is the relative sparseness of natural data due to the intrinsic finite size of the outcrops. We make use of a statistical approach for the overall workflow, starting from data collection with the Circular Windows Method [2]. Then we analyse the data statistically using Maximum Likelihood Estimators, which provide greater accuracy compared to the more commonly used Least Squares linear regression when investigating distribution of fracture attributes. Finally, we estimate the bulk permeability of the fractured rock mass using Oda's tensorial approach [3]. The higher quality of this statistical analysis is fundamental: better statistics of the fracture attributes means more accurate permeability estimation, since the fracture attributes feed
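
    The contrast drawn here between maximum-likelihood estimators and least-squares regression for fracture-attribute distributions can be illustrated with the textbook case of a power-law length distribution, where the ML estimate of the exponent has a closed form while a straight-line fit to a log-log histogram is known to behave poorly for sparse samples. The sketch below is only that generic illustration; the circular-window sampling and Oda's permeability tensor used in the actual workflow are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
alpha_true, x_min, n = 2.5, 1.0, 400       # sparse outcrop-sized sample

# Draw lengths from a Pareto (power-law) distribution via inverse transform.
lengths = x_min * (1.0 - rng.uniform(size=n)) ** (-1.0 / (alpha_true - 1.0))

# Closed-form maximum-likelihood estimate of the exponent (Hill estimator).
alpha_ml = 1.0 + n / np.sum(np.log(lengths / x_min))

# Least-squares estimate: straight-line fit to a log-log histogram.
counts, edges = np.histogram(lengths, bins=np.logspace(0, 2, 20), density=True)
centres = np.sqrt(edges[:-1] * edges[1:])
mask = counts > 0
slope, _ = np.polyfit(np.log(centres[mask]), np.log(counts[mask]), 1)
alpha_ls = -slope

print(f"ML: {alpha_ml:.2f}   log-log LS: {alpha_ls:.2f}   true: {alpha_true}")
```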

  1. Approximate dynamic fault tree calculations for modelling water supply risks

    International Nuclear Information System (INIS)

    Traditional fault tree analysis is not always sufficient when analysing complex systems. To overcome the limitations dynamic fault tree (DFT) analysis is suggested in the literature as well as different approaches for how to solve DFTs. For added value in fault tree analysis, approximate DFT calculations based on a Markovian approach are presented and evaluated here. The approximate DFT calculations are performed using standard Monte Carlo simulations and do not require simulations of the full Markov models, which simplifies model building and in particular calculations. It is shown how to extend the calculations of the traditional OR- and AND-gates, so that information is available on the failure probability, the failure rate and the mean downtime at all levels in the fault tree. Two additional logic gates are presented that make it possible to model a system's ability to compensate for failures. This work was initiated to enable correct analyses of water supply risks. Drinking water systems are typically complex with an inherent ability to compensate for failures that is not easily modelled using traditional logic gates. The approximate DFT calculations are compared to results from simulations of the corresponding Markov models for three water supply examples. For the traditional OR- and AND-gates, and one gate modelling compensation, the errors in the results are small. For the other gate modelling compensation, the error increases with the number of compensating components. The errors are, however, in most cases acceptable with respect to uncertainties in input data. The approximate DFT calculations improve the capabilities of fault tree analysis of drinking water systems since they provide additional and important information and are simple and practically applicable.
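
    A minimal sketch of the kind of Monte Carlo gate calculation described, for two repairable components feeding OR- and AND-gates, is given below; component failure and repair rates are invented for illustration, and the compensation gates and the failure-rate/mean-downtime bookkeeping discussed in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def down_at(t_end, lam, mu, rng):
    """Simulate one repairable component (exponential failure rate lam,
    repair rate mu) and return True if it is down at time t_end."""
    t, up = 0.0, True
    while t < t_end:
        t += rng.exponential(1.0 / (lam if up else mu))
        if t < t_end:
            up = not up
    return not up

lam1, mu1 = 0.02, 0.5      # component 1: failures/h, repairs/h (illustrative)
lam2, mu2 = 0.05, 1.0      # component 2
T, n_trials = 500.0, 10000

down1 = np.array([down_at(T, lam1, mu1, rng) for _ in range(n_trials)])
down2 = np.array([down_at(T, lam2, mu2, rng) for _ in range(n_trials)])

q1, q2 = down1.mean(), down2.mean()
q_or = np.mean(down1 | down2)     # OR-gate: system fails if either is down
q_and = np.mean(down1 & down2)    # AND-gate: system fails only if both are down

print(f"q1={q1:.4f} (steady-state {lam1/(lam1+mu1):.4f})  "
      f"q2={q2:.4f}  OR={q_or:.4f}  AND={q_and:.4f}")
```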

  2. Inverse Modeling of Respiratory System during Noninvasive Ventilation by Maximum Likelihood Estimation

    Science.gov (United States)

    Saatci, Esra; Akan, Aydin

    2010-12-01

    We propose a procedure to estimate the model parameters of presented nonlinear Resistance-Capacitance (RC) and the widely used linear Resistance-Inductance-Capacitance (RIC) models of the respiratory system by Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and Kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated by the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under the noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model compared to the nonlinear RC model. On the other hand, the Patient group respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, better converged measurement noise shape factor, and model parameter tracks. Also, it is observed that for the Patient group the shape factor of the measurement noise converges to values between 1 and 2 whereas for the Control group shape factor values are estimated in the super-Gaussian area.

  3. Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code

    Directory of Open Access Journals (Sweden)

    Adel Ahmadi

    2015-01-01

    Full Text Available Motivated by the decompositions of sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block code (QOSTBC). The proposed algorithm with a relatively simple design exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. Search spaces of symbols are substantially reduced by employing the independent parts and statistics of noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder’s performance is superior to many of the recently published state-of-the-art solutions in terms of complexity level. More specifically, it was possible to verify that application of the new algorithms with 1024-QAM would decrease the computational complexity compared to the state-of-the-art solution with 16-QAM.

  4. Application of maximum-likelihood estimation in optical coherence tomography for nanometer-class thickness estimation

    Science.gov (United States)

    Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.

    2015-03-01

    In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.

  5. Stability of maximum-likelihood-based clustering methods: exploring the backbone of classifications

    International Nuclear Information System (INIS)

    Components of complex systems are often classified according to the way they interact with each other. In graph theory such groups are known as clusters or communities. Many different techniques have been recently proposed to detect them, some of which involve inference methods using either Bayesian or maximum likelihood approaches. In this paper, we study a statistical model designed for detecting clusters based on connection similarity. The basic assumption of the model is that the graph was generated by a certain grouping of the nodes and an expectation maximization algorithm is employed to infer that grouping. We show that the method admits further development to yield a stability analysis of the groupings that quantifies the extent to which each node influences its neighbors' group membership. Our approach naturally allows for the identification of the key elements responsible for the grouping and their resilience to changes in the network. Given the generality of the assumptions underlying the statistical model, such nodes are likely to play special roles in the original system. We illustrate this point by analyzing several empirical networks for which further information about the properties of the nodes is available. The search and identification of stabilizing nodes constitutes thus a novel technique to characterize the relevance of nodes in complex networks

  6. Decision Feedback Partial Response Maximum Likelihood for Super-Resolution Media

    Science.gov (United States)

    Kasahara, Ryosuke; Ogata, Tetsuya; Kawasaki, Toshiyuki; Miura, Hiroshi; Yokoi, Kenya

    2007-06-01

    A decision feedback partial response maximum likelihood (PRML) scheme for super-resolution media was developed. Decision feedback is used to compensate for nonlinear distortion in the readout signals of super-resolution media, making it possible to compensate for long-bit nonlinear distortion with small circuits. A field programmable gate array (FPGA) was fabricated with the decision feedback PRML, and a real-time bit error rate (bER) measuring system was developed. As a result, a bER of 4 × 10^-5 was achieved with an actual readout signal at double the density of a Blu-ray disc, converted to the optical properties of the experimental setup using a red-laser system. Also, a bER of 1.5 × 10^-5 was achieved at double the density of a high definition digital versatile disc read-only memory (HD DVD-ROM), and the radial and tangential tilt margins were measured in a blue-laser system.

  7. Maximum-likelihood constrained regularized algorithms: an objective criterion for the determination of regularization parameters

    Science.gov (United States)

    Lanteri, Henri; Roche, Muriel; Cuevas, Olga; Aime, Claude

    1999-12-01

    We propose regularized versions of Maximum Likelihood algorithms for Poisson processes with a non-negativity constraint. For such processes, the best-known (non-regularized) algorithm is that of Richardson-Lucy, extensively used for astronomical applications. Regularization is necessary to prevent an amplification of the noise during the iterative reconstruction; this can be done either by limiting the iteration number or by introducing a penalty term. In this Communication, we focus our attention on explicit regularization using Tikhonov (identity and Laplacian operator) or entropy terms (Kullback-Leibler and Csiszar divergences). The algorithms are established from the Kuhn-Tucker first-order optimality conditions for the minimization of the Lagrange function and from the method of successive substitutions. The algorithms may be written in a "product form". Numerical illustrations are given for simulated images corrupted by photon noise. The effects of the regularization are shown in the Fourier plane. The tests we have made indicate that a noticeable improvement of the results may be obtained for some of these explicitly regularized algorithms. We also show that a comparison with a Wiener filter can give the optimal regularizing conditions (operator and strength).
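
    One concrete reading of the "product form" mentioned above is a Richardson-Lucy-type multiplicative update in which the penalty gradient enters the denominator; for a Tikhonov (identity-operator) penalty of strength lambda this gives the sketch below. This is a schematic example of the class of algorithms discussed, under the stated assumption about where the penalty term enters, not the authors' exact derivation, and the choice of lambda is left open exactly as the Communication notes.

```python
import numpy as np

def rl_tikhonov(y, H, n_iter=200, lam=0.0, eps=1e-12):
    """Richardson-Lucy-type multiplicative update for y ~ Poisson(H @ x)
    with a Tikhonov (identity) penalty of strength lam entering the
    denominator (one-step-late style); x stays non-negative throughout."""
    x = np.ones(H.shape[1])
    sens = H.T @ np.ones(H.shape[0])            # H^T 1
    for _ in range(n_iter):
        ratio = y / (H @ x + eps)
        x = x * (H.T @ ratio) / (sens + lam * x + eps)
    return x

# Toy 1-D deblurring problem corrupted by photon noise.
rng = np.random.default_rng(8)
n = 60
x_true = np.zeros(n); x_true[[15, 30, 31, 45]] = [40, 70, 50, 30]
psf = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2); psf /= psf.sum()
H = np.array([np.roll(np.pad(psf, (0, n - len(psf))), i - 4) for i in range(n)]).T
y = rng.poisson(H @ x_true)

x_noreg = rl_tikhonov(y, H, lam=0.0)    # plain Richardson-Lucy
x_reg = rl_tikhonov(y, H, lam=0.05)     # explicitly regularized version
```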

  8. Extended Maximum Likelihood Halo-independent Analysis of Dark Matter Direct Detection Data

    CERN Document Server

    Gelmini, Graciela B; Gondolo, Paolo; Huh, Ji-Haeng

    2015-01-01

    We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio $f_n/f_p=-0.7$, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with $f_n/f_p=-0.8$.

  9. Activation detection in functional MRI using subspace modeling and maximum likelihood estimation.

    Science.gov (United States)

    Ardekani, B A; Kershaw, J; Kashikura, K; Kanno, I

    1999-02-01

    A statistical method for detecting activated pixels in functional MRI (fMRI) data is presented. In this method, the fMRI time series measured at each pixel is modeled as the sum of a response signal which arises due to the experimentally controlled activation-baseline pattern, a nuisance component representing effects of no interest, and Gaussian white noise. For periodic activation-baseline patterns, the response signal is modeled by a truncated Fourier series with a known fundamental frequency but unknown Fourier coefficients. The nuisance subspace is assumed to be unknown. A maximum likelihood estimate is derived for the component of the nuisance subspace which is orthogonal to the response signal subspace. An estimate for the order of the nuisance subspace is obtained from an information theoretic criterion. A statistical test is derived and shown to be the uniformly most powerful (UMP) test invariant to a group of transformations which are natural to the hypothesis testing problem. The maximal invariant statistic used in this test has an F distribution. The theoretical F distribution under the null hypothesis strongly concurred with the experimental frequency distribution obtained by performing null experiments in which the subjects did not perform any activation task. Application of the theory to motor activation and visual stimulation fMRI studies is presented. PMID:10232667
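
    A simplified sketch of the test idea, assuming the nuisance subspace is supplied by the caller rather than estimated by maximum likelihood as in the paper: regress the time series on a truncated Fourier response subspace and form an F statistic. Names and defaults are illustrative.

```python
import numpy as np

def periodic_activation_F(ts, period, n_harmonics=2, nuisance=None):
    """F statistic for a periodic response modeled by a truncated Fourier
    series, after accounting for a (user-supplied) nuisance subspace."""
    n = len(ts)
    t = np.arange(n)
    # response subspace: sines and cosines at the fundamental and harmonics
    S = np.column_stack([f(2 * np.pi * h * t / period)
                         for h in range(1, n_harmonics + 1)
                         for f in (np.sin, np.cos)])
    # nuisance subspace: constant term plus anything supplied by the caller
    N = np.ones((n, 1)) if nuisance is None else np.column_stack([np.ones(n), nuisance])
    X_full = np.column_stack([N, S])

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
        r = ts - X @ beta
        return r @ r, X.shape[1]

    rss0, p0 = rss(N)          # nuisance-only model
    rss1, p1 = rss(X_full)     # nuisance + response model
    return ((rss0 - rss1) / (p1 - p0)) / (rss1 / (n - p1))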

  10. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
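
    The model-selection and model-averaging step can be illustrated with Akaike weights; this is a generic sketch (not the paper's clustering code), with hypothetical argument names.

```python
import numpy as np

def akaike_weights(log_likelihoods, n_params, n_obs=None):
    """Model weights from AIC (or AICc when the sample size is given)."""
    ll = np.asarray(log_likelihoods, dtype=float)
    k = np.asarray(n_params, dtype=float)
    aic = -2 * ll + 2 * k
    if n_obs is not None:                         # small-sample correction (AICc)
        aic += 2 * k * (k + 1) / (n_obs - k - 1)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# model-averaged estimate of any per-site quantity, e.g. clustering profiles:
#   profiles: array of shape (n_models, n_sites); weights from akaike_weights
#   averaged_profile = weights @ profiles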

  11. Application of Artificial Bee Colony Algorithm to Maximum Likelihood DOA Estimation

    Institute of Scientific and Technical Information of China (English)

    Zhicheng Zhang; Jun Lin; Yaowu Shi

    2013-01-01

    The Maximum Likelihood (ML) method has excellent performance for Direction-Of-Arrival (DOA) estimation, but a multidimensional nonlinear solution search is required, which complicates the computation and prevents the method from practical use. To reduce the high computational burden of the ML method and make it more suitable for engineering applications, we apply the Artificial Bee Colony (ABC) algorithm to maximize the likelihood function for DOA estimation. As a recently proposed bio-inspired computing algorithm, the ABC algorithm was originally used to optimize multivariable functions by imitating the behavior of a bee colony finding excellent nectar sources in the natural environment. It offers an excellent alternative to the conventional methods in ML-DOA estimation. The performance of ABC-based ML and other popular meta-heuristic-based ML methods for DOA estimation is compared for various scenarios of convergence, Signal-to-Noise Ratio (SNR), and number of iterations. The computation loads of ABC-based ML and the conventional ML methods for DOA estimation are also investigated. Simulation results demonstrate that the proposed ABC-based method is more efficient in computation and statistical performance than other ML-based DOA estimation methods.

  12. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well

  13. EPR spectrum deconvolution and dose assessment of fossil tooth enamel using maximum likelihood common factor analysis

    International Nuclear Information System (INIS)

    In order to determine the components which give rise to the EPR spectrum around g = 2, we have applied Maximum Likelihood Common Factor Analysis (MLCFA) to the EPR spectra of enamel sample 1126, which has previously been analysed by continuous wave and pulsed EPR as well as EPR microscopy. MLCFA yielded consistent results on three sets of X-band spectra and the following components were identified: an orthorhombic component attributed to CO₂⁻, an axial component CO₃³⁻, as well as four isotropic components, three of which could be attributed to SO₂⁻, a tumbling CO₂⁻ and a central line of a dimethyl radical. The X-band results were confirmed by analysis of Q-band spectra where three additional isotropic lines were found; however, these three components could not be attributed to known radicals. The orthorhombic component was used to establish dose response curves for the assessment of the past radiation dose, D_E. The results appear to be more reliable than those based on conventional peak-to-peak EPR intensity measurements or simple Gaussian deconvolution methods.

  14. Statistical bounds and maximum likelihood performance for shot noise limited knife-edge modeled stellar occultation

    Science.gov (United States)

    McNicholl, Patrick J.; Crabtree, Peter N.

    2014-09-01

    Applications of stellar occultation by solar system objects have a long history for determining universal time, detecting binary stars, and providing estimates of sizes of asteroids and minor planets. More recently, extension of this last application has been proposed as a technique to provide information (if not complete shadow images) of geosynchronous satellites. Diffraction has long been recognized as a source of distortion for such occultation measurements, and models have subsequently been developed to compensate for this degradation. Typically these models employ a knife-edge assumption for the obscuring body. In this preliminary study, we report on the fundamental limitations of knife-edge position estimates due to shot noise in an otherwise idealized measurement. In particular, we address the statistical bounds, both Cramér-Rao and Hammersley-Chapman-Robbins, on the uncertainty in the knife-edge position measurement, as well as the performance of the maximum-likelihood estimator. Results are presented as a function of both stellar magnitude and sensor passband; the limiting case of infinite resolving power is also explored.
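
    A rough numerical sketch of the Poisson (shot-noise) Cramér-Rao bound on the edge position, using a Gaussian-blurred step in place of the full diffraction model; the model and all parameter names are assumptions made for illustration only.

```python
import numpy as np
from scipy.stats import norm

def knife_edge_crb(x, x0, flux, background, blur_sigma):
    """Cramér-Rao bound on the knife-edge position for independent Poisson
    samples of a blur-smoothed edge profile measured at positions x."""
    def rate(pos):
        # expected counts per sample: smoothed step of height `flux` plus background
        return background + flux * norm.cdf((x - pos) / blur_sigma)
    h = 1e-4 * blur_sigma
    lam = rate(x0)
    dlam = (rate(x0 + h) - rate(x0 - h)) / (2 * h)   # numerical derivative
    fisher = np.sum(dlam ** 2 / lam)                  # Poisson Fisher information
    return 1.0 / fisher                               # variance lower bound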

  15. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Directory of Open Access Journals (Sweden)

    Sonali Sachin Sankpal

    2016-01-01

    Full Text Available Scattering and absorption of light are the main reasons for limited visibility in water; suspended particles and dissolved chemical compounds are responsible for this scattering and absorption, and the limited visibility results in degradation of underwater images. The visibility can be increased by using an artificial light source in the underwater imaging system, but the artificial light illuminates the scene in a nonuniform fashion: it produces a bright spot at the center with dark regions at the surroundings. In some cases the imaging system itself creates a dark region in the image by casting a shadow on the objects. The problem of nonuniform illumination is neglected in most underwater image enhancement techniques, and very few methods show results on color images. This paper suggests a method for nonuniform illumination correction of underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses maximum likelihood estimation of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and comprehensive assessment function.
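
    The scale-parameter step has a closed form: for Rayleigh-distributed samples the maximum likelihood estimate is sigma_hat^2 = sum(x_i^2) / (2N). A minimal sketch (the windowing and mapping details of the paper are not reproduced, and the names are illustrative):

```python
import numpy as np

def rayleigh_scale_mle(pixels):
    """Closed-form ML estimate of the Rayleigh scale parameter:
    sigma_hat = sqrt(sum(x_i^2) / (2N))."""
    x = np.asarray(pixels, dtype=float).ravel()
    return np.sqrt(np.sum(x ** 2) / (2 * x.size))

# a crude nonuniform-illumination correction in this spirit: estimate the scale
# in local windows and rescale each window toward a global target scale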

  16. Statistical analysis of maximum likelihood estimator images of human brain FDG PET studies

    International Nuclear Information System (INIS)

    The work presented in this paper evaluates the statistical characteristics of regional bias and expected error in reconstructions of real PET data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task that the authors have investigated is that of quantifying radioisotope uptake in regions-of-interest (ROI's). They first describe a robust methodology for the use of the MLE method with clinical data which contains only one adjustable parameter: the kernel size for a Gaussian filtering operation that determines final resolution and expected regional error. Simulation results are used to establish the fundamental characteristics of the reconstructions obtained by our methodology, corresponding to the case in which the transition matrix is perfectly known. Then, data from 72 independent human brain FDG scans from four patients are used to show that the results obtained from real data are consistent with the simulation, although the quality of the data and of the transition matrix have an effect on the final outcome.

  17. Maximum Likelihood Estimation of Monocular Optical Flow Field for Mobile Robot Ego-motion

    Directory of Open Access Journals (Sweden)

    Huajun Liu

    2016-01-01

    Full Text Available This paper presents an optimized scheme of monocular ego-motion estimation to provide location and pose information for mobile robots with one fixed camera. First, a multi-scale hyper-complex wavelet phase-derived optical flow is applied to estimate the micro motion of image blocks. Optical flow computation overcomes the difficulties of unreliable feature selection and feature matching in outdoor scenes; at the same time, the multi-scale strategy overcomes the problems of road surface self-similarity and local occlusions. Secondly, a support probability of the flow vector is defined to evaluate the validity of the candidate image motions, and a Maximum Likelihood Estimation (MLE) optical flow model is constructed based not only on image motion residuals but also on their distribution of inliers and outliers, together with their support probabilities, to evaluate a given transform. This yields an optimized estimation of the inlier parts of the optical flow. Thirdly, a sampling and consensus strategy is designed to estimate the ego-motion parameters. Our model and algorithms are tested on real datasets collected from an intelligent vehicle. The experimental results demonstrate that the estimated ego-motion parameters closely follow the GPS/INS ground truth in complex outdoor road scenarios.

  18. Maximum likelihood estimation-based denoising of magnetic resonance images using restricted local neighborhoods

    International Nuclear Information System (INIS)

    In this paper, we propose a method to denoise magnitude magnetic resonance (MR) images, which are Rician distributed. Conventionally, maximum likelihood methods incorporate the Rice distribution to estimate the true, underlying signal from a local neighborhood within which the signal is assumed to be constant. However, if this assumption is not met, such filtering will lead to blurred edges and loss of fine structures. As a solution to this problem, we put forward the concept of restricted local neighborhoods, where the true intensity for each noisy pixel is estimated from a set of preselected neighboring pixels. To this end, a reference image is created from the noisy image using a recently proposed nonlocal means algorithm. This reference image is used as a prior for further noise reduction. A scheme is developed to locally select an appropriate subset of pixels from which the underlying signal is estimated. Experimental results based on the peak signal-to-noise ratio, structural similarity index measure, Bhattacharyya coefficient and mean absolute difference for synthetic and real MR images demonstrate the superior performance of the proposed method over other state-of-the-art methods.
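
    A hedged sketch of a Rician maximum likelihood fit over a set of preselected neighborhood pixels, using SciPy's generic numerical MLE for the Rice distribution; the neighborhood-selection scheme of the paper is not reproduced, and the function name is illustrative.

```python
import numpy as np
from scipy.stats import rice

def rician_signal_mle(magnitudes):
    """ML estimate of the underlying signal amplitude from Rician-distributed
    magnitude samples (e.g. a restricted neighborhood of similar pixels)."""
    m = np.asarray(magnitudes, dtype=float).ravel()
    b, loc, sigma = rice.fit(m, floc=0)   # numerical MLE with location fixed at 0
    return b * sigma                      # amplitude nu = shape * scale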

  19. Optimization of the Maximum Likelihood Estimator for Determining the Intrinsic Dimensionality of High–Dimensional Data

    Directory of Open Access Journals (Sweden)

    Karbauskaitė Rasa

    2015-12-01

    Full Text Available One of the problems in the analysis of a set of images of a moving object is to evaluate the degree of freedom of motion and the angle of rotation. Here the intrinsic dimensionality of the multidimensional data characterizing the set of images can be used. Usually, the image may be represented by a high-dimensional point whose dimensionality depends on the number of pixels in the image. The knowledge of the intrinsic dimensionality of a data set is very useful information in exploratory data analysis, because it is possible to reduce the dimensionality of the data without losing much information. In this paper, the maximum likelihood estimator (MLE) of the intrinsic dimensionality is explored experimentally. In contrast to previous works, the radius of a hypersphere, which covers neighbours of the analysed points, is fixed instead of the number of the nearest neighbours in the MLE. A way of choosing the radius in this method is proposed. We explore which metric, Euclidean or geodesic, must be evaluated in the MLE algorithm in order to get the true estimate of the intrinsic dimensionality. The MLE method is examined using a number of artificial and real (image) data sets.
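
    A minimal sketch of the fixed-radius variant of the Levina-Bickel maximum likelihood estimator described above, assuming Euclidean distances (a geodesic distance matrix could be substituted); names are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def intrinsic_dim_mle(X, radius):
    """Fixed-radius ML intrinsic-dimension estimate: for each point,
    id_hat = N_R / sum_j log(R / d_j) over the N_R neighbours closer than R;
    the per-point estimates are then averaged."""
    D = cdist(X, X)                 # Euclidean distances between all points
    np.fill_diagonal(D, np.inf)     # exclude the point itself
    estimates = []
    for row in D:
        d = row[row < radius]
        if d.size >= 2:             # need at least a couple of neighbours
            estimates.append(d.size / np.sum(np.log(radius / d)))
    return float(np.mean(estimates))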

  20. Estimation of intake by maximum likelihood method using follow up measurements of 131I thyroidal burden

    International Nuclear Information System (INIS)

    131I is widely used for diagnostic and therapeutic purposes. Since the thyroid is the main deposition site for 131I, it can be detected by direct thyroid monitoring. This work presents the results of follow-up measurements of an individual who was internally contaminated with 131I, with the injected activity being determined by the maximum likelihood method. The importance of dose per unit content is also shown in this study. The whole body monitoring system available in the Radiation Safety Systems Division of Bhabha Atomic Research Centre is calibrated for estimation of 131I in the thyroid of a radiation worker using a BOttle MAnnikin Absorber (BOMAB) phantom with the neck part replaced by an American National Standards Institute (ANSI)/International Atomic Energy Agency (IAEA) neck. The estimated intake was found to be 89.24 kBq and the committed effective dose is calculated as 1.96 mSv. The data are analyzed with the autocorrelation and Chi-square tests to establish goodness of fit with the log-normal distribution. The overestimation of thyroid activity caused by the use of a mid-axial hole in the BOMAB phantom is removed by using the ANSI/IAEA neck phantom. The thyroidal retention measured on different days following the intake closely fits the ICRP-predicted retained activity. (author)
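
    Under the log-normal error model mentioned above, the maximum likelihood intake estimate from a set of follow-up measurements has a simple closed form; the sketch below assumes a common geometric standard deviation across measurements and uses illustrative names.

```python
import numpy as np

def intake_mle_lognormal(measurements, retention_fractions):
    """ML intake estimate assuming multiplicative log-normal measurement
    errors: ln(I_hat) = mean(ln(M_i / m(t_i))), where m(t_i) is the predicted
    retained fraction per unit intake (e.g. from the ICRP biokinetic model)."""
    M = np.asarray(measurements, dtype=float)
    m = np.asarray(retention_fractions, dtype=float)
    return float(np.exp(np.mean(np.log(M / m))))

# the committed effective dose then follows as I_hat times the dose coefficient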

  1. The direct calculation of parametric images from dynamic PET data using maximum-likelihood iterative reconstruction

    International Nuclear Information System (INIS)

    The aim of this work is to calculate, directly from projection data, concise images characterizing the spatial and temporal distribution of labelled compounds from dynamic PET data. Conventionally, image reconstruction and the calculation of parametric images are performed sequentially. By combining the two processes, low-noise parametric images are obtained, using a computationally feasible parametric iterative reconstruction (PIR) algorithm. PIR is performed by restricting the pixel time-activity curves to a positive linear sum of predefined time characteristics. The weights in this sum are then calculated directly from the PET projection data, using an iterative algorithm based on a maximum-likelihood iterative algorithm commonly used for tomographic reconstruction. The ability of the algorithm to extract known kinetic components from the raw data is assessed, using data from both a phantom experiment and clinical studies. The calculated parametric images indicate differential kinetic behaviour and have been used to aid in the identification of tissues which exhibit differences in the handling of labelled compounds. These parametric images should be helpful in defining regions of interest with similar functional behaviour, and with Patlak analysis. (author)

  2. Normalized Maximum Likelihood Coding for Exponential Family with Its Applications to Optimal Clustering

    CERN Document Server

    Hirai, So

    2012-01-01

    We are concerned with the issue of how to calculate the normalized maximum likelihood (NML) code-length. There is a problem that the normalization term of the NML code-length may diverge when the data domain is continuous and unbounded, and a straightforward computation of it is highly expensive when the data domain is finite. In previous works it has been investigated how to calculate the NML code-length for specific types of distributions. We first propose a general method for computing the NML code-length for the exponential family. Then we specifically focus on the Gaussian mixture model (GMM), and propose a new efficient method for computing the NML code-length for it. We develop it by generalizing Rissanen's re-normalizing technique. Then we apply this method to the clustering issue, in which a clustering structure is modeled using a GMM, and the main task is to estimate the optimal number of clusters on the basis of the NML code-length. We demonstrate using artificial data sets the superiority of the NML-based clustering over oth...

  3. Land Use Classification using Support Vector Machine and Maximum Likelihood Algorithms by Landsat 5 TM Images

    Directory of Open Access Journals (Sweden)

    Abbas TAATI

    2015-08-01

    Full Text Available Nowadays, remote sensing images have been identified and exploited as the latest information source to study land cover and land uses. These digital images are of significant importance, since they can provide timely information and are capable of producing land use maps. The aim of this study is to create a land use classification using a support vector machine (SVM) and a maximum likelihood classifier (MLC) in Qazvin, Iran, from TM images of the Landsat 5 satellite. In the pre-processing stage, the necessary corrections were applied to the images. In order to evaluate the accuracy of the two algorithms, the overall accuracy and kappa coefficient were used. The evaluation results verified that the SVM algorithm, with an overall accuracy of 86.67 % and a kappa coefficient of 0.82, has a higher accuracy than the MLC algorithm in land use mapping. Therefore, this algorithm has been suggested to be applied as an optimal classifier for extraction of land use maps due to its higher accuracy and better consistency within the study area.
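
    The two accuracy figures quoted above can be computed from the classification confusion matrix; a minimal sketch with illustrative names:

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(confusion, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total                               # overall accuracy
    expected = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total ** 2
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa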

  4. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Dansereau Richard M

    2007-01-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation method and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  5. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Mohammad H. Radfar

    2006-11-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation method and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  6. Rayleigh-maximum-likelihood filtering for speckle reduction of ultrasound images.

    Science.gov (United States)

    Aysal, Tuncer C; Barner, Kenneth E

    2007-05-01

    Speckle is a multiplicative noise that degrades ultrasound images. Recent advancements in ultrasound instrumentation and portable ultrasound devices call for more robust despeckling techniques, for both routine clinical practice and teleconsultation. Methods previously proposed for speckle reduction suffer from two major limitations: 1) noise attenuation is not sufficient, especially in the smooth and background areas; 2) existing methods do not sufficiently preserve or enhance edges; they only inhibit smoothing near edges. In this paper, we propose a novel technique that is capable of reducing the speckle more effectively than previous methods while jointly enhancing the edge information, rather than just inhibiting smoothing. The proposed method utilizes the Rayleigh distribution to model the speckle and adopts the robust maximum-likelihood estimation approach. The resulting estimator is statistically analyzed through first and second moment derivations. A tuning parameter that naturally evolves in the estimation equation is analyzed, and an adaptive method utilizing the instantaneous coefficient of variation is proposed to adjust this parameter. To further tailor performance, a weighted version of the proposed estimator is introduced to exploit varying statistics of input samples. Finally, the proposed method is evaluated and compared to well-accepted methods through simulations utilizing synthetic and real ultrasound data. PMID:17518065

  7. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    Science.gov (United States)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  8. Performance of MIMO-OFDM system using Linear Maximum Likelihood Alamouti Decoder

    Directory of Open Access Journals (Sweden)

    Monika Aggarwal

    2012-06-01

    Full Text Available A MIMO-OFDM wireless communication system is a combination of MIMO and OFDM technology. The combination of MIMO and OFDM produces a powerful technique for providing high data rates over frequency-selective fading channels, and MIMO-OFDM has been recognized as one of the most competitive technologies for 4G mobile wireless systems; it can compensate for the shortcomings of MIMO while exploiting the advantages of OFDM. In this paper, the bit error rate (BER) performance of a linear maximum likelihood Alamouti combiner (LMLAC) decoding technique for space-time-frequency block coded (STFBC) MIMO-OFDM systems with frequency offset (FO) is evaluated, with the aim of providing the system with low complexity and maximum diversity. The simulation results show that the scheme has the ability to reduce ICI effectively with a low decoding complexity and maximum diversity, in terms of both bandwidth efficiency and BER performance, especially at high signal-to-noise ratio.
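
    For reference, the classic Alamouti maximum likelihood combining step for a 2x1 space-time block is sketched below; the paper's space-time-frequency block coded OFDM receiver with frequency-offset handling is more involved, so this is only an illustrative baseline with hypothetical names.

```python
import numpy as np

def alamouti_ml_combine(r1, r2, h1, h2, constellation):
    """Alamouti combining for one 2x1 space-time block followed by ML
    (nearest-constellation-point) detection, assuming the channel is
    constant over the two symbol periods."""
    gain = abs(h1) ** 2 + abs(h2) ** 2
    s1_tilde = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
    s2_tilde = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain
    detect = lambda s: constellation[np.argmin(np.abs(constellation - s))]
    return detect(s1_tilde), detect(s2_tilde)

# usage with QPSK: constellation = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)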

  9. Intra-Die Spatial Correlation Extraction with Maximum Likelihood Estimation Method for Multiple Test Chips

    Science.gov (United States)

    Fu, Qiang; Luk, Wai-Shing; Tao, Jun; Zeng, Xuan; Cai, Wei

    In this paper, a novel intra-die spatial correlation extraction method referred to as MLEMTC (Maximum Likelihood Estimation for Multiple Test Chips) is presented. In the MLEMTC method, a joint likelihood function is formulated by multiplying the set of individual likelihood functions for all test chips. This joint likelihood function is then maximized to extract a unique group of parameter values of a single spatial correlation function, which can be used for statistical circuit analysis and design. Moreover, to deal with the purely random component and measurement error contained in measurement data, the spatial correlation function combined with the correlation of white noise is used in the extraction, which significantly improves the accuracy of the extraction results. Furthermore, an LU decomposition based technique is developed to calculate the log-determinant of the positive definite matrix within the likelihood function, which solves the numerical stability problem encountered in the direct calculation. Experimental results have shown that the proposed method is efficient and practical.
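
    The log-determinant trick mentioned above can be illustrated for a positive definite correlation matrix; the sketch below uses a Cholesky factor (the triangular-factor specialization of an LU decomposition for positive definite matrices), with illustrative names.

```python
import numpy as np

def logdet_pd(K):
    """Numerically stable log-determinant of a positive definite matrix via
    its Cholesky factor: log det(K) = 2 * sum(log(diag(L))), with K = L L^T.
    This avoids the overflow/underflow of a direct det() computation."""
    L = np.linalg.cholesky(K)
    return 2.0 * np.sum(np.log(np.diag(L)))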

  10. Extended maximum likelihood halo-independent analysis of dark matter direct detection data

    Energy Technology Data Exchange (ETDEWEB)

    Gelmini, Graciela B.; Georgescu, Andreea [Department of Physics and Astronomy, UCLA,475 Portola Plaza, Los Angeles, CA, 90095 (United States); Gondolo, Paolo [Department of Physics and Astronomy, University of Utah,115 South 1400 East #201, Salt Lake City, UT, 84112 (United States); Huh, Ji-Haeng [Department of Physics and Astronomy, UCLA,475 Portola Plaza, Los Angeles, CA, 90095 (United States)

    2015-11-24

    We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio f_n/f_p = −0.7, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with f_n/f_p = −0.8.

  11. Extended maximum likelihood halo-independent analysis of dark matter direct detection data

    International Nuclear Information System (INIS)

    We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio f_n/f_p = −0.7, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with f_n/f_p = −0.8.

  12. Penalized maximum-likelihood sinogram restoration for dual focal spot computed tomography

    International Nuclear Information System (INIS)

    Due to various system non-idealities, the raw data generated by a computed tomography (CT) machine are not readily usable for reconstruction. Although the deterministic nature of corruption effects such as crosstalk and afterglow permits correction by deconvolution, there is a drawback because deconvolution usually amplifies noise. Methods that perform raw data correction combined with noise suppression are commonly termed sinogram restoration methods. The need for sinogram restoration arises, for example, when photon counts are low and non-statistical reconstruction algorithms such as filtered backprojection are used. Many modern CT machines offer a dual focal spot (DFS) mode, which serves the goal of increased radial sampling by alternating the focal spot between two positions on the anode plate during the scan. Although the focal spot mode does not play a role with respect to how the data are affected by the above-mentioned corruption effects, it needs to be taken into account if regularized sinogram restoration is to be applied to the data. This work points out the subtle difference in processing that sinogram restoration for DFS requires, how it is correctly employed within the penalized maximum-likelihood sinogram restoration algorithm and what impact it has on image quality

  13. Penalized maximum-likelihood sinogram restoration for dual focal spot computed tomography.

    Science.gov (United States)

    Forthmann, P; Köhler, T; Begemann, P G C; Defrise, M

    2007-08-01

    Due to various system non-idealities, the raw data generated by a computed tomography (CT) machine are not readily usable for reconstruction. Although the deterministic nature of corruption effects such as crosstalk and afterglow permits correction by deconvolution, there is a drawback because deconvolution usually amplifies noise. Methods that perform raw data correction combined with noise suppression are commonly termed sinogram restoration methods. The need for sinogram restoration arises, for example, when photon counts are low and non-statistical reconstruction algorithms such as filtered backprojection are used. Many modern CT machines offer a dual focal spot (DFS) mode, which serves the goal of increased radial sampling by alternating the focal spot between two positions on the anode plate during the scan. Although the focal spot mode does not play a role with respect to how the data are affected by the above-mentioned corruption effects, it needs to be taken into account if regularized sinogram restoration is to be applied to the data. This work points out the subtle difference in processing that sinogram restoration for DFS requires, how it is correctly employed within the penalized maximum-likelihood sinogram restoration algorithm and what impact it has on image quality. PMID:17634647

  14. Forest encroachment mapping in Baratang Island, India, using maximum likelihood and support vector machine classifiers

    Science.gov (United States)

    Tiwari, Laxmi Kant; Sinha, Satish K.; Saran, Sameer; Tolpekin, Valentyn A.; Raju, Penumetcha L. N.

    2016-01-01

    Maximum likelihood classifier (MLC) and support vector machines (SVMs) are commonly used supervised classification methods in remote sensing applications. MLC is a parametric method, whereas SVM is a nonparametric method. In an environmental application, a hybrid scheme is designed to identify forest encroachment (FE) pockets by classifying medium-resolution remote sensing images with SVM, incorporating knowledge-base and GPS readings in the geographical information system. The classification scheme has enabled us to identify small scattered noncontiguous FE pockets supported by ground truthing. On Baratang Island, the detected FE area from the classified thematic map for the year 2003 was ˜202 ha, and for the year 2013, the encroachment was ˜206 ha. While some of the older FE pockets were vacated, new FE pockets appeared in the area. Furthermore, comparisons of different classification results in terms of Z-statistics indicate that linear SVM is superior to MLC, whereas linear and nonlinear SVM are not significantly different. Accuracy assessment shows that SVM-based classification results have higher accuracy than MLC-based results. Statistical accuracy in terms of kappa values achieved for the linear SVM-classified thematic maps for the years 2003 and 2013 is 0.98 and 1.0, respectively.

  15. Parameter-free bearing fault detection based on maximum likelihood estimation and differentiation

    International Nuclear Information System (INIS)

    Bearing faults can lead to malfunction and ultimately complete stall of many machines. The conventional high-frequency resonance (HFR) method has been commonly used for bearing fault detection. However, it is often very difficult to obtain and calibrate bandpass filter parameters, i.e. the center frequency and bandwidth, the key to the success of the HFR method. This inevitably undermines the usefulness of the conventional HFR technique. To avoid such difficulties, we propose parameter-free, versatile yet straightforward techniques to detect bearing faults. We focus on two types of measured signals frequently encountered in practice: (1) a mixture of impulsive faulty bearing vibrations and intrinsic background noise and (2) impulsive faulty bearing vibrations blended with intrinsic background noise and vibration interferences. To design a proper signal processing technique for each case, we analyze the effects of intrinsic background noise and vibration interferences on amplitude demodulation. For the first case, a maximum likelihood-based fault detection method is proposed to accommodate the Rician distribution of the amplitude-demodulated signal mixture. For the second case, we first illustrate that the high-amplitude low-frequency vibration interferences can make the amplitude demodulation ineffective. Then we propose a differentiation method to enhance the fault detectability. It is shown that the iterative application of a differentiation step can boost the relative strength of the impulsive faulty bearing signal component with respect to the vibration interferences. This preserves the effectiveness of amplitude demodulation and hence leads to more accurate fault detection. The proposed approaches are evaluated on simulated signals and experimental data acquired from faulty bearings

  16. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    International Nuclear Information System (INIS)

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated

  17. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    Science.gov (United States)

    Stsepankou, D.; Arns, A.; Ng, S. K.; Zygmanski, P.; Hesser, J.

    2012-10-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm to data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and generalized and applied for nonlinear signal reconstruction. For quality check, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction such as massive undersampling of the number of projections. Errors of projection matrix parameters of up to 1° projection angle deviation remain within the tolerance level. Single defect pixels exhibit ring artifacts for each method; however, defect pixel compensation allows up to 40% of defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without change of the current hardware of the imaging system.
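
    The error measures used for the comparison have standard (though not unique) definitions; a minimal sketch with illustrative mask-based arguments:

```python
import numpy as np

def image_quality_metrics(recon, reference, roi, background):
    """Simple error measures of the kind used in such robustness comparisons:
    mean square error, signal-to-noise ratio and contrast-to-noise ratio.
    `roi` and `background` are boolean masks over the reconstruction."""
    mse = np.mean((recon - reference) ** 2)
    noise = recon[background].std()
    snr = recon[roi].mean() / noise
    cnr = abs(recon[roi].mean() - recon[background].mean()) / noise
    return mse, snr, cnr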

  18. Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.

    Science.gov (United States)

    Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène

    2016-07-01

    Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. More generally, it constitutes a step toward a better integration of interspecific

  19. FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.

    Directory of Open Access Journals (Sweden)

    Maxim Nikolaievich Shokhirev

    Full Text Available The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high throughput analysis of dye dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.

  20. Predicting bulk permeability using outcrop fracture attributes: The benefits of a Maximum Likelihood Estimator

    Science.gov (United States)

    Rizzo, R. E.; Healy, D.; De Siena, L.

    2015-12-01

    The success of any model prediction is largely dependent on the accuracy with which its parameters are known. In characterising fracture networks in naturally fractured rocks, the main issues are related to the difficulties in accurately up- and down-scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (fracture lengths, apertures, orientations and densities) represents a fundamental step which can aid the estimation of permeability and fluid flow, which are of primary importance in a number of contexts ranging from hydrocarbon production in fractured reservoirs and reservoir stimulation by hydrofracturing, to geothermal energy extraction and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. This work focuses on linking fracture data collected directly from outcrops to permeability estimation and fracture network modelling. Outcrop studies can supplement the limited data inherent to natural fractured systems in the subsurface. The study area is a highly fractured upper Miocene biosiliceous mudstone formation cropping out along the coastline north of Santa Cruz (California, USA). These unique outcrops expose a recently active bitumen-bearing formation representing a geological analogue of a fractured top seal. In order to validate field observations as useful analogues of subsurface reservoirs, we describe a methodology of statistical analysis for obtaining more accurate probability distributions of fracture attributes, using Maximum Likelihood Estimators. These procedures aim to understand whether the average permeability of a fracture network can be predicted while reducing its uncertainties, and whether outcrop measurements of fracture attributes can be used directly to generate statistically identical fracture network models.

  1. Total variation penalized maximum-likelihood image reconstruction for a stationary small animal SPECT system

    International Nuclear Information System (INIS)

    We have developed a 3D total variation (TV) penalized maximum likelihood (ML) image reconstruction method and tested it in simulated dynamic SPECT scans using a stationary ring-type SPECT insert for simultaneous small animal SPECT/MR imaging. The SPECT insert consists of 5 (axial) x 19 (transaxial) MR-compatible CZT detectors that form a seamless 19-side detector ring, inside which a cylindrical collimator sleeve with 36 focused pinholes is inserted for dynamic SPECT acquisition. The short duration of the individual time frames and the stationary nature of the data acquisition may cause severe noise and sparse-view artifacts in the reconstructed images, and as a result affect the time-activity curve (TAC) derived from the 4D image sequence. The TV potential function favors piece-wise constant image reconstruction and is therefore capable of reducing these image artifacts. Our implementation of the ML-TV method used the Douglas-Rachford splitting to deal with the non-smooth TV function. We applied the ML-TV method to a computer-simulated dynamic mouse renal SPECT scan, and evaluated the method in terms of pixel-wise TAC estimation compared to the conventional ML-EM. The pixel-wise TAC obtained by the ML-EM method exhibited large fluctuations around the truth; these fluctuations were significantly suppressed by using ML-TV. Our next step is to incorporate time-domain correlation and develop fully 4D (3D spatial + 1D time) image reconstruction methods for dynamic stationary small animal SPECT studies. (orig.)
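
    For orientation, a plain ML-EM update of the kind the TV-penalized method builds on; the Douglas-Rachford handling of the TV term is not reproduced here, and the explicit-matrix formulation and names are illustrative.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Plain ML-EM for emission tomography with system matrix A
    (projections x voxels) and measured counts y."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + eps            # A^T 1, the sensitivity image
    for _ in range(n_iter):
        proj = A @ x + eps                # forward projection
        x = x / sens * (A.T @ (y / proj)) # multiplicative EM update
    return x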

  2. Total variation penalized maximum-likelihood image reconstruction for a stationary small animal SPECT system

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Jingyan; Tsui, Benjamin M.W. [Johns Hopkins Univ., Baltimore, MD (United States). Div. of Medical Imaging Physics; Chen, Si [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Electrical and Computer Engineering

    2011-07-01

    We have developed a 3D total variation (TV) penalized maximum likelihood (ML) image reconstruction method and tested it in simulated dynamic SPECT scans using a stationary ring-type SPECT insert for simultaneous small animal SPECT/MR imaging. The SPECT insert consists of 5 (axial) x 19 (transaxial) MR-compatible CZT detectors that form a seamless 19-side detector ring, inside which a cylindrical collimator sleeve with 36 focused pinholes is inserted for dynamic SPECT acquisition. The short duration of the individual time frames and the stationary nature of the data acquisition may cause severe noise and sparse-view artifacts in the reconstructed images, and as a result affect the time-activity curve (TAC) derived from the 4D image sequence. The TV potential function favors piece-wise constant image reconstruction and is therefore capable of reducing these image artifacts. Our implementation of the ML-TV method used the Douglas-Rachford splitting to deal with the non-smooth TV function. We applied the ML-TV method to a computer-simulated dynamic mouse renal SPECT scan, and evaluated the method in terms of pixel-wise TAC estimation compared to the conventional ML-EM. The pixel-wise TAC obtained by the ML-EM method exhibited large fluctuations around the truth; these fluctuations were significantly suppressed by using ML-TV. Our next step is to incorporate time-domain correlation and develop fully 4D (3D spatial + 1D time) image reconstruction methods for dynamic stationary small animal SPECT studies. (orig.)

  3. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-10-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the

  4. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  5. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    International Nuclear Information System (INIS)

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  6. Approximation Algorithms for Optimal Decision Trees and Adaptive TSP Problems

    CERN Document Server

    Gupta, Anupam; Nagarajan, Viswanath; Ravi, R

    2010-01-01

    We consider the problem of constructing optimal decision trees: given a collection of tests which can disambiguate between a set of $m$ possible diseases, each test having a cost, and the a-priori likelihood of the patient having any particular disease, what is a good adaptive strategy to perform these tests to minimize the expected cost to identify the disease? We settle the approximability of this problem by giving a tight $O(\\log m)$-approximation algorithm. We also consider a more substantial generalization, the Adaptive TSP problem. Given an underlying metric space, a random subset $S$ of cities is drawn from a known distribution, but $S$ is initially unknown to us--we get information about whether any city is in $S$ only when we visit the city in question. What is a good adaptive way of visiting all the cities in the random subset $S$ while minimizing the expected distance traveled? For this problem, we give the first poly-logarithmic approximation, and show that this algorithm is best possible unless w...

  7. Distributions of the Maximum Likelihood and Minimum Contrast Estimators Associated with the Fractional Ornstein-Uhlenbeck Process

    OpenAIRE

    Tanaka, Katsuto

    2011-01-01

    We discuss some inference problems associated with the fractional Ornstein-Uhlenbeck (fO-U) process driven by the fractional Brownian motion (fBm). In particular, we are concerned with the estimation of the drift parameter, assuming that the Hurst parameter H is known and is in [1/2, 1). Under this setting we compute the distributions of the maximum likelihood estimator (MLE) and the minimum contrast estimator (MCE) for the drift parameter, and explore their distributional properties by payin...

  8. Multi-Level Restricted Maximum Likelihood Covariance Estimation and Kriging for Large Non-Gridded Spatial Datasets

    OpenAIRE

    Castrillon-Candas, Julio E.; Genton, Marc G.; Yokota, Rio

    2015-01-01

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibit fast decay that is dependent on th...

  9. New Confidence Intervals and Bias Comparisons Show that Maximum Likelihood Can Beat Multiple Imputation in Small Samples

    OpenAIRE

    von Hippel, Paul T.

    2013-01-01

    When analyzing incomplete data, is it better to use multiple imputation (MI) or full information maximum likelihood (ML)? In large samples ML is clearly better, but in small samples ML's usefulness has been limited because ML commonly uses normal test statistics and confidence intervals that require large samples. We propose small-sample t-based ML confidence intervals that have good coverage and are shorter than t-based confidence intervals under MI. We also show that ML point estimates are ...

  10. A Maximum Likelihood Estimator based on First Differences for a Panel Data Tobit Model with Individual Specific Effects

    OpenAIRE

    A.S. Kalwij

    2000-01-01

    This paper proposes an alternative estimation procedure for a panel data Tobit model with individual specific effects based on taking first differences of the equation of interest. This helps to alleviate the sensitivity of the estimates to a specific parameterization of the individual specific effects and some Monte Carlo evidence is provided in support of this. To allow for arbitrary serial correlation estimation takes place in two steps: Maximum Likelihood is applied to each pair of consec...

  11. Performance and Complexity Analysis of Blind FIR Channel Identification Algorithms Based on Deterministic Maximum Likelihood in SIMO Systems

    DEFF Research Database (Denmark)

    De Carvalho, Elisabeth; Omar, Samir; Slock, Dirk

    2013-01-01

    We analyze two algorithms that have been introduced previously for Deterministic Maximum Likelihood (DML) blind estimation of multiple FIR channels. The first one is a modification of the Iterative Quadratic ML (IQML) algorithm. IQML gives biased estimates of the channel and performs poorly at lo...... algorithms can immediately be applied also to other subspace problems such as frequency estimation of sinusoids in noise or direction of arrival estimation with uniform linear arrays....

  12. Concept for estimating mitochondrial DNA haplogroups using a maximum likelihood approach (EMMA)

    OpenAIRE

    Röck, Alexander W.; Dür, Arne; van Oven, Mannis; Parson, Walther

    2013-01-01

    The assignment of haplogroups to mitochondrial DNA haplotypes contributes substantial value for quality control, not only in forensic genetics but also in population and medical genetics. The availability of Phylotree, a widely accepted phylogenetic tree of human mitochondrial DNA lineages, led to the development of several (semi-)automated software solutions for haplogrouping. However, currently existing haplogrouping tools only make use of haplogroup-defining mutations, whereas private muta...

  13. Complexes of block copolymers in solution: tree approximation

    NARCIS (Netherlands)

    Geurts, Bernard J.; Damme, van Ruud

    1989-01-01

    We determine the statistical properties of block copolymer complexes in solution. These complexes are assumed to have the topological structure of (i) a tree or of (ii) a line-dressed tree. In case the structure is that of a tree, the system is shown to undergo a gelation transition at sufficiently

  14. The statistical analysis of dilution series by maximum likelihood: an application to in vitro bioassays estimating the potency of the diphteria component in vaccines by serology

    NARCIS (Netherlands)

    Slob W; Hendriksen CFM

    1989-01-01

    This report discusses the analysis of dilution series by maximum likelihood, applied to the in vitro serological testing of the potency of bacterial vaccines for human use. Computer simulations show that the maximum likelihood method is adequate for the sample sizes commonly used in potency testing.

  15. A Fast Algorithm for Maximum Likelihood-based Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    complex-valued data over all frequencies on a Fourier grid and up to a maximum model order. The proposed algorithm significantly reduces the computational complexity to a level not far from the complexity of the popular harmonic summation method which is an approximate ML estimator....
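
    The harmonic summation method mentioned above is easy to sketch; the following snippet (with an illustrative signal, candidate grid and model order, and no claim to reproduce the paper's fast exact algorithm) evaluates the periodogram on a Fourier grid and sums it at the first L harmonics of each candidate fundamental frequency.

```python
# Hedged sketch of the harmonic summation estimator referred to above (not the
# paper's fast exact algorithm): sum the periodogram at the first L harmonics of
# each candidate f0 and pick the maximizer.  Signal, grid and order are illustrative.
import numpy as np

fs = 8000.0
t = np.arange(0, 0.05, 1.0 / fs)
f0_true, L = 210.0, 5
x = sum(np.cos(2 * np.pi * l * f0_true * t + 0.3 * l) for l in range(1, L + 1))
x = x + 0.5 * np.random.default_rng(1).standard_normal(t.size)

nfft = 4096
P = np.abs(np.fft.rfft(x, nfft)) ** 2             # periodogram on a Fourier grid
freqs = np.fft.rfftfreq(nfft, 1.0 / fs)

def harmonic_sum(f0):
    idx = [np.argmin(np.abs(freqs - l * f0)) for l in range(1, L + 1)]
    return P[idx].sum()

candidates = np.arange(80.0, 400.0, 1.0)          # candidate fundamental frequencies (Hz)
f0_hat = candidates[np.argmax([harmonic_sum(f) for f in candidates])]
```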

  16. Maximum Likelihood Analysis of Neutron Beta Decay Observables to Resolve the Limits of the V-A Law

    CERN Document Server

    Gardner, S

    2013-01-01

    We assess the ability of future neutron beta decay measurements of up to O(10^{-4}) precision to falsify the standard model, particularly the V-A law, and to identify the dynamics beyond it. To do this, we employ a maximum likelihood statistical framework which incorporates both experimental and theoretical uncertainties. Using illustrative combined global fits to Monte Carlo pseudodata, we also quantify the importance of experimental measurements of the energy dependence of the angular correlation coefficients as input to such efforts, and we determine the precision to which ill-known "second-class" hadronic matrix elements must be determined in order to exact such tests.

  17. APPLICATION OF A GENERALIZED MAXIMUM LIKELIHOOD METHOD IN THE REDUCTION OF MULTICOMPONENT LIQUID-LIQUID EQUILIBRIUM DATA

    Directory of Open Access Journals (Sweden)

    L. STRAGEVITCH

    1997-03-01

    The equations of the method based on the maximum likelihood principle have been rewritten in a suitable generalized form to allow the use of any number of implicit constraints in the determination of model parameters from experimental data and from the associated experimental uncertainties. In addition to the use of any number of constraints, this method also allows data, with different numbers of constraints, to be reduced simultaneously. Application of the method is illustrated in the reduction of liquid-liquid equilibrium data of binary, ternary and quaternary systems simultaneously.

  18. A maximum likelihood direction of arrival estimation method for open-sphere microphone arrays in the spherical harmonic domain.

    Science.gov (United States)

    Hu, Yuxiang; Lu, Jing; Qiu, Xiaojun

    2015-08-01

    Open-sphere microphone arrays are preferred over rigid-sphere arrays when minimal interaction between array and the measured sound field is required. However, open-sphere arrays suffer from poor robustness at null frequencies of the spherical Bessel function. This letter proposes a maximum likelihood method for direction of arrival estimation in the spherical harmonic domain, which avoids the division of the spherical Bessel function and can be used at arbitrary frequencies. Furthermore, the method can be easily extended to wideband implementation. Simulation and experiment results demonstrate the superiority of the proposed method over the commonly used methods in open-sphere configurations. PMID:26328695

  19. Maximum likelihood estimation of the negative binomial dispersion parameter for highly overdispersed data, with applications to infectious diseases.

    Directory of Open Access Journals (Sweden)

    James O Lloyd-Smith

    Full Text Available BACKGROUND: The negative binomial distribution is used commonly throughout biology as a model for overdispersed count data, with attention focused on the negative binomial dispersion parameter, k. A substantial literature exists on the estimation of k, but most attention has focused on datasets that are not highly overdispersed (i.e., those with k>or=1, and the accuracy of confidence intervals estimated for k is typically not explored. METHODOLOGY: This article presents a simulation study exploring the bias, precision, and confidence interval coverage of maximum-likelihood estimates of k from highly overdispersed distributions. In addition to exploring small-sample bias on negative binomial estimates, the study addresses estimation from datasets influenced by two types of event under-counting, and from disease transmission data subject to selection bias for successful outbreaks. CONCLUSIONS: Results show that maximum likelihood estimates of k can be biased upward by small sample size or under-reporting of zero-class events, but are not biased downward by any of the factors considered. Confidence intervals estimated from the asymptotic sampling variance tend to exhibit coverage below the nominal level, with overestimates of k comprising the great majority of coverage errors. Estimation from outbreak datasets does not increase the bias of k estimates, but can add significant upward bias to estimates of the mean. Because k varies inversely with the degree of overdispersion, these findings show that overestimation of the degree of overdispersion is very rare for these datasets.

  20. Maximum Likelihood Estimator for Bearings-only Passive Target Tracking in Electronic Surveillance Measure and Electronic Warfare Systems

    Directory of Open Access Journals (Sweden)

    S. Koteswara Rao

    2010-03-01

    The maximum likelihood estimator is a suitable algorithm for passive target tracking applications. Nardone, Lindgren and Gong introduced this approach using batch processing. In this paper, the batch processing is converted into sequential processing for real-time applications like passive target tracking using bearings-only measurements. Adaptively, the variance of each measurement is computed and is used along with the measurement in such a way that the effect of false bearings can be reduced. The transmissions made by radar on a target ship are assumed to be intercepted by an electronic warfare (EW) system of own ship. The generated bearings in intercept mode are processed through the maximum likelihood estimator (MLE) to find out target motion parameters. Instead of assuming some arbitrary values, pseudo linear estimator outputs are used for the initialisation of MLE. The algorithm is tested in Monte-Carlo simulation and its results are presented for two typical scenarios. Defence Science Journal, 2010, 60(2), pp. 197-203, DOI: http://dx.doi.org/10.14429/dsj.60.340

  1. TopREML: a topological restricted maximum likelihood approach to regionalize trended runoff signatures in stream networks

    Science.gov (United States)

    Müller, M. F.; Thompson, S. E.

    2015-06-01

    We introduce topological restricted maximum likelihood (TopREML) as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross-validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. The ability of TopREML to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable via remote-sensing technology.

  2. A flexible decision-aided maximum likelihood phase estimation in hybrid QPSK/OOK coherent optical WDM systems

    Science.gov (United States)

    Zhang, Yong; Wang, Yulong

    2016-04-01

    Although the decision-aided (DA) maximum likelihood (ML) phase estimation (PE) algorithm has been investigated intensively, the block-length effect impacts system performance and increases hardware complexity. In this paper, a flexible DA-ML algorithm is proposed for hybrid QPSK/OOK coherent optical wavelength division multiplexed (WDM) systems. We present a general cross-phase modulation (XPM) model based on the Volterra series transfer function (VSTF) method to describe XPM effects induced by OOK channels at the end of dispersion management (DM) fiber links. Based on our model, weighting factors obtained from the maximum likelihood method are introduced to eliminate the block-length effect. We derive the analytical expression of the phase error variance for the performance prediction of a coherent receiver with the flexible DA-ML algorithm. Bit error ratio (BER) performance is evaluated and compared through both theoretical derivation and Monte Carlo (MC) simulation. The results show that our flexible DA-ML algorithm significantly improves performance compared with the conventional DA-ML algorithm with a fixed block length. Compared with the conventional DA-ML with optimum block length, our flexible DA-ML obtains better system performance, meaning it is more effective at mitigating phase noise than the conventional DA-ML algorithm.
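
    To make the DA-ML idea concrete, here is a hedged sketch of the conventional block-based estimator it builds on: within each block, tentative QPSK decisions are wiped off the received symbols and the argument of the summed result gives the block's phase estimate. The signal, noise level and block length are illustrative, and the paper's weighting factors and XPM model are not reproduced.

```python
# Hedged sketch of the conventional block-based DA-ML phase estimator that the
# flexible algorithm builds on.  Signal, noise level and block length are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 4000
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))  # QPSK symbols
phase = 0.2                                        # carrier phase offset to estimate
noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
r = symbols * np.exp(1j * phase) + noise

block = 20
estimates = []
for b in range(0, n, block):
    rb = r[b:b + block]
    # snap each received symbol to the nearest QPSK point (tentative decisions)
    dec = np.exp(1j * (np.floor(np.angle(rb) / (np.pi / 2)) * (np.pi / 2) + np.pi / 4))
    estimates.append(np.angle(np.sum(rb * np.conj(dec))))  # DA-ML estimate per block
```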

  3. Resolution and signal-to-noise ratio improvement in confocal fluorescence microscopy using array detection and maximum-likelihood processing

    Science.gov (United States)

    Kakade, Rohan; Walker, John G.; Phillips, Andrew J.

    2016-08-01

    Confocal fluorescence microscopy (CFM) is widely used in biological sciences because of its enhanced 3D resolution that allows image sectioning and removal of out-of-focus blur. This is achieved by rejection of the light outside a detection pinhole in a plane confocal with the illuminated object. In this paper, an alternative detection arrangement is examined in which the entire detection/image plane is recorded using an array detector rather than a pinhole detector. Using this recorded data an attempt is then made to recover the object from the whole set of recorded photon array data; in this paper maximum-likelihood estimation has been applied. The recovered object estimates are shown (through computer simulation) to have good resolution, image sectioning and signal-to-noise ratio compared with conventional pinhole CFM images.

  4. A hardware architecture using finite-field arithmetic for computing maximum-likelihood estimates in emission tomography

    International Nuclear Information System (INIS)

    A special-purpose hardware architecture is proposed to implement the expectation-maximization algorithm to compute, in clinically useful times, the maximum-likelihood estimate of a radionuclide distribution for a positron-emission tomogram having time-of-flight measurements. Two-dimensional convolutions required for forming the estimate are converted into a series of one-dimensional convolutions which can be evaluated in parallel. Each one-dimensional convolution is evaluated using a number-theoretic transform. All numerical calculations are performed using finite-field arithmetic. In order to avoid the use of large finite fields and to increase parallelism, each convolution is performed by a series of convolutions with small digits in a Galois field
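
    The finite-field idea can be illustrated in a few lines of Python (a naive O(N^2) transform, not the proposed hardware): with a prime modulus and a root of unity of the right order, cyclic convolution can be carried out exactly in GF(p), which is the property the architecture exploits in place of floating-point arithmetic. The modulus, root and sequences below are illustrative choices.

```python
# Hedged illustration: exact cyclic convolution in GF(257) via a naive
# number-theoretic transform.  omega = 249 = 3**16 mod 257 has order 16, and as
# long as the true integer results stay below p the answers are exact.
p, N, omega = 257, 16, 249
omega_inv = pow(omega, p - 2, p)                  # inverses via Fermat's little theorem
n_inv = pow(N, p - 2, p)

def ntt(seq, w):
    return [sum(seq[n] * pow(w, k * n, p) for n in range(N)) % p for k in range(N)]

def cyclic_conv_ntt(a, b):
    A, B = ntt(a, omega), ntt(b, omega)
    C = [(u * v) % p for u, v in zip(A, B)]
    return [(c * n_inv) % p for c in ntt(C, omega_inv)]

a = [1, 2, 3, 0, 1] + [0] * 11
b = [2, 1, 0, 4] + [0] * 12
direct = [sum(a[j] * b[(k - j) % N] for j in range(N)) % p for k in range(N)]
assert cyclic_conv_ntt(a, b) == direct            # identical results, all in GF(257)
```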

  5. Cross validation and maximum likelihood estimations of hyper-parameters of Gaussian processes with model mis-specification

    International Nuclear Information System (INIS)

    The Maximum Likelihood (ML) and Cross Validation (CV) methods for estimating covariance hyper-parameters are compared, in the context of Kriging with a mis-specified covariance structure. A two-step approach is used. First, the case of the estimation of a single variance hyper-parameter is addressed, for which the fixed correlation function is mis-specified. A predictive variance based quality criterion is introduced and a closed-form expression of this criterion is derived. It is shown that when the correlation function is mis-specified, the CV does better compared to ML, while ML is optimal when the model is well-specified. In the second step, the results of the first step are extended to the case when the hyper-parameters of the correlation function are also estimated from data. (author)
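
    As a toy version of the comparison (not the paper's criteria or two-step derivation), the sketch below fixes a possibly mis-specified correlation matrix and estimates a single variance hyper-parameter two ways: the closed-form ML estimate and a brute-force leave-one-out predictive score minimized over a grid. Data, correlation models and grid are illustrative.

```python
# Toy ML-vs-CV comparison for a single GP variance hyper-parameter with a fixed,
# possibly mis-specified correlation matrix.  All choices below are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 40))

def corr(d, ell):
    return np.exp(-np.abs(d) / ell)               # exponential correlation model

R = corr(x[:, None] - x[None, :], ell=1.0)        # assumed (mis-specified) correlation
y = rng.multivariate_normal(np.zeros(x.size), 2.5 * corr(x[:, None] - x[None, :], ell=0.5))

sigma2_ml = y @ np.linalg.inv(R) @ y / y.size     # closed-form ML variance estimate

def neg_loo_score(sigma2):
    score = 0.0
    for i in range(y.size):
        keep = np.arange(y.size) != i
        Rinv = np.linalg.inv(R[np.ix_(keep, keep)])
        k = R[keep, i]
        mean_i = k @ Rinv @ y[keep]               # kriging predictor of y[i]
        var_i = sigma2 * (1.0 - k @ Rinv @ k)     # its predictive variance
        score -= norm.logpdf(y[i], mean_i, np.sqrt(var_i))
    return score

grid = np.linspace(0.5, 6.0, 56)
sigma2_cv = grid[np.argmin([neg_loo_score(s) for s in grid])]
```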

  6. MAXIMUM LIKELIHOOD SOURCE SEPARATION FOR FINITE IMPULSE RESPONSE MULTIPLE INPUT—MULTIPLE OUTPUT CHANNELS IN THE PRESENCE OF ADDITIVE NOISE

    Institute of Scientific and Technical Information of China (English)

    Aazi Takpaya; Wei Gang

    2003-01-01

    Blind identification-blind equalization for Finite Impulse Response (FIR) Multiple Input-Multiple Output (MIMO) channels can be reformulated as the problem of blind source separation. It has been shown that blind identification via the decorrelating sub-channels method can recover the input sources. The Blind Identification via Decorrelating Sub-channels (BIDS) algorithm first constructs a set of decorrelators, which decorrelate the output signals of the sub-channels, then estimates the channel matrix using the transfer functions of the decorrelators, and finally recovers the input signal using the estimated channel matrix. In this paper, a new approximation of the input source for FIR-MIMO channels based on the maximum likelihood source separation method is proposed. The proposed method outperforms BIDS in the presence of additive white Gaussian noise.

  7. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    International Nuclear Information System (INIS)

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much as possible information of the original biomolecular system in all-atom representation but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field

  8. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    Energy Technology Data Exchange (ETDEWEB)

    He, Yi; Scheraga, Harold A., E-mail: has5@cornell.edu [Department of Chemistry and Chemical Biology, Cornell University, Ithaca, New York 14853 (United States); Liwo, Adam [Faculty of Chemistry, University of Gdańsk, Wita Stwosza 63, 80-308 Gdańsk (Poland)

    2015-12-28

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much as possible information of the original biomolecular system in all-atom representation but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  9. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    Science.gov (United States)

    He, Yi; Liwo, Adam; Scheraga, Harold A.

    2015-12-01

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much as possible information of the original biomolecular system in all-atom representation but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  10. WOMBAT——A tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML)

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html

  11. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillón-Candás, Julio E.

    2015-11-10

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that is dependent on the smoothness of the covariance function. Due to the fast decay of the multi-level covariance matrix coefficients, only a small set is computed with a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.

  12. Deterministic approximation for the cover time of trees

    CERN Document Server

    Feige, Uriel

    2009-01-01

    We present a deterministic algorithm that given a tree T with n vertices, a starting vertex v and a slackness parameter epsilon > 0, estimates within an additive error of epsilon the cover and return time, namely, the expected time it takes a simple random walk that starts at v to visit all vertices of T and return to v. The running time of our algorithm is polynomial in n/epsilon, and hence remains polynomial in n also for epsilon = 1/n^{O(1)}. We also show how the algorithm can be extended to estimate the expected cover (without return) time on trees.
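
    For orientation, the quantity being approximated can be defined by simulation; the short sketch below estimates the cover-and-return time of a simple random walk on a small illustrative tree by Monte Carlo. The point of the record above is that the same quantity can be approximated deterministically in polynomial time, which this sketch does not attempt.

```python
# Hedged illustration only: Monte Carlo estimate of the cover-and-return time of a
# simple random walk on a small tree.  The tree and trial count are illustrative.
import random

adj = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}  # a small illustrative tree
start = 0

def cover_and_return_time():
    pos, unvisited, steps = start, set(adj) - {start}, 0
    while unvisited or pos != start:
        pos = random.choice(adj[pos])
        steps += 1
        unvisited.discard(pos)
    return steps

trials = 20000
print(sum(cover_and_return_time() for _ in range(trials)) / trials)
```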

  13. The statistical analysis of dilution series by maximum likelihood: an application to in vitro bioassays estimating the potency of the diphteria component in vaccines by serology

    OpenAIRE

    Slob W; Hendriksen CFM

    1989-01-01

    This report discusses the analysis of dilution series by maximum likelihood, applied to the in vitro serological testing of the potency of bacterial vaccines for human use. Computer simulations show that the maximum likelihood method is adequate for the sample sizes commonly used in potency testing. The relationship between the antitoxin response and vaccine dilution is well described by a straight line on a double log scale within the usual...

  14. An approximate version of the Tree Packing Conjecture

    Czech Academy of Sciences Publication Activity Database

    Böttcher, J.; Hladký, Jan; Piguet, Diana; Taraz, A.

    2016-01-01

    Vol. 211, No. 1 (2016), pp. 391-446. ISSN 0021-2172. Institutional support: RVO:67985840; RVO:67985807. Keywords: Ringel's conjecture * Gyarfas-Lehel conjecture * Tree packing. Subject RIV: BA - General Mathematics. Impact factor: 0.787, year: 2014. http://link.springer.com/article/10.1007%2Fs11856-015-1277-2

  15. Tree expectation propagation for ml decoding of LDPC codes over the BEC

    OpenAIRE

    Salamanca Mino, Luis; Olmos, P. M.; Murillo-Fuentes, J. J.; Perez-Cruz, F

    2013-01-01

    We propose a decoding algorithm for LDPC codes that achieves the maximum likelihood (ML) solution over the binary erasure channel (BEC). In this channel, the tree-structured expectation propagation (TEP) decoder improves the peeling decoder (PD) by processing check nodes of degree one and two. However, it does not achieve the ML solution, as the tree structure of the TEP allows only for approximate inference. In this paper, we provide the procedure to construct the structure needed for exac...
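
    For context, the peeling decoder that TEP improves upon is easy to sketch: over the BEC, any parity check with exactly one erased participating bit determines that bit as the XOR of the known ones. The snippet below runs this on a toy parity-check matrix and erasure pattern (not a real LDPC code, and neither TEP nor ML decoding is attempted).

```python
# Hedged sketch of the peeling decoder (PD) over the BEC.  The parity-check matrix
# and erasure pattern are toy values, not a real LDPC code.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
codeword = np.array([1, 0, 1, 1, 1, 0])           # satisfies H @ c = 0 (mod 2)
received = codeword.astype(float)
received[[1, 3]] = np.nan                          # erased positions

def peel(H, r):
    r = r.copy()
    progress = True
    while progress and np.isnan(r).any():
        progress = False
        for row in H:
            erased = np.where(row.astype(bool) & np.isnan(r))[0]
            if len(erased) == 1:                   # check with a single unknown: solvable
                known = row.astype(bool) & ~np.isnan(r)
                r[erased[0]] = np.sum(r[known]) % 2
                progress = True
    return r

print(peel(H, received))                           # erasures recovered -> original codeword
```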

  16. Approximating Probability Densities by Mixtures of Gaussian Dependence Trees

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří

    Praha: ČVUT, 2014. ISBN 978-80-01-05616-5. [Stochastic and Physical Monitoring Systems SPMS 2014. Malá Skála (CZ), 23.06.2014-28.06.2014] R&D Projects: GA ČR(CZ) GA14-02652S; GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : Multivariate statistics * Mixtures of dependence trees * EM algorithm * Pattern recognition * Medical image analysis Subject RIV: IN - Informatics, Computer Science http://library.utia.cas.cz/separaty/2014/RO/grim-0435901.pdf

  17. The Complexity of Computing Graph-Approximating Spanning Trees

    OpenAIRE

    Matthias Baumgart; Hanjo Täubig

    2012-01-01

    This paper deals with the problem of computing a spanning tree of a connected undirected graph G=(V,E) minimizing the sum of distance differences of all vertex pairs u,v ∈ V which are connected by an edge {u,v} ∈ E. We show that the decision variant of this optimization problem is NP-complete with respect to the L_p norm for arbitrary p ∈ N. For the reduction, we use the well known NP-complete problem Vertex Cover.

  18. Use of Maximum Likelihood-Mixed Models to select stable reference genes: a case of heat stress response in sheep

    Directory of Open Access Journals (Sweden)

    Salces Judit

    2011-08-01

    Background: Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software packages have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software package, uses a model-based method. The pairwise comparison procedure implemented in geNorm is a simpler procedure but one of the most extensively used. In the present work a statistical approach based on Maximum Likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm software. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results: A model including gene and treatment as fixed effects, and sample (animal), gene by treatment, gene by sample and treatment by sample interactions as random effects, with heteroskedastic residual variance across gene by treatment levels, was selected using goodness of fit and predictive ability criteria among a variety of models. The Mean Square Error obtained under the selected model was used as an indicator of gene expression stability. Genes top and bottom ranked by the three approaches were similar; however, notable differences were shown for the best pair of genes selected by each method and for the remaining genes of the rankings. Differences among the expression values of normalized targets for each statistical approach were also found. Conclusions: Optimal statistical properties of Maximum Likelihood estimation joined to mixed model flexibility allow for more accurate estimation of the expression stability of genes under many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental

  19. Decision-aided maximum likelihood phase estimation with optimum block length in hybrid QPSK/16QAM coherent optical WDM systems

    Science.gov (United States)

    Zhang, Yong; Wang, Yulong

    2016-01-01

    We propose a general model to entirely describe XPM effects induced by 16QAM channels in hybrid QPSK/16QAM wavelength division multiplexed (WDM) systems. A power spectral density (PSD) formula is presented to predict the statistical properties of XPM effects at the end of dispersion management (DM) fiber links. We derive the analytical expression of phase error variance for optimizing block length of QPSK channel coherent receiver with decision-aided (DA) maximum-likelihood (ML) phase estimation (PE). With our theoretical analysis, the optimum block length can be employed to improve the performance of coherent receiver. Bit error rate (BER) performance in QPSK channel is evaluated and compared through both theoretical derivation and Monte Carlo simulation. The results show that by using the DA-ML with optimum block length, bit signal-to-noise ratio (SNR) improvement over DA-ML with fixed block length of 10, 20 and 40 at BER of 10^-3 is 0.18 dB, 0.46 dB and 0.65 dB, respectively, when in-line residual dispersion is 0 ps/nm.

  20. Separating components of variation in measurement series using maximum likelihood estimation. Application to patient position data in radiotherapy

    International Nuclear Information System (INIS)

    Maximum likelihood estimation (MLE) is presented as a statistical tool to evaluate the contribution of measurement error to any measurement series where the same quantity is measured using different independent methods. The technique was tested against artificial data sets generated for values of underlying variation in the quantity and measurement error between 0.5 mm and 3 mm. In each case the simulation parameters were determined within 0.1 mm. The technique was applied to analyzing external random positioning errors from positional audit data for 112 pelvic radiotherapy patients. Patient position offsets were measured using portal imaging analysis and external body surface measures. Using MLE to analyze all methods in parallel, it was possible to ascertain the measurement error for each method and the underlying positional variation. In the (AP / Lat / SI) directions the standard deviations of the measured patient position errors from portal imaging were (3.3 mm / 2.3 mm / 1.9 mm), arising from underlying variations of (2.7 mm / 1.5 mm / 1.4 mm) and measurement uncertainties of (1.8 mm / 1.8 mm / 1.3 mm), respectively. The measurement errors agree well with published studies. MLE used in this manner could be applied to any study in which the same quantity is measured using independent methods. (paper)
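
    A minimal sketch of the underlying idea, under simplifying assumptions (two independent methods and zero-mean offsets; the study handles more methods and directions): the covariance of the paired measurements identifies the underlying variance, and ML recovers all three standard deviations by fitting the implied bivariate normal. All numbers below are illustrative.

```python
# Hedged sketch: separating underlying variation from two methods' measurement
# errors by ML on the implied bivariate normal.  Numbers are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(7)
n = 112
t = rng.normal(0, 2.7, n)                         # underlying positional variation (mm)
a = t + rng.normal(0, 1.8, n)                     # e.g. portal-imaging measurements
b = t + rng.normal(0, 1.3, n)                     # e.g. body-surface measurements
obs = np.column_stack([a, b])

def neg_loglik(log_sd):
    st, sa, sb = np.exp(log_sd) ** 2              # variances of truth and the two errors
    cov = np.array([[st + sa, st], [st, st + sb]])
    return -multivariate_normal(mean=[0.0, 0.0], cov=cov).logpdf(obs).sum()

res = minimize(neg_loglik, x0=np.log([1.0, 1.0, 1.0]), method="Nelder-Mead")
sd_truth, sd_err_a, sd_err_b = np.exp(res.x)
```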

  1. Photopeak shape function: Formulation based on stochastic event analysis and parameter estimation by the maximum-likelihood estimation method

    International Nuclear Information System (INIS)

    A theoretical model to describe the photopeak shape function has been developed by introducing an instrument function, which is a convolution of the statistical fluctuation of the charge carriers and the stochastic process of escape of the charge-carrier collection by capture at trapping centers. The photopeak shape function is a convolution of the instrument function and a Poisson probability-density functional representation of a reduced random summing event. The functions have been tested by using three coaxial, high-purity Ge detectors of a conventional type. The parameters were estimated by the maximum-likelihood estimation method. The position indicating the incident photon energy appeared at the centroid of the intrinsic normal distribution. The most probable peak-height position is no more than a ''conventional'' one, though it is commonly used in spectroscopy. The theory predicts the photopeak shape of many photons by folding an input function of the subject. The theory provides standards for the detector and the detection system. (orig.)

  2. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m, however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

  3. Small signal with background: objective confidence intervals and regions for physical parameters from the principle of maximum likelihood

    International Nuclear Information System (INIS)

    It is argued that the choice between one-sided and two-sided confidence intervals must be made according to a rule prior to and independent of the data, and it is shown that such a rule was found in principle by a statistician about half a century ago. The novel problem with unphysical estimates of a parameter in the presence of background is solved in the realm of classical statistics by applying this rule and the principle of maximum likelihood. Optimal confidence intervals are given for the measurement of a bounded magnitude with normal errors, most effective in discriminating a signal next to the bound, and it is shown how to get them in any single case for a bounded discrete variable with background, in general and specifically for Poisson and binomial variables, with two examples of application. The upper limit provided by this method, when the data are consistent with no signal, does not decrease with unphysical estimates going far off the physical values, so removing the last claimed support of Bayesian inference in physics. Procedures are given extending the method to several parameters.

  4. MLE [Maximum Likelihood Estimator] reconstruction of a brain phantom using a Monte Carlo transition matrix and a statistical stopping rule

    International Nuclear Information System (INIS)

    In order to study properties of the Maximum Likelihood Estimator (MLE) algorithm for image reconstruction in Positron Emission Tomography (PET), the algorithm is applied to data obtained by the ECAT-III tomograph from a brain phantom. The procedure for subtracting accidental coincidences from the data stream generated by this physical phantom is such that the resultant data are not Poisson distributed. This makes the present investigation different from other investigations based on computer-simulated phantoms. It is shown that the MLE algorithm is robust enough to yield comparatively good images, especially when the phantom is in the periphery of the field of view, even though the underlying assumption of the algorithm is violated. Two transition matrices are utilized. The first uses geometric considerations only. The second is derived by a Monte Carlo simulation which takes into account Compton scattering in the detectors, positron range, etc. It is demonstrated that the images obtained from the Monte Carlo matrix are superior in some specific ways. A stopping rule derived earlier and allowing the user to stop the iterative process before the images begin to deteriorate is tested. Since the rule is based on the Poisson assumption, it does not work well with the presently available data, although it is successful with computer-simulated Poisson data.

  5. Use of a Bayesian maximum-likelihood classifier to generate training data for brain-machine interfaces

    Science.gov (United States)

    Ludwig, Kip A.; Miriani, Rachel M.; Langhals, Nicholas B.; Marzullo, Timothy C.; Kipke, Daryl R.

    2011-08-01

    Brain-machine interface decoding algorithms need to be predicated on assumptions that are easily met outside of an experimental setting to enable a practical clinical device. Given present technological limitations, there is a need for decoding algorithms which (a) are not dependent upon a large number of neurons for control, (b) are adaptable to alternative sources of neuronal input such as local field potentials (LFPs), and (c) require only marginal training data for daily calibrations. Moreover, practical algorithms must recognize when the user is not intending to generate a control output and eliminate poor training data. In this paper, we introduce and evaluate a Bayesian maximum-likelihood estimation strategy to address the issues of isolating quality training data and self-paced control. Six animal subjects demonstrate that a multiple state classification task, loosely based on the standard center-out task, can be accomplished with fewer than five engaged neurons while requiring less than ten trials for algorithm training. In addition, untrained animals quickly obtained accurate device control, utilizing LFPs as well as neurons in cingulate cortex, two non-traditional neural inputs.

  6. A topological restricted maximum likelihood (TopREML) approach to regionalize trended runoff signatures in stream networks

    Directory of Open Access Journals (Sweden)

    M. F. Müller

    2015-01-01

    We introduce TopREML as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. TopREML's ability to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable thanks to remote sensing technology.

  7. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Science.gov (United States)

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267

  8. BER and optimal power allocation for amplify-and-forward relaying using pilot-aided maximum likelihood estimation

    KAUST Repository

    Wang, Kezhi

    2014-10-01

    Bit error rate (BER) and outage probability for amplify-and-forward (AF) relaying systems with two different channel estimation methods, disintegrated channel estimation and cascaded channel estimation, using pilot-aided maximum likelihood method in slowly fading Rayleigh channels are derived. Based on the BERs, the optimal values of pilot power under the total transmitting power constraints at the source and the optimal values of pilot power under the total transmitting power constraints at the relay are obtained, separately. Moreover, the optimal power allocation between the pilot power at the source, the pilot power at the relay, the data power at the source and the data power at the relay are obtained when their total transmitting power is fixed. Numerical results show that the derived BER expressions match with the simulation results. They also show that the proposed systems with optimal power allocation outperform the conventional systems without power allocation under the same other conditions. In some cases, the gain could be as large as several dB's in effective signal-to-noise ratio.

  9. A topological restricted maximum likelihood (TopREML) approach to regionalize trended runoff signatures in stream networks

    Science.gov (United States)

    Müller, M. F.; Thompson, S. E.

    2015-01-01

    We introduce TopREML as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. TopREML's ability to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable thanks to remote sensing technology.

  10. Using trees to compute approximate solutions to ordinary differential equations exactly

    Science.gov (United States)

    Grossman, Robert

    1991-01-01

    Some recent work is reviewed which relates families of trees to symbolic algorithms for the exact computation of series which approximate solutions of ordinary differential equations. It turns out that the vector space whose basis is the set of finite, rooted trees carries a natural multiplication related to the composition of differential operators, making the space of trees an algebra. This algebraic structure can be exploited to yield a variety of algorithms for manipulating vector fields and the series and algebras they generate.

  11. A Factor 3/2 Approximation for Generalized Steiner Tree Problem with Distances One and Two

    CERN Document Server

    Berman, Piotr; Zelikovsky, Alex

    2008-01-01

    We design a 3/2 approximation algorithm for the Generalized Steiner Tree problem (GST) in metrics with distances 1 and 2. This is the first polynomial time approximation algorithm for a wide class of non-geometric metric GST instances with approximation factor below 2.

  12. Experimental demonstration of a digital maximum likelihood based feedforward carrier recovery scheme for phase-modulated radio-over-fibre links

    DEFF Research Database (Denmark)

    Guerrero Gonzalez, Neil; Zibar, Darko; Yu, Xianbin;

    2008-01-01

    Maximum likelihood based feedforward RF carrier synchronization scheme is proposed for a coherently detected phase-modulated radio-over-fiber link. Error-free demodulation of 100 Mbit/s QPSK modulated signal is experimentally demonstrated after 25 km of fiber transmission.

  13. Maximum-likelihood estimates of the frequency and other parameters of signals of laser Doppler measuring systems operating in the one-particle-scattering mode

    International Nuclear Information System (INIS)

    Maximum-likelihood equations are presented for estimates of the Doppler frequency (speed) and other unknown parameters of signals of laser Doppler anemometers and lidars operating in the one-particle-scattering mode. Shot noise was assumed to be the main interfering factor of the problem. The error correlation matrix was calculated and the Rao-Cramer bounds were determined. The results are confirmed by the computer simulation of the Doppler signal and the numerical solution of the maximum-likelihood equations for the Doppler frequency. The obtained estimate is unbiased, and its dispersion coincides with the Rao-Cramer bound. (laser applications and other topics in quantum electronics)

  14. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Directory of Open Access Journals (Sweden)

    Kaarina Matilainen

    Full Text Available Estimation of variance components by Monte Carlo (MC expectation maximization (EM restricted maximum likelihood (REML is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR, where the information matrix was generated via sampling; MC average information(AI, where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  15. Evaluating the maximum likelihood method for detecting short-term variability of AGILE γ-ray sources

    Science.gov (United States)

    Bulgarelli, A.; Chen, A. W.; Tavani, M.; Gianotti, F.; Trifoglio, M.; Contessi, T.

    2012-04-01

    Context. The AGILE space mission (whose instrument is sensitive to the energy ranges 18-60 keV, and 30 MeV-50 GeV) has been operating since 2007. Assessing the statistical significance of the time variability of γ-ray sources above 100 MeV is a primary task of the AGILE data analysis. In particular, it is important to verify the instrument sensitivity in terms of Poisson modeling of the data background, and to determine the post-trial confidence of detections. Aims: The goals of this work are: (i) to evaluate the distributions of the likelihood ratio test for both "empty" fields and regions of the Galactic plane, and (ii) to calculate the probability of false detections over multiple time intervals. Methods: We describe in detail the techniques used to search for short-term variability in the AGILE γ-ray source database. We describe the binned maximum likelihood method used for the analysis of AGILE data, and the numerical simulations that support the characterization of the statistical analysis. We apply our method to both Galactic and extragalactic transients, and provide a few examples. Results: After checking the reliability of the statistical description tested with the real AGILE data, we obtain the distribution of p-values for blind and specific source searches. We apply our results to the determination of the post-trial statistical significance of detections of transient γ-ray sources in terms of pre-trial values. Conclusions: The results of our analysis allow a precise determination of the post-trial significance of γ-ray sources detected by AGILE.

  16. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast

    Science.gov (United States)

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M.

    2016-04-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.

  17. Trends in morphological evolution in homobasidiomycetes inferred using maximum likelihood: a comparison of binary and multistate approaches.

    Science.gov (United States)

    Hibbett, David

    2004-12-01

    The homobasidiomycetes is a diverse group of macrofungi that includes mushrooms, puffballs, coral fungi, and other forms. This study used maximum likelihood methods to determine if there are general trends (evolutionary tendencies) in the evolution of fruiting body forms in homobasidiomycetes, and to estimate the ancestral forms of the homobasidiomycetes and euagarics clade. Character evolution was modeled using a published 481-species phylogeny under two character-coding regimes: additive binary coding, using DISCRETE, and multistate (five-state) coding, using MULTISTATE. Inferences regarding trends in character evolution made under binary coding were often in conflict with those made under multistate coding, suggesting that the additive binary coding approach cannot serve as a surrogate for multistate methods. MULTISTATE was used to develop a "minimal" model of fruiting body evolution, in which the 20 parameters that specify rates of transformations among character states were grouped into the fewest possible rate categories. The minimal model required only four rate categories, one of which is approaching zero, and suggests the following conclusions regarding trends in evolution of homobasidiomycete fruiting bodies: (1) there is an active trend favoring the evolution of pileate-stipitate forms (those with a cap and stalk); (2) the hypothesis that the evolution of gasteroid forms (those with internal spore production, such as puffballs) is irreversible cannot be rejected; and (3) crustlike resupinate forms are not a particularly labile morphology. The latter finding contradicts the conclusions of a previous study that used binary character coding. Ancestral state reconstructions under binary coding suggest that the ancestor of the homobasidiomycetes was resupinate and the ancestor of the euagarics clade was pileate-stipitate, but ancestral state reconstructions under multistate coding did not resolve the ancestral form of either node. The results of this study

  18. Evaluation of median filtering after reconstruction with maximum likelihood expectation maximization (ML-EM) by real space and frequency space

    Energy Technology Data Exchange (ETDEWEB)

    Matsumoto, Keiichi; Fujita, Toru; Oogari, Koji [Kyoto Univ. (Japan). Hospital

    2002-05-01

    Maximum likelihood expectation maximization (ML-EM) image quality is sensitive to the number of iterations, because a large number of iterations leads to images with checkerboard noise. The use of median filtering in the reconstruction process allows both noise reduction and edge preservation. We examined the value of median filtering after reconstruction with ML-EM by comparing it with filtered back projection (FBP) with a ramp filter and with ML-EM without filtering. SPECT images were obtained with a dual-head gamma camera. The acquisition time was changed from 10 to 200 seconds per frame to examine the effect of the count statistics on the quality of the reconstructed images. First, images were reconstructed with ML-EM by changing the number of iterations from 1 to 150 in each study. Additionally, median filtering was applied following reconstruction with ML-EM. The quality of the reconstructed images was evaluated in terms of normalized mean square error (NMSE) values and two-dimensional power spectrum analysis. Median filtering after reconstruction by the ML-EM method provided stable NMSE values even when the number of iterations was increased. The signal component of the image remained close to the reference image regardless of the number of iterations. Median filtering after reconstruction with ML-EM was useful in reducing noise, with a resolution similar to that achieved by reconstruction with FBP and a ramp filter. Especially in images with poor count statistics, median filtering after reconstruction with ML-EM is effective as a simple, widely available method. (author)
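
    To make the reconstruction-then-filter workflow concrete, the toy sketch below applies a plain ML-EM iteration to a Poisson forward model y ≈ A·x and then median-filters the result; the system matrix, kernel size, and image grid are illustrative assumptions, not the clinical SPECT setup used in the study.

```python
import numpy as np
from scipy.ndimage import median_filter

def mlem(A, y, n_iter=50):
    """Plain ML-EM for Poisson data with forward model y ~ Poisson(A @ x)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])             # sensitivity (back-projection of ones)
    for _ in range(n_iter):
        ratio = y / np.clip(A @ x, 1e-12, None)  # measured / estimated projections
        x *= (A.T @ ratio) / np.clip(sens, 1e-12, None)
    return x

# Median filtering applied after reconstruction, as evaluated in the paper;
# a 3x3 kernel and an ny-by-nx image grid are assumed here for illustration.
# img = median_filter(mlem(A, y).reshape(ny, nx), size=3)
```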

  19. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    Science.gov (United States)

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-01

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5 × 0.5 cm2). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect. PMID:26758232

  20. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  1. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm

    Science.gov (United States)

    Papaconstadopoulos, P.; Levesque, I. R.; Maglieri, R.; Seuntjens, J.

    2016-02-01

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5× 0.5 cm2). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect.

  2. Estimation of intake by maximum likelihood method using follow-up measurements of 131I thyroidal burden

    International Nuclear Information System (INIS)

    131I is a short-lived radionuclide (T1/2 = 8.04 d) which decays by beta emission, producing significant yields of photons of energies 0.364 MeV (82%) and 0.637 MeV (7%). In the present investigation, follow-up measurements were made by an in-vivo monitoring technique for an individual who was internally contaminated with 131I due to an injection. The measurements were carried out on seven different occasions to obtain the retention profile of thyroid activity. The intake was estimated using the maximum likelihood method (MLM), assuming the measurement data to be log-normally distributed (LND). The CF of WBM estimated from ANIP was 453.48 cpm/kBq at a distance of 20 cm. The intake computed from MLM was 89.3 kBq for the injection route and the committed effective dose was 1.96 mSv. The autocorrelation and chi-square tests were performed with a scattering factor value of 1.2 as suggested by ICRP. The z value of the autocorrelation test was 0.1304, which corresponds to a P value of 0.449. The chi-square value obtained was 1.93, which corresponds to a P value of 0.85, more than the chosen level of significance (0.05), implying adequacy of fit. This indicates that the measurement data follow a LND. The scattering factor of the log-normal distribution can be estimated from follow-up measurements in such accidental scenarios. The paper presents the follow-up monitoring data of the thyroidal burden of an individual and gives a methodology for estimating the normalized intake/CED using MLM, tested by statistical parameters. The measured thyroidal retention data on the different days following the intake closely fit the ICRP-predicted retained activity
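
    The log-normal maximum likelihood estimate described here reduces, for a common scattering factor, to a geometric-mean-type estimator. The sketch below illustrates that form; the measurement values and ICRP retention fractions are placeholders, and unequal scattering factors would simply weight each term by 1/ln(SF_i)^2.

```python
import numpy as np

def ml_intake_lognormal(measured_kBq, predicted_per_unit_intake, sf=1.2):
    """ML intake for log-normally distributed bioassay data.

    measured_kBq: thyroid burdens on the follow-up days
    predicted_per_unit_intake: ICRP retention fractions m(t_i) per unit intake
    sf: scattering factor (1.2 assumed, as in the abstract)
    With a common sf the weights cancel and the estimate is the geometric
    mean of the ratios M_i / m(t_i).
    """
    m = np.asarray(measured_kBq, dtype=float)
    r = np.asarray(predicted_per_unit_intake, dtype=float)
    w = np.full_like(m, 1.0 / np.log(sf) ** 2)   # per-measurement weights
    return np.exp(np.sum(w * np.log(m / r)) / np.sum(w))
```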

  3. Improving soil moisture profile prediction from ground-penetrating radar data: a maximum likelihood ensemble filter approach

    Science.gov (United States)

    Tran, A. P.; Vanclooster, M.; Lambot, S.

    2013-02-01

    The vertical profile of root zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR) data. A hydrodynamic model is used to propagate the system state in time and a radar electromagnetic model to link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows to use the information of the whole soil moisture profile for the assimilation. We validated our approach by a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of the soil type on the data assimilation by considering 3 soil types, namely, loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Increasing update interval from 5 to 50 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand but significantly for the clay soil. The proposed approach appears to be promising to improve real-time prediction of the soil moisture profiles as well as to provide effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.
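
    The closed-loop assimilation step can be pictured with a generic ensemble analysis update such as the stochastic EnKF sketch below; it is only a stand-in for the maximum likelihood ensemble filter actually used, and the observation operator h (the radar electromagnetic model), the ensemble, and the error covariance R are all assumed inputs.

```python
import numpy as np

def ensemble_update(X, y_obs, h, R, rng=None):
    """One ensemble analysis step for soil moisture profiles.

    X: (n_state, n_ens) forecast ensemble of soil moisture profiles
    y_obs: observed GPR data, h: observation operator (GPR forward model)
    R: observation-error covariance matrix
    """
    rng = rng or np.random.default_rng()
    n_ens = X.shape[1]
    Y = np.column_stack([h(X[:, j]) for j in range(n_ens)])       # predicted GPR data
    Xp = X - X.mean(axis=1, keepdims=True)
    Yp = Y - Y.mean(axis=1, keepdims=True)
    K = (Xp @ Yp.T) @ np.linalg.inv(Yp @ Yp.T + (n_ens - 1) * R)  # Kalman-type gain
    obs = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(y_obs.size), R, size=n_ens).T                    # perturbed observations
    return X + K @ (obs - Y)
```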

  4. Improving soil moisture profile reconstruction from ground-penetrating radar data: a maximum likelihood ensemble filter approach

    Science.gov (United States)

    Tran, A. P.; Vanclooster, M.; Lambot, S.

    2013-07-01

    The vertical profile of shallow unsaturated zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR) data. A hydrodynamic model is used to propagate the system state in time and a radar electromagnetic model and petrophysical relationships to link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows to use the information of the whole soil moisture profile for the assimilation. We validated our approach through a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of the soil type on the data assimilation by considering 3 soil types, namely, loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Decreasing the update interval from 60 down to 10 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand but significantly for the clay soil. The proposed approach appears to be promising to improve real-time prediction of the soil moisture profiles as well as to provide effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.

  5. Improving soil moisture profile reconstruction from ground-penetrating radar data: a maximum likelihood ensemble filter approach

    Directory of Open Access Journals (Sweden)

    A. P. Tran

    2013-07-01

    Full Text Available The vertical profile of shallow unsaturated zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR data. A hydrodynamic model is used to propagate the system state in time and a radar electromagnetic model and petrophysical relationships to link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows to use the information of the whole soil moisture profile for the assimilation. We validated our approach through a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of the soil type on the data assimilation by considering 3 soil types, namely, loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Decreasing the update interval from 60 down to 10 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand but significantly for the clay soil. The proposed approach appears to be promising to improve real-time prediction of the soil moisture profiles as well as to provide effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.

  6. Improving soil moisture profile prediction from ground-penetrating radar data: a maximum likelihood ensemble filter approach

    Directory of Open Access Journals (Sweden)

    S. Lambot

    2013-02-01

    Full Text Available The vertical profile of root zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR data. A hydrodynamic model is used to propagate the system state in time and a radar electromagnetic model to link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows to use the information of the whole soil moisture profile for the assimilation. We validated our approach by a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of the soil type on the data assimilation by considering 3 soil types, namely, loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Increasing update interval from 5 to 50 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand but significantly for the clay soil. The proposed approach appears to be promising to improve real-time prediction of the soil moisture profiles as well as to provide effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.

  7. Approximation Algorithms for Optimization Problems in Graphs with Superlogarithmic Treewidth

    DEFF Research Database (Denmark)

    Czumaj, Artur; Halldórsson, Magnús Már; Lingas, Andrzej;

    2005-01-01

    We present a generic scheme for approximating NP-hard problems on graphs of treewidth k = ω(log n). When a tree decomposition of width ℓ is given, the scheme typically yields an ℓ/log n approximation factor; otherwise, an extra log k factor is incurred. Our method applies to several basic subgraph a...

  8. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    Science.gov (United States)

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  9. Assessment of maximum likelihood (ML) and artificial neural network (ANN) algorithms for classification of remote sensing data

    Science.gov (United States)

    Gupta, R. K.; Prasad, T. S.; Vijayan, D.; Balamanikavelu, P. M.

    Due to the mix-up of contributions from varied features on the ground surface, retrieving individual features from remote sensing data using pattern recognition techniques is an ill-posed inverse problem. By placing a maximum likelihood (ML) constraint, the available operational software packages classify the image. Without placing any parametric constraint, the image could also be classified using artificial neural networks (ANN). As a GIS overlay, developed professionally by forest officials, was available for the Antilova reserve forest in Andhra Pradesh, India (17° 50′ to 17° 56′ N, 81° 45′ to 81° 54′ E), the IRS-1C LISS-III image of February 11, 1999 was used for assessing the limits of classification accuracy attainable from ML and ANN classifiers. In the ML classifier, the full GIS overlay was used to give training sets over the whole of the image (approach `a') and in approach `b', a priori probability (normally taken equal for all the classes in operational software) was assigned (in addition to the full spectral signature) based on the fraction areas under each class in the GIS overlay. Under such an ideal situation of inputs, the achieved accuracies, i.e. Kappa coefficients, were 0.709 and 0.735 for approaches `a' and `b', respectively (called iteration `0'). Using the fraction area under each class in the classified output to assign a priori probability for the next iteration, convergence (within 2% variation) was achieved for the 2nd and 3rd iterations with Kappa coefficient values of 0.773 and 0.797 for approaches `a' and `b', respectively. The non-attainment of 100% classification accuracy under the ideal input situation could be due to the assumption of Gaussian distribution in the spectral signatures. In the back propagation technique based ANN classifier, spectral signatures for training were identified from the GIS overlay. The number of learning iterations was 20,000 with momentum and learning rate of 0.7 and 0.25, respectively. With one hidden layer the Kappa coefficient for the ANN classifier was 0
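
    The iterated prior-updating scheme described above can be sketched as a standard Gaussian maximum likelihood classifier whose class priors are re-derived from the fractional areas of the previous classified map; the class statistics here are assumed inputs estimated from the training sets, not values from the study.

```python
import numpy as np

def ml_classify(pixels, means, covs, priors):
    """Gaussian ML classification of pixels with per-class priors."""
    scores = np.empty((pixels.shape[0], len(means)))
    for k, (mu, cov, p) in enumerate(zip(means, covs, priors)):
        diff = pixels - mu
        maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        scores[:, k] = np.log(p) - 0.5 * (np.linalg.slogdet(cov)[1] + maha)
    return scores.argmax(axis=1)

# Iteration sketch: after each classification, recompute the priors from the
# fractional area of each class and reclassify until the fractions change by
# less than ~2%, mirroring the convergence criterion used in the study.
```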

  10. A maximum likelihood QTL analysis reveals common genome regions controlling resistance to Salmonella colonization and carrier-state

    Directory of Open Access Journals (Sweden)

    Thanh-Son Tran

    2012-05-01

    Full Text Available Abstract Background The serovars Enteritidis and Typhimurium of the Gram-negative bacterium Salmonella enterica are significant causes of human food poisoning. Fowl carrying these bacteria often show no clinical disease, with detection only established post-mortem. Increased resistance to the carrier state in commercial poultry could be a way to improve food safety by reducing the spread of these bacteria in poultry flocks. Previous studies identified QTLs for both resistance to carrier state and resistance to Salmonella colonization in the same White Leghorn inbred lines. Until now, none of the QTLs identified was common to the two types of resistance. All these analyses were performed using the F2 inbred or backcross option of the QTLExpress software based on linear regression. In the present study, QTL analysis was achieved using Maximum Likelihood with QTLMap software, in order to test the effect of the QTL analysis method on QTL detection. We analyzed the same phenotypic and genotypic data as those used in previous studies, which were collected on 378 animals genotyped with 480 genome-wide SNP markers. To enrich these data, we added eleven SNP markers located within QTLs controlling resistance to colonization and we looked for potential candidate genes co-localizing with QTLs. Results In our case the QTL analysis method had an important impact on QTL detection. We were able to identify new genomic regions controlling resistance to carrier-state, in particular by testing the existence of two segregating QTLs. But some of the previously identified QTLs were not confirmed. Interestingly, two QTLs were detected on chromosomes 2 and 3, close to the locations of the major QTLs controlling resistance to colonization and to candidate genes involved in the immune response identified in other, independent studies. Conclusions Due to the lack of stability of the QTLs detected, we suggest that interesting regions for further studies are those that were

  11. Search for Point Sources of Ultra-High Energy Cosmic Rays Above 40 EeV Using a Maximum Likelihood Ratio Test

    CERN Document Server

    Farrar, G R; Abu-Zayyad, T; Amann, J F; Archbold, G; Atkins, R; Bellido, J A; Belov, K; Belz, J W; Ben-Zvi, S Y; Bergman, D R; Boyer, J H; Burt, G W; Cao, Z; Clay, R W; Connolly, B M; Dawson, B R; Deng, W; Fedorova, Y; Findlay, J; Finley, C B; Hanlon, W F; Hoffman, C M; Holzscheiter, M H; Hughes, G A; Jui, C C H; Kim, K; Kirn, M A; Knapp, B C; Loh, E C; Maestas, M M; Manago, N; Mannel, E J; Marek, L J; Martens, K; Matthews, J A J; Matthews, J N; O'Neill, A; Painter, C A; Perera, L P; Reil, K; Riehle, R; Roberts, M D; Sasaki, M; Schnetzer, S R; Seman, M; Simpson, K M; Sinnis, G; Smith, J D; Snow, R; Sokolsky, P; Song, C; Springer, R W; Stokes, B T; Thomas, J R; Thomas, S B; Thomson, G B; Tupa, D; Westerhoff, S; Wiencke, L R; Zech, A

    2004-01-01

    We present the results of a search for cosmic ray point sources at energies above 40 EeV in the combined data sets recorded by the AGASA and HiRes stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.

  12. Telemetry degradation due to a CW RFI induced carrier tracking error for the block IV receiving system with maximum likelihood convolution decoding

    Science.gov (United States)

    Sue, M. K.

    1981-01-01

    Models to characterize the behavior of the Deep Space Network (DSN) Receiving System in the presence of a radio frequency interference (RFI) are considered. A simple method to evaluate the telemetry degradation due to the presence of a CW RFI near the carrier frequency for the DSN Block 4 Receiving System using the maximum likelihood convolutional decoding assembly is presented. Analytical and experimental results are given.

  13. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    Science.gov (United States)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  14. Approximate K-Nearest Neighbour Based Spatial Clustering Using K-D Tree

    Directory of Open Access Journals (Sweden)

    Mohammed Otair

    2013-03-01

    Full Text Available Different spatial objects that vary in their characteristics, such as in molecular biology and geography, are presented in spatial areas. Methods to organize, manage, and maintain those objects in a structured manner are required. Data mining offers different techniques to meet these requirements; among its major tasks, clustering is the most widely used. Data within the same cluster share common features that give each cluster its characteristics. In this paper, an implementation of an approximate kNN-based spatial clustering algorithm using the k-d tree is proposed. The major contribution of this research is the use of the k-d tree data structure for spatial clustering and a comparison of its performance to the brute-force approach. The results of the work performed in this paper reveal better performance using the k-d tree, compared to the traditional brute-force approach.
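
    A minimal illustration of the comparison the abstract describes, using SciPy's k-d tree against a brute-force distance matrix on synthetic 2-D points; the data, the choice of k = 5, and the approximation tolerance eps are all assumptions made for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((2000, 2))          # synthetic spatial objects

# k-d tree: build once, then query (approximate) k nearest neighbours
tree = cKDTree(points)
dist_kd, idx_kd = tree.query(points, k=5, eps=0.1)   # eps > 0 permits approximation

# brute force: full pairwise distance matrix, then sort
d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
idx_bf = np.argsort(d, axis=1)[:, :5]
```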

  15. Beyond the locally tree-like approximation for percolation on real networks

    CERN Document Server

    Radicchi, Filippo

    2016-01-01

    Theoretical attempts proposed so far to describe ordinary percolation processes on real-world networks rely on the locally tree-like ansatz. Such an approximation, however, holds only to a limited extent, as real graphs are often characterized by high frequencies of short loops. We present here a theoretical framework able to overcome such a limitation for the case of site percolation. Our method is based on a message passing algorithm that discounts redundant paths along triangles in the graph. We systematically test the approach on 98 real-world graphs and on synthetic networks. We find excellent accuracy in the prediction of the whole percolation diagram, with significant improvement with respect to the prediction obtained under the locally tree-like approximation. Residual discrepancies between theory and simulations do not depend on clustering and can be attributed to the presence of loops longer than three edges. We present also a method to account for clustering in bond percolation, but the improvement...

  16. MIMO Detection for High-Order QAM Based on a Gaussian Tree Approximation

    OpenAIRE

    Goldberger, Jacob; Leshem, Amir

    2010-01-01

    This paper proposes a new detection algorithm for MIMO communication systems employing high order QAM constellations. The factor graph that corresponds to this problem is very loopy; in fact, it is a complete graph. Hence, a straightforward application of the Belief Propagation (BP) algorithm yields very poor results. Our algorithm is based on an optimal tree approximation of the Gaussian density of the unconstrained linear system. The finite-set constraint is then applied to obtain a loop-fr...

  17. A real-time digital program for estimating aircraft stability and control parameters from flight test data by using the maximum likelihood method

    Science.gov (United States)

    Grove, R. D.; Mayhew, S. C.

    1973-01-01

    A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. These parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system, which uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. Part of this system, the control and display capabilities, is described for this program. This report also describes the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo or simulated flight data.

  18. A Superstabilizing log(n)-Approximation Algorithm for Dynamic Steiner Trees

    CERN Document Server

    Blin, Lélia; Rovedakis, Stephane

    2009-01-01

    In this paper we design and prove correct a fully dynamic distributed algorithm for maintaining an approximate Steiner tree that connects via a minimum-weight spanning tree a subset of nodes of a network (referred to as Steiner members or the Steiner group). Steiner trees are good candidates to efficiently implement communication primitives such as publish/subscribe or multicast, essential building blocks for the new emergent networks (e.g. P2P, sensor or ad hoc networks). The cost of the solution returned by our algorithm is at most log |S| times the cost of an optimal solution, where S is the group of members. Our algorithm improves over existing solutions in several ways. First, it tolerates the dynamism of both the group members and the network. Next, our algorithm is self-stabilizing, that is, it copes with nodes' memory corruption. Last but not least, our algorithm is superstabilizing. That is, while converging to a correct configuration (i.e., a Steiner tree) after a modification of the network, it...

  19. APPROXIMATION OF VOLUME AND BRANCH SIZE DISTRIBUTION OF TREES FROM LASER SCANNER DATA

    Directory of Open Access Journals (Sweden)

    P. Raumonen

    2012-09-01

    Full Text Available This paper presents an approach for automatically approximating the above-ground volume and branch size distribution of trees from dense terrestrial laser scanner produced point clouds. The approach is based on the assumption that the point cloud is a sample of a surface in 3D space and the surface is locally like a cylinder. The point cloud is covered with small neighborhoods which conform to the surface. Then the neighborhoods are characterized geometrically and these characterizations are used to classify the points into trunk, branch, and other points. Finally, proper subsets are determined for cylinder fitting using geometric characterizations of the subsets.

  20. A conceptual approach to approximate tree root architecture in infinite slope models

    Science.gov (United States)

    Schmaltz, Elmar; Glade, Thomas

    2016-04-01

    Vegetation-related properties - particularly tree root distribution and coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear to be difficult to be reproduced reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main issues of the tree root architecture: 1) Type of rooting; 2) maximum growing distance to the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems and the connected hydrological and mechanical attributes sufficiently in a 3-dimensional slope stability model. Hereby, a spatio-dynamic vegetation module should cope with the demands of performance, computation time and significance. However, in this presentation, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that is dependent on age and species. We classified three main types of tree root systems and reproduced the species-age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope ambience. Thus, two solids in an Euclidian space were distinguished to represent the three root systems: i) cylinders with radius r and height h, whilst the dimension of latter defines the shape of a taproot-system or a shallow-root-system respectively; ii) elliptic

  1. A Maximum Likelihood Estimator of a Markov Model for Disease Activity in Crohn's Disease and Ulcerative Colitis for Annually Aggregated Partial Observations

    DEFF Research Database (Denmark)

    Borg, Søren; Persson, U.; Jess, T.;

    2010-01-01

    ... Hospital, Copenhagen, Denmark, during 1991 to 1993. The data were aggregated over calendar years; for each year, the number of relapses and the number of surgical operations were recorded. Our aim was to estimate Markov models for disease activity in CD and UC, in terms of relapse and remission, with a cycle length of 1 month. The purpose of these models was to enable evaluation of interventions that would shorten relapses or postpone future relapses. An exact maximum likelihood estimator was developed that disaggregates the yearly observations into monthly transition probabilities between remission ... observed data and has good face validity. The disease activity model is less suitable for UC due to its transient nature through the presence of curative surgery.
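
    The relation between the monthly cycle length and the annually aggregated observations can be illustrated by aggregating a hypothetical monthly transition matrix over twelve steps; the estimator described in the abstract effectively works in the opposite direction, searching for monthly probabilities whose yearly aggregation maximizes the likelihood of the observed counts.

```python
import numpy as np

# Hypothetical monthly transition matrix between remission (state 0)
# and relapse (state 1); the values are illustrative, not estimates.
P_month = np.array([[0.95, 0.05],
                    [0.30, 0.70]])

# Implied transition matrix over one calendar year (12 monthly cycles).
P_year = np.linalg.matrix_power(P_month, 12)
print(P_year)
```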

  2. Maximum likelihood method analysis - A procedure to estimate the energy of cosmic rays muons from the observed muon interactions with an electromagnetic calorimeter

    International Nuclear Information System (INIS)

    An electromagnetic sampling calorimeter is under construction in IPNE Bucharest for the determination of the energy of cosmic ray muons in TeV range, consisting of lead (1 cm thick) absorber layer, alternating with scintillator (3 cm thick) layers. The possibility of the estimation of the energy of high-energy cosmic muons is scrutinized using simulations with GEANT code of the response of the detector (30 layers) to incident energies in the range 1-30 TeV. A Maximum Likelihood Method analysis is presented as a procedure to determine the muon energy, being applied to the detector response to the muons of discrete energy and to the muons distributed according to the cosmic ray spectrum. (author) 17 Figs., 2 Tabs., 15 Refs

  3. A user-operated audiometry method based on the maximum likelihood principle and the two-alternative forced-choice paradigm

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Brandt, Christian; Pedersen, Ellen Raben;

    2014-01-01

    Objective: To create a user-operated pure-tone audiometry method based on the method of maximum likelihood (MML) and the two-alternative forced-choice (2AFC) paradigm with high test-retest reliability, without the need of an external operator and with minimal influence of subjects' fluctuating response criteria. User-operated audiometry was developed as an alternative to traditional audiometry for research purposes among musicians. Design: Test-retest reliability of the user-operated audiometry system was evaluated and the user-operated audiometry system was compared with traditional audiometry. Study sample: Test-retest reliability of user-operated 2AFC audiometry was tested with 38 naïve listeners. User-operated 2AFC audiometry was compared to traditional audiometry in 41 subjects. Results: The repeatability of user-operated 2AFC audiometry was comparable to traditional audiometry with...

  4. Maximum likelihood topographic map formation.

    Science.gov (United States)

    Van Hulle, Marc M

    2005-03-01

    We introduce a new unsupervised learning algorithm for kernel-based topographic map formation of heteroscedastic gaussian mixtures that allows for a unified account of distortion error (vector quantization), log-likelihood, and Kullback-Leibler divergence. PMID:15802004

  5. Letter: TreeAdder: a tool to assist the optimal positioning of a new leaf into an existing phylogenetic tree

    OpenAIRE

    Gatherer, D.

    2007-01-01

    TreeAdder is a computer application that adds a leaf in all possible positions on a phylogenetic tree. The resulting set of trees represents a dataset appropriate for maximum likelihood calculation of the optimal tree. TreeAdder therefore provides a utility for what was previously a tedious and error-prone process.

  6. A maximum likelihood approach to generate hypotheses on the evolution and historical biogeography in the Lower Volga Valley regions (southwest Russia).

    Science.gov (United States)

    Mavrodiev, Evgeny V; Laktionov, Alexy P; Cellinese, Nico

    2012-07-01

    The evolution of the diverse flora in the Lower Volga Valley (LVV) (southwest Russia) is complex due to the composite geomorphology and tectonic history of the Caspian Sea and adjacent areas. In the absence of phylogenetic studies and temporal information, we implemented a maximum likelihood (ML) approach and stochastic character mapping reconstruction aiming at recovering historical signals from species occurrence data. A taxon-area matrix of 13 floristic areas and 1018 extant species was constructed and analyzed with RAxML and Mesquite. Additionally, we simulated scenarios with numbers of hypothetical extinct taxa from an unknown palaeoflora that occupied the areas before the dramatic transgression and regression events that have occurred from the Pleistocene to the present day. The flora occurring strictly along the river valley and delta appear to be younger than that of adjacent steppes and desert-like regions, regardless of the chronology of transgression and regression events that led to the geomorphological formation of the LVV. This result is also supported when hypothetical extinct taxa are included in the analyses. The history of each species was inferred by using a stochastic character mapping reconstruction method as implemented in Mesquite. Individual histories appear to be independent from one another and have been shaped by repeated dispersal and extinction events. These reconstructions provide testable hypotheses for more in-depth investigations of their population structure and dynamics. PMID:22957179

  7. Maximum-likelihood spectral estimation and adaptive filtering techniques with application to airborne Doppler weather radar. Thesis Technical Report No. 20

    Science.gov (United States)

    Lai, Jonathan Y.

    1994-01-01

    This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are in the presence of strong clutter returns. In light of the frequent inadequacy of spectral-processing oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise, and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low SNR threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the Least Mean Square (LMS) -based adaptive filter's performance for the proposed signal model is investigated, and promising simulation results have testified to its potential for clutter rejection leading to more accurate estimation of windspeed thus obtaining a better assessment of the windshear hazard.

  8. In vivo thickness dynamics measurement of tear film lipid and aqueous layers with optical coherence tomography and maximum-likelihood estimation.

    Science.gov (United States)

    Huang, Jinxin; Hindman, Holly B; Rolland, Jannick P

    2016-05-01

    Dry eye disease (DED) is a common ophthalmic condition that is characterized by tear film instability and leads to ocular surface discomfort and visual disturbance. Advancements in the understanding and management of this condition have been limited by our ability to study the tear film secondary to its thin structure and dynamic nature. Here, we report a technique to simultaneously estimate the thickness of both the lipid and aqueous layers of the tear film in vivo using optical coherence tomography and maximum-likelihood estimation. After a blink, the lipid layer was rapidly thickened at an average rate of 10  nm/s over the first 2.5 s before stabilizing, whereas the aqueous layer continued thinning at an average rate of 0.29  μm/s of the 10 s blink cycle. Further development of this tear film imaging technique may allow for the elucidation of events that trigger tear film instability in DED. PMID:27128054

  9. A Study on Along-Track and Cross-Track Noise of Altimetry Data by Maximum Likelihood: Mars Orbiter Laser Altimetry (MOLA) Example

    Science.gov (United States)

    Jarmołowski, Wojciech; Łukasiak, Jacek

    2015-12-01

    The work investigates the spatial correlation of the data collected along orbital tracks of the Mars Orbiter Laser Altimeter (MOLA), with a special focus on the noise variance problem in the covariance matrix. The problem of different correlation parameters in the along-track and cross-track directions of orbital or profile data is still under discussion in relation to Least Squares Collocation (LSC). Different spacing in the along-track and transverse directions and the anisotropy problem are frequently considered in the context of this kind of data. Therefore the problem is analyzed in this work using MOLA data samples. The analysis is focused on a priori errors that correspond to the white noise present in the data and is performed by maximum likelihood (ML) estimation in two perpendicular directions. Additionally, the correlation lengths of the assumed planar covariance model are determined by ML and by fitting the model to the empirical covariance function (ECF). All estimates considered together confirm the substantial influence of different data resolution in the along-track and transverse directions on the covariance parameters.

  10. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    Science.gov (United States)

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Monte Carlo Markov Chain (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters and, thereby, using Monte Carlo integration, estimates all parameters and their uncertainties simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE; for example, without further computations it provides the spectral index uncertainty, is computationally stable and detects multimodality.
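
    A minimal random-walk Metropolis sampler conveys the idea of drawing from the posterior of trend and noise parameters; it is only a stand-in for the adaptive, convergence-monitored sampler described in the abstract, and log_post, the starting point, and the step size are user-supplied assumptions.

```python
import numpy as np

def metropolis(log_post, theta0, n_steps=20000, step=0.1, seed=0):
    """Random-walk Metropolis sampling of a posterior log density."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain

# Parameter estimates and uncertainties then follow from the sample, e.g.
# chain[burn_in:].mean(axis=0) and chain[burn_in:].std(axis=0).
```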

  11. Accuracy of land use change detection using support vector machine and maximum likelihood techniques for open-cast coal mining areas.

    Science.gov (United States)

    Karan, Shivesh Kishore; Samadder, Sukha Ranjan

    2016-08-01

    One objective of the present study was to evaluate the performance of the support vector machine (SVM)-based image classification technique against the maximum likelihood classification (MLC) technique for a rapidly changing landscape of an open-cast mine. The other objective was to assess the change in land use pattern due to coal mining from 2006 to 2016. Assessing the change in land use pattern accurately is important for the development and monitoring of coalfields in conjunction with sustainable development. For the present study, Landsat 5 Thematic Mapper (TM) data of 2006 and Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) data of 2016 of a part of Jharia Coalfield, Dhanbad, India, were used. The SVM classification technique provided greater overall classification accuracy than the MLC technique in classifying a heterogeneous landscape with a limited training dataset. SVM exceeded MLC in handling the difficult challenge of classifying features with near-similar reflectance on the mean signature plot: an improvement of over 11 % was observed in the classification of built-up area, and an improvement of 24 % in the classification of surface water; similarly, the SVM technique improved the overall land use classification accuracy by almost 6 and 3 % for the Landsat 5 and Landsat 8 images, respectively. Results indicated that land degradation increased significantly from 2006 to 2016 in the study area. This study will help in quantifying the changes and can also serve as a basis for further decision support system studies aiding a variety of purposes such as planning and management of mines and environmental impact assessment. PMID:27461425

  12. Joint estimation of soil moisture profile and hydraulic parameters by ground-penetrating radar data assimilation with maximum likelihood ensemble filter

    Science.gov (United States)

    Tran, Anh Phuong; Vanclooster, Marnik; Zupanski, Milija; Lambot, Sébastien

    2014-04-01

    Ground-Penetrating Radar (GPR) has recently become a powerful geophysical technique to characterize soil moisture at the field scale. We developed a data assimilation scheme to simultaneously estimate the vertical soil moisture profile and hydraulic parameters from time-lapse GPR measurements. The assimilation scheme includes a soil hydrodynamic model to simulate the soil moisture dynamics, a full-wave electromagnetic wave propagation model, and petrophysical relationship to link the state variable with the GPR data and a maximum likelihood ensemble assimilation algorithm. The hydraulic parameters are estimated jointly with the soil moisture using a state augmentation technique. The approach allows for the direct assimilation of GPR data, thus maximizing the use of the information. The proposed approach was validated by numerical experiments assuming wrong initial conditions and hydraulic parameters. The synthetic soil moisture profiles were generated by the Hydrus-1D model, which then were used by the electromagnetic model and petrophysical relationship to create "observed" GPR data. The results show that the data assimilation significantly improves the accuracy of the hydrodynamic model prediction. Compared with the surface soil moisture assimilation, the GPR data assimilation better estimates the soil moisture profile and hydraulic parameters. The results also show that the estimated soil moisture profile in the loamy sand and silt soils converge to the "true" state more rapidly than in the clay one. Of the three unknown parameters of the Mualem-van Genuchten model, the estimation of n is more accurate than that of α and Ks. The approach shows a great promise to use GPR measurements for the soil moisture profile and hydraulic parameter estimation at the field scale.

  13. Maximum Likelihood Factor Analysis of the Effects of Chronic Centrifugation on the Structural Development of the Musculoskeletal System of the Rat

    Science.gov (United States)

    Amtmann, E.; Kimura, T.; Oyama, J.; Doden, E.; Potulski, M.

    1979-01-01

    At the age of 30 days, female Sprague-Dawley rats were placed on a 3.66 m radius centrifuge and subsequently exposed almost continuously for 810 days to either 2.76 or 4.15 G. An age-matched control group of rats was raised near the centrifuge facility at earth gravity. Three further control groups of rats were obtained from the animal colony and sacrificed at the age of 34, 72 and 102 days. A total of 16 variables were simultaneously factor analyzed by a maximum-likelihood extraction routine and the factor loadings presented after rotation to simple structure by a varimax rotation routine. The variables include the G-load, age, body mass, femoral length and cross-sectional area, inner and outer radii, density and strength at the mid-length of the femur, and dry weight of the gluteus medius, semimembranosus and triceps surae muscles. Factor analyses on A) all controls, B) all controls and the 2.76 G group, and C) all controls and centrifuged animals, produced highly similar loading structures of three common factors which accounted for 74%, 68% and 68%, respectively, of the total variance. The 3 factors were interpreted as: 1. An age and size factor which stimulates the growth in length and diameter and increases the density and strength of the femur. This factor is positively correlated with G-load but is also active in the control animals living at earth gravity. 2. A growth inhibition factor which acts on body size, femoral length and on both the outer and inner radius at mid-length of the femur. This factor is intensified by centrifugation.
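
    The extraction-plus-rotation step can be reproduced in outline with scikit-learn; the synthetic data matrix below stands in for the 16 measured variables, and scikit-learn's FactorAnalysis is only an approximate analogue of the maximum-likelihood routine the authors used.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 16))       # placeholder for the 16 measured variables

# Three common factors with varimax rotation to simple structure.
fa = FactorAnalysis(n_components=3, rotation='varimax').fit(X)
loadings = fa.components_.T         # (16 variables) x (3 factors) loading matrix

# Rough share of total variance captured by the common factors.
explained = (loadings ** 2).sum() / X.var(axis=0, ddof=1).sum()
```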

  14. Rooted Tree Analysis for Order Conditions of Stochastic Runge-Kutta Methods for the Weak Approximation of Stochastic Differential Equations

    OpenAIRE

    Rößler, Andreas

    2013-01-01

    A general class of stochastic Runge-Kutta methods for the weak approximation of Itô and Stratonovich stochastic differential equations with a multi-dimensional Wiener process is introduced. Colored rooted trees are used to derive an expansion of the solution process and of the approximation process calculated with the stochastic Runge-Kutta method. A theorem on general order conditions for the coefficients and the random variables of the stochastic Runge-Kutta method is proved by rooted tre...

  15. Development and Performance of Detectors for the Cryogenic Dark Matter Search Experiment with an Increased Sensitivity Based on a Maximum Likelihood Analysis of Beta Contamination

    Energy Technology Data Exchange (ETDEWEB)

    Driscoll, Donald D.; /Case Western Reserve U.

    2004-01-01

    first use of a beta-eliminating cut based on a maximum-likelihood characterization described above.

  16. DendroBlast: approximate phylogenetic trees in the absence of multiple sequence alignments

    OpenAIRE

    KELLY S; Maini, P. K.

    2013-01-01

    The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realis...

  17. Predicting Porosity and Permeability for the Canyon Formation, SACROC Unit (Kelly-Snyder Field), Using the Geologic Analysis via Maximum Likelihood System

    International Nuclear Information System (INIS)

    , with high vertical resolution, could be generated for many wells. This procedure permits populating any well location with core-scale estimates of porosity and permeability and rock types, facilitating the application of geostatistical characterization methods. The first step of the procedure was to discriminate rock types of similar depositional environment and/or reservoir quality (RQ) using a specific clustering technique. The approach implemented utilized a model-based, probabilistic clustering analysis procedure called GAMLS (Geologic Analysis via Maximum Likelihood System), which is based on maximum likelihood principles. During clustering, samples (data at each digitized depth from each well) are probabilistically assigned to a previously specified number of clusters with a fractional probability that varies between zero and one.
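
    A minimal sketch of what such model-based, probabilistic clustering looks like in practice (a generic Gaussian mixture with fractional memberships, not the GAMLS software, and with random numbers standing in for the well-log data):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
logs = rng.normal(size=(5000, 4))        # rows: digitised depths, cols: log responses

# Each sample gets a fractional membership in every cluster (rock type),
# between zero and one, rather than a single hard label.
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(logs)

membership = gmm.predict_proba(logs)     # (n_samples, n_clusters); rows sum to 1
rock_type = membership.argmax(axis=1)    # hard assignment when one is required
print(membership[0].round(3), int(rock_type[0]))
```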

  18. Compressive Imaging using Approximate Message Passing and a Markov-Tree Prior

    OpenAIRE

    Som, Subhojit; Schniter, Philip

    2011-01-01

    We propose a novel algorithm for compressive imaging that exploits both the sparsity and persistence across scales found in the 2D wavelet transform coefficients of natural images. Like other recent works, we model wavelet structure using a hidden Markov tree (HMT) but, unlike other works, ours is based on loopy belief propagation (LBP). For LBP, we adopt a recently proposed "turbo" message passing schedule that alternates between exploitation of HMT structure and exploitation of compressive-...

  19. Application of Factor Analysis with the Principal Component Analysis and Maximum Likelihood Methods to Factors Influencing the Provision of Supplementary Food to Infants Aged 0-6 Months in Pematang Panjang Village, Air Putih Subdistrict, Batubara Regency, 2013%Aplikasi Analisis Faktor Dengan Metode Principal Component Analysis Dan Maximum Likelihood Dalam Faktor-faktor Yang Memengaruhi Pemberian Makanan Tambahan Pada Bayi Usia 0-6 Bulan Di Desa Pematang Panjang Kecamatan Air Putih Kabupaten Batubara Tahun 2013

    OpenAIRE

    Simarmata, Iska

    2014-01-01

    Factor analysis is one of the multivariate statistical analysis techniques. This analysis belongs to the interdependence techniques, with the aim of summarizing data into groupings, or forming a new set of variables called factors. The parameter estimation methods commonly used in this analysis are the principal component analysis method and the maximum likelihood method. This research aims to compare the suitability of the models obtained by the principal component method and ma...

  20. The PhyloFacts FAT-CAT web server: ortholog identification and function prediction using fast approximate tree classification.

    Science.gov (United States)

    Afrasiabi, Cyrus; Samad, Bushra; Dineen, David; Meacham, Christopher; Sjölander, Kimmen

    2013-07-01

    The PhyloFacts 'Fast Approximate Tree Classification' (FAT-CAT) web server provides a novel approach to ortholog identification using subtree hidden Markov model-based placement of protein sequences to phylogenomic orthology groups in the PhyloFacts database. Results on a data set of microbial, plant and animal proteins demonstrate FAT-CAT's high precision at separating orthologs and paralogs and robustness to promiscuous domains. We also present results documenting the precision of ortholog identification based on subtree hidden Markov model scoring. The FAT-CAT phylogenetic placement is used to derive a functional annotation for the query, including confidence scores and drill-down capabilities. PhyloFacts' broad taxonomic and functional coverage, with >7.3 M proteins from across the Tree of Life, enables FAT-CAT to predict orthologs and assign function for most sequence inputs. Four pipeline parameter presets are provided to handle different sequence types, including partial sequences and proteins containing promiscuous domains; users can also modify individual parameters. PhyloFacts trees matching the query can be viewed interactively online using the PhyloScope Javascript tree viewer and are hyperlinked to various external databases. The FAT-CAT web server is available at http://phylogenomics.berkeley.edu/phylofacts/fatcat/. PMID:23685612

  1. A Comparison Between the Empirical Logistic Regression Method and the Maximum Likelihood Estimation Method%经验 logistic 回归方法与最大似然估计方法的对比分析

    Institute of Scientific and Technical Information of China (English)

    张婷婷; 高金玲

    2014-01-01

    The iterative algorithm required by maximum likelihood estimation in logistic regression can be difficult to solve. From both a theoretical and an applied perspective, this paper examines a simpler estimator, empirical logistic regression, as an alternative. The analysis shows that when the sample size is large, the empirical logistic regression method is scientifically sound and practical, and the two methods give consistent results for the same data; empirical logistic regression, however, is simpler to compute, which is very important for practitioners.
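
    As background on the simpler estimator (standard textbook form, not taken from the paper): for grouped binomial data with $y_i$ successes out of $n_i$ trials at covariate vector $x_i$, the empirical logistic transform and its approximate variance are

    $$ Z_i = \log\frac{y_i + \tfrac{1}{2}}{\,n_i - y_i + \tfrac{1}{2}\,}, \qquad v_i = \frac{1}{y_i + \tfrac{1}{2}} + \frac{1}{n_i - y_i + \tfrac{1}{2}}, $$

    and the regression coefficients are obtained by weighted least squares of $Z_i$ on $x_i$ with weights $1/v_i$, avoiding the iterative maximization that full maximum likelihood requires.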

  2. Approximate group context tree: applications to dynamic programming and dynamic choice models

    CERN Document Server

    Belloni, Alexandre

    2011-01-01

    The paper considers a variable length Markov chain model associated with a group of stationary processes that share the same context tree but potentially different conditional probabilities. We propose a new model selection and estimation method, and develop oracle inequalities and model selection properties for the estimator. These results also provide conditions under which the use of the group structure can lead to improvements in the overall estimation. Our work is also motivated by two methodological applications: discrete stochastic dynamic programming and dynamic discrete choice models. We analyze the uniform estimation of the value function for dynamic programming and the uniform estimation of average dynamic marginal effects for dynamic discrete choice models, accounting for possible imperfect model selection. We also derive the typical behavior of our estimator when applied to polynomially β-mixing stochastic processes. For parametric models, we derive uniform rate of convergence for the estimation...

  3. Maximum Likelihood Decoding of Fountain Codes in Underwater Acoustic Communication%一种用于水声通信的喷泉码最大似然译码方法

    Institute of Scientific and Technical Information of China (English)

    武岩波; 朱敏

    2016-01-01

    Considering the characteristics of underwater acoustic communication, random linear fountain codes with maximum likelihood decoding are studied to correct erasure errors in short-packet transmission. In existing maximum likelihood decoding methods, processing begins only when all the necessary blocks are available, resulting in an unacceptable decoding delay. An incremental Gaussian elimination method is proposed that decreases the decoding delay by utilizing the time slots of every block. The computational complexity is analyzed using the probability distribution of sums of binary random variables. The real-time capability of the proposed method is verified on a low-cost DSP chip for an underwater acoustic modem. The method is applicable to underwater transmission of images and sensor data.
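
    The idea of spreading the Gaussian elimination over the per-block time slots can be illustrated with a small GF(2) decoder; this is a simplified, illustrative sketch and not the paper's DSP implementation:

```python
import numpy as np

class IncrementalGF2Decoder:
    """Toy online decoder for a random linear fountain code over GF(2).

    Each received coded block carries a coefficient vector (length k) and a
    payload row (the XOR of the source blocks selected by the coefficients).
    Rows are kept in reduced row-echelon form as they arrive, so the
    elimination work is spread over the per-block time slots instead of being
    done only after the last block has been received.
    """

    def __init__(self, k):
        self.k = k
        self.rows = {}                        # pivot column -> (coeffs, payload)

    def add(self, coeffs, payload):
        c = np.array(coeffs, dtype=np.uint8) & 1
        p = np.array(payload, dtype=np.uint8) & 1
        # Reduce the incoming row against the stored pivots.
        for piv, (rc, rp) in self.rows.items():
            if c[piv]:
                c ^= rc
                p ^= rp
        nonzero = np.flatnonzero(c)
        if nonzero.size == 0:                 # linearly dependent (redundant) block
            return self.ready()
        piv = int(nonzero[0])
        # Back-substitute so every stored row stays zero in the new pivot column.
        for other, (rc, rp) in self.rows.items():
            if rc[piv]:
                self.rows[other] = (rc ^ c, rp ^ p)
        self.rows[piv] = (c, p)
        return self.ready()

    def ready(self):
        return len(self.rows) == self.k

    def decoded_blocks(self):
        """Source blocks, valid once ready() is True."""
        return np.vstack([self.rows[i][1] for i in range(self.k)])
```

    Calling add() once per received block keeps the per-slot work roughly constant; once k linearly independent blocks have arrived, decoded_blocks() returns the recovered source data.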

  4. Maximum Likelihood Learning of Conditional MTE Distributions

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael; Salmerón, Antonio

    2009-01-01

    We describe a procedure for inducing conditional densities within the mixtures of truncated exponentials (MTE) framework. We analyse possible conditional MTE specifications and propose a model selection scheme, based on the BIC score, for partitioning the domain of the conditioning variables...

  5. Maximum-likelihood algorithm for quantum tomography

    International Nuclear Information System (INIS)

    Optical homodyne tomography is discussed in the context of classical image processing. Analogies between these two fields are traced and used to formulate an iterative numerical algorithm for reconstructing the Wigner function from homodyne statistics. (Author)

  6. Trees

    CERN Document Server

    Epstein, Henri

    2016-01-01

    An algebraic formalism, developed with V. Glaser and R. Stora for the study of the generalized retarded functions of quantum field theory, is used to prove a factorization theorem which provides a complete description of the generalized retarded functions associated with any tree graph. Integrating over the variables associated to internal vertices to obtain the perturbative generalized retarded functions for interacting fields arising from such graphs is shown to be possible for a large category of space-times.

  7. 广义线性模型拟似然估计的弱相合性%Weak Consistency of Quasi-Maximum Likelihood Estimates in Generalized Linear Models

    Institute of Scientific and Technical Information of China (English)

    张戈; 吴黎军

    2013-01-01

    We study the weak consistency of the solution $\hat{\beta}_n$ of the quasi-maximum likelihood equation $L_n(\beta)=\sum_{i=1}^{n} X_i H(X_i'\beta)\,\Lambda^{-1}(X_i'\beta)\,(y_i-h(X_i'\beta))=0$ for generalized linear models. Under the assumption of a non-canonical link function and some other mild conditions, we prove the convergence-rate result $\hat{\beta}_n-\beta_0 \neq O_p(\lambda_n^{-1/2})$, and show that a necessary condition for the weak consistency of the quasi-maximum likelihood estimate is that $S_n^{-1}\to 0$ as $n\to\infty$.

  8. The RAKDB-Tree: An Efficient Approximation-Based High-Dimensional Index Structure%RAKDB-Tree--一种基于近似区域的多维数据索引结构

    Institute of Scientific and Technical Information of China (English)

    黄维辉; 熊翱

    2013-01-01

    For many application areas, the efficiency of multidimensional data processing is a key factor affecting their development. In particular, similarity search is used in many fields, such as data mining, big data analysis, and digital multimedia. However, many index structures cannot avoid the "dimensionality curse" when the number of dimensions is very large. The RAKDB-Tree uses a partitioning method to divide the data space into subspaces and create approximation files, and then indexes the approximations with an improved version of the KDB-Tree. The query, insertion, and deletion algorithms keep the index structure automatically adjusted and optimized. Experimental results show that the RAKDB-Tree offers a promising improvement in performance and copes well with problems caused by increasing data dimensionality.

  9. Maximum Likelihood Estimation Based Algorithm for Tracking Cooperative Target%一种基于最大似然估计的合作目标多维参数跟踪算法

    Institute of Scientific and Technical Information of China (English)

    魏子翔; 崔嵬; 李霖; 吴爽; 吴嗣亮

    2015-01-01

    In the microwave radar for spatial rendezvous and docking, a scheme based on the Digital Delay-Locked Loop (DDLL), Frequency-Locked Loop (FLL), and Phase-Locked Loop (PLL) is implemented to obtain the delay, frequency, and Direction Of Arrival (DOA) estimates of the incident direct-sequence spread-spectrum signal transmitted by the cooperative target. However, the DDLL, FLL, and PLL (DFP) based scheme does not make full use of the received signal. For this reason, a novel Maximum Likelihood Estimation (MLE) Based Tracking (MLBT) algorithm with a low computational burden is proposed. The property that the gradients of the cost function are proportional to the parameter errors is exploited to design parameter-error discriminators, and three tracking loops are then set up to provide the parameter estimates. The variance characteristics of the discriminators are investigated, and lower bounds on the Root Mean Square Errors (RMSEs) of the parameter estimates are given for the MLBT algorithm. Finally, simulations and a computational-efficiency analysis are provided. The lower bounds on the RMSEs of the parameter estimates are verified, and the MLBT algorithm is shown to achieve better estimation accuracy than the DFP-based scheme with only a limited increase in computational burden.

  10. Divergence date estimation and a comprehensive molecular tree of extant cetaceans.

    Science.gov (United States)

    McGowen, Michael R; Spaulding, Michelle; Gatesy, John

    2009-12-01

    Cetaceans are remarkable among mammals for their numerous adaptations to an entirely aquatic existence, yet many aspects of their phylogeny remain unresolved. Here we merged 37 new sequences from the nuclear genes RAG1 and PRM1 with most published molecular data for the group (45 nuclear loci, transposons, mitochondrial genomes), and generated a supermatrix consisting of 42,335 characters. The great majority of these data have never been combined. Model-based analyses of the supermatrix produced a solid, consistent phylogenetic hypothesis for 87 cetacean species. Bayesian analyses corroborated odontocete (toothed whale) monophyly, stabilized basal odontocete relationships, and completely resolved branching events within Mysticeti (baleen whales) as well as the problematic speciose clade Delphinidae (oceanic dolphins). Only limited conflicts relative to maximum likelihood results were recorded, and discrepancies found in parsimony trees were very weakly supported. We utilized the Bayesian supermatrix tree to estimate divergence dates among lineages using relaxed-clock methods. Divergence estimates revealed rapid branching of basal odontocete lineages near the Eocene-Oligocene boundary, the antiquity of river dolphin lineages, a Late Miocene radiation of balaenopteroid mysticetes, and a recent rapid radiation of Delphinidae beginning approximately 10 million years ago. Our comprehensive, time-calibrated tree provides a powerful evolutionary tool for broad-scale comparative studies of Cetacea. PMID:19699809

  11. Trajectory averaging for stochastic approximation MCMC algorithms

    CERN Document Server

    Liang, Faming

    2010-01-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...

  12. Using projections and correlations to approximate probability distributions

    CERN Document Server

    Karlen, D A

    1998-01-01

    A method to approximate continuous multi-dimensional probability density functions (PDFs) using their projections and correlations is described. The method is particularly useful for event classification when estimates of systematic uncertainties are required and for the application of an unbinned maximum likelihood analysis when an analytic model is not available. A simple goodness of fit test of the approximation can be used, and simulated event samples that follow the approximate PDFs can be efficiently generated. The source code for a FORTRAN-77 implementation of this method is available.

  13. Decision tree approach for classification of remotely sensed satellite data using open source support

    Indian Academy of Sciences (India)

    Richa Sharma; Aniruddha Ghosh; P K Joshi

    2013-10-01

    In this study, an attempt has been made to develop a decision tree classification (DTC) algorithm for classification of remotely sensed satellite data (Landsat TM) using open source support. The decision tree is constructed by recursively partitioning the spectral distribution of the training dataset using WEKA, an open source data mining package. The classified image is compared with images classified using the classical ISODATA clustering and Maximum Likelihood Classifier (MLC) algorithms. The classification based on the DTC method provided a better visual depiction than the results produced by ISODATA clustering or by the MLC algorithm. The overall accuracy was found to be 90% (kappa = 0.88) using the DTC, 76.67% (kappa = 0.72) using the Maximum Likelihood Classifier and 57.5% (kappa = 0.49) using the ISODATA clustering method. Based on the overall accuracy and kappa statistics, DTC was found to be the preferable classification approach.
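
    For readers who want to reproduce the flavour of this workflow without WEKA, a minimal scikit-learn stand-in (random numbers replace the Landsat TM training samples; the band and class counts are placeholders):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.random((1000, 6))                   # per-pixel values for 6 spectral bands
y = rng.integers(0, 5, size=1000)           # 5 land-cover classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("overall accuracy:", accuracy_score(y_te, pred))
print("kappa           :", cohen_kappa_score(y_te, pred))
```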

  14. Maximum likelihood, least squares and penalized least squares for PET

    International Nuclear Information System (INIS)

    The EM algorithm is the basic approach used to maximize the log likelihood objective function for the reconstruction problem in PET. The EM algorithm is a scaled steepest ascent algorithm that elegantly handles the nonnegativity constraints of the problem. The authors show that the same scaled steepest descent algorithm can be applied to the least squares merit function, and that it can be accelerated using the conjugate gradient approach. The experiments suggest that one can cut the computation by about a factor of 3 by using this technique. The results also apply to various penalized least squares functions which might be used to produce a smoother image
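
    For reference, the scaled steepest-ascent (EM) iteration referred to here is the standard ML-EM update for emission tomography, with $a_{ij}$ the system matrix, $y_i$ the measured counts and $x_j$ the image (a textbook formula quoted for context, not a result of the paper):

    $$ x_j^{(k+1)} \;=\; \frac{x_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij}\,\frac{y_i}{\sum_{j'} a_{ij'}\, x_{j'}^{(k)}} , $$

    which keeps the iterates nonnegative because each update is a multiplicative rescaling of the current estimate.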

  15. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    OpenAIRE

    S. Sridevi; Nirmala, S.

    2013-01-01

    Speckle noise is the most prevalent noise in clinical ultrasound images. It visibly looks like light and dark spots and deduce the pixel intensity as murkiest. Gazing at fetal ultrasound images, the impact of edge and local fine details are more palpable for obstetricians and gynecologists to carry out prenatal diagnosis of congenital heart disease. A robust despeckling filter has to be contrived to proficiently suppress speckle noise and simultaneously preserve the features. The proposed fil...

  16. Maximum likelihood positioning and energy correction for scintillation detectors

    Science.gov (United States)

    Lerche, Christoph W.; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

    2016-02-01

    An algorithm for determining the crystal pixel and the gamma ray energy with scintillation detectors for PET is presented. The algorithm uses Likelihood Maximisation (ML) and therefore is inherently robust to missing data caused by defect or paralysed photo detector pixels. We tested the algorithm on a highly integrated MRI compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to readout the scintillation light from the 30× 30 scintillator pixel array with an 8× 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner’s spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner’s overall single and prompt detection efficiency, image noise, and energy resolution to the CoG based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving highest energy resolution, count rate performance and spatial resolution at the same time.
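
    A bare-bones version of such an ML crystal lookup (a generic illustration under simple Poisson assumptions, not the authors' implementation) makes the robustness to defective pixels explicit: dead pixels are simply left out of the likelihood sum.

```python
import numpy as np

def ml_crystal_index(measured, expected, alive):
    """Return the crystal pixel that maximises the Poisson log-likelihood.

    measured : (n_pixels,)              photo-detector counts for one event
    expected : (n_crystals, n_pixels)   calibrated mean light distribution per crystal
    alive    : (n_pixels,) bool         mask of working photo-detector pixels
    """
    m = measured[alive]
    mu = np.clip(expected[:, alive], 1e-12, None)   # avoid log(0)
    loglik = (m * np.log(mu) - mu).sum(axis=1)      # constant terms dropped
    return int(np.argmax(loglik))
```

    A centre-of-gravity estimate, by contrast, is biased whenever pixels are missing, since the weighted mean is then computed over an incomplete light distribution.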

  17. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    Science.gov (United States)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

    A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix, and therefore the residue matrices, are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such types of systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish a modal model satisfying such physically motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  18. Algorithms, data structures, and numerics for likelihood-based phylogenetic inference of huge trees

    Directory of Open Access Journals (Sweden)

    Izquierdo-Carrasco Fernando

    2011-12-01

    Background: The rapid accumulation of molecular sequence data, driven by novel wet-lab sequencing technologies, poses new challenges for large-scale maximum likelihood-based phylogenetic analyses on trees with more than 30,000 taxa and several genes. The three main computational challenges are: numerical stability, the scalability of search algorithms, and the high memory requirements for computing the likelihood. Results: We introduce methods for solving these three key problems and provide respective proof-of-concept implementations in RAxML. The mechanisms presented here are not RAxML-specific and can thus be applied to any likelihood-based (Bayesian or maximum likelihood) tree inference program. We develop a new search strategy that can reduce the time required for tree inferences by more than 50% while yielding equally good trees (in the statistical sense) for well-chosen starting trees. We present an adaptation of the Subtree Equality Vector technique for phylogenomic datasets with missing data (already available in RAxML v728) that can reduce execution times and memory requirements by up to 50%. Finally, we discuss issues pertaining to the numerical stability of the Γ model of rate heterogeneity on very large trees and argue in favor of rate heterogeneity models that use a single rate or rate category for each site to resolve these problems. Conclusions: We address three major issues pertaining to large-scale tree reconstruction under maximum likelihood and propose respective solutions. Respective proof-of-concept/production-level implementations of our ideas are made available as open-source code.

  19. Modeling disease vector occurrence when detection is imperfect: infestation of Amazonian palm trees by triatomine bugs at three spatial scales.

    Directory of Open Access Journals (Sweden)

    Fernando Abad-Franch

    BACKGROUND: Failure to detect a disease agent or vector where it actually occurs constitutes a serious drawback in epidemiology. In the pervasive situation where no sampling technique is perfect, the explicit analytical treatment of detection failure becomes a key step in the estimation of epidemiological parameters. We illustrate this approach with a study of Attalea palm tree infestation by Rhodnius spp. (Triatominae), the most important vectors of Chagas disease (CD) in northern South America. METHODOLOGY/PRINCIPAL FINDINGS: The probability of detecting triatomines in infested palms is estimated by repeatedly sampling each palm. This knowledge is used to derive an unbiased estimate of the biologically relevant probability of palm infestation. We combine maximum-likelihood analysis and information-theoretic model selection to test the relationships between environmental covariates and infestation of 298 Amazonian palm trees over three spatial scales: region within Amazonia, landscape, and individual palm. Palm infestation estimates are high (40-60% across regions), well above the observed infestation rate (24%). Detection probability is higher (approximately 0.55 on average) in the richest-soil region than elsewhere (approximately 0.08). Infestation estimates are similar in forest and rural areas, but lower in urban landscapes. Finally, individual palm covariates (accumulated organic matter and stem height) explain most of the variation in infestation rate. CONCLUSIONS/SIGNIFICANCE: Individual palm attributes appear as key drivers of infestation, suggesting that CD surveillance must incorporate local-scale knowledge and that peridomestic palm tree management might help lower transmission risk. Vector populations are probably denser in rich-soil sub-regions, where CD prevalence tends to be higher; this suggests a target for research on broad-scale risk mapping. Landscape-scale effects indicate that palm triatomine populations can endure deforestation
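
    The core of the correction can be stated compactly (a simplified sketch of the reasoning, not the paper's full likelihood): if each of $k$ independent samplings of an infested palm detects triatomines with probability $p$, then

    $$ \Pr(\text{detected at least once}) = 1-(1-p)^{k}, \qquad \hat{\psi} \;\approx\; \frac{\text{observed infestation rate}}{1-(1-p)^{k}}, $$

    which is why the estimated infestation (40-60%) can sit well above the raw observed rate (24%) when $p$ is small.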

  20. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  1. Tree sets

    OpenAIRE

    Diestel, Reinhard

    2015-01-01

    We study an abstract notion of tree structure which generalizes tree-decompositions of graphs and matroids. Unlike tree-decompositions, which are too closely linked to graph-theoretical trees, these `tree sets' can provide a suitable formalization of tree structure also for infinite graphs, matroids, or set partitions, as well as for other discrete structures, such as order trees. In this first of two papers we introduce tree sets, establish their relation to graph and order trees, and show h...

  2. Evaluating Summary Methods for Multilocus Species Tree Estimation in the Presence of Incomplete Lineage Sorting.

    Science.gov (United States)

    Mirarab, Siavash; Bayzid, Md Shamsuzzoha; Warnow, Tandy

    2016-05-01

    Species tree estimation is complicated by processes, such as gene duplication and loss and incomplete lineage sorting (ILS), that cause discordance between gene trees and the species tree. Furthermore, while concatenation, a traditional approach to tree estimation, has excellent performance under many conditions, the expectation is that the best accuracy will be obtained through the use of species tree estimation methods that are specifically designed to address gene tree discordance. In this article, we report on a study to evaluate MP-EST (one of the most popular species tree estimation methods designed to address ILS) as well as concatenation under maximum likelihood, the greedy consensus, and two supertree methods (Matrix Representation with Parsimony and Matrix Representation with Likelihood). Our study shows that several factors impact the absolute and relative accuracy of methods, including the number of gene trees, the accuracy of the estimated gene trees, and the amount of ILS. Concatenation can be more accurate than the best summary methods in some cases (mostly when the gene trees have poor phylogenetic signal or when the level of ILS is low), but summary methods are generally more accurate than concatenation when there are an adequate number of sufficiently accurate gene trees. Our study suggests that coalescent-based species tree methods may be key to estimating highly accurate species trees from multiple loci. PMID:25164915

  3. Cover Tree Bayesian Reinforcement Learning

    OpenAIRE

    Tziortziotis, Nikolaos; Dimitrakakis, Christos; Blekas, Konstantinos

    2013-01-01

    This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution on multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration po...

  4. Diophantine approximations

    CERN Document Server

    Niven, Ivan

    2008-01-01

    This self-contained treatment originated as a series of lectures delivered to the Mathematical Association of America. It covers basic results on homogeneous approximation of real numbers; the analogue for complex numbers; basic results for nonhomogeneous approximation in the real case; the analogue for complex numbers; and fundamental properties of the multiples of an irrational number, for both the fractional and integral parts. The author refrains from the use of continued fractions and includes basic results in the complex case, a feature often neglected in favor of the real number discuss

  5. Planting Trees

    OpenAIRE

    Relf, Diane

    2009-01-01

    The key aspects in planning a tree planting are determining the function of the tree, assessing the site conditions, making sure the tree is suited to the site conditions and available space, and deciding whether you are better served by a container-grown tree. After the tree is planted according to the prescribed steps, you must irrigate as needed and mulch the root zone area.

  6. Approximate bayesian parameter inference for dynamical systems in systems biology

    International Nuclear Information System (INIS)

    This paper proposes to use approximate instead of exact stochastic simulation algorithms for approximate Bayesian parameter inference of dynamical systems in systems biology. It first presents the mathematical framework for the description of systems biology models, especially from the aspect of a stochastic formulation as opposed to deterministic model formulations based on the law of mass action. In contrast to maximum likelihood methods for parameter inference, approximate inference methods are presented which are based on sampling parameters from a known prior probability distribution, which gradually evolves toward a posterior distribution, through the comparison of simulated data from the model to a given data set of measurements. The paper then discusses the simulation process, where an overview is given of the different exact and approximate methods for stochastic simulation and the improvements that we propose. The exact and approximate simulators are implemented and used within approximate Bayesian parameter inference methods. Our evaluation of these methods on two parameter estimation tasks in two different models shows that equally good results are obtained much faster when using approximate simulation as compared to using exact simulation. (Author)
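
    The simplest member of this family of methods, plain ABC rejection sampling, fits in a few lines; the sketch below is a generic illustration with a toy exponential model, not the simulators or biological models evaluated in the paper:

```python
import numpy as np

def abc_rejection(simulate, prior_sample, observed, distance, eps, n_samples):
    """Plain ABC rejection sampling (illustrative only).

    simulate(theta)  -> synthetic data set for parameters theta
    prior_sample()   -> one draw from the prior
    distance(a, b)   -> discrepancy between two data sets
    eps              -> acceptance threshold
    """
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy usage: infer the rate of an exponential waiting-time model.
rng = np.random.default_rng(1)
observed = rng.exponential(1 / 2.0, size=50)          # "true" rate = 2.0
posterior = abc_rejection(
    simulate=lambda lam: rng.exponential(1 / lam, size=50),
    prior_sample=lambda: rng.uniform(0.1, 5.0),
    observed=observed,
    distance=lambda a, b: abs(a.mean() - b.mean()),   # summary-statistic distance
    eps=0.05,
    n_samples=200,
)
print(posterior.mean())
```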

  7. NML Computation Algorithms for Tree-Structured Multinomial Bayesian Networks

    Directory of Open Access Journals (Sweden)

    Kontkanen Petri

    2007-01-01

    Typical problems in bioinformatics involve large discrete datasets. Therefore, in order to apply statistical methods in such domains, it is important to develop efficient algorithms suitable for discrete data. The minimum description length (MDL) principle is a theoretically well-founded, general framework for performing statistical inference. The mathematical formalization of MDL is based on the normalized maximum likelihood (NML) distribution, which has several desirable theoretical properties. In the case of discrete data, straightforward computation of the NML distribution requires exponential time with respect to the sample size, since the definition involves a sum over all the possible data samples of a fixed size. In this paper, we first review some existing algorithms for efficient NML computation in the case of multinomial and naive Bayes model families. Then we proceed by extending these algorithms to more complex, tree-structured Bayesian networks.
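
    Concretely, the NML distribution referred to above is (standard definition, with $\hat{\theta}(\cdot)$ the maximum likelihood estimator for the model family):

    $$ P_{\mathrm{NML}}(x^{n}) \;=\; \frac{P\bigl(x^{n}\mid \hat{\theta}(x^{n})\bigr)}{\sum_{y^{n}} P\bigl(y^{n}\mid \hat{\theta}(y^{n})\bigr)} , $$

    where the normalizing sum runs over every possible data sample of size $n$; for discrete data this sum is what makes naive computation exponential in the sample size and motivates the specialised algorithms discussed in the paper.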

  8. Approximate Representations and Approximate Homomorphisms

    OpenAIRE

    Moore, Cristopher; Russell, Alexander

    2010-01-01

    Approximate algebraic structures play a defining role in arithmetic combinatorics and have found remarkable applications to basic questions in number theory and pseudorandomness. Here we study approximate representations of finite groups: functions $f: G \to U_d$ such that $\Pr[f(xy) = f(x)f(y)]$ is large, or more generally $\mathbb{E}_{x,y}\,\|f(xy) - f(x)f(y)\|^2$ is small, where $x$ and $y$ are uniformly random elements of the group $G$ and $U_d$ denotes the unitary group of degree $d$. We bound these quantities i...

  9. Accuracy of the Bethe approximation for hyperparameter estimation in probabilistic image processing

    International Nuclear Information System (INIS)

    We investigate the accuracy of statistical-mechanical approximations for the estimation of hyperparameters from observable data in probabilistic image processing, which is based on Bayesian statistics and maximum likelihood estimation. Hyperparameters in statistical science correspond to interactions or external fields in the statistical-mechanics context. In this paper, hyperparameters in the probabilistic model are determined so as to maximize a marginal likelihood. A practical algorithm is described for grey-level image restoration based on a Gaussian graphical model and the Bethe approximation. The algorithm corresponds to loopy belief propagation in artificial intelligence. We examine the accuracy of hyperparameter estimation when we use the Bethe approximation. It is well known that a practical algorithm for probabilistic image processing can be prescribed analytically when a Gaussian graphical model is adopted as a prior probabilistic model in Bayes' formula. We are therefore able to compare, in a numerical study, results obtained through mean-field-type approximations with those based on exact calculation

  10. Approximate Matching of Hierarchical Data

    DEFF Research Database (Denmark)

    Augsten, Nikolaus

    The goal of this thesis is to design, develop, and evaluate new methods for the approximate matching of hierarchical data represented as labeled trees. In approximate matching scenarios two items should be matched if they are similar. Computing the similarity between labeled trees is hard as in... We formally prove that the pq-gram index can be incrementally updated based on the log of edit operations without reconstructing intermediate tree versions. The incremental update is independent of the data size and scales to a large number of changes in the data. We introduce windowed pq-grams for the... ...gram based distance between streets, introduces a global greedy matching that guarantees stable pairs, and links addresses that are stored with different granularity. The connector has been successfully tested with public administration databases. Our extensive experiments on both synthetic and real world...

  11. Autoencoder Trees

    OpenAIRE

    İrsoy, Ozan; Alpaydın, Ethem

    2014-01-01

    We discuss an autoencoder model in which the encoding and decoding functions are implemented by decision trees. We use the soft decision tree where internal nodes realize soft multivariate splits given by a gating function and the overall output is the average of all leaves weighted by the gating values on their path. The encoder tree takes the input and generates a lower dimensional representation in the leaves and the decoder tree takes this and reconstructs the original input. Exploiting t...

  12. Approximate Likelihood

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
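
    One standard way of making this work, often called the likelihood-ratio trick and quoted here as background rather than as the speaker's specific recipe: train a classifier $s(x)$ on samples simulated at parameter points $\theta_0$ and $\theta_1$ (balanced classes); if $s(x)$ approximates $\Pr(\theta_1\mid x)$, then

    $$ \frac{p(x\mid\theta_1)}{p(x\mid\theta_0)} \;\approx\; \frac{s(x)}{1-s(x)} , $$

    so a high-dimensional feature vector can be compressed into an approximate likelihood ratio without ever writing down the densities explicitly.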

  13. Diophantine approximation

    CERN Document Server

    Schmidt, Wolfgang M

    1980-01-01

    "In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approxi- mations to algebraic numbers. The present volume is an ex- panded and up-dated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field."(Bull.LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)

  14. Ultrafast Approximation for Phylogenetic Bootstrap

    NARCIS (Netherlands)

    Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt

    2013-01-01

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and

  15. A best-first tree-searching approach for ML decoding in MIMO system

    KAUST Repository

    Shen, Chung-An

    2012-07-28

    In MIMO communication systems, maximum-likelihood (ML) decoding can be formulated as a tree-searching problem. This paper presents a tree-searching approach that combines the features of classical depth-first and breadth-first approaches to achieve close to ML performance while minimizing the number of visited nodes. A detailed outline of the algorithm is given, including the required storage. The effects of storage size on BER performance and complexity in terms of search space are also studied. Our results demonstrate that with a proper choice of storage size the proposed method visits 40% fewer nodes than a sphere decoding algorithm at a signal-to-noise ratio (SNR) of 20 dB, and an order of magnitude fewer at 0 dB SNR.
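
    To make the tree-search formulation concrete, here is a toy best-first (Dijkstra-like) ML detector for a small real-valued system; it illustrates the principle only and is not the storage-constrained hardware-oriented algorithm studied in the paper:

```python
import heapq
import numpy as np

def best_first_ml_detect(y, H, alphabet):
    """Best-first tree search for ML detection of y = H s + n.

    The system is QR-decomposed so the metric accumulates one symbol at a
    time from the last antenna to the first.  Every branch adds a
    non-negative partial cost, so the first full-length node popped from the
    priority queue is the ML solution.
    """
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    heap = [(0.0, 0, [])]                     # (cost, tie-breaker, partial symbols)
    counter = 1
    while heap:
        cost, _, partial = heapq.heappop(heap)
        level = n - 1 - len(partial)          # next row of R to use
        if level < 0:
            return np.array(partial[::-1]), cost   # full path: ML solution
        for s in alphabet:
            symbols = partial + [s]
            trial = np.array(symbols[::-1])   # s_level, ..., s_{n-1}
            resid = z[level] - R[level, level:] @ trial
            heapq.heappush(heap, (cost + resid ** 2, counter, symbols))
            counter += 1
    return None, np.inf

# Example: 2x2 real channel with BPSK symbols {-1, +1}.
H = np.array([[1.0, 0.4], [0.3, 1.2]])
s_true = np.array([1.0, -1.0])
y = H @ s_true + 0.05 * np.random.default_rng(0).standard_normal(2)
print(best_first_ml_detect(y, H, alphabet=(-1.0, 1.0)))
```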

  16. Identification and Mapping of Tree Species in Urban Areas Using WORLDVIEW-2 Imagery

    Science.gov (United States)

    Mustafa, Y. T.; Habeeb, H. N.; Stein, A.; Sulaiman, F. Y.

    2015-10-01

    Monitoring and mapping of urban trees are essential to provide urban forestry authorities with timely and consistent information. Modern techniques increasingly facilitate these tasks, but require the development of semi-automatic tree detection and classification methods. In this article, we propose an approach to delineate and map the crowns of 15 tree species in the city of Duhok, Kurdistan Region of Iraq, using WorldView-2 (WV-2) imagery. A tree crown object is identified first and is subsequently delineated as an image object (IO) using vegetation indices and texture measurements. Next, three classification methods, Maximum Likelihood, Neural Network, and Support Vector Machine, were used to classify the IOs using selected IO features. The best results were obtained with the Support Vector Machine classification, which gives the best map of urban tree species in Duhok. The overall accuracy ranged from 60.93% to 88.92% and the κ-coefficient ranged from 0.57 to 0.75. We conclude that fifteen tree species were identified and mapped with satisfactory accuracy in the urban areas of this study.

  17. Species tree estimation for the late blight pathogen, Phytophthora infestans, and close relatives.

    Directory of Open Access Journals (Sweden)

    Jaime E Blair

    To better understand the evolutionary history of a group of organisms, an accurate estimate of the species phylogeny must be known. Traditionally, gene trees have served as a proxy for the species tree, although it was acknowledged early on that these trees represented different evolutionary processes. Discordances among gene trees and between the gene trees and the species tree are also expected in closely related species that have rapidly diverged, due to processes such as the incomplete sorting of ancestral polymorphisms. Recently, methods have been developed for the explicit estimation of species trees, using information from multilocus gene trees while accommodating heterogeneity among them. Here we have used three distinct approaches to estimate the species tree for five Phytophthora pathogens, including P. infestans, the causal agent of late blight disease in potato and tomato. Our concatenation-based "supergene" approach was unable to resolve relationships even with data from both the nuclear and mitochondrial genomes, and from multiple isolates per species. Our multispecies coalescent approach using both Bayesian and maximum likelihood methods was able to estimate a moderately supported species tree showing a close relationship among P. infestans, P. andina, and P. ipomoeae. The topology of the species tree was also identical to the dominant phylogenetic history estimated in our third approach, Bayesian concordance analysis. Our results support previous suggestions that P. andina is a hybrid species, with P. infestans representing one parental lineage. The other parental lineage is not known, but represents an independent evolutionary lineage more closely related to P. ipomoeae. While all five species likely originated in the New World, further study is needed to determine when and under what conditions this hybridization event may have occurred.

  18. Holy Trees

    OpenAIRE

    Elosua, Miguel

    2013-01-01

    Puxi's streets are lined with plane trees, especially in the former French Concession (and particularly in the Luwan and Xuhui districts). There are a few different varieties of plane tree, but the one found in Shanghai, is the hybrid platane hispanica. In China they are called French Plane trees (faguo wutong - 法国梧桐), for they were first planted along the Avenue Joffre (now Huai Hai lu - 淮海路) in 1902 by the French. Their life span is long, over a thousand years, and they may grow as high as ...

  19. Electron Tree

    DEFF Research Database (Denmark)

    Appelt, Ane L; Rønde, Heidi S

    2013-01-01

    The photo shows a close-up of a Lichtenberg figure – popularly called an "electron tree" – produced in a cylinder of polymethyl methacrylate (PMMA). Electron trees are created by irradiating a suitable insulating material, in this case PMMA, with an intense high energy electron beam. Upon discharge, during dielectric breakdown in the material, the electrons generate branching chains of fractures on leaving the PMMA, producing the tree pattern seen. To be able to create electron trees with a clinical linear accelerator, one needs to access the primary electron beam used for photon treatments. We appropriated a linac that was being decommissioned in our department and dismantled the head to circumvent the target and ion chambers. This is one of 24 electron trees produced before we had to stop the fun and allow the rest of the accelerator to be disassembled.

  20. Mapping and characterizing selected canopy tree species at the Angkor World Heritage site in Cambodia using aerial data.

    Science.gov (United States)

    Singh, Minerva; Evans, Damian; Tan, Boun Suy; Nin, Chan Samean

    2015-01-01

    At present, there is very limited information on the ecology, distribution, and structure of Cambodia's tree species to warrant suitable conservation measures. The aim of this study was to assess various methods of analysis of aerial imagery for characterization of the forest mensuration variables (i.e., tree height and crown width) of selected tree species found in the forested region around the temples of Angkor Thom, Cambodia. Object-based image analysis (OBIA) was used (using multiresolution segmentation) to delineate individual tree crowns from very-high-resolution (VHR) aerial imagery and light detection and ranging (LiDAR) data. Crown width and tree height values that were extracted using multiresolution segmentation showed a high level of congruence with field-measured values of the trees (Spearman's rho 0.782 and 0.589, respectively). Individual tree crowns that were delineated from aerial imagery using multiresolution segmentation had a high level of segmentation accuracy (69.22%), whereas tree crowns delineated using watershed segmentation underestimated the field-measured tree crown widths. Both spectral angle mapper (SAM) and maximum likelihood (ML) classifications were applied to the aerial imagery for mapping of selected tree species. The latter was found to be more suitable for tree species classification. Individual tree species were identified with high accuracy. Inclusion of textural information further improved species identification, albeit marginally. Our findings suggest that VHR aerial imagery, in conjunction with OBIA-based segmentation methods (such as multiresolution segmentation) and supervised classification techniques are useful for tree species mapping and for studies of the forest mensuration variables. PMID:25902148

  1. Maximum likelihood fit of hadronic background in the 1982 BEBC beam dump experiment

    International Nuclear Information System (INIS)

    The 1982 CERN beam dump experiment WA66 used the Big European Bubble Chamber (BEBC) to detect interactions of neutrinos produced in a copper target 406 m upstream of the chamber. BEBC was filled with a Ne/H2 mixture with a density of 0.69 g/cm3. Events were accepted inside a fiducial volume of 16.6 m3 with a maximum depth in the beam direction of 3.1 m. The neutrino reactions are either of the charged current type (CC), with a charged lepton (e or μ) in the final state, or else they are neutral currents (NC) without any observed lepton. Because of neutrino interactions in the material immediately upstream of the chamber, there are also neutral hadrons entering the chamber. Some of these react in the chamber and constitute a background in the NC sample. One possible way of determining the hadron contamination is to look at the distance that the particles travel through the fiducial volume before interacting. The distribution of this variable (x) will be different for hadrons and neutrinos because of the short (ca 200 cm) interaction length of the hadrons. When making fits to the x spectra for different energy intervals, one suffers from poor statistics, and the energy spectrum obtained for the hadron component is unreasonable. This report describes an attempt to remedy this by including the observed energy spectrum for secondaries from neutrino interactions (associated N+:s) in BEBC and making one fit for all energies at once. The method described gives estimates for the neutrino signal in the NC sample that appear reasonable, and with errors that agree with the Poisson error for high energies where the background is small. Comparing the fit to data with fits to Monte Carlo samples generated using the fitted model, one obtains a goodness-of-fit of 0.45. The Monte Carlo fit results for the neutrino signal, compared with the 'true' signal generated, show that there is no appreciable systematic shift and that the errors are correctly determined. It seems reasonable to use the results from the fit when analysing the experiment. The correction to the raw NC sample is (2 ± 4)% above 10 GeV hadronic energy. With 3 refs and 7 figures. (Author)
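
    The essence of such a fit, estimating the hadron fraction from the interaction-depth distribution, can be sketched as follows; this is a deliberately simplified illustration with toy numbers (flat neutrino component, truncated-exponential hadron component), not the actual WA66 likelihood:

```python
import numpy as np
from scipy.optimize import minimize

L = 310.0          # fiducial depth along the beam (cm), cf. 3.1 m in the text
LAMBDA = 200.0     # hadron interaction length (cm), "ca 200 cm"

def neg_log_lik(params, x):
    """Mixture of a truncated exponential (neutral hadrons) and a flat
    distribution (neutrino events) for the interaction depth x in [0, L]."""
    f = params[0]                                    # hadron fraction
    if not 0.0 < f < 1.0:
        return np.inf
    norm = 1.0 - np.exp(-L / LAMBDA)                 # truncation normalisation
    p_had = np.exp(-x / LAMBDA) / (LAMBDA * norm)
    p_nu = np.ones_like(x) / L
    return -np.sum(np.log(f * p_had + (1.0 - f) * p_nu))

# Toy data: 400 neutrino-like (flat) and 100 hadron-like (exponential) events.
rng = np.random.default_rng(0)
x_nu = rng.uniform(0, L, size=400)
x_had = rng.exponential(LAMBDA, size=2000)
x_had = x_had[x_had < L][:100]
x = np.concatenate([x_nu, x_had])

res = minimize(neg_log_lik, x0=[0.5], args=(x,), method="Nelder-Mead")
print("fitted hadron fraction:", res.x[0])
```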

  2. Adaptive wave filtering for dynamic positioning of marine vessels using maximum likelihood identification: Theory and experiments

    Digital Repository Service at National Institute of Oceanography (India)

    Hassani, V.; Sorensen, A.J.; Pascoal, A.M.

    This paper addresses a filtering problem that arises in the design of dynamic positioning systems for ships and offshore rigs subjected to the influence of sea waves. The dynamic model of the vessel captures explicitly the sea state as an uncertain...

  3. Full-information Maximum Likelihood Estimation of Brand Positioning Maps Using Supermarket Scanning Data

    NARCIS (Netherlands)

    E. Waarts (Eric); M.A. Carree (Martin); B. Wierenga (Berend)

    1991-01-01

    textabstractThe authors build on the idea put forward by Shugan to infer product maps from scanning data. They demonstrate that the actual estimation procedure used by Shugan has several methodological problems and may yield unstable estimates. They propose an alternative estimation procedure, full-

  4. MAXIMUM LIKELIHOOD CURVES FOR MULTIPLE OBJECTS EXTRACTION: APPLICATION TO RADIOGRAPHIC INSPECTION FOR WELD DEFECTS DETECTION

    OpenAIRE

    Aicha Baya Goumeidane; Mohammed Khamadja; Nafaa Nacereddine

    2011-01-01

    This paper presents an adaptive probabilistic region-based deformable model using an explicit representation that aims to extract defects automatically from a radiographic film. To deal with the high computational cost of such a model, an adaptive polygonal representation is used and the search space for the greedy-based model evolution is reduced. Furthermore, we adapt this explicit model to handle topological changes in the presence of multiple defects.

  5. The Finite Population Bootstrap - From the Maximum Likelihood to the Horvitz-Thompson Approach

    Directory of Open Access Journals (Sweden)

    Andreas Quatember

    2014-06-01

    The finite population bootstrap method is used as a computer-intensive alternative to estimate the sampling distribution of a sample statistic. The generation of a so-called "bootstrap population" is the necessary step between the original sample drawn and the resamples needed to mimic this distribution. The most important question for researchers to answer is how to create an adequate bootstrap population, which may serve as a close-to-reality basis for the resampling process. In this paper, a review of some approaches to answer this fundamental question is presented. Moreover, an approach based on the idea behind the Horvitz-Thompson estimator, allowing not only whole units in the bootstrap population but also parts of whole units, is proposed. In a simulation study, this method is compared with a more heuristic technique from the bootstrap literature.
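
    A rough sketch of how a Horvitz-Thompson-style pseudo-population can be generated (an illustration of the general idea with an equal-probability design; the paper's treatment of the fractional parts is more careful than the randomised rounding used here):

```python
import numpy as np

def bootstrap_population(sample, inclusion_probs, rng):
    """Replicate each sampled unit i roughly 1/pi_i times to form a
    pseudo-population; the fractional part is realised at random so the
    pseudo-population size is correct in expectation."""
    weights = 1.0 / np.asarray(inclusion_probs)
    copies = np.floor(weights).astype(int)
    copies += rng.random(len(weights)) < (weights - np.floor(weights))
    return np.repeat(np.asarray(sample), copies)

rng = np.random.default_rng(0)
sample = rng.normal(50.0, 10.0, size=100)     # observed values
pi = np.full(100, 100 / 1000)                 # equal-probability design, N = 1000
pseudo_pop = bootstrap_population(sample, pi, rng)

# Resample without replacement from the pseudo-population to mimic the
# sampling distribution of the mean under the original design.
boot_means = [rng.choice(pseudo_pop, size=100, replace=False).mean()
              for _ in range(500)]
print(np.std(boot_means))
```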

  6. Maximum likelihood estimation of neutral model parameters for multiple samples with different degrees of dispersal limitation

    NARCIS (Netherlands)

    Etienne, Rampal S.

    2009-01-01

    In a recent paper, I presented a sampling formula for species abundances from multiple samples according to the prevailing neutral model of biodiversity, but practical implementation for parameter estimation was only possible when these samples were from local communities that were assumed to be equ

  7. Pilot power optimization for AF relaying using maximum likelihood channel estimation

    KAUST Repository

    Wang, Kezhi

    2014-09-01

    Bit error rates (BERs) for amplify-and-forward (AF) relaying systems with two different pilot-symbol-aided channel estimation methods, disintegrated channel estimation (DCE) and cascaded channel estimation (CCE), are derived in Rayleigh fading channels. Based on these BERs, the pilot powers at the source and at the relay are optimized when their total transmitting powers are fixed. Numerical results show that the optimized system has a better performance than other conventional nonoptimized allocation systems. They also show that the optimal pilot power in variable gain is nearly the same as that in fixed gain for similar system settings. © 2014 IEEE.

  8. Block Network Error Control Codes and Syndrome-based Complete Maximum Likelihood Decoding

    CERN Document Server

    Bahramgiri, Hossein

    2008-01-01

    In this paper, network error control coding is studied for robust and efficient multicast in a directed acyclic network with imperfect links. The block network error control coding framework, BNEC, is presented and the capability of the scheme to correct a mixture of symbol errors and packet erasures and to detect symbol errors is studied. The idea of syndrome-based decoding and error detection is introduced for BNEC, which removes the effect of input data and hence decreases the complexity. Next, an efficient three-stage syndrome-based BNEC decoding scheme for network error correction is proposed, in which prior to finding the error values, the position of the edge errors are identified based on the error spaces at the receivers. In addition to bounded-distance decoding schemes for error correction up to the refined Singleton bound, a complete decoding scheme for BNEC is also introduced. Specifically, it is shown that using the proposed syndrome-based complete decoding, a network error correcting code with r...

  9. Real-time Data Acquisition and Maximum-Likelihood Estimation for Gamma Cameras

    OpenAIRE

    Furenlid, L R; Hesterman, J.Y.; Barrett, H. H.

    2005-01-01

    We have developed modular gamma-ray cameras for biomedical imaging that acquire data with a raw list-mode acquisition architecture. All observations associated with a gamma-ray event, such as photomultiplier (PMT) signals and time, are assembled into an event packet and added to an ordered list of event entries that comprise the acquired data. In this work we present the design of the data-acquisition system, and discuss algorithms for a specialized computing engine to reside in the data path...

  10. Stochastic identification using the maximum likelihood method and a statistical reduction: application to drilling dynamics

    OpenAIRE

    Ritto, T. G.; Soize, Christian; Sampaio, R

    2010-01-01

    A drill-string is a slender structure that drills rock to search for oil. The nonlinear interaction between the bit and the rock is of great importance for the drill-string dynamics. The interaction model has uncertainties, which are modeled using the nonparametric probabilistic approach. This paper deals with a procedure to perform the identification of the dispersion parameter of the probabilistic model of uncertainties of a bit-rock interaction model. The bit-rock interaction model is repr...

  11. maxLik: A package for maximum likelihood estimation in R

    DEFF Research Database (Denmark)

    Henningsen, Arne; Toomet, Ott

    2011-01-01

    This paper describes the package maxLik for the statistical environment R. The package is essentially a unified wrapper interface to various optimization routines, offering easy access to likelihood-specific features like standard errors or information matrix equality (BHHH method). More advanced...

  12. GooFit: A library for massively parallelising maximum-likelihood fits

    International Nuclear Information System (INIS)

    Fitting complicated models to large datasets is a bottleneck of many analyses. We present GooFit, a library and tool for constructing arbitrarily-complex probability density functions (PDFs) to be evaluated on nVidia GPUs or on multicore CPUs using OpenMP. The massive parallelisation of dividing up event calculations between hundreds of processors can achieve speedups of factors 200-300 in real-world problems.
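
    The bottleneck GooFit parallelises is the per-event evaluation of the likelihood. As a rough illustration of the computation involved (written here with plain NumPy/SciPy rather than GooFit's own API, with a toy Gaussian-signal-plus-exponential-background model chosen only for concreteness):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm, expon

        def nll(params, x):
            mu, sigma, tau, frac = params
            # Per-event PDF values: this vectorised loop over events is the part
            # a GPU library would distribute across many threads.
            sig = norm.pdf(x, mu, sigma)
            bkg = expon.pdf(x, scale=tau)
            pdf = frac * sig + (1.0 - frac) * bkg
            return -np.sum(np.log(pdf + 1e-300))

        rng = np.random.default_rng(1)
        data = np.concatenate([rng.normal(5.0, 0.5, 20_000),
                               rng.exponential(3.0, 80_000)])
        res = minimize(nll, x0=[4.8, 0.7, 2.5, 0.3], args=(data,),
                       method="L-BFGS-B",
                       bounds=[(0, 10), (0.05, 5), (0.1, 20), (0.0, 1.0)])
        print(res.x)   # recovers roughly (5.0, 0.5, 3.0, 0.2)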

  13. Maximum-likelihood density modification using pattern recognition of structural motifs

    International Nuclear Information System (INIS)

    A likelihood-based density-modification method is extended to include pattern recognition of structural motifs. The likelihood-based approach to density modification [Terwilliger (2000), Acta Cryst. D56, 965–972] is extended to include the recognition of patterns of electron density. Once a region of electron density in a map is recognized as corresponding to a known structural element, the likelihood of the map is reformulated to include a term that reflects how closely the map agrees with the expected density for that structural element. This likelihood is combined with other aspects of the likelihood of the map, including the presence of a flat solvent region and the electron-density distribution in the protein region. This likelihood-based pattern-recognition approach was tested using the recognition of helical segments in a largely helical protein. The pattern-recognition method yields a substantial phase improvement over both conventional and likelihood-based solvent-flattening and histogram-matching methods. The method can potentially be used to recognize any common structural motif and incorporate prior knowledge about that motif into density modification

  14. Maximum likelihood positioning in the scintillation camera using depth of interaction

    International Nuclear Information System (INIS)

    The spatial (X and Y) dependence of the photomultiplier (PM) response in the Anger gamma camera has been thoroughly described in the past. The light distribution to individual PMs in gamma cameras--the solid angle seen by each photocathode--being a truly three-dimensional problem, the depth of interaction (DOI) has to be included in the analysis of the PM output. Furthermore, DOI being a stochastic process, it has to be considered explicitly, on an event-by-event basis, while evaluating both position and energy. Specific effects of the DOI on the PM response have been quantified. The method was implemented and tested on a Monte Carlo simulator with special care taken with the noise modeling. Two models were developed, a first one considering only the geometric aspects of the camera and used for comparison, and a second one describing a more realistic camera environment. In a typical camera configuration and for 140 keV photons, the DOI alone can account for a 6.4 mm discrepancy in position and 12% in energy between two scintillations. Variation of the DOI can bring additional distortions when photons do not enter the crystal perpendicularly, such as in slant hole, cone beam and other focusing collimators. With a 0.95 cm crystal and a 30 degree slant angle, the obliquity factor can be responsible for a 5.5 mm variation in the event position. Results indicate that both geometrical and stochastic effects of the DOI are definitely degrading the camera performance and should be included in the image formation process.
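
    A minimal sketch of maximum-likelihood positioning with the depth of interaction treated as a free parameter; the solid-angle mean-response model and all numerical values below are hypothetical placeholders, not the detector model used in the study above:

        import numpy as np

        def expected_pmt_signals(x, y, z, pmt_xy, gain=2000.0):
            # Hypothetical mean-response model: light collected by each PMT is
            # proportional to the solid angle subtended at the scintillation
            # point (x, y) at depth z, i.e. ~ z / r^3 for a small photocathode.
            dx = pmt_xy[:, 0] - x
            dy = pmt_xy[:, 1] - y
            r2 = dx**2 + dy**2 + z**2
            return gain * z / (4.0 * np.pi * r2**1.5)

        def ml_position(counts, pmt_xy, grid):
            # Maximise the Poisson log-likelihood of the observed PMT counts
            # over a grid of candidate (x, y, z) interaction points.
            best, best_ll = None, -np.inf
            for x, y, z in grid:
                lam = expected_pmt_signals(x, y, z, pmt_xy)
                ll = np.sum(counts * np.log(lam) - lam)   # dropping the log(k!) term
                if ll > best_ll:
                    best, best_ll = (x, y, z), ll
            return best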

  15. Estimation of Spatial Sample Selection Models : A Partial Maximum Likelihood Approach

    NARCIS (Netherlands)

    Rabovic, Renata; Cizek, Pavel

    2016-01-01

    To analyze data obtained by non-random sampling in the presence of cross-sectional dependence, estimation of a sample selection model with a spatial lag of a latent dependent variable or a spatial error in both the selection and outcome equations is considered. Since there is no estimation framework

  16. A new maximum likelihood blood velocity estimator incorporating spatial and temporal correlation

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2001-01-01

    performance evaluation on in-vivo data further reveals that the number of highly deviating velocity estimates in the tissue parts of the RF-signals are reduced with the STC-MLE. In general the resulting profiles are continuous and more consistent with the true velocity profile, and the introduction of the...

  17. Maximum-likelihood reconstruction of photon returns from simultaneous analog and photon-counting lidar measurements

    CERN Document Server

    Veberic, Darko

    2011-01-01

    We present a novel method for combining the analog and photon-counting measurements of lidar transient recorders into reconstructed photon returns. The method takes into account the statistical properties of the two measurement modes and estimates the most likely number of arriving photons and the most likely values of the acquisition parameters describing the two measurement modes. It extends and improves the standard combining ("gluing") methods and does not rely on any ad hoc definitions of the overlap region nor on any background subtraction methods.

  18. Maximum Likelihood Estimation in Latent Class Models For Contingency Table Data

    OpenAIRE

    Fienberg, S.E.; Hersh, P.; Rinaldo, A.; Zhou, Y

    2007-01-01

    Statistical models with latent structure have a history going back to the 1950s and have seen widespread use in the social sciences and, more recently, in computational biology and in machine learning. Here we study the basic latent class model proposed originally by the sociologist Paul F. Lazarfeld for categorical variables, and we explain its geometric structure. We draw parallels between the statistical and geometric properties of latent class models and we illustrate geometrically the ca...

  19. Estimating Water Demand in Urban Indonesia: A Maximum Likelihood Approach to block Rate Pricing Data

    NARCIS (Netherlands)

    Rietveld, Piet; Rouwendal, Jan; Zwart, Bert

    1997-01-01

    In this paper the Burtless and Hausman model is used to estimate water demand in Salatiga, Indonesia. Other statistical models, such as OLS and IV, are found to be inappropriate. A topic, which does not seem to appear in previous studies, is the fact that the density function of the loglikelihood can be m

  20. 3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm

    CERN Document Server

    Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia

    2015-01-01

    Positron emission tomographs (PET) do not measure an image directly. Instead, they measure, at the boundary of the field-of-view (FOV) of the PET tomograph, a sinogram that consists of measurements of the sums of all the counts along the lines connecting two detectors. As there is a multitude of detectors built into a typical PET tomograph structure, there are many possible detector pairs that pertain to the measurement. The problem is how to turn this measurement into an image (this is called imaging). Decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques. This stage was reached already twenty years ago (with the advent of new powerful computing processors). However, three-dimensional (3D) imaging still remains a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines-of-response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
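
    The basic iterative technique referred to above is the multiplicative MLEM update. A minimal dense-matrix sketch (a realistic PET system matrix is sparse and far too large to be stored this way; this only illustrates the update rule):

        import numpy as np

        def mlem(A, y, n_iter=50):
            # A : (n_lor, n_voxel) system matrix, A[i, j] ~ probability that a
            #     decay in voxel j is detected along line of response i
            # y : (n_lor,) measured counts per sinogram bin
            x = np.ones(A.shape[1])             # flat initial image
            sens = A.sum(axis=0) + 1e-12        # sensitivity (normalisation) image
            for _ in range(n_iter):
                proj = A @ x + 1e-12            # forward projection of current image
                x *= (A.T @ (y / proj)) / sens  # multiplicative EM update
            return x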

  1. Maximum likelihood estimators for extended growth curve model with orthogonal between-individual design matrices

    NARCIS (Netherlands)

    Klein, Daniel; Zezula, Ivan

    2015-01-01

    The extended growth curve model is discussed in this paper. There are two versions of the model studied in the literature, which differ in the way how the column spaces of the design matrices are nested. The nesting is applied either to the between-individual or to the within-individual design matri

  2. Maximum Likelihood based comparison of the specific growth rates for P. aeruginosa and four mutator strains

    DEFF Research Database (Denmark)

    Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard; Ciufo, Oana; Madsen, Henrik

    2008-01-01

    exponentially decaying function of the time between observations is suggested. A model with a full covariance structure containing OD-dependent variance and an autocorrelation structure is compared to a model with variance only and with no variance or correlation implemented. It is shown that the model that...

  3. Maximum Likelihood Estimation in the Tensor Normal Model with a Structured Mean

    OpenAIRE

    Nzabanita, Joseph; von Rosen, Dietrich; Singull, Martin

    2015-01-01

    There is a growing interest in the analysis of multi-way data. In some studies the inference about the dependencies in three-way data is done using the third order tensor normal model, where the focus is on the estimation of the variance-covariance matrix which has a Kronecker product structure. Little attention is paid to the structure of the mean, though, there is a potential to improve the analysis by assuming a structured mean. In this paper, we introduce a 2-fold growth curve model by as...

  4. Improved Maximum Likelihood S-FSK Receiver for PLC Modem in AMR

    Directory of Open Access Journals (Sweden)

    Mohamed Chaker Bali

    2012-01-01

    Full Text Available This paper deals with an optimized software implementation of a narrowband power line modem. The modem is a node in an automatic meter reading (AMR) system compliant to the IEC 61334-5-1 profile and operates in the CENELEC-A band. Because of the hostile communication environment of the power line channel, a new design approach is carried out for an S-FSK demodulator capable of providing a lower bit error rate (BER) than standard specifications. The best compromise between efficiency and architecture complexity is investigated in this paper. Some implementation results are presented to show that a communication throughput of 9.6 kbps is reachable with the designed S-FSK modem.

  5. Rectilinear Full Steiner Tree Generation

    DEFF Research Database (Denmark)

    Zachariasen, Martin

    1999-01-01

    The fastest exact algorithm (in practice) for the rectilinear Steiner tree problem in the plane uses a two-phase scheme: First, a small but sufficient set of full Steiner trees (FSTs) is generated and then a Steiner minimum tree is constructed from this set by using simple backtrack search, dynam...... generated instances, approximately 4n FSTs are generated (where n is the number of terminals). The observed running time is quadratic and the FSTs for a 10,000 terminal instance can, on average, be generated within 5 minutes....

  6. Fault trees

    International Nuclear Information System (INIS)

    Fault trees are a method of deductive analysis and a means of graphic representation of the reliability and security of systems. The principles of the method are set out and the main points illustrated by many examples of electrical systems, fluids, and mechanical systems as well as everyday occurrences. In addition, some advice is given on the use of the method

  7. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    Let (X,d) be a metric space and (Omega, d) a compact subspace of X which supports a non-atomic finite measure m.  We consider `natural' classes of badly approximable  subsets of Omega. Loosely speaking, these consist of points in Omega which `stay clear' of some given set of points in X....... The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...

  8. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan

    2011-10-10

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n 3) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
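
    A minimal sketch of the two-part construction described above: a reduced-rank (predictive-process) term built from a small set of knots, plus a compactly supported taper applied to the residual covariance. The exponential covariance, Wendland-type taper and parameter values are placeholders:

        import numpy as np

        def exp_cov(d, sigma2=1.0, rho=0.3):
            return sigma2 * np.exp(-d / rho)

        def wendland_taper(d, gamma):
            # Compactly supported correlation, identically zero beyond range gamma.
            t = np.clip(d / gamma, 0.0, 1.0)
            return (1.0 - t) ** 4 * (4.0 * t + 1.0)

        def full_scale_cov(X1, X2, knots, gamma=0.2):
            dist = lambda A, B: np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
            C12 = exp_cov(dist(X1, X2))
            C1k = exp_cov(dist(X1, knots))
            C2k = exp_cov(dist(X2, knots))
            Ckk = exp_cov(dist(knots, knots)) + 1e-10 * np.eye(len(knots))
            reduced = C1k @ np.linalg.solve(Ckk, C2k.T)                        # large-scale part
            residual = (C12 - reduced) * wendland_taper(dist(X1, X2), gamma)   # small-scale part
            return reduced + residual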

  9. Unimodular Trees versus Einstein Trees

    CERN Document Server

    Alvarez, Enrique; Martin, Carmelo P

    2016-01-01

    The maximally helicity violating (MHV) tree level scattering amplitudes involving three, four or five gravitons are worked out in Unimodular Gravity. They are found to coincide with the corresponding amplitudes in General Relativity. This is a remarkable result, insofar as both the propagators and the vertices are quite different in the two theories.

  10. Multiple Tree for Partially Observable Monte-Carlo Tree Search

    OpenAIRE

    Auger, David

    2011-01-01

    We propose an algorithm for computing approximate Nash equilibria of partially observable games using Monte-Carlo tree search based on recent bandit methods. We obtain experimental results for the game of phantom tic-tac-toe, showing that strong strategies can be efficiently computed by our algorithm.

  11. Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm

    KAUST Repository

    Jin, Ick Hoon

    2013-10-01

    The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, while SAMCMC can do; for nondegenerate ERGMs, SAMCMC can work as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.

  12. TreeFam: a curated database of phylogenetic trees of animal gene families

    DEFF Research Database (Denmark)

    Li, Heng; Coghlan, Avril; Ruan, Jue;

    2006-01-01

    TreeFam is a database of phylogenetic trees of gene families found in animals. It aims to develop a curated resource that presents the accurate evolutionary history of all animal gene families, as well as reliable ortholog and paralog assignments. Curated families are being added progressively, based on seed alignments and trees in a similar fashion to Pfam. Release 1.1 of TreeFam contains curated trees for 690 families and automatically generated trees for another 11 646 families. These represent over 128 000 genes from nine fully sequenced animal genomes and over 45 000 other animal proteins from UniProt; approximately 40-85% of proteins encoded in the fully sequenced animal genomes are included in TreeFam. TreeFam is freely available at http://www.treefam.org and http://treefam.genomics.org.cn.

  13. The Impact of Missing Data on Species Tree Estimation.

    Science.gov (United States)

    Xi, Zhenxiang; Liu, Liang; Davis, Charles C

    2016-03-01

    Phylogeneticists are increasingly assembling genome-scale data sets that include hundreds of genes to resolve their focal clades. Although these data sets commonly include a moderate to high amount of missing data, there remains no consensus on their impact to species tree estimation. Here, using several simulated and empirical data sets, we assess the effects of missing data on species tree estimation under varying degrees of incomplete lineage sorting (ILS) and gene rate heterogeneity. We demonstrate that concatenation (RAxML), gene-tree-based coalescent (ASTRAL, MP-EST, and STAR), and supertree (matrix representation with parsimony [MRP]) methods perform reliably, so long as missing data are randomly distributed (by gene and/or by species) and that a sufficiently large number of genes are sampled. When data sets are indecisive sensu Sanderson et al. (2010. Phylogenomics with incomplete taxon coverage: the limits to inference. BMC Evol Biol. 10:155) and/or ILS is high, however, high amounts of missing data that are randomly distributed require exhaustive levels of gene sampling, likely exceeding most empirical studies to date. Moreover, missing data become especially problematic when they are nonrandomly distributed. We demonstrate that STAR produces inconsistent results when the amount of nonrandom missing data is high, regardless of the degree of ILS and gene rate heterogeneity. Similarly, concatenation methods using maximum likelihood can be misled by nonrandom missing data in the presence of gene rate heterogeneity, which becomes further exacerbated when combined with high ILS. In contrast, ASTRAL, MP-EST, and MRP are more robust under all of these scenarios. These results underscore the importance of understanding the influence of missing data in the phylogenomics era. PMID:26589995

  14. Object based technique for delineating and mapping 15 tree species using VHR WorldView-2 imagery

    Science.gov (United States)

    Mustafa, Yaseen T.; Habeeb, Hindav N.

    2014-10-01

    Monitoring and analyzing forests and trees are required tasks for managing and establishing a good plan for forest sustainability. To achieve such tasks, information and data collection on the trees are required. The fastest and relatively low-cost technique is satellite remote sensing. In this study, we proposed an approach to identify and map 15 tree species in the Mangish sub-district, Kurdistan Region-Iraq. Image-objects (IOs) were used as the tree species mapping unit. This is achieved using the shadow index, normalized difference vegetation index and texture measurements. Four classification methods (Maximum Likelihood, Mahalanobis Distance, Neural Network, and Spectral Angle Mapper) were used to classify IOs using selected IO features derived from WorldView-2 imagery. Results showed that overall accuracy was increased by 5-8% using the Neural Network method compared with the other methods, with a Kappa coefficient of 69%. This technique gives reasonable results for various tree species classifications by applying the Neural Network method with IO techniques on WorldView-2 imagery.

  15. Evaluation of Gaussian approximations for data assimilation in reservoir models

    KAUST Repository

    Iglesias, Marco A.

    2013-07-14

    The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geologic parameters of the reservoir, the characterization of the posterior is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used for characterizing the posterior distribution. Evaluating the performance of those Gaussian approximations is typically conducted by assessing their ability at reproducing the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of mixing up the approximation properties of the history matching algorithm employed with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the art Markov chain Monte Carlo (MCMC) method that scales well with respect to the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement. Our

  16. Dynamic Spatial Approximation Trees with clusters for secondary memory

    OpenAIRE

    Britos, Luís; Printista, Alicia Marcela; Reyes, Nora Susana

    2010-01-01

    Metric space searching is an emerging technique to address the problem of efficient similarity searching in many applications, including multimedia databases and other repositories handling complex objects. Although promising, the metric space approach is still immature in several aspects that are well established in traditional databases. In particular, most indexing schemes are not dynamic. From the few dynamic indexes, even fewer work well in secondary memory. That is, most of them need the ...

  17. Modular Tree Automata

    DEFF Research Database (Denmark)

    Bahr, Patrick

    2012-01-01

    Tree automata are traditionally used to study properties of tree languages and tree transformations. In this paper, we consider tree automata as the basis for modular and extensible recursion schemes. We show, using well-known techniques, how to derive from standard tree automata highly modular r...

  18. Bronchi, Bronchial Tree, & Lungs

    Science.gov (United States)

    Bronchi and Bronchial Tree: In the mediastinum, at the level of the ... trachea. As the branching continues through the bronchial tree, the amount of hyaline cartilage in the walls ...

  19. Identification and characterization of toll-like receptors (TLRs) in the Chinese tree shrew (Tupaia belangeri chinensis).

    Science.gov (United States)

    Yu, Dandan; Wu, Yong; Xu, Ling; Fan, Yu; Peng, Li; Xu, Min; Yao, Yong-Gang

    2016-07-01

    In mammals, the toll-like receptors (TLRs) play a major role in initiating innate immune responses against pathogens. Comparison of the TLRs in different mammals may help in understanding the TLR-mediated responses and developing of animal models and efficient therapeutic measures for infectious diseases. The Chinese tree shrew (Tupaia belangeri chinensis), a small mammal with a close relationship to primates, is a viable experimental animal for studying viral and bacterial infections. In this study, we characterized the TLRs genes (tTLRs) in the Chinese tree shrew and identified 13 putative TLRs, which are orthologs of mammalian TLR1-TLR9 and TLR11-TLR13, and TLR10 was a pseudogene in tree shrew. Positive selection analyses using the Maximum likelihood (ML) method showed that tTLR8 and tTLR9 were under positive selection, which might be associated with the adaptation to the pathogen challenge. The mRNA expression levels of tTLRs presented an overall low and tissue-specific pattern, and were significantly upregulated upon Hepatitis C virus (HCV) infection. tTLR4 and tTLR9 underwent alternative splicing, which leads to different transcripts. Phylogenetic analysis and TLR structure prediction indicated that tTLRs were evolutionarily conserved, which might reflect an ancient mechanism and structure in the innate immune response system. Taken together, TLRs had both conserved and unique features in the Chinese tree shrew. PMID:26923770

  20. Comparing Johnson’s SBB, Weibull and Logit-Logistic bivariate distributions for modeling tree diameters and heights using copulas

    Directory of Open Access Journals (Sweden)

    Jose Javier Gorgoso-Varela

    2016-04-01

    Full Text Available Aim of study: In this study we compare the accuracy of three bivariate distributions: Johnson’s SBB, Weibull-2P and LL-2P functions for characterizing the joint distribution of tree diameters and heights. Area of study: North-West of Spain. Material and methods: Diameter and height measurements of 128 plots of pure and even-aged Tasmanian blue gum (Eucalyptus globulus Labill.) stands located in the North-west of Spain were considered in the present study. The SBB bivariate distribution was obtained from SB marginal distributions using a Normal Copula based on a four-parameter logistic transformation. The Plackett Copula was used to obtain the bivariate models from the Weibull and Logit-logistic univariate marginal distributions. The negative logarithm of the maximum likelihood function was used to compare the results and the Wilcoxon signed-rank test was used to compare the related samples of these logarithms calculated for each sample plot and each distribution. Main results: The best results were obtained by using the Plackett copula and the best marginal distribution was the Logit-logistic. Research highlights: The copulas used in this study have shown a good performance for modeling the joint distribution of tree diameters and heights. They could be easily extended for modelling multivariate distributions involving other tree variables, such as tree volume or biomass.

  1. Fuzzy Approximating Spaces

    OpenAIRE

    Bin Qin

    2014-01-01

    Relationships between fuzzy relations and fuzzy topologies are deeply researched. The concept of fuzzy approximating spaces is introduced and decision conditions that a fuzzy topological space is a fuzzy approximating space are obtained.

  2. Stochastic approximation: invited paper

    OpenAIRE

    Lai, Tze Leung

    2003-01-01

    Stochastic approximation, introduced by Robbins and Monro in 1951, has become an important and vibrant subject in optimization, control and signal processing. This paper reviews Robbins' contributions to stochastic approximation and gives an overview of several related developments.
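
    The basic Robbins-Monro recursion takes only a few lines; the sketch below finds a root of g(x) = x - 2 from noisy evaluations, with the classical step sizes a_n = a/n (so that the steps sum to infinity while their squares do not):

        import numpy as np

        def robbins_monro(noisy_g, x0, a=1.0, n_iter=10_000, seed=0):
            # x_{n+1} = x_n - a_n * Y_n, where Y_n is a noisy observation of g(x_n).
            rng = np.random.default_rng(seed)
            x = x0
            for n in range(1, n_iter + 1):
                x -= (a / n) * noisy_g(x, rng)
            return x

        root = robbins_monro(lambda x, rng: (x - 2.0) + rng.normal(0.0, 1.0), x0=0.0)
        print(root)   # close to 2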

  3. Approximate flavor symmetries

    CERN Document Server

    Rasin, A

    1994-01-01

    We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.

  4. Approximate iterative algorithms

    CERN Document Server

    Almudevar, Anthony Louis

    2014-01-01

    Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a

  5. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relative to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic programming and extends methods described in [4] to constructing approximate decision trees. An adjustable approximation rate allows controlling algorithm complexity. The algorithm is applied to build optimal α-decision trees for two data sets from the UCI Machine Learning Repository [1]. © 2010 Springer-Verlag Berlin Heidelberg.

  6. Sofic Tree-Shifts

    OpenAIRE

    Aubrun, Nathalie; Béal, Marie-Pierre

    2013-01-01

    We introduce the notion of sofic tree-shifts which corresponds to symbolic dynamical systems of infinite ranked trees accepted by finite tree automata. We show that, contrary to shifts of infinite sequences, there is no unique reduced deterministic irreducible tree automaton accepting an irreducible sofic tree-shift, but that there is a unique synchronized one, called the Fischer automaton of the tree-shift. We define the notion of almost of finite type tree-shift which are sofic tree-shifts accepted...

  7. A Practical Algorithm for the Minimum Rectilinear Steiner Tree

    Institute of Scientific and Technical Information of China (English)

    MA Jun; YANG Bo; MA Shaohan

    2000-01-01

    An O(n2) time approximation algorithm for the minimum rectilinear Steiner tree is proposed. The approximation ratio of the algorithm is strictly less than 1.5. Computational experiments show that the costs of the trees produced by the algorithm are only 0.8% away from the optimal ones.
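
    For comparison, a simple baseline with the same 1.5 worst-case guarantee is the rectilinear minimum spanning tree: by Hwang's theorem its cost is at most 3/2 times that of the rectilinear Steiner minimum tree. A sketch of Prim's algorithm under the L1 metric (a baseline only, not the algorithm proposed in the paper above):

        import numpy as np

        def rectilinear_mst_cost(points):
            # Prim's algorithm with L1 (rectilinear) edge lengths, O(n^2) time.
            pts = np.asarray(points, dtype=float)
            n = len(pts)
            in_tree = np.zeros(n, dtype=bool)
            dist = np.full(n, np.inf)
            dist[0], total = 0.0, 0.0
            for _ in range(n):
                u = int(np.argmin(np.where(in_tree, np.inf, dist)))
                in_tree[u] = True
                total += dist[u]
                d = np.abs(pts - pts[u]).sum(axis=1)        # L1 distances from u
                dist = np.where(in_tree, dist, np.minimum(dist, d))
            return total

        print(rectilinear_mst_cost([(0, 0), (2, 0), (1, 2), (3, 3)]))   # 8.0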

  8. Greedy algorithm with weights for decision tree construction

    KAUST Repository

    Moshkov, Mikhail

    2010-12-01

    An approximate algorithm for minimization of weighted depth of decision trees is considered. A bound on the accuracy of this algorithm is obtained which is unimprovable in the general case. Under some natural assumptions on the class NP, the considered algorithm is close (from the point of view of accuracy) to the best polynomial approximate algorithms for minimization of weighted depth of decision trees.

  9. Approximation of distributed delays

    CERN Document Server

    Lu, Hao; Eberard, Damien; Simon, Jean-Pierre

    2010-01-01

    We address in this paper the approximation problem of distributed delays. Such elements are convolution operators with kernel having bounded support, and appear in the control of time-delay systems. From the rich literature on this topic, we propose a general methodology to achieve such an approximation. For this, we enclose the approximation problem in the graph topology, and work with the norm defined over the convolution Banach algebra. The class of rational approximates is described, and a constructive approximation is proposed. Analysis in time and frequency domains is provided. This methodology is illustrated on the stabilization control problem, for which simulation results show the effectiveness of the proposed methodology.

  10. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  11. The canary tree

    OpenAIRE

    Mekler, Alan H.; Shelah, Saharon

    1993-01-01

    A canary tree is a tree of cardinality the continuum which has no uncountable branch, but gains a branch whenever a stationary set is destroyed (without adding reals). Canary trees are important in infinitary model theory. The existence of a canary tree is independent of ZFC + GCH.

  12. Healthy, Happy trees

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Healthy trees are important to us all. Trees provide shade, beauty, and homes for wildlife. Trees give us products like paper and wood. Trees can give us all this only if they are healthy.They must be well cared for to remain healthy.

  13. AN OPTIMAL FUZZY APPROXIMATOR

    Institute of Scientific and Technical Information of China (English)

    Yue Shihong; Zhang Kecun

    2002-01-01

    In a dot product space with a reproducing kernel (r.k.s.), a fuzzy system with estimates of the approximation errors is proposed, which overcomes the defect that it is difficult for existing fuzzy control systems to estimate the errors of approximation for a desired function, and keeps the characteristics of a fuzzy system as an inference approach. The structure of the new fuzzy approximator benefits from a coarse result obtained by other means.

  14. Approximation of irrationals

    OpenAIRE

    Malvina Baica

    1985-01-01

    The author uses a new modification of the Jacobi-Perron Algorithm which holds for complex fields of any degree (abbr. ACF), and defines it as the Generalized Euclidean Algorithm (abbr. GEA) to approximate irrationals. This paper deals with the approximation of irrationals of degree n=2,3,5. Though approximations of these irrationals in a variety of patterns are known, the results are new and practical, since an algorithmic method is used.

  15. Expectation Consistent Approximate Inference

    OpenAIRE

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability distributions which are made consistent on a set of moments and encode different features of the original intractable distribution. In this way we are able to use Gaussian approximations for models with ...

  16. X-tree

    OpenAIRE

    Keim, Daniel A.; Bustos Cárdenas, Benjamin Eugenio; Berchtold, Stefan; Kriegel, Hans-Peter

    2008-01-01

    The X-tree (eXtended node tree) [1] is a spatial access method [2] that supports efficient query processing for high-dimensional data. It supports not only point data but also extended spatial data. The X-tree provides overlap-free split whenever it is possible without allowing the tree to degenerate; otherwise, the X-tree uses extended variable size directory nodes, so-called supernodes. The X-tree may be seen as a hybrid of a linear array-like and a hierarchical R-tree-like directory.

  17. TreeDT

    OpenAIRE

    Sevon, Petteri; Toivonen, Hannu; Ollikainen, Vesa

    2006-01-01

    We describe TreeDT, a novel association-based gene mapping method. Given a set of disease-associated haplotypes and a set of control haplotypes, TreeDT predicts likely locations of a disease susceptibility gene. TreeDT extracts, essentially in the form of haplotype trees, information about historical recombinations in the population: A haplotype tree constructed at a given chromosomal location is an estimate of the genealogy of the haplotypes. TreeDT constructs these trees for all locations o...

  18. Approximation techniques for engineers

    CERN Document Server

    Komzsik, Louis

    2006-01-01

    Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data or continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.

  19. Numerics of implied binomial trees

    OpenAIRE

    Härdle, Wolfgang Karl; Myšičková, Alena

    2008-01-01

    Market option prices in the last 20 years have confirmed deviations from the Black and Scholes (BS) model assumptions, especially on the BS implied volatility. Implied binomial tree (IBT) models capture the variations of the implied volatility known as the "volatility smile". They provide a discrete approximation to the continuous risk-neutral process for the underlying assets. In this paper, we describe the numerical construction of IBTs by Derman and Kani (DK) and an alternative method by Barle and Ca...

  20. Stochastic Mixed-Effects Parameters Bertalanffy Process, with Applications to Tree Crown Width Modeling

    Directory of Open Access Journals (Sweden)

    Petras Rupšys

    2015-01-01

    Full Text Available A stochastic modeling approach based on the Bertalanffy law has gained interest due to its ability to produce more accurate results than deterministic approaches. We examine tree crown width dynamics with a Bertalanffy-type stochastic differential equation (SDE) and mixed-effects parameters. In this study, we demonstrate how this simple model can be used to calculate predictions of crown width. We propose a parameter estimation method and computational guidelines. The primary goal of the study was to estimate the parameters by considering discrete sampling of the diameter at breast height and crown width and by using a maximum likelihood procedure. Performance statistics for the crown width equation include statistical indexes and analysis of residuals. We use data provided by the Lithuanian National Forest Inventory from Scots pine trees to illustrate issues of our modeling technique. Comparison of the predicted crown width values of the mixed-effects parameters model with those obtained using the fixed-effects parameters model demonstrates the predictive power of the stochastic differential equation model with mixed-effects parameters. All results were implemented in the symbolic algebra system MAPLE.

  1. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...

  2. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  3. Approximate Modified Policy Iteration

    CERN Document Server

    Scherrer, Bruno; Ghavamzadeh, Mohammad; Geist, Matthieu

    2012-01-01

    Modified policy iteration (MPI) is a dynamic programming (DP) algorithm that contains the two celebrated policy and value iteration methods. Despite its generality, MPI has not been thoroughly studied, especially its approximation form which is used when the state and/or action spaces are large or infinite. In this paper, we propose three approximate MPI (AMPI) algorithms that are extensions of the well-known approximate DP algorithms: fitted-value iteration, fitted-Q iteration, and classification-based policy iteration. We provide an error propagation analysis for AMPI that unifies those for approximate policy and value iteration. We also provide a finite-sample analysis for the classification-based implementation of AMPI (CBMPI), which is more general than (and in some sense contains) the analyses of the other presented AMPI algorithms. An interesting observation is that the MPI's parameter allows us to control the balance of errors (in value function approximation and in estimating the greedy policy) in the fina...
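
    A tabular sketch of exact modified policy iteration, the algorithm whose approximate variants are analysed above; m = 1 recovers value iteration and large m approaches policy iteration (the MDP arrays are assumed to be given):

        import numpy as np

        def modified_policy_iteration(P, R, gamma=0.95, m=5, n_sweeps=200):
            # P : (A, S, S) transition probabilities, R : (S, A) rewards.
            S, A = R.shape
            V = np.zeros(S)
            for _ in range(n_sweeps):
                Q = R + gamma * np.einsum("ast,t->sa", P, V)   # one-step look-ahead
                pi = Q.argmax(axis=1)                          # greedy policy
                for _ in range(m):                             # m partial evaluation steps
                    r_pi = R[np.arange(S), pi]
                    P_pi = P[pi, np.arange(S), :]
                    V = r_pi + gamma * P_pi @ V
            return V, pi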

  4. Using stochastic models calibrated from nanosecond nonequilibrium simulations to approximate mesoscale information

    Science.gov (United States)

    Calderon, Christopher P.; Janosi, Lorant; Kosztin, Ioan

    2009-04-01

    We demonstrate how the surrogate process approximation (SPA) method can be used to compute both the potential of mean force along a reaction coordinate and the associated diffusion coefficient using a relatively small number (10-20) of bidirectional nonequilibrium trajectories coming from a complex system. Our method provides confidence bands which take the variability of the initial configuration of the high-dimensional system, continuous nature of the work paths, and thermal fluctuations into account. Maximum-likelihood-type methods are used to estimate a stochastic differential equation (SDE) approximating the dynamics. For each observed time series, we estimate a new SDE resulting in a collection of SPA models. The physical significance of the collection of SPA models is discussed and methods for exploiting information in the population of estimated SPA models are demonstrated and suggested. Molecular dynamics simulations of potassium ion dynamics inside a gramicidin A channel are used to demonstrate the methodology, although SPA-type modeling has also proven useful in analyzing single-molecule experimental time series [J. Phys. Chem. B 113, 118 (2009)].
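
    As an illustration of the maximum-likelihood SDE estimation step mentioned above, here is a sketch for an Ornstein-Uhlenbeck model, which admits an exact Gaussian transition density; the SDE family actually fitted in the SPA framework may differ:

        import numpy as np
        from scipy.optimize import minimize

        def ou_negloglik(params, x, dt):
            # Exact transition density of dX = theta*(mu - X) dt + sigma dW:
            # X_{t+dt} | X_t ~ N(mu + (X_t - mu) e^{-theta dt},
            #                    sigma^2 (1 - e^{-2 theta dt}) / (2 theta))
            theta, mu, sigma = params
            e = np.exp(-theta * dt)
            mean = mu + (x[:-1] - mu) * e
            var = sigma**2 * (1.0 - e**2) / (2.0 * theta)
            resid = x[1:] - mean
            return 0.5 * np.sum(np.log(2.0 * np.pi * var) + resid**2 / var)

        def fit_ou(x, dt):
            res = minimize(ou_negloglik, x0=[1.0, float(np.mean(x)), float(np.std(x))],
                           args=(x, dt), method="L-BFGS-B",
                           bounds=[(1e-4, None), (None, None), (1e-6, None)])
            return res.x   # (theta, mu, sigma)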

  5. Fault tree handbook

    International Nuclear Information System (INIS)

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation
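
    A minimal sketch of the quantitative evaluation step for a small fault tree, assuming statistically independent basic events; repeated events and common-cause failures require the minimal-cut-set machinery covered in the handbook:

        from dataclasses import dataclass
        from typing import List, Union

        @dataclass
        class Basic:
            p: float                         # failure probability of a basic event

        @dataclass
        class Gate:
            kind: str                        # "AND" or "OR"
            children: List[Union["Gate", Basic]]

        def top_event_probability(node) -> float:
            # AND gate: product of child probabilities;
            # OR gate : 1 - product of child survival probabilities.
            if isinstance(node, Basic):
                return node.p
            ps = [top_event_probability(c) for c in node.children]
            prod = 1.0
            if node.kind == "AND":
                for p in ps:
                    prod *= p
                return prod
            for p in ps:
                prod *= (1.0 - p)
            return 1.0 - prod

        # Top event: loss of cooling = loss of power OR both redundant pumps fail.
        tree = Gate("OR", [Basic(1e-3), Gate("AND", [Basic(5e-2), Basic(5e-2)])])
        print(top_event_probability(tree))   # ~3.5e-3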

  6. Approximations to toroidal harmonics

    International Nuclear Information System (INIS)

    Toroidal harmonics P^1_{n-1/2}(cosh μ) and Q^1_{n-1/2}(cosh μ) are useful in solutions to Maxwell's equations in toroidal coordinates. In order to speed their computation, a set of approximations has been developed that is valid over the range 0 -10. The simple method used to determine the approximations is described. Relative error curves are also presented, obtained by comparing approximations to the more accurate values computed by direct summation of the hypergeometric series

  7. Efficient Approximation of Convex Recolorings

    OpenAIRE

    Moran, Shlomo; Snir, Sagi

    2005-01-01

    A coloring of a tree is convex if the vertices that pertain to any color induce a connected subtree; a partial coloring (which assigns colors to some of the vertices) is convex if it can be completed to a convex (total) coloring. Convex colorings of trees arise in areas such as phylogenetics and linguistics; e.g., a perfect phylogenetic tree is one in which the states of each character induce a convex coloring of the tree. Research on perfect phylogeny is usually focused on finding a tree so t...

  8. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We also show that each decision tree for sorting 8 elements which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365) also has minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.

  9. Covering tree with stars

    DEFF Research Database (Denmark)

    Baumbach, Jan; Guo, Jian-Ying; Ibragimov, Rashid

    We study the tree edit distance problem with edge deletions and edge insertions as edit operations. We reformulate a special case of this problem as Covering Tree with Stars (CTS): given a tree T and a set of stars, can we connect the stars in by adding edges between them such that the resulting...... tree is isomorphic to T? We prove that in the general setting, CTS is NP-complete, which implies that the tree edit distance considered here is also NP-hard, even when both input trees have diameters bounded by 10. We also show that, when the number of distinct stars is bounded by a constant k, CTS...

  10. Covering tree with stars

    DEFF Research Database (Denmark)

    Baumbach, Jan; Guo, Jiong; Ibragimov, Rashid

    2015-01-01

    We study the tree edit distance problem with edge deletions and edge insertions as edit operations. We reformulate a special case of this problem as Covering Tree with Stars (CTS): given a tree T and a set of stars, can we connect the stars in by adding edges between them such that the resulting...... tree is isomorphic to T? We prove that in the general setting, CTS is NP-complete, which implies that the tree edit distance considered here is also NP-hard, even when both input trees have diameters bounded by 10. We also show that, when the number of distinct stars is bounded by a constant k, CTS...

  11. Approximations in Inspection Planning

    DEFF Research Database (Denmark)

    Engelund, S.; Sørensen, John Dalsgaard; Faber, M. H.; Bloch, Allan

    2000-01-01

    Planning of inspections of civil engineering structures may be performed within the framework of Bayesian decision analysis. The effort involved in a full Bayesian decision analysis is relatively large. Therefore, the actual inspection planning is usually performed using a number of approximations. One of the more important of these approximations is the assumption that all inspections will reveal no defects. Using this approximation the optimal inspection plan may be determined on the basis of conditional probabilities, i.e. the probability of failure given no defects have been found by the inspection. In this paper the quality of this approximation is investigated. The inspection planning is formulated both as a full Bayesian decision problem and on the basis of the assumption that the inspection will reveal no defects.

  12. The Karlqvist approximation revisited

    OpenAIRE

    Tannous, C.

    2015-01-01

    The Karlqvist approximation, signaling the historical beginning of magnetic recording head theory, is reviewed and compared to various approaches progressing from Green's function and Fourier methods to conformal mapping, which obeys the Sommerfeld edge condition at angular points and leads to exact results.

  13. Approximation Behooves Calibration

    DEFF Research Database (Denmark)

    da Silva Ribeiro, André Manuel; Poulsen, Rolf

    2013-01-01

    Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.

  14. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg

  15. Approximate spatial reasoning

    Science.gov (United States)

    Dutta, Soumitra

    1988-01-01

    Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise and reject the fuzziness of concepts in natural use and replace them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regard human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning is in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of our mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable that we try to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain or unknown or dynamic domains.

  16. A Well-Resolved Phylogeny of the Trees of Puerto Rico Based on DNA Barcode Sequence Data

    Science.gov (United States)

    Muscarella, Robert; Uriarte, María; Erickson, David L.; Swenson, Nathan G.; Zimmerman, Jess K.; Kress, W. John

    2014-01-01

    Background The use of phylogenetic information in community ecology and conservation has grown in recent years. Two key issues for community phylogenetics studies, however, are (i) low terminal phylogenetic resolution and (ii) arbitrarily defined species pools. Methodology/principal findings We used three DNA barcodes (plastid DNA regions rbcL, matK, and trnH-psbA) to infer a phylogeny for 527 native and naturalized trees of Puerto Rico, representing the vast majority of the entire tree flora of the island (89%). We used a maximum likelihood (ML) approach with and without a constraint tree that enforced monophyly of recognized plant orders. Based on 50% consensus trees, the ML analyses improved phylogenetic resolution relative to a comparable phylogeny generated with Phylomatic (proportion of internal nodes resolved: constrained ML = 74%, unconstrained ML = 68%, Phylomatic = 52%). We quantified the phylogenetic composition of 15 protected forests in Puerto Rico using the constrained ML and Phylomatic phylogenies. We found some evidence that tree communities in areas of high water stress were relatively phylogenetically clustered. Reducing the scale at which the species pool was defined (from island to soil types) changed some of our results depending on which phylogeny (ML vs. Phylomatic) was used. Overall, the increased terminal resolution provided by the ML phylogeny revealed additional patterns that were not observed with a less-resolved phylogeny. Conclusions/significance With the DNA barcode phylogeny presented here (based on an island-wide species pool), we show that a more fully resolved phylogeny increases power to detect nonrandom patterns of community composition in several Puerto Rican tree communities. Especially if combined with additional information on species functional traits and geographic distributions, this phylogeny will (i) facilitate stronger inferences about the role of historical processes in governing the assembly and

  17. A well-resolved phylogeny of the trees of Puerto Rico based on DNA barcode sequence data.

    Directory of Open Access Journals (Sweden)

    Robert Muscarella

    Full Text Available The use of phylogenetic information in community ecology and conservation has grown in recent years. Two key issues for community phylogenetics studies, however, are (i) low terminal phylogenetic resolution and (ii) arbitrarily defined species pools. We used three DNA barcodes (plastid DNA regions rbcL, matK, and trnH-psbA) to infer a phylogeny for 527 native and naturalized trees of Puerto Rico, representing the vast majority of the entire tree flora of the island (89%). We used a maximum likelihood (ML) approach with and without a constraint tree that enforced monophyly of recognized plant orders. Based on 50% consensus trees, the ML analyses improved phylogenetic resolution relative to a comparable phylogeny generated with Phylomatic (proportion of internal nodes resolved: constrained ML = 74%, unconstrained ML = 68%, Phylomatic = 52%). We quantified the phylogenetic composition of 15 protected forests in Puerto Rico using the constrained ML and Phylomatic phylogenies. We found some evidence that tree communities in areas of high water stress were relatively phylogenetically clustered. Reducing the scale at which the species pool was defined (from island to soil types) changed some of our results depending on which phylogeny (ML vs. Phylomatic) was used. Overall, the increased terminal resolution provided by the ML phylogeny revealed additional patterns that were not observed with a less-resolved phylogeny. With the DNA barcode phylogeny presented here (based on an island-wide species pool), we show that a more fully resolved phylogeny increases power to detect nonrandom patterns of community composition in several Puerto Rican tree communities. Especially if combined with additional information on species functional traits and geographic distributions, this phylogeny will (i) facilitate stronger inferences about the role of historical processes in governing the assembly and composition of Puerto Rican forests, (ii) provide insight into
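
    For orientation, the resolution statistic quoted above (proportion of internal nodes resolved) can be computed in a few lines; the sketch below is an illustrative assumption about that metric rather than the authors' pipeline, and it assumes Biopython is available.

```python
# Minimal sketch (not the authors' code): score resolution of a rooted tree as
# the number of internal nodes observed divided by the maximum possible for a
# fully bifurcating rooted tree with the same number of tips (n - 1).
from io import StringIO
from Bio import Phylo  # assumes Biopython is installed

# Hypothetical 5-taxon tree with one polytomy (A, B, C unresolved).
newick = "((A,B,C),(D,E));"
tree = Phylo.read(StringIO(newick), "newick")

n_tips = len(tree.get_terminals())
internal_nodes = tree.get_nonterminals()   # includes the root
max_internal = n_tips - 1                  # fully bifurcating rooted tree
print(f"resolved: {len(internal_nodes)}/{max_internal} "
      f"= {len(internal_nodes) / max_internal:.0%}")
```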

  18. Double unresolved approximations to multiparton scattering amplitudes

    International Nuclear Information System (INIS)

    We present approximations to tree-level multiparton scattering amplitudes which are appropriate when two partons are unresolved. These approximations are required for the analytic isolation of infrared singularities of n+2 parton scattering processes contributing to the next-to-next-to-leading order corrections to n jet cross sections. In each case the colour ordered matrix elements factorise and yield a function containing the singular factors multiplying the n-parton amplitudes. When the unresolved particles are colour unconnected, the approximations are simple products of the familiar eikonal and Altarelli-Parisi splitting functions used to describe single unresolved emission. However, when the unresolved particles are colour connected the factorisation is more complicated and we introduce new and general functions to describe the triple collinear and soft/collinear limits in addition to the known double soft gluon limits of Berends and Giele. As expected the triple collinear splitting functions obey an N=1 SUSY identity. To illustrate the use of these double unresolved approximations, we have examined the singular limits of the tree-level matrix elements for e+e- → 5 partons when only three partons are resolved. When integrated over the unresolved regions of phase space, these expressions will be of use in evaluating the O(α_s^3) corrections to the three-jet rate in electron-positron annihilation. (orig.)
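
    For context, the single unresolved (eikonal) factor referred to above has the familiar colour-ordered form $|\mathcal{M}_{n+1}(\ldots,a,c,b,\ldots)|^2 \to g^2\, S_{ab}(c)\, |\mathcal{M}_{n}(\ldots,a,b,\ldots)|^2$ as gluon $c$ becomes soft, with $S_{ab}(c) = 2 s_{ab}/(s_{ac}\, s_{cb})$ and $s_{ij} = 2 p_i \cdot p_j$; this is a standard textbook expression quoted here only for orientation, not a formula taken from the paper. The double unresolved limits described in the abstract generalise this to products of such factors in the colour-unconnected case and to the new triple-collinear and soft/collinear functions in the colour-connected case.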

  19. Evolution of tree nutrition.

    Science.gov (United States)

    Raven, John A; Andrews, Mitchell

    2010-09-01

    Using a broad definition of trees, the evolutionary origins of trees in a nutritional context are considered using data from the fossil record and molecular phylogeny. Trees are first known from the Late Devonian, about 380 million years ago, and originated polyphyletically at the pteridophyte grade of organization; the earliest gymnosperms were trees, and trees are polyphyletic in the angiosperms. Nutrient transporters, assimilatory pathways, homoiohydry (cuticle, intercellular gas spaces, stomata, endohydric water transport systems including xylem and phloem-like tissue) and arbuscular mycorrhizas preceded the origin of trees. Nutritional innovations that began uniquely in trees were the seed habit and, certainly (but not necessarily uniquely) in trees, ectomycorrhizas, cyanobacterial, actinorhizal and rhizobial (Parasponia, some legumes) diazotrophic symbioses and cluster roots. PMID:20581011

  20. Uniform random spanning trees

    OpenAIRE

    Pemantle, Robin

    2004-01-01

    There are several good reasons you might want to read about uniform spanning trees, one being that spanning trees are useful combinatorial objects. Not only are they fundamental in algebraic graph theory and combinatorial geometry, but they predate both of these subjects, having been used by Kirchhoff in the study of resistor networks. This article addresses the question about spanning trees most natural to anyone in probability theory, namely what does a typical spanning tree look like?
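
    One concrete way to answer the sampling question raised above is the Aldous-Broder algorithm, which collects the first-entrance edges of a simple random walk; the sketch below is a generic illustration, not code taken from the article.

```python
# Minimal sketch: Aldous-Broder sampling of a uniformly random spanning tree.
# The first-entrance edge of a simple random walk at each newly visited vertex
# is added to the tree; the resulting tree is uniform over all spanning trees.
import random

def uniform_spanning_tree(adj, start):
    """adj: dict vertex -> list of neighbours (connected graph); returns tree edges."""
    visited = {start}
    tree = set()
    current = start
    while len(visited) < len(adj):
        nxt = random.choice(adj[current])
        if nxt not in visited:
            visited.add(nxt)
            tree.add((current, nxt))   # first-entrance edge
        current = nxt
    return tree

# Toy example: a uniform spanning tree of the 4-cycle.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(uniform_spanning_tree(cycle4, 0))
```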

  1. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each...... as possible. Evaluations show that the proposed protocol provides considerable gains over the standard tree splitting protocol applying SIC. The improvement comes at the expense of an increased feedback and receiver complexity....

  2. Diophantine approximations on fractals

    CERN Document Server

    Einsiedler, Manfred; Shapira, Uri

    2009-01-01

    We exploit dynamical properties of diagonal actions to derive results in Diophantine approximations. In particular, we prove that the continued fraction expansion of almost any point on the middle third Cantor set (with respect to the natural measure) contains all finite patterns (hence is well approximable). Similarly, we show that for a variety of fractals in [0,1]^2, possessing some symmetry, almost any point is not Dirichlet improvable (hence is well approximable) and has property C (after Cassels). We then settle by similar methods a conjecture of M. Boshernitzan saying that there are no irrational numbers x in the unit interval such that the continued fraction expansions of {nx mod 1 : n is a natural number} are uniformly eventually bounded.
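
    As a small aside on the machinery involved, the partial quotients studied above are produced by iterating the Gauss map; the sketch below is a generic illustration of that expansion (not code from the paper), limited by floating-point accuracy to roughly the first dozen quotients.

```python
# Minimal sketch: partial quotients of the continued fraction expansion of a
# real x in (0, 1), obtained by iterating the Gauss map x -> 1/x - floor(1/x).
def continued_fraction(x, n_terms=12):
    quotients = []
    for _ in range(n_terms):
        if x == 0:
            break
        x = 1.0 / x
        a = int(x)
        quotients.append(a)
        x -= a
    return quotients

# Sanity check: the golden ratio minus one has expansion [1, 1, 1, ...].
print(continued_fraction(0.6180339887498949))
```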

  3. Covariant approximation averaging

    CERN Document Server

    Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2014-01-01

    We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
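
    Schematically, the all-mode-averaging estimator described above combines a cheap approximation averaged over a set $G$ of covariant (e.g., translational) transformations with an exact correction evaluated on only a few source points, $\mathcal{O}^{\mathrm{imp}} = \frac{1}{N_G}\sum_{g\in G}\mathcal{O}^{(\mathrm{appx}),g} + \bigl(\mathcal{O} - \mathcal{O}^{(\mathrm{appx})}\bigr)$, and remains unbiased as long as the approximation transforms covariantly under $G$. This is a paraphrase of the construction for orientation, not a formula quoted from the paper.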

  4. Accuracy of Approximate Eigenstates

    CERN Document Server

    Lucha, Wolfgang; Lucha, Wolfgang

    2000-01-01

    Besides perturbation theory, which requires, of course, the knowledge of the exact unperturbed solution, variational techniques represent the main tool for any investigation of the eigenvalue problem of some semibounded operator H in quantum theory. For a reasonable choice of the employed trial subspace of the domain of H, the lowest eigenvalues of H usually can be located with acceptable precision whereas the trial-subspace vectors corresponding to these eigenvalues approximate, in general, the exact eigenstates of H with much less accuracy. Accordingly, various measures for the accuracy of the approximate eigenstates derived by variational techniques are scrutinized. In particular, the matrix elements of the commutator of the operator H and (suitably chosen) different operators, with respect to degenerate approximate eigenstates of H obtained by some variational method, are proposed here as new criteria for the accuracy of variational eigenstates. These considerations are applied to that Hamiltonian the eig...

  5. Synthesis of approximation errors

    Energy Technology Data Exchange (ETDEWEB)

    Bareiss, E.H.; Michel, P.

    1977-07-01

    A method is developed for the synthesis of the error in approximations in the large of regular and irregular functions. The synthesis uses a small class of dimensionless elementary error functions which are weighted by the coefficients of the expansion of the regular part of the function. The question of whether a computer can determine the analytical nature of a solution by numerical methods is answered. It is shown that continuous least-squares approximations of irregular functions can be replaced by discrete least-squares approximations, and how to select the discrete points. The elementary error functions are used to show how the classical convergence criteria can be markedly improved. Eight numerical examples, 30 figures, and 74 tables are included.

  6. The Wish Tree Project

    Science.gov (United States)

    Brooks, Sarah DeWitt

    2010-01-01

    This article describes the author's experience in implementing a Wish Tree project in her school in an effort to bring the school community together with a positive art-making experience during a potentially stressful time. The concept of a wish tree is simple: plant a tree; provide tags and pencils for writing wishes; and encourage everyone to…

  7. Diary of a Tree.

    Science.gov (United States)

    Srulowitz, Frances

    1992-01-01

    Describes an activity to develop students' skills of observation and recordkeeping by studying the growth of a tree's leaves during the spring. Children monitor the growth of 11 trees over a 2-month period, draw pictures of the tree at different stages of growth, and write diaries of the tree's growth. (MDH)

  8. Total well dominated trees

    DEFF Research Database (Denmark)

    Finbow, Arthur; Frendrup, Allan; Vestergaard, Preben D.

    cardinality then G is a total well dominated graph. In this paper we study composition and decomposition of total well dominated trees. By a reversible process we prove that any total well dominated tree can both be reduced to and constructed from a family of three small trees....

  9. D2-tree

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Sioutas, Spyros; Pantazos, Kostas;

    2015-01-01

    We present a new overlay, called the Deterministic Decentralized tree (D2-tree). The D2-tree compares favorably to other overlays for the following reasons: (a) it provides matching and better complexities, which are deterministic for the supported operations; (b) the management of nodes (peers...

  10. Nearly Optimal Solution for Restricted Euclidean Bottleneck Steiner Tree Problem

    Directory of Open Access Journals (Sweden)

    Zimao Li

    2014-04-01

    Full Text Available A variation of the traditional Steiner tree problem, the bottleneck Steiner tree problem is considered in this paper, which asks to find a Steiner tree for n terminals with at most k Steiner points such that the length of the longest edge in the tree is minimized. The problem has applications in the design of WDM optical networks, design of wireless communication networks and reconstruction of phylogenetic tree in biology. We study a restricted version of the bottleneck Steiner tree problem in the Euclidean plane which requires that only degree-2 Steiner points are possibly adjacent in the optimal solution. The problem is known to be MAX-SNP hard and cannot be approximated within unless P=NP; we propose a nearly optimal randomized polynomial time approximation algorithm with performance ratio +ε, where ε is a positive number.

  11. The Zeldovich approximation

    CERN Document Server

    White, Martin

    2014-01-01

    This year marks the 100th anniversary of the birth of Yakov Zel'dovich. Amongst his many legacies is the Zel'dovich approximation for the growth of large-scale structure, which remains one of the most successful and insightful analytic models of structure formation. We use the Zel'dovich approximation to compute the two-point function of the matter and biased tracers, and compare to the results of N-body simulations and other Lagrangian perturbation theories. We show that Lagrangian perturbation theories converge well and that the Zel'dovich approximation provides a good fit to the N-body results except for the quadrupole moment of the halo correlation function. We extend the calculation of halo bias to 3rd order and also consider non-local biasing schemes, none of which remove the discrepancy. We argue that a part of the discrepancy owes to an incorrect prediction of inter-halo velocity correlations. We use the Zel'dovich approximation to compute the ingredients of the Gaussian streaming model and show that ...

  12. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-09-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised highly accurate approximations that are free of such singularities. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  13. Approximating The DCM

    DEFF Research Database (Denmark)

    Madsen, Rasmus Elsborg

    2005-01-01

    The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling word burstiness in documents, is here investigated. A number of conceptual explanations that account for these recent results are provided. An exponential family approximation of the DCM that...

  14. Kernel-Based Semantic Relation Detection and Classification via Enriched Parse Tree Structure

    Institute of Scientific and Technical Information of China (English)

    Guo-Dong Zhou; Qiao-Ming Zhu

    2011-01-01

    This paper proposes a tree kernel method of semantic relation detection and classification (RDC) between named entities. It resolves two critical problems in previous tree kernel methods of RDC. First, a new tree kernel is presented to better capture the inherent structural information in a parse tree by enabling the standard convolution tree kernel with context-sensitiveness and approximate matching of sub-trees. Second, an enriched parse tree structure is proposed to well derive necessary structural information, e.g., proper latent annotations, from a parse tree. Evaluation on the ACE RDC corpora shows that both the new tree kernel and the enriched parse tree structure contribute significantly to RDC and our tree kernel method much outperforms the state-of-the-art ones.
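
    For reference, the standard convolution tree kernel that this work builds on counts common sub-trees through the recursion $K(T_1,T_2)=\sum_{n_1\in N_1}\sum_{n_2\in N_2}C(n_1,n_2)$, where $C(n_1,n_2)=0$ if the productions at $n_1$ and $n_2$ differ, $C(n_1,n_2)=\lambda$ if both nodes are pre-terminals with the same production, and $C(n_1,n_2)=\lambda\prod_{i}\bigl(1+C(\mathrm{ch}_i(n_1),\mathrm{ch}_i(n_2))\bigr)$ otherwise. This is the textbook formulation, quoted only for orientation and not from the paper, which the proposed kernel extends with context-sensitiveness and approximate sub-tree matching.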

  15. Uniform Stability of a Particle Approximation of the Optimal Filter Derivative

    CERN Document Server

    Del Moral, Pierre; Singh, Sumeetpal

    2011-01-01

    Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance charact...

  16. Trees in Lhasa

    Institute of Scientific and Technical Information of China (English)

    Degyi

    2008-01-01

    Trees are flourishing in Lhasa wherever the history exists. There is such a man. He has already been through customs after his annual trek to Lhasa, which he has been doing for over twenty years in succession to visit his tree. Although he has been making this journey for so long, it is neither to visit friends or family, nor is it his hometown. It is a tree that is tied so profoundly to his heart. When the wind blows fiercely on the bare tree and winter snow falls, he stands before the tree with tears of jo...

  17. Distributed Contour Trees

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, Dmitriy; Weber, Gunther H.

    2014-03-31

    Topological techniques provide robust tools for data analysis. They are used, for example, for feature extraction, for data de-noising, and for comparison of data sets. This chapter concerns contour trees, a topological descriptor that records the connectivity of the isosurfaces of scalar functions. These trees are fundamental to analysis and visualization of physical phenomena modeled by real-valued measurements. We study the parallel analysis of contour trees. After describing a particular representation of a contour tree, called local-global representation, we illustrate how different problems that rely on contour trees can be solved in parallel with minimal communication.

  18. Evaluating Model-based Trees in Practice

    OpenAIRE

    Zeileis, Achim; Hothorn, Torsten; Hornik, Kurt

    2006-01-01

    A recently suggested algorithm for recursive partitioning of statistical models (Zeileis, Hothorn and Hornik, 2005), such as models estimated by maximum likelihood or least squares, is evaluated in practice. The general algorithm is applied to linear regression, logistic regression and survival regression, and to economic and medical regression problems. Furthermore, its performance with respect to prediction quality and model complexity is compared in a benchmark study with a large...

  19. Growth of a Pine Tree

    Science.gov (United States)

    Rollinson, Susan Wells

    2012-01-01

    The growth of a pine tree is examined by preparing "tree cookies" (cross-sectional disks) between whorls of branches. The use of Christmas trees allows the tree cookies to be obtained with inexpensive, commonly available tools. Students use the tree cookies to investigate the annual growth of the tree and how it corresponds to the number of whorls…

  20. Restricted maximum likelihood analysis of linkage between genetic markers and quantitative trait loci for a granddaughter design.

    NARCIS (Netherlands)

    Arendonk, van J.A.M.; Tier, B.; Bink, M.C.A.M.; Bovenhuis, H.

    1998-01-01

    REML for the estimation of location and variance of a single quantitative trait locus, together with polygenic and residual variance, is described for the analysis of a granddaughter design. The method is based on a mixed linear model that includes the allelic effects of the quantitative trait locus

  1. A combined maximum-likelihood analysis of the high-energy astrophysical neutrino flux measured with IceCube

    CERN Document Server

    Aartsen, M G; Ackermann, M; Adams, J; Aguilar, J A; Ahlers, M; Ahrens, M; Altmann, D; Anderson, T; Archinger, M; Arguelles, C; Arlen, T C; Auffenberg, J; Bai, X; Barwick, S W; Baum, V; Bay, R; Beatty, J J; Tjus, J Becker; Becker, K -H; Beiser, E; BenZvi, S; Berghaus, P; Berley, D; Bernardini, E; Bernhard, A; Besson, D Z; Binder, G; Bindig, D; Bissok, M; Blaufuss, E; Blumenthal, J; Boersma, D J; Bohm, C; Börner, M; Bos, F; Bose, D; Böser, S; Botner, O; Braun, J; Brayeur, L; Bretz, H -P; Brown, A M; Buzinsky, N; Casey, J; Casier, M; Cheung, E; Chirkin, D; Christov, A; Christy, B; Clark, K; Classen, L; Coenders, S; Cowen, D F; Silva, A H Cruz; Daughhetee, J; Davis, J C; Day, M; de André, J P A M; De Clercq, C; Dembinski, H; De Ridder, S; Desiati, P; de Vries, K D; de Wasseige, G; de With, M; DeYoung, T; Díaz-Vélez, J C; Dumm, J P; Dunkman, M; Eagan, R; Eberhardt, B; Ehrhardt, T; Eichmann, B; Euler, S; Evenson, P A; Fadiran, O; Fahey, S; Fazely, A R; Fedynitch, A; Feintzeig, J; Felde, J; Filimonov, K; Finley, C; Fischer-Wasels, T; Flis, S; Fuchs, T; Gaisser, T K; Gaior, R; Gallagher, J; Gerhardt, L; Ghorbani, K; Gier, D; Gladstone, L; Glagla, M; Glüsenkamp, T; Goldschmidt, A; Golup, G; Gonzalez, J G; Goodman, J A; Góra, D; Grant, D; Gretskov, P; Groh, J C; Groß, A; Ha, C; Haack, C; Ismail, A Haj; Hallgren, A; Halzen, F; Hansmann, B; Hanson, K; Hebecker, D; Heereman, D; Helbing, K; Hellauer, R; Hellwig, D; Hickford, S; Hignight, J; Hill, G C; Hoffman, K D; Hoffmann, R; Holzapfel, K; Homeier, A; Hoshina, K; Huang, F; Huber, M; Huelsnitz, W; Hulth, P O; Hultqvist, K; In, S; Ishihara, A; Jacobi, E; Japaridze, G S; Jero, K; Jurkovic, M; Kaminsky, B; Kappes, A; Karg, T; Karle, A; Kauer, M; Keivani, A; Kelley, J L; Kemp, J; Kheirandish, A; Kiryluk, J; Kläs, J; Klein, S R; Kohnen, G; Kolanoski, H; Konietz, R; Koob, A; Köpke, L; Kopper, C; Kopper, S; Koskinen, D J; Kowalski, M; Krings, K; Kroll, G; Kroll, M; Kunnen, J; Kurahashi, N; Kuwabara, T; Labare, M; Lanfranchi, J L; Larson, M J; Lesiak-Bzdak, M; Leuermann, M; Leuner, J; Lünemann, J; Madsen, J; Maggi, G; Mahn, K B M; Maruyama, R; Mase, K; Matis, H S; Maunu, R; McNally, F; Meagher, K; Medici, M; Meli, A; Menne, T; Merino, G; Meures, T; Miarecki, S; Middell, E; Middlemas, E; Miller, J; Mohrmann, L; Montaruli, T; Morse, R; Nahnhauer, R; Naumann, U; Niederhausen, H; Nowicki, S C; Nygren, D R; Obertacke, A; Olivas, A; Omairat, A; O'Murchadha, A; Palczewski, T; Paul, L; Pepper, J A; Heros, C Pérez de los; Pfendner, C; Pieloth, D; Pinat, E; Posselt, J; Price, P B; Przybylski, G T; Pütz, J; Quinnan, M; Rädel, L; Rameez, M; Rawlins, K; Redl, P; Reimann, R; Relich, M; Resconi, E; Rhode, W; Richman, M; Richter, S; Riedel, B; Robertson, S; Rongen, M; Rott, C; Ruhe, T; Ruzybayev, B; Ryckbosch, D; Saba, S M; Sabbatini, L; Sander, H -G; Sandrock, A; Sandroos, J; Sarkar, S; Schatto, K; Scheriau, F; Schimp, M; Schmidt, T; Schmitz, M; Schoenen, S; Schöneberg, S; Schönwald, A; Schukraft, A; Schulte, L; Seckel, D; Seunarine, S; Shanidze, R; Smith, M W E; Soldin, D; Spiczak, G M; Spiering, C; Stahlberg, M; Stamatikos, M; Stanev, T; Stanisha, N A; Stasik, A; Stezelberger, T; Stokstad, R G; Stößl, A; Strahler, E A; Ström, R; Strotjohann, N L; Sullivan, G W; Sutherland, M; Taavola, H; Taboada, I; Ter-Antonyan, S; Terliuk, A; Tešić, G; Tilav, S; Toale, P A; Tobin, M N; Tosi, D; Tselengidou, M; Unger, E; Usner, M; Vallecorsa, S; Vandenbroucke, J; van Eijndhoven, N; Vanheule, S; van Santen, J; Veenkamp, J; Vehring, M; Voge, M; Vraeghe, M; Walck, C; Wallace, A; 
Wallraff, M; Wandkowsky, N; Weaver, C; Wendt, C; Westerhoff, S; Whelan, B J; Whitehorn, N; Wichary, C; Wiebe, K; Wiebusch, C H; Wille, L; Williams, D R; Wissing, H; Wolf, M; Wood, T R; Woschnagg, K; Xu, D L; Xu, X W; Xu, Y; Yanez, J P; Yodh, G; Yoshida, S; Zarzhitsky, P; Zoll, M

    2015-01-01

    Evidence for an extraterrestrial flux of high-energy neutrinos has now been found in multiple searches with the IceCube detector. The first solid evidence was provided by a search for neutrino events with deposited energies $\\gtrsim30$~TeV and interaction vertices inside the instrumented volume. Recent analyses suggest that the extraterrestrial flux extends to lower energies and is also visible with throughgoing, $\

  2. BIVOPROB: A Computer Program for Maximum-Likelihood Estimation of Bivariate Ordered-Probit Models for Censored Data

    OpenAIRE

    Calhoun, C. A.

    1989-01-01

    Despite the large number of models devoted to the statistical analysis of censored data, relatively little attention has been given to the case of censored discrete outcomes. In this paper, the author presents a technical description and user's guide to a computer program for estimating bivariate ordered-probit models for censored and uncensored data. The model and program are currently being applied in an analysis of World Fertility Survey data for Europe and the United States, and the resul...

  3. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    OpenAIRE

    Kyungsoo Kim; Sung-Ho Lim; Jaeseok Lee; Won-Seok Kang; Cheil Moon; Ji-Woong Choi

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of...

  4. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    Science.gov (United States)

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhancing system reliability and availability. Moreover, knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, issued from a coupled electromagnetic circuits approach for air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. PMID:27038887
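
    To make the spectral-estimation step concrete, the sketch below shows a classical one-dimensional MUSIC pseudospectrum applied to a synthetic stator-current-like signal; it is an illustrative stand-in, not the paper's multi-dimensional MUSIC, and all parameter values are assumptions.

```python
# Minimal sketch (not the paper's MD MUSIC): classical 1-D MUSIC pseudospectrum
# for locating sinusoidal components in a stator-current-like signal.
import numpy as np

def music_pseudospectrum(x, n_components, m, freqs, fs):
    """x: 1-D signal, n_components: signal-subspace dimension, m: covariance size."""
    snapshots = np.array([x[i:i + m] for i in range(len(x) - m)])
    R = snapshots.T @ snapshots / len(snapshots)     # sample covariance (m x m)
    eigval, eigvec = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = eigvec[:, : m - n_components]               # noise subspace
    n = np.arange(m)
    P = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f / fs * n)          # steering vector at frequency f
        P[k] = 1.0 / np.real(a.conj() @ En @ En.T @ a)
    return P

rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# Hypothetical example: 50 Hz fundamental plus a weak fault-related line at 93 Hz.
x = (np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 93 * t)
     + 0.05 * rng.normal(size=t.size))
freqs = np.linspace(10, 200, 400)
P = music_pseudospectrum(x, n_components=4, m=30, freqs=freqs, fs=fs)
print("strongest line near", freqs[np.argmax(P)], "Hz")
```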

  5. Real-time cardiac surface tracking from sparse samples using subspace clustering and maximum-likelihood linear regressors

    Science.gov (United States)

    Singh, Vimal; Tewfik, Ahmed H.

    2011-03-01

    Cardiac minimally invasive surgeries, such as catheter-based radio frequency ablation of atrial fibrillation, require high-precision tracking of inner cardiac surfaces in order to ascertain constant electrode-surface contact. The majority of cardiac motion tracking systems are either limited to the outer surface or track only limited slices/sectors of the inner surface in echocardiography data, which is unrealizable in MIS due to the varying resolution of ultrasound with depth and speckle effect. In this paper, a system for high-accuracy real-time 3D tracking of both cardiac surfaces using sparse samples of the outer surface only is presented. The approach models cardiac inner surface deformations as simple functions of outer surface deformations in the spherical harmonic domain using multiple maximum-likelihood (ML) linear regressors. The tracking system uses subspace clustering to identify potential deformation spaces for outer surfaces and trains ML linear regressors using a pre-operative MRI/CT scan based training set. During tracking, sparse samples from outer surfaces are used to identify the active outer surface deformation space and reconstruct outer surfaces in real time under a least-squares formulation. The inner surface is reconstructed from the tracked outer surface with the trained ML linear regressors. High-precision tracking and robustness of the proposed system are demonstrated through results obtained on a real patient dataset with tracking root mean square error <= (0.23 +/- 0.04)mm and <= (0.30 +/- 0.07)mm for outer and inner surfaces respectively.
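
    Under a Gaussian noise assumption, the regression step described above reduces to an ordinary least-squares fit from outer-surface to inner-surface coefficients; the sketch below is an illustrative stand-in with synthetic data, not the authors' system, and the coefficient counts are assumptions.

```python
# Minimal sketch (illustrative assumptions): a maximum-likelihood (least-squares
# under Gaussian noise) linear regressor mapping outer-surface spherical-harmonic
# coefficients to inner-surface coefficients, trained on synthetic shapes.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_outer, n_inner = 60, 64, 64                 # hypothetical coefficient counts
X = rng.normal(size=(n_train, n_outer))                # outer-surface coefficients (training)
W_true = rng.normal(size=(n_outer, n_inner))
Y = X @ W_true + 0.01 * rng.normal(size=(n_train, n_inner))  # inner-surface coefficients

W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)          # ML fit under i.i.d. Gaussian noise
x_new = rng.normal(size=(1, n_outer))                  # tracked outer surface at test time
y_pred = x_new @ W_hat                                 # predicted inner-surface coefficients
print("prediction shape:", y_pred.shape)
```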

  6. Maximum likelihood estimation of time to first event in the presence of data gaps and multiple events.

    Science.gov (United States)

    Green, Cynthia L; Brownie, Cavell; Boos, Dennis D; Lu, Jye-Chyi; Krucoff, Mitchell W

    2016-04-01

    We propose a novel likelihood method for analyzing time-to-event data when multiple events and multiple missing data intervals are possible prior to the first observed event for a given subject. This research is motivated by data obtained from a heart monitor used to track the recovery process of subjects experiencing an acute myocardial infarction. The time to first recovery, T1, is defined as the time when the ST-segment deviation first falls below 50% of the previous peak level. Estimation of T1 is complicated by data gaps during monitoring and the possibility that subjects can experience more than one recovery. If gaps occur prior to the first observed event, T, the first observed recovery may not be the subject's first recovery. We propose a parametric gap likelihood function conditional on the gap locations to estimate T1. Standard failure time methods that do not fully utilize the data are compared to the gap likelihood method by analyzing data from an actual study and by simulation. The proposed gap likelihood method is shown to be more efficient and less biased than interval censoring and more efficient than right censoring if data gaps occur early in the monitoring process or are short in duration. PMID:23166160
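
    For contrast with the proposed gap likelihood, the right-censoring comparator mentioned above has a particularly simple form in the exponential case; the sketch below is a generic textbook illustration, not the authors' method, and the data are hypothetical.

```python
# Minimal sketch (not the authors' gap likelihood): maximum likelihood for an
# exponential time-to-first-event model under right censoring. The likelihood is
#   L(lam) = prod_i [lam * exp(-lam * t_i)]^d_i * [exp(-lam * t_i)]^(1 - d_i),
# maximised in closed form by lam_hat = (observed events) / (total follow-up time).
def exponential_rate_mle(times, observed):
    """times: follow-up times; observed: 1 if the event was seen, 0 if censored."""
    return sum(observed) / sum(times)

# Hypothetical data: three observed first recoveries and two censored subjects.
times = [2.0, 5.5, 1.2, 8.0, 3.3]
observed = [1, 1, 1, 0, 0]
print(f"estimated event rate: {exponential_rate_mle(times, observed):.3f} per time unit")
```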

  7. Evaluating treatment effectiveness under model misspecification: a comparison of targeted maximum likelihood estimation with bias-corrected matching

    OpenAIRE

    Kreif, N.; Gruber, S.; Radice, Rosalba; Grieve, R; J S Sekhon

    2014-01-01

    Statistical approaches for estimating treatment effectiveness commonly model the endpoint, or the propensity score, using parametric regressions such as generalised linear models. Misspecification of these models can lead to biased parameter estimates. We compare two approaches that combine the propensity score and the endpoint regression, and can make weaker modelling assumptions, by using machine learning approaches to estimate the regression function and the propensity score. Targeted maxi...

  8. Simultaneous maximum-likelihood reconstruction for x-ray grating based phase-contrast tomography avoiding intermediate phase retrieval

    CERN Document Server

    Ritter, André; Durst, Jürgen; Gödel, Karl; Haas, Wilhelm; Michel, Thilo; Rieger, Jens; Weber, Thomas; Wucherer, Lukas; Anton, Gisela

    2013-01-01

    Phase-wrapping artifacts, statistical image noise and the need for a minimum amount of phase steps per projection limit the practicability of x-ray grating based phase-contrast tomography, when using filtered back projection reconstruction. For conventional x-ray computed tomography, the use of statistical iterative reconstruction algorithms has successfully reduced artifacts and statistical issues. In this work, an iterative reconstruction method for grating based phase-contrast tomography is presented. The method avoids the intermediate retrieval of absorption, differential phase and dark field projections. It directly reconstructs tomographic cross sections from phase stepping projections by the use of a forward projecting imaging model and an appropriate likelihood function. The likelihood function is then maximized with an iterative algorithm. The presented method is tested with tomographic data obtained through a wave field simulation of grating based phase-contrast tomography. The reconstruction result...

  9. Counting and Locating the Solutions of Polynomial Systems of Maximum Likelihood Equations, II: The Behrens-Fisher Problem

    OpenAIRE

    Buot, Max-Louis G.; Hosten, Serkan; Richards, Donald St. P.

    2007-01-01

    Let $\\mu$ be a $p$-dimensional vector, and let $\\Sigma_1$ and $\\Sigma_2$ be $p \\times p$ positive definite covariance matrices. On being given random samples of sizes $N_1$ and $N_2$ from independent multivariate normal populations $N_p(\\mu,\\Sigma_1)$ and $N_p(\\mu,\\Sigma_2)$, respectively, the Behrens-Fisher problem is to solve the likelihood equations for estimating the unknown parameters $\\mu$, $\\Sigma_1$, and $\\Sigma_2$. We shall prove that for $N_1, N_2 > p$ there are, almost surely, exac...
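
    For context, the likelihood equations referred to above are the stationarity conditions of the two-sample normal log-likelihood $\ell(\mu,\Sigma_1,\Sigma_2) = -\tfrac{N_1}{2}\log\det\Sigma_1 - \tfrac{1}{2}\sum_{i=1}^{N_1}(x_i-\mu)^{\top}\Sigma_1^{-1}(x_i-\mu) - \tfrac{N_2}{2}\log\det\Sigma_2 - \tfrac{1}{2}\sum_{j=1}^{N_2}(y_j-\mu)^{\top}\Sigma_2^{-1}(y_j-\mu) + \text{const}$, maximised jointly over the common mean $\mu$ and the two covariance matrices; this standard expression is included only for orientation and is not quoted from the paper.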

  10. Maximum Likelihood Estimation of the Negative Binomial Dispersion Parameter for Highly Overdispersed Data, with Applications to Infectious Diseases

    OpenAIRE

    James O Lloyd-Smith

    2007-01-01

    Background. The negative binomial distribution is used commonly throughout biology as a model for overdispersed count data, with attention focused on the negative binomial dispersion parameter, κ. A substantial literature exists on the estimation of κ, but most attention has focused on datasets that are not highly overdispersed (i.e., those with κ≥1), and the accuracy of confidence intervals estimated for κ is typically not explored. Methodology. This article presents a simulation study explo...
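
    A minimal version of the estimation problem studied above can be set up directly with SciPy; the sketch below is a generic illustration (not the paper's simulation study), using the mean/dispersion parameterisation in which p = k/(k + mu).

```python
# Minimal sketch (not the paper's study): maximum likelihood estimation of the
# negative binomial dispersion parameter k and mean mu for overdispersed counts,
# using the parameterisation p = k / (k + mu), so that Var = mu + mu^2 / k.
import numpy as np
from scipy import optimize, stats

def neg_loglik(params, counts):
    k, mu = np.exp(params)                      # optimise on the log scale for positivity
    p = k / (k + mu)
    return -np.sum(stats.nbinom.logpmf(counts, k, p))

rng = np.random.default_rng(0)
true_k, true_mu = 0.5, 3.0                      # highly overdispersed hypothetical example
counts = stats.nbinom.rvs(true_k, true_k / (true_k + true_mu), size=500, random_state=rng)

res = optimize.minimize(neg_loglik, x0=np.zeros(2), args=(counts,), method="Nelder-Mead")
k_hat, mu_hat = np.exp(res.x)
print(f"k_hat = {k_hat:.2f}, mu_hat = {mu_hat:.2f}")
```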

  11. Maximum-likelihood joint image reconstruction and motion estimation with misaligned attenuation in TOF-PET/CT

    Science.gov (United States)

    Bousse, Alexandre; Bertolli, Ottavia; Atkinson, David; Arridge, Simon; Ourselin, Sébastien; Hutton, Brian F.; Thielemans, Kris

    2016-02-01

    This work is an extension of our recent work on joint activity reconstruction/motion estimation (JRM) from positron emission tomography (PET) data. We performed JRM by maximization of the penalized log-likelihood in which the probabilistic model assumes that the same motion field affects both the activity distribution and the attenuation map. Our previous results showed that JRM can successfully reconstruct the activity distribution when the attenuation map is misaligned with the PET data, but converges slowly due to the significant cross-talk in the likelihood. In this paper, we utilize time-of-flight PET for JRM and demonstrate that the convergence speed is significantly improved compared to JRM with conventional PET data.

  12. Maximum-likelihood estimation of lithospheric flexural rigidity, initial-loading fraction, and load correlation, under isotropy

    CERN Document Server

    Simons, Frederik J

    2012-01-01

    Topography and gravity are geophysical fields whose joint statistical structure derives from interface-loading processes modulated by the underlying mechanics of isostatic and flexural compensation in the shallow lithosphere. Under this dual statistical-mechanistic viewpoint an estimation problem can be formulated where the knowns are topography and gravity and the principal unknown the elastic flexural rigidity of the lithosphere. In the guise of an equivalent "effective elastic thickness", this important, geographically varying, structural parameter has been the subject of many interpretative studies, but precisely how well it is known or how best it can be found from the data, abundant nonetheless, has remained contentious and unresolved throughout the last few decades of dedicated study. The popular methods whereby admittance or coherence, both spectral measures of the relation between gravity and topography, are inverted for the flexural rigidity, have revealed themselves to have insufficient power to in...

  13. Maximum Likelihood Methods in Treating Outliers and Symmetrically Heavy-Tailed Distributions for Nonlinear Structural Equation Models with Missing Data

    Science.gov (United States)

    Lee, Sik-Yum; Xia, Ye-Mao

    2006-01-01

    By means of more than a dozen user friendly packages, structural equation models (SEMs) are widely used in behavioral, education, social, and psychological research. As the underlying theory and methods in these packages are vulnerable to outliers and distributions with longer-than-normal tails, a fundamental problem in the field is the…

  14. Analysis of Logic Programs Using Regular Tree Languages

    DEFF Research Database (Denmark)

    Gallagher, John Patrick

    2012-01-01

    The field of finite tree automata provides fundamental notations and tools for reasoning about sets of terms called regular or recognizable tree languages. We consider two kinds of analysis using regular tree languages, applied to logic programs. The first approach is to try to discover automatically...... a tree automaton from a logic program, approximating its minimal Herbrand model. In this case the input for the analysis is a program, and the output is a tree automaton. The second approach is to expose or check properties of the program that can be expressed by a given tree automaton. The input...... to the analysis is a program and a tree automaton, and the output is an abstract model of the program. These two contrasting abstract interpretations can be used in a wide range of analysis and verification problems....

  15. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion based solutions of this eikonal based on polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.
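
    For orientation, in a constant-velocity medium the DSR relation referred to above is simply the sum of the source and receiver legs, $t_{\mathrm{DSR}} = \sqrt{\tau^2 + (x_s - x)^2/v^2} + \sqrt{\tau^2 + (x_r - x)^2/v^2}$, where $\tau = z/v$ is the one-way vertical time to an image point at lateral position $x$ and depth $z$, and $x_s$, $x_r$ are the source and receiver positions. This standard special case is quoted only for context and is not the paper's generally inhomogeneous formulation; it is the associated eikonal form whose singularity for horizontally traveling waves motivates the angle-domain expansions described above.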

  16. From gene trees to species trees II: Species tree inference in the deep coalescence model

    OpenAIRE

    Zhang, Louxin

    2010-01-01

    When gene copies are sampled from various species, the resulting gene tree might disagree with the containing species tree. The primary causes of gene tree and species tree discord include lineage sorting, horizontal gene transfer, and gene duplication and loss. Each of these events yields a different parsimony criterion for inferring the (containing) species tree from gene trees. With lineage sorting, species tree inference is to find the tree minimizing extra gene lineages that had to coexi...

  17. Approximate level method

    OpenAIRE

    Richtárik, Peter

    2008-01-01

    In this paper we propose and analyze a variant of the level method [4], which is an algorithm for minimizing nonsmooth convex functions. The main work per iteration is spent on 1) minimizing a piecewise-linear model of the objective function and on 2) projecting onto the intersection of the feasible region and a polyhedron arising as a level set of the model. We show that by replacing exact computations in both cases by approximate computations, in relative scale, the theoretical ...

  18. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111. ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  19. Local approximate inference algorithms

    OpenAIRE

    Jung, Kyomin; Shah, Devavrat

    2006-01-01

    We present a new local approximation algorithm for computing Maximum a Posteriori (MAP) and log-partition function for arbitrary exponential family distribution represented by a finite-valued pair-wise Markov random field (MRF), say $G$. Our algorithm is based on decomposition of $G$ into {\\em appropriately} chosen small components; then computing estimates locally in each of these components and then producing a {\\em good} global solution. We show that if the underlying graph $G$ either excl...

  20. Fragments of approximate counting

    Czech Academy of Sciences Publication Activity Database

    Buss, S.R.; Kolodziejczyk, L.. A.; Thapen, Neil

    2014-01-01

    Roč. 79, č. 2 (2014), s. 496-525. ISSN 0022-4812 R&D Projects: GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords : approximate counting * bounded arithmetic * ordering principle Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=9287274&fileId=S0022481213000376