Bayesian tracking of multiple point targets using expectation maximization
Selvan, Raghavendra
The range of applications where target tracking is useful has grown well beyond the classical military and radar-based tracking applications. With the increasing enthusiasm in autonomous solutions for vehicular and robotics navigation, much of the maneuverability can be provided based on solutions...... the measurements from sensors to choose the best data association hypothesis, from which the estimates of target trajectories can be obtained. In an ideal world, we could maintain all possible data association hypotheses from observing all measurements, and pick the best hypothesis. But, it turns out the number...... joint density is maximized over the data association variables, or over the target state variables, two EM-based algorithms for tracking multiple point targets are derived, implemented and evaluated. In the first algorithm, the data association variable is integrated out, and the target states...
Huang Yufei
2007-01-01
Full Text Available We investigate in this paper reverse engineering of gene regulatory networks from time-series microarray data. We apply dynamic Bayesian networks (DBNs for modeling cell cycle regulations. In developing a network inference algorithm, we focus on soft solutions that can provide a posteriori probability (APP of network topology. In particular, we propose a variational Bayesian structural expectation maximization algorithm that can learn the posterior distribution of the network model parameters and topology jointly. We also show how the obtained APPs of the network topology can be used in a Bayesian data integration strategy to integrate two different microarray data sets. The proposed VBSEM algorithm has been tested on yeast cell cycle data sets. To evaluate the confidence of the inferred networks, we apply a moving block bootstrap method. The inferred network is validated by comparing it to the KEGG pathway map.
Isabel Tienda Luna
2007-06-01
Full Text Available We investigate in this paper reverse engineering of gene regulatory networks from time-series microarray data. We apply dynamic Bayesian networks (DBNs for modeling cell cycle regulations. In developing a network inference algorithm, we focus on soft solutions that can provide a posteriori probability (APP of network topology. In particular, we propose a variational Bayesian structural expectation maximization algorithm that can learn the posterior distribution of the network model parameters and topology jointly. We also show how the obtained APPs of the network topology can be used in a Bayesian data integration strategy to integrate two different microarray data sets. The proposed VBSEM algorithm has been tested on yeast cell cycle data sets. To evaluate the confidence of the inferred networks, we apply a moving block bootstrap method. The inferred network is validated by comparing it to the KEGG pathway map.
Kuang Lin
2009-01-01
Full Text Available Understanding the mechanisms of gene transcriptional regulation through analysis of high-throughput postgenomic data is one of the central problems of computational systems biology. Various approaches have been proposed, but most of them fail to address at least one of the following objectives: (1 allow for the fact that transcription factors are potentially subject to posttranscriptional regulation; (2 allow for the fact that transcription factors cooperate as a functional complex in regulating gene expression, and (3 provide a model and a learning algorithm with manageable computational complexity. The objective of the present study is to propose and test a method that addresses these three issues. The model we employ is a mixture of factor analyzers, in which the latent variables correspond to different transcription factors, grouped into complexes or modules. We pursue inference in a Bayesian framework, using the Variational Bayesian Expectation Maximization (VBEM algorithm for approximate inference of the posterior distributions of the model parameters, and estimation of a lower bound on the marginal likelihood for model selection. We have evaluated the performance of the proposed method on three criteria: activity profile reconstruction, gene clustering, and network inference.
Expectation Maximization Segmentation
Bergman, Niclas
1998-01-01
This report reviews the Expectation Maximization EM algorithm and applies it to the data segmentation problem yielding the Expectation Maximization Segmentation EMS algorithm The EMS algorithm requires batch processing of the data and can be applied to mode switching or jumping linear dynamical state space models The EMS algorithm consists of an optimal fusion of fixed interval Kalman smoothing and discrete optimization. The next section gives a short introduction to the EM algorithm with som...
Practical Privacy For Expectation Maximization
Park, Mijung; Foulds, Jimmy; Chaudhuri, Kamalika; Welling, Max
2016-01-01
Expectation maximization (EM) is an iterative algorithm that computes maximum likelihood and maximum a posteriori estimates for models with unobserved variables. While widely used, the iterative nature of EM presents challenges for privacy-preserving estimation. Multiple iterations are required to obtain accurate parameter estimates, yet each iteration increases the amount of noise that must be added to achieve a reasonable degree of privacy. We propose a practical algorithm that overcomes th...
Spatial based Expectation Maximizing (EM)
Balafar M A
2011-01-01
Abstract Background Expectation maximizing (EM) is one of the common approaches for image segmentation. Methods an improvement of the EM algorithm is proposed and its effectiveness for MRI brain image segmentation is investigated. In order to improve EM performance, the proposed algorithms incorporates neighbourhood information into the clustering process. At first, average image is obtained as neighbourhood information and then it is incorporated in clustering process. Also, as an option, us...
Expectation maximization applied to GMTI convoy tracking
Koch, Wolfgang
2002-08-01
Collectively moving ground targets are typical of a military ground situation and have to be treated as separate aggregated entities. For a long-range ground surveillance application with airborne GMTI radar we inparticular address the task of track maintenance for ground moving convoys consisting of a small number of individual vehicles. In the proposed approach the identity of the individual vehicles within the convoy is no longer stressed. Their kinematical state vectors are rather treated as internal degrees of freedom characterizing the convoy, which is considered as a collective unit. In this context, the Expectation Maximization technique (EM), originally developed for incomplete data problems in statistical inference and first applied to tracking applications by STREIT et al. seems to be a promising approach. We suggest to embed the EM algorithm into a more traditional Bayesian tracking framework for dealing with false or unwanted sensor returns. The proposed distinction between external and internal data association conflicts (i.e. those among the convoy vehicles) should also enable the application of sequential track extraction techniques introduced by Van Keuk for aircraft formations, providing estimates of the number of the individual convoy vehicles involved. Even with sophisticated signal processing methods (STAP: Space-Time Adaptive Processing), ground moving vehicles can well be masked by the sensor specific clutter notch (Doppler blinding). This physical phenomenon results in interfering fading effects, which can well last over a longer series of sensor updates and therefore will seriously affect the track quality unless properly handled. Moreover, for ground moving convoys the phenomenon of Doppler blindness often superposes the effects induced by the finite resolution capability of the sensor. In many practical cases a separate modeling of resolution phenomena for convoy targets can therefore be omitted, provided the GMTI detection model is used
Why Contextual Preference Reversals Maximize Expected Value
2016-01-01
Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391
Tracking Articulated Bodies using Generalized Expectation Maximization
Fossati, Andrea; Arnaud, Elise; Horaud, Radu; Fua, Pascal
2008-01-01
A Generalized Expectation Maximization (GEM) algorithm is used to retrieve the pose of a person from a monocular video sequence shot with a moving camera. After embedding the set of possible poses in a low dimensional space using Principal Component Analysis, the configuration that gives the best match to the input image is held as estimate for the current frame. This match is computed iterating GEM to assign edge pixels to the correct body part and to find the body pose that maximizes the li...
Expectation-maximization for logistic regression
Scott, James G.; Sun, Liang
2013-01-01
We present a family of expectation-maximization (EM) algorithms for binary and negative-binomial logistic regression, drawing a sharp connection with the variational-Bayes algorithm of Jaakkola and Jordan (2000). Indeed, our results allow a version of this variational-Bayes approach to be re-interpreted as a true EM algorithm. We study several interesting features of the algorithm, and of this previously unrecognized connection with variational Bayes. We also generalize the approach to sparsi...
Expectation Maximization and Mixture Modeling Tutorial
Dinov, Ivo D.
2008-01-01
This technical report describes the statistical method of expectation maximization (EM) for parameter estimation. Several of 1D, 2D, 3D and n-D examples are presented in this document. Applications of the EM method are also demonstrated in the case of mixture modeling using interactive Java applets in 1D (e.g., curve fitting), 2D (e.g., point clustering and classification) and 3D (e.g., brain tissue classification).
Expectation-Maximization Approach to Boolean Factor Analysis
Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.
Piscataway: IEEE, 2011, s. 559-566. ISBN 978-1-4244-9636-5. [IJCNN 2011. International Joint Conference on Neural Networks. San Jose (US), 31.07.2011-05.08.2011] R&D Projects: GA ČR GAP202/10/0262; GA ČR GA205/09/1079; GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : Boolean factor analysis * bars problem * dendritic inhibition * expectation-maximization * neural network application * statistics Subject RIV: IN - Informatics, Computer Science
On the Expectation-Maximization Unfolding with Smoothing
Volobouev, Igor
2014-01-01
Error propagation formulae are derived for the expectation-maximization iterative unfolding algorithm regularized by a smoothing step. The effective number of parameters in the fit to the observed data is defined for unfolding procedures. Based upon this definition, the Akaike information criterion is proposed as a principle for choosing the smoothing parameters in an automatic, data-dependent manner. The performance and the frequentist coverage of the resulting method are investigated using simulated samples. A number of issues of general relevance to all unfolding techniques are discussed, including irreducible bias, uncertainty increase due to a data-dependent choice of regularization strength, and presentation of results.
Joint Iterative Carrier Synchronization and Signal Detection Employing Expectation Maximization
Zibar, Darko; de Carvalho, Luis Henrique Hecker; Estaran Tolosa, Jose Manuel;
2014-01-01
. The algorithm is tested in a mixed line rate optical transmission scenario employing dual polarization 448 Gb/s 16-QAM signal surrounded by eight on-off keying channels in a 50 GHz grid. It is shown that joint carrier synchronization and data detection are more robust towards optical transmitter......In this paper, joint estimation of carrier frequency, phase, signal means and noise variance, in a maximum likelihood sense, is performed iteratively by employing expectation maximization. The parameter estimation is soft decision driven and allows joint carrier synchronization and data detection...... impairments and nonlinear phase noise, compared to digital phase-locked loop (PLL) followed by hard decisions. Additionally, soft decision driven joint carrier synchronization and detection offers an improvement of 0.5 dB in terms of input power compared to hard decision digital PLL based carrier...
An Expectation-Maximization Method for Calibrating Synchronous Machine Models
Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang
2013-07-21
The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method’s performance is evaluated using a single-machine infinite bus system and compared with a method where both state and parameters are estimated using an EKF method. Sensitivity studies of the parameter calibration using EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
Parallel Expectation-Maximization Algorithm for Large Databases
HUANG Hao; SONG Han-tao; LU Yu-chang
2006-01-01
A new parallel expectation-maximization (EM) algorithm is proposed for large databases. The purpose of the algorithm is to accelerate the operation of the EM algorithm. As a well-known algorithm for estimation in generic statistical problems, the EM algorithm has been widely used in many domains. But it often requires significant computational resources. So it is needed to develop more elaborate methods to adapt the databases to a large number of records or large dimensionality. The parallel EM algorithm is based on partial E-steps which has the standard convergence guarantee of EM. The algorithm utilizes fully the advantage of parallel computation. It was confirmed that the algorithm obtains about 2.6 speedups in contrast with the standard EM algorithm through its application to large databases. The running time will decrease near linearly when the number of processors increasing.
Expectation-Maximization Binary Clustering for Behavioural Annotation.
Garriga, Joan; Palmer, John R B; Oltra, Aitana; Bartumeus, Frederic
2016-01-01
The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis. PMID:27002631
Generalized expectation-maximization segmentation of brain MR images
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is unpractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation- Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used by tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
The Noisy Expectation-Maximization Algorithm for Multiplicative Noise Injection
Osoba, Osonde; Kosko, Bart
2016-03-01
We generalize the noisy expectation-maximization (NEM) algorithm to allow arbitrary modes of noise injection besides just adding noise to the data. The noise must still satisfy a NEM positivity condition. This generalization includes the important special case of multiplicative noise injection. A generalized NEM theorem shows that all measurable modes of injecting noise will speed the average convergence of the EM algorithm if the noise satisfies a generalized NEM positivity condition. This noise-benefit condition has a simple quadratic form for Gaussian and Cauchy mixture models in the case of multiplicative noise injection. Simulations show a multiplicative-noise EM speed-up of more than 27% in a simple Gaussian mixture model. Injecting blind noise only slowed convergence. A related theorem gives a sufficient condition for an average EM noise benefit for arbitrary modes of noise injection if the data model comes from the general exponential family of probability density functions. A final theorem shows that injected noise slows EM convergence on average if the NEM inequalities reverse and the noise satisfies a negativity condition.
Expectation maximization reconstruction for circular orbit cone-beam CT
Dong, Baoyu
2008-03-01
Cone-beam computed tomography (CBCT) is a technique for imaging cross-sections of an object using a series of X-ray measurements taken from different angles around the object. It has been widely applied in diagnostic medicine and industrial non-destructive testing. Traditional CT reconstructions are limited by many kinds of artifacts, and they give dissatisfactory image. To reduce image noise and artifacts, we propose a statistical iterative approach for cone-beam CT reconstruction. First the theory of maximum likelihood estimation is extended to X-ray scan, and an expectation-maximization (EM) formula is deduced for direct reconstruction of circular orbit cone-beam CT. Then the EM formula is implemented in cone-beam geometry for artifact reduction. EM algorithm is a feasible iterative method, which is based on the statistical properties of Poisson distribution. It can provide good quality reconstructions after a few iterations for cone-beam CT. In the end, experimental results with computer simulated data and real CT data are presented to verify our method is effective.
Nonlinear Impairment Compensation Using Expectation Maximization for PDM 16-QAM Systems
Zibar, Darko; Winther, Ole; Franceschi, Niccolo;
2012-01-01
We show experimentally that by using non-linear signal processing based algorithm, expectation maximization, nonlinear system tolerance can be increased by 2 dB. Expectation maximization is also effective in combating I/Q modulator nonlinearities and laser linewidth.......We show experimentally that by using non-linear signal processing based algorithm, expectation maximization, nonlinear system tolerance can be increased by 2 dB. Expectation maximization is also effective in combating I/Q modulator nonlinearities and laser linewidth....
Expectation-Maximization for Learning Determinantal Point Processes
Gillenwater, Jennifer; Kulesza, Alex; Fox, Emily; Taskar, Ben
2014-01-01
A determinantal point process (DPP) is a probabilistic model of set diversity compactly parameterized by a positive semi-definite kernel matrix. To fit a DPP to a given task, we would like to learn the entries of its kernel matrix by maximizing the log-likelihood of the available data. However, log-likelihood is non-convex in the entries of the kernel matrix, and this learning problem is conjectured to be NP-hard. Thus, previous work has instead focused on more restricted convex learning sett...
Boolean Factor Analysis by Expectation-Maximization Method
Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.
Heidelberg : Springer, 2013 - (Kudělka, M.; Pokorný, J.; Snášel, V.; Abraham, A.), s. 243-254 ISBN 978-3-642-31602-9. ISSN 2194-5357. - (Advances in Intelligent Systems and Computing. 179). [IHCI 2011. International Conference on Intelligent Human Computer Interaction /3./. Prague (CZ), 29.08.2011-31.08.2011] R&D Projects: GA ČR GAP202/10/0262; GA ČR GA205/09/1079 Grant ostatní: GA MŠk(CZ) ED1.1.00/02.0070 Institutional research plan: CEZ:AV0Z10300504 Keywords : neural networks * hidden pattern search * Boolean factor analysis * generative model * information redundancy * exceptation-maximization Subject RIV: IN - Informatics, Computer Science
Expectation-maximization for Bayes-adaptive POMDPs
Erik P. Vargo; Randy Cogill
2015-01-01
Bayes-adaptive POMDPs (BAPOMDPs) are partially observable Markov decision problems in which uncertainty in the state-transition and observation-emission probabilities can be captured by a prior distribution over the model parameters. Existing approaches to solving BAPOMDPs rely on model and trajectory sampling to guide exploration and, because of the curse of dimensionality, do not scale well when the degree of model uncertainty is large. In this paper, we begin by presenting two expectation-...
Chen, Wei; Chang, Chunqi; Hu, Yong
2016-01-01
It is of great importance for intraoperative monitoring to accurately extract somatosensory evoked potentials (SEPs) and track its changes fast. Currently, multi-trial averaging is widely adopted for SEP signal extraction. However, because of the loss of variations related to SEP features across different trials, the estimated SEPs in such a way are not suitable for the purpose of real-time monitoring of every single trial of SEP. In order to handle this issue, a number of single-trial SEP extraction approaches have been developed in the literature, such as ARX and SOBI, but most of them have their performance limited due to not sufficient utilization of multi-trial and multi-condition structures of the signals. In this paper, a novel Bayesian model of SEP signals is proposed to make systemic use of multi-trial and multi-condition priors and other structural information in the signal by integrating both a cortical source propagation model and a SEP basis components model, and an Expectation Maximization (EM) algorithm is developed for single-trial SEP estimation under this model. Numerical simulations demonstrate that the developed method can provide reasonably good single-trial estimations of SEP as long as signal-to-noise ratio (SNR) of the measurements is no worse than -25 dB. The effectiveness of the proposed method is further verified by its application to real SEP measurements of a number of different subjects during spinal surgeries. It is observed that using the proposed approach the main SEP features (i.e., latencies) can be reliably estimated at single-trial basis, and thus the variation of latencies in different trials can be traced, which provides a solid support for surgical intraoperative monitoring. PMID:26742104
A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks
National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...
Estimating Rigid Transformation Between Two Range Maps Using Expectation Maximization Algorithm
Zeng, Shuqing
2012-01-01
We address the problem of estimating a rigid transformation between two point sets, which is a key module for target tracking system using Light Detection And Ranging (LiDAR). A fast implementation of Expectation-maximization (EM) algorithm is presented whose complexity is O(N) with $N$ the number of scan points.
Anticipated utility and rational expectations as approximations of Bayesian decision making
Cogley, Timothy W.; Sargent, Thomas J.
2005-01-01
For a Markov decision problem in which unknown transition probabilities serve as hidden state variables, we study the quality of two approximations to the decision rule of a Bayesian who each period updates his subjective distribu- tion over the transition probabilities by Bayes’ law. The first is the usual ratio- nal expectations approximation that assumes that the decision maker knows the transition probabilities. The second approximation is a version of Kreps’ (1998) anticipated utility mo...
Bayesian Forecasting of US Growth using Basic Time Varying Parameter Models and Expectations Data
Basturk, Nalan; Ceyhan, Pinar; Dijk, Herman
2014-01-01
markdownabstract__Abstract__ Time varying patterns in US growth are analyzed using various univariate model structures, starting from a naive model structure where all features change every period to a model where the slow variation in the conditional mean and changes in the conditional variance are specified together with their interaction, including survey data on expected growth in order to strengthen the information in the model. Use is made of a simulation based Bayesian inferential meth...
PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture
Rujirakul, Kanokmon; Arnonkijpanich, Banchar
2014-01-01
Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation resulting in the reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during feature extraction and classification stages including parallel preprocessing, and their combinations, so-called a Parallel Expectation-Maximization PCA architecture. Comparing to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision leading to high speed face recognition systems, that is, the speed-up over nine and three times over PCA and Parallel PCA. PMID:24955405
PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture
Kanokmon Rujirakul
2014-01-01
Full Text Available Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation resulting in the reduction of the stages’ complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during feature extraction and classification stages including parallel preprocessing, and their combinations, so-called a Parallel Expectation-Maximization PCA architecture. Comparing to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision leading to high speed face recognition systems, that is, the speed-up over nine and three times over PCA and Parallel PCA.
Detection of Moroccan Coastal Upwelling in SST images using the Expectation-Maximization
Tamim, Ayoub; Minaoui, Khalid; Daoudi, Khalid; Atillah, Abderrahman; Aboutajdine, Driss
2014-01-01
International audience This paper proposes an unsupervised algorithm for automatic detection and segmentation of upwelling region in Moroccan Atlantic coast using the Sea Surface Temperature (SST) satellite images. This has been done by exploring the Expectation-Maximization algorithm. The good number of clus- ters that best reproduces the shape of upwelling areas is selected by using the two popular Davies-Bouldin and Dunn indices. Area opening technique is developed that is used to remov...
An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains
Dengfu Zhao; Zhiping Chen; Qihong Duan
2010-01-01
In many applications, the failure rate function may present a bathtub shape curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data by the first time reaching the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stage, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transi...
Convergence of the Monte Carlo expectation maximization for curved exponential families
Fort, Gersende; Moulines, Eric
2003-01-01
The Monte Carlo expectation maximization (MCEM) algorithm is a versatile tool for inference in incomplete data models, especially when used in combination with Markov chain Monte Carlo simulation methods. In this contribution, the almost-sure convergence of the MCEM algorithm is established. It is shown, using uniform versions of ergodic theorems for Markov chains, that MCEM converges under weak conditions on the simulation kernel. Practical illustrations are presented, using a hybrid random ...
Online Expectation Maximization based algorithms for inference in hidden Markov models
Le Corff, Sylvain; Fort, Gersende
2011-01-01
The Expectation Maximization (EM) algorithm is a versatile tool for model parameter estimation in latent data models. When processing large data sets or data stream however, EM becomes intractable since it requires the whole data set to be available at each iteration of the algorithm. In this contribution, a new generic online EM algorithm for model parameter inference in general Hidden Markov Model is proposed. This new algorithm updates the parameter estimate after a block of observations i...
Expectation-Maximization Method for EEG-Based Continuous Cursor Control
Yixiao Wang
2007-01-01
Full Text Available To develop effective learning algorithms for continuous prediction of cursor movement using EEG signals is a challenging research issue in brain-computer interface (BCI. In this paper, we propose a novel statistical approach based on expectation-maximization (EM method to learn the parameters of a classifier for EEG-based cursor control. To train a classifier for continuous prediction, trials in training data-set are first divided into segments. The difficulty is that the actual intention (label at each time interval (segment is unknown. To handle the uncertainty of the segment label, we treat the unknown labels as the hidden variables in the lower bound on the log posterior and maximize this lower bound via an EM-like algorithm. Experimental results have shown that the averaged accuracy of the proposed method is among the best.
Makram KRIT
2016-01-01
Full Text Available This paper presents several iterative methods based on Stochastic Expectation-Maximization (EM methodology in order to estimate parametric reliability models for randomly lifetime data. The methodology is related to Maximum Likelihood Estimates (MLE in the case of missing data. A bathtub form of failure intensity formulation of a repairable system reliability is presented where the estimation of its parameters is considered through EM algorithm . Field of failures data from industrial site are used to fit the model. Finally, the interval estimation basing on large-sample in literature is discussed and the examination of the actual coverage probabilities of these confidence intervals is presented using Monte Carlo simulation method.
3D Ordered Subset Expectation Maximization (OSEM) Algorithm for a Compton camera
Kim, Soo Mee; Lee, Jae Sung; Kim, Joong Hyun; Lee, Dong Soo [Seoul National University College of Medicine, Seoul (Korea, Republic of); Lee, Mi No; Lee, Soo Jin [Paichai University, Daejeon (Korea, Republic of); Lee, Ju Hahn; Lee, Chun Sik [Chungang University, Seoul (Korea, Republic of); Kim, Chan Hyeong [Hanyang University, Seoul (Korea, Republic of)
2007-07-01
The Compton camera is an single-photon imaging device that employs an electronic collimation based on the relationship between energy transfer and Compton scattering angle of {gamma}-ray in the detector. In this study, the expectation maximization (EM) approach along with its accelerated version based on the ordered subsets principle was applied to the problem of image reconstruction for a Compton camera, which is known to be computationally challenging. This study also compared several methods of constructing subsets for the optimal performance of these algorithms.
Floberg, John M.; Struck, Aaron F; Peters, Brooke K; Jaskowiak, Christine J; Perlman, Scott B; Hall, Lance T
2012-01-01
There is a well known tradeoff between image noise and image sharpness that is dependent on the number of iterations performed in ordered subset expectation maximization (OSEM) reconstruction of PET data. We aim to evaluate the impact of this tradeoff on the sensitivity and specificity of 18F-FDG PET for the diagnosis of temporal lobe epilepsy. A retrospective blinded reader study was performed on two OSEM reconstructions, using either 2 or 5 iterations, of 32 18F-FDG PET studies acquired at ...
Zibar, Darko; Winther, Ole; Franceschi, Niccolo; Borkowski, Robert; Caballero Jambrina, Antonio; Arlunno, Valeria; Schmidt, Mikkel Nørgaard; Guerrero Gonzalez, Neil; Mao, Bangning; Ye, Yabin; Larsen, Knud J.; Tafur Monroy, Idelfonso
2012-01-01
In this paper, we show numerically and experimentally that expectation maximization (EM) algorithm is a powerful tool in combating system impairments such as fibre nonlinearities, inphase and quadrature (I/Q) modulator imperfections and laser linewidth. The EM algorithm is an iterative algorithm...... that can be used to compensate for the impairments which have an imprint on a signal constellation, i.e. rotation and distortion of the constellation points. The EM is especially effective for combating non-linear phase noise (NLPN). It is because NLPN severely distorts the signal constellation and...
Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET
Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee
2016-04-01
Positron emission tomography (PET) is a widely used imaging modality; however, the PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half of the detector size and determines the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of the existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms. The WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can be easily extended to other iterative reconstruction algorithms, such as maximum a priori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design.
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method called square-root cubature Kalman smoother (SCKS) for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for the system identification and state estimation for the hemodynamic model inversion literature. This paper do not compare its results with Tikhonov-regularized Newton—CKF (TNF-CKF), a recent robust method which works in filtering sense.
An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains
Qihong Duan
2010-01-01
Full Text Available In many applications, the failure rate function may present a bathtub shape curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data by the first time reaching the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stage, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and that there are n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices for O(nm2 times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations
We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).
String-averaging expectation-maximization for maximum likelihood estimation in emission tomography
We study the maximum likelihood model in emission tomography and propose a new family of algorithms for its solution, called string-averaging expectation-maximization (SAEM). In the string-averaging algorithmic regime, the index set of all underlying equations is split into subsets, called ‘strings’, and the algorithm separately proceeds along each string, possibly in parallel. Then, the end-points of all strings are averaged to form the next iterate. SAEM algorithms with several strings present better practical merits than the classical row-action maximum-likelihood algorithm. We present numerical experiments showing the effectiveness of the algorithmic scheme, using data of image reconstruction problems. Performance is evaluated from the computational cost and reconstruction quality viewpoints. A complete convergence theory is also provided. (paper)
Zhang Jin [Department of Biomedical Engineering, Illinois Institute of Technology, 10 West 32nd Street, E1-116, Chicago, IL 60616 (United States); Shi Daxin [Department of Biomedical Engineering, Illinois Institute of Technology, 10 West 32nd Street, E1-116, Chicago, IL 60616 (United States); Anastasio, Mark A [Department of Biomedical Engineering, Illinois Institute of Technology, 10 West 32nd Street, E1-116, Chicago, IL 60616 (United States); Sillanpaa, Jussi [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, 1275 York Avenue, New York, NY 10021 (United States); Chang Jenghwa [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, 1275 York Avenue, New York, NY 10021 (United States)
2005-11-07
We propose and investigate weighted expectation maximization (EM) algorithms for image reconstruction in x-ray tomography. The development of the algorithms is motivated by the respiratory-gated megavoltage tomography problem, in which the acquired asymmetric cone-beam projections are limited in number and unevenly sampled over view angle. In these cases, images reconstructed by use of the conventional EM algorithm can contain ring- and streak-like artefacts that are attributable to a combination of data inconsistencies and truncation of the projection data. By use of computer-simulated and clinical gated fan-beam megavoltage projection data, we demonstrate that the proposed weighted EM algorithms effectively mitigate such image artefacts. (note)
Angelis, Georgios I.; Matthews, Julian C.; Markiewicz, Pawel J.; Kotasidis, Fotis A. [Manchester Univ. (United Kingdom). Dept. of Cancer and Enabling Sciences; Lionheart, William R. [Manchester Univ. (United Kingdom). School of Mathematics; Reader, Andrew J. [McGill Univ., Montreal, QC (Canada). Brain Imaging Centre
2011-07-01
Recent studies have demonstrated the benefits of a resolution model within the reconstruction algorithm in an attempt to account for those effects that degrade the resolution of an image. However, these algorithms usually suffer from slower convergence rates due to the additional need to solve an image resolution deconvolution problem. In this work we investigate a newly proposed algorithm, which decouples the tomographic and image resolution problems within an image based expectation maximization (EM) framework. Results showed that convergence can be accelerated by interleaving multiple iterations of an image based EM algorithm solving the resolution model problem with EM iterations solving the tomographic problem. Minor differences are observed using the proposed nested algorithm compared to the single iteration normally performed when optimal number of iterations are performed for each algorithm. However using the proposed nested approach convergence is significantly accelerated enabling reconstruction using far fewer iterations. This may be of particular benefit for slowly converging portions of the image. (orig.)
Clustering performance comparison using K-means and expectation maximization algorithms
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-01-01
Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results. PMID:26019610
Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations
Fujimoto, Kazufumi, E-mail: m_fuji@kvj.biglobe.ne.jp [Bank of Tokyo-Mitsubishi UFJ, Ltd., Corporate Risk Management Division (Japan); Nagai, Hideo, E-mail: nagai@sigmath.es.osaka-u.ac.jp [Osaka University, Division of Mathematical Science for Social Systems, Graduate School of Engineering Science (Japan); Runggaldier, Wolfgang J., E-mail: runggal@math.unipd.it [Universita di Padova, Dipartimento di Matematica Pura ed Applicata (Italy)
2013-02-15
We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).
Garbarino, Sara; Massone, Anna Maria; Sannino, Alessia; Boselli, Antonella; Wang, Xuan; Spinelli, Nicola; Piana, Michele
2016-01-01
We consider the problem of retrieving the aerosol extinction coefficient from Raman lidar measurements. This is an ill--posed inverse problem that needs regularization, and we propose to use the Expectation--Maximization (EM) algorithm to provide stable solutions. Indeed, EM is an iterative algorithm that imposes a positivity constraint on the solution, and provides regularization if iterations are stopped early enough. We describe the algorithm and propose a stopping criterion inspired by a statistical principle. We then discuss its properties concerning the spatial resolution. Finally, we validate the proposed approach by using both synthetic data and experimental measurements; we compare the reconstructions obtained by EM with those obtained by the Tikhonov method, by the Levenberg-Marquardt method, as well as those obtained by combining data smoothing and numerical derivation.
Cherchi, Elisabetta; Guevara, Cristian
2012-01-01
a series of Monte Carlo experiments, evidence suggested four main conclusions: (a) efficiency increased when the true variance-covariance matrix became diagonal, (b) EM was more robust to the curse of dimensionality in regard to efficiency and estimation time, (c) EM did not recover the true scale...... simulated likelihood (MSL) method is compared with the alternative expectation- maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time and that...... EM has more difficulty in recovering the true scale of the coefficients. In this paper, the analysis is extended from cross-sectional data to the less volatile case of panel data to explore the effect on the relative performance of the methods with several realizations of the random coefficients. In...
Yousefi, Siamak; Balasubramanian, Madhusudhanan; Goldbaum, Michael H.; Medeiros, Felipe A.; Zangwill, Linda M.; Weinreb, Robert N.; Liebmann, Jeffrey M.; Girkin, Christopher A.; Bowd, Christopher
2016-01-01
Purpose To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-models (VIM) for detecting glaucomatous progression along visual field (VF) defect patterns (GEM–progression of patterns (POP) and VIM-POP). To compare GEM-POP and VIM-POP with other methods. Methods GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD), and visual field index (VFI). Results Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. Conclusions GEM-POP was significantly more sensitive to PGON than PoPLR and linear regression of MD and VFI in our sample, while providing localized progression information. Translational Relevance Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning. PMID:27152250
Langmead, Christopher James [Dartmouth Computer Science Department (United States); Donald, Bruce Randall [Dartmouth Center for Structural Biology and Computational Chemistry (United States)], E-mail: brd@cs.dartmouth.edu
2004-06-15
We report an automated procedure for high-throughput NMR resonance assignment for a protein of known structure, or of an homologous structure. Our algorithm performs Nuclear Vector Replacement (NVR) by Expectation/Maximization (EM) to compute assignments. NVR correlates experimentally-measured NH residual dipolar couplings (RDCs) and chemical shifts to a given a priori whole-protein 3D structural model. The algorithm requires only uniform {sup 15}N-labelling of the protein, and processes unassigned H{sup N}-{sup 15}N HSQC spectra, H{sup N}-{sup 15}N RDCs, and sparse H{sup N}-H{sup N} NOE's (d{sub NN}s). NVR runs in minutes and efficiently assigns the (H{sup N},{sup 15}N) backbone resonances as well as the sparse d{sub NN}s from the 3D {sup 15}N-NOESY spectrum, in O(n{sup 3}) time. The algorithm is demonstrated on NMR data from a 76-residue protein, human ubiquitin, matched to four structures, including one mutant (homolog), determined either by X-ray crystallography or by different NMR experiments (without RDCs). NVR achieves an average assignment accuracy of over 99%. We further demonstrate the feasibility of our algorithm for different and larger proteins, using different combinations of real and simulated NMR data for hen lysozyme (129 residues) and streptococcal protein G (56 residues), matched to a variety of 3D structural models. Abbreviations: NMR, nuclear magnetic resonance; NVR, nuclear vector replacement; RDC, residual dipolar coupling; 3D, three-dimensional; HSQC, heteronuclear single-quantum coherence; H{sup N}, amide proton; NOE, nuclear Overhauser effect; NOESY, nuclear Overhauser effect spectroscopy; d{sub NN}, nuclear Overhauser effect between two amide protons; MR, molecular replacement; SAR, structure activity relation; DOF, degrees of freedom; nt., nucleotides; SPG, Streptococcal protein G; SO(3), special orthogonal (rotation) group in 3D; EM, Expectation/Maximization; SVD, singular value decomposition.
In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method introduces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method. (paper)
Takahashi, Yasuyuki; Murase, Kenya [Osaka Medical Coll., Takatsuki (Japan). Graduate School; Higashino, Hiroshi; Sogabe, Ichiro; Sakamoto, Kana
2001-12-01
The quality of images reconstructed by means of the maximum likelihood-expectation maximization (ML-EM) and ordered subset (OS)-EM algorithms, was examined with parameters such as the number of iterations and subsets, then compared with the quality of images reconstructed by the filtered back projection method. Phantoms showing signals inside signals, which mimicked single-photon emission computed tomography (SPECT) images of cerebral blood flow and myocardial perfusion, and phantoms showing signals around the signals obtained by SPECT of bone and tumor were used for experiments. To determine signals for recognition, SPECT images in which the signals could be appropriately recognized with a combination of fewer iterations and subsets of different sizes and densities were evaluated by receiver operating characteristic (ROC) analysis. The results of ROC analysis were applied to myocardial phantom experiments and scintigraphy of myocardial perfusion. Taking the image processing time into consideration, good SPECT images were obtained by OS-EM at iteration No. 10 and subset 5. This study will be helpful for selection of parameters such as the number of iterations and subsets when using the ML-EM or OS-EM algorithms. (author)
Liu, Haiguang; Spence, John C.H.
2014-01-01
Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these ‘stills’. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated. PMID:25485120
A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks
Bhaduri, Kanishka; Srivastava, Ashok N.
2009-01-01
This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed environment such as clustering, anomaly detection, target tracking to name a few. This technology is crucial for many emerging peer-to-peer applications for bioinformatics, astronomy, social networking, sensor networks and web mining. Centralizing all or some of the data for building global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of the peer-to-peer networks, and dynamic nature of the data/network. The distributed algorithm we have developed in this paper is provably-correct i.e. it converges to the same result compared to a similar centralized algorithm and can automatically adapt to changes to the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it is outdated. We present thorough experimental results to verify our theoretical claims.
Kim, Dae Won [Dankook University, Cheonan (Korea, Republic of)
2005-02-15
Ultrasonic inspection methods are widely used for detecting flaws in materials. The signal analysis step plays a crucial part in the data interpretation process. A number of signal processing methods have been proposed to classify ultrasonic flaw signals. One of the more popular methods involves the extraction of an appropriate set of features followed by the use of a neural network for the classification of the signals in the feature spare. This paper describes an alternative approach which uses the least mean square (LMS) method and exportation maximization (EM) algorithm with the model based deconvolution which is employed for classifying nondestructive evaluation (NDE) signals from steam generator tubes in a nuclear power plant. The signals due to cracks and deposits are not significantly different. These signals must be discriminated to prevent from happening a huge disaster such as contamination of water or explosion. A model based deconvolution has been described to facilitate comparison of classification results. The method uses the space alternating generalized expectation maximiBation (SAGE) algorithm ill conjunction with the Newton-Raphson method which uses the Hessian parameter resulting in fast convergence to estimate the time of flight and the distance between the tube wall and the ultrasonic sensor. Results using these schemes for the classification of ultrasonic signals from cracks and deposits within steam generator tubes are presented and showed a reasonable performances
Speckle reduction for medical ultrasound images with an expectation-maximization framework
HOU Tao; WANG Yuanyuan; GUO Yi
2011-01-01
In view of inherent speckle noise in medical images, a speckle reduction method was proposed based on an expectation-maximization （EM） framework. First, the real component of the in-phase/quadrature （I/Q） ultrasound image is extracted. Then, it is used to blindly estimate the point spread function （PSF） of the imaging system. Finally, based on the EM framework, an iterative algorithm alternating between the Wiener Filter and the anisotropic diffusion （AD） is exploited to produce despeckled images. The comparison experiment is carried out on both simulated and in vivo ultrasound images. It is shown that, with respect to the I/Q image, the proposed method averagely improves the speckle-signal-to-noise ratio （S-SNR） and the edge preservation index （β） of images by the factor of 1.94 and 7.52. Meanwhile, it averagely reduces the normalized mean-squared error （NMSE） by the factor of 3.95. The simulation and in vivo results indicates that the proposed method has a better overall performance than exited ones.
Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook
2013-05-01
In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method introduces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method.
Haiguang Liu
2014-11-01
Full Text Available Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these `stills'. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity, the method is validated.
Stanley, B.J.; Guiochon, G. [Tennessee Univ., Knoxville, TN (United States). Dept. of Chemistry]|[Oak Ridge National Lab., TN (United States)
1993-08-01
The expectation-maximization (EM) method of parameter estimation is used to calculate adsorption energy distributions of molecular probes from their adsorption isotherms. EM does not require prior knowledge of the distribution function or the isotherm, requires no smoothing of the isotherm data, and converges with high stability towards the maximum-likelihood estimate. The method is therefore robust and accurate at high iteration numbers. The EM algorithm is tested with simulated energy distributions corresponding to unimodal Gaussian, bimodal Gaussian, Poisson distributions, and the distributions resulting from Misra isotherms. Theoretical isotherms are generated from these distributions using the Langmuir model, and then chromatographic band profiles are computed using the ideal model of chromatography. Noise is then introduced in the theoretical band profiles comparable to those observed experimentally. The isotherm is then calculated using the elution-by-characteristic points method. The energy distribution given by the EM method is compared to the original one. Results are contrasted to those obtained with the House and Jaycock algorithm HILDA, and shown to be superior in terms of robustness, accuracy, and information theory. The effect of undersampling of the high-pressure/low-energy region of the adsorption is reported and discussed for the EM algorithm, as well as the effect of signal-to-noise ratio on the degree of heterogeneity that may be estimated experimentally.
Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design
Leube, P. C.; Geiges, A.; Nowak, W.
2012-02-01
Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher-computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data is often invisible for linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically
Development of regularized expectation maximization algorithms for fan-beam SPECT data
SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions
Association studies with imputed variants using expectation-maximization likelihood-ratio tests.
Kuan-Chieh Huang
Full Text Available Genotype imputation has become standard practice in modern genetic studies. As sequencing-based reference panels continue to grow, increasingly more markers are being well or better imputed but at the same time, even more markers with relatively low minor allele frequency are being imputed with low imputation quality. Here, we propose new methods that incorporate imputation uncertainty for downstream association analysis, with improved power and/or computational efficiency. We consider two scenarios: I when posterior probabilities of all potential genotypes are estimated; and II when only the one-dimensional summary statistic, imputed dosage, is available. For scenario I, we have developed an expectation-maximization likelihood-ratio test for association based on posterior probabilities. When only imputed dosages are available (scenario II, we first sample the genotype probabilities from its posterior distribution given the dosages, and then apply the EM-LRT on the sampled probabilities. Our simulations show that type I error of the proposed EM-LRT methods under both scenarios are protected. Compared with existing methods, EM-LRT-Prob (for scenario I offers optimal statistical power across a wide spectrum of MAF and imputation quality. EM-LRT-Dose (for scenario II achieves a similar level of statistical power as EM-LRT-Prob and, outperforms the standard Dosage method, especially for markers with relatively low MAF or imputation quality. Applications to two real data sets, the Cebu Longitudinal Health and Nutrition Survey study and the Women's Health Initiative Study, provide further support to the validity and efficiency of our proposed methods.
Development of regularized expectation maximization algorithms for fan-beam SPECT data
Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo [Seoul National University College of Medicine, Seoul (Korea, Republic of); Lee, Soo Jin [Paichai University, Daejeon (Korea, Republic of); Kim, Kyeong Min [Korea Institute of Radiology and Medical Sciences, Seoul (Korea, Republic of)
2005-10-15
SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions.
Stochastic modeling is a challenging task for low-cost sensors whose errors can have complex spectral structures. This makes the tuning process of the INS/GNSS Kalman filter often sensitive and difficult. For example, first-order Gauss–Markov processes are very often used in inertial sensor models. But the estimation of their parameters is a non-trivial task if the error structure is mixed with other types of noises. Such an estimation is often attempted by computing and analyzing Allan variance plots. This contribution demonstrates solving situations when the estimation of error parameters by graphical interpretation is rather difficult. The novel strategy performs direct estimation of these parameters by means of the expectation-maximization (EM) algorithm. The algorithm results are first analyzed with a critical and practical point of view using simulations with typically encountered error signals. These simulations show that the EM algorithm seems to perform better than the Allan variance and offers a procedure to estimate first-order Gauss–Markov processes mixed with other types of noises. At the same time, the conducted tests revealed limits of this approach that are related to the convergence and stability issues. Suggestions are given to circumvent or mitigate these problems when complexity of error structure is 'reasonable'. This work also highlights the fact that the suggested approach via EM algorithm and the Allan variance may not be able to estimate the parameters of complex error models reasonably well and shows the need for new estimation procedures to be developed in this context. Finally, an empirical scenario is presented to support the former findings. There, the positive effect of using the more sophisticated EM-based error modeling on a filtered trajectory is highlighted
Kim, Soo Mee [Seoul National Univ. College of Medicine, Seoul (Korea, Republic of); Lee, Jae Sung; Lee, Soo Jin [Paichai University, Daejeon (Korea, Republic of)
2007-07-01
In this study, the expectation maximization (EM) and ordered subset EM (OSEM) reconstruction algorithms have been applied to the Compton projection data. For OSEM, we propose the several methods for constructing subsets and compare the impact of the each method on the reconstructed images to choose the proper subset construction method. A Compton camera was consisted of three pairs of scatterer and absorber detectors which were parallel to each other. The detector pairs were positioned along the x-, y-, and z-axes at the radial offset of 10 em. The 3-directional projection data of 5-cylinder software phantom (64x64x64 array, 1.56mm) was obtained from a Compton projector. In this study, we used the iterative reconstruction algorithms such as EM and OSEM. For application of OSEM algorithm to the Compton camera, we proposed three strategies for constructing the exclusive subsets; scattering angle-based subsets (OSEM-SA), detector-position-based subsets (OSEM-DP), and both scattering angle- and detector-position-based subsets (OSEM-AP). The OSEM with 16, 64 and 128 subsets were performed through 16, 4, and 2 iterations, respectively. The OSEM with 16 subsets and 4 iterations was equivalent to the EM with 64 iterations, but the computation time was approximately reduced by 14 times. All the three schemes for choosing the subsets in OSEM algorithm yielded similar results in computation time, but the percent error for OSEM-SA was slightly larger than others. No significant change of percent error as the subset number increased up to 128 was observed. The simulation results showed that the EM reconstruction algorithm is applicable to the Compton camera data with sufficient counting statistics. OSEM significantly improved the computational efficiency and maintained the image quality of the standard EM reconstruction. The OSEM algorithm which included subsets on the detection positions (OSEM-DP and OSEM-AP) provided slightly better results than OSEM-SA.
In this study, the expectation maximization (EM) and ordered subset EM (OSEM) reconstruction algorithms have been applied to the Compton projection data. For OSEM, we propose the several methods for constructing subsets and compare the impact of the each method on the reconstructed images to choose the proper subset construction method. A Compton camera was consisted of three pairs of scatterer and absorber detectors which were parallel to each other. The detector pairs were positioned along the x-, y-, and z-axes at the radial offset of 10 em. The 3-directional projection data of 5-cylinder software phantom (64x64x64 array, 1.56mm) was obtained from a Compton projector. In this study, we used the iterative reconstruction algorithms such as EM and OSEM. For application of OSEM algorithm to the Compton camera, we proposed three strategies for constructing the exclusive subsets; scattering angle-based subsets (OSEM-SA), detector-position-based subsets (OSEM-DP), and both scattering angle- and detector-position-based subsets (OSEM-AP). The OSEM with 16, 64 and 128 subsets were performed through 16, 4, and 2 iterations, respectively. The OSEM with 16 subsets and 4 iterations was equivalent to the EM with 64 iterations, but the computation time was approximately reduced by 14 times. All the three schemes for choosing the subsets in OSEM algorithm yielded similar results in computation time, but the percent error for OSEM-SA was slightly larger than others. No significant change of percent error as the subset number increased up to 128 was observed. The simulation results showed that the EM reconstruction algorithm is applicable to the Compton camera data with sufficient counting statistics. OSEM significantly improved the computational efficiency and maintained the image quality of the standard EM reconstruction. The OSEM algorithm which included subsets on the detection positions (OSEM-DP and OSEM-AP) provided slightly better results than OSEM-SA
Guo, Jingyu; Tian, Dehua; McKinney, Brett A.; Hartman, John L.
2010-06-01
Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance
A Bayesian Analysis of GPS Guidance System in Precision Agriculture: The Role of Expectations
Khanal, Aditya R; Mishra, Ashok K.; Lambert, Dayton M.; Paudel, Krishna P.
2013-01-01
Farmer’s post adoption responses about technology are important in continuation and diffusion of a technology in precision agriculture. We studied farmer’s frequency of application decisions of GPS guidance system, after adoption. Using a Cotton grower’s precision farming survey in the U.S. and Bayesian approaches, our study suggests that ‘meeting expectation’ plays an important positive role. Farmer’s income level, farm size, and farming occupation are other important factors in modeling GPS...
Trend analysis is a common statistical method used to investigate the operation and changes of a repairable system over time. This method takes historical failure data of a system or a group of similar systems and determines whether the recurrent failures exhibit an increasing or decreasing trend. Most trend analysis methods proposed in the literature assume that the failure times are known, so the failure data is statistically complete; however, in many situations, such as hidden failures, failure times are subject to censoring. In this paper we assume that the failure process of a group of similar independent repairable units follows a non-homogenous Poisson process with a power law intensity function. Moreover, the failure data are subject to left, interval and right censoring. The paper proposes using the likelihood ratio test to check for trends in the failure data. It uses the Expectation-Maximization (EM) algorithm to find the parameters, which maximize the data likelihood in the case of null and alternative hypotheses. A recursive procedure is used to solve the main technical problem of calculating the expected values in the Expectation step. The proposed method is applied to a hospital's maintenance data for trend analysis of the components of a general infusion pump.
Taghipour, Sharareh, E-mail: sharareh@mie.utoronto.ca [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Rd., Toronto, Ont., M5S 3G8 (Canada); Banjevic, Dragan [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Rd., Toronto, Ont., M5S 3G8 (Canada)
2011-10-15
Trend analysis is a common statistical method used to investigate the operation and changes of a repairable system over time. This method takes historical failure data of a system or a group of similar systems and determines whether the recurrent failures exhibit an increasing or decreasing trend. Most trend analysis methods proposed in the literature assume that the failure times are known, so the failure data is statistically complete; however, in many situations, such as hidden failures, failure times are subject to censoring. In this paper we assume that the failure process of a group of similar independent repairable units follows a non-homogenous Poisson process with a power law intensity function. Moreover, the failure data are subject to left, interval and right censoring. The paper proposes using the likelihood ratio test to check for trends in the failure data. It uses the Expectation-Maximization (EM) algorithm to find the parameters, which maximize the data likelihood in the case of null and alternative hypotheses. A recursive procedure is used to solve the main technical problem of calculating the expected values in the Expectation step. The proposed method is applied to a hospital's maintenance data for trend analysis of the components of a general infusion pump.
Chang Luo
2015-01-01
Full Text Available The many-objective optimization performance of the Kriging-surrogate-based evolutionary algorithm (EA, which maximizes expected hypervolume improvement (EHVI for updating the Kriging model, is investigated and compared with those using expected improvement (EI and estimation (EST updating criteria in this paper. Numerical experiments are conducted in 3- to 15-objective DTLZ1-7 problems. In the experiments, an exact hypervolume calculating algorithm is used for the problems with less than six objectives. On the other hand, an approximate hypervolume calculating algorithm based on Monte Carlo sampling is adopted for the problems with more objectives. The results indicate that, in the nonconstrained case, EHVI is a highly competitive updating criterion for the Kriging model and EA based many-objective optimization, especially when the test problem is complex and the number of objectives or design variables is large.
Endrizzi, M., E-mail: m.endrizzi@ucl.ac.uk [Dipartimento di Fisica, Università di Siena, Via Roma 56, 53100 Siena (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Delogu, P. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Dipartimento di Fisica “E. Fermi”, Università di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Oliva, P. [Dipartimento di Chimica e Farmacia, Università di Sassari, via Vienna 2, 07100 Sassari (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari, s.p. per Monserrato-Sestu Km 0.700, 09042 Monserrato (Italy)
2014-12-01
An expectation maximization method is applied to the reconstruction of X-ray tube spectra from transmission measurements in the energy range 7–40 keV. A semiconductor single-photon counting detector, ionization chambers and a scintillator-based detector are used for the experimental measurement of the transmission. The number of iterations required to reach an approximate solution is estimated on the basis of the measurement error, according to the discrepancy principle. The effectiveness of the stopping rule is studied on simulated data and validated with experiments. The quality of the reconstruction depends on the information available on the source itself and the possibility to add this knowledge to the solution process is investigated. The method can produce good approximations provided that the amount of noise in the data can be estimated. - Highlights: • An expectation maximization method was used together with the discrepancy principle. • The discrepancy principle is a suitable criterion for stopping the iteration. • The method can be applied to a variety of detectors/experimental conditions. • The minimum information required is the amount of noise that affects the data. • Improved results are achieved by inserting more information when available.
An expectation maximization method is applied to the reconstruction of X-ray tube spectra from transmission measurements in the energy range 7–40 keV. A semiconductor single-photon counting detector, ionization chambers and a scintillator-based detector are used for the experimental measurement of the transmission. The number of iterations required to reach an approximate solution is estimated on the basis of the measurement error, according to the discrepancy principle. The effectiveness of the stopping rule is studied on simulated data and validated with experiments. The quality of the reconstruction depends on the information available on the source itself and the possibility to add this knowledge to the solution process is investigated. The method can produce good approximations provided that the amount of noise in the data can be estimated. - Highlights: • An expectation maximization method was used together with the discrepancy principle. • The discrepancy principle is a suitable criterion for stopping the iteration. • The method can be applied to a variety of detectors/experimental conditions. • The minimum information required is the amount of noise that affects the data. • Improved results are achieved by inserting more information when available
Liu, Mengyuan; Kitsch, Averi; Miller, Steven; Chau, Vann; Poskitt, Kenneth; Rousseau, Francois; Shaw, Dennis; Studholme, Colin
2016-02-15
Accurate automated tissue segmentation of premature neonatal magnetic resonance images is a crucial task for quantification of brain injury and its impact on early postnatal growth and later cognitive development. In such studies it is common for scans to be acquired shortly after birth or later during the hospital stay and therefore occur at arbitrary gestational ages during a period of rapid developmental change. It is important to be able to segment any of these scans with comparable accuracy. Previous work on brain tissue segmentation in premature neonates has focused on segmentation at specific ages. Here we look at solving the more general problem using adaptations of age specific atlas based methods and evaluate this using a unique manually traced database of high resolution images spanning 20 gestational weeks of development. We examine the complimentary strengths of age specific atlas-based Expectation-Maximization approaches and patch-based methods for this problem and explore the development of two new hybrid techniques, patch-based augmentation of Expectation-Maximization with weighted fusion and a spatial variability constrained patch search. The former approach seeks to combine the advantages of both atlas- and patch-based methods by learning from the performance of the two techniques across the brain anatomy at different developmental ages, while the latter technique aims to use anatomical variability maps learnt from atlas training data to locally constrain the patch-based search range. The proposed approaches were evaluated using leave-one-out cross-validation. Compared with the conventional age specific atlas-based segmentation and direct patch based segmentation, both new approaches demonstrate improved accuracy in the automated labeling of cortical gray matter, white matter, ventricles and sulcal cortical-spinal fluid regions, while maintaining comparable results in deep gray matter. PMID:26702777
Long, Quan
2013-06-01
Shannon-type expected information gain can be used to evaluate the relevance of a proposed experiment subjected to uncertainty. The estimation of such gain, however, relies on a double-loop integration. Moreover, its numerical integration in multi-dimensional cases, e.g., when using Monte Carlo sampling methods, is therefore computationally too expensive for realistic physical models, especially for those involving the solution of partial differential equations. In this work, we present a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest. We obtain a closed-form approximation of the inner integral and the corresponding dominant error term in the cases where parameters are determined by the experiment, such that only a single-loop integration is needed to carry out the estimation of the expected information gain. To deal with the issue of dimensionality in a complex problem, we use a sparse quadrature for the integration over the prior pdf. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear numerical examples, including the designs of the scalar parameter in a one-dimensional cubic polynomial function, the design of the same scalar in a modified function with two indistinguishable parameters, the resolution width and measurement time for a blurred single peak spectrum, and the boundary source locations for impedance tomography in a square domain. © 2013 Elsevier B.V.
Single photon emission computed tomography imaging suffers from poor spatial resolution and high statistical noise. Consequently, the contrast of small structures is reduced, the visual detection of defects is limited and precise quantification is difficult. To improve the contrast, it is possible to include the spatially variant point spread function of the detection system into the iterative reconstruction algorithm. This kind of method is well known to be effective, but time consuming. We have developed a faster method to account for the spatial resolution loss in three dimensions, based on a postreconstruction restoration method. The method uses two steps. First, a noncorrected iterative ordered subsets expectation maximization (OSEM) reconstruction is performed and, in the second step, a three-dimensional (3D) iterative maximum likelihood expectation maximization (ML-EM) a posteriori spatial restoration of the reconstructed volume is done. In this paper, we compare to the standard OSEM-3D method, in three studies (two in simulation and one from experimental data). In the two first studies, contrast, noise, and visual detection of defects are studied. In the third study, a quantitative analysis is performed from data obtained with an anthropomorphic striatal phantom filled with 123-I. From the simulations, we demonstrate that contrast as a function of noise and lesion detectability are very similar for both OSEM-3D and OSEM-R methods. In the experimental study, we obtained very similar values of activity-quantification ratios for different regions in the brain. The advantage of OSEM-R compared to OSEM-3D is a substantial gain of processing time. This gain depends on several factors. In a typical situation, for a 128x128 acquisition of 120 projections, OSEM-R is 13 or 25 times faster than OSEM-3D, depending on the calculation method used in the iterative restoration. In this paper, the OSEM-R method is tested with the approximation of depth independent
Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J
2016-02-01
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm(2)). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect. PMID:26758232
Papaconstadopoulos, P.; Levesque, I. R.; Maglieri, R.; Seuntjens, J.
2016-02-01
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5× 0.5 cm2). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect.
Lu, Chia-Feng; Guo, Wan-Yuo; Chang, Feng-Chi; Huang, Shang-Ran; Chou, Yen-Chun; Wu, Yu-Te
2013-01-01
Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in clinical diagnosis and treatment of cerebrovascular diseases. The principle of segmentation methods was based on the clustering of bolus transit-time profiles to discern areas of different tissues. However, the cerebrovascular diseases may result in a delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of the segmentation technique under delayed/dispersed circumstance is critical to accurately evaluate the severity of the vascular disease. In this study, we improved the segmentation method of expectation-maximization algorithm by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a mixture of multivariate Gaussians model. In addition, Monte Carlo simulations were conducted to evaluate the performance of proposed method under different levels of delay, dispersion, and noise of signal profiles in tissue segmentation. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that the normal, delayed or dispersed hemodynamics can be well differentiated for patients, and therefore the local arterial input function for impaired tissues can be recognized to minimize the error when estimating the cerebral blood flow. Furthermore, the tissue in the risk of infarct and the tissue with or without the complementary blood supply from the communicating arteries can be identified. PMID:23894386
Chue-Poh TAN
2008-02-01
Full Text Available This paper presents the application of multi dimensional feature reduction ofConsistency Subset Evaluator (CSE and Principal Component Analysis (PCAand Unsupervised Expectation Maximization (UEM classifier for imagingsurveillance system. Recently, research in image processing has raised muchinterest in the security surveillance systems community. Weapon detection is oneof the greatest challenges facing by the community recently. In order toovercome this issue, application of the UEM classifier is performed to focus onthe need of detecting dangerous weapons. However, CSE and PCA are used toexplore the usefulness of each feature and reduce the multi dimensional featuresto simplified features with no underlying hidden structure. In this paper, we takeadvantage of the simplified features and classifier to categorize images objectwith the hope to detect dangerous weapons effectively. In order to validate theeffectiveness of the UEM classifier, several classifiers are used to compare theoverall accuracy of the system with the compliment from the features reduction ofCSE and PCA. These unsupervised classifiers include Farthest First, DensitybasedClustering and k-Means methods. The final outcome of this researchclearly indicates that UEM has the ability in improving the classification accuracyusing the extracted features from the multi-dimensional feature reduction of CSE.Besides, it is also shown that PCA is able to speed-up the computational timewith the reduced dimensionality of the features compromising the slight decreaseof accuracy.
Lu, Chia-Feng; Guo, Wan-Yuo; Chang, Feng-Chi; Huang, Shang-Ran; Chou, Yen-Chun; Wu, Yu-Te
2013-01-01
Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in clinical diagnosis and treatment of cerebrovascular diseases. The principle of segmentation methods was based on the clustering of bolus transit-time profiles to discern areas of different tissues. However, the cerebrovascular diseases may result in a delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of the segmentation technique under delayed/dispersed circumstance is critical to accurately evaluate the severity of the vascular disease. In this study, we improved the segmentation method of expectation-maximization algorithm by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a mixture of multivariate Gaussians model. In addition, Monte Carlo simulations were conducted to evaluate the performance of proposed method under different levels of delay, dispersion, and noise of signal profiles in tissue segmentation. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that the normal, delayed or dispersed hemodynamics can be well differentiated for patients, and therefore the local arterial input function for impaired tissues can be recognized to minimize the error when estimating the cerebral blood flow. Furthermore, the tissue in the risk of infarct and the tissue with or without the complementary blood supply from the communicating arteries can be identified. PMID:23894386
Chia-Feng Lu
Full Text Available Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in clinical diagnosis and treatment of cerebrovascular diseases. The principle of segmentation methods was based on the clustering of bolus transit-time profiles to discern areas of different tissues. However, the cerebrovascular diseases may result in a delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of the segmentation technique under delayed/dispersed circumstance is critical to accurately evaluate the severity of the vascular disease. In this study, we improved the segmentation method of expectation-maximization algorithm by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a mixture of multivariate Gaussians model. In addition, Monte Carlo simulations were conducted to evaluate the performance of proposed method under different levels of delay, dispersion, and noise of signal profiles in tissue segmentation. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that the normal, delayed or dispersed hemodynamics can be well differentiated for patients, and therefore the local arterial input function for impaired tissues can be recognized to minimize the error when estimating the cerebral blood flow. Furthermore, the tissue in the risk of infarct and the tissue with or without the complementary blood supply from the communicating arteries can be identified.
Matsumoto, Keiichi; Fujita, Toru; Oogari, Koji [Kyoto Univ. (Japan). Hospital
2002-05-01
Maximum likelihood expectation maximization (ML-EM) image quality is sensitive to the number of iterations, because a large number of iterations leads to images with checkerboard noise. The use of median filtering in the reconstruction process allows both noise reduction and edge preservation. We examined the value of median filtering after reconstruction with ML-EM by comparing filtered back projection (FBP) with a ramp filter or ML-EM without filtering. SPECT images were obtained with a dual-head gamma camera. The acquisition time was changed from 10 to 200 (seconds/frame) to examine the effect of the count statistics on the quality of the reconstructed images. First, images were reconstructed with ML-EM by changing the number of iterations from 1 to 150 in each study. Additionally, median filtering was applied following reconstruction with ML-EM. The quality of the reconstructed images was evaluated in terms of normalized mean square error (NMSE) values and two-dimensional power spectrum analysis. Median filtering after reconstruction by the ML-EM method provided stable NMSE values even when the number of iterations was increased. The signal element of the image was close to the reference image for any repetition number of iterations. Median filtering after reconstruction with ML-EM was useful in reducing noise, with a similar resolution achieved by reconstruction with FBP and a ramp filter. Especially in images with poor count statistics, median filtering after reconstruction with ML-EM is effective as a simple, widely available method. (author)
Jeon, Tae Joo; Bong, Jung Kyun; Kim, Hee Joung; Kim, Myung Jin; Lee, Jong Doo [Yonsei University College of Medicine, Seoul (Korea, Republic of)
2002-12-01
RBC blood pool SPECT has been used to diagnose focal liver lesion such as hemangioma owing to its high specificity. However, low spatial resolution is a major limitation of this modality. Recently, ordered subset expectation maximization (OSEM) has been introduced to obtain tomographic images for clinical application. We compared this new modified iterative reconstruction method, OSEM with conventional filtered back projection (FBP) in imaging of liver hemangioma. Sixty four projection data were acquired using dual head gamma camera in 28 lesions of 24 patients with cavernous hemangioma of liver and these raw data were transferred to LINUX based personal computer. After the replacement of header file as interfile. OSEM was performed under various conditions of subsets (1,2,4,8,16, and 32) and iteration numbers (1,2,4,8, and 16) to obtain the best setting for liver imaging. The best condition for imaging in our investigation was considered to be 4 iterations and 16 subsets. After then, all the images were processed by both FBP and OSEM. There experts reviewed theses images without any information. According to blind review of 28 lesions, OSEM images revealed at least same or better image quality than those of FBP in nearly all cases. Although there showed no significant difference in detection of large lesions more than 3 cm, 5 lesions with 1.5 to 3 cm in diameter were detected by OSEM only. However, both techniques failed to depict 4 cases of small lesions less than 1.5 cm. OSEM revealed better contrast and define in depiction of liver hemangioma as well as higher sensitivity in detection of small lesions. furthermore this reconstruction method dose not require high performance computer system or long reconstruction time, therefore OSEM is supposed to be good method that can be applied to RBC blood pool SPECT for the diagnosis of liver hemangioma.
Ye Ping
2005-12-01
Full Text Available Abstract Background Synthetic lethality experiments identify pairs of genes with complementary function. More direct functional associations (for example greater probability of membership in a single protein complex may be inferred between genes that share synthetic lethal interaction partners than genes that are directly synthetic lethal. Probabilistic algorithms that identify gene modules based on motif discovery are highly appropriate for the analysis of synthetic lethal genetic interaction data and have great potential in integrative analysis of heterogeneous datasets. Results We have developed Genetic Interaction Motif Finding (GIMF, an algorithm for unsupervised motif discovery from synthetic lethal interaction data. Interaction motifs are characterized by position weight matrices and optimized through expectation maximization. Given a seed gene, GIMF performs a nonlinear transform on the input genetic interaction data and automatically assigns genes to the motif or non-motif category. We demonstrate the capacity to extract known and novel pathways for Saccharomyces cerevisiae (budding yeast. Annotations suggested for several uncharacterized genes are supported by recent experimental evidence. GIMF is efficient in computation, requires no training and automatically down-weights promiscuous genes with high degrees. Conclusion GIMF effectively identifies pathways from synthetic lethality data with several unique features. It is mostly suitable for building gene modules around seed genes. Optimal choice of one single model parameter allows construction of gene networks with different levels of confidence. The impact of hub genes the generic probabilistic framework of GIMF may be used to group other types of biological entities such as proteins based on stochastic motifs. Analysis of the strongest motifs discovered by the algorithm indicates that synthetic lethal interactions are depleted between genes within a motif, suggesting that synthetic
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2014-10-01
The expectation maximization (EM) is the standard training algorithm for hidden Markov model (HMM). However, EM faces a local convergence problem in HMM estimation. This paper attempts to overcome this problem of EM and proposes hybrid metaheuristic approaches to EM for HMM. In our earlier research, a hybrid of a constraint-based evolutionary learning approach to EM (CEL-EM) improved HMM estimation. In this paper, we propose a hybrid simulated annealing stochastic version of EM (SASEM) that combines simulated annealing (SA) with EM. The novelty of our approach is that we develop a mathematical reformulation of HMM estimation by introducing a stochastic step between the EM steps and combine SA with EM to provide better control over the acceptance of stochastic and EM steps for better HMM estimation. We also extend our earlier work and propose a second hybrid which is a combination of an EA and the proposed SASEM, (EA-SASEM). The proposed EA-SASEM uses the best constraint-based EA strategies from CEL-EM and stochastic reformulation of HMM. The complementary properties of EA and SA and stochastic reformulation of HMM of SASEM provide EA-SASEM with sufficient potential to find better estimation for HMM. To the best of our knowledge, this type of hybridization and mathematical reformulation have not been explored in the context of EM and HMM training. The proposed approaches have been evaluated through comprehensive experiments to justify their effectiveness in signal modeling using the speech corpus: TIMIT. Experimental results show that proposed approaches obtain higher recognition accuracies than the EM algorithm and CEL-EM as well. PMID:24686310
Lee, Youngrok [Iowa State Univ., Ames, IA (United States)
2013-05-15
Heterogeneity exists on a data set when samples from di erent classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed to homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating nite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of nite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, while not completely unlabeled there is only imprecise information about class values. In this study we propose new EM algorithms that take advantages of using such partial labels, and thus incorporate more information than traditional EM algorithms. We particularly propose four variants of the EM algorithm named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating substantial amount of partially labeled data can be highly signi cant. We also showed model selection based on AIC values fairly works to select the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by Surveillance, Epidemiology and End Results (SEER) program showed a superiority of EM-CPCML to not only the other proposed EM algorithms but also conventional supervised, unsupervised and semi-supervised learning algorithms.
Identification of point correspondences between shapes is required for statistical analysis of organ shapes differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest points (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depe nds on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of ''generalization ability'' and ''specificity'', the estimates were very satisfactory
Karakatsanis, Nicolas A; Casey, Michael E; Lodge, Martin A; Rahmim, Arman; Zaidi, Habib
2016-08-01
Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate K i as a complimentary metric to the semi-quantitative standardized uptake value (SUV). The resulting K i images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit K i bias of sPatlak analysis at regions with non-negligible (18)F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published (18)F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced K i target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in K i % bias and improved TBR were
Hufnagel, Heike [Institut National de Recherche en Informatique et en Automatique (INRIA), Asclepios Project, Sophia Antipolis (France); University Medical Center Hamburg-Eppendorf, Department of Medical Informatics, Hamburg (Germany); Pennec, Xavier; Ayache, Nicholas [Institut National de Recherche en Informatique et en Automatique (INRIA), Asclepios Project, Sophia Antipolis (France); Ehrhardt, Jan; Handels, Heinz [University Medical Center Hamburg-Eppendorf, Department of Medical Informatics, Hamburg (Germany)
2008-03-15
Identification of point correspondences between shapes is required for statistical analysis of organ shapes differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest points (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depe nds on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and &apos
2014-01-01
Background Recovering individual genomes from metagenomic datasets allows access to uncultivated microbial populations that may have important roles in natural and engineered ecosystems. Understanding the roles of these uncultivated populations has broad application in ecology, evolution, biotechnology and medicine. Accurate binning of assembled metagenomic sequences is an essential step in recovering the genomes and understanding microbial functions. Results We have developed a binning algorithm, MaxBin, which automates the binning of assembled metagenomic scaffolds using an expectation-maximization algorithm after the assembly of metagenomic sequencing reads. Binning of simulated metagenomic datasets demonstrated that MaxBin had high levels of accuracy in binning microbial genomes. MaxBin was used to recover genomes from metagenomic data obtained through the Human Microbiome Project, which demonstrated its ability to recover genomes from real metagenomic datasets with variable sequencing coverages. Application of MaxBin to metagenomes obtained from microbial consortia adapted to grow on cellulose allowed genomic analysis of new, uncultivated, cellulolytic bacterial populations, including an abundant myxobacterial population distantly related to Sorangium cellulosum that possessed a much smaller genome (5 MB versus 13 to 14 MB) but has a more extensive set of genes for biomass deconstruction. For the cellulolytic consortia, the MaxBin results were compared to binning using emergent self-organizing maps (ESOMs) and differential coverage binning, demonstrating that it performed comparably to these methods but had distinct advantages in automation, resolution of related genomes and sensitivity. Conclusions The automatic binning software that we developed successfully classifies assembled sequences in metagenomic datasets into recovered individual genomes. The isolation of dozens of species in cellulolytic microbial consortia, including a novel species of
Hove, Jens D; Rasmussen, Rune; Freiberg, Jacob; Holm, Søren; Kelbaek, Henning; Kofoed, Klaus
2008-01-01
BACKGROUND: The purpose of this study was to investigate the quantitative properties of ordered-subset expectation maximization (OSEM) on kinetic modeling with nitrogen 13 ammonia compared with filtered backprojection (FBP) in healthy subjects. METHODS AND RESULTS: Cardiac N-13 ammonia positron e...
Martin, Matthew J; Smelser, Amanda M; Holzwarth, George
2016-04-01
Many organelles and vesicles in live cells move in a start-stop manner when observed for ~10 s by optical microscopy. Changes in velocity and directional persistence of such particles are a potentially rich source of insight into the mechanisms leading to the start and stop states. Unbiased assessment of the most probable number of states, the properties of each state, and the most probable state for the particle at each moment can be accomplished by variational Bayesian methods combined with a hidden Markov model and a Gaussian mixture model. Our track analysis method, "vbTRACK", applied this combination of methods to particle velocity v or changes in the direction of travel evaluated from simulated tracks and from tracks of peroxisomes in live cells. When tested with numerical data, vbTRACK reliably determined the number of states, the mean and variance of the velocity or the direction of travel for each state, and the most probable state during each frame. When applied to the tracks of peroxisomes in live cells, some tracks separated into two states, one with high velocity and directionality, the other approximately Brownian. Other tracks of particles in live cells separated into several diffusive states with distinct diffusion constants. PMID:26538332
A Bayesian Approach to Interactive Retrieval
Tague, Jean M.
1973-01-01
A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…
The maximal number of elementary particles which could be expected to be found within a modestly extended energy scale of the standard model was found using various methods to be N = 69. In particular using E-infinity theory the present Author found the exact transfinite expectation value to be =α-baro/2≅69 where α-baro=137.082039325 is the exact inverse fine structure constant. In the present work we show among other things how to derive the exact integer value 69 from the exceptional Lie symmetry groups hierarchy. It is found that the relevant number is given by dim H = 69 where H is the maximal compact subspace of E7(-5) so that N = dim H = 69 while dim E7 = 133
Floberg, J M; Holden, J.E.
2013-01-01
We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines 4-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without intro...
Shahira M. Habashy
2012-03-01
Full Text Available The identification of RNA secondary structures has been among the most exciting recent developments in biology and medical science. Prediction of RNA secondary structure is a fundamental problem in computational structural biology. For several decades, free energy minimization has been the most popular method for prediction from a single sequence. It is based on a set of empirical free energy change parameters derived from experiments using a nearest-neighbor model. Accurate prediction of RNA secondary structure from the base sequence is an unsolved computational challenge. The accuracy of predictions made by free energy minimization is limited by the quality of the energy parameters in the underlying free energy model. More recently, stochastic context-free grammars (SCFGs have emerged as an alternative probabilistic methodology for modeling RNA structure. Unlike physics-based methods, which rely on thousands of experimentally -measured thermodynamic parameters, SCFGs use fully-automated statistical learning algorithms to derive model parameters. This paper proposes a new algorithm that computes base pairing pattern for RNA molecule. Complex internal structures in RNA are fully taken into account. It supports the calculation of stochastic context-free grammars (SCFGs, and equilibrium concentrations of duplex structures. This new algorithm is compared with dynamic programming benchmark mfold and algorithms (Tfold, and MaxExpect. The results showed that the proposed algorithm achieved better performance with respect to sensitivity and positive predictive value.
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed. PMID:25416456
Jeon, Tae Joo; Lee, Jong Doo; Kim, Hee Joung; Kim, Myung Jin; Yoo, Hyung Sik [College of Medicine, Yonsei Univ., Seoul (Korea, Republic of)
2000-07-01
The aims of this study is to validate the usefulness of ordered subset expectation maximization (OSEM) comparing with filtered back projection in terms of diagnostic ability for hepatocellular carcinoma (HCC). The data of fifty seven patients with HCC and 62 patients with normal liver was reconstructed using both OSEM and FBP. Mean age of the patients group was 54.4{+-}1.5 year. All patient underwent whole body and liver scan after the injection of 10 mCi of (F-18)FDG using dedicated whole body PET camera (GE, Advance). Interpretation of PET images were performed by 3 observers with random exposure of normal and diseased cases. Receiver operator characteristic (ROC) study was used for the validation of results. The area of ROC curve. Az was represented as below and this results revealed statistical differences (p<0.05). In PET studies of patients with HCC, OSEM showed better results that those of conventional FBP in terms of lesion detectability.
We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications. (paper)
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 106). For Compton camera, especially with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm which corresponds to a realistic nuclear medicine environment
Jeon, Tae Joo; Kim, Hee Joung; Bong, Jung Kyun; Lee, Jong Doo [College of Medicine, Yonsei Univ., Seoul (Korea, Republic of)
2000-07-01
Odered subset expectation maximization (OSEM) is a new iterative reconstruction technique for tomographic images that can reduce the reconstruction time comparing with conventional iteration method. We adopted this method of RBC blood pool SPECT and tried to validate the usefulness of OSEM in detection of liver hemangioma comparing with filtered back projection (FBP). A 64 projection SPECT study was acquired over 360 .deg. C by dual-head cameras after the injection of 750MBq of {sup 99m}Tc-RBC. OSEM was performed with various condition of subset (1,2,4,8,16 and 32) and iteration number (1,2,4,8 and 16) to obtain the best set for lesion detection. OSEM underwent in 17 lesions of 15 patients with liver hemangioma and compared with FBP images. Two nuclear medicine physicians reviewed these results independently. Best set for images was 4 iteration and 16 subset. In general, OSEM revealed more homogeneous images than FBP. Eighty-eight percent (15/17) of OSEM images were superior or equal to FBP for anatomic resolution. According to the blind review of images 70.5% (12/17) of OSEM was better in contrast (4/17), anatomic detail (4/17) and both (2/17). Two small lesions were detected by OSEM only and another 2 small lesions were failed to depict in both methods. Remaining 3 lesions revealed no difference in image quality. OSEM can provide better image quality as well as better results in detection of liver hemangioma than conventional FBP technique.
2014-01-01
Background Population genetics and association studies usually rely on a set of known variable sites that are then genotyped in subsequent samples, because it is easier to genotype than to discover the variation. This is also true for structural variation detected from sequence data. However, the genotypes at known variable sites can only be inferred with uncertainty from low coverage data. Thus, statistical approaches that infer genotype likelihoods, test hypotheses, and estimate population parameters without requiring accurate genotypes are becoming popular. Unfortunately, the current implementations of these methods are intended to analyse only single nucleotide and short indel variation, and they usually assume that the two alleles in a heterozygous individual are sampled with equal probability. This is generally false for structural variants detected with paired ends or split reads. Therefore, the population genetics of structural variants cannot be studied, unless a painstaking and potentially biased genotyping is performed first. Results We present svgem, an expectation-maximization implementation to estimate allele and genotype frequencies, calculate genotype posterior probabilities, and test for Hardy-Weinberg equilibrium and for population differences, from the numbers of times the alleles are observed in each individual. Although applicable to single nucleotide variation, it aims at bi-allelic structural variation of any type, observed by either split reads or paired ends, with arbitrarily high allele sampling bias. We test svgem with simulated and real data from the 1000 Genomes Project. Conclusions svgem makes it possible to use low-coverage sequencing data to study the population distribution of structural variants without having to know their genotypes. Furthermore, this advance allows the combined analysis of structural and nucleotide variation within the same genotype-free statistical framework, thus preventing biases introduced by genotype
Barbee, David L; Holden, James E; Nickles, Robert J; Jeraj, Robert [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, 1111 Highland Ave, Madison, WI 53705 (United States); Flynn, Ryan T [Department of Radiation Oncology, University of Iowa Hospitals and Clinics, 200 Hawkins Drive, Iowa City, IA 52245 (United States)], E-mail: barbee.david@gmail.com
2010-01-07
Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of {+-}30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV
Rutledge John
2011-05-01
Full Text Available Abstract Background Standard mean imputation for missing values in the Western Ontario and Mc Master (WOMAC Osteoarthritis Index limits the use of collected data and may lead to bias. Probability model-based imputation methods overcome such limitations but were never before applied to the WOMAC. In this study, we compare imputation results for the Expectation Maximization method (EM and the mean imputation method for WOMAC in a cohort of total hip replacement patients. Methods WOMAC data on a consecutive cohort of 2062 patients scheduled for surgery were analyzed. Rates of missing values in each of the WOMAC items from this large cohort were used to create missing patterns in the subset of patients with complete data. EM and the WOMAC's method of imputation are then applied to fill the missing values. Summary score statistics for both methods are then described through box-plot and contrasted with the complete case (CC analysis and the true score (TS. This process is repeated using a smaller sample size of 200 randomly drawn patients with higher missing rate (5 times the rates of missing values observed in the 2062 patients capped at 45%. Results Rate of missing values per item ranged from 2.9% to 14.5% and 1339 patients had complete data. Probability model-based EM imputed a score for all subjects while WOMAC's imputation method did not. Mean subscale scores were very similar for both imputation methods and were similar to the true score; however, the EM method results were more consistent with the TS after simulation. This difference became more pronounced as the number of items in a subscale increased and the sample size decreased. Conclusions The EM method provides a better alternative to the WOMAC imputation method. The EM method is more accurate and imputes data to create a complete data set. These features are very valuable for patient-reported outcomes research in which resources are limited and the WOMAC score is used in a multivariate
Vanhove, C.; Franken, P.R.; Everaert, H.; Bossuyt, A. [Div. of Nuclear Medicine, University Hospital, Free University of Brussels (AZ VUB), Brussels (Belgium); Defrise, M.; Deconinck, F. [Division of Experimental Medical Imaging, Free University of Brussels (VUB), Brussels (Belgium)
2000-02-01
Pinhole single-photon emission tomography (SPET) has been proposed to improve the trade-off between sensitivity and resolution for small organs located in close proximity to the pinhole aperture. This technique is hampered by artefacts in the non-central slices. These artefacts are caused by truncation and by the fact that the pinhole SPET data collected in a circular orbit do not contain sufficient information for exact reconstruction. The ordered subsets expectation maximization (OS-EM) algorithm is a potential solution to these problems. In this study a three-dimensional OS-EM algorithm was implemented for data acquired on a single-head gamma camera equipped with a pinhole collimator (PH OS-EM). The aim of this study was to compare the PH OS-EM algorithm with the filtered back-projection algorithm of Feldkamp, Davis and Kress (FDK) and with the conventional parallel-hole geometry as a whole, using a line source phantom, Picker's thyroid phantom and a phantom mimicking the human cervical column. Correction for the angular dependency of the sensitivity in the pinhole geometry was based on a uniform flood acquisition. The projection data were shifted according to the measured centre of rotation. No correction was made for attenuation, scatter or distance-dependent camera resolution. The resolution measured with the line source phantom showed a significant improvement with PH OS-EM as compared with FDK, especially in the axial direction. Using Picker's thyroid phantom, one iteration with eight subsets was sufficient to obtain images with similar noise levels in uniform regions of interest to those obtained with the FDK algorithm. With these parameters the reconstruction time was 2.5 times longer than for the FDK method. Furthermore, there was a reduction in the artefacts caused by the circular orbit SPET acquisition. The images obtained from the phantom mimicking the human cervical column indicated that the improvement in image quality with PH OS-EM is
Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated
From Wald to Savage: homo economicus becomes a Bayesian statistician
Giocoli, Nicola
2011-01-01
Bayesian rationality is the paradigm of rational behavior in neoclassical economics. A rational agent in an economic model is one who maximizes her subjective expected utility and consistently revises her beliefs according to Bayes’s rule. The paper raises the question of how, when and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is far from trivial and of great historiographic im...
Brunnermeier, Markus K.; Jonathan A. Parker
2004-01-01
This Paper introduces a tractable, structural model of subjective beliefs. Forward-looking agents care about expected future utility flows, and hence have higher current felicity if they believe that better outcomes are more likely. On the other hand, biased expectations lead to poorer decisions and worse realized outcomes on average. Optimal expectations balance these forces by maximizing average felicity. A small bias in beliefs typically leads to first-order gains due to increased anticipa...
Bayesian state estimation using generalized coordinates
Balaji, Bhashyam; Friston, Karl
2011-06-01
This paper reviews a simple solution to the continuous-discrete Bayesian nonlinear state estimation problem that has been proposed recently. The key ideas are analytic noise processes, variational Bayes, and the formulation of the problem in terms of generalized coordinates of motion. Some of the algorithms, specifically dynamic expectation maximization and variational filtering, have been shown to outperform existing approaches like extended Kalman filtering and particle filtering. A pedagogical review of the theoretical formulation is presented, with an emphasis on concepts that are not as widely known in the filtering literature. We illustrate the appliction of these concepts using a numerical example.
A Bayesian Probabilistic Framework for Rain Detection
Chen Yao
2014-06-01
Full Text Available Heavy rain deteriorates the video quality of outdoor imaging equipments. In order to improve video clearness, image-based and sensor-based methods are adopted for rain detection. In earlier literature, image-based detection methods fall into spatio-based and temporal-based categories. In this paper, we propose a new image-based method by exploring spatio-temporal united constraints in a Bayesian framework. In our framework, rain temporal motion is assumed to be Pathological Motion (PM, which is more suitable to time-varying character of rain steaks. Temporal displaced frame discontinuity and spatial Gaussian mixture model are utilized in the whole framework. Iterated expectation maximization solving method is taken for Gaussian parameters estimation. Pixels state estimation is finished by an iterated optimization method in Bayesian probability formulation. The experimental results highlight the advantage of our method in rain detection.
Bayesian Network Based Fault Prognosis via Bond Graph Modeling of High-Speed Railway Traction Device
Yunkai Wu
2015-01-01
component-level faults accurately for a high-speed railway traction system, a fault prognosis approach via Bayesian network and bond graph modeling techniques is proposed. The inherent structure of a railway traction system is represented by bond graph model, based on which a multilayer Bayesian network is developed for fault propagation analysis and fault prediction. For complete and incomplete data sets, two different parameter learning algorithms such as Bayesian estimation and expectation maximization (EM algorithm are adopted to determine the conditional probability table of the Bayesian network. The proposed prognosis approach using Pearl’s polytree propagation algorithm for joint probability reasoning can predict the failure probabilities of leaf nodes based on the current status of root nodes. Verification results in a high-speed railway traction simulation system can demonstrate the effectiveness of the proposed approach.
Razali, Azhani Mohd; Abdullah, Jaafar
2015-04-01
Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.
Razali, Azhani Mohd, E-mail: azhani@nuclearmalaysia.gov.my; Abdullah, Jaafar, E-mail: jaafar@nuclearmalaysia.gov.my [Plant Assessment Technology (PAT) Group, Industrial Technology Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang (Malaysia)
2015-04-29
Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.
Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm
Sparse Bayesian learning in ISAR tomography imaging
SU Wu-ge; WANG Hong-qiang; DENG Bin; WANG Rui-jun; QIN Yu-liang
2015-01-01
Inverse synthetic aperture radar (ISAR) imaging can be regarded as a narrow-band version of the computer aided tomography (CT). The traditional CT imaging algorithms for ISAR, including the polar format algorithm (PFA) and the convolution back projection algorithm (CBP), usually suffer from the problem of the high sidelobe and the low resolution. The ISAR tomography image reconstruction within a sparse Bayesian framework is concerned. Firstly, the sparse ISAR tomography imaging model is established in light of the CT imaging theory. Then, by using the compressed sensing (CS) principle, a high resolution ISAR image can be achieved with limited number of pulses. Since the performance of existing CS-based ISAR imaging algorithms is sensitive to the user parameter, this makes the existing algorithms inconvenient to be used in practice. It is well known that the Bayesian formalism of recover algorithm named sparse Bayesian learning (SBL) acts as an effective tool in regression and classification, which uses an efficient expectation maximization procedure to estimate the necessary parameters, and retains a preferable property of thel0-norm diversity measure. Motivated by that, a fully automated ISAR tomography imaging algorithm based on SBL is proposed. Experimental results based on simulated and electromagnetic (EM) data illustrate the effectiveness and the superiority of the proposed algorithm over the existing algorithms.
K B Athreya
2009-09-01
It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf that satisfy $\\int fh_id_=_i$ for $i=1,2,\\ldots,\\ldots k$ the maximizer of entropy is an $f_0$ that is proportional to $\\exp(\\sum c_i h_i)$ for some choice of $c_i$. An extension of this to a continuum of constraints and many examples are presented.
Lesaffre, Emmanuel
2012-01-01
The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd
Sparse Bayesian Learning for DOA Estimation with Mutual Coupling
Jisheng Dai
2015-10-01
Full Text Available Sparse Bayesian learning (SBL has given renewed interest to the problem of direction-of-arrival (DOA estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs. Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.
Kadane, Joseph B.
2009-01-01
This paper reviews the maxims used by three early modern fictional detectives: Monsieur Lecoq, C. Auguste Dupin and Sherlock Holmes. It find similarities between these maxims and Bayesian thought. Poe's Dupin uses ideas very similar to Bayesian game theory. Sherlock Holmes' statements also show thought patterns justifiable in Bayesian terms.
Kadane, Joseph B
2010-01-01
This paper reviews the maxims used by three early modern fictional detectives: Monsieur Lecoq, C. Auguste Dupin and Sherlock Holmes. It find similarities between these maxims and Bayesian thought. Poe's Dupin uses ideas very similar to Bayesian game theory. Sherlock Holmes' statements also show thought patterns justifiable in Bayesian terms.
Draper, D.
2001-01-01
© 2012 Springer Science+Business Media, LLC. All rights reserved. Article Outline: Glossary Definition of the Subject and Introduction The Bayesian Statistical Paradigm Three Examples Comparison with the Frequentist Statistical Paradigm Future Directions Bibliography
Regularized variational Bayesian learning of echo state networks with delay&sum readout.
Shutin, Dmitriy; Zechner, Christoph; Kulkarni, Sanjeev R; Poor, H Vincent
2012-04-01
In this work, a variational Bayesian framework for efficient training of echo state networks (ESNs) with automatic regularization and delay&sum (D&S) readout adaptation is proposed. The algorithm uses a classical batch learning of ESNs. By treating the network echo states as fixed basis functions parameterized with delay parameters, we propose a variational Bayesian ESN training scheme. The variational approach allows for a seamless combination of sparse Bayesian learning ideas and a variational Bayesian space-alternating generalized expectation-maximization (VB-SAGE) algorithm for estimating parameters of superimposed signals. While the former method realizes automatic regularization of ESNs, which also determines which echo states and input signals are relevant for "explaining" the desired signal, the latter method provides a basis for joint estimation of D&S readout parameters. The proposed training algorithm can naturally be extended to ESNs with fixed filter neurons. It also generalizes the recently proposed expectation-maximization-based D&S readout adaptation method. The proposed algorithm was tested on synthetic data prediction tasks as well as on dynamic handwritten character recognition. PMID:22168555
雷明; 韩崇昭
2006-01-01
A novel method under the interactive multiple model (IMM) filtering framework is presented in this paper, in which the expectation-maximization (EM) algorithm is used to identify the process noise covariance Q online. For the existing IMM filtering theory, the matrix Q is determined by means of design experience, but Q is actually changed with the state of the maneuvering target.Meanwhile it is severely influenced by the environment around the target, i.e., it is a variable of time. Therefore, the experiential covariance Q can not represent the influence of state noise in the maneuvering process exactly. Firstly, it is assumed that the evolved state and the initial conditions of the system can be modeled by using Gaussian distribution, although the dynamic system is of a nonlinear measurement equation, and furthermore the EM algorithm based on IMM filtering with the Q identification online is proposed. Secondly, the truncated error analysis is performed. Finally,the Monte Carlo simulation results are given to show that the proposed algorithm outperforms the existing algorithms and the tracking precision for the maneuvering targets is improved efficiently.
侯涛; 汪源源; 郭翌
2011-01-01
In view of inherent speckle noise in medical images, a de-speckling method was proposed based on an expectation maximization (EM) framework. Firstly, the real part was extracted from the Inphase/Quadrature (I/Q)ultrasound image. Then, the point spread function was blindly estimated from the real image. Lastly, based on the EM framework, an iterative algorithm alternating between Wiener filtering and anisotropic diffusion was exploited to produce de-speckled images. The comparison experiment was carried out on both simulated and in vitro ultrasound images using the proposed method and exited ones, respectively. It was shown that the proposed method averagely improved the speckle-signal-to-noise Ratio (S-SNR) and the edge preservation index (β) of I/Q images by the factor of 1.94 and 7.52. Meanwhile, it averagely reduced the normalized mean-squared error (NMSE) by the factor of 3.95. The simulation and in vitro results indicated that the proposed method had a better overall performance than exited ones.%针对医学超声图像斑点噪声,提出一种基于期望最大化(EM)框架的去斑算法.先从超声I/Q图像中提取实部;然后从该实部图像中"盲估计"出系统的点扩散函数;最后利用EM算法,在维纳滤波和各向异性扩散间进行迭代,从而获得去斑后的超声图像.对不同信噪比的仿真图像和实际图像采用本文方法和现有方法进行比较实验,结果表明,采用本文方法可将超声图像的斑点信噪比和边界保留指数平均提高1.94和7.52倍,归一化均方差平均降低3.95倍,性能指标优于现有方法.
Kirstein, Roland
2005-01-01
This paper presents a modification of the inspection game: The ?Bayesian Monitoring? model rests on the assumption that judges are interested in enforcing compliant behavior and making correct decisions. They may base their judgements on an informative but imperfect signal which can be generated costlessly. In the original inspection game, monitoring is costly and generates a perfectly informative signal. While the inspection game has only one mixed strategy equilibrium, three Perfect Bayesia...
On maximal massive 3D supergravity
Bergshoeff, Eric A; Rosseel, Jan [Centre for Theoretical Physics, University of Groningen, Nijenborgh 4, 9747 AG Groningen (Netherlands); Hohm, Olaf [Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Townsend, Paul K, E-mail: E.A.Bergshoeff@rug.n, E-mail: ohohm@mit.ed, E-mail: j.rosseel@rug.n, E-mail: P.K.Townsend@damtp.cam.ac.u [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA (United Kingdom)
2010-12-07
We construct, at the linearized level, the three-dimensional (3D) N=4 supersymmetric 'general massive supergravity' and the maximally supersymmetric N=8 'new massive supergravity'. We also construct the maximally supersymmetric linearized N=7 topologically massive supergravity, although we expect N=6 to be maximal at the nonlinear level.
On Maximal Massive 3D Supergravity
Bergshoeff, Eric A; Rosseel, Jan; Townsend, Paul K
2010-01-01
We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.
On maximal massive 3D supergravity
We construct, at the linearized level, the three-dimensional (3D) N=4 supersymmetric 'general massive supergravity' and the maximally supersymmetric N=8 'new massive supergravity'. We also construct the maximally supersymmetric linearized N=7 topologically massive supergravity, although we expect N=6 to be maximal at the nonlinear level.
Profit Maximization over Social Networks
Lu, Wei; Lakshmanan, Laks V. S.
2012-01-01
Influence maximization is the problem of finding a set of influential users in a social network such that the expected spread of influence under a certain propagation model is maximized. Much of the previous work has neglected the important distinction between social influence and actual product adoption. However, as recognized in the management science literature, an individual who gets influenced by social acquaintances may not necessarily adopt a product (or technology), due, e.g., to mone...
Asplund, Björn Marcus
2007-01-01
This paper aims at testing the maintained assumption that firms' objective is to maximize the expected net present value (ENPV) of profits. The idea is to examine pricing behaviour of a monopolist facing a dynamic demand where current sales influence future demand. Empirically, I estimate an Euler equation implied by maximization of ENPV of profits on data from the Swedish Tobacco Monopoly's sales of moist snuff (an addictive tobacco product) during the period 1917-1959. It is found that the ...
Stochastic Expectation Propagation
Li, Yingzhen; Hernandez-Lobato, Jose Miguel; Turner, Richard E.
2015-01-01
Expectation propagation (EP) is a deterministic approximation algorithm that is often used to perform approximate Bayesian parameter learning. EP approximates the full intractable posterior distribution through a set of local approximations that are iteratively refined for each datapoint. EP can offer analytic and computational advantages over other approximations, such as Variational Inference (VI), and is the method of choice for a number of models. The local nature of EP appears to make it...
Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel
2013-01-01
Probability as an Alternative to Boolean LogicWhile logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain DataEmphasizing probability as an alternative to Boolean
Hartelius, Karsten; Carstensen, Jens Michael
2003-01-01
A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...... represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid...... nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...
BayesLCA: An R Package for Bayesian Latent Class Analysis
Arthur White
2014-11-01
Full Text Available The BayesLCA package for R provides tools for performing latent class analysis within a Bayesian setting. Three methods for fitting the model are provided, incorporating an expectation-maximization algorithm, Gibbs sampling and a variational Bayes approximation. The article briefly outlines the methodology behind each of these techniques and discusses some of the technical difficulties associated with them. Methods to remedy these problems are also described. Visualization methods for each of these techniques are included, as well as criteria to aid model selection.
Zhang, Le; Karakci, Ata; Korotkov, Andrei; Sutter, P M; Timbie, Peter T; Tucker, Gregory S; Wandelt, Benjamin D
2016-01-01
We present in this paper a new Bayesian semi-blind approach for foreground removal in observations of the 21-cm signal with interferometers. The technique, which we call HIEMICA (HI Expectation-Maximization Independent Component Analysis), is an extension of the Independent Component Analysis (ICA) technique developed for two-dimensional (2D) CMB maps to three-dimensional (3D) 21-cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from signal based on the diversity of their power spectra. Only relying on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21-cm signal and, the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21-cm intensity mapping observations. Based on ...
Lin, Lin; Chan, Cliburn; West, Mike
2016-01-01
We discuss the evaluation of subsets of variables for the discriminative evidence they provide in multivariate mixture modeling for classification. The novel development of Bayesian classification analysis presented is partly motivated by problems of design and selection of variables in biomolecular studies, particularly involving widely used assays of large-scale single-cell data generated using flow cytometry technology. For such studies and for mixture modeling generally, we define discriminative analysis that overlays fitted mixture models using a natural measure of concordance between mixture component densities, and define an effective and computationally feasible method for assessing and prioritizing subsets of variables according to their roles in discrimination of one or more mixture components. We relate the new discriminative information measures to Bayesian classification probabilities and error rates, and exemplify their use in Bayesian analysis of Dirichlet process mixture models fitted via Markov chain Monte Carlo methods as well as using a novel Bayesian expectation-maximization algorithm. We present a series of theoretical and simulated data examples to fix concepts and exhibit the utility of the approach, and compare with prior approaches. We demonstrate application in the context of automatic classification and discriminative variable selection in high-throughput systems biology using large flow cytometry datasets. PMID:26040910
A Nonparametric Bayesian Approach For Emission Tomography Reconstruction
We introduce a PET reconstruction algorithm following a nonparametric Bayesian (NPB) approach. In contrast with Expectation Maximization (EM), the proposed technique does not rely on any space discretization. Namely, the activity distribution--normalized emission intensity of the spatial poisson process--is considered as a spatial probability density and observations are the projections of random emissions whose distribution has to be estimated. This approach is nonparametric in the sense that the quantity of interest belongs to the set of probability measures on Rk (for reconstruction in k-dimensions) and it is Bayesian in the sense that we define a prior directly on this spatial measure. In this context, we propose to model the nonparametric probability density as an infinite mixture of multivariate normal distributions. As a prior for this mixture we consider a Dirichlet Process Mixture (DPM) with a Normal-Inverse Wishart (NIW) model as base distribution of the Dirichlet Process. As in EM-family reconstruction, we use a data augmentation scheme where the set of hidden variables are the emission locations for each observed line of response in the continuous object space. Thanks to the data augmentation, we propose a Markov Chain Monte Carlo (MCMC) algorithm (Gibbs sampler) which is able to generate draws from the posterior distribution of the spatial intensity. A difference with EM is that one step of the Gibbs sampler corresponds to the generation of emission locations while only the expected number of emissions per pixel/voxel is used in EM. Another key difference is that the estimated spatial intensity is a continuous function such that there is no need to compute a projection matrix. Finally, draws from the intensity posterior distribution allow the estimation of posterior functionnals like the variance or confidence intervals. Results are presented for simulated data based on a 2D brain phantom and compared to Bayesian MAP-EM
A Nonparametric Bayesian Approach For Emission Tomography Reconstruction
Barat, Éric; Dautremer, Thomas
2007-11-01
We introduce a PET reconstruction algorithm following a nonparametric Bayesian (NPB) approach. In contrast with Expectation Maximization (EM), the proposed technique does not rely on any space discretization. Namely, the activity distribution—normalized emission intensity of the spatial poisson process—is considered as a spatial probability density and observations are the projections of random emissions whose distribution has to be estimated. This approach is nonparametric in the sense that the quantity of interest belongs to the set of probability measures on Rk (for reconstruction in k-dimensions) and it is Bayesian in the sense that we define a prior directly on this spatial measure. In this context, we propose to model the nonparametric probability density as an infinite mixture of multivariate normal distributions. As a prior for this mixture we consider a Dirichlet Process Mixture (DPM) with a Normal-Inverse Wishart (NIW) model as base distribution of the Dirichlet Process. As in EM-family reconstruction, we use a data augmentation scheme where the set of hidden variables are the emission locations for each observed line of response in the continuous object space. Thanks to the data augmentation, we propose a Markov Chain Monte Carlo (MCMC) algorithm (Gibbs sampler) which is able to generate draws from the posterior distribution of the spatial intensity. A difference with EM is that one step of the Gibbs sampler corresponds to the generation of emission locations while only the expected number of emissions per pixel/voxel is used in EM. Another key difference is that the estimated spatial intensity is a continuous function such that there is no need to compute a projection matrix. Finally, draws from the intensity posterior distribution allow the estimation of posterior functionnals like the variance or confidence intervals. Results are presented for simulated data based on a 2D brain phantom and compared to Bayesian MAP-EM.
Maximizing System Throughput by Cooperative Sensing in Cognitive Radio Networks
Li, Shuang; Ekici, Eylem; Shroff, Ness
2011-01-01
Cognitive Radio Networks allow unlicensed users to opportunistically access the licensed spectrum without causing disruptive interference to the primary users (PUs). One of the main challenges in CRNs is the ability to detect PU transmissions. Recent works have suggested the use of secondary user (SU) cooperation over individual sensing to improve sensing accuracy. In this paper, we consider a CRN consisting of a single PU and multiple SUs to study the problem of maximizing the total expected system throughput. We propose a Bayesian decision rule based algorithm to solve the problem optimally with a constant time complexity. To prioritize PU transmissions, we re-formulate the throughput maximization problem by adding a constraint on the PU throughput. The constrained optimization problem is shown to be NP-hard and solved via a greedy algorithm with pseudo-polynomial time complexity that achieves strictly greater than 1/2 of the optimal solution. We also investigate the case for which a constraint is put on th...
Bayesian artificial intelligence
Korb, Kevin B
2010-01-01
Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology.New to the Second EditionNew chapter on Bayesian network classifiersNew section on object-oriente
Jensen, Finn Verner; Nielsen, Thomas Dyhre
2016-01-01
Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical models are Bayesian networks. The structural part of a Bayesian graphical model is a graph consisting of nodes and...... largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...
Bayesian Network--Response Regression
WANG, LU; Durante, Daniele; Dunson, David B.
2016-01-01
There is an increasing interest in learning how human brain networks vary with continuous traits (e.g., personality, cognitive abilities, neurological disorders), but flexible procedures to accomplish this goal are limited. We develop a Bayesian semiparametric model, which combines low-rank factorizations and Gaussian process priors to allow flexible shifts of the conditional expectation for a network-valued random variable across the feature space, while including subject-specific random eff...
Bayesian Nonparametric Clustering for Positive Definite Matrices.
Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos
2016-05-01
Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well-known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to the Euclidean geometry, rather belongs to a curved Riemannian manifold,existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms. PMID:27046838
Optimal Bayesian Experimental Design for Combustion Kinetics
Huan, Xun
2011-01-04
Experimental diagnostics play an essential role in the development and refinement of chemical kinetic models, whether for the combustion of common complex hydrocarbons or of emerging alternative fuels. Questions of experimental design—e.g., which variables or species to interrogate, at what resolution and under what conditions—are extremely important in this context, particularly when experimental resources are limited. This paper attempts to answer such questions in a rigorous and systematic way. We propose a Bayesian framework for optimal experimental design with nonlinear simulation-based models. While the framework is broadly applicable, we use it to infer rate parameters in a combustion system with detailed kinetics. The framework introduces a utility function that reflects the expected information gain from a particular experiment. Straightforward evaluation (and maximization) of this utility function requires Monte Carlo sampling, which is infeasible with computationally intensive models. Instead, we construct a polynomial surrogate for the dependence of experimental observables on model parameters and design conditions, with the help of dimension-adaptive sparse quadrature. Results demonstrate the efficiency and accuracy of the surrogate, as well as the considerable effectiveness of the experimental design framework in choosing informative experimental conditions.
A new approach for Bayesian model averaging
TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun
2012-01-01
Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models.However,successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble.Two methods,namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms,are widely used for BMA model training.Both methods have their own respective strengths and weaknesses.In this paper,we first modify the BMA log-likelihood function with the aim of removing the additional limitation that requires that the BMA weights add to one,and then use a limited memory quasi-Newtonian algorithm for solving the nonlinear optimization problem,thereby formulating a new approach for BMA (referred to as BMA-BFGS).Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy,and that both are superior to the EM algorithm.On the other hand,the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B
2013-01-01
FUNDAMENTALS OF BAYESIAN INFERENCEProbability and InferenceSingle-Parameter Models Introduction to Multiparameter Models Asymptotics and Connections to Non-Bayesian ApproachesHierarchical ModelsFUNDAMENTALS OF BAYESIAN DATA ANALYSISModel Checking Evaluating, Comparing, and Expanding ModelsModeling Accounting for Data Collection Decision AnalysisADVANCED COMPUTATION Introduction to Bayesian Computation Basics of Markov Chain Simulation Computationally Efficient Markov Chain Simulation Modal and Distributional ApproximationsREGRESSION MODELS Introduction to Regression Models Hierarchical Linear
Yuan, Ying; MacKinnon, David P.
2009-01-01
This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...
吴全珍
2011-01-01
The Idesia polycarpa Maxim. var. vestita Diels belongs to wild fallen - leaf arbor, and it can be used as the oil -producing and lumber -supplying species. The oil from its fruit can be used as the raw materials for medicine, chemical and plant energy industry. The fruit output of this tree is large, and the oil content is high. The lumber is suitable for the furniture manufacture and board house. In addition, this tree is the favorable choice for environmental greening. The developing and using course were reviewed,and its future was prospected. The scientific research on its reproduction, cultivation, picking, storage, oil extraction, oil refining and deep processing of the oil products should be strengthened. Particularly, it was important to effectively get rid of oil's bitterness and unpleasant smell for its development and usage.%毛叶山桐子是野生落叶乔木,油、材兼用树种.果实中的油脂组分属健康营养食用油,是医药、化工和植物油能源原料.果实产量、含油率都较高,其木材是制造家俱和木板房的良材,该树也是绿化环境的优良树种.回顾了我国开发利用毛叶山桐子的历程,展望它的发展,建议在大规模种植的同时加强繁殖、栽培、采摘、储存、油脂提取、油脂精炼及油脂产品深加工及副产品综合利用等科学研究工作.特别是创新有效地去除油脂不良气味工艺是开发利用毛叶山桐子是否成功的关键.
Bayesian Games with Intentions
Bjorndahl, Adam; Halpern, Joseph Y.; Pass, Rafael
2016-01-01
We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.
A new perspective to rational expectations: Maximin rational expectations equilibrium
Castro, Luciano I.; Pesce, Marialaura; Nicholas C. Yannelis
2010-01-01
We introduce a new notion of rational expectations equilibrium (REE) called maximin rational expectations equilibrium (MREE), which is based on the maximin expected utility (MEU) formulation. In particular, agents maximize maximin expected utility conditioned on their own private information and the information that the equilibrium prices generate. Maximin equilibrium allocations need not to be measurable with respect to the private information of each individual and with respect to the infor...
SAR imaging via iterative adaptive approach and sparse Bayesian learning
Xue, Ming; Santiago, Enrique; Sedehi, Matteo; Tan, Xing; Li, Jian
2009-05-01
We consider sidelobe reduction and resolution enhancement in synthetic aperture radar (SAR) imaging via an iterative adaptive approach (IAA) and a sparse Bayesian learning (SBL) method. The nonparametric weighted least squares based IAA algorithm is a robust and user parameter-free adaptive approach originally proposed for array processing. We show that it can be used to form enhanced SAR images as well. SBL has been used as a sparse signal recovery algorithm for compressed sensing. It has been shown in the literature that SBL is easy to use and can recover sparse signals more accurately than the l 1 based optimization approaches, which require delicate choice of the user parameter. We consider using a modified expectation maximization (EM) based SBL algorithm, referred to as SBL-1, which is based on a three-stage hierarchical Bayesian model. SBL-1 is not only more accurate than benchmark SBL algorithms, but also converges faster. SBL-1 is used to further enhance the resolution of the SAR images formed by IAA. Both IAA and SBL-1 are shown to be effective, requiring only a limited number of iterations, and have no need for polar-to-Cartesian interpolation of the SAR collected data. This paper characterizes the achievable performance of these two approaches by processing the complex backscatter data from both a sparse case study and a backhoe vehicle in free space with different aperture sizes.
Bayesian analysis for extreme climatic events: A review
Chu, Pao-Shin; Zhao, Xin
2011-11-01
This article reviews Bayesian analysis methods applied to extreme climatic data. We particularly focus on applications to three different problems related to extreme climatic events including detection of abrupt regime shifts, clustering tropical cyclone tracks, and statistical forecasting for seasonal tropical cyclone activity. For identifying potential change points in an extreme event count series, a hierarchical Bayesian framework involving three layers - data, parameter, and hypothesis - is formulated to demonstrate the posterior probability of the shifts throughout the time. For the data layer, a Poisson process with a gamma distributed rate is presumed. For the hypothesis layer, multiple candidate hypotheses with different change-points are considered. To calculate the posterior probability for each hypothesis and its associated parameters we developed an exact analytical formula, a Markov Chain Monte Carlo (MCMC) algorithm, and a more sophisticated reversible jump Markov Chain Monte Carlo (RJMCMC) algorithm. The algorithms are applied to several rare event series: the annual tropical cyclone or typhoon counts over the central, eastern, and western North Pacific; the annual extremely heavy rainfall event counts at Manoa, Hawaii; and the annual heat wave frequency in France. Using an Expectation-Maximization (EM) algorithm, a Bayesian clustering method built on a mixture Gaussian model is applied to objectively classify historical, spaghetti-like tropical cyclone tracks (1945-2007) over the western North Pacific and the South China Sea into eight distinct track types. A regression based approach to forecasting seasonal tropical cyclone frequency in a region is developed. Specifically, by adopting large-scale environmental conditions prior to the tropical cyclone season, a Poisson regression model is built for predicting seasonal tropical cyclone counts, and a probit regression model is alternatively developed toward a binary classification problem. With a non
Profit maximization mitigates competition
Dierker, Egbert; Grodal, Birgit
1996-01-01
We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price co...
Maximally incompatible quantum observables
The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.
Maximally incompatible quantum observables
Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)
2014-05-01
The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.
Attention in a bayesian framework
Whiteley, Louise Emma; Sahani, Maneesh
2012-01-01
include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental...... settings, where cues shape expectations about a small number of upcoming stimuli and thus convey "prior" information about clearly defined objects. While operationally consistent with the experiments it seeks to describe, this view of attention as prior seems to miss many essential elements of both its......The behavioral phenomena of sensory attention are thought to reflect the allocation of a limited processing resource, but there is little consensus on the nature of the resource or why it should be limited. Here we argue that a fundamental bottleneck emerges naturally within Bayesian models of...
Tactile length contraction as Bayesian inference.
Tong, Jonathan; Ngo, Vy; Goldreich, Daniel
2016-08-01
To perceive, the brain must interpret stimulus-evoked neural activity. This is challenging: The stochastic nature of the neural response renders its interpretation inherently uncertain. Perception would be optimized if the brain used Bayesian inference to interpret inputs in light of expectations derived from experience. Bayesian inference would improve perception on average but cause illusions when stimuli violate expectation. Intriguingly, tactile, auditory, and visual perception are all prone to length contraction illusions, characterized by the dramatic underestimation of the distance between punctate stimuli delivered in rapid succession; the origin of these illusions has been mysterious. We previously proposed that length contraction illusions occur because the brain interprets punctate stimulus sequences using Bayesian inference with a low-velocity expectation. A novel prediction of our Bayesian observer model is that length contraction should intensify if stimuli are made more difficult to localize. Here we report a tactile psychophysical study that tested this prediction. Twenty humans compared two distances on the forearm: a fixed reference distance defined by two taps with 1-s temporal separation and an adjustable comparison distance defined by two taps with temporal separation t ≤ 1 s. We observed significant length contraction: As t was decreased, participants perceived the two distances as equal only when the comparison distance was made progressively greater than the reference distance. Furthermore, the use of weaker taps significantly enhanced participants' length contraction. These findings confirm the model's predictions, supporting the view that the spatiotemporal percept is a best estimate resulting from a Bayesian inference process. PMID:27121574
Phenomenology of Maximal and Near-Maximal Lepton Mixing
González-Garciá, M Concepción; Nir, Yosef; Smirnov, Yu A
2001-01-01
We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other ($x$=tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter $\\epsilon\\equiv1-2\\sin^2\\theta_{ex}$ and quantify the present experimental status for $|\\epsilon|<0.3$. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for $10^{-8}$ eV$^2\\lsim\\Delta m^2\\lsim2\\times10^{-7}$ eV$^2$. In the mass ranges $\\Delta m^2\\gsim 1.5\\times10^{-5}$ eV$^2$ and $4\\times10^{-10}$ eV$^2\\lsim\\Delta m^2\\lsim2\\times10^{-7}$ eV$^2$ the full interval $|\\epsilon|<0.3$ is allowed within 4$\\sigma$. We suggest ways to measure $\\epsilon$ in future experiments. The observable that is most sensitive to $\\epsilon$ is the rate [NC]/[CC] in combination with the Day-Night asymmetry in the SNO detector. With theoretical and statistical uncertainties, the expected accuracy after 5 years is $\\Delta \\epsilon\\sim 0.07$. We also...
Raymond, Jack; Manoel, Andre; Opper, Manfred
2014-01-01
Variational inference is a powerful concept that underlies many iterative approximation algorithms; expectation propagation, mean-field methods and belief propagations were all central themes at the school that can be perceived from this unifying framework. The lectures of Manfred Opper introduce the archetypal example of Expectation Propagation, before establishing the connection with the other approximation methods. Corrections by expansion about the expectation propagation are then explain...
Nash, Ulrik William
2014-01-01
The concept of evolutionary expectations descends from cue learning psychology, synthesizing ideas on rational expectations with ideas on bounded rationality, to provide support for these ideas simultaneously. Evolutionary expectations are rational, but within cognitive bounds. Moreover, they are...... cognitive bounds will perceive business opportunities identically. In addition, because cues provide information about latent causal structures of the environment, changes in causality must be accompanied by changes in cognitive representations if adaptation is to be maintained. The concept of evolutionary...
On-line Bayesian System Identification
Romeres, Diego; Prando, Giulia; Pillonetto, Gianluigi; Chiuso, Alessandro
2016-01-01
We consider an on-line system identification setting, in which new data become available at given time steps. In order to meet real-time estimation requirements, we propose a tailored Bayesian system identification procedure, in which the hyper-parameters are still updated through Marginal Likelihood maximization, but after only one iteration of a suitable iterative optimization algorithm. Both gradient methods and the EM algorithm are considered for the Marginal Likelihood optimization. We c...
Karlson, Kristian Bernt
stratification, I argue that students facing significant educational transitions form their educational expectations by taking into account the foreseeable, yet inherently uncertain, consequences of potential educational pathways. This process of expectation formation, I posit, involves evaluations of the...... relation between the self and educational prospects; evaluations that are socially bounded in that students take their family's social position into consideration when forming their educational expectations. One important consequence of this learning process is that equally talented students tend to make...... different educational choices according to their family background. IEO thus appears to be mediated by the expectations students hold for their futures. Taken together, this research agenda argues that both researchers and policy-makers need to consider the expectation-based origin of educational...
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182
Rubin, Donald B.
1981-01-01
The Bayesian bootstrap is the Bayesian analogue of the bootstrap. Instead of simulating the sampling distribution of a statistic estimating a parameter, the Bayesian bootstrap simulates the posterior distribution of the parameter; operationally and inferentially the methods are quite similar. Because both methods of drawing inferences are based on somewhat peculiar model assumptions and the resulting inferences are generally sensitive to these assumptions, neither method should be applied wit...
Hall, Elisabeth O C; Aagaard, Hanne; Larsen, Jette Schilling
2008-01-01
was conducted using Noblit and Hare’s methodological approach. Results: The metasynthesis shows that confidence in breastfeeding is shaped by shattered expectations and is affected on an immediate level by mothers’ expectations, the network and the breastfeeding experts and on a discourse level by the...... mothers who do not breastfeed and by organising knowledge about breastfeeding in a specific way. Conclusion: The individual mother is responsible for the success of breastfeeding and the discourses are hiding that general perceptions and descriptions of breastfeeding undermines the mothers´ confidence in...... breastfeeding and leads to shattered expectations....
Two Expectation-Maximization Algorithms for Boolean Factor Analysis
Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.
2014-01-01
Roč. 130, 23 April (2014), s. 83-97. ISSN 0925-2312 R&D Projects: GA ČR GAP202/10/0262 Grant ostatní: GA MŠk(CZ) ED1.1.00/02.0070; GA MŠk(CZ) EE.2.3.20.0073 Institutional research plan: CEZ:AV0Z10300504 Keywords : Boolean Factor analysis * Binary Matrix factorization * Neural networks * Binary data model * Dimension reduction * Bars problem Subject RIV: IN - Informatics, Computer Science Impact factor: 2.083, year: 2014
Bayesian statistics an introduction
Lee, Peter M
2012-01-01
Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel
Understanding Computational Bayesian Statistics
Bolstad, William M
2011-01-01
A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic
Exploiting correlation and budget constraints in Bayesian multi-armed bandit optimization
Hoffman, Matthew W.; Shahriari, Bobak; De Freitas, Nando
2013-01-01
We address the problem of finding the maximizer of a nonlinear smooth function, that can only be evaluated point-wise, subject to constraints on the number of permitted function evaluations. This problem is also known as fixed-budget best arm identification in the multi-armed bandit literature. We introduce a Bayesian approach for this problem and show that it empirically outperforms both the existing frequentist counterpart and other Bayesian optimization methods. The Bayesian approach place...
Zakrzewski, Sonia
2015-01-01
We give simple upper and lower bounds on life expectancy. In a life-table population, if e(0) is the life expectancy at birth, M is the median length of life, and e(M) is the expected remaining life at age M, then (M+e(M))/2≤e(0)≤M+e(M)/2. In general, for any age x, if e(x) is the expected remaining life at age x, and ℓ(x) is the fraction of a cohort surviving to age x at least, then (x+e(x))≤l(x)≤e(0)≤x+l(x)∙e(x). For any two ages 0≤w≤x≤ω, (x-w+e(x))∙ℓ(x)/ℓ(w)≤e(w)≤x-w+e(x)∙ℓ(x)/ℓ(w) . These...
Basit Zafar; Theresa Kuchler
2015-01-01
We use novel survey panel data to estimate how personal experiences affect expectations about aggregate economic outcomes in housing and labor markets. We exploit cross-sectional and time series variation in differences in locally experienced house prices to show that respondents systematically extrapolate from personally experienced home prices when asked for their expectations about US house price development. In addition, higher volatility of locally experienced house prices causes respond...
Ippei Fujiwara
2008-01-01
For a long time, changes in expectations about the future have been thought to be significant sources of economic fluctuations, as argued by Pigou (1926). Although creating such an expectation-driven cycle (the Pigou cycle) in equilibrium business cycle models was considered to be a difficult challenge, as pointed out by Barro and King (1984), recently, several researchers have succeeded in producing the Pigou cycle by balancing the tension between the wealth effect and the substitution effec...
Ming Yi WANG; Guo ZHAO
2005-01-01
A right R-module E over a ring R is said to be maximally injective in case for any maximal right ideal m of R, every R-homomorphism f : m → E can be extended to an R-homomorphism f' : R → E. In this paper, we first construct an example to show that maximal injectivity is a proper generalization of injectivity. Then we prove that any right R-module over a left perfect ring R is maximally injective if and only if it is injective. We also give a partial affirmative answer to Faith's conjecture by further investigating the property of maximally injective rings. Finally, we get an approximation to Faith's conjecture, which asserts that every injective right R-module over any left perfect right self-injective ring R is the injective hull of a projective submodule.
Andrew M. Parker
2007-12-01
Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007. Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002, we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.
The Minimum Expectation Selection Problem
Eppstein, David; Lueker, George
2001-01-01
We define the min-min expectation selection problem (resp. max-min expectation selection problem) to be that of selecting k out of n given discrete probability distributions, to minimize (resp. maximize) the expectation of the minimum value resulting when independent random variables are drawn from the selected distributions. We assume each distribution has finitely many atoms. Let d be the number of distinct values in the support of the distributions. We show that if d is a constant greater ...
Bayesian second law of thermodynamics
Bartolotta, Anthony; Carroll, Sean M.; Leichenauer, Stefan; Pollack, Jason
2016-08-01
We derive a generalization of the second law of thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter's knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically evolving system degrades over time. The Bayesian second law can be written as Δ H (ρm,ρ ) + F |m≥0 , where Δ H (ρm,ρ ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρm and F |m is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the second law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of integral fluctuation theorems. We demonstrate the formalism using simple analytical and numerical examples.
Quantum Inference on Bayesian Networks
Yoder, Theodore; Low, Guang Hao; Chuang, Isaac
2014-03-01
Because quantum physics is naturally probabilistic, it seems reasonable to expect physical systems to describe probabilities and their evolution in a natural fashion. Here, we use quantum computation to speedup sampling from a graphical probability model, the Bayesian network. A specialization of this sampling problem is approximate Bayesian inference, where the distribution on query variables is sampled given the values e of evidence variables. Inference is a key part of modern machine learning and artificial intelligence tasks, but is known to be NP-hard. Classically, a single unbiased sample is obtained from a Bayesian network on n variables with at most m parents per node in time (nmP(e) - 1 / 2) , depending critically on P(e) , the probability the evidence might occur in the first place. However, by implementing a quantum version of rejection sampling, we obtain a square-root speedup, taking (n2m P(e) -1/2) time per sample. The speedup is the result of amplitude amplification, which is proving to be broadly applicable in sampling and machine learning tasks. In particular, we provide an explicit and efficient circuit construction that implements the algorithm without the need for oracle access.
Asymptotics of robust utility maximization
Knispel, Thomas
2012-01-01
For a stochastic factor model we maximize the long-term growth rate of robust expected power utility with parameter $\\lambda\\in(0,1)$. Using duality methods the problem is reformulated as an infinite time horizon, risk-sensitive control problem. Our results characterize the optimal growth rate, an optimal long-term trading strategy and an asymptotic worst-case model in terms of an ergodic Bellman equation. With these results we propose a duality approach to a "robust large deviations" criterion for optimal long-term investment.
Frühwirth-Schnatter, Sylvia
1990-01-01
In the paper at hand we apply it to Bayesian statistics to obtain "Fuzzy Bayesian Inference". In the subsequent sections we will discuss a fuzzy valued likelihood function, Bayes' theorem for both fuzzy data and fuzzy priors, a fuzzy Bayes' estimator, fuzzy predictive densities and distributions, and fuzzy H.P.D .-Regions. (author's abstract)
Yuan, Ying; MacKinnon, David P.
2009-01-01
In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…
Fuhrmann, R; Torres, F; Fuhrmann, Rainer; Garcia, Arnaldo; Torres, Fernando
1996-01-01
We study arithmetical and geometrical properties of maximal curves, that is, curves defined over the finite field F_{q^2} whose number of F_{q^2}-rational points reaches the Hasse-Weil upper bound. Under a hypothesis on non-gaps at a rational point, we prove that maximal curves are F_{q^2}-isomorphic to y^q + y = x^m, for some $m \\in Z^+$. As a consequence we show that a maximal curve of genus g=(q-1)^2/4 is F_{q^2}-isomorphic to the curve y^q + y = x^{(q+1)/2}.
Holm, Claus
Young Australians’ post-school futures are uncertain, insecure and fluid in relation to working life. But if you think that this is the recipe for a next generation of depressed young Australians, you may be wrong. A new book documents that young people are characterised by optimism, but their ex...... expectations of the future differ from those of their parents....
Dickens, Charles
2005-01-01
One of Dickens's most renowned and enjoyable novels, Great Expectations tells the story of Pip, an orphan boy who wishes to transcend his humble origins and finds himself unexpectedly given the opportunity to live a life of wealth and respectability. Over the course of the tale, in which Pip encount
无
2006-01-01
The past year marks robust economic growth for Latin America and rapid development in cooperation with China. The future in this partnership looks bright Latin America's economy is expected to grow by 4.3 percent in 2005, according to the projection of the Economic Commission for Latin America and the Caribbean. This fig-
Rudiger Bubner
1998-12-01
Full Text Available Even though the maxims' theory is not at thecenter of Kant's ethics, it is the unavoidable basis of the categoric imperative's formulation. Kant leanson the transmitted representations of modem moral theory. During the last decades, the notion of maxims has deserved more attention, due to the philosophy of language's debates on rules, and due to action theory's interest in this notion. I here by brietly expound my views in these discussions.
Survival versus Profit Maximization in a Dynamic Stochastic Experiment
Ryan Oprea
2014-01-01
Subjects in a laboratory experiment withdraw earnings from a cash reserve evolving according to an arithmetic Brownian motion in near‐continuous time. Aggressive withdrawal policies expose subjects to risk of bankruptcy, but the policy that maximizes expected earnings need not maximize the odds of survival. When profit maximization is consistent with high rates of survival (HS parameters), subjects adjust decisively towards the optimum. When survival and profit maximization are sharply at odd...
From Wald to Savage: homo economicus becomes a Bayesian statistician.
Giocoli, Nicola
2013-01-01
Bayesian rationality is the paradigm of rational behavior in neoclassical economics. An economic agent is deemed rational when she maximizes her subjective expected utility and consistently revises her beliefs according to Bayes's rule. The paper raises the question of how, when and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is of great historiographic importance. The story begins with Abraham Wald's behaviorist approach to statistics and culminates with Leonard J. Savage's elaboration of subjective expected utility theory in his 1954 classic The Foundations of Statistics. The latter's acknowledged fiasco to achieve a reinterpretation of traditional inference techniques along subjectivist and behaviorist lines raises the puzzle of how a failed project in statistics could turn into such a big success in economics. Possible answers call into play the emphasis on consistency requirements in neoclassical theory and the impact of the postwar transformation of U.S. business schools. PMID:23165740
Simulation-based optimal Bayesian experimental design for nonlinear systems
Huan, Xun
2013-01-01
The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters.Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics. © 2012 Elsevier Inc.
Bayesian ensemble refinement by replica simulations and reweighting
Hummer, Gerhard
2015-01-01
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We find that the strength of the restraint scales with the number of replicas and we show that this sca...
Bayesian ensemble refinement by replica simulations and reweighting
Hummer, Gerhard; Köfinger, Jürgen
2015-12-01
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
Gaoning He
2010-01-01
Full Text Available A Bayesian game-theoretic model is developed to design and analyze the resource allocation problem in K-user fading multiple access channels (MACs, where the users are assumed to selfishly maximize their average achievable rates with incomplete information about the fading channel gains. In such a game-theoretic study, the central question is whether a Bayesian equilibrium exists, and if so, whether the network operates efficiently at the equilibrium point. We prove that there exists exactly one Bayesian equilibrium in our game. Furthermore, we study the network sum-rate maximization problem by assuming that the users coordinate according to a symmetric strategy profile. This result also serves as an upper bound for the Bayesian equilibrium. Finally, simulation results are provided to show the network efficiency at the unique Bayesian equilibrium and to compare it with other strategies.
Granade, Christopher; Cory, D G
2015-01-01
In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of- the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Husz\\'ar and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.
Social optimality in quantum Bayesian games
Iqbal, Azhar; Chappell, James M.; Abbott, Derek
2015-10-01
A significant aspect of the study of quantum strategies is the exploration of the game-theoretic solution concept of the Nash equilibrium in relation to the quantization of a game. Pareto optimality is a refinement on the set of Nash equilibria. A refinement on the set of Pareto optimal outcomes is known as social optimality in which the sum of players' payoffs is maximized. This paper analyzes social optimality in a Bayesian game that uses the setting of generalized Einstein-Podolsky-Rosen experiments for its physical implementation. We show that for the quantum Bayesian game a direct connection appears between the violation of Bell's inequality and the social optimal outcome of the game and that it attains a superior socially optimal outcome.
Bayesian exploratory factor analysis
Gabriella Conti; Sylvia Frühwirth-Schnatter; James Heckman; Rémi Piatek
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identifi cation criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study c...
Bayesian Exploratory Factor Analysis
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study co...
Bayesian Exploratory Factor Analysis
Gabriella Conti; Sylvia Fruehwirth-Schnatter; Heckman, James J.; Remi Piatek
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on \\emph{ad hoc} classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo s...
Bayesian exploratory factor analysis
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo st...
Bayesian exploratory factor analysis
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James; Piatek, Rémi
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study co...
Carbonetto, Peter; Kisynski, Jacek; De Freitas, Nando; Poole, David L
2012-01-01
The Bayesian Logic (BLOG) language was recently developed for defining first-order probability models over worlds with unknown numbers of objects. It handles important problems in AI, including data association and population estimation. This paper extends BLOG by adopting generative processes over function spaces - known as nonparametrics in the Bayesian literature. We introduce syntax for reasoning about arbitrary collections of objects, and their properties, in an intuitive manner. By expl...
Bayesian default probability models
Andrlíková, Petra
2014-01-01
This paper proposes a methodology for default probability estimation for low default portfolios, where the statistical inference may become troublesome. The author suggests using logistic regression models with the Bayesian estimation of parameters. The piecewise logistic regression model and Box-Cox transformation of credit risk score is used to derive the estimates of probability of default, which extends the work by Neagu et al. (2009). The paper shows that the Bayesian models are more acc...
Inverse Problems in a Bayesian Setting
Matthies, Hermann G.
2016-02-13
In a Bayesian setting, inverse problems and uncertainty quantification (UQ)—the propagation of uncertainty through a computational (forward) model—are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. We give a detailed account of this approach via conditional approximation, various approximations, and the construction of filters. Together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update in form of a filter is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and nonlinear Bayesian update in form of a filter on some examples.
Maximal avalanches in the Bak-Sneppen model
Gillett, Alexis; Meester, Ronald; van der Wal, Peter
2006-01-01
We study the durations of the avalanches in the maximal avalanche decomposition of the Bak-Sneppen evolution model. We show that all the avalanches in this maximal decomposition have infinite expectation, but only `barely', in the sense that if we made the appropriate threshold a tiny amount smaller (in a certain sense), then the avalanches would have finite expectation. The first of these results is somewhat surprising, since simulations suggest finite expectations.
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline with...
Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin
2016-05-01
The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse representation based imaging methods present a number of unique advantages over conventional range-Doppler methods, the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning(SBL) method to capture the underlying cluster patterns of the ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posterior (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method is able to achieve a substantial performance improvement over existing algorithms, including the conventional SBL method.
Learning what to expect (in visual perception)
Seriès, Peggy; Seitz, Aaron R.
2013-01-01
Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is “Bayes-optimal” under some constraints. In this context, expectations are particularly interesting, because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain unsolved, however, for example: How fast do priors change over tim...
Bayesian Inference and Optimal Design in the Sparse Linear Model
Seeger, Matthias; Steinke, Florian; Tsuda, Koji
2007-01-01
The sparse linear model has seen many successful applications in Statistics, Machine Learning, and Computational Biology, such as identification of gene regulatory networks from micro-array expression data. Prior work has either approximated Bayesian inference by expensive Markov chain Monte Carlo, or replaced it by point estimation. We show how to obtain a good approximation to Bayesian analysis efficiently, using the Expectation Propagation method. We also address the problems of optimal de...
Maximizing profit using recommender systems
Das, Aparna; Ricketts, Daniel
2009-01-01
Traditional recommendation systems make recommendations based solely on the customer's past purchases, product ratings and demographic data without considering the profitability the items being recommended. In this work we study the question of how a vendor can directly incorporate the profitability of items into its recommender so as to maximize its expected profit while still providing accurate recommendations. Our approach uses the output of any traditional recommender system and adjust them according to item profitabilities. Our approach is parameterized so the vendor can control how much the recommendation incorporating profits can deviate from the traditional recommendation. We study our approach under two settings and show that it achieves approximately 22% more profit than traditional recommendations.
Bayesian seismic AVO inversion
Buland, Arild
2002-07-01
A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
A Theory of Bayesian Decision Making
Karni, Edi
2009-01-01
This paper presents a complete, choice-based, axiomatic Bayesian decision theory. It introduces a new choice set consisting of information-contingent plans for choosing actions and bets and subjective expected utility model with effect-dependent utility functions and action-dependent subjective probabilities which, in conjunction with the updating of the probabilities using Bayes' rule, gives rise to a unique prior and a set of action-dependent posterior probabilities representing the decisio...
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum- classical hybrid-quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables, using classical methods. Such optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Then the larger value of this function will have the higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the problem of global maximum of an integrable function can be found in polynomial time by using the proposed quantum- classical hybrid. The result is extended to a constrained maximum with applications to integer programming and TSP (Traveling Salesman Problem).
Bayesian Inference for Neighborhood Filters With Application in Denoising.
Huang, Chao-Tsung
2015-11-01
Range-weighted neighborhood filters are useful and popular for their edge-preserving property and simplicity, but they are originally proposed as intuitive tools. Previous works needed to connect them to other tools or models for indirect property reasoning or parameter estimation. In this paper, we introduce a unified empirical Bayesian framework to do both directly. A neighborhood noise model is proposed to reason and infer the Yaroslavsky, bilateral, and modified non-local means filters by joint maximum a posteriori and maximum likelihood estimation. Then, the essential parameter, range variance, can be estimated via model fitting to the empirical distribution of an observable chi scale mixture variable. An algorithm based on expectation-maximization and quasi-Newton optimization is devised to perform the model fitting efficiently. Finally, we apply this framework to the problem of color-image denoising. A recursive fitting and filtering scheme is proposed to improve the image quality. Extensive experiments are performed for a variety of configurations, including different kernel functions, filter types and support sizes, color channel numbers, and noise types. The results show that the proposed framework can fit noisy images well and the range variance can be estimated successfully and efficiently. PMID:26259244
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery etal. Mon Weather Rev 133: 1155-1174, 2(05)) has recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
Thermodynamically consistent Bayesian analysis of closed biochemical reaction systems
Goutsias John
2010-11-01
Full Text Available Abstract Background Estimating the rate constants of a biochemical reaction system with known stoichiometry from noisy time series measurements of molecular concentrations is an important step for building predictive models of cellular function. Inference techniques currently available in the literature may produce rate constant values that defy necessary constraints imposed by the fundamental laws of thermodynamics. As a result, these techniques may lead to biochemical reaction systems whose concentration dynamics could not possibly occur in nature. Therefore, development of a thermodynamically consistent approach for estimating the rate constants of a biochemical reaction system is highly desirable. Results We introduce a Bayesian analysis approach for computing thermodynamically consistent estimates of the rate constants of a closed biochemical reaction system with known stoichiometry given experimental data. Our method employs an appropriately designed prior probability density function that effectively integrates fundamental biophysical and thermodynamic knowledge into the inference problem. Moreover, it takes into account experimental strategies for collecting informative observations of molecular concentrations through perturbations. The proposed method employs a maximization-expectation-maximization algorithm that provides thermodynamically feasible estimates of the rate constant values and computes appropriate measures of estimation accuracy. We demonstrate various aspects of the proposed method on synthetic data obtained by simulating a subset of a well-known model of the EGF/ERK signaling pathway, and examine its robustness under conditions that violate key assumptions. Software, coded in MATLAB®, which implements all Bayesian analysis techniques discussed in this paper, is available free of charge at http://www.cis.jhu.edu/~goutsias/CSS%20lab/software.html. Conclusions Our approach provides an attractive statistical methodology for
Bayesian least squares deconvolution
Asensio Ramos, A.; Petit, P.
2015-11-01
Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
Bayesian least squares deconvolution
Ramos, A Asensio
2015-01-01
Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
Loredo, T J
2004-01-01
I describe a framework for adaptive scientific exploration based on iterating an Observation--Inference--Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data--measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object--show the approach can significantly improve observational eff...
Bayesian and frequentist inequality tests
David M. Kaplan; Zhuo, Longhao
2016-01-01
Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (and normal). We compare Bayesian and frequentist hypothesis tests of inequality restrictions in such cases. For finite-dimensional parameters, if the null hypothesis is that the parameter vector lies in a certain half-space, then the Bayesian test has (frequentist) size $\\alpha$; if the null hypothesis is any other convex subspace, then the Bayesian test...
Bayesian multiple target tracking
Streit, Roy L
2013-01-01
This second edition has undergone substantial revision from the 1999 first edition, recognizing that a lot has changed in the multiple target tracking field. One of the most dramatic changes is in the widespread use of particle filters to implement nonlinear, non-Gaussian Bayesian trackers. This book views multiple target tracking as a Bayesian inference problem. Within this framework it develops the theory of single target tracking, multiple target tracking, and likelihood ratio detection and tracking. In addition to providing a detailed description of a basic particle filter that implements
Bayesian Exploratory Factor Analysis
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.;
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the...... corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...
Social group utility maximization
Gong, Xiaowen; Yang, Lei; Zhang, Junshan
2014-01-01
This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy.This brief also investigates SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b
Bayesian Geostatistical Design
Diggle, Peter; Lophaven, Søren Nymand
2006-01-01
locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model...
Krejsa, Jiří; Věchet, S.
Bratislava: Slovak University of Technology in Bratislava, 2010, s. 217-222. ISBN 978-80-227-3353-3. [Robotics in Education . Bratislava (SK), 16.09.2010-17.09.2010] Institutional research plan: CEZ:AV0Z20760514 Keywords : mobile robot localization * bearing only beacons * Bayesian filters Subject RIV: JD - Computer Applications, Robotics
Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.;
2015-01-01
A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimenta...
Bayesian Independent Component Analysis
Winther, Ole; Petersen, Kaare Brandt
2007-01-01
In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...
Noncausal Bayesian Vector Autoregression
Lanne, Markku; Luoto, Jani
We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution as a...
Loredo, Thomas J.
2004-04-01
I describe a framework for adaptive scientific exploration based on iterating an Observation-Inference-Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data-measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object-show the approach can significantly improve observational efficiency in settings that have well-defined nonlinear models. I conclude with a list of open issues that must be addressed to make Bayesian adaptive exploration a practical and reliable tool for optimizing scientific exploration.
Bayesian logistic regression analysis
Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.
2012-01-01
In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuissance parameters, the Jacobian transformation is an
HEMI: Hyperedge Majority Influence Maximization
Gangal, Varun; Narayanam, Ramasuri
2016-01-01
In this work, we consider the problem of influence maximization on a hypergraph. We first extend the Independent Cascade (IC) model to hypergraphs, and prove that the traditional influence maximization problem remains submodular. We then present a variant of the influence maximization problem (HEMI) where one seeks to maximize the number of hyperedges, a majority of whose nodes are influenced. We prove that HEMI is non-submodular under the diffusion model proposed.
Hierarchical Bayesian sparse image reconstruction with application to MRFM
Dobigeon, Nicolas; Tourneret, Jean-Yves
2008-01-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g. by maximizing the estimated posterior distribution. In our fully Bayesian approach the posteriors of all the parameters are available. Thus our algorithm provides more information than other previously proposed sparse reconstr...
Maximal Acceleration Is Nonrotating
Page, D N
1998-01-01
In a stationary axisymmetric spacetime, the angular velocity of a stationary observer that Fermi-Walker transports its acceleration vector is also the angular velocity that locally extremizes the magnitude of the acceleration of such an observer, and conversely if the spacetime is also symmetric under reversing both t and phi together. Thus a congruence of Nonrotating Acceleration Worldlines (NAW) is equivalent to a Stationary Congruence Accelerating Locally Extremely (SCALE). These congruences are defined completely locally, unlike the case of Zero Angular Momentum Observers (ZAMOs), which requires knowledge around a symmetry axis. The SCALE subcase of a Stationary Congruence Accelerating Maximally (SCAM) is made up of stationary worldlines that may be considered to be locally most nearly at rest in a stationary axisymmetric gravitational field. Formulas for the angular velocity and other properties of the SCALEs are given explicitly on a generalization of an equatorial plane, infinitesimally near a symmetry...
Quantum stochastic calculus with maximal operator domains
Lindsay, J Martin; Attal, Stéphane
2004-01-01
Quantum stochastic calculus is extended in a new formulation in which its stochastic integrals achieve their natural and maximal domains. Operator adaptedness, conditional expectations and stochastic integrals are all defined simply in terms of the orthogonal projections of the time filtration of Fock space, together with sections of the adapted gradient operator. Free from exponential vector domains, our stochastic integrals may be satisfactorily composed yielding quantum Itô formulas for op...
Sketching with Test Scores and Submodular Maximization
Sekar, Shreyas; Vojnovic, Milan; Yun, Se-Young
2016-01-01
We consider the problem of maximizing a submodular set function that can be expressed as the expected value of a function of an $n$-size collection of independent random variables with given prior distributions. This is a combinatorial optimization problem that arises in many applications, including the team selection problem that arises in the context of online labour platforms. We consider a class of approximation algorithms that are restricted to use some statistics of the prior distributi...
Bayesian parameter estimation for effective field theories
Wesolowski, S; Furnstahl, R J; Phillips, D R; Thapaliya, A
2015-01-01
We present procedures based on Bayesian statistics for effective field theory (EFT) parameter estimation from data. The extraction of low-energy constants (LECs) is guided by theoretical expectations that supplement such information in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools are developed that analyze the fit and ensure that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems and the extraction of LECs for the nucleon mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
Bayesian parameter estimation for effective field theories
Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.
2016-07-01
We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
Choquet expectation and Peng's g-expectation
Chen, Zengjing; Tao CHEN; Davison, Matt
2005-01-01
In this paper we consider two ways to generalize the mathematical expectation of a random variable, the Choquet expectation and Peng’s g-expectation. An open question has been, after making suitable restrictions to the class of random variables acted on by the Choquet expectation, for what class of expectation do these two definitions coincide? In this paper we provide a necessary and sufficient condition which proves that the only expectation which lies in both classes is the traditional lin...
Rational Expectations Equilibria: Existence and Representation
Bhowmik, Anuj; Cao, Jiling
2015-01-01
In this paper, we continue to explore the equilibrium theory under ambiguity. For a model of a pure exchange and asymmetric information economy with a measure space of agents whose exogenous uncertainty is described by a complete probability space, we establish a representation theorem for a Bayesian or maximin rational expectations equilibrium allocation in terms of a state-wise Walrasian equilibrium allocation. This result also strengthens the theorems on the existence and representation of...
Probability and Bayesian statistics
1987-01-01
This book contains selected and refereed contributions to the "Inter national Symposium on Probability and Bayesian Statistics" which was orga nized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985 the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the pa pers are published especially by the relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows growing importance of probability as coherent assessment of numerical values as degrees of believe in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating sub jective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...
Bayesian Magic in Asteroseismology
Kallinger, T.
2015-09-01
Only a few years ago asteroseismic observations were so rare that scientists had plenty of time to work on individual data sets. They could tune their algorithms in any possible way to squeeze out the last bit of information. Nowadays this is impossible. With missions like MOST, CoRoT, and Kepler we basically drown in new data every day. To handle this in a sufficient way statistical methods become more and more important. This is why Bayesian techniques started their triumph march across asteroseismology. I will go with you on a journey through Bayesian Magic Land, that brings us to the sea of granulation background, the forest of peakbagging, and the stony alley of model comparison.
Optimizing Nuclear Reaction Analysis (NRA) using Bayesian Experimental Design
von Toussaint, U.; Schwarz-Selinger, T.; Gori, S.
2008-01-01
Nuclear Reaction Analysis with ${}^{3}$He holds the promise to measure Deuterium depth profiles up to large depths. However, the extraction of the depth profile from the measured data is an ill-posed inversion problem. Here we demonstrate how Bayesian Experimental Design can be used to optimize the number of measurements as well as the measurement energies to maximize the information gain. Comparison of the inversion properties of the optimized design with standard settings reveals huge possi...
A Bayesian experimental design approach to structural health monitoring
Farrar, Charles [Los Alamos National Laboratory; Flynn, Eric [UCSD; Todd, Michael [UCSD
2010-01-01
Optimal system design for SHM involves two primarily challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes the performance function. In this paper we will outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.
Unified Maximally Natural Supersymmetry
Huang, Junwu
2016-01-01
Maximally Natural Supersymmetry, an unusual weak-scale supersymmetric extension of the Standard Model based upon the inherently higher-dimensional mechanism of Scherk-Schwarz supersymmetry breaking (SSSB), possesses remarkably good fine tuning given present LHC limits. Here we construct a version with precision $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ unification: $\\sin^2 \\theta_W(M_Z) \\simeq 0.231$ is predicted to $\\pm 2\\%$ by unifying $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ into a 5D $SU(3)_{\\rm EW}$ theory at a Kaluza-Klein scale of $1/R_5 \\sim 4.4\\,{\\rm TeV}$, where SSSB is simultaneously realised. Full unification with $SU(3)_{\\rm C}$ is accommodated by extending the 5D theory to a $N=4$ supersymmetric $SU(6)$ gauge theory on a 6D rectangular orbifold at $1/R_6 \\sim 40 \\,{\\rm TeV}$. TeV-scale states beyond the SM include exotic charged fermions implied by $SU(3)_{\\rm EW}$ with masses lighter than $\\sim 1.2\\,{\\rm TeV}$, and squarks in the mass range $1.4\\,{\\rm TeV} - 2.3\\,{\\rm TeV}$, providing distinct signature...
Bayesian Nonparametric Graph Clustering
Banerjee, Sayantan; Akbani, Rehan; Baladandayuthapani, Veerabhadran
2015-01-01
We present clustering methods for multivariate data exploiting the underlying geometry of the graphical structure between variables. As opposed to standard approaches that assume known graph structures, we first estimate the edge structure of the unknown graph using Bayesian neighborhood selection approaches, wherein we account for the uncertainty of graphical structure learning through model-averaged estimates of the suitable parameters. Subsequently, we develop a nonparametric graph cluster...
Approximate Bayesian recursive estimation
Kárný, Miroslav
2014-01-01
Roč. 285, č. 1 (2014), s. 100-111. ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
Bayesian Benchmark Dose Analysis
Fang, Qijun; Piegorsch, Walter W.; Barnes, Katherine Y.
2014-01-01
An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs) that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indee...
Bayesian Generalized Rating Curves
Helgi Sigurðarson 1985
2014-01-01
A rating curve is a curve or a model that describes the relationship between water elevation, or stage, and discharge in an observation site in a river. The rating curve is fit from paired observations of stage and discharge. The rating curve then predicts discharge given observations of stage and this methodology is applied as stage is substantially easier to directly observe than discharge. In this thesis a statistical rating curve model is proposed working within the framework of Bayesian...